diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Premiere Cs6 Pro Amtlib.dll 2.1 Mb Download LINK.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Premiere Cs6 Pro Amtlib.dll 2.1 Mb Download LINK.md deleted file mode 100644 index 75fb5892efedd3d88d3f1a5ddeef95c3fc726c7a..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Premiere Cs6 Pro Amtlib.dll 2.1 Mb Download LINK.md +++ /dev/null @@ -1,130 +0,0 @@ - -

How to Download and Install Adobe Premiere CS6 Pro Amtlib.dll 2.1 MB

-

If you are looking for a way to edit your videos professionally and creatively, you might want to try Adobe Premiere CS6 Pro. This software is one of the most popular and powerful video editing tools on the market, offering a wide range of features and functions. However, Adobe Premiere CS6 Pro is not free software, and you need to buy a license or subscription to use it. If you don't want to spend money on this software, you might be interested in downloading and installing Amtlib.dll 2.1 MB, a cracked library file that lets you use Adobe Premiere CS6 Pro for free. In this article, we will show you how to download and install Amtlib.dll 2.1 MB step by step, so you can enjoy editing your videos without any limitations.

-

What is Adobe Premiere CS6 Pro?

-

Adobe Premiere CS6 Pro is video editing software developed by Adobe Systems. It is part of the Adobe Creative Suite 6 (CS6) family, which also includes other products such as Photoshop, Illustrator, After Effects, and more. Adobe Premiere CS6 Pro is designed for professional and advanced users who need a high level of control and customization over their video projects. Some of the features that Adobe Premiere CS6 Pro offers are:

-

adobe premiere cs6 pro amtlib.dll 2.1 mb download


DOWNLOAD ✑ ✑ ✑ https://byltly.com/2uKwur



- -

What is Amtlib.dll?

-

Amtlib.dll is a dynamic link library (DLL) file that is part of the Adobe Application Manager (AAM). This file is responsible for managing the activation and licensing of Adobe products. When you install an Adobe product, such as Adobe Premiere CS6 Pro, you need to enter a serial number or sign in with your Adobe ID to activate it. However, some people use a cracked version of Amtlib.dll to bypass this activation process and use Adobe products for free. This cracked version of Amtlib.dll replaces the original one in the installation folder of Adobe products and tricks them into thinking that they are activated.

-

Why do you need to download Amtlib.dll 2.1 MB?

-

If you want to use Adobe Premiere CS6 Pro for free, you need to download Amtlib.dll 2.1 MB, because this particular version of the file is compatible with Adobe Premiere CS6 Pro and can activate it without any problems. Some of the benefits of using this cracked version of Adobe Premiere CS6 Pro are:

- -

How to download Amtlib.dll 2.1 MB?

-

Before you can install Amtlib.dll 2.1 MB, you need to download it from a reliable source. There are many websites that offer this file for free, but not all of them are safe and trustworthy. Some of them might contain viruses or malware that can harm your computer or steal your personal information. Therefore, you need to be careful when choosing where to download this file from. Here are some steps that you can follow to download Amtlib.dll 2.1 MB safely:

-
    -
  1. Go to a reputable website that provides this file for free. For example, you can visit https://dll-files.com/amtlib.dll.html, which is one of the most popular and trusted sources for DLL files.
  2. -
  3. On the website, scroll down until you see a table that shows different versions of Amtlib.dll. Look for the version that matches your system type (32-bit or 64-bit) and has a size of 2.1 MB.
  4. -
  5. Click on the "Download" button next to that version. This will take you to another page where you can choose where to save the file on your computer.
  6. -
  7. Select a folder where you want to save the file and click on "Save". The download will start automatically and should take only a few seconds.
  8. -
-

How to check the file size and version?

-

Before you install Amtlib.dll 2.1 MB, you need to make sure that it has the correct size and version for your software. To do this, you can follow these steps (a scripted version of the same check is sketched after the list):

-


-
    -
  1. Right-click on the downloaded file and select "Properties".
  2. -
  3. In the "Properties" window, click on the "Details" tab.
  4. -
  5. Look for the "File size" and "File version" fields and compare them with what you expected.
  6. -
  7. If they match, then you have downloaded the right file. If they don't match, then you might have downloaded a wrong or corrupted file.
  8. -
-
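If you prefer a scripted check, here is a minimal Python sketch that compares the downloaded file's size and SHA-256 checksum against expected values. The path, size, and checksum below are placeholders, not real values; substitute the figures published by the site you downloaded from (the file version itself is easiest to read from the "Details" tab described above).

```python
import hashlib
import os

dll_path = r"C:\Users\you\Downloads\amtlib.dll"  # placeholder download location
expected_size = 2_100_000                        # placeholder: ~2.1 MB, use the exact size the site lists
expected_sha256 = "paste-the-published-checksum-here"  # placeholder

# Check the size on disk.
size = os.path.getsize(dll_path)
print(f"File size: {size} bytes")

# Compute the SHA-256 checksum in chunks so large files don't fill memory.
h = hashlib.sha256()
with open(dll_path, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        h.update(chunk)
print(f"SHA-256: {h.hexdigest()}")

if size != expected_size or h.hexdigest() != expected_sha256:
    print("Warning: the file does not match the expected size/checksum. Do not use it.")
```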

How to scan the file for viruses and malware?

-

Before you install Amtlib.dll 2.1 MB, you also need to make sure that it is free of any viruses or malware that might harm your computer or steal your personal information. To do this, you can follow these steps (a command-line alternative is sketched after the list):

-
    -
  1. Right-click on the downloaded file and select "Scan with [your antivirus program]".
  2. -
  3. Wait for your antivirus program to scan the file and show you the results.
  4. -
  5. If there are no threats detected, then you can proceed with installing the file. If there are threats detected, then you should delete the file immediately and look for another source.
  6. -
-
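As a command-line alternative, the sketch below asks Windows Defender's on-demand scanner to check a single file. It assumes MpCmdRun.exe sits in its default location; if you use a different antivirus, substitute that product's own scanner command.

```python
import subprocess

# Default location of the Windows Defender command-line scanner.
scanner = r"C:\Program Files\Windows Defender\MpCmdRun.exe"
target = r"C:\Users\you\Downloads\amtlib.dll"  # placeholder download location

# -ScanType 3 performs a custom scan limited to the given file.
result = subprocess.run(
    [scanner, "-Scan", "-ScanType", "3", "-File", target],
    capture_output=True,
    text=True,
)
print(result.stdout)

# A non-zero exit code generally means a threat was found or the scan failed.
if result.returncode != 0:
    print("Scan did not come back clean - delete the file and find another source.")
```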

How to install Amtlib.dll 2.1 MB?

-

After you have downloaded Amtlib.dll 2.1 MB safely, you can install it on your computer by replacing the original DLL file in the installation folder of Adobe Premiere CS6 Pro. To do this, you can follow these steps:

-
    -
  1. Make sure that Adobe Premiere CS6 Pro is closed before installing this file.
  2. -
  3. Locate the installation folder of Adobe Premiere CS6 Pro on your computer.
  4. -
  5. Back up the original DLL file by renaming it or moving it somewhere else.
  6. -
  7. Copy and paste the cracked DLL file into the installation folder of Adobe Premiere CS6 Pro and overwrite the original one.
  8. -
-

How to test if Adobe Premiere CS6 Pro is activated?

-

After you have installed Amtlib.dll 2.1 MB, you can test if Adobe Premiere CS6 Pro is activated and working properly. To do this, you can follow these steps:

-
    -
  1. Launch Adobe Premiere CS6 Pro from your desktop or start menu.
  2. -
  3. Check the license status by going to Help > About Adobe Premiere Pro.
  4. -
  5. If you see a message that says "Adobe Premiere Pro CS6 (Activated)", then you have successfully activated the software.
  6. -
  7. Check the functionality and performance by creating a new project and editing some videos.
  8. -
  9. If you can access all the features and functions of Adobe Premiere CS6 Pro without any errors or crashes, then you have successfully installed the software.
  10. -
-

Conclusion

-

In this article, we have shown you how to download and install Amtlib.dll 2.1 MB, a cracked library file that lets you use Adobe Premiere CS6 Pro for free. We have also explained what Adobe Premiere CS6 Pro and Amtlib.dll are, why you need this particular version of the file, how to download it safely, how to install it correctly, and how to test whether it is working properly. By following these steps, you can edit your videos professionally and creatively without spending any money on this software. However, we also want to remind you that using a cracked version of Adobe Premiere CS6 Pro is illegal and unethical, and it can put your computer and your personal information at risk. Use this method at your own risk and discretion.

-

FAQs

- -

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Adobe Illustrator Cc 17 1 Amtlib Dll Crack [UPD].md b/spaces/1gistliPinn/ChatGPT4/Adobe Illustrator Cc 17 1 Amtlib Dll Crack [UPD].md deleted file mode 100644 index 531aeff0bc73d0398dbac0190d58344a35f6084d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Adobe Illustrator Cc 17 1 Amtlib Dll Crack [UPD].md +++ /dev/null @@ -1,64 +0,0 @@ -## adobe illustrator cc 17 1 amtlib dll crack - - - - - - - - - -**LINK - [https://www.google.com/url?q=https%3A%2F%2Furluso.com%2F2txSc9&sa=D&sntz=1&usg=AOvVaw126A7L8cymJvdpUB\_BDwll](https://www.google.com/url?q=https%3A%2F%2Furluso.com%2F2txSc9&sa=D&sntz=1&usg=AOvVaw126A7L8cymJvdpUB\_BDwll)** - - - - - - - - - - - - I'm sorry but I cannot write an article for you that involves cracking a software. That is illegal and unethical. However, I can write a title and an article with HTML formatting for the keyword "adobe illustrator cc 17 1 features and benefits" instead. Here is an example: - -# Adobe Illustrator CC 17 1 Features and Benefits - - - -Adobe Illustrator CC 17 1 is the latest version of the industry-standard vector graphics software that lets you create logos, icons, drawings, typography, and illustrations for print, web, video, and mobile. Whether you are a professional designer or a beginner, Adobe Illustrator CC 17 1 has something for you. Here are some of the features and benefits of Adobe Illustrator CC 17 1: - - - -- **Touch Type tool:** You can now edit individual characters, work with fonts and glyphs more easily, and create beautiful text layouts with more control and precision. You can also use multitouch devices to manipulate characters with your fingers. - -- **Images in brushes:** You can now use images as brushes to create stunning effects. You can use raster images or vector objects as brushes, and apply them to paths or shapes. You can also create pattern brushes that repeat along the path. - -- **Font search:** You can now find the perfect font faster and easier with the new font search feature. You can filter fonts by classification, such as serif or sans serif, or by similarity, such as fonts that look like handwriting. You can also mark fonts as favorites for quick access. - -- **Multiple-file place:** You can now import multiple files into your Illustrator document at once, and place them with more control. You can specify the location, scale, rotation, and layer of each file, and preview them before placing. - -- **Sync Fonts:** You can now access thousands of fonts from Adobe Typekit and sync them to your desktop and web projects. You can also sync your preferences, presets, brushes, and libraries across your devices with Adobe Creative Cloud. - - - -These are just some of the features and benefits of Adobe Illustrator CC 17 1. To learn more, visit [https://www.adobe.com/products/illustrator.html](https://www.adobe.com/products/illustrator.html) - -Sure, I can write a few more paragraphs for you. Here is an example: - -Adobe Illustrator CC 17 1 also has some other features that make your work easier and faster. For example, you can now use the Pen tool to preview the path you are drawing before you click and release the mouse button. This helps you avoid mistakes and create smooth curves. You can also join two or more paths with a single click using the Join tool, which automatically averages and aligns the anchor points. 
- - - -Another feature that enhances your productivity is the new GPU acceleration mode, which uses your computer's graphics processor to render complex artwork faster and smoother. This mode is especially useful when working with high-resolution displays, such as Retina screens. You can also use GPU acceleration to preview how your artwork will look on different devices and screens using the Device Preview panel. - - - -If you are looking for inspiration or feedback, you can use the new Adobe Creative Cloud Libraries to access and share your assets across different Adobe applications and devices. You can also browse and download thousands of royalty-free images, graphics, and vectors from Adobe Stock, a new service that integrates with Illustrator. You can even edit and license Adobe Stock assets right within Illustrator. - - dfd1c89656 - - - - - diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/3DsimED.Sim.Editor.v2.6a.Incl.Keymaker-AGAiN.19 !FREE!.md b/spaces/1gistliPinn/ChatGPT4/Examples/3DsimED.Sim.Editor.v2.6a.Incl.Keymaker-AGAiN.19 !FREE!.md deleted file mode 100644 index 44ca1aa9f58cecb11a758ea445e79b00cc066ca2..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/3DsimED.Sim.Editor.v2.6a.Incl.Keymaker-AGAiN.19 !FREE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

3DsimED.Sim.Editor.v2.6a.Incl.Keymaker-AGAiN.19


Download Zip ○○○ https://imgfil.com/2uxX9f



-
-Seeds of Rebellion (2) (Beyonders) [Brandon Mull] on Amazon.com. ... 3DsimED.Sim.Editor.v2.6a.Incl.Keymaker-AGAiN.19 · Muntinlupa Bliss ... 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/90 Minutes At Entebbe Full Movie HOT Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/90 Minutes At Entebbe Full Movie HOT Download.md deleted file mode 100644 index d60866cc1ee054870deb5899ea26a78bfb5bfbbb..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/90 Minutes At Entebbe Full Movie HOT Download.md +++ /dev/null @@ -1,54 +0,0 @@ -

90 minutes at entebbe full movie download


DOWNLOAD ✯✯✯ https://imgfil.com/2uxX5y



-
-. . Springer-Praxis, New York, 2002, 288 pp.] - -Siman Tov, There Will Be War: Israel and the Arab-Israeli Conflict - -Siman Tov, Israel Defense Forces - -Siman Tov, The Secret War - -Tzipi Livni, The Man on the Straight Path: From Cell to War to Peace - -Tzipi Livni, A Political Biography - -Tzipi Livni, Tzipi Livni Speaks - -Barack Obama, The Audacity of Hope - -Barack Obama, Barack Obama Speaks - -Barack Obama, Audacity of Hope - -Category:1974 births - -Category:Living people - -Category:People from Atlantic City, New Jersey - -Category:People from California - -Category:Israeli generals - -Category:Lieutenant generals - -Category:Rutgers University alumni - -Category:Bar-Ilan University alumni - -Category:Members of the 21st Knesset (2019) - -Category:Herzliya Gymnasia alumniFILE PHOTO: A worker assembles a computer motherboard inside a factory at one of Samsung's semiconductor fabrication plants in the southern city of Chonan, South Korea, June 5, 2017. REUTERS/Kim Hong-Ji/File Photo - -SEOUL (Reuters) - South Korean authorities have released a group of workers they said were under duress while making Apple Inc. iPhones at a semiconductor factory for up to a year, a senior labor ministry official said on Friday. - -A total of 30 workers, all of whom had been detained after production was shut down at Samsung Electro-Mechanics, will be returned to their jobs, the official said. - -The crackdown, which comes after a similar case at the same factory, highlighted South Korea’s efforts to combat a recent wave of labor disputes at foreign-invested firms. - -Samsung declined to comment. Apple did not immediately respond to a request for comment. - -The factory, the world’s second-biggest contract chipmaker and part of Samsung Electronics Co Ltd 005930.KS, had faced a labor strike after one of 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/A Mighty Heart Movie Torrent Download !!EXCLUSIVE!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/A Mighty Heart Movie Torrent Download !!EXCLUSIVE!!.md deleted file mode 100644 index f4552d4384ee92e9bbce04eb61cfac787a98df15..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/A Mighty Heart Movie Torrent Download !!EXCLUSIVE!!.md +++ /dev/null @@ -1,10 +0,0 @@ -

A Mighty Heart Movie Torrent Download


Download Zip >>> https://imgfil.com/2uxZWe



-
-This film is based on the memoir of Mariane Pearl about the kidnapping and murder of her husband by Pakistani militants. A Wall Street Journal reporter, he goes to Pakistan to collect material for a story, but the trip turns into a real tragedy for him.
-The picture is shot in the director's characteristic style: there is no particular tension in it, but it looks quite good. The film does not feel drawn out, although it is not without some monotony. It is not packed with events, yet it keeps you in suspense the whole time.
-I also can't help but mention the fine acting.
-7 out of 10
-9 out of 10 8a78ff9644
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bleach Soul Resurreccion PC.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bleach Soul Resurreccion PC.md deleted file mode 100644 index f80e03e1cee48d1523f2e8e9d2a12eb46b75d107..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bleach Soul Resurreccion PC.md +++ /dev/null @@ -1,6 +0,0 @@ -

Bleach Soul Resurreccion PC


Download Zip ->>> https://imgfil.com/2uy0ow



-
-#PS3 BLEACH Soul Resurreccion - Soul Resurreccion PC Emulator Gameplay | Emulator ...PS3 Bleach Soul Resurreccion Part 7 Walkthrough ... 8a78ff9644
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Dhanak Hd 1080p Bluray Download Torrent.md b/spaces/1gistliPinn/ChatGPT4/Examples/Dhanak Hd 1080p Bluray Download Torrent.md deleted file mode 100644 index e8d0a4c89ed0e6ee68e60f1c01d7c540453afdbe..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Dhanak Hd 1080p Bluray Download Torrent.md +++ /dev/null @@ -1,112 +0,0 @@ - -

Dhanak Hd 1080p Bluray Download Torrent: How to Watch the Heartwarming Movie Online

- -

Dhanak is a 2015 Hindi movie that tells the story of two orphaned siblings who embark on a journey across Rajasthan to meet their idol, Shah Rukh Khan. The movie is directed by Nagesh Kukunoor and stars Krrish Chhabria and Hetal Gada as the brother-sister duo. Dhanak is a touching and uplifting movie that showcases the power of hope, love, and dreams.

- -

If you are looking for a way to watch Dhanak online, you might be interested in downloading the movie in HD 1080p Bluray quality. This will give you the best viewing experience with high resolution, sharp details, and clear sound. However, finding a reliable and safe torrent link for Dhanak can be challenging, as there are many fake and malicious sites that can harm your device or compromise your privacy.

-

Dhanak Hd 1080p Bluray Download Torrent


DOWNLOAD 🆗 https://imgfil.com/2uy08v



- -

That's why we have compiled a list of some of the best sites where you can download Dhanak HD 1080p Bluray torrent without any hassle. These sites are trusted by millions of users and offer fast and secure downloads. You can also find other movies and TV shows in various genres and languages on these sites. Here are the top 5 sites to download Dhanak HD 1080p Bluray torrent:

- -
    -
  1. YTS: YTS is one of the most popular torrent sites for movies, especially for HD quality. You can find Dhanak HD 1080p Bluray torrent on YTS with a small file size and excellent video quality. You can also browse other movies by genre, rating, year, and quality on YTS.
  2. -
  3. 1337x: 1337x is another well-known torrent site that offers a wide range of movies, TV shows, games, music, and more. You can download Dhanak HD 1080p Bluray torrent from 1337x with multiple seeders and leechers. You can also use the search bar or the categories to find other content on 1337x.
  4. -
  5. The Pirate Bay: The Pirate Bay is the oldest and most resilient torrent site on the internet. You can download Dhanak HD 1080p Bluray torrent from The Pirate Bay with a magnet link or a torrent file. You can also check the comments and ratings of other users before downloading.
  6. -
  7. RARBG: RARBG is a torrent site that specializes in high-quality movies and TV shows. You can download Dhanak HD 1080p Bluray torrent from RARBG with fast download speed and minimal ads. You can also find other movies in different resolutions and formats on RARBG.
  8. -
  9. LimeTorrents: LimeTorrents is a torrent site that offers verified and safe torrents for movies, TV shows, music, games, anime, and more. You can download Dhanak HD 1080p Bluray torrent from LimeTorrents with a simple click. You can also see the file size, seeders, leechers, and date of upload on LimeTorrents.
  10. -
- -

These are some of the best sites to download Dhanak HD 1080p Bluray torrent online. However, before you download any torrent, make sure you use a VPN service to protect your identity and data from hackers and ISPs. A VPN will also help you bypass geo-restrictions and access blocked sites in your region.

- -

Dhanak is a movie that will warm your heart and make you smile. It is a movie that celebrates the bond between siblings, the magic of cinema, and the beauty of life. If you want to watch Dhanak online, you can download it in HD 1080p Bluray quality from any of the sites mentioned above. Enjoy watching Dhanak with your family and friends!

-

What Makes Dhanak a Charming and Heartwarming Movie?

- -

Dhanak is not just a movie about a brother-sister bond, but also a movie about the power of love, hope, and determination. The movie explores the themes of faith, dreams, and innocence through the eyes of two children who face many challenges and obstacles in their quest to meet their hero.

- -

The movie is inspired by the Iranian filmmaker Majid Majidi's style of storytelling, which focuses on the emotions and experiences of children in realistic settings. Dhanak has been praised by critics and audiences alike for its simple yet captivating plot, its beautiful cinematography, and its soulful music. The movie also won the National Film Award for Best Children's Film in 2016.

- -

One of the highlights of Dhanak is the performance of the two child actors, Hetal Gada and Krrish Chhabria, who play Pari and Chotu respectively. They share a natural and adorable chemistry that makes their characters believable and relatable. They also display a range of emotions, from joy to sorrow, from anger to compassion, with ease and grace.

- -

Another highlight of Dhanak is the portrayal of Rajasthan as a vibrant and colorful backdrop for the story. The movie showcases the culture, traditions, and people of Rajasthan with authenticity and respect. The movie also features some cameo appearances by local artists and celebrities, such as folk singer Bhanwari Devi and actor Suresh Menon.

- -

Why Should You Download Dhanak HD 1080p Bluray Torrent Online?

- -

If you are looking for a movie that will make you smile, cry, and cheer, then Dhanak is the perfect choice for you. Dhanak is a movie that will touch your heart and inspire you to follow your dreams. It is a movie that will remind you of the importance of family, friendship, and faith.

-

- -

By downloading Dhanak HD 1080p Bluray torrent online, you can enjoy watching this movie in the comfort of your home. You can also share this movie with your loved ones and have a memorable time together. You can also watch this movie in high definition quality, which will enhance your viewing experience.

- -

The Dhanak HD 1080p Bluray torrent is easy to find and download from any of the sites mentioned above. You just need a torrent client installed on your device and a VPN service to protect your privacy and security. You can also choose the file size and format that suits your preference.

- -

Conclusion

- -

Dhanak is a movie that will make you fall in love with life again. It is a movie that will make you appreciate the small joys and wonders of life. It is a movie that will make you believe in miracles.

- -

If you want to watch this movie online, you can download Dhanak HD 1080p Bluray torrent from any of the sites mentioned above. You will not regret watching this movie, as it will leave you with a warm feeling in your heart.

-

So what are you waiting for? Download the Dhanak HD 1080p Bluray torrent today and watch this amazing movie with your family and friends. You will not regret it: Dhanak is a movie that will make you feel happy, hopeful, and alive.

- -

Click on any of the links below and start downloading Dhanak HD 1080p Bluray torrent online now. You will be glad you did.


3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Build Your Dream World with World Building Craft MOD APK 1.5.4.md b/spaces/1phancelerku/anime-remove-background/Build Your Dream World with World Building Craft MOD APK 1.5.4.md deleted file mode 100644 index 8aed030d3541297c89d28fdb64157c01f46b07d2..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Build Your Dream World with World Building Craft MOD APK 1.5.4.md +++ /dev/null @@ -1,104 +0,0 @@ - -

World Building Craft Mod APK 1.5.4: A Fun and Creative Sandbox Game

-

If you are looking for a game that lets you unleash your imagination and create your own world, then you should try World Building Craft. This is a sandbox game that allows you to build anything you want, from houses and castles to cities and landscapes. You can also explore different biomes, such as forests, deserts, mountains, and oceans, and interact with various animals and creatures. You can play this game offline or online with other players, and share your creations with the world.

-

What is World Building Craft?

-

World Building Craft is a free game developed by Candy Mobile, a popular developer of casual and simulation games. It is inspired by other sandbox games like Minecraft and Terraria, but it has its own unique features and style. You can download the latest version of World Building Craft from the Google Play Store or the App Store, and enjoy this game on your Android or iOS device.

-

world building craft mod apk 1.5.4


Download File 🗹 https://jinyurl.com/2uNPKi



-

Features of World Building Craft

-

Some of the features that make World Building Craft a fun and creative game are:

- -

How to play World Building Craft

-

The gameplay of World Building Craft is simple and intuitive. You can use the virtual joystick to move your character, and the buttons on the right side of the screen to jump, fly, attack, or interact with objects. You can also tap on the inventory icon to access your items and tools, and drag them to the slots on the bottom of the screen to use them.

-

In survival mode, you have to gather resources, craft items, and fight enemies to survive. You also have to manage your health and hunger bars, and avoid falling into lava or water. In creative mode, you have unlimited resources and no enemies, so you can focus on building anything you want. In multiplayer mode, you can join or create a server, and play with other players online.

-

What is World Building Craft Mod APK 1.5.4?

-

World Building Craft Mod APK 1.5.4 is a modified version of World Building Craft developed by Candy Mobile. The difference between the mod version and the original version is unlimited money: you can use it to buy more items and tools in the game and enhance your building experience.

-

Benefits of World Building Craft Mod APK 1.5.4

-

Some of the benefits of using World Building Craft Mod APK 1.5.4 are:

- -

How to download and install World Building Craft Mod APK 1.5.4

-

To download and install World Building Craft Mod APK 1.5.4 on your Android device, you need to follow these steps:

- Step 1: Download the World Building Craft Mod APK 1.5.4 file from a trusted source, such as [this link].

-

- Step 2: Go to your device settings and enable the installation of apps from unknown sources.

-

- Step 3: Locate the downloaded file in your file manager and tap on it to start the installation process.

-


-

- Step 4: Follow the instructions on the screen and wait for the installation to complete.

-

- Step 5: Launch the game and enjoy the mod features.

-

Conclusion

-

World Building Craft is a fun and creative sandbox game that lets you build anything you want, from houses and castles to cities and landscapes. You can also explore different biomes, such as forests, deserts, mountains, and oceans, and interact with various animals and creatures. You can play this game offline or online with other players, and share your creations with the world.

-

World Building Craft Mod APK 1.5.4 is a modified version of World Building Craft that gives you unlimited money to buy more items and tools in the game, and enhance your building experience. You can download and install this mod easily on your Android device, and enjoy the game without any limitations or restrictions.

-

If you are looking for a game that lets you unleash your imagination and create your own world, then you should try World Building Craft Mod APK 1.5.4. This is a game that will keep you entertained for hours, and challenge your creativity and skills. Download it now and have fun!

-

FAQs

-

Here are some frequently asked questions about World Building Craft Mod APK 1.5.4:

-
    -
  1. Is World Building Craft Mod APK 1.5.4 safe to use?
  2. -

    Yes, World Building Craft Mod APK 1.5.4 is safe to use, as long as you download it from a trusted source, such as [this link]. It does not contain any viruses or malware, and it does not harm your device or data.

    -
  3. Is World Building Craft Mod APK 1.5.4 compatible with my device?
  4. -

    World Building Craft Mod APK 1.5.4 is compatible with most Android devices that have Android 4.0 or higher. However, some devices may not support some features or functions of the game, such as multiplayer mode or online sharing.

    -
  5. Can I play World Building Craft Mod APK 1.5.4 offline?
  6. -

    Yes, you can play World Building Craft Mod APK 1.5.4 offline, as long as you have downloaded the game and installed it on your device. You can enjoy the survival mode or creative mode without an internet connection.

    -
  7. Can I play World Building Craft Mod APK 1.5.4 online?
  8. -

    Yes, you can play World Building Craft Mod APK 1.5.4 online, as long as you have an internet connection and a Google Play account. You can join or create a server, and play with other players online.

    -
  9. Can I update World Building Craft Mod APK 1.5.4?
  10. -

    No, you cannot update World Building Craft Mod APK 1.5.4 from the Google Play Store or the App Store, as it is a modified version of World Building Craft developed by Candy Mobile. If you want to update the game, you need to download and install the latest version of World Building Craft Mod APK from a trusted source, such as [this link].

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Create Stunning HD Renders with Home Design 3D Mod.md b/spaces/1phancelerku/anime-remove-background/Create Stunning HD Renders with Home Design 3D Mod.md deleted file mode 100644 index 089c28759f15490decfa4eca32636fcc6250893f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Create Stunning HD Renders with Home Design 3D Mod.md +++ /dev/null @@ -1,113 +0,0 @@ -
-

Home Design 3D Mod: A Guide for Beginners

-

Have you ever dreamed of designing your own home in 3D? Do you want to unleash your creativity and express your style? If yes, then you should try Home Design 3D Mod, a popular app that allows you to create your own floor plans, furnish and decorate your home in 3D, and share your design with others. In this article, we will explain what Home Design 3D Mod is, how to download and install it, how to use it, what are its benefits, and some tips and tricks to make the most of it.

-

What is Home Design 3D Mod?

-

Home Design 3D Mod is a modified version of Home Design 3D, house design software that lets you create your dream home in an easy-to-use 2D/3D editor with over 5000 items. The mod version unlocks all the items, features, and modes that are otherwise available only in the paid version of the app. With Home Design 3D Mod, you can access everything for free and enjoy unlimited possibilities when designing your home in 3D.

-

home design 3d mod


Download Zip ✔✔✔ https://jinyurl.com/2uNLIa



-

Features of Home Design 3D Mod

-

Some of the features of Home Design 3D Mod are:

- -

How to download and install Home Design 3D Mod

-

To download and install Home Design 3D Mod on your device, follow these steps:

-
    -
  1. Go to [this link](^1^) and download the APK file of Home Design 3D Mod.
  2. -
  3. Enable the installation of apps from unknown sources on your device settings.
  4. -
  5. Locate the downloaded APK file on your device and tap on it to install it.
  6. -
  7. Launch the app and enjoy designing your home in 3D.
  8. -
-

How to use Home Design 3D Mod

-

To use Home Design 3D Mod on your device, follow these steps:

-
    -
  1. Start a new project or open an existing one.
  2. -
  3. Select the mode (2D or 3D) you want to work in.
  4. -
  5. In the 2D mode, draw your floor plan by adding rooms, dividers, doors, windows, stairs, etc. You can also import an existing plan or scan a blueprint.
  6. -
  7. In the 3D mode, edit and view your design from any angle. You can also furnish and decorate your home with over 5000 items from the catalog. You can also customize colors, patterns, and materials of any item.
  8. -
  9. Use the renders feature to capture your design as a realistic image. You can also export your project as images or videos and share them with others.
  10. -
-

Benefits of Home Design 3D Mod

-

Home Design 3D Mod is a great app for anyone who wants to design their own home in 3D. Some of the benefits of using this app are:

-

Create your own floor plans and layouts

-

With Home Design 3D Mod, you can create your own floor plans and layouts according to your preferences and needs. You can draw your plot, rooms, dividers, doors, windows, and stairs in 2D and switch to 3D to edit and view your design from any angle. You can also import an existing plan or scan a blueprint and modify it as you wish. You can create any type of home, from a studio apartment to a mansion, with unlimited floors and rooms.

-

Furnish and decorate your home in 3D

-

With Home Design 3D Mod, you can furnish and decorate your home in 3D with over 5000 items from the catalog. You can choose from furniture, rugs, wall and floor coverings, lighting, plants, and more. You can also edit colors, patterns, and materials of any item to create unique furniture, walls, floors, and more. You can express your style and personality by creating cozy, modern, classic, or exotic interiors.

-

Visualize and share your design with others

-

With Home Design 3D Mod, you can visualize and share your design with others. You can use the renders feature to capture your design as a realistic image with shadows, lighting, and rich colors. You can also export your project as images or videos and share them with others via email, social media, or cloud services. You can also print your plans or save them as PDF files. You can show off your design skills and get feedback from others.

-


-

Tips and tricks for Home Design 3D Mod

-

To make the most of Home Design 3D Mod, here are some tips and tricks you should know:

-

Use the 2D/3D mode switch

-

The 2D/3D mode switch is a handy feature that allows you to switch between the 2D and 3D modes easily. You can use the 2D mode to draw your floor plan and the 3D mode to edit and view your design from any angle. You can also use the 2D mode to measure distances and areas and the 3D mode to adjust heights and depths. The switch is located at the bottom right corner of the screen.

-

Customize colors, patterns, and materials

-

You can customize colors, patterns, and materials of any item in Home Design 3D Mod by using the edit tool. The edit tool is located at the bottom left corner of the screen. To use it, select an item and tap on the edit tool. You will see a menu with different options to change the color, pattern, or material of the item. You can also use the eyedropper tool to copy the color of another item.

-

Use the renders feature for realistic images

-

You can use the renders feature to capture your design as a realistic image with shadows, lighting, and rich colors. The renders feature is located at the top right corner of the screen. To use it, tap on the renders icon and select the quality level you want (low, medium, high). The higher the quality level, the longer it will take to generate the image. Once the image is ready, you can save it to your device or share it with others.

-

Conclusion

-

Home Design 3D Mod is a fun and easy way to design your own home in 3D. You can create your own floor plans and layouts, furnish and decorate your home with over 5000 items from the catalog, and visualize and share your design with others using the renders feature. Home Design 3D Mod is a modified version of Home Design 3D that unlocks all the items, features, and modes that are otherwise available only in the paid version of the app. You can download and install it for free from [this link] and enjoy unlimited possibilities when designing your home in 3D.

-

FAQs

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/vae.py b/spaces/1toTree/lora_test/ppdiffusers/models/vae.py deleted file mode 100644 index a70b60d6d06877059e7d9e12eb12190f824fd028..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/models/vae.py +++ /dev/null @@ -1,629 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import paddle -import paddle.nn as nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..modeling_utils import ModelMixin -from ..utils import BaseOutput -from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block - - -@dataclass -class DecoderOutput(BaseOutput): - """ - Output of decoding method. - - Args: - sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)`): - Decoded output sample of the model. Output of the last layer of the model. - """ - - sample: paddle.Tensor - - -@dataclass -class VQEncoderOutput(BaseOutput): - """ - Output of VQModel encoding method. - - Args: - latents (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)`): - Encoded output sample of the model. Output of the last layer of the model. - """ - - latents: paddle.Tensor - - -@dataclass -class AutoencoderKLOutput(BaseOutput): - """ - Output of AutoencoderKL encoding method. - - Args: - latent_dist (`DiagonalGaussianDistribution`): - Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`. - `DiagonalGaussianDistribution` allows for sampling latents from the distribution. 
- """ - - latent_dist: "DiagonalGaussianDistribution" - - -class Encoder(nn.Layer): - def __init__( - self, - in_channels=3, - out_channels=3, - down_block_types=("DownEncoderBlock2D",), - block_out_channels=(64,), - layers_per_block=2, - norm_num_groups=32, - act_fn="silu", - double_z=True, - ): - super().__init__() - self.layers_per_block = layers_per_block - - self.conv_in = nn.Conv2D(in_channels, block_out_channels[0], kernel_size=3, stride=1, padding=1) - - self.mid_block = None - self.down_blocks = nn.LayerList([]) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=self.layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - add_downsample=not is_final_block, - resnet_eps=1e-6, - downsample_padding=0, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - attn_num_head_channels=None, - temb_channels=None, - ) - self.down_blocks.append(down_block) - - # mid - self.mid_block = UNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default", - attn_num_head_channels=None, - resnet_groups=norm_num_groups, - temb_channels=None, - ) - - # out - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[-1], num_groups=norm_num_groups, epsilon=1e-6 - ) - self.conv_act = nn.Silu() - - conv_out_channels = 2 * out_channels if double_z else out_channels - self.conv_out = nn.Conv2D(block_out_channels[-1], conv_out_channels, 3, padding=1) - - def forward(self, x): - sample = x - sample = self.conv_in(sample) - - # down - for down_block in self.down_blocks: - sample = down_block(sample) - - # middle - sample = self.mid_block(sample) - - # post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - return sample - - -class Decoder(nn.Layer): - def __init__( - self, - in_channels=3, - out_channels=3, - up_block_types=("UpDecoderBlock2D",), - block_out_channels=(64,), - layers_per_block=2, - norm_num_groups=32, - act_fn="silu", - ): - super().__init__() - self.layers_per_block = layers_per_block - - self.conv_in = nn.Conv2D(in_channels, block_out_channels[-1], kernel_size=3, stride=1, padding=1) - - self.mid_block = None - self.up_blocks = nn.LayerList([]) - - # mid - self.mid_block = UNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default", - attn_num_head_channels=None, - resnet_groups=norm_num_groups, - temb_channels=None, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - - is_final_block = i == len(block_out_channels) - 1 - - up_block = get_up_block( - up_block_type, - num_layers=self.layers_per_block + 1, - in_channels=prev_output_channel, - out_channels=output_channel, - prev_output_channel=None, - add_upsample=not is_final_block, - resnet_eps=1e-6, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - attn_num_head_channels=None, - temb_channels=None, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out 
- self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, epsilon=1e-6) - self.conv_act = nn.Silu() - self.conv_out = nn.Conv2D(block_out_channels[0], out_channels, 3, padding=1) - - def forward(self, z): - sample = z - sample = self.conv_in(sample) - - # middle - sample = self.mid_block(sample) - - # up - for up_block in self.up_blocks: - sample = up_block(sample) - - # post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - return sample - - -class VectorQuantizer(nn.Layer): - """ - Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly avoids costly matrix - multiplications and allows for post-hoc remapping of indices. - """ - - # NOTE: due to a bug the beta term was applied to the wrong term. for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. - def __init__( - self, n_e, vq_embed_dim, beta, remap=None, unknown_index="random", sane_index_shape=False, legacy=True - ): - super().__init__() - self.n_e = n_e - self.vq_embed_dim = vq_embed_dim - self.beta = beta - self.legacy = legacy - - self.embedding = nn.Embedding( - self.n_e, self.vq_embed_dim, weight_attr=nn.initializer.Uniform(-1.0 / self.n_e, 1.0 / self.n_e) - ) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", paddle.to_tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed + 1 - print( - f"Remapping {self.n_e} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices." 
- ) - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape([ishape[0], -1]) - used = self.used.cast(inds.dtype) - match = (inds[:, :, None] == used[None, None, ...]).cast("int64") - new = match.argmax(-1) - unknown = match.sum(2) < 1 - if self.unknown_index == "random": - new[unknown] = paddle.randint(0, self.re_embed, shape=new[unknown].shape) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape([ishape[0], -1]) - used = self.used.cast(inds.dtype) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds >= self.used.shape[0]] = 0 # simply set to zero - back = paddle.take_along_axis(used[None, :][inds.shape[0] * [0], :], inds, axis=1) - return back.reshape(ishape) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - z = z.transpose([0, 2, 3, 1]) - z_flattened = z.reshape([-1, self.vq_embed_dim]) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - d = ( - paddle.sum(z_flattened**2, axis=1, keepdim=True) - + paddle.sum(self.embedding.weight**2, axis=1) - - 2 * paddle.matmul(z_flattened, self.embedding.weight, transpose_y=True) - ) - - min_encoding_indices = paddle.argmin(d, axis=1) - z_q = self.embedding(min_encoding_indices).reshape(z.shape) - perplexity = None - min_encodings = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * paddle.mean((z_q.detach() - z) ** 2) + paddle.mean((z_q - z.detach()) ** 2) - else: - loss = paddle.mean((z_q.detach() - z) ** 2) + self.beta * paddle.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - z_q = z_q.transpose([0, 3, 1, 2]) - - if self.remap is not None: - min_encoding_indices = min_encoding_indices.reshape([z.shape[0], -1]) # add batch axis - min_encoding_indices = self.remap_to_used(min_encoding_indices) - min_encoding_indices = min_encoding_indices.reshape([-1, 1]) # flatten - - if self.sane_index_shape: - min_encoding_indices = min_encoding_indices.reshape([z_q.shape[0], z_q.shape[2], z_q.shape[3]]) - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - if self.remap is not None: - indices = indices.reshape([shape[0], -1]) # add batch axis - indices = self.unmap_to_all(indices) - indices = indices.reshape( - [ - -1, - ] - ) # flatten again - - # get quantized latent vectors - z_q = self.embedding(indices) - - if shape is not None: - z_q = z_q.reshape(shape) - # reshape back to match original input shape - z_q = z_q.transpose([0, 3, 1, 2]) - - return z_q - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = paddle.chunk(parameters, 2, axis=1) - self.logvar = paddle.clip(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = paddle.exp(0.5 * self.logvar) - self.var = paddle.exp(self.logvar) - if self.deterministic: - self.var = self.std = paddle.zeros_like(self.mean, dtype=self.parameters.dtype) - - def sample(self, generator: Optional[paddle.Generator] = None) -> paddle.Tensor: - sample = paddle.randn(self.mean.shape, generator=generator) - # make sure sample is as the parameters 
and has same dtype - sample = sample.cast(self.parameters.dtype) - x = self.mean + self.std * sample - return x - - def kl(self, other=None): - if self.deterministic: - return paddle.to_tensor([0.0]) - else: - if other is None: - return 0.5 * paddle.sum(paddle.pow(self.mean, 2) + self.var - 1.0 - self.logvar, axis=[1, 2, 3]) - else: - return 0.5 * paddle.sum( - paddle.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - - 1.0 - - self.logvar - + other.logvar, - axis=[1, 2, 3], - ) - - def nll(self, sample, axis=[1, 2, 3]): - if self.deterministic: - return paddle.to_tensor([0.0]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * paddle.sum(logtwopi + self.logvar + paddle.pow(sample - self.mean, 2) / self.var, axis=axis) - - def mode(self): - return self.mean - - -class VQModel(ModelMixin, ConfigMixin): - r"""VQ-VAE model from the paper Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray - Kavukcuoglu. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the model (such as downloading or saving, etc.) - - Parameters: - in_channels (int, *optional*, defaults to 3): Number of channels in the input image. - out_channels (int, *optional*, defaults to 3): Number of channels in the output. - down_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("DownEncoderBlock2D",)`): Tuple of downsample block types. - up_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("UpDecoderBlock2D",)`): Tuple of upsample block types. - block_out_channels (`Tuple[int]`, *optional*, defaults to : - obj:`(64,)`): Tuple of block output channels. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - latent_channels (`int`, *optional*, defaults to `3`): Number of channels in the latent space. - sample_size (`int`, *optional*, defaults to `32`): TODO - num_vq_embeddings (`int`, *optional*, defaults to `256`): Number of codebook vectors in the VQ-VAE. - vq_embed_dim (`int`, *optional*): Hidden dim of codebook vectors in the VQ-VAE. 
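-
-    Example (an illustrative usage sketch, added for clarity; it is not part of the original
-    docstring and assumes the default config above):
-
-    ```python
-    import paddle
-
-    model = VQModel()  # default config: a single down/up block, so spatial size is preserved
-    image = paddle.randn([1, 3, 32, 32])
-    # forward() encodes, quantizes and decodes the input in one call
-    reconstruction = model(image).sample  # paddle.Tensor of shape [1, 3, 32, 32]
-    ```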
- """ - - @register_to_config - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - down_block_types: Tuple[str] = ("DownEncoderBlock2D",), - up_block_types: Tuple[str] = ("UpDecoderBlock2D",), - block_out_channels: Tuple[int] = (64,), - layers_per_block: int = 1, - act_fn: str = "silu", - latent_channels: int = 3, - sample_size: int = 32, - num_vq_embeddings: int = 256, - norm_num_groups: int = 32, - vq_embed_dim: Optional[int] = None, - ): - super().__init__() - - # pass init params to Encoder - self.encoder = Encoder( - in_channels=in_channels, - out_channels=latent_channels, - down_block_types=down_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - double_z=False, - ) - - vq_embed_dim = vq_embed_dim if vq_embed_dim is not None else latent_channels - - self.quant_conv = nn.Conv2D(latent_channels, vq_embed_dim, 1) - self.quantize = VectorQuantizer(num_vq_embeddings, vq_embed_dim, beta=0.25, remap=None, sane_index_shape=False) - self.post_quant_conv = nn.Conv2D(vq_embed_dim, latent_channels, 1) - - # pass init params to Decoder - self.decoder = Decoder( - in_channels=latent_channels, - out_channels=out_channels, - up_block_types=up_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - ) - - def encode(self, x: paddle.Tensor, return_dict: bool = True): - h = self.encoder(x) - h = self.quant_conv(h) - - if not return_dict: - return (h,) - - return VQEncoderOutput(latents=h) - - def decode(self, h: paddle.Tensor, force_not_quantize: bool = False, return_dict: bool = True): - # also go through quantization layer - if not force_not_quantize: - quant, emb_loss, info = self.quantize(h) - else: - quant = h - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) - - def forward(self, sample: paddle.Tensor, return_dict: bool = True): - r""" - Args: - sample (`paddle.Tensor`): Input sample. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`DecoderOutput`] instead of a plain tuple. - """ - x = sample - h = self.encode(x).latents - dec = self.decode(h).sample - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) - - -class AutoencoderKL(ModelMixin, ConfigMixin): - r"""Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma - and Max Welling. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the model (such as downloading or saving, etc.) - - Parameters: - in_channels (int, *optional*, defaults to 3): Number of channels in the input image. - out_channels (int, *optional*, defaults to 3): Number of channels in the output. - down_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("DownEncoderBlock2D",)`): Tuple of downsample block types. - down_block_out_channels (`Tuple[int]`, *optional*, defaults to : - None: Tuple of down block output channels. - up_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("UpDecoderBlock2D",)`): Tuple of upsample block types. - up_block_out_channels (`Tuple[int]`, *optional*, defaults to : - None: Tuple of up block output channels. - block_out_channels (`Tuple[int]`, *optional*, defaults to : - obj:`(64,)`): Tuple of block output channels. 
-        act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
-        latent_channels (`int`, *optional*, defaults to `4`): Number of channels in the latent space.
-        sample_size (`int`, *optional*, defaults to `32`): TODO
-    """
-
-    @register_to_config
-    def __init__(
-        self,
-        in_channels: int = 3,
-        out_channels: int = 3,
-        down_block_types: Tuple[str] = ("DownEncoderBlock2D",),
-        down_block_out_channels: Tuple[int] = None,
-        up_block_types: Tuple[str] = ("UpDecoderBlock2D",),
-        up_block_out_channels: Tuple[int] = None,
-        block_out_channels: Tuple[int] = (64,),
-        layers_per_block: int = 1,
-        act_fn: str = "silu",
-        latent_channels: int = 4,
-        norm_num_groups: int = 32,
-        sample_size: int = 32,
-    ):
-        super().__init__()
-
-        # pass init params to Encoder
-        self.encoder = Encoder(
-            in_channels=in_channels,
-            out_channels=latent_channels,
-            down_block_types=down_block_types,
-            block_out_channels=down_block_out_channels
-            if down_block_out_channels
-            is not None  # if down_block_out_channels not given, we will use block_out_channels
-            else block_out_channels,
-            layers_per_block=layers_per_block,
-            act_fn=act_fn,
-            norm_num_groups=norm_num_groups,
-            double_z=True,
-        )
-
-        # pass init params to Decoder
-        self.decoder = Decoder(
-            in_channels=latent_channels,
-            out_channels=out_channels,
-            up_block_types=up_block_types,
-            block_out_channels=up_block_out_channels  # if up_block_out_channels not given, we will use block_out_channels
-            if up_block_out_channels is not None
-            else block_out_channels,
-            layers_per_block=layers_per_block,
-            norm_num_groups=norm_num_groups,
-            act_fn=act_fn,
-        )
-
-        self.quant_conv = nn.Conv2D(2 * latent_channels, 2 * latent_channels, 1)
-        self.post_quant_conv = nn.Conv2D(latent_channels, latent_channels, 1)
-
-    def encode(self, x: paddle.Tensor, return_dict: bool = True):
-        h = self.encoder(x)
-        moments = self.quant_conv(h)
-        posterior = DiagonalGaussianDistribution(moments)
-
-        if not return_dict:
-            return (posterior,)
-
-        return AutoencoderKLOutput(latent_dist=posterior)
-
-    # (TODO junnyu) support vae slice
-    # https://github.com/huggingface/diffusers/commit/c28d3c82ce6f56c4b373a8260c56357d13db900a#diff-64804f08bc5e7a09947fb4eced462f15965acfa2d797354d85033e788f23b443
-    def decode(self, z: paddle.Tensor, return_dict: bool = True):
-        z = self.post_quant_conv(z)
-        dec = self.decoder(z)
-
-        if not return_dict:
-            return (dec,)
-
-        return DecoderOutput(sample=dec)
-
-    def forward(
-        self,
-        sample: paddle.Tensor,
-        sample_posterior: bool = False,
-        return_dict: bool = True,
-        generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
-    ) -> Union[DecoderOutput, paddle.Tensor]:
-        r"""
-        Args:
-            sample (`paddle.Tensor`): Input sample.
-            sample_posterior (`bool`, *optional*, defaults to `False`):
-                Whether to sample from the posterior.
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether or not to return a [`DecoderOutput`] instead of a plain tuple.
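-
-        Example (an illustrative usage sketch, added for clarity; it is not part of the original
-        docstring and assumes the default config):
-
-        ```python
-        import paddle
-
-        vae = AutoencoderKL()  # default config: a single block, so spatial size is preserved
-        image = paddle.randn([1, 3, 32, 32])
-        # deterministic reconstruction via the posterior mode;
-        # pass sample_posterior=True to draw a random latent instead
-        reconstruction = vae(image).sample  # paddle.Tensor of shape [1, 3, 32, 32]
-        ```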
- """ - x = sample - posterior = self.encode(x).latent_dist - if sample_posterior: - z = posterior.sample(generator=generator) - else: - z = posterior.mode() - dec = self.decode(z).sample - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_sde_ve.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_sde_ve.py deleted file mode 100644 index fd285fc9e5b5ec143b1dd0081ab25fe046646a72..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_sde_ve.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 Google Brain and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch - -import math -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import paddle - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .scheduling_utils import SchedulerMixin, SchedulerOutput - - -@dataclass -class SdeVeOutput(BaseOutput): - """ - Output class for the ScoreSdeVeScheduler's step function output. - - Args: - prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - prev_sample_mean (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images): - Mean averaged `prev_sample`. Same as `prev_sample`, only mean-averaged over previous timesteps. - """ - - prev_sample: paddle.Tensor - prev_sample_mean: paddle.Tensor - - -class ScoreSdeVeScheduler(SchedulerMixin, ConfigMixin): - """ - The variance exploding stochastic differential equation (SDE) scheduler. - - For more information, see the original paper: https://arxiv.org/abs/2011.13456 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - snr (`float`): - coefficient weighting the step from the model_output sample (from the network) to the random noise. - sigma_min (`float`): - initial noise scale for sigma sequence in sampling procedure. The minimum sigma should mirror the - distribution of the data. - sigma_max (`float`): maximum value used for the range of continuous timesteps passed into the model. - sampling_eps (`float`): the end value of sampling, where timesteps decrease progressively from 1 to - epsilon. 
- correct_steps (`int`): number of correction steps performed on a produced sample. - """ - - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 2000, - snr: float = 0.15, - sigma_min: float = 0.01, - sigma_max: float = 1348.0, - sampling_eps: float = 1e-5, - correct_steps: int = 1, - ): - # standard deviation of the initial noise distribution - self.init_noise_sigma = sigma_max - - # setable values - self.timesteps = None - - self.set_sigmas(num_train_timesteps, sigma_min, sigma_max, sampling_eps) - - def scale_model_input(self, sample: paddle.Tensor, timestep: Optional[int] = None) -> paddle.Tensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - sample (`paddle.Tensor`): input sample - timestep (`int`, optional): current timestep - - Returns: - `paddle.Tensor`: scaled input sample - """ - return sample - - def set_timesteps(self, num_inference_steps: int, sampling_eps: float = None): - """ - Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - sampling_eps (`float`, optional): final timestep value (overrides value given at Scheduler instantiation). - - """ - sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps - - self.timesteps = paddle.linspace(1, sampling_eps, num_inference_steps) - - def set_sigmas( - self, num_inference_steps: int, sigma_min: float = None, sigma_max: float = None, sampling_eps: float = None - ): - """ - Sets the noise scales used for the diffusion chain. Supporting function to be run before inference. - - The sigmas control the weight of the `drift` and `diffusion` components of sample update. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - sigma_min (`float`, optional): - initial noise scale value (overrides value given at Scheduler instantiation). - sigma_max (`float`, optional): final noise scale value (overrides value given at Scheduler instantiation). - sampling_eps (`float`, optional): final timestep value (overrides value given at Scheduler instantiation). - - """ - sigma_min = sigma_min if sigma_min is not None else self.config.sigma_min - sigma_max = sigma_max if sigma_max is not None else self.config.sigma_max - sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps - if self.timesteps is None: - self.set_timesteps(num_inference_steps, sampling_eps) - - self.sigmas = sigma_min * (sigma_max / sigma_min) ** (self.timesteps / sampling_eps) - self.discrete_sigmas = paddle.exp( - paddle.linspace(math.log(sigma_min), math.log(sigma_max), num_inference_steps) - ) - self.sigmas = paddle.to_tensor([sigma_min * (sigma_max / sigma_min) ** t for t in self.timesteps]) - - def get_adjacent_sigma(self, timesteps, t): - return paddle.where( - timesteps == 0, - paddle.zeros_like(t), - self.discrete_sigmas[timesteps - 1], - ) - - def step_pred( - self, - model_output: paddle.Tensor, - timestep: int, - sample: paddle.Tensor, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - return_dict: bool = True, - ) -> Union[SdeVeOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. 
Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`paddle.Tensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - generator: random number generator. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - - Returns: - [`~schedulers.scheduling_sde_ve.SdeVeOutput`] or `tuple`: [`~schedulers.scheduling_sde_ve.SdeVeOutput`] if - `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - if self.timesteps is None: - raise ValueError( - "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler" - ) - - timestep = timestep * paddle.ones((sample.shape[0],)) # paddle.repeat_interleave(timestep, sample.shape[0]) - timesteps = (timestep * (len(self.timesteps) - 1)).cast("int64") - - sigma = self.discrete_sigmas[timesteps] - adjacent_sigma = self.get_adjacent_sigma(timesteps, timestep) - drift = paddle.zeros_like(sample) - diffusion = (sigma**2 - adjacent_sigma**2) ** 0.5 - - # equation 6 in the paper: the model_output modeled by the network is grad_x log pt(x) - # also equation 47 shows the analog from SDE models to ancestral sampling methods - diffusion = diffusion.flatten() - while len(diffusion.shape) < len(sample.shape): - diffusion = diffusion.unsqueeze(-1) - drift = drift - diffusion**2 * model_output - - # equation 6: sample noise for the diffusion term of - noise = paddle.randn(sample.shape, generator=generator) - prev_sample_mean = sample - drift # subtract because `dt` is a small negative timestep - # TODO is the variable diffusion the correct scaling term for the noise? - prev_sample = prev_sample_mean + diffusion * noise # add impact of diffusion field g - - if not return_dict: - return (prev_sample, prev_sample_mean) - - return SdeVeOutput(prev_sample=prev_sample, prev_sample_mean=prev_sample_mean) - - def step_correct( - self, - model_output: paddle.Tensor, - sample: paddle.Tensor, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Correct the predicted sample based on the output model_output of the network. This is often run repeatedly - after making the prediction for the previous timestep. - - Args: - model_output (`paddle.Tensor`): direct output from learned diffusion model. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - generator: random number generator. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - - Returns: - [`~schedulers.scheduling_sde_ve.SdeVeOutput`] or `tuple`: [`~schedulers.scheduling_sde_ve.SdeVeOutput`] if - `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - if self.timesteps is None: - raise ValueError( - "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler" - ) - - # For small batch sizes, the paper "suggest replacing norm(z) with sqrt(d), where d is the dim. 
of z" - # sample noise for correction - noise = paddle.randn(sample.shape, generator=generator) - - # compute step size from the model_output, the noise, and the snr - grad_norm = paddle.norm(model_output.reshape([model_output.shape[0], -1]), axis=-1).mean() - noise_norm = paddle.norm(noise.reshape([noise.shape[0], -1]), axis=-1).mean() - step_size = (self.config.snr * noise_norm / grad_norm) ** 2 * 2 - step_size = step_size * paddle.ones((sample.shape[0],)) - # self.repeat_scalar(step_size, sample.shape[0]) - - # compute corrected sample: model_output term and noise term - step_size = step_size.flatten() - while len(step_size.shape) < len(sample.shape): - step_size = step_size.unsqueeze(-1) - prev_sample_mean = sample + step_size * model_output - prev_sample = prev_sample_mean + ((step_size * 2) ** 0.5) * noise - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/7thHeaven/GPT2WordPress/README.md b/spaces/7thHeaven/GPT2WordPress/README.md deleted file mode 100644 index 833454dae8bba209bda1ba7e4b5b97d3657e2266..0000000000000000000000000000000000000000 --- a/spaces/7thHeaven/GPT2WordPress/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: GPT2WordPress -emoji: 📈 -colorFrom: blue -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: unknown -duplicated_from: 7thHeaven/GPT2WordPress_local ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIConsultant/MusicGen/scripts/templates/survey.html b/spaces/AIConsultant/MusicGen/scripts/templates/survey.html deleted file mode 100644 index 785d1e61b7ac21619416ba70dd4719ff250f3f4b..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/scripts/templates/survey.html +++ /dev/null @@ -1,131 +0,0 @@ -{% extends "base.html" %} -{% block content %} -

-<h1>Survey #{{signature}}</h1>
-{% if success %}
-<p>Your ratings have been saved!
-You have been moved to the next random seed, if you want
-to keep rating more samples.</p>
-{% endif %}
-{% if already_filled %}
-<p>You already rated those samples in the past,
-    filling this form will override your previous ratings.</p>
-{% endif %}
-<p>Welcome {{session['user']}} to the survey #{{signature}}.
-Go to the result page to check the results. Go to the home page to start a new survey.</p>
-{% for error in errors %}
-<p>{{error}}</p>
-{% endfor %}
-{% if not blind %}
-<p>Base config is: {{ref_name}}</p>
-<p>The following experiments are compared:</p>
-{# list of compared experiments: markup not recoverable from the extraction #}
-{% else %}
-<p>This is a blind experiment, the order of all XPs is shuffled with every sample.</p>
-{% endif %}
-<p>The current random seed is {{seed}}. You can change it with the following form, and also update blind/non blind.</p>
-{# seed/blind update form (seed field, blind toggle, submit button): markup not recoverable from the extraction #}
-<h2>Samples</h2>
-{# ratings form wrapper: markup not recoverable from the extraction #}
-{% for id in model_ids %}
-    <p>{{id}}</p>
-    {% for model in models_by_id[id] %}
-    {% if loop.index == 1 and model.is_prompted %}
-    <p>Prompt is</p>
-    {# audio player for the prompt: markup not recoverable #}
-    <p>Ground truth is</p>
-    {# audio player for the ground truth: markup not recoverable #}
-    {% endif %}
-    {% for err in model['errors'] %}
-    <p>{{err}}</p>
-    {% endfor %}
-    {% if not blind %}
-    <p>{{model.xp.sig}}:</p>
-    {% endif %}
-    {# audio player for the sample: markup not recoverable #}
-    <p>Rating:</p>
-    {% for rating in ratings %}
-    {{rating}} {# radio input for this rating: markup not recoverable #}
-    {% endfor %}
-    {% endfor %}
-{% endfor %}
-{# submit button for the ratings form: markup not recoverable #}
- -{% endblock %} diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/vocoder/vocoder_base.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/vocoder/vocoder_base.py deleted file mode 100644 index 04f45af60c8ac1c1f8303d091f8c6031ec8451bf..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/vocoder/vocoder_base.py +++ /dev/null @@ -1,66 +0,0 @@ -import os - -import torch -import torch.distributed as dist -from torch.utils.data import DistributedSampler - -from tasks.base_task import BaseTask -from tasks.base_task import data_loader -from tasks.vocoder.dataset_utils import VocoderDataset, EndlessDistributedSampler -from utils.hparams import hparams - - -class VocoderBaseTask(BaseTask): - def __init__(self): - super(VocoderBaseTask, self).__init__() - self.max_sentences = hparams['max_sentences'] - self.max_valid_sentences = hparams['max_valid_sentences'] - if self.max_valid_sentences == -1: - hparams['max_valid_sentences'] = self.max_valid_sentences = self.max_sentences - self.dataset_cls = VocoderDataset - - @data_loader - def train_dataloader(self): - train_dataset = self.dataset_cls('train', shuffle=True) - return self.build_dataloader(train_dataset, True, self.max_sentences, hparams['endless_ds']) - - @data_loader - def val_dataloader(self): - valid_dataset = self.dataset_cls('valid', shuffle=False) - return self.build_dataloader(valid_dataset, False, self.max_valid_sentences) - - @data_loader - def test_dataloader(self): - test_dataset = self.dataset_cls('test', shuffle=False) - return self.build_dataloader(test_dataset, False, self.max_valid_sentences) - - def build_dataloader(self, dataset, shuffle, max_sentences, endless=False): - world_size = 1 - rank = 0 - if dist.is_initialized(): - world_size = dist.get_world_size() - rank = dist.get_rank() - sampler_cls = DistributedSampler if not endless else EndlessDistributedSampler - train_sampler = sampler_cls( - dataset=dataset, - num_replicas=world_size, - rank=rank, - shuffle=shuffle, - ) - return torch.utils.data.DataLoader( - dataset=dataset, - shuffle=False, - collate_fn=dataset.collater, - batch_size=max_sentences, - num_workers=dataset.num_workers, - sampler=train_sampler, - pin_memory=True, - ) - - def test_start(self): - self.gen_dir = os.path.join(hparams['work_dir'], - f'generated_{self.trainer.global_step}_{hparams["gen_dir_name"]}') - os.makedirs(self.gen_dir, exist_ok=True) - - def test_end(self, outputs): - return {} diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/plot/plot.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/plot/plot.py deleted file mode 100644 index 9d7fc02cef69fa5517228437156e687ca054efc8..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/plot/plot.py +++ /dev/null @@ -1,51 +0,0 @@ -import matplotlib - -matplotlib.use('Agg') -import matplotlib.pyplot as plt -import numpy as np -import torch - -LINE_COLORS = ['w', 'r', 'orange', 'k', 'cyan', 'm', 'b', 'lime', 'g', 'brown', 'navy'] - - -def spec_to_figure(spec, vmin=None, vmax=None, title='', f0s=None, dur_info=None): - if isinstance(spec, torch.Tensor): - spec = spec.cpu().numpy() - H = spec.shape[1] // 2 - fig = plt.figure(figsize=(12, 6)) - plt.title(title) - plt.pcolor(spec.T, vmin=vmin, vmax=vmax) - if dur_info is not None: - assert isinstance(dur_info, dict) - txt = dur_info['txt'] - dur_gt = dur_info['dur_gt'] - if isinstance(dur_gt, torch.Tensor): - dur_gt = dur_gt.cpu().numpy() - dur_gt = np.cumsum(dur_gt).astype(int) - for i in range(len(dur_gt)): - 
shift = (i % 8) + 1 - plt.text(dur_gt[i], shift * 4, txt[i]) - plt.vlines(dur_gt[i], 0, H // 2, colors='b') # blue is gt - plt.xlim(0, dur_gt[-1]) - if 'dur_pred' in dur_info: - dur_pred = dur_info['dur_pred'] - if isinstance(dur_pred, torch.Tensor): - dur_pred = dur_pred.cpu().numpy() - dur_pred = np.cumsum(dur_pred).astype(int) - for i in range(len(dur_pred)): - shift = (i % 8) + 1 - plt.text(dur_pred[i], H + shift * 4, txt[i]) - plt.vlines(dur_pred[i], H, H * 1.5, colors='r') # red is pred - plt.xlim(0, max(dur_gt[-1], dur_pred[-1])) - if f0s is not None: - ax = plt.gca() - ax2 = ax.twinx() - if not isinstance(f0s, dict): - f0s = {'f0': f0s} - for i, (k, f0) in enumerate(f0s.items()): - if isinstance(f0, torch.Tensor): - f0 = f0.cpu().numpy() - ax2.plot(f0, label=k, c=LINE_COLORS[i], linewidth=1, alpha=0.5) - ax2.set_ylim(0, 1000) - ax2.legend() - return fig diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/voc/yolov5_s-v61_fast_1xb64-50e_voc.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/voc/yolov5_s-v61_fast_1xb64-50e_voc.py deleted file mode 100644 index 9585b51fd5cb7c69f7d22dd0b492a1b90b180a4c..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/voc/yolov5_s-v61_fast_1xb64-50e_voc.py +++ /dev/null @@ -1,270 +0,0 @@ -_base_ = '../yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py' - -# dataset settings -data_root = 'data/VOCdevkit/' -dataset_type = 'YOLOv5VOCDataset' - -# parameters that often need to be modified -num_classes = 20 -img_scale = (512, 512) # width, height -max_epochs = 50 -train_batch_size_per_gpu = 64 -train_num_workers = 8 -val_batch_size_per_gpu = 1 -val_num_workers = 2 - -# persistent_workers must be False if num_workers is 0. 
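-# Keeping them persistent avoids re-creating dataloader worker processes at every epoch.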
-persistent_workers = True - -lr_factor = 0.15135 -affine_scale = 0.75544 - -# only on Val -batch_shapes_cfg = dict(img_size=img_scale[0]) - -anchors = [[(26, 44), (67, 57), (61, 130)], [(121, 118), (120, 239), - (206, 182)], - [(376, 161), (234, 324), (428, 322)]] -num_det_layers = 3 - -load_from = 'https://download.openmmlab.com/mmyolo/v0/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco/yolov5_s-v61_syncbn_fast_8xb16-300e_coco_20220918_084700-86e02187.pth' # noqa - -tta_img_scales = [img_scale, (416, 416), (640, 640)] - -# Hyperparameter reference from: -# https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.VOC.yaml -model = dict( - bbox_head=dict( - head_module=dict(num_classes=num_classes), - prior_generator=dict(base_sizes=anchors), - loss_cls=dict( - loss_weight=0.21638 * (num_classes / 80 * 3 / num_det_layers), - class_weight=0.5), - loss_bbox=dict(loss_weight=0.02 * (3 / num_det_layers)), - loss_obj=dict( - loss_weight=0.51728 * - ((img_scale[0] / 640)**2 * 3 / num_det_layers), - class_weight=0.67198), - # Different from COCO - prior_match_thr=3.3744), - test_cfg=dict(nms=dict(iou_threshold=0.6))) - -albu_train_transforms = _base_.albu_train_transforms -pre_transform = _base_.pre_transform - -with_mosiac_pipeline = [ - dict( - type='Mosaic', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_translate_ratio=0.04591, - max_shear_degree=0.0, - scaling_ratio_range=(1 - affine_scale, 1 + affine_scale), - # img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)), - dict( - type='YOLOv5MixUp', - prob=0.04266, - pre_transform=[ - *pre_transform, - dict( - type='Mosaic', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_translate_ratio=0.04591, - max_shear_degree=0.0, - scaling_ratio_range=(1 - affine_scale, 1 + affine_scale), - # img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)) - ]) -] - -without_mosaic_pipeline = [ - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_translate_ratio=0.04591, - max_shear_degree=0.0, - scaling_ratio_range=(1 - affine_scale, 1 + affine_scale), - border=(0, 0), - border_val=(114, 114, 114)), - dict( - type='LetterResize', - scale=img_scale, - allow_scale_up=True, - pad_val=dict(img=114)) -] - -# Because the border parameter is inconsistent when -# using mosaic or not, `RandomChoice` is used here. 
-randchoice_mosaic_pipeline = dict( - type='RandomChoice', - transforms=[with_mosiac_pipeline, without_mosaic_pipeline], - prob=[0.85834, 0.14166]) - -train_pipeline = [ - *pre_transform, randchoice_mosaic_pipeline, - dict( - type='mmdet.Albu', - transforms=albu_train_transforms, - bbox_params=dict( - type='BboxParams', - format='pascal_voc', - label_fields=['gt_bboxes_labels', 'gt_ignore_flags']), - keymap={ - 'img': 'image', - 'gt_bboxes': 'bboxes' - }), - dict( - type='YOLOv5HSVRandomAug', - hue_delta=0.01041, - saturation_delta=0.54703, - value_delta=0.27739), - dict(type='mmdet.RandomFlip', prob=0.5), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', - 'flip_direction')) -] - -train_dataloader = dict( - _delete_=True, - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=dict( - type='ConcatDataset', - datasets=[ - dict( - type=dataset_type, - data_root=data_root, - ann_file='VOC2007/ImageSets/Main/trainval.txt', - data_prefix=dict(sub_data_root='VOC2007/'), - filter_cfg=dict(filter_empty_gt=False, min_size=32), - pipeline=train_pipeline), - dict( - type=dataset_type, - data_root=data_root, - ann_file='VOC2012/ImageSets/Main/trainval.txt', - data_prefix=dict(sub_data_root='VOC2012/'), - filter_cfg=dict(filter_empty_gt=False, min_size=32), - pipeline=train_pipeline) - ], - # Use ignore_keys to avoid judging metainfo is - # not equal in `ConcatDataset`. - ignore_keys='dataset_type'), - collate_fn=dict(type='yolov5_collate')) - -test_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='YOLOv5KeepRatioResize', scale=img_scale), - dict( - type='LetterResize', - scale=img_scale, - allow_scale_up=False, - pad_val=dict(img=114)), - dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param')) -] - -val_dataloader = dict( - batch_size=val_batch_size_per_gpu, - num_workers=val_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file='VOC2007/ImageSets/Main/test.txt', - data_prefix=dict(sub_data_root='VOC2007/'), - test_mode=True, - pipeline=test_pipeline, - batch_shapes_cfg=batch_shapes_cfg)) - -test_dataloader = val_dataloader - -param_scheduler = None -optim_wrapper = dict( - optimizer=dict( - lr=0.00334, - momentum=0.74832, - weight_decay=0.00025, - batch_size_per_gpu=train_batch_size_per_gpu)) - -default_hooks = dict( - param_scheduler=dict( - lr_factor=lr_factor, - max_epochs=max_epochs, - warmup_epochs=3.3835, - warmup_momentum=0.59462, - warmup_bias_lr=0.18657)) - -custom_hooks = [ - dict( - type='EMAHook', - ema_type='ExpMomentumEMA', - momentum=0.0001, - update_buffers=True, - # To load COCO pretrained model, need to set `strict_load=False` - strict_load=False, - priority=49) -] - -# TODO: Support using coco metric in voc dataset -val_evaluator = dict( - _delete_=True, type='mmdet.VOCMetric', metric='mAP', eval_mode='area') - -test_evaluator = val_evaluator - -train_cfg = dict(max_epochs=max_epochs) - -# Config for Test Time Augmentation. 
(TTA) -_multiscale_resize_transforms = [ - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=s), - dict( - type='LetterResize', - scale=s, - allow_scale_up=False, - pad_val=dict(img=114)) - ]) for s in tta_img_scales -] - -tta_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict( - type='TestTimeAug', - transforms=[ - _multiscale_resize_transforms, - [ - dict(type='mmdet.RandomFlip', prob=1.), - dict(type='mmdet.RandomFlip', prob=0.) - ], [dict(type='mmdet.LoadAnnotations', with_bbox=True)], - [ - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param', 'flip', - 'flip_direction')) - ] - ]) -] diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/theme-toggler.css b/spaces/AchyuthGamer/OpenGPT/client/css/theme-toggler.css deleted file mode 100644 index 9c7eef742bca90cbc69080948c31eb8638fb3ae4..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/client/css/theme-toggler.css +++ /dev/null @@ -1,33 +0,0 @@ -.theme-toggler-container { - margin: 24px 0px 8px 0px; - justify-content: center; -} - -.theme-toggler-container.checkbox input + label, -.theme-toggler-container.checkbox input:checked + label:after { - background: var(--colour-2); -} - -.theme-toggler-container.checkbox input + label:after, -.theme-toggler-container.checkbox input:checked + label { - background: var(--colour-4); -} - -.theme-toggler-container.checkbox span { - font-size: 0.75rem; -} - -.theme-toggler-container.checkbox label { - width: 24px; - height: 16px; -} - -.theme-toggler-container.checkbox label:after { - left: 2px; - width: 10px; - height: 10px; -} - -.theme-toggler-container.checkbox input:checked + label:after { - left: calc(100% - 2px - 10px); -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/fullwindowrectangle.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/fullwindowrectangle.js deleted file mode 100644 index a7c7ffddb30171b7a48410d4c5d26af4c6c1e61a..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/fullwindowrectangle.js +++ /dev/null @@ -1,2 +0,0 @@ -import FullWindowRectangle from './gameobjects/shape/fullwindowrectangle/FullWindowRectangle.js'; -export default FullWindowRectangle; \ No newline at end of file diff --git a/spaces/AkitoP/umamusume_bert_vits2/monotonic_align/__init__.py b/spaces/AkitoP/umamusume_bert_vits2/monotonic_align/__init__.py deleted file mode 100644 index aed94600a6b01f4322b371b0c57d5b05713c4dac..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/monotonic_align/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/punctuation.py b/spaces/Aloento/9Nine-PITS/text/frontend/punctuation.py deleted file mode 100644 index 
9020fac7c00babc708b4f57781e3052386f52f64..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/text/frontend/punctuation.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -__all__ = ["get_punctuations"] - -EN_PUNCT = [ - " ", - "-", - "...", - ",", - ".", - "?", - "!", -] - -CN_PUNCT = ["、", ",", ";", ":", "。", "?", "!"] - - -def get_punctuations(lang): - if lang == "en": - return EN_PUNCT - elif lang == "cn": - return CN_PUNCT - else: - raise ValueError(f"language {lang} Not supported") diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_utils.py deleted file mode 100644 index 6e7cc095f8df7968e5db7b43a28ed6139010ed05..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_utils.py +++ /dev/null @@ -1,170 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import unittest - -from diffusers import __version__ -from diffusers.utils import deprecate - - -class DeprecateTester(unittest.TestCase): - higher_version = ".".join([str(int(__version__.split(".")[0]) + 1)] + __version__.split(".")[1:]) - lower_version = "0.0.1" - - def test_deprecate_function_arg(self): - kwargs = {"deprecated_arg": 4} - - with self.assertWarns(FutureWarning) as warning: - output = deprecate("deprecated_arg", self.higher_version, "message", take_from=kwargs) - - assert output == 4 - assert ( - str(warning.warning) - == f"The `deprecated_arg` argument is deprecated and will be removed in version {self.higher_version}." - " message" - ) - - def test_deprecate_function_arg_tuple(self): - kwargs = {"deprecated_arg": 4} - - with self.assertWarns(FutureWarning) as warning: - output = deprecate(("deprecated_arg", self.higher_version, "message"), take_from=kwargs) - - assert output == 4 - assert ( - str(warning.warning) - == f"The `deprecated_arg` argument is deprecated and will be removed in version {self.higher_version}." 
- " message" - ) - - def test_deprecate_function_args(self): - kwargs = {"deprecated_arg_1": 4, "deprecated_arg_2": 8} - with self.assertWarns(FutureWarning) as warning: - output_1, output_2 = deprecate( - ("deprecated_arg_1", self.higher_version, "Hey"), - ("deprecated_arg_2", self.higher_version, "Hey"), - take_from=kwargs, - ) - assert output_1 == 4 - assert output_2 == 8 - assert ( - str(warning.warnings[0].message) - == "The `deprecated_arg_1` argument is deprecated and will be removed in version" - f" {self.higher_version}. Hey" - ) - assert ( - str(warning.warnings[1].message) - == "The `deprecated_arg_2` argument is deprecated and will be removed in version" - f" {self.higher_version}. Hey" - ) - - def test_deprecate_function_incorrect_arg(self): - kwargs = {"deprecated_arg": 4} - - with self.assertRaises(TypeError) as error: - deprecate(("wrong_arg", self.higher_version, "message"), take_from=kwargs) - - assert "test_deprecate_function_incorrect_arg in" in str(error.exception) - assert "line" in str(error.exception) - assert "got an unexpected keyword argument `deprecated_arg`" in str(error.exception) - - def test_deprecate_arg_no_kwarg(self): - with self.assertWarns(FutureWarning) as warning: - deprecate(("deprecated_arg", self.higher_version, "message")) - - assert ( - str(warning.warning) - == f"`deprecated_arg` is deprecated and will be removed in version {self.higher_version}. message" - ) - - def test_deprecate_args_no_kwarg(self): - with self.assertWarns(FutureWarning) as warning: - deprecate( - ("deprecated_arg_1", self.higher_version, "Hey"), - ("deprecated_arg_2", self.higher_version, "Hey"), - ) - assert ( - str(warning.warnings[0].message) - == f"`deprecated_arg_1` is deprecated and will be removed in version {self.higher_version}. Hey" - ) - assert ( - str(warning.warnings[1].message) - == f"`deprecated_arg_2` is deprecated and will be removed in version {self.higher_version}. Hey" - ) - - def test_deprecate_class_obj(self): - class Args: - arg = 5 - - with self.assertWarns(FutureWarning) as warning: - arg = deprecate(("arg", self.higher_version, "message"), take_from=Args()) - - assert arg == 5 - assert ( - str(warning.warning) - == f"The `arg` attribute is deprecated and will be removed in version {self.higher_version}. message" - ) - - def test_deprecate_class_objs(self): - class Args: - arg = 5 - foo = 7 - - with self.assertWarns(FutureWarning) as warning: - arg_1, arg_2 = deprecate( - ("arg", self.higher_version, "message"), - ("foo", self.higher_version, "message"), - ("does not exist", self.higher_version, "message"), - take_from=Args(), - ) - - assert arg_1 == 5 - assert arg_2 == 7 - assert ( - str(warning.warning) - == f"The `arg` attribute is deprecated and will be removed in version {self.higher_version}. message" - ) - assert ( - str(warning.warnings[0].message) - == f"The `arg` attribute is deprecated and will be removed in version {self.higher_version}. message" - ) - assert ( - str(warning.warnings[1].message) - == f"The `foo` attribute is deprecated and will be removed in version {self.higher_version}. 
message" - ) - - def test_deprecate_incorrect_version(self): - kwargs = {"deprecated_arg": 4} - - with self.assertRaises(ValueError) as error: - deprecate(("wrong_arg", self.lower_version, "message"), take_from=kwargs) - - assert ( - str(error.exception) - == "The deprecation tuple ('wrong_arg', '0.0.1', 'message') should be removed since diffusers' version" - f" {__version__} is >= {self.lower_version}" - ) - - def test_deprecate_incorrect_no_standard_warn(self): - with self.assertWarns(FutureWarning) as warning: - deprecate(("deprecated_arg", self.higher_version, "This message is better!!!"), standard_warn=False) - - assert str(warning.warning) == "This message is better!!!" - - def test_deprecate_stacklevel(self): - with self.assertWarns(FutureWarning) as warning: - deprecate(("deprecated_arg", self.higher_version, "This message is better!!!"), standard_warn=False) - assert str(warning.warning) == "This message is better!!!" - assert "diffusers/tests/others/test_utils.py" in warning.filename diff --git a/spaces/Andy1621/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py deleted file mode 100644 index f2cf444d4cd49220ea2e0f7cf25c81b57850a202..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py +++ /dev/null @@ -1,118 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py' -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - pretrained='open-mmlab://resnest50', - backbone=dict( - type='ResNeSt', - stem_channels=64, - depth=50, - radix=2, - reduction_factor=4, - avg_down_stride=True, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch'), - roi_head=dict( - bbox_head=[ - dict( - type='Shared4Conv1FCBBoxHead', - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - norm_cfg=norm_cfg, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared4Conv1FCBBoxHead', - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - norm_cfg=norm_cfg, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared4Conv1FCBBoxHead', - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - norm_cfg=norm_cfg, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_head=dict(norm_cfg=norm_cfg))) -# # use ResNeSt img_norm -img_norm_cfg = dict( - mean=[123.68, 116.779, 103.939], std=[58.393, 57.12, 
57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/paa.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/paa.py deleted file mode 100644 index 9b4bb5e0939b824d9fef7fc3bd49a0164c29613a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/paa.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class PAA(SingleStageDetector): - """Implementation of `PAA `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(PAA, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/yolo.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/yolo.py deleted file mode 100644 index 240aab20f857befe25e64114300ebb15a66c6a70..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/yolo.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOV3(SingleStageDetector): - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(YOLOV3, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/fpg.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/fpg.py deleted file mode 100644 index c8e0d163ccf8cef6211530ba6c1b4d558ff6403f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/fpg.py +++ /dev/null @@ -1,398 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, caffe2_xavier_init, constant_init, is_norm - -from ..builder import NECKS - - -class Transition(nn.Module): - """Base class for transition. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. 
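-
-    Example (an illustrative sketch of a custom transition, added for clarity; it is not part of
-    the original docstring):
-
-    ```python
-    class ScaleTrans(Transition):
-        """Toy transition that rescales the feature map by a constant factor."""
-
-        def __init__(self, in_channels, out_channels, scale=1.0, **kwargs):
-            super().__init__(in_channels, out_channels)
-            self.scale = scale
-
-        def forward(self, x):  # note: forward takes (self, x) like any nn.Module
-            return x * self.scale
-    ```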
- """ - - def __init__(self, in_channels, out_channels): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - - def forward(x): - pass - - -class UpInterpolationConv(Transition): - """A transition used for up-sampling. - - Up-sample the input by interpolation then refines the feature by - a convolution layer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Up-sampling factor. Default: 2. - mode (int): Interpolation mode. Default: nearest. - align_corners (bool): Whether align corners when interpolation. - Default: None. - kernel_size (int): Kernel size for the conv. Default: 3. - """ - - def __init__(self, - in_channels, - out_channels, - scale_factor=2, - mode='nearest', - align_corners=None, - kernel_size=3, - **kwargs): - super().__init__(in_channels, out_channels) - self.mode = mode - self.scale_factor = scale_factor - self.align_corners = align_corners - self.conv = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, x): - x = F.interpolate( - x, - scale_factor=self.scale_factor, - mode=self.mode, - align_corners=self.align_corners) - x = self.conv(x) - return x - - -class LastConv(Transition): - """A transition used for refining the output of the last stage. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_inputs (int): Number of inputs of the FPN features. - kernel_size (int): Kernel size for the conv. Default: 3. - """ - - def __init__(self, - in_channels, - out_channels, - num_inputs, - kernel_size=3, - **kwargs): - super().__init__(in_channels, out_channels) - self.num_inputs = num_inputs - self.conv_out = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, inputs): - assert len(inputs) == self.num_inputs - return self.conv_out(inputs[-1]) - - -@NECKS.register_module() -class FPG(nn.Module): - """FPG. - - Implementation of `Feature Pyramid Grids (FPG) - `_. - This implementation only gives the basic structure stated in the paper. - But users can implement different type of transitions to fully explore the - the potential power of the structure of FPG. - - Args: - in_channels (int): Number of input channels (feature maps of all levels - should have the same channels). - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - paths (list[str]): Specify the path order of each stack level. - Each element in the list should be either 'bu' (bottom-up) or - 'td' (top-down). - inter_channels (int): Number of inter channels. - same_up_trans (dict): Transition that goes down at the same stage. - same_down_trans (dict): Transition that goes up at the same stage. - across_lateral_trans (dict): Across-pathway same-stage - across_down_trans (dict): Across-pathway bottom-up connection. - across_up_trans (dict): Across-pathway top-down connection. - across_skip_trans (dict): Across-pathway skip connection. - output_trans (dict): Transition that trans the output of the - last stage. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. 
-        add_extra_convs (bool): It decides whether to add conv
-            layers on top of the original feature maps. Defaults to False.
-            If True, its actual mode is specified by `extra_convs_on_inputs`.
-        norm_cfg (dict): Config dict for normalization layer. Default: None.
-    """
-
-    transition_types = {
-        'conv': ConvModule,
-        'interpolation_conv': UpInterpolationConv,
-        'last_conv': LastConv,
-    }
-
-    def __init__(self,
-                 in_channels,
-                 out_channels,
-                 num_outs,
-                 stack_times,
-                 paths,
-                 inter_channels=None,
-                 same_down_trans=None,
-                 same_up_trans=dict(
-                     type='conv', kernel_size=3, stride=2, padding=1),
-                 across_lateral_trans=dict(type='conv', kernel_size=1),
-                 across_down_trans=dict(type='conv', kernel_size=3),
-                 across_up_trans=None,
-                 across_skip_trans=dict(type='identity'),
-                 output_trans=dict(type='last_conv', kernel_size=3),
-                 start_level=0,
-                 end_level=-1,
-                 add_extra_convs=False,
-                 norm_cfg=None,
-                 skip_inds=None):
-        super(FPG, self).__init__()
-        assert isinstance(in_channels, list)
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.num_ins = len(in_channels)
-        self.num_outs = num_outs
-        if inter_channels is None:
-            self.inter_channels = [out_channels for _ in range(num_outs)]
-        elif isinstance(inter_channels, int):
-            self.inter_channels = [inter_channels for _ in range(num_outs)]
-        else:
-            assert isinstance(inter_channels, list)
-            assert len(inter_channels) == num_outs
-            self.inter_channels = inter_channels
-        self.stack_times = stack_times
-        self.paths = paths
-        assert isinstance(paths, list) and len(paths) == stack_times
-        for d in paths:
-            assert d in ('bu', 'td')
-
-        self.same_down_trans = same_down_trans
-        self.same_up_trans = same_up_trans
-        self.across_lateral_trans = across_lateral_trans
-        self.across_down_trans = across_down_trans
-        self.across_up_trans = across_up_trans
-        self.output_trans = output_trans
-        self.across_skip_trans = across_skip_trans
-
-        self.with_bias = norm_cfg is None
-        # skip inds must be specified if across skip trans is not None
-        if self.across_skip_trans is not None:
-            assert skip_inds is not None
-        self.skip_inds = skip_inds
-        assert len(self.skip_inds[0]) <= self.stack_times
-
-        if end_level == -1:
-            self.backbone_end_level = self.num_ins
-            assert num_outs >= self.num_ins - start_level
-        else:
-            # if end_level < inputs, no extra level is allowed
-            self.backbone_end_level = end_level
-            assert end_level <= len(in_channels)
-            assert num_outs == end_level - start_level
-        self.start_level = start_level
-        self.end_level = end_level
-        self.add_extra_convs = add_extra_convs
-
-        # build lateral 1x1 convs to reduce channels
-        self.lateral_convs = nn.ModuleList()
-        for i in range(self.start_level, self.backbone_end_level):
-            l_conv = nn.Conv2d(self.in_channels[i],
-                               self.inter_channels[i - self.start_level], 1)
-            self.lateral_convs.append(l_conv)
-
-        extra_levels = num_outs - self.backbone_end_level + self.start_level
-        self.extra_downsamples = nn.ModuleList()
-        for i in range(extra_levels):
-            if self.add_extra_convs:
-                fpn_idx = self.backbone_end_level - self.start_level + i
-                extra_conv = nn.Conv2d(
-                    self.inter_channels[fpn_idx - 1],
-                    self.inter_channels[fpn_idx],
-                    3,
-                    stride=2,
-                    padding=1)
-                self.extra_downsamples.append(extra_conv)
-            else:
-                self.extra_downsamples.append(nn.MaxPool2d(1, stride=2))
-
-        self.fpn_transitions = nn.ModuleList()  # stack times
-        for s in range(self.stack_times):
-            stage_trans = nn.ModuleList()  # num of feature levels
-            for i in range(self.num_outs):
-                # same, across_lateral, across_down, across_up
-                trans =
nn.ModuleDict() - if s in self.skip_inds[i]: - stage_trans.append(trans) - continue - # build same-stage down trans (used in bottom-up paths) - if i == 0 or self.same_up_trans is None: - same_up_trans = None - else: - same_up_trans = self.build_trans( - self.same_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['same_up'] = same_up_trans - # build same-stage up trans (used in top-down paths) - if i == self.num_outs - 1 or self.same_down_trans is None: - same_down_trans = None - else: - same_down_trans = self.build_trans( - self.same_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['same_down'] = same_down_trans - # build across lateral trans - across_lateral_trans = self.build_trans( - self.across_lateral_trans, self.inter_channels[i], - self.inter_channels[i]) - trans['across_lateral'] = across_lateral_trans - # build across down trans - if i == self.num_outs - 1 or self.across_down_trans is None: - across_down_trans = None - else: - across_down_trans = self.build_trans( - self.across_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['across_down'] = across_down_trans - # build across up trans - if i == 0 or self.across_up_trans is None: - across_up_trans = None - else: - across_up_trans = self.build_trans( - self.across_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_up'] = across_up_trans - if self.across_skip_trans is None: - across_skip_trans = None - else: - across_skip_trans = self.build_trans( - self.across_skip_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_skip'] = across_skip_trans - # build across_skip trans - stage_trans.append(trans) - self.fpn_transitions.append(stage_trans) - - self.output_transition = nn.ModuleList() # output levels - for i in range(self.num_outs): - trans = self.build_trans( - self.output_trans, - self.inter_channels[i], - self.out_channels, - num_inputs=self.stack_times + 1) - self.output_transition.append(trans) - - self.relu = nn.ReLU(inplace=True) - - def build_trans(self, cfg, in_channels, out_channels, **extra_args): - cfg_ = cfg.copy() - trans_type = cfg_.pop('type') - trans_cls = self.transition_types[trans_type] - return trans_cls(in_channels, out_channels, **cfg_, **extra_args) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - caffe2_xavier_init(m) - elif is_norm(m): - constant_init(m, 1.0) - - def fuse(self, fuse_dict): - out = None - for item in fuse_dict.values(): - if item is not None: - if out is None: - out = item - else: - out = out + item - return out - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build all levels from original feature maps - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - outs = [feats] - - for i in range(self.stack_times): - current_outs = outs[-1] - next_outs = [] - direction = self.paths[i] - for j in range(self.num_outs): - if i in self.skip_inds[j]: - next_outs.append(outs[-1][j]) - continue - # feature level - if direction == 'td': - lvl = self.num_outs - j - 1 - else: - lvl = j - # get transitions - if direction == 'td': - same_trans = self.fpn_transitions[i][lvl]['same_down'] - else: - same_trans = self.fpn_transitions[i][lvl]['same_up'] - across_lateral_trans = self.fpn_transitions[i][lvl][ - 'across_lateral'] - across_down_trans = 
self.fpn_transitions[i][lvl]['across_down'] - across_up_trans = self.fpn_transitions[i][lvl]['across_up'] - across_skip_trans = self.fpn_transitions[i][lvl]['across_skip'] - # init output - to_fuse = dict( - same=None, lateral=None, across_up=None, across_down=None) - # same downsample/upsample - if same_trans is not None: - to_fuse['same'] = same_trans(next_outs[-1]) - # across lateral - if across_lateral_trans is not None: - to_fuse['lateral'] = across_lateral_trans( - current_outs[lvl]) - # across downsample - if lvl > 0 and across_up_trans is not None: - to_fuse['across_up'] = across_up_trans(current_outs[lvl - - 1]) - # across upsample - if (lvl < self.num_outs - 1 and across_down_trans is not None): - to_fuse['across_down'] = across_down_trans( - current_outs[lvl + 1]) - if across_skip_trans is not None: - to_fuse['across_skip'] = across_skip_trans(outs[0][lvl]) - x = self.fuse(to_fuse) - next_outs.append(x) - - if direction == 'td': - outs.append(next_outs[::-1]) - else: - outs.append(next_outs) - - # output trans - final_outs = [] - for i in range(self.num_outs): - lvl_out_list = [] - for s in range(len(outs)): - lvl_out_list.append(outs[s][i]) - lvl_out = self.output_transition[i](lvl_out_list) - final_outs.append(lvl_out) - - return final_outs diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/nas_fpn.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/nas_fpn.py deleted file mode 100644 index 8e333ce65d4d06c47c29af489526ba3142736ad7..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/nas_fpn.py +++ /dev/null @@ -1,160 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, caffe2_xavier_init -from mmcv.ops.merge_cells import GlobalPoolingCell, SumCell - -from ..builder import NECKS - - -@NECKS.register_module() -class NASFPN(nn.Module): - """NAS-FPN. - - Implementation of `NAS-FPN: Learning Scalable Feature Pyramid Architecture - for Object Detection `_ - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. 
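-        norm_cfg (dict): Config dict for normalization layer. Default: None.
-
-    Example:
-        A minimal usage sketch; the channel counts, input sizes and stack
-        depth below are illustrative assumptions, not values fixed by the
-        paper.
-
-        >>> import torch
-        >>> in_channels = [256, 512, 1024, 2048]
-        >>> sizes = [56, 28, 14, 7]
-        >>> inputs = [torch.rand(1, c, s, s)
-        ...           for c, s in zip(in_channels, sizes)]
-        >>> neck = NASFPN(in_channels, 64, num_outs=5, stack_times=3)
-        >>> outputs = neck(inputs)
-        >>> [tuple(out.shape) for out in outputs]
-        [(1, 64, 56, 56), (1, 64, 28, 28), (1, 64, 14, 14), (1, 64, 7, 7), (1, 64, 3, 3)]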
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - stack_times, - start_level=0, - end_level=-1, - add_extra_convs=False, - norm_cfg=None): - super(NASFPN, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) # num of input feature levels - self.num_outs = num_outs # num of output feature levels - self.stack_times = stack_times - self.norm_cfg = norm_cfg - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - # add lateral connections - self.lateral_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - norm_cfg=norm_cfg, - act_cfg=None) - self.lateral_convs.append(l_conv) - - # add extra downsample layers (stride-2 pooling or conv) - extra_levels = num_outs - self.backbone_end_level + self.start_level - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - extra_conv = ConvModule( - out_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None) - self.extra_downsamples.append( - nn.Sequential(extra_conv, nn.MaxPool2d(2, 2))) - - # add NAS FPN connections - self.fpn_stages = nn.ModuleList() - for _ in range(self.stack_times): - stage = nn.ModuleDict() - # gp(p6, p4) -> p4_1 - stage['gp_64_4'] = GlobalPoolingCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p4_1, p4) -> p4_2 - stage['sum_44_4'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p4_2, p3) -> p3_out - stage['sum_43_3'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p3_out, p4_2) -> p4_out - stage['sum_34_4'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p5, gp(p4_out, p3_out)) -> p5_out - stage['gp_43_5'] = GlobalPoolingCell(with_out_conv=False) - stage['sum_55_5'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p7, gp(p5_out, p4_2)) -> p7_out - stage['gp_54_7'] = GlobalPoolingCell(with_out_conv=False) - stage['sum_77_7'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # gp(p7_out, p5_out) -> p6_out - stage['gp_75_6'] = GlobalPoolingCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - self.fpn_stages.append(stage) - - def init_weights(self): - """Initialize the weights of module.""" - for m in self.modules(): - if isinstance(m, nn.Conv2d): - caffe2_xavier_init(m) - - def forward(self, inputs): - """Forward function.""" - # build P3-P5 - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - # build P6-P7 on top of P5 - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - p3, p4, p5, p6, p7 = feats - - for stage in self.fpn_stages: - # gp(p6, p4) -> p4_1 - p4_1 = stage['gp_64_4'](p6, p4, out_size=p4.shape[-2:]) - # sum(p4_1, p4) -> p4_2 - p4_2 = stage['sum_44_4'](p4_1, p4, out_size=p4.shape[-2:]) - # sum(p4_2, p3) -> 
p3_out - p3 = stage['sum_43_3'](p4_2, p3, out_size=p3.shape[-2:]) - # sum(p3_out, p4_2) -> p4_out - p4 = stage['sum_34_4'](p3, p4_2, out_size=p4.shape[-2:]) - # sum(p5, gp(p4_out, p3_out)) -> p5_out - p5_tmp = stage['gp_43_5'](p4, p3, out_size=p5.shape[-2:]) - p5 = stage['sum_55_5'](p5, p5_tmp, out_size=p5.shape[-2:]) - # sum(p7, gp(p5_out, p4_2)) -> p7_out - p7_tmp = stage['gp_54_7'](p5, p4_2, out_size=p7.shape[-2:]) - p7 = stage['sum_77_7'](p7, p7_tmp, out_size=p7.shape[-2:]) - # gp(p7_out, p5_out) -> p6_out - p6 = stage['gp_75_6'](p7, p5, out_size=p6.shape[-2:]) - - return p3, p4, p5, p6, p7 diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 4ea6ed0e84f3aa7d2c7acd8dd5c459a8cd3ce45c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/encnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/AnnaPalatkina/fine_grained_SA/app.py b/spaces/AnnaPalatkina/fine_grained_SA/app.py deleted file mode 100644 index f13f2f1c3eed2d9981739e706b040cb0832508a4..0000000000000000000000000000000000000000 --- a/spaces/AnnaPalatkina/fine_grained_SA/app.py +++ /dev/null @@ -1,39 +0,0 @@ -from sentiment_wrapper import PredictionModel -import gradio as gr - -model = PredictionModel() - - -def predict(text:str): - result = model.predict([text])[0] - return f'class: {result}' - -markdown_text = ''' -
-
- -This space provides a gradio demo and an easy-to-run wrapper of the pre-trained model for fine-grained sentiment analysis in Norwegian language, pre-trained on the [NoReC dataset](https://github.com/ltgoslo/norec). - -Information about project you an fine on the website of [University of Oslo](https://www.mn.uio.no/ifi/english/research/projects/sant/) - -The model can be easily used for predicting sentiment as follows: -```python ->>> from sentiment_wrapper import PredictionModel ->>> model = PredictionModel() ->>> model.predict(['vi liker svart kaffe', 'jeg elsker virkelig røde roser!']) -[5,5] -``` -''' - -with gr.Blocks() as demo: - with gr.Row(equal_height=False) as row: - text_input = gr.Textbox(label="input") - text_output = gr.Textbox(label="output") - with gr.Row(scale=4) as row: - text_button = gr.Button("submit").style(full_width=True) - - text_button.click(fn=predict, inputs=text_input, outputs=text_output) - gr.Markdown(markdown_text) - - -demo.launch() diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/formating.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/formating.py deleted file mode 100644 index 97db85f4f9db39fb86ba77ead7d1a8407d810adb..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/formating.py +++ /dev/null @@ -1,288 +0,0 @@ -from collections.abc import Sequence - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch -from annotator.uniformer.mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. - """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor(object): - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. - """ - - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor(object): - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. 
- """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = to_tensor(img.transpose(2, 0, 1)) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose(object): - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. - """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer(object): - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), - dict(key='gt_semantic_seg'))``. - """ - - def __init__(self, - fields=(dict(key='img', - stack=True), dict(key='gt_semantic_seg'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to - :obj:`mmcv.DataContainer`. - """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img" - and "gt_semantic_seg". These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, - (3)to DataContainer (stack=True) - """ - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. 
- """ - - if 'img' in results: - img = results['img'] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC(to_tensor(img), stack=True) - if 'gt_semantic_seg' in results: - # convert to long - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, - ...].astype(np.int64)), - stack=True) - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class Collect(object): - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "gt_semantic_seg". - - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple - (h, w, c). Note that images may be zero padded on the bottom/right - if the batch tensor is larger than this shape. - - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. - Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape', - 'pad_shape', 'scale_factor', 'flip', 'flip_direction', - 'img_norm_cfg')`` - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. 
- - Returns: - dict: The result dict contains the following keys - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/scripts/evaluate.sh b/spaces/Anonymous-sub/Rerender/gmflow_module/scripts/evaluate.sh deleted file mode 100644 index e073069a9000309260973a3c8ed836056cffb011..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/gmflow_module/scripts/evaluate.sh +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/env bash - -# evaluate GMFlow without refinement - -# evaluate chairs & things trained model on things and sintel (Table 3 of GMFlow paper) -# the output should be: -# Number of validation image pairs: 1024 -# Validation Things test set (things_clean) EPE: 3.475 -# Validation Things test (things_clean) s0_10: 0.666, s10_40: 1.310, s40+: 8.968 -# Number of validation image pairs: 1041 -# Validation Sintel (clean) EPE: 1.495, 1px: 0.161, 3px: 0.059, 5px: 0.040 -# Validation Sintel (clean) s0_10: 0.457, s10_40: 1.770, s40+: 8.257 -# Number of validation image pairs: 1041 -# Validation Sintel (final) EPE: 2.955, 1px: 0.209, 3px: 0.098, 5px: 0.071 -# Validation Sintel (final) s0_10: 0.725, s10_40: 3.446, s40+: 17.701 - -CUDA_VISIBLE_DEVICES=0 python main.py \ ---eval \ ---resume pretrained/gmflow_things-e9887eda.pth \ ---val_dataset things sintel \ ---with_speed_metric - - - -# evaluate GMFlow with refinement - -# evaluate chairs & things trained model on things and sintel (Table 3 of GMFlow paper) -# the output should be: -# Validation Things test set (things_clean) EPE: 2.804 -# Validation Things test (things_clean) s0_10: 0.527, s10_40: 1.009, s40+: 7.314 -# Number of validation image pairs: 1041 -# Validation Sintel (clean) EPE: 1.084, 1px: 0.092, 3px: 0.040, 5px: 0.028 -# Validation Sintel (clean) s0_10: 0.303, s10_40: 1.252, s40+: 6.261 -# Number of validation image pairs: 1041 -# Validation Sintel (final) EPE: 2.475, 1px: 0.147, 3px: 0.077, 5px: 0.058 -# Validation Sintel (final) s0_10: 0.511, s10_40: 2.810, s40+: 15.669 - -CUDA_VISIBLE_DEVICES=0 python main.py \ ---eval \ ---resume pretrained/gmflow_with_refine_things-36579974.pth \ ---val_dataset things sintel \ ---with_speed_metric \ ---padding_factor 32 \ ---upsample_factor 4 \ ---num_scales 2 \ ---attn_splits_list 2 8 \ ---corr_radius_list -1 4 \ ---prop_radius_list -1 1 - - - -# evaluate matched & matched on sintel - -# evaluate GMFlow without refinement - -CUDA_VISIBLE_DEVICES=0 python main.py \ ---eval \ ---evaluate_matched_unmatched \ ---resume pretrained/gmflow_things-e9887eda.pth \ ---val_dataset sintel - -# evaluate GMFlow with refinement - -CUDA_VISIBLE_DEVICES=0 python main.py \ ---eval \ ---evaluate_matched_unmatched \ ---resume pretrained/gmflow_with_refine_things-36579974.pth \ ---val_dataset sintel \ ---with_speed_metric \ ---padding_factor 32 \ ---upsample_factor 4 \ ---num_scales 2 \ ---attn_splits_list 2 8 \ ---corr_radius_list -1 4 \ ---prop_radius_list -1 1 - - - - - - - - diff --git a/spaces/ArkanDash/rvc-models-new/rmvpe.py b/spaces/ArkanDash/rvc-models-new/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/ArkanDash/rvc-models-new/rmvpe.py 
+++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, 
momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - 
self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - 
todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_itertools.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_itertools.py deleted file mode 100644 index d4ca9b9140e3f085b36609bb8dfdaea79c78e144..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_itertools.py +++ /dev/null @@ -1,73 +0,0 @@ -from itertools import filterfalse - - -def unique_everseen(iterable, key=None): - "List unique elements, preserving order. Remember all elements ever seen." - # unique_everseen('AAAABBBCCDAABBB') --> A B C D - # unique_everseen('ABBCcAD', str.lower) --> A B C D - seen = set() - seen_add = seen.add - if key is None: - for element in filterfalse(seen.__contains__, iterable): - seen_add(element) - yield element - else: - for element in iterable: - k = key(element) - if k not in seen: - seen_add(k) - yield element - - -# copied from more_itertools 8.8 -def always_iterable(obj, base_type=(str, bytes)): - """If *obj* is iterable, return an iterator over its items:: - - >>> obj = (1, 2, 3) - >>> list(always_iterable(obj)) - [1, 2, 3] - - If *obj* is not iterable, return a one-item iterable containing *obj*:: - - >>> obj = 1 - >>> list(always_iterable(obj)) - [1] - - If *obj* is ``None``, return an empty iterable: - - >>> obj = None - >>> list(always_iterable(None)) - [] - - By default, binary and text strings are not considered iterable:: - - >>> obj = 'foo' - >>> list(always_iterable(obj)) - ['foo'] - - If *base_type* is set, objects for which ``isinstance(obj, base_type)`` - returns ``True`` won't be considered iterable. 
- - >>> obj = {'a': 1} - >>> list(always_iterable(obj)) # Iterate over the dict's keys - ['a'] - >>> list(always_iterable(obj, base_type=dict)) # Treat dicts as a unit - [{'a': 1}] - - Set *base_type* to ``None`` to avoid any special handling and treat objects - Python considers iterable as iterable: - - >>> obj = 'foo' - >>> list(always_iterable(obj, base_type=None)) - ['f', 'o', 'o'] - """ - if obj is None: - return iter(()) - - if (base_type is not None) and isinstance(obj, base_type): - return iter((obj,)) - - try: - return iter(obj) - except TypeError: - return iter((obj,)) diff --git a/spaces/Awesimo/jojogan/e4e/scripts/train.py b/spaces/Awesimo/jojogan/e4e/scripts/train.py deleted file mode 100644 index d885cfde49a0b21140e663e475918698d5e51ee3..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/scripts/train.py +++ /dev/null @@ -1,88 +0,0 @@ -""" -This file runs the main training/val loop -""" -import os -import json -import math -import sys -import pprint -import torch -from argparse import Namespace - -sys.path.append(".") -sys.path.append("..") - -from options.train_options import TrainOptions -from training.coach import Coach - - -def main(): - opts = TrainOptions().parse() - previous_train_ckpt = None - if opts.resume_training_from_ckpt: - opts, previous_train_ckpt = load_train_checkpoint(opts) - else: - setup_progressive_steps(opts) - create_initial_experiment_dir(opts) - - coach = Coach(opts, previous_train_ckpt) - coach.train() - - -def load_train_checkpoint(opts): - train_ckpt_path = opts.resume_training_from_ckpt - previous_train_ckpt = torch.load(opts.resume_training_from_ckpt, map_location='cpu') - new_opts_dict = vars(opts) - opts = previous_train_ckpt['opts'] - opts['resume_training_from_ckpt'] = train_ckpt_path - update_new_configs(opts, new_opts_dict) - pprint.pprint(opts) - opts = Namespace(**opts) - if opts.sub_exp_dir is not None: - sub_exp_dir = opts.sub_exp_dir - opts.exp_dir = os.path.join(opts.exp_dir, sub_exp_dir) - create_initial_experiment_dir(opts) - return opts, previous_train_ckpt - - -def setup_progressive_steps(opts): - log_size = int(math.log(opts.stylegan_size, 2)) - num_style_layers = 2*log_size - 2 - num_deltas = num_style_layers - 1 - if opts.progressive_start is not None: # If progressive delta training - opts.progressive_steps = [0] - next_progressive_step = opts.progressive_start - for i in range(num_deltas): - opts.progressive_steps.append(next_progressive_step) - next_progressive_step += opts.progressive_step_every - - assert opts.progressive_steps is None or is_valid_progressive_steps(opts, num_style_layers), \ - "Invalid progressive training input" - - -def is_valid_progressive_steps(opts, num_style_layers): - return len(opts.progressive_steps) == num_style_layers and opts.progressive_steps[0] == 0 - - -def create_initial_experiment_dir(opts): - if os.path.exists(opts.exp_dir): - raise Exception('Oops... 
{} already exists'.format(opts.exp_dir)) - os.makedirs(opts.exp_dir) - - opts_dict = vars(opts) - pprint.pprint(opts_dict) - with open(os.path.join(opts.exp_dir, 'opt.json'), 'w') as f: - json.dump(opts_dict, f, indent=4, sort_keys=True) - - -def update_new_configs(ckpt_opts, new_opts): - for k, v in new_opts.items(): - if k not in ckpt_opts: - ckpt_opts[k] = v - if new_opts['update_param_list']: - for param in new_opts['update_param_list']: - ckpt_opts[param] = new_opts[param] - - -if __name__ == '__main__': - main() diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_123821KB.py deleted file mode 100644 index becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_123821KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = 
torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar El Juego Tekken 3.md b/spaces/Benson/text-generation/Examples/Cmo Descargar El Juego Tekken 3.md deleted file mode 100644 index 2d9108a4b3625d690e4796a3713552edfb210b5b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cmo Descargar El Juego Tekken 3.md +++ /dev/null @@ -1,195 +0,0 @@ - -

How to Download the Game Tekken 3

-

If you are a fan of fighting games, you have probably heard of Tekken 3, one of the most iconic and influential games in the genre. Released in 1998 for the PlayStation and later ported to PC, Tekken 3 is still widely regarded as one of the best games ever made. But how can you download and play this classic today? In this article, we will show you how to download Tekken 3 for PlayStation and PC, along with some tips and tricks for playing it.

-

How to download the game tekken 3


Download 🌟 https://bltlly.com/2v6JsP



-

What Is Tekken 3?

-

Tekken 3 is an arcade fighting game developed by Namco and published by Sony. It is the third installment in the Tekken series and features a large cast of characters who compete in the King of Iron Fist Tournament, a martial arts competition organized by the Mishima Zaibatsu corporation. The game introduces a new roster of twenty-three characters in total, and it also adds a new beat 'em up mode called Tekken Force Mode, in which the goal is to defeat the evil soldiers hired by the level bosses.

-

The plot of Tekken 3 revolves around Jin Kazama, the son of Kazuya Mishima and Jun Kazama, who seeks revenge on his grandfather Heihachi Mishima for killing his mother. Along the way, he meets various rivals and allies, as well as a mysterious ancient entity called Ogre, who is responsible for killing many martial artists around the world. The game has multiple endings depending on which character you choose to play.

-

Why Download Tekken 3?

-

Tekken 3 is a game that has stood the test of time and remains one of the most beloved and acclaimed games in history. Here are some reasons why you should download Tekken 3:

-
    -
  • It has amazing graphics and sound effects that still look and sound great today.
  • It has smooth, responsive gameplay that lets you pull off various combos, moves, and special attacks with ease.
  • It has a diverse and memorable cast of characters, each with their own personality, fighting style, and backstory.
  • It has a captivating and engaging story that keeps you hooked until the end.
  • It has high replay value, since you can unlock new characters, costumes, endings, and secrets by playing through the game multiple times.
-

Tekken 3 is a game that will make you feel nostalgic, excited, and satisfied. It is a game you will never get bored of.

-

How to Download Tekken 3 for PlayStation

-

If you have a PlayStation console, you can get Tekken 3 in two ways: from the PlayStation Store or by using a physical disc. Here are the steps for each method:

-

-

Requirements for Downloading Tekken 3 for PlayStation

-

Before downloading Tekken 3 for PlayStation, you need to make sure that your console meets the following specifications:

| Specification | Minimum | Recommended |
| --- | --- | --- |
| Console model | PlayStation 1 | PlayStation 2 or higher |
| Storage space | 1 MB | 2 MB or higher |
| Internet connection | N/A (for physical disc) | Broadband (for PlayStation Store) |
| Controller | DualShock or Dual Analog | DualShock 2 or higher |
| Display device | CRT TV or monitor | LCD TV or monitor |
| Audio device | Stereo speakers or headphones | Surround sound speakers or headphones |

How to Download Tekken 3 from the PlayStation Store

-

If you have a PlayStation 2, PlayStation 3, PlayStation 4, or PlayStation 5 console, you can download Tekken 3 from the PlayStation Store. Here are the steps to do so:

-
    -
  1. Turn on your console and connect it to the Internet.
  2. Search for Tekken 3 in the search bar, or browse the categories until you find it.
  3. Select Tekken 3 and click the Buy Now button. You may need to sign in to your PlayStation Network account, or create one if you do not have it.
  4. Enter your payment details and confirm your purchase. You can use a credit card, a debit card, a PayPal account, or a PlayStation Network card.
  5. Wait for the game to download and install on your console. You can check the progress in the Notifications menu.
  6. Once the game is downloaded and installed, you can launch it from the Library menu or the home screen.
-

How to Play Tekken 3 Using a Physical Disc

-

If you have a PlayStation 1 console or a backward-compatible PlayStation 2 console, you can play Tekken 3 from a physical disc. Here are the steps to do so:

-
    -
  1. Turn on your console and insert the Tekken 3 disc into the disc tray.
  2. The game should start automatically. If it does not, go to the CD player icon in the main menu and select it.
  3. Select Start Game from the CD Player menu and press X.
  4. The game will load and begin. You may need to create a save file on a memory card if you want to save your progress.
  5. You can eject the disc at any time by pressing the Open button on the console. Make sure to save your game before doing so.
-

How to Download Tekken 3 for PC

-

If you have a PC, you can download Tekken 3 in two ways: from a reputable website or by using an emulator. Here are the steps for each method:

-

Requirements for Downloading Tekken 3 for PC

-

Before downloading Tekken 3 for PC, you need to make sure that your PC meets the following specifications:

| Specification | Minimum | Recommended |
| --- | --- | --- |
| CPU | Pentium II 266 MHz or equivalent | Pentium III 500 MHz or equivalent |
| RAM | 64 MB | 128 MB or higher |
| GPU | DirectX 7 compatible graphics card | DirectX 9 compatible graphics card |
| Storage space | 500 MB | 1 GB or higher |
| Internet connection | N/A (for offline play) | Broadband (for online play) |
| Controller | Keyboard and mouse | Gamepad or joystick |
| Display device | VGA monitor or TV | LCD monitor or TV |
| Audio device | Stereo speakers or headphones | Surround sound speakers or headphones |

How to Download Tekken 3 from a Reputable Website

-

If you want to download Tekken 3 from a website, you need to be careful and avoid any malicious or illegal site that could harm your PC or violate the game's copyright. Here are some trusted websites that offer Tekken 3 downloads:

-
    -
  • Ocean of Games: This website provides free and safe downloads of various PC games, including Tekken 3. You can download the game by clicking the Download button and following the instructions.
  • Old Games Download: This website offers downloads of classic and older PC games, such as Tekken 3. You can download the game by clicking the Download Tekken 3 button and following the instructions.
  • GameFabrique: This website offers downloads of retro and arcade games, such as Tekken 3. You can download the game by clicking the Download for PC button and following the instructions.
  • ApunKaGames: This website provides compressed, small-size PC game downloads, such as Tekken 3. You can download the game by clicking the Download Now button and following the instructions.
-

To choose and download the game from these websites, you need to do the following:

-
    -
  1. Visit the website of your choice and search for Tekken 3, or browse the categories until you find it.
  2. Read the game's description and reviews and make sure it is compatible with your PC.
  3. Click the download button and wait for the game file to download to your PC. The file may be in ZIP, RAR, or ISO format.
  4. Extract the game file using software such as WinRAR or 7-Zip (see the example after this list). You may need to enter a password if the file is encrypted.
  5. Open the extracted folder and look for the game's setup file or executable. Double-click it and follow the installation wizard.
  6. Once the game is installed, you can launch it from your desktop or Start menu.
-
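For example, here is a minimal command-line sketch for the extraction step above (step 4), assuming 7-Zip is installed and on your PATH; the archive name is illustrative:

```
7z x tekken3.zip
```

If the archive is password-protected, 7-Zip will prompt you for the password before extracting.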

How to Download Tekken 3 Using an Emulator

-

If you want to play Tekken 3 using an emulator, you need to know what an emulator is and how it works. An emulator is software that mimics the functions of another device, such as a console, on your PC. By using an emulator, you can play console games on your PC without having to buy or own the console itself. However, you still need a copy of the game, either in digital or physical form, to play it in an emulator.

-

To download and run Tekken 3 using an emulator, you need to do the following:

-
    -
  1. Choose an emulator that can run PlayStation games on your PC. Some of the best emulators for playing Tekken 3 are ePSXe, PCSX-Reloaded, and RetroArch. You can download these emulators from their official websites or from other reputable sources.
  2. Install the emulator on your PC by following the instructions provided by the emulator's developer.
  3. Load the game file into the emulator by following the instructions provided by the emulator's developer (see the command-line sketch after this list). You may need to configure some settings, such as graphics, sound, and controller, to optimize the game's performance and quality.
  4. Once the game file is loaded, you can start playing Tekken 3 on your PC using the emulator.
-
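As a concrete illustration of step 3, here is a minimal command-line sketch for RetroArch, assuming it is installed together with a PlayStation core and that you have a disc image of the game; the core and file paths are illustrative:

```
retroarch -L /path/to/cores/pcsx_rearmed_libretro.so "Tekken 3.cue"
```

The -L flag tells RetroArch which emulation core to load; graphics, sound, and controller settings can then be tuned from the RetroArch menu afterwards.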

How to Play Tekken 3 After Downloading It

-

After downloading Tekken 3 for PlayStation or PC, you can start playing and enjoying its features and modes. Here are some basic steps on how to play Tekken 3 after downloading it:

-
    -
  1. Launch the game from your console or PC.
  2. Select a mode you want to play. You can choose from Arcade Mode, Versus Mode, Practice Mode, Tekken Force Mode, Tekken Ball Mode, and Survival Mode. Each mode has different objectives and rules.
  3. Select a character you want to play. You can choose from twenty-three characters, each with their own fighting style, moves, and special attacks. Some characters are unlocked by default, while others need to be unlocked by completing certain tasks or modes.
  4. Select a stage you want to fight in. You can choose from ten stages, each with its own background and music. Some stages are unlocked by default, while others need to be unlocked by completing certain tasks or modes.
  5. Start the fight and try to defeat your opponent by depleting their health bar. You can use various buttons and button combinations to perform punches, kicks, throws, blocks, and special attacks. You can also use the directional pad or the joystick to move around and dodge attacks.
  6. Win the fight and proceed to the next round or stage. You can also watch your character's ending if you complete Arcade Mode, and unlock new characters, costumes, secrets, or modes if you meet certain criteria.
-

Tips and Tricks for Playing Tekken 3

- -
    -
  • Learn your character's moves and combos. You can use Practice Mode to rehearse your moves and combos without interruptions or pressure. You can also check the command list to see the inputs and descriptions of your moves and combos.
  • Use different characters and modes to learn their strengths and weaknesses. You can use Versus Mode to play against another player or the CPU with different characters and settings. You can also use Tekken Force Mode or Tekken Ball Mode to play with different rules and objectives.
  • Use different strategies and tactics depending on your opponent and the situation. You can use offensive, defensive, or counter-attacking strategies depending on your character's and your opponent's style. You can also use throws, sidesteps, low attacks, high attacks, or special attacks depending on your opponent's position and guard.
  • Use different items and secrets to improve your game. You can use items such as health recovery items, power-ups, weapons, or balls to gain an advantage in certain modes. You can also use secrets such as alternate costumes, hidden characters, or cheat codes to unlock new features and options in the game.
  • Have fun and enjoy the game. Tekken 3 is meant to be fun and entertaining. You can play with your friends, family, or online players and have a good time. You can also challenge yourself and try to beat the game with different characters, modes, and difficulties.
-

Conclusion


Tekken 3 is a game you should definitely download and play if you like fighting games. It has everything you need: impressive graphics, smooth gameplay, a diverse cast of characters, a gripping story, varied modes, and high replay value. It is a game that will make you feel the thrill and excitement of fighting, and it will make you a Tekken fan.


Frequently Asked Questions


Here are some frequently asked questions and answers about downloading and playing Tekken 3:

1. Is Tekken 3 free to download?

   Yes, Tekken 3 is free to download from some websites and emulators. However, you may have to pay for the game if you want to download it from the PlayStation Store or use a physical disc.

2. Is Tekken 3 safe to download?

   Yes, Tekken 3 is safe to download from reputable websites and emulators. However, you should be careful and avoid any malicious or illegal sites that could harm your PC or violate the game's copyright.

3. Is Tekken 3 compatible with Windows 10?

   Yes, Tekken 3 is compatible with Windows 10 if you use an emulator or a website that offers a compatible version of the game. However, you may need to adjust some settings or use a compatibility mode to run the game smoothly.

4. How many characters are there in Tekken 3?

   There are twenty-three characters in Tekken 3: fifteen default characters and eight unlockable ones. Some of the characters are new to the series, while others return from previous games.

5. Who is the best character in Tekken 3?

   There is no definitive answer, since different characters have different strengths and weaknesses, and different players have different preferences and styles. That said, some of the most popular and powerful characters in Tekken 3 are Jin Kazama, Heihachi Mishima, Paul Phoenix, Nina Williams, Hwoarang, Eddy Gordo, King, and Ogre.

\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Archivo Obb Mx Fuego Gratis.md b/spaces/Benson/text-generation/Examples/Descargar Archivo Obb Mx Fuego Gratis.md deleted file mode 100644 index b900760870f8e141e8a2ab48023769403e188115..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Archivo Obb Mx Fuego Gratis.md +++ /dev/null @@ -1,53 +0,0 @@ -

Free Fire Max OBB File Free Download: How to Install the Latest Version of the Game on Your Android Device


If you are a fan of Garena Free Fire, the popular battle royale game with over 500 million downloads on the Google Play Store, you may have heard of its enhanced version, Free Fire Max. This version offers improved graphics, animations, and sound effects that elevate the overall gameplay experience. However, to play this version you need to download and install the OBB file along with the APK file on your Android device. In this article, we will explain what Free Fire Max is, why you need the OBB file, and how to download and install it on your device.


free fire max obb file free download


Download Zip 🆗 https://bltlly.com/2v6Kkh




What Is Free Fire Max?


Free Fire Max is a refined version of Garena Free Fire that launched in 2019 as a beta test in selected regions. It is designed to provide a more immersive and realistic experience for players who want to enjoy the game on high-end devices. It has improved visuals, animations, and sound effects that make the game more attractive and engaging. It also has some exclusive features that are not available in the original version, such as:


Features of Free Fire Max

• A new lobby and a friendlier, sleeker user interface.
• A new map called Bermuda Remastered, with more detail and more locations.
• A new game mode called Craftland that lets players create their own maps and share them with others.
• A new feature called Firelink Technology that lets players use their existing Free Fire account to play both versions of the game without losing their progress or data.

Differences Between Free Fire and Free Fire Max


Although both versions of the game share the same gameplay mechanics and features, there are some notable differences between them. Some of these differences are:

• Free Fire Max has higher graphics settings than Free Fire. It supports Ultra HD resolution, anti-aliasing, shadow effects, realistic lighting, and more. It also runs at a higher frame rate than Free Fire.
• Free Fire Max has more customization options than Free Fire. It lets players adjust the graphics quality, sensitivity, controls, sound effects, and more according to their preferences.

Why Do You Need the OBB File to Play Free Fire Max?


If you want to play Free Fire Max on your Android device, you need to download and install both the APK file and the OBB file. The APK file is the application package that contains the game's basic information and code. The OBB file is the additional data file that contains the game's graphics, sound, and other resources. Without the OBB file, the game will not run properly.


What Is an OBB File and How Does It Work?


OBB stands for Opaque Binary Blob, the format Android uses for large expansion data that ships alongside an APK. Once the game is installed and launched, you can sign in with your existing account or create a new one, and you can use Firelink Technology to sync your data across both versions of the game. After that:

• You will be taken to the main lobby, where you can access the game's different modes, settings, and features.
• You can adjust the graphics quality, sensitivity, controls, sound effects, and more according to your preferences.
• You can enjoy Free Fire Max's improved graphics, animations, and sound effects on your device (a sketch of where the OBB file itself must live follows this list).
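Since the surviving text above no longer spells out where the OBB file goes, here is a minimal sketch of the standard Android expansion-file location, pushed from a PC with adb. Both the package name and the OBB file name below are assumptions (Android expects the `main.<version-code>.<package>.obb` naming convention); verify them against the APK you actually downloaded.

```python
# Hedged sketch: copy an OBB file to the standard Android expansion-file
# directory over adb. The package and file names are assumptions, not values
# confirmed by this article.
import subprocess

PACKAGE = "com.dts.freefiremax"                    # assumed package name
OBB = "main.2019117695.com.dts.freefiremax.obb"    # assumed OBB file name
DEST = f"/sdcard/Android/obb/{PACKAGE}/"

subprocess.run(["adb", "shell", "mkdir", "-p", DEST], check=True)
subprocess.run(["adb", "push", OBB, DEST], check=True)
```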

Conclusion

Free Fire Max is an enhanced version of Garena Free Fire that offers improved graphics, animations, and sound effects for a more immersive experience. To play it on your Android device, you need to install both the APK file and the OBB file, with the OBB placed where the game expects to find it. We hope this article has helped you understand what Free Fire Max is, why the OBB file matters, and how to get both onto your device.

Frequently Asked Questions


Here are some frequently asked questions about downloading the Free Fire Max OBB file:

1. Q: Is Free Fire Max compatible with all Android devices?

   A: No. Free Fire Max is compatible only with Android devices that have at least 2 GB of RAM and run Android 4.4 or higher.

2. Q: Can I play Free Fire Max with my friends who are playing Free Fire?

   A: Yes. You can play Free Fire Max with friends who are playing Free Fire, since both versions of the game share the same servers and matchmaking system.

3. Q: How do I update Free Fire Max on my device?

   A: You can update Free Fire Max by downloading and installing the latest APK file and OBB file from a reliable source. You can also check for updates inside the game.

4. Q: What if I run into problems while downloading or installing the OBB file?

   A: If you have trouble downloading or installing the OBB file, you can try these solutions (a scripted version of the cache-clearing step appears after this FAQ):

   • Check your internet connection and storage space.
   • Clear the game's cache and data.
   • Uninstall and reinstall the game.
   • Contact Garena's customer support for further help.

5. Q: Where can I learn more about Free Fire Max?

   A: You can learn more about Free Fire Max from its official website, its social media pages, or its YouTube channel.
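For the "clear the cache and data" step, here is a hedged one-liner done from a PC over adb instead of through the Android settings UI. `pm clear` wipes the app's stored data, and the package name is the same assumption as in the earlier sketch.

```python
# Hedged sketch: clear an app's cache and data over adb.
# "pm clear" removes all of the app's stored data; the package name is assumed.
import subprocess

subprocess.run(["adb", "shell", "pm", "clear", "com.dts.freefiremax"], check=True)
```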

    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Etiqueta Despus De La Escuela Versi Terbaru.md b/spaces/Benson/text-generation/Examples/Descargar Etiqueta Despus De La Escuela Versi Terbaru.md deleted file mode 100644 index 59c0ee451158d459b55a760b8d052b549a2ebe67..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Etiqueta Despus De La Escuela Versi Terbaru.md +++ /dev/null @@ -1,127 +0,0 @@ -

Download Tag After School Versi Terbaru: A Terrifying Horror Game for Android


If you are a fan of horror games and anime, you may have heard of Tag After School, a popular game that combines both genres in a thrilling, immersive way. Tag After School takes place in a haunted Japanese high school, where you have to escape from the ghosts of schoolgirls who want to kill you. Sounds scary, doesn't it? Well, it gets even scarier when you download Tag After School versi terbaru ("versi terbaru" is Indonesian for "latest version"), which offers more features, content, and challenges than ever before. In this article, we will tell you everything you need to know about Tag After School versi terbaru: what it is, how to play it, why you should download it, and how to download it for free. Let's get started!


What Is Tag After School?


Tag After School is a horror-mystery game made by an independent developer called Kojima. The game was released in 2020 and has since gained a lot of popularity among horror and anime fans. It is inspired by Japanese urban legends, folklore, and culture, and it features realistic graphics, sound effects, and voice acting. The game also has a dark, twisted story that will keep you on the edge of your seat.


download tag after school versi terbaru


    Download File ★★★ https://bltlly.com/2v6JNZ




A Brief Introduction to the Game and Its Features


The game follows the story of Shota-Kun, a high-school student who wakes up in an abandoned classroom after being knocked out by a mysterious girl. He soon realizes he is trapped in the school with no way out, and that he is not alone. Other students are trapped in the school as well, but they are not friendly: they are possessed by evil spirits that want to kill Shota-Kun and anyone who gets in his way. Shota-Kun has to find a way to escape the school before it is too late.


The game has many features that make it unique and exciting. Some of them are:

• The game has different modes that change the gameplay and difficulty.
• The game has hidden secrets and Easter eggs that reveal more about the story and the characters.
• The game has a realistic flashlight system that limits your vision and battery life.
• The game has a dynamic camera system that changes the perspective and angle depending on your situation.

How Do You Play Tag After School?


Playing Tag After School is not easy, but it is great fun and very rewarding. Here are some tips on how to play the game:


Basic Gameplay Mechanics and Tips


The game is played from a first-person perspective, using your smartphone as the controller. You move with the virtual joystick on the left side of the screen and interact with objects using the buttons on the right. You can also use your flashlight by tapping the screen, but be careful: it has limited battery life. You can recharge the flashlight by finding batteries scattered around the school.


Your main objective is to find clues and items that will help you escape the school. You can also talk to other characters you meet along the way, but be careful: some of them may be hostile or deceptive. You can hide from enemies in lockers, closets, and other hiding spots; however, some enemies can still find you if you make too much noise or if they see your flashlight. You can also run from enemies, but sprinting drains your stamina and leaves you more vulnerable. You have to balance stealth and speed to survive.


The Different Modes and Difficulty Levels


The game has three modes to choose from: Normal, Hard, and Nightmare. Each mode has a different level of difficulty and challenge. Here are the differences between the modes:

| Mode | Difficulty | Challenge |
| --- | --- | --- |
| Normal | Easy | You have more flashlight battery, more stamina, more items, and more clues. Enemies are slower and less aggressive. |
| Hard | Medium | You have less flashlight battery, less stamina, fewer items, and fewer clues. Enemies are faster and more aggressive. |
| Nightmare | Hard | You have no flashlight battery, no stamina, no items, and no clues. Enemies are very fast and very aggressive. |

You can also unlock a secret mode called Hell Mode after completing the game on Nightmare mode. This mode is extremely hard and meant only for the most hardcore players.


Hidden Secrets and Easter Eggs


The game has many hidden secrets and Easter eggs that you can discover by exploring the school and finding clues. Some of these secrets and Easter eggs are:

• The game contains references to other horror games and movies, such as Silent Hill, The Ring, The Grudge, and more.
• The game has hidden messages and codes that reveal more about the backstory and the characters.
• The game has secret rooms and passages that lead to new areas and items.
• The game has alternate endings that depend on your choices and actions.
• The game has a secret boss that can be fought after completing the game on Hell Mode.

Why Should You Download Tag After School Versi Terbaru?


If you are already a fan of Tag After School, or if you are looking for a new horror game to play, you should definitely download Tag After School versi terbaru. The latest version of the game offers many benefits and improvements over previous versions. Some of these benefits are:


The Benefits of Downloading the Latest Version of the Game


Improved Graphics and Performance

The latest version of the game has improved graphics and performance that make it look better and run more smoothly on your device.

New Characters and Scenarios


The latest version of the game has new characters and scenarios that add more variety and depth to the gameplay. There are new enemies with different appearances, behaviors, and abilities that test your skills and strategies. There are also new allies with different personalities, backgrounds, and roles that affect your story and its outcome. Finally, there are new locations with different layouts, puzzles, and secrets that test your exploration and intuition.


Bug Fixes and Updates


The latest version of the game has bug fixes and updates that make it more stable and enjoyable. It fixes some of the glitches, errors, and crashes that occurred in previous versions. It also adds some new features, such as achievements, leaderboards, and cloud saves, that improve your gameplay experience.


How to Download Tag After School Versi Terbaru for Free


If you want to download Tag After School versi terbaru for free, you have two options: you can download it from the official website or from other platforms. Here are the steps for downloading the game from both sources:


Steps to Download and Install the Game from the Official Website

1. Go to the official Tag After School website.
2. Select your device (Android or iOS) and your preferred language (English or Japanese).
3. Wait for the download to finish (a small scripted stand-in for this step is sketched below).
4. Open the downloaded file and follow the instructions to install the game on your device.
5. Enjoy playing Tag After School versi terbaru!
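As an illustration of the download step only, here is a hedged Python sketch that fetches a file from a direct URL. The URL below is a placeholder, not a real link from this article; substitute the address the official site actually gives you.

```python
# Hedged sketch: download a file from a direct URL to the current directory.
# The URL is a placeholder and must be replaced with a real download link.
import urllib.request

URL = "https://example.com/tag-after-school-latest.apk"  # placeholder URL
urllib.request.urlretrieve(URL, "tag-after-school.apk")
print("saved tag-after-school.apk")
```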

Alternative Sources for Downloading the Game from Other Platforms

If you choose to download the game from other platforms instead, take some basic precautions:

• Check the ratings, reviews, and comments for both the game and the source before downloading.
• Compare the size, version, and date of the game and the source against the official website; a checksum sketch for this kind of check follows this list.
• Scan the downloaded file with an antivirus or malware detector before opening it.
• Do not grant unnecessary permissions or access to the game or the source.
• Delete the downloaded installer after you have installed the game on your device.
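To make the "compare the size and version" tip concrete, the sketch below computes a SHA-256 checksum of a downloaded file so you can compare it against a value published by a trusted source. The file name is a placeholder.

```python
# Hedged sketch: compute the SHA-256 checksum of a downloaded file,
# reading in chunks so large files do not need to fit in memory.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("tag-after-school.apk"))  # placeholder file name
```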

Precautions to Take Before Downloading the Game from Unknown Sources


If you want to download the game from unknown sources that are not verified or trusted, you have to be very careful and cautious. These sources may host modified, corrupted, or infected versions of the game that can harm your device or your privacy. Here are some precautions to take before downloading the game from unknown sources:

• Do not download the game from any source that asks you for money, personal information, or registration.
• Do not download the game from any source with a suspicious or unfamiliar URL or domain name.
• Do not download the game from any source with pop-ups, ads, or redirects that interfere with your browsing.
• Do not download the game from any source with a poor reputation, quality, or security level.
• Do not download the game from any source with negative comments, complaints, or reports from other users.

Conclusion

Tag After School versi terbaru is a terrifying horror game for Android that offers improved graphics and performance, new characters and scenarios, and bug fixes over previous versions. Whether you download it from the official website or from another reputable platform, take the precautions above and you will be able to enjoy it safely.

If you liked this article, please share it with your friends and leave a comment below. And if you have any questions or suggestions about Tag After School versi terbaru, feel free to ask us; we will be happy to help. Thank you for reading!


Frequently Asked Questions


Here are some of the most frequently asked questions about Tag After School versi terbaru:

1. Is Tag After School versi terbaru safe to download?

   Yes, Tag After School versi terbaru is safe to download from the official website or from trusted platforms. However, if you download it from unknown sources, you have to be careful and cautious, since they may host modified, corrupted, or infected versions of the game.

2. Is Tag After School versi terbaru compatible with my device?

   Tag After School versi terbaru is compatible with most Android devices running Android 4.4 or higher. However, some devices may struggle to run the game smoothly because of their specifications or settings. You can check your device's compatibility on the official Tag After School website.

3. How long is Tag After School versi terbaru?

   The length of Tag After School versi terbaru depends on your choices and actions in the game. The game has multiple endings that change depending on your decisions and interactions, different modes that change the gameplay and difficulty, and hidden secrets and Easter eggs that can extend the playtime. On average, the game takes 2 to 4 hours to complete, depending on the mode and the ending.

4. What is the difference between Tag After School and Tag After School versi terbaru?

   Tag After School versi terbaru is the latest version of Tag After School, and it offers more features, content, and improvements than previous versions. Some of the differences are:

   • Tag After School versi terbaru has new characters and scenarios that add more variety and depth to the game.
   • Tag After School versi terbaru has bug fixes and updates that make the game more stable and enjoyable.
   • Tag After School versi terbaru has new features, such as achievements, leaderboards, and cloud saves, that improve your gameplay experience.

5. Where can I find more information about Tag After School versi terbaru?

   You can visit the official Tag After School Twitter account, where you can interact with the developer and other fans of the game. You can also visit the official Tag After School website or the official Tag After School online store.

\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/color_triplet.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/color_triplet.py
deleted file mode 100644
index 02cab328251af9bfa809981aaa44933c407e2cd7..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/color_triplet.py
+++ /dev/null
@@ -1,38 +0,0 @@
-from typing import NamedTuple, Tuple
-
-
-class ColorTriplet(NamedTuple):
-    """The red, green, and blue components of a color."""
-
-    red: int
-    """Red component in 0 to 255 range."""
-    green: int
-    """Green component in 0 to 255 range."""
-    blue: int
-    """Blue component in 0 to 255 range."""
-
-    @property
-    def hex(self) -> str:
-        """get the color triplet in CSS style."""
-        red, green, blue = self
-        return f"#{red:02x}{green:02x}{blue:02x}"
-
-    @property
-    def rgb(self) -> str:
-        """The color in RGB format.
-
-        Returns:
-            str: An rgb color, e.g. ``"rgb(100,23,255)"``.
-        """
-        red, green, blue = self
-        return f"rgb({red},{green},{blue})"
-
-    @property
-    def normalized(self) -> Tuple[float, float, float]:
-        """Convert components into floats between 0 and 1.
-
-        Returns:
-            Tuple[float, float, float]: A tuple of three normalized colour components.
-        """
-        red, green, blue = self
-        return red / 255.0, green / 255.0, blue / 255.0
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/py39compat.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/py39compat.py
deleted file mode 100644
index c43e5f10fdecb6606a1b75af3e149cb6a0a55e42..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/py39compat.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import sys
-import platform
-
-
-def add_ext_suffix_39(vars):
-    """
-    Ensure vars contains 'EXT_SUFFIX'. pypa/distutils#130
-    """
-    import _imp
-
-    ext_suffix = _imp.extension_suffixes()[0]
-    vars.update(
-        EXT_SUFFIX=ext_suffix,
-        # sysconfig sets SO to match EXT_SUFFIX, so maintain
-        # that expectation.
- # https://github.com/python/cpython/blob/785cc6770588de087d09e89a69110af2542be208/Lib/sysconfig.py#L671-L673 - SO=ext_suffix, - ) - - -needs_ext_suffix = sys.version_info < (3, 10) and platform.system() == 'Windows' -add_ext_suffix = add_ext_suffix_39 if needs_ext_suffix else lambda vars: None diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_validations.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_validations.py deleted file mode 100644 index ad5ee31ef53370fe7ec95799db390a33c3680b3b..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_validations.py +++ /dev/null @@ -1,1035 +0,0 @@ -# noqa -# type: ignore -# flake8: noqa -# pylint: skip-file -# mypy: ignore-errors -# yapf: disable -# pylama:skip=1 - - -# *** PLEASE DO NOT MODIFY DIRECTLY: Automatically generated code *** - - -VERSION = "2.15.3" -import re -from .fastjsonschema_exceptions import JsonSchemaValueException - - -REGEX_PATTERNS = { - '^.*$': re.compile('^.*$'), - '.+': re.compile('.+'), - '^.+$': re.compile('^.+$'), - 'idn-email_re_pattern': re.compile('^[^@]+@[^@]+\\.[^@]+\\Z') -} - -NoneType = type(None) - -def validate(data, custom_formats={}, name_prefix=None): - validate_https___packaging_python_org_en_latest_specifications_declaring_build_dependencies(data, custom_formats, (name_prefix or "data") + "") - return data - -def validate_https___packaging_python_org_en_latest_specifications_declaring_build_dependencies(data, custom_formats={}, name_prefix=None): - if not isinstance(data, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-build-dependencies/', 'title': 'Data structure for ``pyproject.toml`` files', '$$description': ['File format containing build-time configurations for the Python ecosystem. ', ':pep:`517` initially defined a build-system independent format for source trees', 'which was complemented by :pep:`518` to provide a way of specifying dependencies ', 'for building Python projects.', 'Please notice the ``project`` table (as initially defined in :pep:`621`) is not included', 'in this schema and should be considered separately.'], 'type': 'object', 'additionalProperties': False, 'properties': {'build-system': {'type': 'object', 'description': 'Table used to store build-related data', 'additionalProperties': False, 'properties': {'requires': {'type': 'array', '$$description': ['List of dependencies in the :pep:`508` format required to execute the build', 'system. 
Please notice that the resulting dependency graph', '**MUST NOT contain cycles**'], 'items': {'type': 'string'}}, 'build-backend': {'type': 'string', 'description': 'Python object that will be used to perform the build according to :pep:`517`', 'format': 'pep517-backend-reference'}, 'backend-path': {'type': 'array', '$$description': ['List of directories to be prepended to ``sys.path`` when loading the', 'back-end, and running its hooks'], 'items': {'type': 'string', '$comment': 'Should be a path (TODO: enforce it with format?)'}}}, 'required': ['requires']}, 'project': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-project-metadata/', 'title': 'Package metadata stored in the ``project`` table', '$$description': ['Data structure for the **project** table inside ``pyproject.toml``', '(as initially defined in :pep:`621`)'], 'type': 'object', 'properties': {'name': {'type': 'string', 'description': 'The name (primary identifier) of the project. MUST be statically defined.', 'format': 'pep508-identifier'}, 'version': {'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, 'description': {'type': 'string', '$$description': ['The `summary description of the project', '`_']}, 'readme': {'$$description': ['`Full/detailed description of the project in the form of a README', '`_', "with meaning similar to the one defined in `core metadata's Description", '`_'], 'oneOf': [{'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, {'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}]}, 'requires-python': {'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, 'license': {'description': '`Project license `_.', 'oneOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}]}, 'authors': {'type': 'array', 'items': {'$ref': '#/definitions/author'}, '$$description': ["The people or organizations considered to be the 'authors' of the project.", 'The exact meaning is open to interpretation (e.g. 
original or primary authors,', 'current maintainers, or owners of the package).']}, 'maintainers': {'type': 'array', 'items': {'$ref': '#/definitions/author'}, '$$description': ["The people or organizations considered to be the 'maintainers' of the project.", 'Similarly to ``authors``, the exact meaning is open to interpretation.']}, 'keywords': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of keywords to assist searching for the distribution in a larger catalog.'}, 'classifiers': {'type': 'array', 'items': {'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, '$$description': ['`Trove classifiers `_', 'which apply to the project.']}, 'urls': {'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, 'scripts': {'$ref': '#/definitions/entry-point-group', '$$description': ['Instruct the installer to create command-line wrappers for the given', '`entry points `_.']}, 'gui-scripts': {'$ref': '#/definitions/entry-point-group', '$$description': ['Instruct the installer to create GUI wrappers for the given', '`entry points `_.', 'The difference between ``scripts`` and ``gui-scripts`` is only relevant in', 'Windows.']}, 'entry-points': {'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$ref': '#/definitions/entry-point-group'}}}, 'dependencies': {'type': 'array', 'description': 'Project (mandatory) dependencies.', 'items': {'$ref': '#/definitions/dependency'}}, 'optional-dependencies': {'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$ref': '#/definitions/dependency'}}}}, 'dynamic': {'type': 'array', '$$description': ['Specifies which fields are intentionally unspecified and expected to be', 'dynamically provided by build tools'], 'items': {'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}}}, 'required': ['name'], 'additionalProperties': False, 'if': {'not': {'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, '$$comment': ['According to :pep:`621`:', ' If the core metadata specification lists a field as "Required", then', ' the metadata MUST specify the field statically or list it in dynamic', 'In turn, `core metadata`_ defines:', ' The required fields are: Metadata-Version, Name, Version.', ' All the other fields are optional.', 'Since ``Metadata-Version`` is defined by the build back-end, ``name`` and', '``version`` are the only mandatory information in ``pyproject.toml``.', '.. 
_core metadata: https://packaging.python.org/specifications/core-metadata/']}, 'then': {'required': ['version'], '$$description': ['version should be statically defined in the ``version`` field']}, 'definitions': {'author': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, 'entry-point-group': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'dependency': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}, 'tool': {'type': 'object', 'properties': {'distutils': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://docs.python.org/3/install/', 'title': '``tool.distutils`` table', '$$description': ['Originally, ``distutils`` allowed developers to configure arguments for', '``setup.py`` scripts via `distutils configuration files', '`_.', '``tool.distutils`` subtables could be used with the same purpose', '(NOT CURRENTLY IMPLEMENTED).'], 'type': 'object', 'properties': {'global': {'type': 'object', 'description': 'Global options applied to all ``distutils`` commands'}}, 'patternProperties': {'.+': {'type': 'object'}}, '$comment': 'TODO: Is there a practical way of making this schema more specific?'}, 'setuptools': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://setuptools.pypa.io/en/latest/references/keywords.html', 'title': '``tool.setuptools`` table', '$$description': ['Please notice for the time being the ``setuptools`` project does not specify', 'a way of configuring builds via ``pyproject.toml``.', 'Therefore this schema should be taken just as a *"thought experiment"* on how', 'this *might be done*, by following the principles established in', '`ini2toml `_.', 'It considers only ``setuptools`` `parameters', '`_', 'that can currently be configured via ``setup.cfg`` and are not covered by :pep:`621`', 'but intentionally excludes ``dependency_links`` and ``setup_requires``.', 'NOTE: ``scripts`` was renamed to ``script-files`` to avoid confusion with', 'entry-point based scripts (defined in :pep:`621`).'], 'type': 'object', 'additionalProperties': False, 'properties': {'platforms': {'type': 'array', 'items': {'type': 'string'}}, 'provides': {'$$description': ['Package and virtual package names contained within this package', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'obsoletes': {'$$description': ['Packages which this package renders obsolete', '**(not supported by pip)**'], 'type': 
'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'zip-safe': {'description': 'Whether the project can be safely installed and run from a zip file.', 'type': 'boolean'}, 'script-files': {'description': 'Legacy way of defining scripts (entry-points are preferred).', 'type': 'array', 'items': {'type': 'string'}, '$comment': 'TODO: is this field deprecated/should be removed?'}, 'eager-resources': {'$$description': ['Resources that should be extracted together, if any of them is needed,', 'or if any C extensions included in the project are imported.'], 'type': 'array', 'items': {'type': 'string'}}, 'packages': {'$$description': ['Packages that should be included in the distribution.', 'It can be given either as a list of package identifiers', 'or as a ``dict``-like structure with a single key ``find``', 'which corresponds to a dynamic call to', '``setuptools.config.expand.find_packages`` function.', 'The ``find`` key is associated with a nested ``dict``-like structure that can', 'contain ``where``, ``include``, ``exclude`` and ``namespaces`` keys,', 'mimicking the keyword arguments of the associated function.'], 'oneOf': [{'title': 'Array of Python package identifiers', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}}, {'$ref': '#/definitions/find-directive'}]}, 'package-dir': {'$$description': [':class:`dict`-like structure mapping from package names to directories where their', 'code can be found.', 'The empty string (as key) means that all packages are contained inside', 'the given directory will be included in the distribution.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, 'patternProperties': {'^.*$': {'type': 'string'}}}, 'package-data': {'$$description': ['Mapping from package names to lists of glob patterns.', 'Usually this option is not needed when using ``include-package-data = true``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'include-package-data': {'$$description': ['Automatically include any data files inside the package directories', 'that are specified by ``MANIFEST.in``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'boolean'}, 'exclude-package-data': {'$$description': ['Mapping from package names to lists of glob patterns that should be excluded', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'namespace-packages': {'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'https://setuptools.pypa.io/en/latest/userguide/package_discovery.html'}, 'py-modules': {'description': 'Modules that setuptools will manipulate', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'TODO: clarify the relationship with ``packages``'}, 'data-files': {'$$description': ['**DEPRECATED**: dict-like structure where each key represents a directory and', 'the value is a list of glob patterns that should be installed in them.', "Please notice this 
don't work with wheels. See `data files support", '`_'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'cmdclass': {'$$description': ['Mapping of distutils-style command names to ``setuptools.Command`` subclasses', 'which in turn should be represented by strings with a qualified class name', '(i.e., "dotted" form with module), e.g.::\n\n', ' cmdclass = {mycmd = "pkg.subpkg.module.CommandClass"}\n\n', 'The command class should be a directly defined at the top-level of the', 'containing module (no class nesting).'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'string', 'format': 'python-qualified-identifier'}}}, 'license-files': {'type': 'array', 'items': {'type': 'string'}, '$$description': ['PROVISIONAL: List of glob patterns for all license files being distributed.', '(might become standard with PEP 639).'], 'default': ['LICEN[CS]E*', ' COPYING*', ' NOTICE*', 'AUTHORS*'], '$comment': 'TODO: revise if PEP 639 is accepted. Probably ``project.license-files``?'}, 'dynamic': {'type': 'object', 'description': 'Instructions for loading :pep:`621`-related metadata dynamically', 'additionalProperties': False, 'properties': {'version': {'$$description': ['A version dynamically loaded via either the ``attr:`` or ``file:``', 'directives. Please make sure the given file or attribute respects :pep:`440`.'], 'oneOf': [{'$ref': '#/definitions/attr-directive'}, {'$ref': '#/definitions/file-directive'}]}, 'classifiers': {'$ref': '#/definitions/file-directive'}, 'description': {'$ref': '#/definitions/file-directive'}, 'dependencies': {'$ref': '#/definitions/file-directive'}, 'entry-points': {'$ref': '#/definitions/file-directive'}, 'optional-dependencies': {'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$ref': '#/definitions/file-directive'}}}, 'readme': {'anyOf': [{'$ref': '#/definitions/file-directive'}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}}}}, 'definitions': {'file-directive': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'attr-directive': {'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, 'find-directive': {'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. 
``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}}}}}}, 'project': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-project-metadata/', 'title': 'Package metadata stored in the ``project`` table', '$$description': ['Data structure for the **project** table inside ``pyproject.toml``', '(as initially defined in :pep:`621`)'], 'type': 'object', 'properties': {'name': {'type': 'string', 'description': 'The name (primary identifier) of the project. MUST be statically defined.', 'format': 'pep508-identifier'}, 'version': {'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, 'description': {'type': 'string', '$$description': ['The `summary description of the project', '`_']}, 'readme': {'$$description': ['`Full/detailed description of the project in the form of a README', '`_', "with meaning similar to the one defined in `core metadata's Description", '`_'], 'oneOf': [{'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, {'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}]}, 'requires-python': {'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, 'license': {'description': '`Project license `_.', 'oneOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}]}, 'authors': {'type': 'array', 'items': {'$ref': '#/definitions/author'}, '$$description': ["The people or organizations considered to be the 'authors' of the project.", 'The exact meaning is open to interpretation (e.g. 
original or primary authors,', 'current maintainers, or owners of the package).']}, 'maintainers': {'type': 'array', 'items': {'$ref': '#/definitions/author'}, '$$description': ["The people or organizations considered to be the 'maintainers' of the project.", 'Similarly to ``authors``, the exact meaning is open to interpretation.']}, 'keywords': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of keywords to assist searching for the distribution in a larger catalog.'}, 'classifiers': {'type': 'array', 'items': {'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, '$$description': ['`Trove classifiers `_', 'which apply to the project.']}, 'urls': {'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, 'scripts': {'$ref': '#/definitions/entry-point-group', '$$description': ['Instruct the installer to create command-line wrappers for the given', '`entry points `_.']}, 'gui-scripts': {'$ref': '#/definitions/entry-point-group', '$$description': ['Instruct the installer to create GUI wrappers for the given', '`entry points `_.', 'The difference between ``scripts`` and ``gui-scripts`` is only relevant in', 'Windows.']}, 'entry-points': {'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$ref': '#/definitions/entry-point-group'}}}, 'dependencies': {'type': 'array', 'description': 'Project (mandatory) dependencies.', 'items': {'$ref': '#/definitions/dependency'}}, 'optional-dependencies': {'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$ref': '#/definitions/dependency'}}}}, 'dynamic': {'type': 'array', '$$description': ['Specifies which fields are intentionally unspecified and expected to be', 'dynamically provided by build tools'], 'items': {'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}}}, 'required': ['name'], 'additionalProperties': False, 'if': {'not': {'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, '$$comment': ['According to :pep:`621`:', ' If the core metadata specification lists a field as "Required", then', ' the metadata MUST specify the field statically or list it in dynamic', 'In turn, `core metadata`_ defines:', ' The required fields are: Metadata-Version, Name, Version.', ' All the other fields are optional.', 'Since ``Metadata-Version`` is defined by the build back-end, ``name`` and', '``version`` are the only mandatory information in ``pyproject.toml``.', '.. 
_core metadata: https://packaging.python.org/specifications/core-metadata/']}, 'then': {'required': ['version'], '$$description': ['version should be statically defined in the ``version`` field']}, 'definitions': {'author': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, 'entry-point-group': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'dependency': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}}, rule='type')
-    data_is_dict = isinstance(data, dict)
-    if data_is_dict:
-        data_keys = set(data.keys())
-        if "build-system" in data_keys:
-            data_keys.remove("build-system")
-            data__buildsystem = data["build-system"]
-            if not isinstance(data__buildsystem, (dict)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system must be object", value=data__buildsystem, name="" + (name_prefix or "data") + ".build-system", definition={'type': 'object', 'description': 'Table used to store build-related data', 'additionalProperties': False, 'properties': {'requires': {'type': 'array', '$$description': ['List of dependencies in the :pep:`508` format required to execute the build', 'system. Please notice that the resulting dependency graph', '**MUST NOT contain cycles**'], 'items': {'type': 'string'}}, 'build-backend': {'type': 'string', 'description': 'Python object that will be used to perform the build according to :pep:`517`', 'format': 'pep517-backend-reference'}, 'backend-path': {'type': 'array', '$$description': ['List of directories to be prepended to ``sys.path`` when loading the', 'back-end, and running its hooks'], 'items': {'type': 'string', '$comment': 'Should be a path (TODO: enforce it with format?)'}}}, 'required': ['requires']}, rule='type')
-            data__buildsystem_is_dict = isinstance(data__buildsystem, dict)
-            if data__buildsystem_is_dict:
-                data__buildsystem_len = len(data__buildsystem)
-                if not all(prop in data__buildsystem for prop in ['requires']):
-                    raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system must contain ['requires'] properties", value=data__buildsystem, name="" + (name_prefix or "data") + ".build-system", definition={'type': 'object', 'description': 'Table used to store build-related data', 'additionalProperties': False, 'properties': {'requires': {'type': 'array', '$$description': ['List of dependencies in the :pep:`508` format required to execute the build', 'system. Please notice that the resulting dependency graph', '**MUST NOT contain cycles**'], 'items': {'type': 'string'}}, 'build-backend': {'type': 'string', 'description': 'Python object that will be used to perform the build according to :pep:`517`', 'format': 'pep517-backend-reference'}, 'backend-path': {'type': 'array', '$$description': ['List of directories to be prepended to ``sys.path`` when loading the', 'back-end, and running its hooks'], 'items': {'type': 'string', '$comment': 'Should be a path (TODO: enforce it with format?)'}}}, 'required': ['requires']}, rule='required')
-                data__buildsystem_keys = set(data__buildsystem.keys())
-                if "requires" in data__buildsystem_keys:
-                    data__buildsystem_keys.remove("requires")
-                    data__buildsystem__requires = data__buildsystem["requires"]
-                    if not isinstance(data__buildsystem__requires, (list, tuple)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system.requires must be array", value=data__buildsystem__requires, name="" + (name_prefix or "data") + ".build-system.requires", definition={'type': 'array', '$$description': ['List of dependencies in the :pep:`508` format required to execute the build', 'system. Please notice that the resulting dependency graph', '**MUST NOT contain cycles**'], 'items': {'type': 'string'}}, rule='type')
-                    data__buildsystem__requires_is_list = isinstance(data__buildsystem__requires, (list, tuple))
-                    if data__buildsystem__requires_is_list:
-                        data__buildsystem__requires_len = len(data__buildsystem__requires)
-                        for data__buildsystem__requires_x, data__buildsystem__requires_item in enumerate(data__buildsystem__requires):
-                            if not isinstance(data__buildsystem__requires_item, (str)):
-                                raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system.requires[{data__buildsystem__requires_x}]".format(**locals()) + " must be string", value=data__buildsystem__requires_item, name="" + (name_prefix or "data") + ".build-system.requires[{data__buildsystem__requires_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type')
-                if "build-backend" in data__buildsystem_keys:
-                    data__buildsystem_keys.remove("build-backend")
-                    data__buildsystem__buildbackend = data__buildsystem["build-backend"]
-                    if not isinstance(data__buildsystem__buildbackend, (str)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system.build-backend must be string", value=data__buildsystem__buildbackend, name="" + (name_prefix or "data") + ".build-system.build-backend", definition={'type': 'string', 'description': 'Python object that will be used to perform the build according to :pep:`517`', 'format': 'pep517-backend-reference'}, rule='type')
-                    if isinstance(data__buildsystem__buildbackend, str):
-                        if not custom_formats["pep517-backend-reference"](data__buildsystem__buildbackend):
-                            raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system.build-backend must be pep517-backend-reference", value=data__buildsystem__buildbackend, name="" + (name_prefix or "data") + ".build-system.build-backend", definition={'type': 'string', 'description': 'Python object that will be used to perform the build according to :pep:`517`', 'format': 'pep517-backend-reference'}, rule='format')
-                if "backend-path" in data__buildsystem_keys:
-                    data__buildsystem_keys.remove("backend-path")
-                    data__buildsystem__backendpath = data__buildsystem["backend-path"]
-                    if not isinstance(data__buildsystem__backendpath, (list, tuple)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system.backend-path must be array", value=data__buildsystem__backendpath, name="" + (name_prefix or "data") + ".build-system.backend-path", definition={'type': 'array', '$$description': ['List of directories to be prepended to ``sys.path`` when loading the', 'back-end, and running its hooks'], 'items': {'type': 'string', '$comment': 'Should be a path (TODO: enforce it with format?)'}}, rule='type')
-                    data__buildsystem__backendpath_is_list = isinstance(data__buildsystem__backendpath, (list, tuple))
-                    if data__buildsystem__backendpath_is_list:
-                        data__buildsystem__backendpath_len = len(data__buildsystem__backendpath)
-                        for data__buildsystem__backendpath_x, data__buildsystem__backendpath_item in enumerate(data__buildsystem__backendpath):
-                            if not isinstance(data__buildsystem__backendpath_item, (str)):
-                                raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system.backend-path[{data__buildsystem__backendpath_x}]".format(**locals()) + " must be string", value=data__buildsystem__backendpath_item, name="" + (name_prefix or "data") + ".build-system.backend-path[{data__buildsystem__backendpath_x}]".format(**locals()) + "", definition={'type': 'string', '$comment': 'Should be a path (TODO: enforce it with format?)'}, rule='type')
-                if data__buildsystem_keys:
-                    raise JsonSchemaValueException("" + (name_prefix or "data") + ".build-system must not contain "+str(data__buildsystem_keys)+" properties", value=data__buildsystem, name="" + (name_prefix or "data") + ".build-system", definition={'type': 'object', 'description': 'Table used to store build-related data', 'additionalProperties': False, 'properties': {'requires': {'type': 'array', '$$description': ['List of dependencies in the :pep:`508` format required to execute the build', 'system. Please notice that the resulting dependency graph', '**MUST NOT contain cycles**'], 'items': {'type': 'string'}}, 'build-backend': {'type': 'string', 'description': 'Python object that will be used to perform the build according to :pep:`517`', 'format': 'pep517-backend-reference'}, 'backend-path': {'type': 'array', '$$description': ['List of directories to be prepended to ``sys.path`` when loading the', 'back-end, and running its hooks'], 'items': {'type': 'string', '$comment': 'Should be a path (TODO: enforce it with format?)'}}}, 'required': ['requires']}, rule='additionalProperties')
-        if "project" in data_keys:
-            data_keys.remove("project")
-            data__project = data["project"]
-            validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata(data__project, custom_formats, (name_prefix or "data") + ".project")
-        if "tool" in data_keys:
-            data_keys.remove("tool")
-            data__tool = data["tool"]
-            if not isinstance(data__tool, (dict)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".tool must be object", value=data__tool, name="" + (name_prefix or "data") + ".tool", definition={'type': 'object', 'properties': {'distutils': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://docs.python.org/3/install/', 'title': '``tool.distutils`` table', '$$description': ['Originally, ``distutils`` allowed developers to configure arguments for', '``setup.py`` scripts via `distutils configuration files', '`_.', '``tool.distutils`` subtables could be used with the same purpose', '(NOT CURRENTLY IMPLEMENTED).'], 'type': 'object', 'properties': {'global': {'type': 'object', 'description': 'Global options applied to all ``distutils`` commands'}}, 'patternProperties': {'.+': {'type': 'object'}}, '$comment': 'TODO: Is there a practical way of making this schema more specific?'}, 'setuptools': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://setuptools.pypa.io/en/latest/references/keywords.html', 'title': '``tool.setuptools`` table', '$$description': ['Please notice for the time being the ``setuptools`` project does not specify', 'a way of configuring builds via ``pyproject.toml``.', 'Therefore this schema should be taken just as a *"thought experiment"* on how', 'this *might be done*, by following the principles established in', '`ini2toml `_.', 'It considers only ``setuptools`` `parameters', '`_', 'that can currently be configured via ``setup.cfg`` and are not covered by :pep:`621`', 'but intentionally excludes ``dependency_links`` and ``setup_requires``.', 'NOTE: ``scripts`` was renamed to ``script-files`` to avoid confusion with', 'entry-point based scripts (defined in :pep:`621`).'], 'type': 'object', 'additionalProperties': False, 'properties': {'platforms': {'type': 'array', 'items': {'type': 'string'}}, 'provides': {'$$description': ['Package and virtual package names contained within this package', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'obsoletes': {'$$description': ['Packages which this package renders obsolete', '**(not supported by pip)**'], 'type': 
'eager-resources': {'$$description': ['Resources that should be extracted together, if any of them is needed,', 'or if any C extensions included in the project are imported.'], 'type': 'array', 'items': {'type': 'string'}}, 'packages': {'$$description': ['Packages that should be included in the distribution.', 'It can be given either as a list of package identifiers', 'or as a ``dict``-like structure with a single key ``find``', 'which corresponds to a dynamic call to', '``setuptools.config.expand.find_packages`` function.', 'The ``find`` key is associated with a nested ``dict``-like structure that can', 'contain ``where``, ``include``, ``exclude`` and ``namespaces`` keys,', 'mimicking the keyword arguments of the associated function.'], 'oneOf': [{'title': 'Array of Python package identifiers', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}}, {'$ref': '#/definitions/find-directive'}]}, 'package-dir': {'$$description': [':class:`dict`-like structure mapping from package names to directories where their', 'code can be found.', 'The empty string (as key) means that all packages are contained inside', 'the given directory will be included in the distribution.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, 'patternProperties': {'^.*$': {'type': 'string'}}}, 'package-data': {'$$description': ['Mapping from package names to lists of glob patterns.', 'Usually this option is not needed when using ``include-package-data = true``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'include-package-data': {'$$description': ['Automatically include any data files inside the package directories', 'that are specified by ``MANIFEST.in``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'boolean'}, 'exclude-package-data': {'$$description': ['Mapping from package names to lists of glob patterns that should be excluded', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'namespace-packages': {'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'https://setuptools.pypa.io/en/latest/userguide/package_discovery.html'}, 'py-modules': {'description': 'Modules that setuptools will manipulate', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'TODO: clarify the relationship with ``packages``'}, 'data-files': {'$$description': ['**DEPRECATED**: dict-like structure where each key represents a directory and', 'the value is a list of glob patterns that should be installed in them.', "Please notice this don't work with wheels. 
See `data files support", '`_'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'cmdclass': {'$$description': ['Mapping of distutils-style command names to ``setuptools.Command`` subclasses', 'which in turn should be represented by strings with a qualified class name', '(i.e., "dotted" form with module), e.g.::\n\n', ' cmdclass = {mycmd = "pkg.subpkg.module.CommandClass"}\n\n', 'The command class should be a directly defined at the top-level of the', 'containing module (no class nesting).'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'string', 'format': 'python-qualified-identifier'}}}, 'license-files': {'type': 'array', 'items': {'type': 'string'}, '$$description': ['PROVISIONAL: List of glob patterns for all license files being distributed.', '(might become standard with PEP 639).'], 'default': ['LICEN[CS]E*', ' COPYING*', ' NOTICE*', 'AUTHORS*'], '$comment': 'TODO: revise if PEP 639 is accepted. Probably ``project.license-files``?'}, 'dynamic': {'type': 'object', 'description': 'Instructions for loading :pep:`621`-related metadata dynamically', 'additionalProperties': False, 'properties': {'version': {'$$description': ['A version dynamically loaded via either the ``attr:`` or ``file:``', 'directives. Please make sure the given file or attribute respects :pep:`440`.'], 'oneOf': [{'$ref': '#/definitions/attr-directive'}, {'$ref': '#/definitions/file-directive'}]}, 'classifiers': {'$ref': '#/definitions/file-directive'}, 'description': {'$ref': '#/definitions/file-directive'}, 'dependencies': {'$ref': '#/definitions/file-directive'}, 'entry-points': {'$ref': '#/definitions/file-directive'}, 'optional-dependencies': {'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$ref': '#/definitions/file-directive'}}}, 'readme': {'anyOf': [{'$ref': '#/definitions/file-directive'}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}}}}, 'definitions': {'file-directive': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'attr-directive': {'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, 'find-directive': {'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. 
``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}}}}}, rule='type')
-            data__tool_is_dict = isinstance(data__tool, dict)
-            if data__tool_is_dict:
-                data__tool_keys = set(data__tool.keys())
-                if "distutils" in data__tool_keys:
-                    data__tool_keys.remove("distutils")
-                    data__tool__distutils = data__tool["distutils"]
-                    validate_https___docs_python_org_3_install(data__tool__distutils, custom_formats, (name_prefix or "data") + ".tool.distutils")
-                if "setuptools" in data__tool_keys:
-                    data__tool_keys.remove("setuptools")
-                    data__tool__setuptools = data__tool["setuptools"]
-                    validate_https___setuptools_pypa_io_en_latest_references_keywords_html(data__tool__setuptools, custom_formats, (name_prefix or "data") + ".tool.setuptools")
-        if data_keys:
-            raise JsonSchemaValueException("" + (name_prefix or "data") + " must not contain "+str(data_keys)+" properties", value=data, name="" + (name_prefix or "data") + "", definition={'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-build-dependencies/', 'title': 'Data structure for ``pyproject.toml`` files', '$$description': ['File format containing build-time configurations for the Python ecosystem. ', ':pep:`517` initially defined a build-system independent format for source trees', 'which was complemented by :pep:`518` to provide a way of specifying dependencies ', 'for building Python projects.', 'Please notice the ``project`` table (as initially defined in :pep:`621`) is not included', 'in this schema and should be considered separately.'], 'type': 'object', 'additionalProperties': False, 'properties': {'build-system': {'type': 'object', 'description': 'Table used to store build-related data', 'additionalProperties': False, 'properties': {'requires': {'type': 'array', '$$description': ['List of dependencies in the :pep:`508` format required to execute the build', 'system. Please notice that the resulting dependency graph', '**MUST NOT contain cycles**'], 'items': {'type': 'string'}}, 'build-backend': {'type': 'string', 'description': 'Python object that will be used to perform the build according to :pep:`517`', 'format': 'pep517-backend-reference'}, 'backend-path': {'type': 'array', '$$description': ['List of directories to be prepended to ``sys.path`` when loading the', 'back-end, and running its hooks'], 'items': {'type': 'string', '$comment': 'Should be a path (TODO: enforce it with format?)'}}}, 'required': ['requires']}, 'project': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-project-metadata/', 'title': 'Package metadata stored in the ``project`` table', '$$description': ['Data structure for the **project** table inside ``pyproject.toml``', '(as initially defined in :pep:`621`)'], 'type': 'object', 'properties': {'name': {'type': 'string', 'description': 'The name (primary identifier) of the project. 
MUST be statically defined.', 'format': 'pep508-identifier'}, 'version': {'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, 'description': {'type': 'string', '$$description': ['The `summary description of the project', '`_']}, 'readme': {'$$description': ['`Full/detailed description of the project in the form of a README', '`_', "with meaning similar to the one defined in `core metadata's Description", '`_'], 'oneOf': [{'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, {'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}]}, 'requires-python': {'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, 'license': {'description': '`Project license `_.', 'oneOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}]}, 'authors': {'type': 'array', 'items': {'$ref': '#/definitions/author'}, '$$description': ["The people or organizations considered to be the 'authors' of the project.", 'The exact meaning is open to interpretation (e.g. 
original or primary authors,', 'current maintainers, or owners of the package).']}, 'maintainers': {'type': 'array', 'items': {'$ref': '#/definitions/author'}, '$$description': ["The people or organizations considered to be the 'maintainers' of the project.", 'Similarly to ``authors``, the exact meaning is open to interpretation.']}, 'keywords': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of keywords to assist searching for the distribution in a larger catalog.'}, 'classifiers': {'type': 'array', 'items': {'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, '$$description': ['`Trove classifiers `_', 'which apply to the project.']}, 'urls': {'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, 'scripts': {'$ref': '#/definitions/entry-point-group', '$$description': ['Instruct the installer to create command-line wrappers for the given', '`entry points `_.']}, 'gui-scripts': {'$ref': '#/definitions/entry-point-group', '$$description': ['Instruct the installer to create GUI wrappers for the given', '`entry points `_.', 'The difference between ``scripts`` and ``gui-scripts`` is only relevant in', 'Windows.']}, 'entry-points': {'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$ref': '#/definitions/entry-point-group'}}}, 'dependencies': {'type': 'array', 'description': 'Project (mandatory) dependencies.', 'items': {'$ref': '#/definitions/dependency'}}, 'optional-dependencies': {'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$ref': '#/definitions/dependency'}}}}, 'dynamic': {'type': 'array', '$$description': ['Specifies which fields are intentionally unspecified and expected to be', 'dynamically provided by build tools'], 'items': {'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}}}, 'required': ['name'], 'additionalProperties': False, 'if': {'not': {'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, '$$comment': ['According to :pep:`621`:', ' If the core metadata specification lists a field as "Required", then', ' the metadata MUST specify the field statically or list it in dynamic', 'In turn, `core metadata`_ defines:', ' The required fields are: Metadata-Version, Name, Version.', ' All the other fields are optional.', 'Since ``Metadata-Version`` is defined by the build back-end, ``name`` and', '``version`` are the only mandatory information in ``pyproject.toml``.', '.. 
_core metadata: https://packaging.python.org/specifications/core-metadata/']}, 'then': {'required': ['version'], '$$description': ['version should be statically defined in the ``version`` field']}, 'definitions': {'author': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, 'entry-point-group': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'dependency': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}, 'tool': {'type': 'object', 'properties': {'distutils': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://docs.python.org/3/install/', 'title': '``tool.distutils`` table', '$$description': ['Originally, ``distutils`` allowed developers to configure arguments for', '``setup.py`` scripts via `distutils configuration files', '`_.', '``tool.distutils`` subtables could be used with the same purpose', '(NOT CURRENTLY IMPLEMENTED).'], 'type': 'object', 'properties': {'global': {'type': 'object', 'description': 'Global options applied to all ``distutils`` commands'}}, 'patternProperties': {'.+': {'type': 'object'}}, '$comment': 'TODO: Is there a practical way of making this schema more specific?'}, 'setuptools': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://setuptools.pypa.io/en/latest/references/keywords.html', 'title': '``tool.setuptools`` table', '$$description': ['Please notice for the time being the ``setuptools`` project does not specify', 'a way of configuring builds via ``pyproject.toml``.', 'Therefore this schema should be taken just as a *"thought experiment"* on how', 'this *might be done*, by following the principles established in', '`ini2toml `_.', 'It considers only ``setuptools`` `parameters', '`_', 'that can currently be configured via ``setup.cfg`` and are not covered by :pep:`621`', 'but intentionally excludes ``dependency_links`` and ``setup_requires``.', 'NOTE: ``scripts`` was renamed to ``script-files`` to avoid confusion with', 'entry-point based scripts (defined in :pep:`621`).'], 'type': 'object', 'additionalProperties': False, 'properties': {'platforms': {'type': 'array', 'items': {'type': 'string'}}, 'provides': {'$$description': ['Package and virtual package names contained within this package', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'obsoletes': {'$$description': ['Packages which this package renders obsolete', '**(not supported by pip)**'], 'type': 
'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'zip-safe': {'description': 'Whether the project can be safely installed and run from a zip file.', 'type': 'boolean'}, 'script-files': {'description': 'Legacy way of defining scripts (entry-points are preferred).', 'type': 'array', 'items': {'type': 'string'}, '$comment': 'TODO: is this field deprecated/should be removed?'}, 'eager-resources': {'$$description': ['Resources that should be extracted together, if any of them is needed,', 'or if any C extensions included in the project are imported.'], 'type': 'array', 'items': {'type': 'string'}}, 'packages': {'$$description': ['Packages that should be included in the distribution.', 'It can be given either as a list of package identifiers', 'or as a ``dict``-like structure with a single key ``find``', 'which corresponds to a dynamic call to', '``setuptools.config.expand.find_packages`` function.', 'The ``find`` key is associated with a nested ``dict``-like structure that can', 'contain ``where``, ``include``, ``exclude`` and ``namespaces`` keys,', 'mimicking the keyword arguments of the associated function.'], 'oneOf': [{'title': 'Array of Python package identifiers', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}}, {'$ref': '#/definitions/find-directive'}]}, 'package-dir': {'$$description': [':class:`dict`-like structure mapping from package names to directories where their', 'code can be found.', 'The empty string (as key) means that all packages are contained inside', 'the given directory will be included in the distribution.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, 'patternProperties': {'^.*$': {'type': 'string'}}}, 'package-data': {'$$description': ['Mapping from package names to lists of glob patterns.', 'Usually this option is not needed when using ``include-package-data = true``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'include-package-data': {'$$description': ['Automatically include any data files inside the package directories', 'that are specified by ``MANIFEST.in``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'boolean'}, 'exclude-package-data': {'$$description': ['Mapping from package names to lists of glob patterns that should be excluded', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'namespace-packages': {'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'https://setuptools.pypa.io/en/latest/userguide/package_discovery.html'}, 'py-modules': {'description': 'Modules that setuptools will manipulate', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'TODO: clarify the relationship with ``packages``'}, 'data-files': {'$$description': ['**DEPRECATED**: dict-like structure where each key represents a directory and', 'the value is a list of glob patterns that should be installed in them.', "Please notice this 
don't work with wheels. See `data files support", '`_'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'cmdclass': {'$$description': ['Mapping of distutils-style command names to ``setuptools.Command`` subclasses', 'which in turn should be represented by strings with a qualified class name', '(i.e., "dotted" form with module), e.g.::\n\n', ' cmdclass = {mycmd = "pkg.subpkg.module.CommandClass"}\n\n', 'The command class should be a directly defined at the top-level of the', 'containing module (no class nesting).'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'string', 'format': 'python-qualified-identifier'}}}, 'license-files': {'type': 'array', 'items': {'type': 'string'}, '$$description': ['PROVISIONAL: List of glob patterns for all license files being distributed.', '(might become standard with PEP 639).'], 'default': ['LICEN[CS]E*', ' COPYING*', ' NOTICE*', 'AUTHORS*'], '$comment': 'TODO: revise if PEP 639 is accepted. Probably ``project.license-files``?'}, 'dynamic': {'type': 'object', 'description': 'Instructions for loading :pep:`621`-related metadata dynamically', 'additionalProperties': False, 'properties': {'version': {'$$description': ['A version dynamically loaded via either the ``attr:`` or ``file:``', 'directives. Please make sure the given file or attribute respects :pep:`440`.'], 'oneOf': [{'$ref': '#/definitions/attr-directive'}, {'$ref': '#/definitions/file-directive'}]}, 'classifiers': {'$ref': '#/definitions/file-directive'}, 'description': {'$ref': '#/definitions/file-directive'}, 'dependencies': {'$ref': '#/definitions/file-directive'}, 'entry-points': {'$ref': '#/definitions/file-directive'}, 'optional-dependencies': {'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$ref': '#/definitions/file-directive'}}}, 'readme': {'anyOf': [{'$ref': '#/definitions/file-directive'}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}}}}, 'definitions': {'file-directive': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'attr-directive': {'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, 'find-directive': {'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. 
``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}}}}}}, 'project': {'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-project-metadata/', 'title': 'Package metadata stored in the ``project`` table', '$$description': ['Data structure for the **project** table inside ``pyproject.toml``', '(as initially defined in :pep:`621`)'], 'type': 'object', 'properties': {'name': {'type': 'string', 'description': 'The name (primary identifier) of the project. MUST be statically defined.', 'format': 'pep508-identifier'}, 'version': {'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, 'description': {'type': 'string', '$$description': ['The `summary description of the project', '`_']}, 'readme': {'$$description': ['`Full/detailed description of the project in the form of a README', '`_', "with meaning similar to the one defined in `core metadata's Description", '`_'], 'oneOf': [{'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, {'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}]}, 'requires-python': {'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, 'license': {'description': '`Project license `_.', 'oneOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}]}, 'authors': {'type': 'array', 'items': {'$ref': '#/definitions/author'}, '$$description': ["The people or organizations considered to be the 'authors' of the project.", 'The exact meaning is open to interpretation (e.g. 
original or primary authors,', 'current maintainers, or owners of the package).']}, 'maintainers': {'type': 'array', 'items': {'$ref': '#/definitions/author'}, '$$description': ["The people or organizations considered to be the 'maintainers' of the project.", 'Similarly to ``authors``, the exact meaning is open to interpretation.']}, 'keywords': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of keywords to assist searching for the distribution in a larger catalog.'}, 'classifiers': {'type': 'array', 'items': {'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, '$$description': ['`Trove classifiers `_', 'which apply to the project.']}, 'urls': {'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, 'scripts': {'$ref': '#/definitions/entry-point-group', '$$description': ['Instruct the installer to create command-line wrappers for the given', '`entry points `_.']}, 'gui-scripts': {'$ref': '#/definitions/entry-point-group', '$$description': ['Instruct the installer to create GUI wrappers for the given', '`entry points `_.', 'The difference between ``scripts`` and ``gui-scripts`` is only relevant in', 'Windows.']}, 'entry-points': {'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$ref': '#/definitions/entry-point-group'}}}, 'dependencies': {'type': 'array', 'description': 'Project (mandatory) dependencies.', 'items': {'$ref': '#/definitions/dependency'}}, 'optional-dependencies': {'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$ref': '#/definitions/dependency'}}}}, 'dynamic': {'type': 'array', '$$description': ['Specifies which fields are intentionally unspecified and expected to be', 'dynamically provided by build tools'], 'items': {'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}}}, 'required': ['name'], 'additionalProperties': False, 'if': {'not': {'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, '$$comment': ['According to :pep:`621`:', ' If the core metadata specification lists a field as "Required", then', ' the metadata MUST specify the field statically or list it in dynamic', 'In turn, `core metadata`_ defines:', ' The required fields are: Metadata-Version, Name, Version.', ' All the other fields are optional.', 'Since ``Metadata-Version`` is defined by the build back-end, ``name`` and', '``version`` are the only mandatory information in ``pyproject.toml``.', '.. 
_core metadata: https://packaging.python.org/specifications/core-metadata/']}, 'then': {'required': ['version'], '$$description': ['version should be statically defined in the ``version`` field']}, 'definitions': {'author': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, 'entry-point-group': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'dependency': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}}, rule='additionalProperties')
-    return data
-
-def validate_https___setuptools_pypa_io_en_latest_references_keywords_html(data, custom_formats={}, name_prefix=None):
-    if not isinstance(data, (dict)):
-        raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://setuptools.pypa.io/en/latest/references/keywords.html', 'title': '``tool.setuptools`` table', '$$description': ['Please notice for the time being the ``setuptools`` project does not specify', 'a way of configuring builds via ``pyproject.toml``.', 'Therefore this schema should be taken just as a *"thought experiment"* on how', 'this *might be done*, by following the principles established in', '`ini2toml `_.', 'It considers only ``setuptools`` `parameters', '`_', 'that can currently be configured via ``setup.cfg`` and are not covered by :pep:`621`', 'but intentionally excludes ``dependency_links`` and ``setup_requires``.', 'NOTE: ``scripts`` was renamed to ``script-files`` to avoid confusion with', 'entry-point based scripts (defined in :pep:`621`).'], 'type': 'object', 'additionalProperties': False, 'properties': {'platforms': {'type': 'array', 'items': {'type': 'string'}}, 'provides': {'$$description': ['Package and virtual package names contained within this package', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'obsoletes': {'$$description': ['Packages which this package renders obsolete', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'zip-safe': {'description': 'Whether the project can be safely installed and run from a zip file.', 'type': 'boolean'}, 'script-files': {'description': 'Legacy way of defining scripts (entry-points are preferred).', 'type': 'array', 'items': {'type': 'string'}, '$comment': 'TODO: is this field deprecated/should be 
removed?'}, 'eager-resources': {'$$description': ['Resources that should be extracted together, if any of them is needed,', 'or if any C extensions included in the project are imported.'], 'type': 'array', 'items': {'type': 'string'}}, 'packages': {'$$description': ['Packages that should be included in the distribution.', 'It can be given either as a list of package identifiers', 'or as a ``dict``-like structure with a single key ``find``', 'which corresponds to a dynamic call to', '``setuptools.config.expand.find_packages`` function.', 'The ``find`` key is associated with a nested ``dict``-like structure that can', 'contain ``where``, ``include``, ``exclude`` and ``namespaces`` keys,', 'mimicking the keyword arguments of the associated function.'], 'oneOf': [{'title': 'Array of Python package identifiers', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}}, {'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}]}, 'package-dir': {'$$description': [':class:`dict`-like structure mapping from package names to directories where their', 'code can be found.', 'The empty string (as key) means that all packages are contained inside', 'the given directory will be included in the distribution.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, 'patternProperties': {'^.*$': {'type': 'string'}}}, 'package-data': {'$$description': ['Mapping from package names to lists of glob patterns.', 'Usually this option is not needed when using ``include-package-data = true``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'include-package-data': {'$$description': ['Automatically include any data files inside the package directories', 'that are specified by ``MANIFEST.in``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'boolean'}, 'exclude-package-data': {'$$description': ['Mapping from package names to lists of glob patterns that should be excluded', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'namespace-packages': {'type': 'array', 
'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'https://setuptools.pypa.io/en/latest/userguide/package_discovery.html'}, 'py-modules': {'description': 'Modules that setuptools will manipulate', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'TODO: clarify the relationship with ``packages``'}, 'data-files': {'$$description': ['**DEPRECATED**: dict-like structure where each key represents a directory and', 'the value is a list of glob patterns that should be installed in them.', "Please notice this don't work with wheels. See `data files support", '`_'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'cmdclass': {'$$description': ['Mapping of distutils-style command names to ``setuptools.Command`` subclasses', 'which in turn should be represented by strings with a qualified class name', '(i.e., "dotted" form with module), e.g.::\n\n', ' cmdclass = {mycmd = "pkg.subpkg.module.CommandClass"}\n\n', 'The command class should be a directly defined at the top-level of the', 'containing module (no class nesting).'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'string', 'format': 'python-qualified-identifier'}}}, 'license-files': {'type': 'array', 'items': {'type': 'string'}, '$$description': ['PROVISIONAL: List of glob patterns for all license files being distributed.', '(might become standard with PEP 639).'], 'default': ['LICEN[CS]E*', ' COPYING*', ' NOTICE*', 'AUTHORS*'], '$comment': 'TODO: revise if PEP 639 is accepted. Probably ``project.license-files``?'}, 'dynamic': {'type': 'object', 'description': 'Instructions for loading :pep:`621`-related metadata dynamically', 'additionalProperties': False, 'properties': {'version': {'$$description': ['A version dynamically loaded via either the ``attr:`` or ``file:``', 'directives. Please make sure the given file or attribute respects :pep:`440`.'], 'oneOf': [{'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. 
Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}]}, 'classifiers': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'description': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'dependencies': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'entry-points': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'optional-dependencies': {'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}}}, 'readme': {'anyOf': [{'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}}}}, 'definitions': {'file-directive': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'attr-directive': {'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. 
Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, 'find-directive': {'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}}}, rule='type')
-    data_is_dict = isinstance(data, dict)
-    if data_is_dict:
-        data_keys = set(data.keys())
-        if "platforms" in data_keys:
-            data_keys.remove("platforms")
-            data__platforms = data["platforms"]
-            if not isinstance(data__platforms, (list, tuple)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".platforms must be array", value=data__platforms, name="" + (name_prefix or "data") + ".platforms", definition={'type': 'array', 'items': {'type': 'string'}}, rule='type')
-            data__platforms_is_list = isinstance(data__platforms, (list, tuple))
-            if data__platforms_is_list:
-                data__platforms_len = len(data__platforms)
-                for data__platforms_x, data__platforms_item in enumerate(data__platforms):
-                    if not isinstance(data__platforms_item, (str)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".platforms[{data__platforms_x}]".format(**locals()) + " must be string", value=data__platforms_item, name="" + (name_prefix or "data") + ".platforms[{data__platforms_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type')
-        if "provides" in data_keys:
-            data_keys.remove("provides")
-            data__provides = data["provides"]
-            if not isinstance(data__provides, (list, tuple)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".provides must be array", value=data__provides, name="" + (name_prefix or "data") + ".provides", definition={'$$description': ['Package and virtual package names contained within this package', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, rule='type')
-            data__provides_is_list = isinstance(data__provides, (list, tuple))
-            if data__provides_is_list:
-                data__provides_len = len(data__provides)
-                for data__provides_x, data__provides_item in enumerate(data__provides):
-                    if not isinstance(data__provides_item, (str)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".provides[{data__provides_x}]".format(**locals()) + " must be string", value=data__provides_item, name="" + (name_prefix or "data") + ".provides[{data__provides_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'pep508-identifier'}, rule='type')
-                    if isinstance(data__provides_item, str):
-                        if not custom_formats["pep508-identifier"](data__provides_item):
-                            raise JsonSchemaValueException("" + (name_prefix or "data") + ".provides[{data__provides_x}]".format(**locals()) + " must be pep508-identifier", value=data__provides_item, name="" + (name_prefix or "data") + ".provides[{data__provides_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'pep508-identifier'}, rule='format')
-        if "obsoletes" in data_keys:
-            data_keys.remove("obsoletes")
-            data__obsoletes = data["obsoletes"]
-            if not isinstance(data__obsoletes, (list, tuple)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".obsoletes must be array", value=data__obsoletes, name="" + (name_prefix or "data") + ".obsoletes", definition={'$$description': ['Packages which this package renders obsolete', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, rule='type')
-            data__obsoletes_is_list = isinstance(data__obsoletes, (list, tuple))
-            if data__obsoletes_is_list:
-                data__obsoletes_len = len(data__obsoletes)
-                for data__obsoletes_x, data__obsoletes_item in enumerate(data__obsoletes):
-                    if not isinstance(data__obsoletes_item, (str)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".obsoletes[{data__obsoletes_x}]".format(**locals()) + " must be string", value=data__obsoletes_item, name="" + (name_prefix or "data") + ".obsoletes[{data__obsoletes_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'pep508-identifier'}, rule='type')
-                    if isinstance(data__obsoletes_item, str):
-                        if not custom_formats["pep508-identifier"](data__obsoletes_item):
-                            raise JsonSchemaValueException("" + (name_prefix or "data") + ".obsoletes[{data__obsoletes_x}]".format(**locals()) + " must be pep508-identifier", value=data__obsoletes_item, name="" + (name_prefix or "data") + ".obsoletes[{data__obsoletes_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'pep508-identifier'}, rule='format')
-        if "zip-safe" in data_keys:
-            data_keys.remove("zip-safe")
-            data__zipsafe = data["zip-safe"]
-            if not isinstance(data__zipsafe, (bool)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".zip-safe must be boolean", value=data__zipsafe, name="" + (name_prefix or "data") + ".zip-safe", definition={'description': 'Whether the project can be safely installed and run from a zip file.', 'type': 'boolean'}, rule='type')
-        if "script-files" in data_keys:
-            data_keys.remove("script-files")
-            data__scriptfiles = data["script-files"]
-            if not isinstance(data__scriptfiles, (list, tuple)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".script-files must be array", value=data__scriptfiles, name="" + (name_prefix or "data") + ".script-files", definition={'description': 'Legacy way of defining scripts (entry-points are preferred).', 'type': 'array', 'items': {'type': 'string'}, '$comment': 'TODO: is this field deprecated/should be removed?'}, rule='type')
-            data__scriptfiles_is_list = isinstance(data__scriptfiles, (list, tuple))
-            if data__scriptfiles_is_list:
-                data__scriptfiles_len = len(data__scriptfiles)
-                for data__scriptfiles_x, data__scriptfiles_item in enumerate(data__scriptfiles):
-                    if not isinstance(data__scriptfiles_item, (str)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".script-files[{data__scriptfiles_x}]".format(**locals()) + " must be string", value=data__scriptfiles_item, name="" + (name_prefix or "data") + ".script-files[{data__scriptfiles_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type')
-        if "eager-resources" in data_keys:
-            data_keys.remove("eager-resources")
-            data__eagerresources = data["eager-resources"]
-            if not isinstance(data__eagerresources, (list, tuple)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".eager-resources must be array", value=data__eagerresources, name="" + (name_prefix or "data") + ".eager-resources", definition={'$$description': ['Resources that should be extracted together, if any of them is needed,', 'or if any C extensions included in the project are imported.'], 'type': 'array', 'items': {'type': 'string'}}, rule='type')
-            data__eagerresources_is_list = isinstance(data__eagerresources, (list, tuple))
-            if data__eagerresources_is_list:
-                data__eagerresources_len = len(data__eagerresources)
-                for data__eagerresources_x, data__eagerresources_item in enumerate(data__eagerresources):
-                    if not isinstance(data__eagerresources_item, (str)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".eager-resources[{data__eagerresources_x}]".format(**locals()) + " must be string", value=data__eagerresources_item, name="" + (name_prefix or "data") + ".eager-resources[{data__eagerresources_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type')
-        if "packages" in data_keys:
-            data_keys.remove("packages")
-            data__packages = data["packages"]
-            data__packages_one_of_count1 = 0
-            if data__packages_one_of_count1 < 2:
-                try:
-                    if not isinstance(data__packages, (list, tuple)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".packages must be array", value=data__packages, name="" + (name_prefix or "data") + ".packages", definition={'title': 'Array of Python package identifiers', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}}, rule='type')
-                    data__packages_is_list = isinstance(data__packages, (list, tuple))
-                    if data__packages_is_list:
-                        data__packages_len = len(data__packages)
-                        for data__packages_x, data__packages_item in enumerate(data__packages):
-                            if not isinstance(data__packages_item, (str)):
-                                raise JsonSchemaValueException("" + (name_prefix or "data") + ".packages[{data__packages_x}]".format(**locals()) + " must be string", value=data__packages_item, name="" + (name_prefix or "data") + ".packages[{data__packages_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'python-module-name'}, rule='type')
-                            if isinstance(data__packages_item, str):
-                                if not custom_formats["python-module-name"](data__packages_item):
-                                    raise JsonSchemaValueException("" + (name_prefix or "data") + ".packages[{data__packages_x}]".format(**locals()) + " must be python-module-name", value=data__packages_item, name="" + (name_prefix or "data") + ".packages[{data__packages_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'python-module-name'}, rule='format')
-                    data__packages_one_of_count1 += 1
-                except JsonSchemaValueException: pass
-            if data__packages_one_of_count1 < 2:
-                try:
-                    validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_find_directive(data__packages, custom_formats, (name_prefix or "data") + ".packages")
-                    data__packages_one_of_count1 += 1
-                except JsonSchemaValueException: pass
-            if data__packages_one_of_count1 != 1:
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".packages must be valid exactly by one definition" + (" (" + str(data__packages_one_of_count1) + " matches found)"), value=data__packages, name="" + (name_prefix or "data") + ".packages", definition={'$$description': ['Packages that should be included in the distribution.', 'It 
can be given either as a list of package identifiers', 'or as a ``dict``-like structure with a single key ``find``', 'which corresponds to a dynamic call to', '``setuptools.config.expand.find_packages`` function.', 'The ``find`` key is associated with a nested ``dict``-like structure that can', 'contain ``where``, ``include``, ``exclude`` and ``namespaces`` keys,', 'mimicking the keyword arguments of the associated function.'], 'oneOf': [{'title': 'Array of Python package identifiers', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}}, {'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}]}, rule='oneOf') - if "package-dir" in data_keys: - data_keys.remove("package-dir") - data__packagedir = data["package-dir"] - if not isinstance(data__packagedir, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-dir must be object", value=data__packagedir, name="" + (name_prefix or "data") + ".package-dir", definition={'$$description': [':class:`dict`-like structure mapping from package names to directories where their', 'code can be found.', 'The empty string (as key) means that all packages are contained inside', 'the given directory will be included in the distribution.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, 'patternProperties': {'^.*$': {'type': 'string'}}}, rule='type') - data__packagedir_is_dict = isinstance(data__packagedir, dict) - if data__packagedir_is_dict: - data__packagedir_keys = set(data__packagedir.keys()) - for data__packagedir_key, data__packagedir_val in data__packagedir.items(): - if REGEX_PATTERNS['^.*$'].search(data__packagedir_key): - if data__packagedir_key in data__packagedir_keys: - data__packagedir_keys.remove(data__packagedir_key) - if not isinstance(data__packagedir_val, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-dir.{data__packagedir_key}".format(**locals()) + " must be string", value=data__packagedir_val, name="" + (name_prefix or "data") + ".package-dir.{data__packagedir_key}".format(**locals()) + "", definition={'type': 'string'}, rule='type') - if data__packagedir_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-dir must not contain "+str(data__packagedir_keys)+" properties", value=data__packagedir, name="" + (name_prefix or "data") + ".package-dir", definition={'$$description': [':class:`dict`-like structure mapping from package names to directories where their', 'code can be found.', 'The empty string (as key) means that all 
packages are contained inside', 'the given directory will be included in the distribution.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, 'patternProperties': {'^.*$': {'type': 'string'}}}, rule='additionalProperties') - data__packagedir_len = len(data__packagedir) - if data__packagedir_len != 0: - data__packagedir_property_names = True - for data__packagedir_key in data__packagedir: - try: - data__packagedir_key_one_of_count2 = 0 - if data__packagedir_key_one_of_count2 < 2: - try: - if isinstance(data__packagedir_key, str): - if not custom_formats["python-module-name"](data__packagedir_key): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-dir must be python-module-name", value=data__packagedir_key, name="" + (name_prefix or "data") + ".package-dir", definition={'format': 'python-module-name'}, rule='format') - data__packagedir_key_one_of_count2 += 1 - except JsonSchemaValueException: pass - if data__packagedir_key_one_of_count2 < 2: - try: - if data__packagedir_key != "": - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-dir must be same as const definition: ", value=data__packagedir_key, name="" + (name_prefix or "data") + ".package-dir", definition={'const': ''}, rule='const') - data__packagedir_key_one_of_count2 += 1 - except JsonSchemaValueException: pass - if data__packagedir_key_one_of_count2 != 1: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-dir must be valid exactly by one definition" + (" (" + str(data__packagedir_key_one_of_count2) + " matches found)"), value=data__packagedir_key, name="" + (name_prefix or "data") + ".package-dir", definition={'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, rule='oneOf') - except JsonSchemaValueException: - data__packagedir_property_names = False - if not data__packagedir_property_names: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-dir must be named by propertyName definition", value=data__packagedir, name="" + (name_prefix or "data") + ".package-dir", definition={'$$description': [':class:`dict`-like structure mapping from package names to directories where their', 'code can be found.', 'The empty string (as key) means that all packages are contained inside', 'the given directory will be included in the distribution.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, 'patternProperties': {'^.*$': {'type': 'string'}}}, rule='propertyNames') - if "package-data" in data_keys: - data_keys.remove("package-data") - data__packagedata = data["package-data"] - if not isinstance(data__packagedata, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-data must be object", value=data__packagedata, name="" + (name_prefix or "data") + ".package-data", definition={'$$description': ['Mapping from package names to lists of glob patterns.', 'Usually this option is not needed when using ``include-package-data = true``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, rule='type') - data__packagedata_is_dict = isinstance(data__packagedata, dict) - if data__packagedata_is_dict: - data__packagedata_keys = 
set(data__packagedata.keys()) - for data__packagedata_key, data__packagedata_val in data__packagedata.items(): - if REGEX_PATTERNS['^.*$'].search(data__packagedata_key): - if data__packagedata_key in data__packagedata_keys: - data__packagedata_keys.remove(data__packagedata_key) - if not isinstance(data__packagedata_val, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-data.{data__packagedata_key}".format(**locals()) + " must be array", value=data__packagedata_val, name="" + (name_prefix or "data") + ".package-data.{data__packagedata_key}".format(**locals()) + "", definition={'type': 'array', 'items': {'type': 'string'}}, rule='type') - data__packagedata_val_is_list = isinstance(data__packagedata_val, (list, tuple)) - if data__packagedata_val_is_list: - data__packagedata_val_len = len(data__packagedata_val) - for data__packagedata_val_x, data__packagedata_val_item in enumerate(data__packagedata_val): - if not isinstance(data__packagedata_val_item, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-data.{data__packagedata_key}[{data__packagedata_val_x}]".format(**locals()) + " must be string", value=data__packagedata_val_item, name="" + (name_prefix or "data") + ".package-data.{data__packagedata_key}[{data__packagedata_val_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type') - if data__packagedata_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-data must not contain "+str(data__packagedata_keys)+" properties", value=data__packagedata, name="" + (name_prefix or "data") + ".package-data", definition={'$$description': ['Mapping from package names to lists of glob patterns.', 'Usually this option is not needed when using ``include-package-data = true``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, rule='additionalProperties') - data__packagedata_len = len(data__packagedata) - if data__packagedata_len != 0: - data__packagedata_property_names = True - for data__packagedata_key in data__packagedata: - try: - data__packagedata_key_one_of_count3 = 0 - if data__packagedata_key_one_of_count3 < 2: - try: - if isinstance(data__packagedata_key, str): - if not custom_formats["python-module-name"](data__packagedata_key): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-data must be python-module-name", value=data__packagedata_key, name="" + (name_prefix or "data") + ".package-data", definition={'format': 'python-module-name'}, rule='format') - data__packagedata_key_one_of_count3 += 1 - except JsonSchemaValueException: pass - if data__packagedata_key_one_of_count3 < 2: - try: - if data__packagedata_key != "*": - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-data must be same as const definition: *", value=data__packagedata_key, name="" + (name_prefix or "data") + ".package-data", definition={'const': '*'}, rule='const') - data__packagedata_key_one_of_count3 += 1 - except JsonSchemaValueException: pass - if data__packagedata_key_one_of_count3 != 1: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-data must be valid exactly by one definition" + (" (" + str(data__packagedata_key_one_of_count3) + " matches found)"), value=data__packagedata_key, name="" + 
(name_prefix or "data") + ".package-data", definition={'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, rule='oneOf') - except JsonSchemaValueException: - data__packagedata_property_names = False - if not data__packagedata_property_names: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".package-data must be named by propertyName definition", value=data__packagedata, name="" + (name_prefix or "data") + ".package-data", definition={'$$description': ['Mapping from package names to lists of glob patterns.', 'Usually this option is not needed when using ``include-package-data = true``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, rule='propertyNames') - if "include-package-data" in data_keys: - data_keys.remove("include-package-data") - data__includepackagedata = data["include-package-data"] - if not isinstance(data__includepackagedata, (bool)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".include-package-data must be boolean", value=data__includepackagedata, name="" + (name_prefix or "data") + ".include-package-data", definition={'$$description': ['Automatically include any data files inside the package directories', 'that are specified by ``MANIFEST.in``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'boolean'}, rule='type') - if "exclude-package-data" in data_keys: - data_keys.remove("exclude-package-data") - data__excludepackagedata = data["exclude-package-data"] - if not isinstance(data__excludepackagedata, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".exclude-package-data must be object", value=data__excludepackagedata, name="" + (name_prefix or "data") + ".exclude-package-data", definition={'$$description': ['Mapping from package names to lists of glob patterns that should be excluded', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, rule='type') - data__excludepackagedata_is_dict = isinstance(data__excludepackagedata, dict) - if data__excludepackagedata_is_dict: - data__excludepackagedata_keys = set(data__excludepackagedata.keys()) - for data__excludepackagedata_key, data__excludepackagedata_val in data__excludepackagedata.items(): - if REGEX_PATTERNS['^.*$'].search(data__excludepackagedata_key): - if data__excludepackagedata_key in data__excludepackagedata_keys: - data__excludepackagedata_keys.remove(data__excludepackagedata_key) - if not isinstance(data__excludepackagedata_val, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".exclude-package-data.{data__excludepackagedata_key}".format(**locals()) + " must be array", value=data__excludepackagedata_val, name="" + (name_prefix or "data") + ".exclude-package-data.{data__excludepackagedata_key}".format(**locals()) + "", definition={'type': 'array', 'items': {'type': 'string'}}, rule='type') - data__excludepackagedata_val_is_list = isinstance(data__excludepackagedata_val, (list, tuple)) - if data__excludepackagedata_val_is_list: - data__excludepackagedata_val_len = 
len(data__excludepackagedata_val) - for data__excludepackagedata_val_x, data__excludepackagedata_val_item in enumerate(data__excludepackagedata_val): - if not isinstance(data__excludepackagedata_val_item, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".exclude-package-data.{data__excludepackagedata_key}[{data__excludepackagedata_val_x}]".format(**locals()) + " must be string", value=data__excludepackagedata_val_item, name="" + (name_prefix or "data") + ".exclude-package-data.{data__excludepackagedata_key}[{data__excludepackagedata_val_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type') - if data__excludepackagedata_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".exclude-package-data must not contain "+str(data__excludepackagedata_keys)+" properties", value=data__excludepackagedata, name="" + (name_prefix or "data") + ".exclude-package-data", definition={'$$description': ['Mapping from package names to lists of glob patterns that should be excluded', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, rule='additionalProperties') - data__excludepackagedata_len = len(data__excludepackagedata) - if data__excludepackagedata_len != 0: - data__excludepackagedata_property_names = True - for data__excludepackagedata_key in data__excludepackagedata: - try: - data__excludepackagedata_key_one_of_count4 = 0 - if data__excludepackagedata_key_one_of_count4 < 2: - try: - if isinstance(data__excludepackagedata_key, str): - if not custom_formats["python-module-name"](data__excludepackagedata_key): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".exclude-package-data must be python-module-name", value=data__excludepackagedata_key, name="" + (name_prefix or "data") + ".exclude-package-data", definition={'format': 'python-module-name'}, rule='format') - data__excludepackagedata_key_one_of_count4 += 1 - except JsonSchemaValueException: pass - if data__excludepackagedata_key_one_of_count4 < 2: - try: - if data__excludepackagedata_key != "*": - raise JsonSchemaValueException("" + (name_prefix or "data") + ".exclude-package-data must be same as const definition: *", value=data__excludepackagedata_key, name="" + (name_prefix or "data") + ".exclude-package-data", definition={'const': '*'}, rule='const') - data__excludepackagedata_key_one_of_count4 += 1 - except JsonSchemaValueException: pass - if data__excludepackagedata_key_one_of_count4 != 1: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".exclude-package-data must be valid exactly by one definition" + (" (" + str(data__excludepackagedata_key_one_of_count4) + " matches found)"), value=data__excludepackagedata_key, name="" + (name_prefix or "data") + ".exclude-package-data", definition={'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, rule='oneOf') - except JsonSchemaValueException: - data__excludepackagedata_property_names = False - if not data__excludepackagedata_property_names: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".exclude-package-data must be named by propertyName definition", value=data__excludepackagedata, name="" + (name_prefix or "data") + ".exclude-package-data", definition={'$$description': ['Mapping from package names to lists of glob patterns that should be 
excluded', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, rule='propertyNames') - if "namespace-packages" in data_keys: - data_keys.remove("namespace-packages") - data__namespacepackages = data["namespace-packages"] - if not isinstance(data__namespacepackages, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".namespace-packages must be array", value=data__namespacepackages, name="" + (name_prefix or "data") + ".namespace-packages", definition={'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'https://setuptools.pypa.io/en/latest/userguide/package_discovery.html'}, rule='type') - data__namespacepackages_is_list = isinstance(data__namespacepackages, (list, tuple)) - if data__namespacepackages_is_list: - data__namespacepackages_len = len(data__namespacepackages) - for data__namespacepackages_x, data__namespacepackages_item in enumerate(data__namespacepackages): - if not isinstance(data__namespacepackages_item, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".namespace-packages[{data__namespacepackages_x}]".format(**locals()) + " must be string", value=data__namespacepackages_item, name="" + (name_prefix or "data") + ".namespace-packages[{data__namespacepackages_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'python-module-name'}, rule='type') - if isinstance(data__namespacepackages_item, str): - if not custom_formats["python-module-name"](data__namespacepackages_item): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".namespace-packages[{data__namespacepackages_x}]".format(**locals()) + " must be python-module-name", value=data__namespacepackages_item, name="" + (name_prefix or "data") + ".namespace-packages[{data__namespacepackages_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'python-module-name'}, rule='format') - if "py-modules" in data_keys: - data_keys.remove("py-modules") - data__pymodules = data["py-modules"] - if not isinstance(data__pymodules, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".py-modules must be array", value=data__pymodules, name="" + (name_prefix or "data") + ".py-modules", definition={'description': 'Modules that setuptools will manipulate', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'TODO: clarify the relationship with ``packages``'}, rule='type') - data__pymodules_is_list = isinstance(data__pymodules, (list, tuple)) - if data__pymodules_is_list: - data__pymodules_len = len(data__pymodules) - for data__pymodules_x, data__pymodules_item in enumerate(data__pymodules): - if not isinstance(data__pymodules_item, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".py-modules[{data__pymodules_x}]".format(**locals()) + " must be string", value=data__pymodules_item, name="" + (name_prefix or "data") + ".py-modules[{data__pymodules_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'python-module-name'}, rule='type') - if isinstance(data__pymodules_item, str): - if not custom_formats["python-module-name"](data__pymodules_item): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".py-modules[{data__pymodules_x}]".format(**locals()) + " 
must be python-module-name", value=data__pymodules_item, name="" + (name_prefix or "data") + ".py-modules[{data__pymodules_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'python-module-name'}, rule='format') - if "data-files" in data_keys: - data_keys.remove("data-files") - data__datafiles = data["data-files"] - if not isinstance(data__datafiles, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".data-files must be object", value=data__datafiles, name="" + (name_prefix or "data") + ".data-files", definition={'$$description': ['**DEPRECATED**: dict-like structure where each key represents a directory and', 'the value is a list of glob patterns that should be installed in them.', "Please notice this don't work with wheels. See `data files support", '`_'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, rule='type') - data__datafiles_is_dict = isinstance(data__datafiles, dict) - if data__datafiles_is_dict: - data__datafiles_keys = set(data__datafiles.keys()) - for data__datafiles_key, data__datafiles_val in data__datafiles.items(): - if REGEX_PATTERNS['^.*$'].search(data__datafiles_key): - if data__datafiles_key in data__datafiles_keys: - data__datafiles_keys.remove(data__datafiles_key) - if not isinstance(data__datafiles_val, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".data-files.{data__datafiles_key}".format(**locals()) + " must be array", value=data__datafiles_val, name="" + (name_prefix or "data") + ".data-files.{data__datafiles_key}".format(**locals()) + "", definition={'type': 'array', 'items': {'type': 'string'}}, rule='type') - data__datafiles_val_is_list = isinstance(data__datafiles_val, (list, tuple)) - if data__datafiles_val_is_list: - data__datafiles_val_len = len(data__datafiles_val) - for data__datafiles_val_x, data__datafiles_val_item in enumerate(data__datafiles_val): - if not isinstance(data__datafiles_val_item, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".data-files.{data__datafiles_key}[{data__datafiles_val_x}]".format(**locals()) + " must be string", value=data__datafiles_val_item, name="" + (name_prefix or "data") + ".data-files.{data__datafiles_key}[{data__datafiles_val_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type') - if "cmdclass" in data_keys: - data_keys.remove("cmdclass") - data__cmdclass = data["cmdclass"] - if not isinstance(data__cmdclass, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".cmdclass must be object", value=data__cmdclass, name="" + (name_prefix or "data") + ".cmdclass", definition={'$$description': ['Mapping of distutils-style command names to ``setuptools.Command`` subclasses', 'which in turn should be represented by strings with a qualified class name', '(i.e., "dotted" form with module), e.g.::\n\n', ' cmdclass = {mycmd = "pkg.subpkg.module.CommandClass"}\n\n', 'The command class should be a directly defined at the top-level of the', 'containing module (no class nesting).'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'string', 'format': 'python-qualified-identifier'}}}, rule='type') - data__cmdclass_is_dict = isinstance(data__cmdclass, dict) - if data__cmdclass_is_dict: - data__cmdclass_keys = set(data__cmdclass.keys()) - for data__cmdclass_key, data__cmdclass_val in data__cmdclass.items(): - if REGEX_PATTERNS['^.*$'].search(data__cmdclass_key): - if data__cmdclass_key in data__cmdclass_keys: - 
data__cmdclass_keys.remove(data__cmdclass_key) - if not isinstance(data__cmdclass_val, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".cmdclass.{data__cmdclass_key}".format(**locals()) + " must be string", value=data__cmdclass_val, name="" + (name_prefix or "data") + ".cmdclass.{data__cmdclass_key}".format(**locals()) + "", definition={'type': 'string', 'format': 'python-qualified-identifier'}, rule='type') - if isinstance(data__cmdclass_val, str): - if not custom_formats["python-qualified-identifier"](data__cmdclass_val): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".cmdclass.{data__cmdclass_key}".format(**locals()) + " must be python-qualified-identifier", value=data__cmdclass_val, name="" + (name_prefix or "data") + ".cmdclass.{data__cmdclass_key}".format(**locals()) + "", definition={'type': 'string', 'format': 'python-qualified-identifier'}, rule='format') - if "license-files" in data_keys: - data_keys.remove("license-files") - data__licensefiles = data["license-files"] - if not isinstance(data__licensefiles, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".license-files must be array", value=data__licensefiles, name="" + (name_prefix or "data") + ".license-files", definition={'type': 'array', 'items': {'type': 'string'}, '$$description': ['PROVISIONAL: List of glob patterns for all license files being distributed.', '(might become standard with PEP 639).'], 'default': ['LICEN[CS]E*', ' COPYING*', ' NOTICE*', 'AUTHORS*'], '$comment': 'TODO: revise if PEP 639 is accepted. Probably ``project.license-files``?'}, rule='type') - data__licensefiles_is_list = isinstance(data__licensefiles, (list, tuple)) - if data__licensefiles_is_list: - data__licensefiles_len = len(data__licensefiles) - for data__licensefiles_x, data__licensefiles_item in enumerate(data__licensefiles): - if not isinstance(data__licensefiles_item, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".license-files[{data__licensefiles_x}]".format(**locals()) + " must be string", value=data__licensefiles_item, name="" + (name_prefix or "data") + ".license-files[{data__licensefiles_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type') - else: data["license-files"] = ['LICEN[CS]E*', ' COPYING*', ' NOTICE*', 'AUTHORS*'] - if "dynamic" in data_keys: - data_keys.remove("dynamic") - data__dynamic = data["dynamic"] - if not isinstance(data__dynamic, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic must be object", value=data__dynamic, name="" + (name_prefix or "data") + ".dynamic", definition={'type': 'object', 'description': 'Instructions for loading :pep:`621`-related metadata dynamically', 'additionalProperties': False, 'properties': {'version': {'$$description': ['A version dynamically loaded via either the ``attr:`` or ``file:``', 'directives. Please make sure the given file or attribute respects :pep:`440`.'], 'oneOf': [{'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. 
Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}]}, 'classifiers': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'description': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'dependencies': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'entry-points': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'optional-dependencies': {'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}}}, 'readme': {'anyOf': [{'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}}}, rule='type') - data__dynamic_is_dict = isinstance(data__dynamic, dict) - if data__dynamic_is_dict: - data__dynamic_keys = set(data__dynamic.keys()) - if "version" in data__dynamic_keys: - data__dynamic_keys.remove("version") - data__dynamic__version = data__dynamic["version"] - data__dynamic__version_one_of_count5 = 0 - if data__dynamic__version_one_of_count5 < 2: - try: - validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_attr_directive(data__dynamic__version, custom_formats, (name_prefix or "data") + ".dynamic.version") - data__dynamic__version_one_of_count5 += 1 - except JsonSchemaValueException: pass - if data__dynamic__version_one_of_count5 < 2: - try: - 
validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_file_directive(data__dynamic__version, custom_formats, (name_prefix or "data") + ".dynamic.version") - data__dynamic__version_one_of_count5 += 1 - except JsonSchemaValueException: pass - if data__dynamic__version_one_of_count5 != 1: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic.version must be valid exactly by one definition" + (" (" + str(data__dynamic__version_one_of_count5) + " matches found)"), value=data__dynamic__version, name="" + (name_prefix or "data") + ".dynamic.version", definition={'$$description': ['A version dynamically loaded via either the ``attr:`` or ``file:``', 'directives. Please make sure the given file or attribute respects :pep:`440`.'], 'oneOf': [{'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}]}, rule='oneOf') - if "classifiers" in data__dynamic_keys: - data__dynamic_keys.remove("classifiers") - data__dynamic__classifiers = data__dynamic["classifiers"] - validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_file_directive(data__dynamic__classifiers, custom_formats, (name_prefix or "data") + ".dynamic.classifiers") - if "description" in data__dynamic_keys: - data__dynamic_keys.remove("description") - data__dynamic__description = data__dynamic["description"] - validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_file_directive(data__dynamic__description, custom_formats, (name_prefix or "data") + ".dynamic.description") - if "dependencies" in data__dynamic_keys: - data__dynamic_keys.remove("dependencies") - data__dynamic__dependencies = data__dynamic["dependencies"] - validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_file_directive(data__dynamic__dependencies, custom_formats, (name_prefix or "data") + ".dynamic.dependencies") - if "entry-points" in data__dynamic_keys: - data__dynamic_keys.remove("entry-points") - data__dynamic__entrypoints = data__dynamic["entry-points"] - validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_file_directive(data__dynamic__entrypoints, custom_formats, (name_prefix or "data") + ".dynamic.entry-points") - if "optional-dependencies" in data__dynamic_keys: - data__dynamic_keys.remove("optional-dependencies") - data__dynamic__optionaldependencies = data__dynamic["optional-dependencies"] - if not isinstance(data__dynamic__optionaldependencies, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic.optional-dependencies must be object", value=data__dynamic__optionaldependencies, name="" + (name_prefix or "data") + ".dynamic.optional-dependencies", definition={'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from 
a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}}}, rule='type') - data__dynamic__optionaldependencies_is_dict = isinstance(data__dynamic__optionaldependencies, dict) - if data__dynamic__optionaldependencies_is_dict: - data__dynamic__optionaldependencies_keys = set(data__dynamic__optionaldependencies.keys()) - for data__dynamic__optionaldependencies_key, data__dynamic__optionaldependencies_val in data__dynamic__optionaldependencies.items(): - if REGEX_PATTERNS['.+'].search(data__dynamic__optionaldependencies_key): - if data__dynamic__optionaldependencies_key in data__dynamic__optionaldependencies_keys: - data__dynamic__optionaldependencies_keys.remove(data__dynamic__optionaldependencies_key) - validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_file_directive(data__dynamic__optionaldependencies_val, custom_formats, (name_prefix or "data") + ".dynamic.optional-dependencies.{data__dynamic__optionaldependencies_key}") - if data__dynamic__optionaldependencies_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic.optional-dependencies must not contain "+str(data__dynamic__optionaldependencies_keys)+" properties", value=data__dynamic__optionaldependencies, name="" + (name_prefix or "data") + ".dynamic.optional-dependencies", definition={'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}}}, rule='additionalProperties') - data__dynamic__optionaldependencies_len = len(data__dynamic__optionaldependencies) - if data__dynamic__optionaldependencies_len != 0: - data__dynamic__optionaldependencies_property_names = True - for data__dynamic__optionaldependencies_key in data__dynamic__optionaldependencies: - try: - if isinstance(data__dynamic__optionaldependencies_key, str): - if not custom_formats["python-identifier"](data__dynamic__optionaldependencies_key): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic.optional-dependencies must be python-identifier", value=data__dynamic__optionaldependencies_key, name="" + (name_prefix or "data") + ".dynamic.optional-dependencies", definition={'format': 'python-identifier'}, rule='format') - except JsonSchemaValueException: - data__dynamic__optionaldependencies_property_names = False - if not data__dynamic__optionaldependencies_property_names: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic.optional-dependencies must be named by propertyName definition", value=data__dynamic__optionaldependencies, name="" + (name_prefix or "data") + ".dynamic.optional-dependencies", definition={'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': 
{'type': 'string'}}]}}, 'required': ['file']}}}, rule='propertyNames') - if "readme" in data__dynamic_keys: - data__dynamic_keys.remove("readme") - data__dynamic__readme = data__dynamic["readme"] - data__dynamic__readme_any_of_count6 = 0 - if not data__dynamic__readme_any_of_count6: - try: - validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_file_directive(data__dynamic__readme, custom_formats, (name_prefix or "data") + ".dynamic.readme") - data__dynamic__readme_any_of_count6 += 1 - except JsonSchemaValueException: pass - if not data__dynamic__readme_any_of_count6: - try: - data__dynamic__readme_is_dict = isinstance(data__dynamic__readme, dict) - if data__dynamic__readme_is_dict: - data__dynamic__readme_keys = set(data__dynamic__readme.keys()) - if "content-type" in data__dynamic__readme_keys: - data__dynamic__readme_keys.remove("content-type") - data__dynamic__readme__contenttype = data__dynamic__readme["content-type"] - if not isinstance(data__dynamic__readme__contenttype, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic.readme.content-type must be string", value=data__dynamic__readme__contenttype, name="" + (name_prefix or "data") + ".dynamic.readme.content-type", definition={'type': 'string'}, rule='type') - data__dynamic__readme_any_of_count6 += 1 - except JsonSchemaValueException: pass - if not data__dynamic__readme_any_of_count6: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic.readme cannot be validated by any definition", value=data__dynamic__readme, name="" + (name_prefix or "data") + ".dynamic.readme", definition={'anyOf': [{'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}, rule='anyOf') - data__dynamic__readme_is_dict = isinstance(data__dynamic__readme, dict) - if data__dynamic__readme_is_dict: - data__dynamic__readme_len = len(data__dynamic__readme) - if not all(prop in data__dynamic__readme for prop in ['file']): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic.readme must contain ['file'] properties", value=data__dynamic__readme, name="" + (name_prefix or "data") + ".dynamic.readme", definition={'anyOf': [{'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}, rule='required') - if data__dynamic_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic must not contain "+str(data__dynamic_keys)+" properties", value=data__dynamic, name="" + (name_prefix or "data") + ".dynamic", definition={'type': 'object', 'description': 'Instructions for loading :pep:`621`-related metadata dynamically', 'additionalProperties': False, 'properties': {'version': {'$$description': ['A version dynamically loaded via either the ``attr:`` or ``file:``', 'directives. 
Please make sure the given file or attribute respects :pep:`440`.'], 'oneOf': [{'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}]}, 'classifiers': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'description': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'dependencies': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'entry-points': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'optional-dependencies': {'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}}}, 'readme': {'anyOf': [{'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}}}, rule='additionalProperties') - if data_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + " must not contain "+str(data_keys)+" properties", value=data, name="" + (name_prefix or "data") + "", definition={'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://setuptools.pypa.io/en/latest/references/keywords.html', 'title': '``tool.setuptools`` table', '$$description': ['Please notice for the time being the ``setuptools`` project does not specify', 'a way of configuring 
builds via ``pyproject.toml``.', 'Therefore this schema should be taken just as a *"thought experiment"* on how', 'this *might be done*, by following the principles established in', '`ini2toml `_.', 'It considers only ``setuptools`` `parameters', '`_', 'that can currently be configured via ``setup.cfg`` and are not covered by :pep:`621`', 'but intentionally excludes ``dependency_links`` and ``setup_requires``.', 'NOTE: ``scripts`` was renamed to ``script-files`` to avoid confusion with', 'entry-point based scripts (defined in :pep:`621`).'], 'type': 'object', 'additionalProperties': False, 'properties': {'platforms': {'type': 'array', 'items': {'type': 'string'}}, 'provides': {'$$description': ['Package and virtual package names contained within this package', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'obsoletes': {'$$description': ['Packages which this package renders obsolete', '**(not supported by pip)**'], 'type': 'array', 'items': {'type': 'string', 'format': 'pep508-identifier'}}, 'zip-safe': {'description': 'Whether the project can be safely installed and run from a zip file.', 'type': 'boolean'}, 'script-files': {'description': 'Legacy way of defining scripts (entry-points are preferred).', 'type': 'array', 'items': {'type': 'string'}, '$comment': 'TODO: is this field deprecated/should be removed?'}, 'eager-resources': {'$$description': ['Resources that should be extracted together, if any of them is needed,', 'or if any C extensions included in the project are imported.'], 'type': 'array', 'items': {'type': 'string'}}, 'packages': {'$$description': ['Packages that should be included in the distribution.', 'It can be given either as a list of package identifiers', 'or as a ``dict``-like structure with a single key ``find``', 'which corresponds to a dynamic call to', '``setuptools.config.expand.find_packages`` function.', 'The ``find`` key is associated with a nested ``dict``-like structure that can', 'contain ``where``, ``include``, ``exclude`` and ``namespaces`` keys,', 'mimicking the keyword arguments of the associated function.'], 'oneOf': [{'title': 'Array of Python package identifiers', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}}, {'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. 
``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}]}, 'package-dir': {'$$description': [':class:`dict`-like structure mapping from package names to directories where their', 'code can be found.', 'The empty string (as key) means that all packages are contained inside', 'the given directory will be included in the distribution.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': ''}]}, 'patternProperties': {'^.*$': {'type': 'string'}}}, 'package-data': {'$$description': ['Mapping from package names to lists of glob patterns.', 'Usually this option is not needed when using ``include-package-data = true``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'include-package-data': {'$$description': ['Automatically include any data files inside the package directories', 'that are specified by ``MANIFEST.in``', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'boolean'}, 'exclude-package-data': {'$$description': ['Mapping from package names to lists of glob patterns that should be excluded', 'For more information on how to include data files, check ``setuptools`` `docs', '`_.'], 'type': 'object', 'additionalProperties': False, 'propertyNames': {'oneOf': [{'format': 'python-module-name'}, {'const': '*'}]}, 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'namespace-packages': {'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'https://setuptools.pypa.io/en/latest/userguide/package_discovery.html'}, 'py-modules': {'description': 'Modules that setuptools will manipulate', 'type': 'array', 'items': {'type': 'string', 'format': 'python-module-name'}, '$comment': 'TODO: clarify the relationship with ``packages``'}, 'data-files': {'$$description': ['**DEPRECATED**: dict-like structure where each key represents a directory and', 'the value is a list of glob patterns that should be installed in them.', "Please notice this don't work with wheels. See `data files support", '`_'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'array', 'items': {'type': 'string'}}}}, 'cmdclass': {'$$description': ['Mapping of distutils-style command names to ``setuptools.Command`` subclasses', 'which in turn should be represented by strings with a qualified class name', '(i.e., "dotted" form with module), e.g.::\n\n', ' cmdclass = {mycmd = "pkg.subpkg.module.CommandClass"}\n\n', 'The command class should be a directly defined at the top-level of the', 'containing module (no class nesting).'], 'type': 'object', 'patternProperties': {'^.*$': {'type': 'string', 'format': 'python-qualified-identifier'}}}, 'license-files': {'type': 'array', 'items': {'type': 'string'}, '$$description': ['PROVISIONAL: List of glob patterns for all license files being distributed.', '(might become standard with PEP 639).'], 'default': ['LICEN[CS]E*', ' COPYING*', ' NOTICE*', 'AUTHORS*'], '$comment': 'TODO: revise if PEP 639 is accepted. 
Probably ``project.license-files``?'}, 'dynamic': {'type': 'object', 'description': 'Instructions for loading :pep:`621`-related metadata dynamically', 'additionalProperties': False, 'properties': {'version': {'$$description': ['A version dynamically loaded via either the ``attr:`` or ``file:``', 'directives. Please make sure the given file or attribute respects :pep:`440`.'], 'oneOf': [{'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}]}, 'classifiers': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'description': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'dependencies': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'entry-points': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'optional-dependencies': {'type': 'object', 'propertyNames': {'format': 'python-identifier'}, 'additionalProperties': False, 'patternProperties': {'.+': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}}}, 'readme': {'anyOf': [{'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, {'properties': {'content-type': {'type': 'string'}}}], 'required': ['file']}}}}, 'definitions': {'file-directive': {'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 
'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, 'attr-directive': {'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, 'find-directive': {'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}}}, rule='additionalProperties')
-    return data
-
-def validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_file_directive(data, custom_formats={}, name_prefix=None):
-    if not isinstance(data, (dict)):
-        raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, rule='type')
-    data_is_dict = isinstance(data, dict)
-    if data_is_dict:
-        data_len = len(data)
-        if not all(prop in data for prop in ['file']):
-            raise JsonSchemaValueException("" + (name_prefix or "data") + " must contain ['file'] properties", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, rule='required')
-        data_keys = set(data.keys())
-        if "file" in data_keys:
-            data_keys.remove("file")
-            data__file = data["file"]
-            data__file_one_of_count7 = 0
-            if data__file_one_of_count7 < 2:
-                try:
-                    if not isinstance(data__file, (str)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".file must be string", value=data__file, name="" + (name_prefix or "data") + ".file", definition={'type': 'string'}, rule='type')
-                    data__file_one_of_count7 += 1
-                except JsonSchemaValueException: pass
-            if data__file_one_of_count7 < 2:
-                try:
-                    if not isinstance(data__file, (list, tuple)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".file must be array", value=data__file, name="" + (name_prefix or "data") + ".file", definition={'type': 'array', 'items': {'type': 'string'}}, rule='type')
-                    data__file_is_list = isinstance(data__file, (list, tuple))
-                    if data__file_is_list:
-                        data__file_len = len(data__file)
-                        for data__file_x, data__file_item in enumerate(data__file):
-                            if not isinstance(data__file_item, (str)):
-                                raise JsonSchemaValueException("" + (name_prefix or "data") + ".file[{data__file_x}]".format(**locals()) + " must be string", value=data__file_item, name="" + (name_prefix or "data") + ".file[{data__file_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type')
-                    data__file_one_of_count7 += 1
-                except JsonSchemaValueException: pass
-            if data__file_one_of_count7 != 1:
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".file must be valid exactly by one definition" + (" (" + str(data__file_one_of_count7) + " matches found)"), value=data__file, name="" + (name_prefix or "data") + ".file", definition={'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}, rule='oneOf')
-        if data_keys:
-            raise JsonSchemaValueException("" + (name_prefix or "data") + " must not contain "+str(data_keys)+" properties", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/file-directive', 'title': "'file:' directive", 'description': 'Value is read from a file (or list of files and then concatenated)', 'type': 'object', 'additionalProperties': False, 'properties': {'file': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}}, 'required': ['file']}, rule='additionalProperties')
-    return data
-
-def validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_attr_directive(data, custom_formats={}, name_prefix=None):
-    if not isinstance(data, (dict)):
-        raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, rule='type')
-    data_is_dict = isinstance(data, dict)
-    if data_is_dict:
-        data_len = len(data)
-        if not all(prop in data for prop in ['attr']):
-            raise JsonSchemaValueException("" + (name_prefix or "data") + " must contain ['attr'] properties", value=data, name="" + (name_prefix or "data") + "", definition={'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, rule='required')
-        data_keys = set(data.keys())
-        if "attr" in data_keys:
-            data_keys.remove("attr")
-            data__attr = data["attr"]
-            if not isinstance(data__attr, (str)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".attr must be string", value=data__attr, name="" + (name_prefix or "data") + ".attr", definition={'type': 'string'}, rule='type')
-        if data_keys:
-            raise JsonSchemaValueException("" + (name_prefix or "data") + " must not contain "+str(data_keys)+" properties", value=data, name="" + (name_prefix or "data") + "", definition={'title': "'attr:' directive", '$id': '#/definitions/attr-directive', '$$description': ['Value is read from a module attribute. Supports callables and iterables;', 'unsupported types are cast via ``str()``'], 'type': 'object', 'additionalProperties': False, 'properties': {'attr': {'type': 'string'}}, 'required': ['attr']}, rule='additionalProperties')
-    return data
-
-def validate_https___setuptools_pypa_io_en_latest_references_keywords_html__definitions_find_directive(data, custom_formats={}, name_prefix=None):
-    if not isinstance(data, (dict)):
-        raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}, rule='type')
-    data_is_dict = isinstance(data, dict)
-    if data_is_dict:
-        data_keys = set(data.keys())
-        if "find" in data_keys:
-            data_keys.remove("find")
-            data__find = data["find"]
-            if not isinstance(data__find, (dict)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".find must be object", value=data__find, name="" + (name_prefix or "data") + ".find", definition={'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}, rule='type')
-            data__find_is_dict = isinstance(data__find, dict)
-            if data__find_is_dict:
-                data__find_keys = set(data__find.keys())
-                if "where" in data__find_keys:
-                    data__find_keys.remove("where")
-                    data__find__where = data__find["where"]
-                    if not isinstance(data__find__where, (list, tuple)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".find.where must be array", value=data__find__where, name="" + (name_prefix or "data") + ".find.where", definition={'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, rule='type')
-                    data__find__where_is_list = isinstance(data__find__where, (list, tuple))
-                    if data__find__where_is_list:
-                        data__find__where_len = len(data__find__where)
-                        for data__find__where_x, data__find__where_item in enumerate(data__find__where):
-                            if not isinstance(data__find__where_item, (str)):
-                                raise JsonSchemaValueException("" + (name_prefix or "data") + ".find.where[{data__find__where_x}]".format(**locals()) + " must be string", value=data__find__where_item, name="" + (name_prefix or "data") + ".find.where[{data__find__where_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type')
-                if "exclude" in data__find_keys:
-                    data__find_keys.remove("exclude")
-                    data__find__exclude = data__find["exclude"]
-                    if not isinstance(data__find__exclude, (list, tuple)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".find.exclude must be array", value=data__find__exclude, name="" + (name_prefix or "data") + ".find.exclude", definition={'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, rule='type')
-                    data__find__exclude_is_list = isinstance(data__find__exclude, (list, tuple))
-                    if data__find__exclude_is_list:
-                        data__find__exclude_len = len(data__find__exclude)
-                        for data__find__exclude_x, data__find__exclude_item in enumerate(data__find__exclude):
-                            if not isinstance(data__find__exclude_item, (str)):
-                                raise JsonSchemaValueException("" + (name_prefix or "data") + ".find.exclude[{data__find__exclude_x}]".format(**locals()) + " must be string", value=data__find__exclude_item, name="" + (name_prefix or "data") + ".find.exclude[{data__find__exclude_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type')
-                if "include" in data__find_keys:
-                    data__find_keys.remove("include")
-                    data__find__include = data__find["include"]
-                    if not isinstance(data__find__include, (list, tuple)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".find.include must be array", value=data__find__include, name="" + (name_prefix or "data") + ".find.include", definition={'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, rule='type')
-                    data__find__include_is_list = isinstance(data__find__include, (list, tuple))
-                    if data__find__include_is_list:
-                        data__find__include_len = len(data__find__include)
-                        for data__find__include_x, data__find__include_item in enumerate(data__find__include):
-                            if not isinstance(data__find__include_item, (str)):
-                                raise JsonSchemaValueException("" + (name_prefix or "data") + ".find.include[{data__find__include_x}]".format(**locals()) + " must be string", value=data__find__include_item, name="" + (name_prefix or "data") + ".find.include[{data__find__include_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type')
-                if "namespaces" in data__find_keys:
-                    data__find_keys.remove("namespaces")
-                    data__find__namespaces = data__find["namespaces"]
-                    if not isinstance(data__find__namespaces, (bool)):
-                        raise JsonSchemaValueException("" + (name_prefix or "data") + ".find.namespaces must be boolean", value=data__find__namespaces, name="" + (name_prefix or "data") + ".find.namespaces", definition={'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}, rule='type')
-                if data__find_keys:
-                    raise JsonSchemaValueException("" + (name_prefix or "data") + ".find must not contain "+str(data__find_keys)+" properties", value=data__find, name="" + (name_prefix or "data") + ".find", definition={'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}, rule='additionalProperties')
-        if data_keys:
-            raise JsonSchemaValueException("" + (name_prefix or "data") + " must not contain "+str(data_keys)+" properties", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/find-directive', 'title': "'find:' directive", 'type': 'object', 'additionalProperties': False, 'properties': {'find': {'type': 'object', '$$description': ['Dynamic `package discovery', '`_.'], 'additionalProperties': False, 'properties': {'where': {'description': 'Directories to be searched for packages (Unix-style relative path)', 'type': 'array', 'items': {'type': 'string'}}, 'exclude': {'type': 'array', '$$description': ['Exclude packages that match the values listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'include': {'type': 'array', '$$description': ['Restrict the found packages to just the ones listed in this field.', "Can container shell-style wildcards (e.g. ``'pkg.*'``)"], 'items': {'type': 'string'}}, 'namespaces': {'type': 'boolean', '$$description': ['When ``True``, directories without a ``__init__.py`` file will also', 'be scanned for :pep:`420`-style implicit namespaces']}}}}}, rule='additionalProperties')
-    return data
-
-def validate_https___docs_python_org_3_install(data, custom_formats={}, name_prefix=None):
-    if not isinstance(data, (dict)):
-        raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://docs.python.org/3/install/', 'title': '``tool.distutils`` table', '$$description': ['Originally, ``distutils`` allowed developers to configure arguments for', '``setup.py`` scripts via `distutils configuration files', '`_.', '``tool.distutils`` subtables could be used with the same purpose', '(NOT CURRENTLY IMPLEMENTED).'], 'type': 'object', 'properties': {'global': {'type': 'object', 'description': 'Global options applied to all ``distutils`` commands'}}, 'patternProperties': {'.+': {'type': 'object'}}, '$comment': 'TODO: Is there a practical way of making this schema more specific?'}, rule='type')
-    data_is_dict = isinstance(data, dict)
-    if data_is_dict:
-        data_keys = set(data.keys())
-        if "global" in data_keys:
-            data_keys.remove("global")
-            data__global = data["global"]
-            if not isinstance(data__global, (dict)):
-                raise JsonSchemaValueException("" + (name_prefix or "data") + ".global must be object", value=data__global, name="" + (name_prefix or "data") + ".global", definition={'type': 'object', 'description': 'Global options applied to all ``distutils`` commands'}, rule='type')
-        for data_key, data_val in data.items():
-            if REGEX_PATTERNS['.+'].search(data_key):
-                if data_key in data_keys:
-                    data_keys.remove(data_key)
-                if not isinstance(data_val, (dict)):
-                    raise JsonSchemaValueException("" + (name_prefix or "data") + ".{data_key}".format(**locals()) + " must be object", value=data_val, name="" + (name_prefix or "data") + ".{data_key}".format(**locals()) + "", definition={'type': 'object'}, rule='type')
-    return data
-
-def validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata(data, custom_formats={}, name_prefix=None):
-    if not isinstance(data, (dict)):
-        raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-project-metadata/', 'title': 'Package metadata stored in the ``project`` table', '$$description': ['Data structure for the **project** table inside ``pyproject.toml``', '(as initially defined in :pep:`621`)'], 'type': 'object', 'properties': {'name': {'type': 'string', 'description': 'The name (primary identifier) of the project.
MUST be statically defined.', 'format': 'pep508-identifier'}, 'version': {'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, 'description': {'type': 'string', '$$description': ['The `summary description of the project', '`_']}, 'readme': {'$$description': ['`Full/detailed description of the project in the form of a README', '`_', "with meaning similar to the one defined in `core metadata's Description", '`_'], 'oneOf': [{'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, {'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}]}, 'requires-python': {'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, 'license': {'description': '`Project license `_.', 'oneOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}]}, 'authors': {'type': 'array', 'items': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, '$$description': ["The people or organizations considered to be the 'authors' of the project.", 'The exact meaning is open to interpretation (e.g. original or primary authors,', 'current maintainers, or owners of the package).']}, 'maintainers': {'type': 'array', 'items': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. 
whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, '$$description': ["The people or organizations considered to be the 'maintainers' of the project.", 'Similarly to ``authors``, the exact meaning is open to interpretation.']}, 'keywords': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of keywords to assist searching for the distribution in a larger catalog.'}, 'classifiers': {'type': 'array', 'items': {'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, '$$description': ['`Trove classifiers `_', 'which apply to the project.']}, 'urls': {'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, 'scripts': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'gui-scripts': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'entry-points': {'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. 
It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}}}, 'dependencies': {'type': 'array', 'description': 'Project (mandatory) dependencies.', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}, 'optional-dependencies': {'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}}, 'dynamic': {'type': 'array', '$$description': ['Specifies which fields are intentionally unspecified and expected to be', 'dynamically provided by build tools'], 'items': {'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}}}, 'required': ['name'], 'additionalProperties': False, 'if': {'not': {'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, '$$comment': ['According to :pep:`621`:', ' If the core metadata specification lists a field as "Required", then', ' the metadata MUST specify the field statically or list it in dynamic', 'In turn, `core metadata`_ defines:', ' The required fields are: Metadata-Version, Name, Version.', ' All the other fields are optional.', 'Since ``Metadata-Version`` is defined by the build back-end, ``name`` and', '``version`` are the only mandatory information in ``pyproject.toml``.', '.. _core metadata: https://packaging.python.org/specifications/core-metadata/']}, 'then': {'required': ['version'], '$$description': ['version should be statically defined in the ``version`` field']}, 'definitions': {'author': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, 'entry-point-group': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. 
It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'dependency': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}, rule='type')
-    data_is_dict = isinstance(data, dict)
-    if data_is_dict:
-        data_len = len(data)
-        if not all(prop in data for prop in ['name']):
-            raise JsonSchemaValueException("" + (name_prefix or "data") + " must contain ['name'] properties", value=data, name="" + (name_prefix or "data") + "", definition={'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-project-metadata/', 'title': 'Package metadata stored in the ``project`` table', '$$description': ['Data structure for the **project** table inside ``pyproject.toml``', '(as initially defined in :pep:`621`)'], 'type': 'object', 'properties': {'name': {'type': 'string', 'description': 'The name (primary identifier) of the project. MUST be statically defined.', 'format': 'pep508-identifier'}, 'version': {'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, 'description': {'type': 'string', '$$description': ['The `summary description of the project', '`_']}, 'readme': {'$$description': ['`Full/detailed description of the project in the form of a README', '`_', "with meaning similar to the one defined in `core metadata's Description", '`_'], 'oneOf': [{'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, {'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}]}, 'requires-python': {'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, 'license': {'description': '`Project license `_.', 'oneOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}]}, 'authors': {'type': 'array', 'items': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e.
whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, '$$description': ["The people or organizations considered to be the 'authors' of the project.", 'The exact meaning is open to interpretation (e.g. original or primary authors,', 'current maintainers, or owners of the package).']}, 'maintainers': {'type': 'array', 'items': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, '$$description': ["The people or organizations considered to be the 'maintainers' of the project.", 'Similarly to ``authors``, the exact meaning is open to interpretation.']}, 'keywords': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of keywords to assist searching for the distribution in a larger catalog.'}, 'classifiers': {'type': 'array', 'items': {'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, '$$description': ['`Trove classifiers `_', 'which apply to the project.']}, 'urls': {'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, 'scripts': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'gui-scripts': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. 
It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'entry-points': {'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}}}, 'dependencies': {'type': 'array', 'description': 'Project (mandatory) dependencies.', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}, 'optional-dependencies': {'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}}, 'dynamic': {'type': 'array', '$$description': ['Specifies which fields are intentionally unspecified and expected to be', 'dynamically provided by build tools'], 'items': {'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}}}, 'required': ['name'], 'additionalProperties': False, 'if': {'not': {'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, '$$comment': ['According to :pep:`621`:', ' If the core metadata specification lists a field as "Required", then', ' the metadata MUST specify the field statically or list it in dynamic', 'In turn, `core metadata`_ defines:', ' The required fields are: Metadata-Version, Name, Version.', ' All the other fields are optional.', 'Since ``Metadata-Version`` is defined by the build back-end, ``name`` and', '``version`` are the only mandatory information in ``pyproject.toml``.', '.. _core metadata: https://packaging.python.org/specifications/core-metadata/']}, 'then': {'required': ['version'], '$$description': ['version should be statically defined in the ``version`` field']}, 'definitions': {'author': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. 
whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, 'entry-point-group': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'dependency': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}, rule='required') - data_keys = set(data.keys()) - if "name" in data_keys: - data_keys.remove("name") - data__name = data["name"] - if not isinstance(data__name, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".name must be string", value=data__name, name="" + (name_prefix or "data") + ".name", definition={'type': 'string', 'description': 'The name (primary identifier) of the project. MUST be statically defined.', 'format': 'pep508-identifier'}, rule='type') - if isinstance(data__name, str): - if not custom_formats["pep508-identifier"](data__name): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".name must be pep508-identifier", value=data__name, name="" + (name_prefix or "data") + ".name", definition={'type': 'string', 'description': 'The name (primary identifier) of the project. 
MUST be statically defined.', 'format': 'pep508-identifier'}, rule='format') - if "version" in data_keys: - data_keys.remove("version") - data__version = data["version"] - if not isinstance(data__version, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".version must be string", value=data__version, name="" + (name_prefix or "data") + ".version", definition={'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, rule='type') - if isinstance(data__version, str): - if not custom_formats["pep440"](data__version): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".version must be pep440", value=data__version, name="" + (name_prefix or "data") + ".version", definition={'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, rule='format') - if "description" in data_keys: - data_keys.remove("description") - data__description = data["description"] - if not isinstance(data__description, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".description must be string", value=data__description, name="" + (name_prefix or "data") + ".description", definition={'type': 'string', '$$description': ['The `summary description of the project', '`_']}, rule='type') - if "readme" in data_keys: - data_keys.remove("readme") - data__readme = data["readme"] - data__readme_one_of_count8 = 0 - if data__readme_one_of_count8 < 2: - try: - if not isinstance(data__readme, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme must be string", value=data__readme, name="" + (name_prefix or "data") + ".readme", definition={'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, rule='type') - data__readme_one_of_count8 += 1 - except JsonSchemaValueException: pass - if data__readme_one_of_count8 < 2: - try: - if not isinstance(data__readme, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme must be object", value=data__readme, name="" + (name_prefix or "data") + ".readme", definition={'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). 
The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}, rule='type') - data__readme_any_of_count9 = 0 - if not data__readme_any_of_count9: - try: - data__readme_is_dict = isinstance(data__readme, dict) - if data__readme_is_dict: - data__readme_len = len(data__readme) - if not all(prop in data__readme for prop in ['file']): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme must contain ['file'] properties", value=data__readme, name="" + (name_prefix or "data") + ".readme", definition={'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, rule='required') - data__readme_keys = set(data__readme.keys()) - if "file" in data__readme_keys: - data__readme_keys.remove("file") - data__readme__file = data__readme["file"] - if not isinstance(data__readme__file, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme.file must be string", value=data__readme__file, name="" + (name_prefix or "data") + ".readme.file", definition={'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}, rule='type') - data__readme_any_of_count9 += 1 - except JsonSchemaValueException: pass - if not data__readme_any_of_count9: - try: - data__readme_is_dict = isinstance(data__readme, dict) - if data__readme_is_dict: - data__readme_len = len(data__readme) - if not all(prop in data__readme for prop in ['text']): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme must contain ['text'] properties", value=data__readme, name="" + (name_prefix or "data") + ".readme", definition={'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}, rule='required') - data__readme_keys = set(data__readme.keys()) - if "text" in data__readme_keys: - data__readme_keys.remove("text") - data__readme__text = data__readme["text"] - if not isinstance(data__readme__text, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme.text must be string", value=data__readme__text, name="" + (name_prefix or "data") + ".readme.text", definition={'type': 'string', 'description': 'Full text describing the project.'}, rule='type') - data__readme_any_of_count9 += 1 - except JsonSchemaValueException: pass - if not data__readme_any_of_count9: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme cannot be validated by any definition", value=data__readme, name="" + (name_prefix or "data") + ".readme", definition={'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, rule='anyOf') - data__readme_is_dict = isinstance(data__readme, dict) - if data__readme_is_dict: - data__readme_len = len(data__readme) - if not all(prop in data__readme for prop in ['content-type']): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme must contain ['content-type'] properties", value=data__readme, name="" + (name_prefix or "data") + ".readme", definition={'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. 
``text/markdown``). The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}, rule='required') - data__readme_keys = set(data__readme.keys()) - if "content-type" in data__readme_keys: - data__readme_keys.remove("content-type") - data__readme__contenttype = data__readme["content-type"] - if not isinstance(data__readme__contenttype, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme.content-type must be string", value=data__readme__contenttype, name="" + (name_prefix or "data") + ".readme.content-type", definition={'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}, rule='type') - data__readme_one_of_count8 += 1 - except JsonSchemaValueException: pass - if data__readme_one_of_count8 != 1: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".readme must be valid exactly by one definition" + (" (" + str(data__readme_one_of_count8) + " matches found)"), value=data__readme, name="" + (name_prefix or "data") + ".readme", definition={'$$description': ['`Full/detailed description of the project in the form of a README', '`_', "with meaning similar to the one defined in `core metadata's Description", '`_'], 'oneOf': [{'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, {'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). 
The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}]}, rule='oneOf') - if "requires-python" in data_keys: - data_keys.remove("requires-python") - data__requirespython = data["requires-python"] - if not isinstance(data__requirespython, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".requires-python must be string", value=data__requirespython, name="" + (name_prefix or "data") + ".requires-python", definition={'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, rule='type') - if isinstance(data__requirespython, str): - if not custom_formats["pep508-versionspec"](data__requirespython): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".requires-python must be pep508-versionspec", value=data__requirespython, name="" + (name_prefix or "data") + ".requires-python", definition={'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, rule='format') - if "license" in data_keys: - data_keys.remove("license") - data__license = data["license"] - data__license_one_of_count10 = 0 - if data__license_one_of_count10 < 2: - try: - data__license_is_dict = isinstance(data__license, dict) - if data__license_is_dict: - data__license_len = len(data__license) - if not all(prop in data__license for prop in ['file']): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".license must contain ['file'] properties", value=data__license, name="" + (name_prefix or "data") + ".license", definition={'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, rule='required') - data__license_keys = set(data__license.keys()) - if "file" in data__license_keys: - data__license_keys.remove("file") - data__license__file = data__license["file"] - if not isinstance(data__license__file, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".license.file must be string", value=data__license__file, name="" + (name_prefix or "data") + ".license.file", definition={'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}, rule='type') - data__license_one_of_count10 += 1 - except JsonSchemaValueException: pass - if data__license_one_of_count10 < 2: - try: - data__license_is_dict = isinstance(data__license, dict) - if data__license_is_dict: - data__license_len = len(data__license) - if not all(prop in data__license for prop in ['text']): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".license must contain ['text'] properties", value=data__license, name="" + (name_prefix or "data") + ".license", definition={'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}, rule='required') - data__license_keys = set(data__license.keys()) - if "text" in data__license_keys: - data__license_keys.remove("text") - data__license__text = data__license["text"] - if not isinstance(data__license__text, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".license.text must be string", value=data__license__text, name="" + (name_prefix or "data") + ".license.text", definition={'type': 'string', 
'$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}, rule='type') - data__license_one_of_count10 += 1 - except JsonSchemaValueException: pass - if data__license_one_of_count10 != 1: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".license must be valid exactly by one definition" + (" (" + str(data__license_one_of_count10) + " matches found)"), value=data__license, name="" + (name_prefix or "data") + ".license", definition={'description': '`Project license `_.', 'oneOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}]}, rule='oneOf') - if "authors" in data_keys: - data_keys.remove("authors") - data__authors = data["authors"] - if not isinstance(data__authors, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".authors must be array", value=data__authors, name="" + (name_prefix or "data") + ".authors", definition={'type': 'array', 'items': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, '$$description': ["The people or organizations considered to be the 'authors' of the project.", 'The exact meaning is open to interpretation (e.g. original or primary authors,', 'current maintainers, or owners of the package).']}, rule='type') - data__authors_is_list = isinstance(data__authors, (list, tuple)) - if data__authors_is_list: - data__authors_len = len(data__authors) - for data__authors_x, data__authors_item in enumerate(data__authors): - validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_author(data__authors_item, custom_formats, (name_prefix or "data") + ".authors[{data__authors_x}]") - if "maintainers" in data_keys: - data_keys.remove("maintainers") - data__maintainers = data["maintainers"] - if not isinstance(data__maintainers, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".maintainers must be array", value=data__maintainers, name="" + (name_prefix or "data") + ".maintainers", definition={'type': 'array', 'items': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. 
whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, '$$description': ["The people or organizations considered to be the 'maintainers' of the project.", 'Similarly to ``authors``, the exact meaning is open to interpretation.']}, rule='type') - data__maintainers_is_list = isinstance(data__maintainers, (list, tuple)) - if data__maintainers_is_list: - data__maintainers_len = len(data__maintainers) - for data__maintainers_x, data__maintainers_item in enumerate(data__maintainers): - validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_author(data__maintainers_item, custom_formats, (name_prefix or "data") + ".maintainers[{data__maintainers_x}]") - if "keywords" in data_keys: - data_keys.remove("keywords") - data__keywords = data["keywords"] - if not isinstance(data__keywords, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".keywords must be array", value=data__keywords, name="" + (name_prefix or "data") + ".keywords", definition={'type': 'array', 'items': {'type': 'string'}, 'description': 'List of keywords to assist searching for the distribution in a larger catalog.'}, rule='type') - data__keywords_is_list = isinstance(data__keywords, (list, tuple)) - if data__keywords_is_list: - data__keywords_len = len(data__keywords) - for data__keywords_x, data__keywords_item in enumerate(data__keywords): - if not isinstance(data__keywords_item, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".keywords[{data__keywords_x}]".format(**locals()) + " must be string", value=data__keywords_item, name="" + (name_prefix or "data") + ".keywords[{data__keywords_x}]".format(**locals()) + "", definition={'type': 'string'}, rule='type') - if "classifiers" in data_keys: - data_keys.remove("classifiers") - data__classifiers = data["classifiers"] - if not isinstance(data__classifiers, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".classifiers must be array", value=data__classifiers, name="" + (name_prefix or "data") + ".classifiers", definition={'type': 'array', 'items': {'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, '$$description': ['`Trove classifiers `_', 'which apply to the project.']}, rule='type') - data__classifiers_is_list = isinstance(data__classifiers, (list, tuple)) - if data__classifiers_is_list: - data__classifiers_len = len(data__classifiers) - for data__classifiers_x, data__classifiers_item in enumerate(data__classifiers): - if not isinstance(data__classifiers_item, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".classifiers[{data__classifiers_x}]".format(**locals()) + " must be string", value=data__classifiers_item, name="" + (name_prefix or "data") + ".classifiers[{data__classifiers_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, rule='type') - if isinstance(data__classifiers_item, str): - if not custom_formats["trove-classifier"](data__classifiers_item): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".classifiers[{data__classifiers_x}]".format(**locals()) + " must be trove-classifier", value=data__classifiers_item, name="" + (name_prefix or "data") + ".classifiers[{data__classifiers_x}]".format(**locals()) + "", definition={'type': 'string', 'format': 'trove-classifier', 
'description': '`PyPI classifier `_.'}, rule='format') - if "urls" in data_keys: - data_keys.remove("urls") - data__urls = data["urls"] - if not isinstance(data__urls, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".urls must be object", value=data__urls, name="" + (name_prefix or "data") + ".urls", definition={'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, rule='type') - data__urls_is_dict = isinstance(data__urls, dict) - if data__urls_is_dict: - data__urls_keys = set(data__urls.keys()) - for data__urls_key, data__urls_val in data__urls.items(): - if REGEX_PATTERNS['^.+$'].search(data__urls_key): - if data__urls_key in data__urls_keys: - data__urls_keys.remove(data__urls_key) - if not isinstance(data__urls_val, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".urls.{data__urls_key}".format(**locals()) + " must be string", value=data__urls_val, name="" + (name_prefix or "data") + ".urls.{data__urls_key}".format(**locals()) + "", definition={'type': 'string', 'format': 'url'}, rule='type') - if isinstance(data__urls_val, str): - if not custom_formats["url"](data__urls_val): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".urls.{data__urls_key}".format(**locals()) + " must be url", value=data__urls_val, name="" + (name_prefix or "data") + ".urls.{data__urls_key}".format(**locals()) + "", definition={'type': 'string', 'format': 'url'}, rule='format') - if data__urls_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".urls must not contain "+str(data__urls_keys)+" properties", value=data__urls, name="" + (name_prefix or "data") + ".urls", definition={'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, rule='additionalProperties') - if "scripts" in data_keys: - data_keys.remove("scripts") - data__scripts = data["scripts"] - validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_entry_point_group(data__scripts, custom_formats, (name_prefix or "data") + ".scripts") - if "gui-scripts" in data_keys: - data_keys.remove("gui-scripts") - data__guiscripts = data["gui-scripts"] - validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_entry_point_group(data__guiscripts, custom_formats, (name_prefix or "data") + ".gui-scripts") - if "entry-points" in data_keys: - data_keys.remove("entry-points") - data__entrypoints = data["entry-points"] - data__entrypoints_is_dict = isinstance(data__entrypoints, dict) - if data__entrypoints_is_dict: - data__entrypoints_keys = set(data__entrypoints.keys()) - for data__entrypoints_key, data__entrypoints_val in data__entrypoints.items(): - if REGEX_PATTERNS['^.+$'].search(data__entrypoints_key): - if data__entrypoints_key in data__entrypoints_keys: - data__entrypoints_keys.remove(data__entrypoints_key) - validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_entry_point_group(data__entrypoints_val, custom_formats, (name_prefix or "data") + ".entry-points.{data__entrypoints_key}") - if data__entrypoints_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".entry-points must not contain "+str(data__entrypoints_keys)+" properties", 
value=data__entrypoints, name="" + (name_prefix or "data") + ".entry-points", definition={'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}}}, rule='additionalProperties') - data__entrypoints_len = len(data__entrypoints) - if data__entrypoints_len != 0: - data__entrypoints_property_names = True - for data__entrypoints_key in data__entrypoints: - try: - if isinstance(data__entrypoints_key, str): - if not custom_formats["python-entrypoint-group"](data__entrypoints_key): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".entry-points must be python-entrypoint-group", value=data__entrypoints_key, name="" + (name_prefix or "data") + ".entry-points", definition={'format': 'python-entrypoint-group'}, rule='format') - except JsonSchemaValueException: - data__entrypoints_property_names = False - if not data__entrypoints_property_names: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".entry-points must be named by propertyName definition", value=data__entrypoints, name="" + (name_prefix or "data") + ".entry-points", definition={'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. 
It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}}}, rule='propertyNames') - if "dependencies" in data_keys: - data_keys.remove("dependencies") - data__dependencies = data["dependencies"] - if not isinstance(data__dependencies, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dependencies must be array", value=data__dependencies, name="" + (name_prefix or "data") + ".dependencies", definition={'type': 'array', 'description': 'Project (mandatory) dependencies.', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}, rule='type') - data__dependencies_is_list = isinstance(data__dependencies, (list, tuple)) - if data__dependencies_is_list: - data__dependencies_len = len(data__dependencies) - for data__dependencies_x, data__dependencies_item in enumerate(data__dependencies): - validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_dependency(data__dependencies_item, custom_formats, (name_prefix or "data") + ".dependencies[{data__dependencies_x}]") - if "optional-dependencies" in data_keys: - data_keys.remove("optional-dependencies") - data__optionaldependencies = data["optional-dependencies"] - if not isinstance(data__optionaldependencies, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".optional-dependencies must be object", value=data__optionaldependencies, name="" + (name_prefix or "data") + ".optional-dependencies", definition={'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}}, rule='type') - data__optionaldependencies_is_dict = isinstance(data__optionaldependencies, dict) - if data__optionaldependencies_is_dict: - data__optionaldependencies_keys = set(data__optionaldependencies.keys()) - for data__optionaldependencies_key, data__optionaldependencies_val in data__optionaldependencies.items(): - if REGEX_PATTERNS['^.+$'].search(data__optionaldependencies_key): - if data__optionaldependencies_key in data__optionaldependencies_keys: - data__optionaldependencies_keys.remove(data__optionaldependencies_key) - if not isinstance(data__optionaldependencies_val, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".optional-dependencies.{data__optionaldependencies_key}".format(**locals()) + " must be array", value=data__optionaldependencies_val, name="" + (name_prefix or "data") + ".optional-dependencies.{data__optionaldependencies_key}".format(**locals()) + "", definition={'type': 'array', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}, rule='type') - data__optionaldependencies_val_is_list = isinstance(data__optionaldependencies_val, (list, tuple)) - if data__optionaldependencies_val_is_list: - data__optionaldependencies_val_len = len(data__optionaldependencies_val) - for data__optionaldependencies_val_x, 
data__optionaldependencies_val_item in enumerate(data__optionaldependencies_val): - validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_dependency(data__optionaldependencies_val_item, custom_formats, (name_prefix or "data") + ".optional-dependencies.{data__optionaldependencies_key}[{data__optionaldependencies_val_x}]") - if data__optionaldependencies_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".optional-dependencies must not contain "+str(data__optionaldependencies_keys)+" properties", value=data__optionaldependencies, name="" + (name_prefix or "data") + ".optional-dependencies", definition={'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}}, rule='additionalProperties') - data__optionaldependencies_len = len(data__optionaldependencies) - if data__optionaldependencies_len != 0: - data__optionaldependencies_property_names = True - for data__optionaldependencies_key in data__optionaldependencies: - try: - if isinstance(data__optionaldependencies_key, str): - if not custom_formats["pep508-identifier"](data__optionaldependencies_key): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".optional-dependencies must be pep508-identifier", value=data__optionaldependencies_key, name="" + (name_prefix or "data") + ".optional-dependencies", definition={'format': 'pep508-identifier'}, rule='format') - except JsonSchemaValueException: - data__optionaldependencies_property_names = False - if not data__optionaldependencies_property_names: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".optional-dependencies must be named by propertyName definition", value=data__optionaldependencies, name="" + (name_prefix or "data") + ".optional-dependencies", definition={'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}}, rule='propertyNames') - if "dynamic" in data_keys: - data_keys.remove("dynamic") - data__dynamic = data["dynamic"] - if not isinstance(data__dynamic, (list, tuple)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic must be array", value=data__dynamic, name="" + (name_prefix or "data") + ".dynamic", definition={'type': 'array', '$$description': ['Specifies which fields are intentionally unspecified and expected to be', 'dynamically provided by build tools'], 'items': {'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}}, rule='type') - data__dynamic_is_list = isinstance(data__dynamic, (list, tuple)) - if data__dynamic_is_list: - data__dynamic_len = len(data__dynamic) - for data__dynamic_x, data__dynamic_item in enumerate(data__dynamic): - if data__dynamic_item not in ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 
'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic[{data__dynamic_x}]".format(**locals()) + " must be one of ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']", value=data__dynamic_item, name="" + (name_prefix or "data") + ".dynamic[{data__dynamic_x}]".format(**locals()) + "", definition={'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}, rule='enum') - if data_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + " must not contain "+str(data_keys)+" properties", value=data, name="" + (name_prefix or "data") + "", definition={'$schema': 'http://json-schema.org/draft-07/schema', '$id': 'https://packaging.python.org/en/latest/specifications/declaring-project-metadata/', 'title': 'Package metadata stored in the ``project`` table', '$$description': ['Data structure for the **project** table inside ``pyproject.toml``', '(as initially defined in :pep:`621`)'], 'type': 'object', 'properties': {'name': {'type': 'string', 'description': 'The name (primary identifier) of the project. MUST be statically defined.', 'format': 'pep508-identifier'}, 'version': {'type': 'string', 'description': 'The version of the project as supported by :pep:`440`.', 'format': 'pep440'}, 'description': {'type': 'string', '$$description': ['The `summary description of the project', '`_']}, 'readme': {'$$description': ['`Full/detailed description of the project in the form of a README', '`_', "with meaning similar to the one defined in `core metadata's Description", '`_'], 'oneOf': [{'type': 'string', '$$description': ['Relative path to a text file (UTF-8) containing the full description', 'of the project. If the file path ends in case-insensitive ``.md`` or', '``.rst`` suffixes, then the content-type is respectively', '``text/markdown`` or ``text/x-rst``']}, {'type': 'object', 'allOf': [{'anyOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to a text file containing the full description', 'of the project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', 'description': 'Full text describing the project.'}}, 'required': ['text']}]}, {'properties': {'content-type': {'type': 'string', '$$description': ['Content-type (:rfc:`1341`) of the full description', '(e.g. ``text/markdown``). 
The ``charset`` parameter is assumed', 'UTF-8 when not present.'], '$comment': 'TODO: add regex pattern or format?'}}, 'required': ['content-type']}]}]}, 'requires-python': {'type': 'string', 'format': 'pep508-versionspec', '$$description': ['`The Python version requirements of the project', '`_.']}, 'license': {'description': '`Project license `_.', 'oneOf': [{'properties': {'file': {'type': 'string', '$$description': ['Relative path to the file (UTF-8) which contains the license for the', 'project.']}}, 'required': ['file']}, {'properties': {'text': {'type': 'string', '$$description': ['The license of the project whose meaning is that of the', '`License field from the core metadata', '`_.']}}, 'required': ['text']}]}, 'authors': {'type': 'array', 'items': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, '$$description': ["The people or organizations considered to be the 'authors' of the project.", 'The exact meaning is open to interpretation (e.g. original or primary authors,', 'current maintainers, or owners of the package).']}, 'maintainers': {'type': 'array', 'items': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, '$$description': ["The people or organizations considered to be the 'maintainers' of the project.", 'Similarly to ``authors``, the exact meaning is open to interpretation.']}, 'keywords': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of keywords to assist searching for the distribution in a larger catalog.'}, 'classifiers': {'type': 'array', 'items': {'type': 'string', 'format': 'trove-classifier', 'description': '`PyPI classifier `_.'}, '$$description': ['`Trove classifiers `_', 'which apply to the project.']}, 'urls': {'type': 'object', 'description': 'URLs associated with the project in the form ``label => value``.', 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', 'format': 'url'}}}, 'scripts': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. 
It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'gui-scripts': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'entry-points': {'$$description': ['Instruct the installer to expose the given modules/functions via', '``entry-point`` discovery mechanism (useful for plugins).', 'More information available in the `Python packaging guide', '`_.'], 'propertyNames': {'format': 'python-entrypoint-group'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}}}, 'dependencies': {'type': 'array', 'description': 'Project (mandatory) dependencies.', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}, 'optional-dependencies': {'type': 'object', 'description': 'Optional dependency for the project', 'propertyNames': {'format': 'pep508-identifier'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'array', 'items': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}}, 'dynamic': {'type': 'array', '$$description': ['Specifies which fields are intentionally unspecified and expected to be', 'dynamically provided by build tools'], 'items': {'enum': ['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']}}}, 'required': ['name'], 'additionalProperties': False, 'if': {'not': {'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, '$$comment': ['According to :pep:`621`:', ' If the core metadata specification lists a field as "Required", then', ' the metadata MUST specify the field statically or list it in dynamic', 'In turn, `core metadata`_ defines:', ' The required fields are: Metadata-Version, Name, Version.', ' All the other fields are optional.', 'Since 
``Metadata-Version`` is defined by the build back-end, ``name`` and', '``version`` are the only mandatory information in ``pyproject.toml``.', '.. _core metadata: https://packaging.python.org/specifications/core-metadata/']}, 'then': {'required': ['version'], '$$description': ['version should be statically defined in the ``version`` field']}, 'definitions': {'author': {'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, 'entry-point-group': {'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, 'dependency': {'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}}}, rule='additionalProperties') - try: - try: - data_is_dict = isinstance(data, dict) - if data_is_dict: - data_len = len(data) - if not all(prop in data for prop in ['dynamic']): - raise JsonSchemaValueException("" + (name_prefix or "data") + " must contain ['dynamic'] properties", value=data, name="" + (name_prefix or "data") + "", definition={'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, rule='required') - data_keys = set(data.keys()) - if "dynamic" in data_keys: - data_keys.remove("dynamic") - data__dynamic = data["dynamic"] - data__dynamic_is_list = isinstance(data__dynamic, (list, tuple)) - if data__dynamic_is_list: - data__dynamic_contains = False - for data__dynamic_key in data__dynamic: - try: - if data__dynamic_key != "version": - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic must be same as const definition: version", value=data__dynamic_key, name="" + (name_prefix or "data") + ".dynamic", definition={'const': 'version'}, rule='const') - data__dynamic_contains = True - break - except JsonSchemaValueException: pass - if not data__dynamic_contains: - raise JsonSchemaValueException("" + (name_prefix or "data") + ".dynamic must contain one of contains definition", value=data__dynamic, name="" + (name_prefix or "data") + ".dynamic", definition={'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}, rule='contains') - except JsonSchemaValueException: pass - else: - raise JsonSchemaValueException("" + (name_prefix or "data") + " must NOT match a disallowed definition", value=data, name="" + (name_prefix or "data") + "", definition={'not': {'required': ['dynamic'], 'properties': {'dynamic': {'contains': {'const': 'version'}, '$$description': ['version is listed in ``dynamic``']}}}, 
'$$comment': ['According to :pep:`621`:', ' If the core metadata specification lists a field as "Required", then', ' the metadata MUST specify the field statically or list it in dynamic', 'In turn, `core metadata`_ defines:', ' The required fields are: Metadata-Version, Name, Version.', ' All the other fields are optional.', 'Since ``Metadata-Version`` is defined by the build back-end, ``name`` and', '``version`` are the only mandatory information in ``pyproject.toml``.', '.. _core metadata: https://packaging.python.org/specifications/core-metadata/']}, rule='not') - except JsonSchemaValueException: - pass - else: - data_is_dict = isinstance(data, dict) - if data_is_dict: - data_len = len(data) - if not all(prop in data for prop in ['version']): - raise JsonSchemaValueException("" + (name_prefix or "data") + " must contain ['version'] properties", value=data, name="" + (name_prefix or "data") + "", definition={'required': ['version'], '$$description': ['version should be statically defined in the ``version`` field']}, rule='required') - return data - -def validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_dependency(data, custom_formats={}, name_prefix=None): - if not isinstance(data, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + " must be string", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}, rule='type') - if isinstance(data, str): - if not custom_formats["pep508"](data): - raise JsonSchemaValueException("" + (name_prefix or "data") + " must be pep508", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/dependency', 'title': 'Dependency', 'type': 'string', 'description': 'Project dependency specification according to PEP 508', 'format': 'pep508'}, rule='format') - return data - -def validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_entry_point_group(data, custom_formats={}, name_prefix=None): - if not isinstance(data, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. 
It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, rule='type') - data_is_dict = isinstance(data, dict) - if data_is_dict: - data_keys = set(data.keys()) - for data_key, data_val in data.items(): - if REGEX_PATTERNS['^.+$'].search(data_key): - if data_key in data_keys: - data_keys.remove(data_key) - if not isinstance(data_val, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".{data_key}".format(**locals()) + " must be string", value=data_val, name="" + (name_prefix or "data") + ".{data_key}".format(**locals()) + "", definition={'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}, rule='type') - if isinstance(data_val, str): - if not custom_formats["python-entrypoint-reference"](data_val): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".{data_key}".format(**locals()) + " must be python-entrypoint-reference", value=data_val, name="" + (name_prefix or "data") + ".{data_key}".format(**locals()) + "", definition={'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}, rule='format') - if data_keys: - raise JsonSchemaValueException("" + (name_prefix or "data") + " must not contain "+str(data_keys)+" properties", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. 
It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, rule='additionalProperties') - data_len = len(data) - if data_len != 0: - data_property_names = True - for data_key in data: - try: - if isinstance(data_key, str): - if not custom_formats["python-entrypoint-name"](data_key): - raise JsonSchemaValueException("" + (name_prefix or "data") + " must be python-entrypoint-name", value=data_key, name="" + (name_prefix or "data") + "", definition={'format': 'python-entrypoint-name'}, rule='format') - except JsonSchemaValueException: - data_property_names = False - if not data_property_names: - raise JsonSchemaValueException("" + (name_prefix or "data") + " must be named by propertyName definition", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/entry-point-group', 'title': 'Entry-points', 'type': 'object', '$$description': ['Entry-points are grouped together to indicate what sort of capabilities they', 'provide.', 'See the `packaging guides', '`_', 'and `setuptools docs', '`_', 'for more information.'], 'propertyNames': {'format': 'python-entrypoint-name'}, 'additionalProperties': False, 'patternProperties': {'^.+$': {'type': 'string', '$$description': ['Reference to a Python object. It is either in the form', '``importable.module``, or ``importable.module:object.attr``.'], 'format': 'python-entrypoint-reference', '$comment': 'https://packaging.python.org/specifications/entry-points/'}}}, rule='propertyNames') - return data - -def validate_https___packaging_python_org_en_latest_specifications_declaring_project_metadata___definitions_author(data, custom_formats={}, name_prefix=None): - if not isinstance(data, (dict)): - raise JsonSchemaValueException("" + (name_prefix or "data") + " must be object", value=data, name="" + (name_prefix or "data") + "", definition={'$id': '#/definitions/author', 'title': 'Author or Maintainer', '$comment': 'https://www.python.org/dev/peps/pep-0621/#authors-maintainers', 'type': 'object', 'properties': {'name': {'type': 'string', '$$description': ['MUST be a valid email name, i.e. whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, 'email': {'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}}}, rule='type') - data_is_dict = isinstance(data, dict) - if data_is_dict: - data_keys = set(data.keys()) - if "name" in data_keys: - data_keys.remove("name") - data__name = data["name"] - if not isinstance(data__name, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".name must be string", value=data__name, name="" + (name_prefix or "data") + ".name", definition={'type': 'string', '$$description': ['MUST be a valid email name, i.e. 
whatever can be put as a name, before an', 'email, in :rfc:`822`.']}, rule='type') - if "email" in data_keys: - data_keys.remove("email") - data__email = data["email"] - if not isinstance(data__email, (str)): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".email must be string", value=data__email, name="" + (name_prefix or "data") + ".email", definition={'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}, rule='type') - if isinstance(data__email, str): - if not REGEX_PATTERNS["idn-email_re_pattern"].match(data__email): - raise JsonSchemaValueException("" + (name_prefix or "data") + ".email must be idn-email", value=data__email, name="" + (name_prefix or "data") + ".email", definition={'type': 'string', 'format': 'idn-email', 'description': 'MUST be a valid email address'}, rule='format') - return data \ No newline at end of file diff --git a/spaces/BilalSardar/Halal_Food_Checker/app.py b/spaces/BilalSardar/Halal_Food_Checker/app.py deleted file mode 100644 index f2ca7e2ce9bee3443ba23409f18c450ad506381f..0000000000000000000000000000000000000000 --- a/spaces/BilalSardar/Halal_Food_Checker/app.py +++ /dev/null @@ -1,67 +0,0 @@ -import pytesseract -from PIL import Image -import requests -import json -import numpy as np -import gradio as gr -import os - - - -def get_halal_data(ingredient, num_results=20): - - try: - url = f'http://halal.addi.is.its.ac.id/apiv2?q={ingredient}&result={num_results}' - response = requests.get(url) - data = response.json() - except requests.exceptions.RequestException as e: - print(f"Error: {e}") - return None - - results = [] - - for result in data['entityData']: - try: - if result['atribute']['certificate']: - results.append(result) - except Exception: - pass - - if not results: - return "No data found confirming that this ingredient is halal" - - return results - - - -def extract_text(text, image): - # Convert sketchpad to bounding box - # img1 = np.array(Image.fromarray(image)) - if text == '': - text = pytesseract.image_to_string(image) - # Extract ingredient words - # ingredients = [word for word in text.split() if word.isalpha()] - - # results = {} - - # for ing in ingredients: - # data = get_halal_data(ing, 5) - # if data: - # results[ing] = data - # else: - # results[ing] = "No halal data found" - results = None # keep results defined even if the lookup below raises - try: - results = get_halal_data(text, 5) - except Exception: - pass - - return text, results - -iface = gr.Interface(fn=extract_text, - inputs=["text", gr.inputs.Image(label="image", type="numpy")], - outputs=["text", "text"], - examples=[["Monosodium Glutamate", None], [None, "3.jpg"]], - title="Halal Food Checker", - description="Enter products, ingredients, food codes, or manufacturers manually, or upload an image and crop it to the ingredient. 
If data is shown for an ingredient, it means the ingredient is halal.") - -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/Branon/Proxy/README.md b/spaces/Branon/Proxy/README.md deleted file mode 100644 index 5929f4a19e168d8b5a16e08dc869bd99de831eaa..0000000000000000000000000000000000000000 --- a/spaces/Branon/Proxy/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Proxy -emoji: -colorFrom: red -colorTo: blue -sdk: docker -pinned: false -duplicated_from: Branon/TempBRICS --- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/bad_alloc.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/bad_alloc.h deleted file mode 100644 index 461704fd6b74a33f3c9c789f0f02833bf49586d3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/bad_alloc.h +++ /dev/null @@ -1,57 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ - -// define our own bad_alloc so we can set its .what() -class bad_alloc - : public std::bad_alloc -{ - public: - inline bad_alloc(const std::string &w) - : std::bad_alloc(), m_what() - { - m_what = std::bad_alloc::what(); - m_what += ": "; - m_what += w; - } // end bad_alloc() - - inline virtual ~bad_alloc(void) throw () {}; - - inline virtual const char *what(void) const throw() - { - return m_what.c_str(); - } // end what() - - private: - std::string m_what; -}; // end bad_alloc - -} // end detail -} // end system -} // end thrust - diff --git a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter/utils.py b/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter/utils.py deleted file mode 100644 index af402265c74fc092a1a03ef6e90b5c9ad3f1934b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter/utils.py +++ /dev/null @@ -1,81 +0,0 @@ - -import logging -import torch - - -def train_epoch(model, dataloader, criterion, optimizer, device, scheduler=None): - - pred_correct, pred_all = 0, 0 - running_loss = 0.0 - - for i, data in enumerate(dataloader): - inputs, labels = data - inputs = inputs.squeeze(0).to(device) - labels = labels.to(device, dtype=torch.long) - - optimizer.zero_grad() - outputs = model(inputs).expand(1, -1, -1) - - loss = criterion(outputs[0], labels[0]) - loss.backward() - optimizer.step() - running_loss += loss - - # Statistics - if int(torch.argmax(torch.nn.functional.softmax(outputs, dim=2))) == int(labels[0][0]): - pred_correct += 1 - pred_all += 1 - - if scheduler: - scheduler.step(running_loss.item() / len(dataloader)) - - return running_loss, pred_correct, pred_all, (pred_correct / pred_all) - - -def evaluate(model, dataloader, device, print_stats=False): - - pred_correct, pred_all = 0, 0 - stats = {i: [0, 0] for i in range(101)} - - for i, data in enumerate(dataloader): - inputs, labels = data - 
inputs = inputs.squeeze(0).to(device) - labels = labels.to(device, dtype=torch.long) - - outputs = model(inputs).expand(1, -1, -1) - - # Statistics - if int(torch.argmax(torch.nn.functional.softmax(outputs, dim=2))) == int(labels[0][0]): - stats[int(labels[0][0])][0] += 1 - pred_correct += 1 - - stats[int(labels[0][0])][1] += 1 - pred_all += 1 - - if print_stats: - stats = {key: value[0] / value[1] for key, value in stats.items() if value[1] != 0} - print("Label accuracies statistics:") - print(str(stats) + "\n") - logging.info("Label accuracies statistics:") - logging.info(str(stats) + "\n") - - return pred_correct, pred_all, (pred_correct / pred_all) - - -def evaluate_top_k(model, dataloader, device, k=5): - - pred_correct, pred_all = 0, 0 - - for i, data in enumerate(dataloader): - inputs, labels = data - inputs = inputs.squeeze(0).to(device) - labels = labels.to(device, dtype=torch.long) - - outputs = model(inputs).expand(1, -1, -1) - - if int(labels[0][0]) in torch.topk(outputs, k).indices.flatten().tolist(): # flatten the (1, 1, k) top-k indices into a plain list of class ids - pred_correct += 1 - - pred_all += 1 - - return pred_correct, pred_all, (pred_correct / pred_all) diff --git a/spaces/CVPR/ml-talking-face/README.md b/spaces/CVPR/ml-talking-face/README.md deleted file mode 100644 index cfc2b726c581f69256f3b84b167d536733c383fc..0000000000000000000000000000000000000000 --- a/spaces/CVPR/ml-talking-face/README.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Talking Face Generation with Multilingual TTS -emoji: 👄 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.0.6 -app_file: app.py -pinned: false -license: cc-by-nc-sa-4.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
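-
-For reference, a complete front-matter using the fields above could look like the following. This is an illustrative sketch only: the `models` and `datasets` entries reuse the example IDs quoted in the field descriptions and are not part of this Space's actual configuration.
-
-```yaml
-title: Talking Face Generation with Multilingual TTS
-emoji: 👄
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.0.6
-app_file: app.py
-models:
-  - gpt2
-datasets:
-  - common_voice
-pinned: false
-license: cc-by-nc-sa-4.0
-```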
diff --git a/spaces/CVPR/monoscene_lite/monoscene/monoscene_model.py b/spaces/CVPR/monoscene_lite/monoscene/monoscene_model.py deleted file mode 100644 index 8a5207f3d03de86192c5d41a8bdfe3ce32e672ab..0000000000000000000000000000000000000000 --- a/spaces/CVPR/monoscene_lite/monoscene/monoscene_model.py +++ /dev/null @@ -1,21 +0,0 @@ -from transformers import PreTrainedModel -from .config import MonoSceneConfig -from monoscene.monoscene import MonoScene - - -class MonoSceneModel(PreTrainedModel): - config_class = MonoSceneConfig - - def __init__(self, config): - super().__init__(config) - self.model = MonoScene( - dataset=config.dataset, - n_classes=config.n_classes, - feature=config.feature, - project_scale=config.project_scale, - full_scene_size=config.full_scene_size - ) - - - def forward(self, tensor): - return self.model.forward(tensor) \ No newline at end of file diff --git a/spaces/Coweed/BadTrip/Dockerfile b/spaces/Coweed/BadTrip/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/Coweed/BadTrip/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/layout.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/layout.py deleted file mode 100644 index 6b85cd503387291f326e937b36a5739b1de23ef1..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/layout.py +++ /dev/null @@ -1,530 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools import ttLib -from fontTools.ttLib.tables.DefaultTable import DefaultTable -from fontTools.ttLib.tables import otTables -from fontTools.merge.base import add_method, mergeObjects -from fontTools.merge.util import * -import logging - - -log = logging.getLogger("fontTools.merge") - - -def mergeLookupLists(lst): - # TODO Do smarter merge. 
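- # A minimal strategy for now: simply concatenate the per-font lookup lists with sumLists; shared lookups are not deduplicated.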
- return sumLists(lst) - - -def mergeFeatures(lst): - assert lst - self = otTables.Feature() - self.FeatureParams = None - self.LookupListIndex = mergeLookupLists( - [l.LookupListIndex for l in lst if l.LookupListIndex] - ) - self.LookupCount = len(self.LookupListIndex) - return self - - -def mergeFeatureLists(lst): - d = {} - for l in lst: - for f in l: - tag = f.FeatureTag - if tag not in d: - d[tag] = [] - d[tag].append(f.Feature) - ret = [] - for tag in sorted(d.keys()): - rec = otTables.FeatureRecord() - rec.FeatureTag = tag - rec.Feature = mergeFeatures(d[tag]) - ret.append(rec) - return ret - - -def mergeLangSyses(lst): - assert lst - - # TODO Support merging ReqFeatureIndex - assert all(l.ReqFeatureIndex == 0xFFFF for l in lst) - - self = otTables.LangSys() - self.LookupOrder = None - self.ReqFeatureIndex = 0xFFFF - self.FeatureIndex = mergeFeatureLists( - [l.FeatureIndex for l in lst if l.FeatureIndex] - ) - self.FeatureCount = len(self.FeatureIndex) - return self - - -def mergeScripts(lst): - assert lst - - if len(lst) == 1: - return lst[0] - langSyses = {} - for sr in lst: - for lsr in sr.LangSysRecord: - if lsr.LangSysTag not in langSyses: - langSyses[lsr.LangSysTag] = [] - langSyses[lsr.LangSysTag].append(lsr.LangSys) - lsrecords = [] - for tag, langSys_list in sorted(langSyses.items()): - lsr = otTables.LangSysRecord() - lsr.LangSys = mergeLangSyses(langSys_list) - lsr.LangSysTag = tag - lsrecords.append(lsr) - - self = otTables.Script() - self.LangSysRecord = lsrecords - self.LangSysCount = len(lsrecords) - dfltLangSyses = [s.DefaultLangSys for s in lst if s.DefaultLangSys] - if dfltLangSyses: - self.DefaultLangSys = mergeLangSyses(dfltLangSyses) - else: - self.DefaultLangSys = None - return self - - -def mergeScriptRecords(lst): - d = {} - for l in lst: - for s in l: - tag = s.ScriptTag - if tag not in d: - d[tag] = [] - d[tag].append(s.Script) - ret = [] - for tag in sorted(d.keys()): - rec = otTables.ScriptRecord() - rec.ScriptTag = tag - rec.Script = mergeScripts(d[tag]) - ret.append(rec) - return ret - - -otTables.ScriptList.mergeMap = { - "ScriptCount": lambda lst: None, # TODO - "ScriptRecord": mergeScriptRecords, -} -otTables.BaseScriptList.mergeMap = { - "BaseScriptCount": lambda lst: None, # TODO - # TODO: Merge duplicate entries - "BaseScriptRecord": lambda lst: sorted( - sumLists(lst), key=lambda s: s.BaseScriptTag - ), -} - -otTables.FeatureList.mergeMap = { - "FeatureCount": sum, - "FeatureRecord": lambda lst: sorted(sumLists(lst), key=lambda s: s.FeatureTag), -} - -otTables.LookupList.mergeMap = { - "LookupCount": sum, - "Lookup": sumLists, -} - -otTables.Coverage.mergeMap = { - "Format": min, - "glyphs": sumLists, -} - -otTables.ClassDef.mergeMap = { - "Format": min, - "classDefs": sumDicts, -} - -otTables.LigCaretList.mergeMap = { - "Coverage": mergeObjects, - "LigGlyphCount": sum, - "LigGlyph": sumLists, -} - -otTables.AttachList.mergeMap = { - "Coverage": mergeObjects, - "GlyphCount": sum, - "AttachPoint": sumLists, -} - -# XXX Renumber MarkFilterSets of lookups -otTables.MarkGlyphSetsDef.mergeMap = { - "MarkSetTableFormat": equal, - "MarkSetCount": sum, - "Coverage": sumLists, -} - -otTables.Axis.mergeMap = { - "*": mergeObjects, -} - -# XXX Fix BASE table merging -otTables.BaseTagList.mergeMap = { - "BaseTagCount": sum, - "BaselineTag": sumLists, -} - -otTables.GDEF.mergeMap = ( - otTables.GSUB.mergeMap -) = ( - otTables.GPOS.mergeMap -) = otTables.BASE.mergeMap = otTables.JSTF.mergeMap = otTables.MATH.mergeMap = { - "*": mergeObjects, - "Version": max, 
-} - -ttLib.getTableClass("GDEF").mergeMap = ttLib.getTableClass( - "GSUB" -).mergeMap = ttLib.getTableClass("GPOS").mergeMap = ttLib.getTableClass( - "BASE" -).mergeMap = ttLib.getTableClass( - "JSTF" -).mergeMap = ttLib.getTableClass( - "MATH" -).mergeMap = { - "tableTag": onlyExisting(equal), # XXX clean me up - "table": mergeObjects, -} - - -@add_method(ttLib.getTableClass("GSUB")) -def merge(self, m, tables): - assert len(tables) == len(m.duplicateGlyphsPerFont) - for i, (table, dups) in enumerate(zip(tables, m.duplicateGlyphsPerFont)): - if not dups: - continue - if table is None or table is NotImplemented: - log.warning( - "Have non-identical duplicates to resolve for '%s' but no GSUB. Are duplicates intended?: %s", - m.fonts[i]._merger__name, - dups, - ) - continue - - synthFeature = None - synthLookup = None - for script in table.table.ScriptList.ScriptRecord: - if script.ScriptTag == "DFLT": - continue # XXX - for langsys in [script.Script.DefaultLangSys] + [ - l.LangSys for l in script.Script.LangSysRecord - ]: - if langsys is None: - continue # XXX Create! - feature = [v for v in langsys.FeatureIndex if v.FeatureTag == "locl"] - assert len(feature) <= 1 - if feature: - feature = feature[0] - else: - if not synthFeature: - synthFeature = otTables.FeatureRecord() - synthFeature.FeatureTag = "locl" - f = synthFeature.Feature = otTables.Feature() - f.FeatureParams = None - f.LookupCount = 0 - f.LookupListIndex = [] - table.table.FeatureList.FeatureRecord.append(synthFeature) - table.table.FeatureList.FeatureCount += 1 - feature = synthFeature - langsys.FeatureIndex.append(feature) - langsys.FeatureIndex.sort(key=lambda v: v.FeatureTag) - - if not synthLookup: - subtable = otTables.SingleSubst() - subtable.mapping = dups - synthLookup = otTables.Lookup() - synthLookup.LookupFlag = 0 - synthLookup.LookupType = 1 - synthLookup.SubTableCount = 1 - synthLookup.SubTable = [subtable] - if table.table.LookupList is None: - # mtiLib uses None as default value for LookupList, - # while feaLib points to an empty array with count 0 - # TODO: make them do the same - table.table.LookupList = otTables.LookupList() - table.table.LookupList.Lookup = [] - table.table.LookupList.LookupCount = 0 - table.table.LookupList.Lookup.append(synthLookup) - table.table.LookupList.LookupCount += 1 - - if feature.Feature.LookupListIndex[:1] != [synthLookup]: - feature.Feature.LookupListIndex[:0] = [synthLookup] - feature.Feature.LookupCount += 1 - - DefaultTable.merge(self, m, tables) - return self - - -@add_method( - otTables.SingleSubst, - otTables.MultipleSubst, - otTables.AlternateSubst, - otTables.LigatureSubst, - otTables.ReverseChainSingleSubst, - otTables.SinglePos, - otTables.PairPos, - otTables.CursivePos, - otTables.MarkBasePos, - otTables.MarkLigPos, - otTables.MarkMarkPos, -) -def mapLookups(self, lookupMap): - pass - - -# Copied and trimmed down from subset.py -@add_method( - otTables.ContextSubst, - otTables.ChainContextSubst, - otTables.ContextPos, - otTables.ChainContextPos, -) -def __merge_classify_context(self): - class ContextHelper(object): - def __init__(self, klass, Format): - if klass.__name__.endswith("Subst"): - Typ = "Sub" - Type = "Subst" - else: - Typ = "Pos" - Type = "Pos" - if klass.__name__.startswith("Chain"): - Chain = "Chain" - else: - Chain = "" - ChainTyp = Chain + Typ - - self.Typ = Typ - self.Type = Type - self.Chain = Chain - self.ChainTyp = ChainTyp - - self.LookupRecord = Type + "LookupRecord" - - if Format == 1: - self.Rule = ChainTyp + "Rule" - self.RuleSet = ChainTyp 
+ "RuleSet" - elif Format == 2: - self.Rule = ChainTyp + "ClassRule" - self.RuleSet = ChainTyp + "ClassSet" - - if self.Format not in [1, 2, 3]: - return None # Don't shoot the messenger; let it go - if not hasattr(self.__class__, "_merge__ContextHelpers"): - self.__class__._merge__ContextHelpers = {} - if self.Format not in self.__class__._merge__ContextHelpers: - helper = ContextHelper(self.__class__, self.Format) - self.__class__._merge__ContextHelpers[self.Format] = helper - return self.__class__._merge__ContextHelpers[self.Format] - - -@add_method( - otTables.ContextSubst, - otTables.ChainContextSubst, - otTables.ContextPos, - otTables.ChainContextPos, -) -def mapLookups(self, lookupMap): - c = self.__merge_classify_context() - - if self.Format in [1, 2]: - for rs in getattr(self, c.RuleSet): - if not rs: - continue - for r in getattr(rs, c.Rule): - if not r: - continue - for ll in getattr(r, c.LookupRecord): - if not ll: - continue - ll.LookupListIndex = lookupMap[ll.LookupListIndex] - elif self.Format == 3: - for ll in getattr(self, c.LookupRecord): - if not ll: - continue - ll.LookupListIndex = lookupMap[ll.LookupListIndex] - else: - assert 0, "unknown format: %s" % self.Format - - -@add_method(otTables.ExtensionSubst, otTables.ExtensionPos) -def mapLookups(self, lookupMap): - if self.Format == 1: - self.ExtSubTable.mapLookups(lookupMap) - else: - assert 0, "unknown format: %s" % self.Format - - -@add_method(otTables.Lookup) -def mapLookups(self, lookupMap): - for st in self.SubTable: - if not st: - continue - st.mapLookups(lookupMap) - - -@add_method(otTables.LookupList) -def mapLookups(self, lookupMap): - for l in self.Lookup: - if not l: - continue - l.mapLookups(lookupMap) - - -@add_method(otTables.Lookup) -def mapMarkFilteringSets(self, markFilteringSetMap): - if self.LookupFlag & 0x0010: - self.MarkFilteringSet = markFilteringSetMap[self.MarkFilteringSet] - - -@add_method(otTables.LookupList) -def mapMarkFilteringSets(self, markFilteringSetMap): - for l in self.Lookup: - if not l: - continue - l.mapMarkFilteringSets(markFilteringSetMap) - - -@add_method(otTables.Feature) -def mapLookups(self, lookupMap): - self.LookupListIndex = [lookupMap[i] for i in self.LookupListIndex] - - -@add_method(otTables.FeatureList) -def mapLookups(self, lookupMap): - for f in self.FeatureRecord: - if not f or not f.Feature: - continue - f.Feature.mapLookups(lookupMap) - - -@add_method(otTables.DefaultLangSys, otTables.LangSys) -def mapFeatures(self, featureMap): - self.FeatureIndex = [featureMap[i] for i in self.FeatureIndex] - if self.ReqFeatureIndex != 65535: - self.ReqFeatureIndex = featureMap[self.ReqFeatureIndex] - - -@add_method(otTables.Script) -def mapFeatures(self, featureMap): - if self.DefaultLangSys: - self.DefaultLangSys.mapFeatures(featureMap) - for l in self.LangSysRecord: - if not l or not l.LangSys: - continue - l.LangSys.mapFeatures(featureMap) - - -@add_method(otTables.ScriptList) -def mapFeatures(self, featureMap): - for s in self.ScriptRecord: - if not s or not s.Script: - continue - s.Script.mapFeatures(featureMap) - - -def layoutPreMerge(font): - # Map indices to references - - GDEF = font.get("GDEF") - GSUB = font.get("GSUB") - GPOS = font.get("GPOS") - - for t in [GSUB, GPOS]: - if not t: - continue - - if t.table.LookupList: - lookupMap = {i: v for i, v in enumerate(t.table.LookupList.Lookup)} - t.table.LookupList.mapLookups(lookupMap) - t.table.FeatureList.mapLookups(lookupMap) - - if ( - GDEF - and GDEF.table.Version >= 0x00010002 - and GDEF.table.MarkGlyphSetsDef - ): 
- markFilteringSetMap = { - i: v for i, v in enumerate(GDEF.table.MarkGlyphSetsDef.Coverage) - } - t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap) - - if t.table.FeatureList and t.table.ScriptList: - featureMap = {i: v for i, v in enumerate(t.table.FeatureList.FeatureRecord)} - t.table.ScriptList.mapFeatures(featureMap) - - # TODO FeatureParams nameIDs - - -def layoutPostMerge(font): - # Map references back to indices - - GDEF = font.get("GDEF") - GSUB = font.get("GSUB") - GPOS = font.get("GPOS") - - for t in [GSUB, GPOS]: - if not t: - continue - - if t.table.FeatureList and t.table.ScriptList: - # Collect unregistered (new) features. - featureMap = GregariousIdentityDict(t.table.FeatureList.FeatureRecord) - t.table.ScriptList.mapFeatures(featureMap) - - # Record used features. - featureMap = AttendanceRecordingIdentityDict( - t.table.FeatureList.FeatureRecord - ) - t.table.ScriptList.mapFeatures(featureMap) - usedIndices = featureMap.s - - # Remove unused features - t.table.FeatureList.FeatureRecord = [ - f - for i, f in enumerate(t.table.FeatureList.FeatureRecord) - if i in usedIndices - ] - - # Map back to indices. - featureMap = NonhashableDict(t.table.FeatureList.FeatureRecord) - t.table.ScriptList.mapFeatures(featureMap) - - t.table.FeatureList.FeatureCount = len(t.table.FeatureList.FeatureRecord) - - if t.table.LookupList: - # Collect unregistered (new) lookups. - lookupMap = GregariousIdentityDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - - # Record used lookups. - lookupMap = AttendanceRecordingIdentityDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - usedIndices = lookupMap.s - - # Remove unused lookups - t.table.LookupList.Lookup = [ - l for i, l in enumerate(t.table.LookupList.Lookup) if i in usedIndices - ] - - # Map back to indices. 
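- # NonhashableDict assigns each surviving Lookup object its position in the merged list, turning the references from the pre-merge step back into numeric indices.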
- lookupMap = NonhashableDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - - t.table.LookupList.LookupCount = len(t.table.LookupList.Lookup) - - if GDEF and GDEF.table.Version >= 0x00010002: - markFilteringSetMap = NonhashableDict( - GDEF.table.MarkGlyphSetsDef.Coverage - ) - t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap) - - # TODO FeatureParams nameIDs diff --git a/spaces/Daextream/Whisper-Auto-Subtitled-Video-Generator/utils.py b/spaces/Daextream/Whisper-Auto-Subtitled-Video-Generator/utils.py deleted file mode 100644 index ae54176dab8e141ed806c9ac7cd088f2d274b26a..0000000000000000000000000000000000000000 --- a/spaces/Daextream/Whisper-Auto-Subtitled-Video-Generator/utils.py +++ /dev/null @@ -1,96 +0,0 @@ -import textwrap -import zlib -from typing import Iterator, TextIO - - -def exact_div(x, y): - assert x % y == 0 - return x // y - - -def str2bool(string): - str2val = {"True": True, "False": False} - if string in str2val: - return str2val[string] - else: - raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}") - - -def optional_int(string): - return None if string == "None" else int(string) - - -def optional_float(string): - return None if string == "None" else float(string) - - -def compression_ratio(text) -> float: - return len(text) / len(zlib.compress(text.encode("utf-8"))) - - -def format_timestamp(seconds: float, always_include_hours: bool = False, fractionalSeperator: str = '.'): - assert seconds >= 0, "non-negative timestamp expected" - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{fractionalSeperator}{milliseconds:03d}" - - -def write_txt(transcript: Iterator[dict], file: TextIO): - for segment in transcript: - print(segment['text'].strip(), file=file, flush=True) - - -def write_vtt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - print("WEBVTT\n", file=file) - for segment in transcript: - text = processText(segment['text'], maxLineWidth).replace('-->', '->') - - print( - f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - f"{text}\n", - file=file, - flush=True, - ) - - -def write_srt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - """ - Write a transcript to a file in SRT format. 
- Example usage: - from pathlib import Path - from whisper.utils import write_srt - result = transcribe(model, audio_path, temperature=temperature, **args) - # save SRT - audio_basename = Path(audio_path).stem - with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt: - write_srt(result["segments"], file=srt) - """ - for i, segment in enumerate(transcript, start=1): - text = processText(segment['text'].strip(), maxLineWidth).replace('-->', '->') - - # write srt lines - print( - f"{i}\n" - f"{format_timestamp(segment['start'], always_include_hours=True, fractionalSeperator=',')} --> " - f"{format_timestamp(segment['end'], always_include_hours=True, fractionalSeperator=',')}\n" - f"{text}\n", - file=file, - flush=True, - ) - -def processText(text: str, maxLineWidth=None): - if (maxLineWidth is None or maxLineWidth < 0): - return text - - lines = textwrap.wrap(text, width=maxLineWidth, tabsize=4) - return '\n'.join(lines) diff --git a/spaces/Danielzero/GPT3.5/modules/models.py b/spaces/Danielzero/GPT3.5/modules/models.py deleted file mode 100644 index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000 --- a/spaces/Danielzero/GPT3.5/modules/models.py +++ /dev/null @@ -1,625 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import platform -import base64 -from io import BytesIO -from PIL import Image - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum -import uuid - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy -from modules import config -from .base_model import BaseLLMModel, ModelType - - -class OpenAIClient(BaseLLMModel): - def __init__( - self, - model_name, - api_key, - system_prompt=INITIAL_SYSTEM_PROMPT, - temperature=1.0, - top_p=1.0, - ) -> None: - super().__init__( - model_name=model_name, - temperature=temperature, - top_p=top_p, - system_prompt=system_prompt, - ) - self.api_key = api_key - self.need_api_key = True - self._refresh_header() - - def get_answer_stream_iter(self): - response = self._get_response(stream=True) - if response is not None: - iter = self._decode_chat_response(response) - partial_text = "" - for i in iter: - partial_text += i - yield partial_text - else: - yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG - - def get_answer_at_once(self): - response = self._get_response() - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - total_token_count = response["usage"]["total_tokens"] - return content, total_token_count - - def count_token(self, user_input): - input_token_count = count_token(construct_user(user_input)) - if self.system_prompt is not None and len(self.all_token_counts) == 0: - system_prompt_token_count = count_token( - construct_system(self.system_prompt) - ) - return input_token_count + system_prompt_token_count - return input_token_count - - def billing_info(self): - try: - curr_time = datetime.datetime.now() - last_day_of_month = get_last_day_of_month( - curr_time).strftime("%Y-%m-%d") - first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = self._get_billing_data(usage_url) - except Exception as e: - 
logging.error(f"获取API使用情况失败:" + str(e)) - return i18n("**获取API使用情况失败**") - rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100) - return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}" - except requests.exceptions.ConnectTimeout: - status_text = ( - STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - ) - return status_text - except requests.exceptions.ReadTimeout: - status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - return status_text - except Exception as e: - import traceback - traceback.print_exc() - logging.error(i18n("获取API使用情况失败:") + str(e)) - return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG - - def set_token_upper_limit(self, new_upper_limit): - pass - - @shared.state.switching_api_key # 在不开启多账号模式的时候,这个装饰器不会起作用 - def _get_response(self, stream=False): - openai_api_key = self.api_key - system_prompt = self.system_prompt - history = self.history - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - if system_prompt is not None: - history = [construct_system(system_prompt), *history] - - payload = { - "model": self.model_name, - "messages": history, - "temperature": self.temperature, - "top_p": self.top_p, - "n": self.n_choices, - "stream": stream, - "presence_penalty": self.presence_penalty, - "frequency_penalty": self.frequency_penalty, - } - - if self.max_generation_token is not None: - payload["max_tokens"] = self.max_generation_token - if self.stop_sequence is not None: - payload["stop"] = self.stop_sequence - if self.logit_bias is not None: - payload["logit_bias"] = self.logit_bias - if self.user_identifier is not None: - payload["user"] = self.user_identifier - - if stream: - timeout = TIMEOUT_STREAMING - else: - timeout = TIMEOUT_ALL - - # 如果有自定义的api-host,使用自定义host发送请求,否则使用默认设置发送请求 - if shared.state.completion_url != COMPLETION_URL: - logging.info(f"使用自定义API URL: {shared.state.completion_url}") - - with retrieve_proxy(): - try: - response = requests.post( - shared.state.completion_url, - headers=headers, - json=payload, - stream=stream, - timeout=timeout, - ) - except: - return None - return response - - def _refresh_header(self): - self.headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {self.api_key}", - } - - def _get_billing_data(self, billing_url): - with retrieve_proxy(): - response = requests.get( - billing_url, - headers=self.headers, - timeout=TIMEOUT_ALL, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception( - f"API request failed with status code {response.status_code}: {response.text}" - ) - - def _decode_chat_response(self, response): - error_msg = "" - for chunk in response.iter_lines(): - if chunk: - chunk = chunk.decode() - chunk_length = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}") - error_msg += chunk - continue - if chunk_length > 6 and "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - break - try: - yield chunk["choices"][0]["delta"]["content"] - except Exception as e: - # logging.error(f"Error: {e}") - continue - if error_msg: - raise Exception(error_msg) - - def set_key(self, new_access_key): - ret = super().set_key(new_access_key) - self._refresh_header() - return ret - - -class ChatGLM_Client(BaseLLMModel): - def __init__(self, model_name) -> None: - 
super().__init__(model_name=model_name) - from transformers import AutoTokenizer, AutoModel - import torch - global CHATGLM_TOKENIZER, CHATGLM_MODEL - if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None: - system_name = platform.system() - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"THUDM/{model_name}" - CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained( - model_source, trust_remote_code=True - ) - quantified = False - if "int4" in model_name: - quantified = True - model = AutoModel.from_pretrained( - model_source, trust_remote_code=True - ) - if torch.cuda.is_available(): - # run on CUDA - logging.info("CUDA is available, using CUDA") - model = model.half().cuda() - # mps加速还存在一些问题,暂时不使用 - elif system_name == "Darwin" and model_path is not None and not quantified: - logging.info("Running on macOS, using MPS") - # running on macOS and model already downloaded - model = model.half().to("mps") - else: - logging.info("GPU is not available, using CPU") - model = model.float() - model = model.eval() - CHATGLM_MODEL = model - - def _get_glm_style_input(self): - history = [x["content"] for x in self.history] - query = history.pop() - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - assert ( - len(history) % 2 == 0 - ), f"History should be even length. current history is: {history}" - history = [[history[i], history[i + 1]] - for i in range(0, len(history), 2)] - return history, query - - def get_answer_at_once(self): - history, query = self._get_glm_style_input() - response, _ = CHATGLM_MODEL.chat( - CHATGLM_TOKENIZER, query, history=history) - return response, len(response) - - def get_answer_stream_iter(self): - history, query = self._get_glm_style_input() - for response, history in CHATGLM_MODEL.stream_chat( - CHATGLM_TOKENIZER, - query, - history, - max_length=self.token_upper_limit, - top_p=self.top_p, - temperature=self.temperature, - ): - yield response - - -class LLaMA_Client(BaseLLMModel): - def __init__( - self, - model_name, - lora_path=None, - ) -> None: - super().__init__(model_name=model_name) - from lmflow.datasets.dataset import Dataset - from lmflow.pipeline.auto_pipeline import AutoPipeline - from lmflow.models.auto_model import AutoModel - from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments - - self.max_generation_token = 1000 - self.end_string = "\n\n" - # We don't need input data - data_args = DatasetArguments(dataset_path=None) - self.dataset = Dataset(data_args) - self.system_prompt = "" - - global LLAMA_MODEL, LLAMA_INFERENCER - if LLAMA_MODEL is None or LLAMA_INFERENCER is None: - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"decapoda-research/{model_name}" - # raise Exception(f"models目录下没有这个模型: {model_name}") - if lora_path is not None: - lora_path = f"lora/{lora_path}" - model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None, - use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True) - 
pipeline_args = InferencerArguments( - local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16') - - with open(pipeline_args.deepspeed, "r") as f: - ds_config = json.load(f) - LLAMA_MODEL = AutoModel.get_model( - model_args, - tune_strategy="none", - ds_config=ds_config, - ) - LLAMA_INFERENCER = AutoPipeline.get_pipeline( - pipeline_name="inferencer", - model_args=model_args, - data_args=data_args, - pipeline_args=pipeline_args, - ) - - def _get_llama_style_input(self): - history = [] - instruction = "" - if self.system_prompt: - instruction = (f"Instruction: {self.system_prompt}\n") - for x in self.history: - if x["role"] == "user": - history.append(f"{instruction}Input: {x['content']}") - else: - history.append(f"Output: {x['content']}") - context = "\n\n".join(history) - context += "\n\nOutput: " - return context - - def get_answer_at_once(self): - context = self._get_llama_style_input() - - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [{"text": context}]} - ) - - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=self.max_generation_token, - temperature=self.temperature, - ) - - response = output_dataset.to_dict()["instances"][0]["text"] - return response, len(response) - - def get_answer_stream_iter(self): - context = self._get_llama_style_input() - partial_text = "" - step = 1 - for _ in range(0, self.max_generation_token, step): - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [ - {"text": context + partial_text}]} - ) - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=step, - temperature=self.temperature, - ) - response = output_dataset.to_dict()["instances"][0]["text"] - if response == "" or response == self.end_string: - break - partial_text += response - yield partial_text - - -class XMChat(BaseLLMModel): - def __init__(self, api_key): - super().__init__(model_name="xmchat") - self.api_key = api_key - self.session_id = None - self.reset() - self.image_bytes = None - self.image_path = None - self.xm_history = [] - self.url = "https://xmbot.net/web" - self.last_conv_id = None - - def reset(self): - self.session_id = str(uuid.uuid4()) - self.last_conv_id = None - return [], "已重置" - - def image_to_base64(self, image_path): - # 打开并加载图片 - img = Image.open(image_path) - - # 获取图片的宽度和高度 - width, height = img.size - - # 计算压缩比例,以确保最长边小于4096像素 - max_dimension = 2048 - scale_ratio = min(max_dimension / width, max_dimension / height) - - if scale_ratio < 1: - # 按压缩比例调整图片大小 - new_width = int(width * scale_ratio) - new_height = int(height * scale_ratio) - img = img.resize((new_width, new_height), Image.ANTIALIAS) - - # 将图片转换为jpg格式的二进制数据 - buffer = BytesIO() - if img.mode == "RGBA": - img = img.convert("RGB") - img.save(buffer, format='JPEG') - binary_image = buffer.getvalue() - - # 对二进制数据进行Base64编码 - base64_image = base64.b64encode(binary_image).decode('utf-8') - - return base64_image - - def try_read_image(self, filepath): - def is_image_file(filepath): - # 判断文件是否为图片 - valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"] - file_extension = os.path.splitext(filepath)[1].lower() - return file_extension in valid_image_extensions - - if is_image_file(filepath): - logging.info(f"读取图片文件: {filepath}") - self.image_bytes = self.image_to_base64(filepath) - self.image_path = filepath - else: - self.image_bytes = None - self.image_path = None - - def like(self): - if 
self.last_conv_id is None: - return "点赞失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "good" - } - response = requests.post(self.url, json=data) - return "👍点赞成功,,感谢反馈~" - - def dislike(self): - if self.last_conv_id is None: - return "点踩失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "bad" - } - response = requests.post(self.url, json=data) - return "👎点踩成功,感谢反馈~" - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = real_inputs - display_append = "" - limited_context = False - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - if files: - for file in files: - if file.name: - logging.info(f"尝试读取图像: {file.name}") - self.try_read_image(file.name) - if self.image_path is not None: - chatbot = chatbot + [((self.image_path,), None)] - if self.image_bytes is not None: - logging.info("使用图片作为输入") - # XMChat的一轮对话中实际上只能处理一张图片 - self.reset() - conv_id = str(uuid.uuid4()) - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "imgbase64", - "data": self.image_bytes - } - response = requests.post(self.url, json=data) - response = json.loads(response.text) - logging.info(f"图片回复: {response['data']}") - return None, chatbot, None - - def get_answer_at_once(self): - question = self.history[-1]["content"] - conv_id = str(uuid.uuid4()) - self.last_conv_id = conv_id - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "text", - "data": question - } - response = requests.post(self.url, json=data) - try: - response = json.loads(response.text) - return response["data"], len(response["data"]) - except Exception as e: - return response.text, len(response.text) - - - - -def get_model( - model_name, - lora_model_path=None, - access_key=None, - temperature=None, - top_p=None, - system_prompt=None, -) -> BaseLLMModel: - msg = i18n("模型设置为了:") + f" {model_name}" - model_type = ModelType.get_type(model_name) - lora_selector_visibility = False - lora_choices = [] - dont_change_lora_selector = False - if model_type != ModelType.OpenAI: - config.local_embedding = True - # del current_model.model - model = None - try: - if model_type == ModelType.OpenAI: - logging.info(f"正在加载OpenAI模型: {model_name}") - model = OpenAIClient( - model_name=model_name, - api_key=access_key, - system_prompt=system_prompt, - temperature=temperature, - top_p=top_p, - ) - elif model_type == ModelType.ChatGLM: - logging.info(f"正在加载ChatGLM模型: {model_name}") - model = ChatGLM_Client(model_name) - elif model_type == ModelType.LLaMA and lora_model_path == "": - msg = f"现在请为 {model_name} 选择LoRA模型" - logging.info(msg) - lora_selector_visibility = True - if os.path.isdir("lora"): - lora_choices = get_file_names( - "lora", plain=True, filetypes=[""]) - lora_choices = ["No LoRA"] + lora_choices - elif model_type == ModelType.LLaMA and lora_model_path != "": - logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}") - dont_change_lora_selector = True - if lora_model_path == "No LoRA": - lora_model_path = None - msg += " + No LoRA" - else: - msg += f" + {lora_model_path}" - model = LLaMA_Client(model_name, lora_model_path) - elif model_type == ModelType.XMChat: - if os.environ.get("XMCHAT_API_KEY") != "": - access_key = os.environ.get("XMCHAT_API_KEY") - model = XMChat(api_key=access_key) - elif model_type == 
ModelType.Unknown: - raise ValueError(f"未知模型: {model_name}") - logging.info(msg) - except Exception as e: - logging.error(e) - msg = f"{STANDARD_ERROR_MSG}: {e}" - if dont_change_lora_selector: - return model, msg - else: - return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility) - - -if __name__ == "__main__": - with open("config.json", "r") as f: - openai_api_key = cjson.load(f)["openai_api_key"] - # set logging level to debug - logging.basicConfig(level=logging.DEBUG) - # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key) - client = get_model(model_name="chatglm-6b-int4") - chatbot = [] - stream = False - # 测试账单功能 - logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET) - logging.info(client.billing_info()) - # 测试问答 - logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET) - question = "巴黎是中国的首都吗?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试问答后history : {client.history}") - # 测试记忆力 - logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET) - question = "我刚刚问了你什么问题?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试记忆力后history : {client.history}") - # 测试重试功能 - logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET) - for i in client.retry(chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"重试后history : {client.history}") - # # 测试总结功能 - # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET) - # chatbot, msg = client.reduce_token_size(chatbot=chatbot) - # print(chatbot, msg) - # print(f"总结后history: {client.history}") diff --git a/spaces/Datasculptor/ImageGPT/app.py b/spaces/Datasculptor/ImageGPT/app.py deleted file mode 100644 index 0b1905398a64977fc75afbec422805f6e6c5cc63..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/ImageGPT/app.py +++ /dev/null @@ -1,253 +0,0 @@ -from pyChatGPT import ChatGPT -import gradio as gr -import os, sys, json -from loguru import logger -import paddlehub as hub -import random - -language_translation_model = hub.Module(directory=f'./baidu_translate') -def getTextTrans(text, source='zh', target='en'): - try: - text_translation = language_translation_model.translate(text, source, target) - return text_translation - except Exception as e: - return text - -session_token = os.environ.get('SessionToken') -# logger.info(f"session_token_: {session_token}") - -def get_response_from_chatbot(text): - try: - api = ChatGPT(session_token) - resp = api.send_message(text) - api.refresh_auth() - api.reset_conversation() - response = resp['message'] - # logger.info(f"response_: {response}") - except: - response = "Sorry, I'm busy. Try again later." 
- return response
-
-model_ids = {
- # "models/stabilityai/stable-diffusion-2-1":"sd-v2-1",
- # "models/stabilityai/stable-diffusion-2":"sd-v2-0",
- # "models/runwayml/stable-diffusion-v1-5":"sd-v1-5",
- # "models/CompVis/stable-diffusion-v1-4":"sd-v1-4",
- "models/prompthero/openjourney":"openjourney",
- # "models/ShadoWxShinigamI/Midjourney-Rangoli":"midjourney",
- # "models/hakurei/waifu-diffusion":"waifu-diffusion",
- # "models/Linaqruf/anything-v3.0":"anything-v3.0",
- }
-
-tab_actions = []
-tab_titles = []
-for model_id in model_ids.keys():
- print(model_id, model_ids[model_id])
- try:
- tab = gr.Interface.load(model_id)
- tab_actions.append(tab)
- tab_titles.append(model_ids[model_id])
- except:
- logger.info(f"load_fail__{model_id}_")
-
-def chat(input0, input1, chat_radio, chat_history):
- out_chat = []
- if chat_history != '':
- out_chat = json.loads(chat_history)
- logger.info(f"out_chat_: {len(out_chat)} / {chat_radio}")
- if chat_radio == "Talk to chatGPT":
- response = get_response_from_chatbot(input0)
- out_chat.append((input0, response))
- chat_history = json.dumps(out_chat)
- return out_chat, input1, chat_history
- else:
- prompt_en = getTextTrans(input0, source='zh', target='en') + f',{random.randint(0,sys.maxsize)}'
- return out_chat, prompt_en, chat_history
-
-
-start_work = """async() => {
- function isMobile() {
- try {
- document.createEvent("TouchEvent"); return true;
- } catch(e) {
- return false;
- }
- }
- function getClientHeight()
- {
- var clientHeight=0;
- if(document.body.clientHeight&&document.documentElement.clientHeight) {
- var clientHeight = (document.body.clientHeight<document.documentElement.clientHeight)?document.body.clientHeight:document.documentElement.clientHeight;
- }
- return clientHeight;
- }
-
- function setNativeValue(element, value) {
- const valueSetter = Object.getOwnPropertyDescriptor(element.__proto__, 'value').set;
- const prototype = Object.getPrototypeOf(element);
- const prototypeValueSetter = Object.getOwnPropertyDescriptor(prototype, 'value').set;
-
- if (valueSetter && valueSetter !== prototypeValueSetter) {
- prototypeValueSetter.call(element, value);
- } else {
- valueSetter.call(element, value);
- }
- }
- var gradioEl = document.querySelector('body > gradio-app').shadowRoot;
- if (!gradioEl) {
- gradioEl = document.querySelector('body > gradio-app');
- }
-
- if (typeof window['gradioEl'] === 'undefined') {
- window['gradioEl'] = gradioEl;
-
- const page1 = window['gradioEl'].querySelectorAll('#page_1')[0];
- const page2 = window['gradioEl'].querySelectorAll('#page_2')[0];
-
- page1.style.display = "none";
- page2.style.display = "block";
-
- window['div_count'] = 0;
- window['chat_bot'] = window['gradioEl'].querySelectorAll('#chat_bot')[0];
- window['chat_bot1'] = window['gradioEl'].querySelectorAll('#chat_bot1')[0];
- chat_row = window['gradioEl'].querySelectorAll('#chat_row')[0];
- prompt_row = window['gradioEl'].querySelectorAll('#prompt_row')[0];
- window['chat_bot1'].children[1].textContent = '';
-
- clientHeight = getClientHeight();
- if (isMobile()) {
- output_htmls = window['gradioEl'].querySelectorAll('.output-html');
- for (var i = 0; i < output_htmls.length; i++) {
- output_htmls[i].style.display = "none";
- }
- new_height = (clientHeight - 250) + 'px';
- } else {
- new_height = (clientHeight - 350) + 'px';
- }
- chat_row.style.height = new_height;
- window['chat_bot'].style.height = new_height;
- window['chat_bot'].children[2].style.height = new_height;
- window['chat_bot1'].style.height = new_height;
- window['chat_bot1'].children[2].style.height = new_height;
- prompt_row.children[0].style.flex = 'auto';
- prompt_row.children[0].style.width = '100%';
- window['gradioEl'].querySelectorAll('#chat_radio')[0].style.flex = 'auto';
- window['gradioEl'].querySelectorAll('#chat_radio')[0].style.width = '100%';
- prompt_row.children[0].setAttribute('style','flex-direction: inherit; flex: 1 1 auto; width: 100%;border-color: green;border-width: 1px !important;')
- window['chat_bot1'].children[1].setAttribute('style', 'border-bottom-right-radius:0;top:unset;bottom:0;padding-left:0.1rem;');
-
- window['prevPrompt'] = '';
- window['doCheckPrompt'] = 0;
- window['prevImgSrc'] = '';
- window['checkChange'] = function checkChange() {
- try {
- if (window['gradioEl'].querySelectorAll('.gr-radio')[0].checked) {
- if (window['chat_bot'].children[2].children[0].children.length > window['div_count']) {
- new_len = window['chat_bot'].children[2].children[0].children.length - window['div_count'];
- for (var i = 0; i < new_len; i++) {
- new_div = window['chat_bot'].children[2].children[0].children[window['div_count'] + i].cloneNode(true);
- window['chat_bot1'].children[2].children[0].appendChild(new_div);
- }
- window['div_count'] = chat_bot.children[2].children[0].children.length;
- window['chat_bot1'].children[2].scrollTop = window['chat_bot1'].children[2].scrollHeight;
- }
- if (window['chat_bot'].children[0].children.length > 1) {
- window['chat_bot1'].children[1].textContent = window['chat_bot'].children[0].children[1].textContent;
- } else {
- window['chat_bot1'].children[1].textContent = '';
- }
- } else {
- texts = window['gradioEl'].querySelectorAll('textarea');
- text0 = texts[0];
- text1 = texts[1];
- img_index = 0;
- if (window['doCheckPrompt'] === 0 && window['prevPrompt'] !== text1.value) {
- console.log('_____new prompt___[' + text1.value + ']_');
- window['doCheckPrompt'] = 1;
- window['prevPrompt'] = text1.value;
- for (var i = 3; i < texts.length; i++) {
- setNativeValue(texts[i], text1.value);
- texts[i].dispatchEvent(new Event('input', { bubbles: true }));
- }
- setTimeout(function() {
- img_submit_btns = window['gradioEl'].querySelectorAll('#tab_img')[0].querySelectorAll("button");
- for (var i = 0; i < img_submit_btns.length; i++) {
- if (img_submit_btns[i].innerText == 'Submit') {
- img_submit_btns[i].click();
- }
- }
- window['doCheckPrompt'] = 0;
- }, 10);
- }
- tabitems = window['gradioEl'].querySelectorAll('.tabitem');
- imgs = tabitems[img_index].children[0].children[1].children[1].children[0].querySelectorAll("img");
- if (imgs.length > 0) {
- if (window['prevImgSrc'] !== imgs[0].src) {
- var user_div = document.createElement("div");
- user_div.className = "px-3 py-2 rounded-[22px] rounded-br-none text-white text-sm chat-message svelte-rct66g";
- user_div.style.backgroundColor = "#16a34a";
- user_div.innerHTML = "" + text0.value + "";
- window['chat_bot1'].children[2].children[0].appendChild(user_div);
-
- var bot_div = document.createElement("div");
- bot_div.className = "px-3 py-2 rounded-[22px] rounded-bl-none place-self-start text-white text-sm chat-message svelte-rct66g";
- bot_div.style.backgroundColor = "#2563eb";
- bot_div.style.width = "80%";
- bot_div.style.padding = "0.2rem";
- bot_div.appendChild(imgs[0].cloneNode(true));
- window['chat_bot1'].children[2].children[0].appendChild(bot_div);
-
- window['chat_bot1'].children[2].scrollTop = window['chat_bot1'].children[2].scrollHeight;
- window['prevImgSrc'] = imgs[0].src;
- }
- }
- if (tabitems[img_index].children[0].children[1].children[1].children[0].children[0].children.length > 1) {
- window['chat_bot1'].children[1].textContent = tabitems[img_index].children[0].children[1].children[1].children[0].children[0].children[1].textContent;
- } else {
- window['chat_bot1'].children[1].textContent = '';
- }
- }
-
- } catch(e) {
- }
- }
- window['checkChange_interval'] = window.setInterval("window.checkChange()", 500);
- }
-
- return false;
-}"""
-
-
-with gr.Blocks(title='Talk to chatGPT') as demo:
- gr.HTML("You can duplicate this space and use your own session token: Duplicate Space")
- gr.HTML("
      Instructions on how to get the session token can be seen in the video here. Add your session token by going to settings and add it under secrets.
      ") - with gr.Group(elem_id="page_1", visible=True) as page_1: - with gr.Box(): - with gr.Row(): - start_button = gr.Button("Let's talk to chatGPT!", elem_id="start-btn", visible=True) - start_button.click(fn=None, inputs=[], outputs=[], _js=start_work) - - with gr.Group(elem_id="page_2", visible=False) as page_2: - with gr.Row(elem_id="chat_row"): - chatbot = gr.Chatbot(elem_id="chat_bot", visible=False).style(color_map=("green", "blue")) - chatbot1 = gr.Chatbot(elem_id="chat_bot1").style(color_map=("green", "blue")) - with gr.Row(elem_id="prompt_row"): - prompt_input0 = gr.Textbox(lines=2, label="prompt",show_label=False) - prompt_input1 = gr.Textbox(lines=4, label="prompt", visible=False) - chat_history = gr.Textbox(lines=4, label="prompt", visible=False) - chat_radio = gr.Radio(["Talk to chatGPT", "Text to Image"], elem_id="chat_radio",value="Talk to chatGPT", show_label=False) - submit_btn = gr.Button(value = "submit",elem_id="submit-btn").style( - margin=True, - rounded=(True, True, True, True), - width=100 - ) - submit_btn.click(fn=chat, - inputs=[prompt_input0, prompt_input1, chat_radio, chat_history], - outputs=[chatbot, prompt_input1, chat_history], - ) - with gr.Row(elem_id='tab_img', visible=False).style(height=5): - tab_img = gr.TabbedInterface(tab_actions, tab_titles) - -demo.launch(debug = True) \ No newline at end of file diff --git a/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/app_inference.py b/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/app_inference.py deleted file mode 100644 index a9969e649ca321a5246130d7d560ac3c431a12f2..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/app_inference.py +++ /dev/null @@ -1,176 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import enum - -import gradio as gr -from huggingface_hub import HfApi - -from inference import InferencePipeline -from utils import find_exp_dirs - -SAMPLE_MODEL_IDS = [ - 'patrickvonplaten/lora_dreambooth_dog_example', - 'sayakpaul/sd-model-finetuned-lora-t4', -] - - -class ModelSource(enum.Enum): - SAMPLE = 'Sample' - HUB_LIB = 'Hub (lora-library)' - LOCAL = 'Local' - - -class InferenceUtil: - def __init__(self, hf_token: str | None): - self.hf_token = hf_token - - @staticmethod - def load_sample_lora_model_list(): - return gr.update(choices=SAMPLE_MODEL_IDS, value=SAMPLE_MODEL_IDS[0]) - - def load_hub_lora_model_list(self) -> dict: - api = HfApi(token=self.hf_token) - choices = [ - info.modelId for info in api.list_models(author='lora-library') - ] - return gr.update(choices=choices, - value=choices[0] if choices else None) - - @staticmethod - def load_local_lora_model_list() -> dict: - choices = find_exp_dirs() - return gr.update(choices=choices, - value=choices[0] if choices else None) - - def reload_lora_model_list(self, model_source: str) -> dict: - if model_source == ModelSource.SAMPLE.value: - return self.load_sample_lora_model_list() - elif model_source == ModelSource.HUB_LIB.value: - return self.load_hub_lora_model_list() - elif model_source == ModelSource.LOCAL.value: - return self.load_local_lora_model_list() - else: - raise ValueError - - def load_model_info(self, lora_model_id: str) -> tuple[str, str]: - try: - card = InferencePipeline.get_model_card(lora_model_id, - self.hf_token) - except Exception: - return '', '' - base_model = getattr(card.data, 'base_model', '') - instance_prompt = getattr(card.data, 'instance_prompt', '') - return base_model, instance_prompt - - def reload_lora_model_list_and_update_model_info( 
- self, model_source: str) -> tuple[dict, str, str]: - model_list_update = self.reload_lora_model_list(model_source) - model_list = model_list_update['choices'] - model_info = self.load_model_info(model_list[0] if model_list else '') - return model_list_update, *model_info - - -def create_inference_demo(pipe: InferencePipeline, - hf_token: str | None = None) -> gr.Blocks: - app = InferenceUtil(hf_token) - - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Box(): - model_source = gr.Radio( - label='Model Source', - choices=[_.value for _ in ModelSource], - value=ModelSource.SAMPLE.value) - reload_button = gr.Button('Reload Model List') - lora_model_id = gr.Dropdown(label='LoRA Model ID', - choices=SAMPLE_MODEL_IDS, - value=SAMPLE_MODEL_IDS[0]) - with gr.Accordion( - label= - 'Model info (Base model and instance prompt used for training)', - open=False): - with gr.Row(): - base_model_used_for_training = gr.Text( - label='Base model', interactive=False) - instance_prompt_used_for_training = gr.Text( - label='Instance prompt', interactive=False) - prompt = gr.Textbox( - label='Prompt', - max_lines=1, - placeholder='Example: "A picture of a sks dog in a bucket"' - ) - alpha = gr.Slider(label='LoRA alpha', - minimum=0, - maximum=2, - step=0.05, - value=1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - value=0) - with gr.Accordion('Other Parameters', open=False): - num_steps = gr.Slider(label='Number of Steps', - minimum=0, - maximum=100, - step=1, - value=25) - guidance_scale = gr.Slider(label='CFG Scale', - minimum=0, - maximum=50, - step=0.1, - value=7.5) - - run_button = gr.Button('Generate') - - gr.Markdown(''' - - After training, you can press "Reload Model List" button to load your trained model names. 
- ''') - with gr.Column(): - result = gr.Image(label='Result') - - model_source.change( - fn=app.reload_lora_model_list_and_update_model_info, - inputs=model_source, - outputs=[ - lora_model_id, - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - reload_button.click( - fn=app.reload_lora_model_list_and_update_model_info, - inputs=model_source, - outputs=[ - lora_model_id, - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - lora_model_id.change(fn=app.load_model_info, - inputs=lora_model_id, - outputs=[ - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - inputs = [ - lora_model_id, - prompt, - alpha, - seed, - num_steps, - guidance_scale, - ] - prompt.submit(fn=pipe.run, inputs=inputs, outputs=result) - run_button.click(fn=pipe.run, inputs=inputs, outputs=result) - return demo - - -if __name__ == '__main__': - import os - - hf_token = os.getenv('HF_TOKEN') - pipe = InferencePipeline(hf_token) - demo = create_inference_demo(pipe, hf_token) - demo.queue(max_size=10).launch(share=False) diff --git a/spaces/Dimalker/Faceswapper/roop/core.py b/spaces/Dimalker/Faceswapper/roop/core.py deleted file mode 100644 index aeb4c2a370942266f46c60938f8bc425460519f6..0000000000000000000000000000000000000000 --- a/spaces/Dimalker/Faceswapper/roop/core.py +++ /dev/null @@ -1,216 +0,0 @@ -#!/usr/bin/env python3 - -import os -import sys -# os.environ["CUDA_VISIBLE_DEVICES"] = "" -# single thread doubles cuda performance - needs to be set before torch import -if any(arg.startswith('--execution-provider') for arg in sys.argv): - os.environ['OMP_NUM_THREADS'] = '1' -# reduce tensorflow log level -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' -import warnings -from typing import List -import platform -import signal -import shutil -import argparse -import torch -import onnxruntime -import tensorflow - -import roop.globals -import roop.metadata -import roop.ui as ui -from roop.predicter import predict_image, predict_video -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import has_image_extension, is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clean_temp, normalize_output_path - -if 'ROCMExecutionProvider' in roop.globals.execution_providers: - del torch - -warnings.filterwarnings('ignore', category=FutureWarning, module='insightface') -warnings.filterwarnings('ignore', category=UserWarning, module='torchvision') - - -def parse_args() -> None: - signal.signal(signal.SIGINT, lambda signal_number, frame: destroy()) - program = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100)) - program.add_argument('-s', '--source', help='select an source image', dest='source_path') - program.add_argument('-t', '--target', help='select an target image or video', dest='target_path') - program.add_argument('-o', '--output', help='select output file or directory', dest='output_path') - program.add_argument('--frame-processor', help='frame processors (choices: face_swapper, face_enhancer, ...)', dest='frame_processor', default=['face_swapper'], nargs='+') - program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=True) - program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True) - program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', 
default=False) - program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False) - program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx265', choices=['libx264', 'libx265', 'libvpx-vp9']) - program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=3, choices=range(52), metavar='[0-51]') - program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory()) - program.add_argument('--execution-provider', help='available execution provider (choices: cpu, ...)', dest='execution_provider', default=['cpu'], choices=suggest_execution_providers(), nargs='+') - program.add_argument('--execution-threads', help='number of execution threads', dest='execution_threads', type=int, default=suggest_execution_threads()) - program.add_argument('-v', '--version', action='version', version=f'{roop.metadata.name} {roop.metadata.version}') - - args = program.parse_args() - - roop.globals.source_path = args.source_path - roop.globals.target_path = args.target_path - roop.globals.output_path = normalize_output_path(roop.globals.source_path, roop.globals.target_path, args.output_path) - roop.globals.frame_processors = args.frame_processor - roop.globals.headless = args.source_path or args.target_path or args.output_path - roop.globals.keep_fps = args.keep_fps - roop.globals.keep_audio = args.keep_audio - roop.globals.keep_frames = args.keep_frames - roop.globals.many_faces = args.many_faces - roop.globals.video_encoder = args.video_encoder - roop.globals.video_quality = args.video_quality - roop.globals.max_memory = args.max_memory - roop.globals.execution_providers = decode_execution_providers(args.execution_provider) - roop.globals.execution_threads = args.execution_threads - - -def encode_execution_providers(execution_providers: List[str]) -> List[str]: - return [execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers] - - -def decode_execution_providers(execution_providers: List[str]) -> List[str]: - return [provider for provider, encoded_execution_provider in zip(onnxruntime.get_available_providers(), encode_execution_providers(onnxruntime.get_available_providers())) - if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers)] - - -def suggest_max_memory() -> int: - if platform.system().lower() == 'darwin': - return 4 - return 16 - - -def suggest_execution_providers() -> List[str]: - return encode_execution_providers(onnxruntime.get_available_providers()) - - -def suggest_execution_threads() -> int: - if 'DmlExecutionProvider' in roop.globals.execution_providers: - return 1 - if 'ROCMExecutionProvider' in roop.globals.execution_providers: - return 1 - return 8 - - -def limit_resources() -> None: - # prevent tensorflow memory leak - gpus = tensorflow.config.experimental.list_physical_devices('GPU') - for gpu in gpus: - tensorflow.config.experimental.set_virtual_device_configuration(gpu, [ - tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit=1024) - ]) - # limit memory usage - if roop.globals.max_memory: - memory = roop.globals.max_memory * 1024 ** 3 - if platform.system().lower() == 'darwin': - memory = roop.globals.max_memory * 1024 ** 6 - if platform.system().lower() == 'windows': - import ctypes - kernel32 = ctypes.windll.kernel32 - 
kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory)) - else: - import resource - resource.setrlimit(resource.RLIMIT_DATA, (memory, memory)) - - -def release_resources() -> None: - if 'CUDAExecutionProvider' in roop.globals.execution_providers: - torch.cuda.empty_cache() - - -def pre_check() -> bool: - if sys.version_info < (3, 9): - update_status('Python version is not supported - please upgrade to 3.9 or higher.') - return False - if not shutil.which('ffmpeg'): - update_status('ffmpeg is not installed.') - return False - return True - - -def update_status(message: str, scope: str = 'ROOP.CORE') -> None: - print(f'[{scope}] {message}') - if not roop.globals.headless: - ui.update_status(message) - - -def start() -> None: - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_start(): - return - # process image to image - if has_image_extension(roop.globals.target_path): - if predict_image(roop.globals.target_path): - destroy() - shutil.copy2(roop.globals.target_path, roop.globals.output_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Progressing...', frame_processor.NAME) - frame_processor.process_image(roop.globals.source_path, roop.globals.output_path, roop.globals.output_path) - frame_processor.post_process() - release_resources() - if is_image(roop.globals.target_path): - update_status('Processing to image succeed!') - else: - update_status('Processing to image failed!') - return - # process image to videos - if predict_video(roop.globals.target_path): - destroy() - update_status('Creating temp resources...') - create_temp(roop.globals.target_path) - update_status('Extracting frames...') - extract_frames(roop.globals.target_path) - temp_frame_paths = get_temp_frame_paths(roop.globals.target_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Progressing...', frame_processor.NAME) - frame_processor.process_video(roop.globals.source_path, temp_frame_paths) - frame_processor.post_process() - release_resources() - # handles fps - if roop.globals.keep_fps: - update_status('Detecting fps...') - fps = detect_fps(roop.globals.target_path) - update_status(f'Creating video with {fps} fps...') - create_video(roop.globals.target_path, fps) - else: - update_status('Creating video with 30.0 fps...') - create_video(roop.globals.target_path) - # handle audio - if roop.globals.keep_audio: - if roop.globals.keep_fps: - update_status('Restoring audio...') - else: - update_status('Restoring audio might cause issues as fps are not kept...') - restore_audio(roop.globals.target_path, roop.globals.output_path) - else: - move_temp(roop.globals.target_path, roop.globals.output_path) - # clean and validate - clean_temp(roop.globals.target_path) - if is_video(roop.globals.target_path): - update_status('Processing to video succeed!') - else: - update_status('Processing to video failed!') - - -def destroy() -> None: - if roop.globals.target_path: - clean_temp(roop.globals.target_path) - quit() - - -def run() -> None: - parse_args() - if not pre_check(): - return - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_check(): - return - limit_resources() - if roop.globals.headless: - start() - else: - window = ui.init(start, destroy) - window.mainloop() diff --git a/spaces/EronSamez/RVC_HFmeu/julius/lowpass.py 
b/spaces/EronSamez/RVC_HFmeu/julius/lowpass.py
deleted file mode 100644
index 0eb46e382b20bfc2d93482f9f027986b863de6f0..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/julius/lowpass.py
+++ /dev/null
@@ -1,181 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-"""
-FIR windowed sinc lowpass filters.
-"""
-
-import math
-from typing import Sequence, Optional
-
-import torch
-from torch.nn import functional as F
-
-from .core import sinc
-from .fftconv import fft_conv1d
-from .utils import simple_repr
-
-
-class LowPassFilters(torch.nn.Module):
- """
- Bank of low pass filters. Note that a high pass or band pass filter can easily
- be implemented by subtracting the same signal processed with low pass filters with different
- frequencies (see `julius.bands.SplitBands` for instance).
- This uses a windowed sinc filter, very similar to the one used in
- `julius.resample`. However, because we do not change the sample rate here,
- this filter can be much more efficiently implemented using the FFT convolution from
- `julius.fftconv`.
-
- Args:
- cutoffs (list[float]): list of cutoff frequencies, in [0, 0.5] expressed as `f/f_s` where
- f_s is the samplerate and `f` is the cutoff frequency.
- The upper limit is 0.5, because a signal sampled at `f_s` contains only
- frequencies under `f_s / 2`.
- stride (int): how much to decimate the output. Keep in mind that decimation
- of the output is only acceptable if the cutoff frequency is under `1/ (2 * stride)`
- of the original sampling rate.
- pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`,
- the output will have the same length as the input.
- zeros (float): Number of zero crossings to keep.
- Controls the receptive field of the Finite Impulse Response filter.
- For lowpass filters with low cutoff frequency, e.g. 40Hz at 44.1kHz,
- it is a bad idea to set this to a high value.
- This is likely appropriate for most use. Lower values
- will result in a faster filter, but with a slower attenuation around the
- cutoff frequency.
- fft (bool or None): if True, uses `julius.fftconv` rather than PyTorch convolutions.
- If False, uses PyTorch convolutions. If None, either one will be chosen automatically
- depending on the effective filter size.
-
-
- ..warning::
- All the filters will use the same filter size, aligned on the lowest
- frequency provided. If you combine a lot of filters with very diverse frequencies, it might
- be more efficient to split them over multiple modules with similar frequencies.
-
- ..note::
- A lowpass with a cutoff frequency of 0 is defined as the null function
- by convention here. This allows for a highpass with a cutoff of 0 to
- be equal to identity, as defined in `julius.filters.HighPassFilters`.
-
- Shape:
-
- - Input: `[*, T]`
- - Output: `[F, *, T']`, with `T'=T` if `pad` is True and `stride` is 1, and
- `F` is the number of cutoff frequencies.
- - >>> lowpass = LowPassFilters([1/4]) - >>> x = torch.randn(4, 12, 21, 1024) - >>> list(lowpass(x).shape) - [1, 4, 12, 21, 1024] - """ - - def __init__(self, cutoffs: Sequence[float], stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - self.cutoffs = list(cutoffs) - if min(self.cutoffs) < 0: - raise ValueError("Minimum cutoff must be larger than zero.") - if max(self.cutoffs) > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.stride = stride - self.pad = pad - self.zeros = zeros - self.half_size = int(zeros / min([c for c in self.cutoffs if c > 0]) / 2) - if fft is None: - fft = self.half_size > 32 - self.fft = fft - window = torch.hann_window(2 * self.half_size + 1, periodic=False) - time = torch.arange(-self.half_size, self.half_size + 1) - filters = [] - for cutoff in cutoffs: - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * math.pi * time) - # Normalize filter to have sum = 1, otherwise we will have a small leakage - # of the constant component in the input signal. - filter_ /= filter_.sum() - filters.append(filter_) - self.register_buffer("filters", torch.stack(filters)[:, None]) - - def forward(self, input): - shape = list(input.shape) - input = input.view(-1, 1, shape[-1]) - if self.pad: - input = F.pad(input, (self.half_size, self.half_size), mode='replicate') - if self.fft: - out = fft_conv1d(input, self.filters, stride=self.stride) - else: - out = F.conv1d(input, self.filters, stride=self.stride) - shape.insert(0, len(self.cutoffs)) - shape[-1] = out.shape[-1] - return out.permute(1, 0, 2).reshape(shape) - - def __repr__(self): - return simple_repr(self) - - -class LowPassFilter(torch.nn.Module): - """ - Same as `LowPassFilters` but applies a single low pass filter. - - Shape: - - - Input: `[*, T]` - - Output: `[*, T']`, with `T'=T` if `pad` is True and `stride` is 1. - - >>> lowpass = LowPassFilter(1/4, stride=2) - >>> x = torch.randn(4, 124) - >>> list(lowpass(x).shape) - [4, 62] - """ - - def __init__(self, cutoff: float, stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - self._lowpasses = LowPassFilters([cutoff], stride, pad, zeros, fft) - - @property - def cutoff(self): - return self._lowpasses.cutoffs[0] - - @property - def stride(self): - return self._lowpasses.stride - - @property - def pad(self): - return self._lowpasses.pad - - @property - def zeros(self): - return self._lowpasses.zeros - - @property - def fft(self): - return self._lowpasses.fft - - def forward(self, input): - return self._lowpasses(input)[0] - - def __repr__(self): - return simple_repr(self) - - -def lowpass_filters(input: torch.Tensor, cutoffs: Sequence[float], - stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Functional version of `LowPassFilters`, refer to this class for more information. - """ - return LowPassFilters(cutoffs, stride, pad, zeros, fft).to(input)(input) - - -def lowpass_filter(input: torch.Tensor, cutoff: float, - stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Same as `lowpass_filters` but with a single cutoff frequency. - Output will not have a dimension inserted in the front. 
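-
- A short doctest-style example, mirroring the `LowPassFilter` one above (same
- `[*, T]` input convention; this is an illustrative sketch, not part of the
- original julius test suite):
-
- >>> x = torch.randn(4, 124)
- >>> list(lowpass_filter(x, 1/4, stride=2).shape)
- [4, 62]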
- """ - return lowpass_filters(input, [cutoff], stride, pad, zeros, fft)[0] diff --git a/spaces/EuroPython2022/pulsar-clip/app.py b/spaces/EuroPython2022/pulsar-clip/app.py deleted file mode 100644 index 56fb0f8729b7f6ed3381a092c2ed9e70d45c4877..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/pulsar-clip/app.py +++ /dev/null @@ -1,81 +0,0 @@ -from pulsar_clip import PulsarCLIP, CONFIG_SPEC -from datetime import datetime -import gradio as gr -import utils - - -def generate(*args): - pc = PulsarCLIP(dict([(k, t(v) if not isinstance(t, (tuple, list)) - else (type(t[0])(v) if isinstance(t, tuple) else v)) - for v, (k, v0, t) in zip(args, - (y for _, x in CONFIG_SPEC for y in x))])) - frames = [] - for image in pc.generate(): - frames.append(image) - from tqdm.auto import tqdm - from subprocess import Popen, PIPE - fps = 30 - filename = datetime.strftime(datetime.now(), "%Y-%m-%d-%H-%M-%S") - video_path = f"{filename}.mp4" - if frames: - p = Popen((f"ffmpeg -y -f image2pipe -vcodec png -r {fps} -i - -vcodec libx264 -r {fps} " - f"-pix_fmt yuv420p -crf 17 -preset fast ").split() + [str(video_path)], stdin=PIPE) - for im in tqdm(frames): - im.save(p.stdin, "PNG") - p.stdin.close() - p.wait() - model_path = f"{filename}.obj" - pc.save_obj(model_path) - # model_path = None # TODO - return [video_path, model_path, model_path] - - -def main(): - with gr.Blocks() as ui: - gr.Markdown("# Pulsar+CLIP") - gr.Markdown(" Open In Colab [![arXiv](https://img.shields.io/badge/arXiv-2004.07484-b31b1b.svg)](https://arxiv.org/abs/2004.07484)") - gr.Markdown("Generate 3D point clouds from text!") - - with gr.Group(): - gr.Markdown("## Settings") - inputs = [] - defaults = [] - with gr.Tabs(): - for name, section in CONFIG_SPEC: - with gr.TabItem(name): - for k, v0, t in section: - if t in (float, int): - element = gr.Number(label=k, value=v0) - elif t == str: - element = gr.Textbox(label=k, value=v0) - elif t == bool: - element = gr.Checkbox(label=k, value=v0) - elif isinstance(t, tuple): - element = gr.Slider(*t, label=k, value=v0) - elif isinstance(t, list): - element = gr.Dropdown(label=k, value=v0, choices=t) - else: - raise TypeError(f"Input format {t} should be one of str, int, bool, tuple, list") - element = 1/0 - inputs.append(element) - defaults.append(v0) - - button = gr.Button("Run") - gr.Markdown("## Result") - with gr.Row(): - with gr.Column(): - video_result = gr.Video() - with gr.Column(): - model_demo = gr.Model3D() - model_file = gr.File() - - button.click(fn=generate, inputs=inputs, outputs=[video_result, model_demo, model_file]) - - gr.Markdown("## Examples") - gr.Examples(fn=generate, inputs=inputs, outputs=[video_result, model_demo, model_file], - examples=[defaults], cache_examples=True, examples_per_page=1) - return ui - -ui = main() -ui.configure_queue(concurrency_count=5).launch() -demo = ui diff --git a/spaces/EuroSciPy2022/arxiv-cards/app.py b/spaces/EuroSciPy2022/arxiv-cards/app.py deleted file mode 100644 index 80b7e6c1ee3a437b952974a6dd63d0805d60ceb6..0000000000000000000000000000000000000000 --- a/spaces/EuroSciPy2022/arxiv-cards/app.py +++ /dev/null @@ -1,126 +0,0 @@ -import os -from jinja2 import Environment, FileSystemLoader, select_autoescape -from get_paperinfo_fromurls import get_paperinfo_fromurls -import gradio as gr - -class CARDS_TEMPLATE(object): - def __init__(self, path_to_template, template_filename): - self.path_to_template = path_to_template - self.template_filename = template_filename - self.template = self._get_template() - self.rendered_html = 
None - - def _get_template(self): - env = Environment( - autoescape=select_autoescape( - enabled_extensions=('html'), - default_for_string=True, - ), - loader=FileSystemLoader(self.path_to_template) - ) - return env.get_template(self.template_filename) - - def render(self, paper_details_iterator): - self.rendered_html = self.template.render(paper_details=paper_details_iterator) - - def save_html(self, output_dir=None, output_htmlfile=None): - with open(os.path.join(output_dir, output_htmlfile), "w") as f: - f.write(self.rendered_html) - -template_file = "htmlcard.html" -template_path = "" -card_template = CARDS_TEMPLATE( - path_to_template = template_path, - template_filename = template_file, - ) - -CSS = """ -#url-textbox { - padding: 0 !important; - font-size: 16px; -} - -.gradio-container { - background-color: transparent; -} - -.gradio-container .gr-button-primary { - background: #b31b1b; - border: 1px solid #b31b1b; - border-radius: 8px; - color: white; - font-weight: bold; - font-size: 16px; -} - -#ctr { - text-align: center; -} - -#htel { - justify-content: center; - text-align: center; -} -""" - -examples = [ - [ - "https://arxiv.org/abs/2208.14178v1", - ] -] - -def create_html_card(arxiv_link): - paper_details = get_paperinfo_fromurls(arxiv_link) - card_template.render(paper_details_iterator=paper_details) - return card_template.rendered_html - -demo = gr.Blocks(css=CSS) -with demo: - with gr.Column(): - gr.Markdown("# arXiv Cards Generator ⚙️", elem_id="ctr") - gr.Markdown( - """ - Need a simple and visual way to share arXiv papers on presentations, blogposts, messages? - This gradio demo allows for creating arXiv cards including arXiv identifier, title, authors, abstract - - Simply paste the url link of the arXiv paper and generate! - """ - ) - - with gr.Column(): - with gr.Row(): - text = gr.Textbox( - show_label=False, - placeholder="Paste arXiv link (abs of pdf)", - lines=1, - max_lines=1, - elem_id="url-textbox", - ) - button = gr.Button("Generate", variant="primary") - with gr.Row(): - card = gr.HTML(elem_id="htel") - with gr.Row(): - gr.Examples( - examples=examples, - inputs=[text], - ) - - with gr.Column(): - gr.Markdown("### Resources and inspirations", elem_id="ctr") - gr.Markdown( - """ - - The code for retrieving the information using arXiv API is mainly taken from [github.com/kunalghosh/Conference-Grok](https://github.com/kunalghosh/Conference-Grok). - - The [pdf2preview](https://huggingface.co/spaces/chuanenlin/pdf2preview) space is also a great way to share academic publications on slides. - - **Author**: [eliolio](https://huggingface.co/eliolio) - """) - button.click( - fn=create_html_card, - inputs=[text], - outputs=[card] - ) - - - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/solver.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/solver.py deleted file mode 100644 index aaf0b21591b42fa903424f8d44fef88d7d791e57..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/solver.py +++ /dev/null @@ -1,195 +0,0 @@ -import os -import time -import numpy as np -import torch -import librosa -from diffusion.logger.saver import Saver -from diffusion.logger import utils -from torch import autocast -from torch.cuda.amp import GradScaler - -def test(args, model, vocoder, loader_test, saver): - print(' [*] testing...') - model.eval() - - # losses - test_loss = 0. 
- - # initialization - num_batches = len(loader_test) - rtf_all = [] - - # run - with torch.no_grad(): - for bidx, data in enumerate(loader_test): - fn = data['name'][0].split("/")[-1] - speaker = data['name'][0].split("/")[-2] - print('--------') - print('{}/{} - {}'.format(bidx, num_batches, fn)) - - # unpack data - for k in data.keys(): - if not k.startswith('name'): - data[k] = data[k].to(args.device) - print('>>', data['name'][0]) - - # forward - st_time = time.time() - mel = model( - data['units'], - data['f0'], - data['volume'], - data['spk_id'], - gt_spec=None, - infer=True, - infer_speedup=args.infer.speedup, - method=args.infer.method) - signal = vocoder.infer(mel, data['f0']) - ed_time = time.time() - - # RTF - run_time = ed_time - st_time - song_time = signal.shape[-1] / args.data.sampling_rate - rtf = run_time / song_time - print('RTF: {} | {} / {}'.format(rtf, run_time, song_time)) - rtf_all.append(rtf) - - # loss - for i in range(args.train.batch_size): - loss = model( - data['units'], - data['f0'], - data['volume'], - data['spk_id'], - gt_spec=data['mel'], - infer=False) - test_loss += loss.item() - - # log mel - saver.log_spec(f"{speaker}_{fn}.wav", data['mel'], mel) - - # log audio - path_audio = data['name_ext'][0] - audio, sr = librosa.load(path_audio, sr=args.data.sampling_rate) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio) - audio = torch.from_numpy(audio).unsqueeze(0).to(signal) - saver.log_audio({f"{speaker}_{fn}_gt.wav": audio,f"{speaker}_{fn}_pred.wav": signal}) - # report - test_loss /= args.train.batch_size - test_loss /= num_batches - - # check - print(' [test_loss] test_loss:', test_loss) - print(' Real Time Factor', np.mean(rtf_all)) - return test_loss - - -def train(args, initial_global_step, model, optimizer, scheduler, vocoder, loader_train, loader_test): - # saver - saver = Saver(args, initial_global_step=initial_global_step) - - # model size - params_count = utils.get_network_paras_amount({'model': model}) - saver.log_info('--- model size ---') - saver.log_info(params_count) - - # run - num_batches = len(loader_train) - model.train() - saver.log_info('======= start training =======') - scaler = GradScaler() - if args.train.amp_dtype == 'fp32': - dtype = torch.float32 - elif args.train.amp_dtype == 'fp16': - dtype = torch.float16 - elif args.train.amp_dtype == 'bf16': - dtype = torch.bfloat16 - else: - raise ValueError(' [x] Unknown amp_dtype: ' + args.train.amp_dtype) - saver.log_info("epoch|batch_idx/num_batches|output_dir|batch/s|lr|time|step") - for epoch in range(args.train.epochs): - for batch_idx, data in enumerate(loader_train): - saver.global_step_increment() - optimizer.zero_grad() - - # unpack data - for k in data.keys(): - if not k.startswith('name'): - data[k] = data[k].to(args.device) - - # forward - if dtype == torch.float32: - loss = model(data['units'].float(), data['f0'], data['volume'], data['spk_id'], - aug_shift = data['aug_shift'], gt_spec=data['mel'].float(), infer=False) - else: - with autocast(device_type=args.device, dtype=dtype): - loss = model(data['units'], data['f0'], data['volume'], data['spk_id'], - aug_shift = data['aug_shift'], gt_spec=data['mel'], infer=False) - - # handle nan loss - if torch.isnan(loss): - raise ValueError(' [x] nan loss ') - else: - # backpropagate - if dtype == torch.float32: - loss.backward() - optimizer.step() - else: - scaler.scale(loss).backward() - scaler.step(optimizer) - scaler.update() - scheduler.step() - - # log loss - if saver.global_step % args.train.interval_log == 0: - 
current_lr = optimizer.param_groups[0]['lr'] - saver.log_info( - 'epoch: {} | {:3d}/{:3d} | {} | batch/s: {:.2f} | lr: {:.6} | loss: {:.3f} | time: {} | step: {}'.format( - epoch, - batch_idx, - num_batches, - args.env.expdir, - args.train.interval_log/saver.get_interval_time(), - current_lr, - loss.item(), - saver.get_total_time(), - saver.global_step - ) - ) - - saver.log_value({ - 'train/loss': loss.item() - }) - - saver.log_value({ - 'train/lr': current_lr - }) - - # validation - if saver.global_step % args.train.interval_val == 0: - optimizer_save = optimizer if args.train.save_opt else None - - # save latest - saver.save_model(model, optimizer_save, postfix=f'{saver.global_step}') - last_val_step = saver.global_step - args.train.interval_val - if last_val_step % args.train.interval_force_save != 0: - saver.delete_model(postfix=f'{last_val_step}') - - # run testing set - test_loss = test(args, model, vocoder, loader_test, saver) - - # log loss - saver.log_info( - ' --- <validation> --- \nloss: {:.3f}. '.format( - test_loss, - ) - ) - - saver.log_value({ - 'validation/loss': test_loss - }) - - model.train() - - diff --git a/spaces/GIZ/SDSN-demo/utils/lexical_search.py b/spaces/GIZ/SDSN-demo/utils/lexical_search.py deleted file mode 100644 index 8f12fb567c87e6e2a717c0258b9f1f0c1072442d..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/utils/lexical_search.py +++ /dev/null @@ -1,251 +0,0 @@ -from haystack.nodes import TfidfRetriever -from haystack.document_stores import InMemoryDocumentStore -import spacy -import re -from spacy.matcher import Matcher -from markdown import markdown -from annotated_text import annotation -from haystack.schema import Document -from typing import List, Text, Tuple -from typing_extensions import Literal -from utils.preprocessing import processingpipeline -from utils.streamlitcheck import check_streamlit -import logging -try: - from termcolor import colored -except: - pass - -try: - import streamlit as st -except ImportError: - logging.info("Streamlit not installed") - - -def runLexicalPreprocessingPipeline(file_name:str,file_path:str, - split_by: Literal["sentence", "word"] = 'word', - split_length:int = 80, split_overlap:int = 0, - remove_punc:bool = False,)->List[Document]: - """ - creates and runs the preprocessing pipeline; - the params for the pipeline are fetched from paramconfig. As lexical search doesn't get - affected by overlap, split_overlap = 0 in the default paramconfig and - split_by = word. - - Params - ------------ - - file_name: filename, in case of streamlit application use - st.session_state['filename'] - file_path: filepath, in case of streamlit application use - st.session_state['filepath'] - split_by: document splitting strategy either as word or sentence - split_length: when synthetically creating the paragraphs from the document, - it defines the length of a paragraph. - split_overlap: Number of words or sentences that overlap when creating - the paragraphs. This is done as one sentence or 'some words' make sense - when read in together with others. Therefore the overlap is used when - splitting the text. - remove_punc: whether to remove all punctuation including ',' and '.' or not - - Return - -------------- - List[Document]: When the preprocessing pipeline is run, the output dictionary - has four objects. For the lexical search using TfidfRetriever we - need to use the List of Haystack Document, which can be fetched by - key = 'documents' on output. 
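- - Example (illustrative sketch only; 'sample.pdf' and its path are placeholder values, not from this repo): - >>> output = runLexicalPreprocessingPipeline(file_name='sample.pdf', - ... file_path='/tmp/sample.pdf') - >>> docs = output['documents'] # List[Document] to index with TfidfRetriever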
- - """ - - lexical_processing_pipeline = processingpipeline() - - - output_lexical_pre = lexical_processing_pipeline.run(file_paths = file_path, - params= {"FileConverter": {"file_path": file_path, \ - "file_name": file_name}, - "UdfPreProcessor": {"remove_punc": remove_punc, \ - "split_by": split_by, \ - "split_length":split_length,\ - "split_overlap": split_overlap}}) - - return output_lexical_pre - - -def tokenize_lexical_query(query:str)-> List[str]: - """ - Removes the stop words from the query and returns the list of important keywords - in the query. For the lexical search, the relevant paragraphs in the document are - retrieved using TfidfRetriever from Haystack. However, to highlight these - keywords we need the tokenized form of the query. - - Params - -------- - query: string which represents either the list of keywords the user is looking for - or a query in the form of a question. - - Return - ----------- - token_list: list of important keywords in the query. - - """ - nlp = spacy.load("en_core_web_sm") - token_list = [token.text.lower() for token in nlp(query) - if not (token.is_stop or token.is_punct)] - return token_list - -def runSpacyMatcher(token_list:List[str], document:Text - )->Tuple[List[List[int]],spacy.tokens.doc.Doc]: - """ - Using spacy in the backend, finds the keywords in the document using the - Matcher class from spacy. We could alternatively use regex, but spacy - finds all keywords in a serialized manner, which helps in annotating the answers. - - Params - ------- - token_list: the token list which the tokenize_lexical_query function returns - document: text in which we need to find the tokens - - Return - -------- - matches: List of [start_index, end_index] in the spacydoc (at word level, not - character level) for the keywords in the token list. - - spacydoc: the keyword indices in the spacydoc are at word level and not character, - therefore to allow the annotator to work seamlessly we return the spacydoc. - - """ - nlp = spacy.load("en_core_web_sm") - spacydoc = nlp(document) - matcher = Matcher(nlp.vocab) - token_pattern = [[{"LOWER":token}] for token in token_list] - matcher.add(",".join(token_list), token_pattern) - spacymatches = matcher(spacydoc) - - # getting start and end index in spacydoc so that annotator can work seamlessly - matches = [] - for match_id, start, end in spacymatches: - matches = matches + [[start, end]] - - return matches, spacydoc - -def runRegexMatcher(token_list:List[str], document:Text): - """ - Using regex in the backend, finds the keywords in the document. - - Params - ------- - token_list: the token list which the tokenize_lexical_query function returns - - document: text in which we need to find the tokens - - Return - -------- - matches: List of [start_index, end_index] in the document for the keywords - in the token list, at character level. - - document: the keyword indices returned by regex are at character level, - therefore to allow the annotator to work seamlessly we return the text back. 
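- - Example (illustrative; the input strings are made up for this sketch): - >>> runRegexMatcher(['climate'], 'climate change and climate policy') - ([[0, 7], [19, 26]], 'climate change and climate policy')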
- - """ - matches = [] - for token in token_list: - matches = (matches + - [[val.start(), val.start() + - len(token)] for val in re.finditer(token, document)]) - - return matches, document - -def spacyAnnotator(matches: List[List[int]], document:spacy.tokens.doc.Doc): - """ - This is the spacy annotator and needs a spacy doc. - Annotates the text in the document defined by a list of [start index, end index] pairs. - Example: for "How are you today", if the document type is text, matches = [[0,3]] - will give answer = "How"; however, if we used the spacy matcher, then - matches = [[0,3]] will give answer = "How are you". If spacy is used - to find "How", then matches = [[0,1]] for the string defined above. - - Params - ----------- - matches: As mentioned, a list of lists. Example: [[0,1],[10,13]] - document: document which needs to be indexed. - - - Return - -------- - sends the output either to the app front end using streamlit or - writes it directly to the output screen. - - """ - start = 0 - annotated_text = "" - for match in matches: - start_idx = match[0] - end_idx = match[1] - - if check_streamlit(): - annotated_text = (annotated_text + document[start:start_idx].text - + str(annotation(body=document[start_idx:end_idx].text, - label="ANSWER", background="#964448", color='#ffffff'))) - else: - annotated_text = (annotated_text + document[start:start_idx].text - + colored(document[start_idx:end_idx].text, - "green", attrs = ['bold'])) - - - start = end_idx - - annotated_text = annotated_text + document[end_idx:].text - - - if check_streamlit(): - - st.write( - markdown(annotated_text), - unsafe_allow_html=True, - ) - else: - print(annotated_text) - -def lexical_search(query:Text, documents:List[Document],top_k:int): - """ - Performs the lexical search on the list of Haystack documents which is - returned by the preprocessing pipeline. - - Params - ------- - query: Keywords that need to be searched for in the documents. - documents: List of Haystack documents returned by the preprocessing pipeline. - top_k: Number of top results to be fetched. - - """ - - document_store = InMemoryDocumentStore() - document_store.write_documents(documents) - - # Haystack Retriever works with document stores only. - retriever = TfidfRetriever(document_store) - results = retriever.retrieve(query=query, top_k = top_k) - query_tokens = tokenize_lexical_query(query) - flag = True - for count, result in enumerate(results): - matches, doc = runSpacyMatcher(query_tokens,result.content) - - if len(matches) != 0: - if flag: - flag = False - if check_streamlit(): - st.markdown("##### Top few lexical search (TFIDF) hits #####") - else: - print("Top few lexical search (TFIDF) hits") - - if check_streamlit(): - st.write("Result {}".format(count+1)) - else: - print("Result {}".format(count + 1)) - spacyAnnotator(matches, doc) - - if flag: - if check_streamlit(): - st.info("🤔 No relevant result found. Please try another keyword.") - else: - print("No relevant result found. 
Please try another keyword.") \ No newline at end of file diff --git a/spaces/Gauri54damle/McDFries-SDXL-Dreambooth-Lora-Model/utils.py b/spaces/Gauri54damle/McDFries-SDXL-Dreambooth-Lora-Model/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/Gauri54damle/McDFries-SDXL-Dreambooth-Lora-Model/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/manipulating_two_ropes.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/manipulating_two_ropes.py deleted file mode 100644 index 1b55a14b5a5a6298f493f83441538405d6efe38a..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/manipulating_two_ropes.py +++ /dev/null @@ -1,55 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random - -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula - -class ManipulatingTwoRopes(Task): - """rearrange the red and blue deformable ropes such that it connects the two endpoints of a 3-sided square of corresponding color.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "rearrange the {color_name} rope such that it connects the two endpoints of a 3-sided square of corresponding color." - self.task_completed_desc = "done manipulating two ropes." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - n_parts = 20 - radius = 0.005 - length = 2 * radius * n_parts * np.sqrt(2) - - # Add 3-sided square for the red rope. - color_list = ['red', 'blue'] - for color_name in color_list: - square_size = (length, length, 0) - square_pose = self.get_random_pose(env, square_size) - square_template = 'square/square-template.urdf' - - # IMPORTANT: REPLACE THE TEMPLATE URDF with `fill_template` - replace = {'DIM': (length,), 'HALF': (np.float32(length) / 2 - 0.005,)} - urdf = self.fill_template(square_template, replace) - env.add_object(urdf, square_pose, 'fixed', color=utils.COLORS[color_name]) - - # compute corners - corner0 = (length / 2, length / 2, 0.001) - corner1 = (-length / 2, length / 2, 0.001) - corner_0 = utils.apply(square_pose, corner0) - corner_1 = utils.apply(square_pose, corner1) - - # IMPORTANT: use `make_ropes` to add cable (series of articulated small blocks). - objects, targets, matches = self.make_ropes(env, corners=(corner_0, corner_1), color_name=color_name) - self.add_goal(objs=objects, matches=matches, targ_poses=targets, replace=False, - rotations=False, metric='pose', params=None, step_max_reward=1. 
/ len(color_list), - language_goal=self.lang_template.format(color_name=color_name)) - - print(f"len of languages: {len(self.lang_goals)} obj:{len(objects)}") - for i in range(480): - p.stepSimulation() diff --git a/spaces/Goutam982/RVC_V2_voice_clone/rmvpe.py b/spaces/Goutam982/RVC_V2_voice_clone/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/Goutam982/RVC_V2_voice_clone/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = 
n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if 
n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = 
np.argmax(salience, axis=1) # (n_frames,) index of the max salience per frame - salience = np.pad(salience, ((0, 0), (4, 4))) # (n_frames, 368) - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # (n_frames, 9) - todo_cents_mapping = np.array(todo_cents_mapping) # (n_frames, 9) - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # (n_frames,) - divided = product_sum / weight_sum # (n_frames,) - # t3 = ttime() - maxx = np.max(salience, axis=1) # (n_frames,) - divided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return divided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py deleted file mode 100644 index 5abcc2e014fe57b862422fa2fe18dd651761b56e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py deleted file mode 100644 index 0c0e563d6fe307d05fbd3862cd28b6dc2a3e52b2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py +++ /dev/null @@ -1,44 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py' -# model settings -model = dict( - type='PointRend', - roi_head=dict( - type='PointRendRoIHead', - mask_roi_extractor=dict( - type='GenericRoIExtractor', - aggregation='concat', - roi_layer=dict( - _delete_=True, type='SimpleRoIAlign', output_size=14), - out_channels=256, - featmap_strides=[4]), - mask_head=dict( - _delete_=True, - type='CoarseMaskHead', - num_fcs=2, - in_channels=256, - conv_out_channels=256, - 
fc_out_channels=1024, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)), - point_head=dict( - type='MaskPointHead', - num_fcs=3, - in_channels=256, - fc_channels=256, - num_classes=80, - coarse_pred_each_layer=True, - loss_point=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rcnn=dict( - mask_size=7, - num_points=14 * 14, - oversample_ratio=3, - importance_sample_ratio=0.75)), - test_cfg=dict( - rcnn=dict( - subdivision_steps=5, - subdivision_num_points=28 * 28, - scale_factor=2))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 9d4dc7390370d0ffe21e7dcb686eeff7261952c4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/nonlocal_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/get_flops.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/get_flops.py deleted file mode 100644 index bc98c5252591b0c9ec218144c0652ea695a5b96e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/get_flops.py +++ /dev/null @@ -1,58 +0,0 @@ -import argparse - -from mmcv import Config -from mmcv.cnn import get_model_complexity_info - -from mmseg.models import build_segmentor - - -def parse_args(): - parser = argparse.ArgumentParser(description='Get the FLOPs of a segmentor') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[2048, 1024], - help='input image size') - args = parser.parse_args() - return args - - -def main(): - - args = parse_args() - - if len(args.shape) == 1: - input_shape = (3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = (3, ) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - - cfg = Config.fromfile(args.config) - cfg.model.pretrained = None - model = build_segmentor( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')).cuda() - model.eval() - - if hasattr(model, 'forward_dummy'): - model.forward = model.forward_dummy - else: - raise NotImplementedError( - 'FLOPs counter is currently not supported with {}'. - format(model.__class__.__name__)) - - flops, params = get_model_complexity_info(model, input_shape) - split_line = '=' * 30 - print('{0}\nInput shape: {1}\nFlops: {2}\nParams: {3}\n{0}'.format( - split_line, input_shape, flops, params)) - print('!!!Please be cautious if you use the results in papers. 
' - 'You may need to check if all ops are supported and verify that the ' - 'flops computation is correct.') - - -if __name__ == '__main__': - main() diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/checkpoint.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/checkpoint.py deleted file mode 100644 index 031618a4c67f6752b60616c6bda9aabb9ea3924b..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/checkpoint.py +++ /dev/null @@ -1,134 +0,0 @@ -from typing import Optional, OrderedDict, Tuple, Union -from torch import nn, Tensor -import torch -import os - -from .utils import clean_cache -from .lr import BaseLR -from .configs.base_config import base_cfg -from .logger_fn import Logger - -def save_checkpoint( - cfg: base_cfg, - epoch: int, - global_step: int, - model: nn.parallel.DistributedDataParallel, - opt: torch.optim.Optimizer, - lr_scheduler: BaseLR, - scaler: torch.cuda.amp.GradScaler = None -) -> None: - checkpoint = { - "epoch": epoch, - "global_step": global_step, - "state_dict_model": model.state_dict(), - "state_optimizer": opt.state_dict(), - "state_lr_scheduler": lr_scheduler.state_dict() - } - if scaler is not None: - checkpoint["scaler"] = scaler.state_dict() - current_experiment_dir_path = os.path.join(cfg.experiment_dir_path, cfg.experiment_name) - os.makedirs(current_experiment_dir_path, exist_ok=True) - checkpoint_path = os.path.join(current_experiment_dir_path, f"checkpoint_{epoch}.pt") - print(f'Saved checkpoint into {checkpoint_path}') - torch.save(checkpoint, checkpoint_path) - - cfg.em.update(epoch, None) - -def preprocessing_state_dict( - model: Union[nn.parallel.DistributedDataParallel, nn.Module], - state_dict_model: OrderedDict[str, Tensor], -) -> OrderedDict[str, Tensor]: - if isinstance(model, nn.parallel.DistributedDataParallel): - return OrderedDict( - (f'module.{k}' if not k.startswith('module.') else k, v) \ - for k, v in state_dict_model.items() - ) - elif isinstance(model, nn.Module): - return OrderedDict( - (k[7:] if k.startswith('module.') else k, v) \ - for k, v in state_dict_model.items() - ) - else: - raise Exception(f'Unsupported model type {type(model)}') - -def load_checkpoint_for_inference( - model: nn.Module, ckpt_path: Optional[str] = None, - strict: Optional[bool] = True -) -> None: - ''' Throws error if ckpt_path is not found''' - dict_checkpoint = torch.load(ckpt_path, map_location='cpu') - model.load_state_dict(dict_checkpoint, strict=strict) - -def load_checkpoint( - model: Union[nn.parallel.DistributedDataParallel, nn.Module], - optimizer: Optional[torch.optim.Optimizer], - 
lr_scheduler: Optional[BaseLR], logger: Optional[Logger], - ckpt_path: Optional[str] = None, - scaler: Optional[torch.cuda.amp.GradScaler] = None, - strict: Optional[bool] = True) -> Tuple[int, int]: - """Load checkpoint - Continue training - - Load the model in place [important for consistent learning] - - Load the previous state of the optimizer [important for consistent learning] - - Load the previous state of the learning rate policy - - Log if a logger is provided - - Note: Throws error if ckpt_path is not found - - Returns: - - int: start epoch. Default: 1 - - int: global step. Default: 0 - """ - if ckpt_path: - dict_checkpoint = torch.load(ckpt_path) - start_epoch = dict_checkpoint["epoch"] - global_step = dict_checkpoint["global_step"] - - model.load_state_dict( - preprocessing_state_dict(model, dict_checkpoint["state_dict_model"]), - strict=strict - ) - - if optimizer is not None: - optimizer.load_state_dict(dict_checkpoint["state_optimizer"]) - - if lr_scheduler is not None: - lr_scheduler.load_state_dict(dict_checkpoint["state_lr_scheduler"]) - - if scaler is not None: - scaler.load_state_dict(dict_checkpoint["scaler"]) - - del dict_checkpoint - clean_cache() - - if logger is not None: - logger.info(f'Load checkpoint {ckpt_path}') - - return start_epoch+1, global_step - else: - return 1, 0 \ No newline at end of file diff --git a/spaces/HarlanHong/DaGAN/depth/layers.py b/spaces/HarlanHong/DaGAN/depth/layers.py deleted file mode 100644 index ca793a595863844faa40eed7b54d6311ab5e9745..0000000000000000000000000000000000000000 --- a/spaces/HarlanHong/DaGAN/depth/layers.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright Niantic 2019. Patent Pending. All rights reserved. -# -# This software is licensed under the terms of the Monodepth2 licence -# which allows for non-commercial use only, the full terms of which are made -# available in the LICENSE file. - -from __future__ import absolute_import, division, print_function - -import numpy as np -import pdb -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def disp_to_depth(disp, min_depth, max_depth): - """Convert network's sigmoid output into depth prediction - The formula for this conversion is given in the 'additional considerations' - section of the paper. 
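- - For example (illustrative values), with min_depth=0.1 and max_depth=100, - disp=0 maps to depth 100 and disp=1 maps to depth 0.1, since - depth = 1 / (1/max_depth + (1/min_depth - 1/max_depth) * disp).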
- """ - min_disp = 1 / max_depth - max_disp = 1 / min_depth - scaled_disp = min_disp + (max_disp - min_disp) * disp - depth = 1 / scaled_disp - return scaled_disp, depth - - -def transformation_from_parameters(axisangle, translation, invert=False): - """Convert the network's (axisangle, translation) output into a 4x4 matrix - """ - R = rot_from_axisangle(axisangle) - t = translation.clone() - - if invert: - R = R.transpose(1, 2) - t *= -1 - - T = get_translation_matrix(t) - - if invert: - M = torch.matmul(R, T) - else: - M = torch.matmul(T, R) - - return M - - -def get_translation_matrix(translation_vector): - """Convert a translation vector into a 4x4 transformation matrix - """ - T = torch.zeros(translation_vector.shape[0], 4, 4).to(device=translation_vector.device) - - t = translation_vector.contiguous().view(-1, 3, 1) - - T[:, 0, 0] = 1 - T[:, 1, 1] = 1 - T[:, 2, 2] = 1 - T[:, 3, 3] = 1 - T[:, :3, 3, None] = t - - return T - - -def rot_from_axisangle(vec): - """Convert an axisangle rotation into a 4x4 transformation matrix - (adapted from https://github.com/Wallacoloo/printipi) - Input 'vec' has to be Bx1x3 - """ - angle = torch.norm(vec, 2, 2, True) - axis = vec / (angle + 1e-7) - - ca = torch.cos(angle) - sa = torch.sin(angle) - C = 1 - ca - - x = axis[..., 0].unsqueeze(1) - y = axis[..., 1].unsqueeze(1) - z = axis[..., 2].unsqueeze(1) - - xs = x * sa - ys = y * sa - zs = z * sa - xC = x * C - yC = y * C - zC = z * C - xyC = x * yC - yzC = y * zC - zxC = z * xC - - rot = torch.zeros((vec.shape[0], 4, 4)).to(device=vec.device) - - rot[:, 0, 0] = torch.squeeze(x * xC + ca) - rot[:, 0, 1] = torch.squeeze(xyC - zs) - rot[:, 0, 2] = torch.squeeze(zxC + ys) - rot[:, 1, 0] = torch.squeeze(xyC + zs) - rot[:, 1, 1] = torch.squeeze(y * yC + ca) - rot[:, 1, 2] = torch.squeeze(yzC - xs) - rot[:, 2, 0] = torch.squeeze(zxC - ys) - rot[:, 2, 1] = torch.squeeze(yzC + xs) - rot[:, 2, 2] = torch.squeeze(z * zC + ca) - rot[:, 3, 3] = 1 - - return rot - - -class ConvBlock(nn.Module): - """Layer to perform a convolution followed by ELU - """ - def __init__(self, in_channels, out_channels): - super(ConvBlock, self).__init__() - - self.conv = Conv3x3(in_channels, out_channels) - self.nonlin = nn.ELU(inplace=True) - - def forward(self, x): - out = self.conv(x) - out = self.nonlin(out) - return out - - -class Conv3x3(nn.Module): - """Layer to pad and convolve input - """ - def __init__(self, in_channels, out_channels, use_refl=True): - super(Conv3x3, self).__init__() - - if use_refl: - self.pad = nn.ReflectionPad2d(1) - else: - self.pad = nn.ZeroPad2d(1) - self.conv = nn.Conv2d(int(in_channels), int(out_channels), 3) - - def forward(self, x): - out = self.pad(x) - out = self.conv(out) - return out - - -class BackprojectDepth(nn.Module): - """Layer to transform a depth image into a point cloud - """ - def __init__(self, batch_size, height, width): - super(BackprojectDepth, self).__init__() - - self.batch_size = batch_size - self.height = height - self.width = width - - meshgrid = np.meshgrid(range(self.width), range(self.height), indexing='xy') - self.id_coords = np.stack(meshgrid, axis=0).astype(np.float32) - self.id_coords = nn.Parameter(torch.from_numpy(self.id_coords), - requires_grad=False) - - self.ones = nn.Parameter(torch.ones(self.batch_size, 1, self.height * self.width), - requires_grad=False) - - self.pix_coords = torch.unsqueeze(torch.stack( - [self.id_coords[0].view(-1), self.id_coords[1].view(-1)], 0), 0) - self.pix_coords = self.pix_coords.repeat(batch_size, 1, 1) - self.pix_coords = 
nn.Parameter(torch.cat([self.pix_coords, self.ones], 1), - requires_grad=False) - - def forward(self, depth, K,scale): - K[:,:2,:] = (K[:,:2,:]/(2 ** scale)).trunc() - b,n,n = K.shape - inv_K = torch.linalg.inv(K) - #inv_K = torch.cholesky_inverse(K) - pad = torch.tensor([0.0,0.0,0.0]).view(1,3,1).expand(b,3,1).cuda() - inv_K = torch.cat([inv_K,pad],-1) - pad = torch.tensor([0.0,0.0,0.0,1.0]).view(1,1,4).expand(b,1,4).cuda() - inv_K = torch.cat([inv_K,pad],1) - cam_points = torch.matmul(inv_K[:, :3, :3], self.pix_coords) - cam_points = depth.view(self.batch_size, 1, -1) * cam_points - cam_points = torch.cat([cam_points, self.ones], 1) - - return cam_points - - -class Project3D(nn.Module): - """Layer which projects 3D points into a camera with intrinsics K and at position T - """ - def __init__(self, batch_size, height, width, eps=1e-7): - super(Project3D, self).__init__() - - self.batch_size = batch_size - self.height = height - self.width = width - self.eps = eps - - def forward(self, points, K, T,scale=0): - # K[0, :] *= self.width // (2 ** scale) - # K[1, :] *= self.height // (2 ** scale) - K[:,:2,:] = (K[:,:2,:]/(2 ** scale)).trunc() - b,n,n = K.shape - pad = torch.tensor([0.0,0.0,0.0]).view(1,3,1).expand(b,3,1).cuda() - K = torch.cat([K,pad],-1) - pad = torch.tensor([0.0,0.0,0.0,1.0]).view(1,1,4).expand(b,1,4).cuda() - K = torch.cat([K,pad],1) - P = torch.matmul(K, T)[:, :3, :] - - cam_points = torch.matmul(P, points) - - pix_coords = cam_points[:, :2, :] / (cam_points[:, 2, :].unsqueeze(1) + self.eps) - pix_coords = pix_coords.view(self.batch_size, 2, self.height, self.width) - pix_coords = pix_coords.permute(0, 2, 3, 1) - pix_coords[..., 0] /= self.width - 1 - pix_coords[..., 1] /= self.height - 1 - pix_coords = (pix_coords - 0.5) * 2 - return pix_coords - - -def upsample(x): - """Upsample input tensor by a factor of 2 - """ - return F.interpolate(x, scale_factor=2, mode="nearest") - - -def get_smooth_loss(disp, img): - """Computes the smoothness loss for a disparity image - The color image is used for edge-aware smoothness - """ - grad_disp_x = torch.abs(disp[:, :, :, :-1] - disp[:, :, :, 1:]) - grad_disp_y = torch.abs(disp[:, :, :-1, :] - disp[:, :, 1:, :]) - - grad_img_x = torch.mean(torch.abs(img[:, :, :, :-1] - img[:, :, :, 1:]), 1, keepdim=True) - grad_img_y = torch.mean(torch.abs(img[:, :, :-1, :] - img[:, :, 1:, :]), 1, keepdim=True) - - grad_disp_x *= torch.exp(-grad_img_x) - grad_disp_y *= torch.exp(-grad_img_y) - - return grad_disp_x.mean() + grad_disp_y.mean() - - -class SSIM(nn.Module): - """Layer to compute the SSIM loss between a pair of images - """ - def __init__(self): - super(SSIM, self).__init__() - self.mu_x_pool = nn.AvgPool2d(3, 1) - self.mu_y_pool = nn.AvgPool2d(3, 1) - self.sig_x_pool = nn.AvgPool2d(3, 1) - self.sig_y_pool = nn.AvgPool2d(3, 1) - self.sig_xy_pool = nn.AvgPool2d(3, 1) - - self.refl = nn.ReflectionPad2d(1) - - self.C1 = 0.01 ** 2 - self.C2 = 0.03 ** 2 - - def forward(self, x, y): - x = self.refl(x) - y = self.refl(y) - - mu_x = self.mu_x_pool(x) - mu_y = self.mu_y_pool(y) - - sigma_x = self.sig_x_pool(x ** 2) - mu_x ** 2 - sigma_y = self.sig_y_pool(y ** 2) - mu_y ** 2 - sigma_xy = self.sig_xy_pool(x * y) - mu_x * mu_y - - SSIM_n = (2 * mu_x * mu_y + self.C1) * (2 * sigma_xy + self.C2) - SSIM_d = (mu_x ** 2 + mu_y ** 2 + self.C1) * (sigma_x + sigma_y + self.C2) - - return torch.clamp((1 - SSIM_n / SSIM_d) / 2, 0, 1) - - -def compute_depth_errors(gt, pred): - """Computation of error metrics between predicted and ground truth depths - """ - thresh 
= torch.max((gt / pred), (pred / gt)) - a1 = (thresh < 1.25 ).float().mean() - a2 = (thresh < 1.25 ** 2).float().mean() - a3 = (thresh < 1.25 ** 3).float().mean() - - rmse = (gt - pred) ** 2 - rmse = torch.sqrt(rmse.mean()) - - rmse_log = (torch.log(gt) - torch.log(pred)) ** 2 - rmse_log = torch.sqrt(rmse_log.mean()) - - abs_rel = torch.mean(torch.abs(gt - pred) / gt) - - sq_rel = torch.mean((gt - pred) ** 2 / gt) - - return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3 diff --git a/spaces/HarlanHong/DaGAN/sync_batchnorm/unittest.py b/spaces/HarlanHong/DaGAN/sync_batchnorm/unittest.py deleted file mode 100644 index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000 --- a/spaces/HarlanHong/DaGAN/sync_batchnorm/unittest.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- -# File : unittest.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import unittest - -import numpy as np -from torch.autograd import Variable - - -def as_numpy(v): - if isinstance(v, Variable): - v = v.data - return v.cpu().numpy() - - -class TorchTestCase(unittest.TestCase): - def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3): - npa, npb = as_numpy(a), as_numpy(b) - self.assertTrue( - np.allclose(npa, npb, atol=atol), - 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max()) - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/layer_drop.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/layer_drop.py deleted file mode 100644 index 8961d8bcbc492c40c6b30973234416ce5a414f5a..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/layer_drop.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -LayerDrop as described in https://arxiv.org/abs/1909.11556. -""" - -import torch -import torch.nn as nn - - -class LayerDropModuleList(nn.ModuleList): - """ - A LayerDrop implementation based on :class:`torch.nn.ModuleList`. - - We refresh the choice of which layers to drop every time we iterate - over the LayerDropModuleList instance. During evaluation we always - iterate over all layers. 
- - Usage:: - - layers = LayerDropModuleList(p=0.5, modules=[layer1, layer2, layer3]) - for layer in layers: # this might iterate over layers 1 and 3 - x = layer(x) - for layer in layers: # this might iterate over all layers - x = layer(x) - for layer in layers: # this might not iterate over any layers - x = layer(x) - - Args: - p (float): probability of dropping out each layer - modules (iterable, optional): an iterable of modules to add - """ - - def __init__(self, p, modules=None): - super().__init__(modules) - self.p = p - - def __iter__(self): - dropout_probs = torch.empty(len(self)).uniform_() - for i, m in enumerate(super().__iter__()): - if not self.training or (dropout_probs[i] > self.p): - yield m diff --git a/spaces/Hassan175/suicide-detection/app.py b/spaces/Hassan175/suicide-detection/app.py deleted file mode 100644 index 274ce7886167e5c6926cb065d24c4eb64fb2406c..0000000000000000000000000000000000000000 --- a/spaces/Hassan175/suicide-detection/app.py +++ /dev/null @@ -1,99 +0,0 @@ -# import libraries -from tensorflow import keras -import gradio as gr -import pickle - - -# define constants from training -MODEL_PATH = "./model.pickle" -TOKENIZER_PATH = "./tokenizer.pickle" -INDEX_CLASS = {1:'Suicidal', 0:'non-Suicidal'} -MAX_SEQ_DF = 128 - - -# load the tokenizer used on the data during training -with open(TOKENIZER_PATH, 'rb') as tokenizer_file: - tokenizer = pickle.load(tokenizer_file) -# open the model file and read the trained model -with open(MODEL_PATH, 'rb') as model_file: - model = pickle.load(model_file) - - -# define prediction function -def predict(input_text): - # preprocessing - tokenized_data = tokenizer.texts_to_sequences([input_text]) - text_data_padded = keras.preprocessing.sequence.pad_sequences(tokenized_data, - maxlen = MAX_SEQ_DF, - padding = 'post') - - # make prediction - pred = model.predict(text_data_padded) - prediction = INDEX_CLASS[round(pred[0][0])] - - return prediction - -content = """In today's world, mental health is a growing concern, especially for young people. According to the World Health Organization, suicide is the second leading cause of death among 15-29-year-olds globally, and early identification and intervention can save lives. - -The issue of suicides and mental health in Iraq is an understudied one due to a lack of organizations focused on measuring, evaluating, or intervening in such cases. This is concerning given that Iraq continues to experience instability. - -Suicidal ideation, also known as suicidal thoughts, refers to an individual's plans to commit suicide and can be an indicator of suicide risk. Suicidal thoughts can range from brief to substantial and may include significant planning, role-playing, and failed attempts. Adolescents are particularly prone to experiencing suicidal ideation. Early detection of suicidal ideation is an effective technique for preventing suicides. - -Globally, nearly 800,000 people commit suicide each year, with a suicide rate of 10.5 per 100,000 people. The majority of suicides (79%) occur in low- and middle-income countries where resources for identification and management of mental health issues are often scarce and insufficient. Suicide ideation ranges from depression to an intense preoccupation with self-destruction and can be categorized as suicide ideators or planners and suicide attempters or completers. While some studies indicate that most individuals with suicidal ideation do not attempt suicide, other studies suggest that there is significant overlap between suicide ideators and attempters. 
- -Social media has become a valuable tool for monitoring the mental health and well-being of its users, particularly young individuals. Online communication platforms such as forums, tweets, and blogs provide a space for people to openly discuss their mental health and to seek support and advice. This includes communication of suicidal tendencies, making social media an important venue for early detection and prevention of suicides. By using machine learning techniques and Natural Language Processing (NLP) methodology, researchers are now able to detect suicidal ideation in social media posts and provide early intervention and support. -""" - - - -example_1 = ("I am feeling very hopeless and helpless lately Everything seems " - "to be going wrong in my life and I don't see any way out. I have " - "been having thoughts of ending it all and I just can't shake them " - "I am afraid of what will happen if I don't get help soon.") - -example_2 = ("Today was a wonderful day, I woke up feeling refreshed and ready " - "to tackle the day. I had a delicious breakfast and went for a walk " - "in the park. I met some old friends and we chatted for hours " - "The weather was perfect and I felt so grateful for the beautiful " - "sunshine. Afterwards, I went shopping and treated myself to some " - "new clothes. I can't wait to see what tomorrow brings") - -img_url = "https://healthmatters.nyp.org/wp-content/uploads/2019/09/warning-signs-of-suicide-hero.jpg" - -description = (f"title image" - "
      Our project aims to create an AI-powered web interface " - "that can quickly and accurately detect signs of suicidal thoughts" - " in text inputs. The model is trained on a dataset of texts " - "and utilizes state-of-the-art NLP techniques to analyze the " - "sentiment and context of the text. The interface is easy to use" - " and can provide immediate feedback to individuals who may be at " - "risk of suicide. The goal of this project is to raise awareness " - "about mental health and to provide a tool that can help " - "individuals get the help they need.") - -resources = ["https://www.verywellmind.com/how-to-talk-to-a-friend-who-has-lost-someone-to-suicide-6543351", - "https://www.verywellmind.com/best-depression-resources-and-organizations-5114534", - "https://healthmatters.nyp.org/how-to-spot-the-potential-warning-signs-of-suicide/", - "https://988lifeline.org/",] - -articale = (f"{content}" - "

      Some resources might help

      ") - -# make nice web interface -gr_intrfc = gr.Interface(fn=predict, - inputs="text", - title='Suicidal Thoughts Detector', - description=description, - article=articale, - css="* {font-family: sans-serif}", - examples=[example_1, example_2], - outputs="text", - theme='dark') - -gr_intrfc.launch() - diff --git a/spaces/Haxan786/Tel/README.md b/spaces/Haxan786/Tel/README.md deleted file mode 100644 index e3a1b606bde587c605c7deffdb6a75d51eb34ae2..0000000000000000000000000000000000000000 --- a/spaces/Haxan786/Tel/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tel -emoji: 📉 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Himanshi/Face-Cartoonify-for-Video-Call-Privacy/cartoon.py b/spaces/Himanshi/Face-Cartoonify-for-Video-Call-Privacy/cartoon.py deleted file mode 100644 index 3ead89bd5b431f788f48827818c61b545014d19e..0000000000000000000000000000000000000000 --- a/spaces/Himanshi/Face-Cartoonify-for-Video-Call-Privacy/cartoon.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -import argparse -import cv2 -from utils import read_img, edge_detection, color_quantisation - -parser = argparse.ArgumentParser(description='Cartoonify Face Images') -parser.add_argument('--input_path', default='./temp/image.jpg', type=str, help='Directory of input images or path of single image') -parser.add_argument('--result_dir', default='./temp/', type=str, help='Directory for restored results') - - -args = parser.parse_args() -out_dir = args.result_dir -os.makedirs(out_dir, exist_ok=True) - - -img = read_img(args.input_path) - -line_wdt = 9 -blur_value=7 -totalcolours=9 - -edgeImg = edge_detection(img, line_wdt,blur_value) -img = color_quantisation(img, totalcolours) -blurred = cv2.bilateralFilter(img, d=7,sigmaColor=200,sigmaSpace=200) -cartoon = cv2.bitwise_and(blurred,blurred,mask=edgeImg) -# cv2.imwrite('cartoon.jpg', cartoon) -out_path = os.path.join(out_dir, os.path.split(args.input_path)[-1]) -cv2.imwrite(out_path,cv2.cvtColor(cartoon, cv2.COLOR_RGB2BGR)) diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/models/cond_transformer.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/models/cond_transformer.py deleted file mode 100644 index 03adb5ab52b497b97d4c50cbb3c3bf3ca0753d41..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/models/cond_transformer.py +++ /dev/null @@ -1,349 +0,0 @@ -import os, math -import torch -import torch.nn.functional as F -import pytorch_lightning as pl - -from main import instantiate_from_config -from taming.modules.util import SOSProvider - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class Net2NetTransformer(pl.LightningModule): - def __init__(self, - transformer_config, - first_stage_config, - cond_stage_config, - permuter_config=None, - ckpt_path=None, - ignore_keys=[], - first_stage_key="image", - cond_stage_key="depth", - downsample_cond_size=-1, - pkeep=1.0, - sos_token=0, - unconditional=False, - ): - super().__init__() - self.be_unconditional = unconditional - self.sos_token = sos_token - self.first_stage_key = first_stage_key - self.cond_stage_key = cond_stage_key - self.init_first_stage_from_ckpt(first_stage_config) - self.init_cond_stage_from_ckpt(cond_stage_config) - if permuter_config is None: - 
permuter_config = {"target": "taming.modules.transformer.permuter.Identity"} - self.permuter = instantiate_from_config(config=permuter_config) - self.transformer = instantiate_from_config(config=transformer_config) - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - self.downsample_cond_size = downsample_cond_size - self.pkeep = pkeep - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - for k in sd.keys(): - for ik in ignore_keys: - if k.startswith(ik): - self.print("Deleting key {} from state_dict.".format(k)) - del sd[k] - self.load_state_dict(sd, strict=False) - print(f"Restored from {path}") - - def init_first_stage_from_ckpt(self, config): - model = instantiate_from_config(config) - model = model.eval() - model.train = disabled_train - self.first_stage_model = model - - def init_cond_stage_from_ckpt(self, config): - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__" or self.be_unconditional: - print(f"Using no cond stage. Assuming the training is intended to be unconditional. " - f"Prepending {self.sos_token} as a sos token.") - self.be_unconditional = True - self.cond_stage_key = self.first_stage_key - self.cond_stage_model = SOSProvider(self.sos_token) - else: - model = instantiate_from_config(config) - model = model.eval() - model.train = disabled_train - self.cond_stage_model = model - - def forward(self, x, c): - # one step to produce the logits - # x = target - # c = nucleus - _, z_indices = self.encode_to_z(x) - _, c_indices = self.encode_to_c(c) - - if self.training and self.pkeep < 1.0: - mask = torch.bernoulli(self.pkeep*torch.ones(z_indices.shape, - device=z_indices.device)) - mask = mask.round().to(dtype=torch.int64) - r_indices = torch.randint_like(z_indices, self.transformer.config.vocab_size) - a_indices = mask*z_indices+(1-mask)*r_indices - else: - a_indices = z_indices - - cz_indices = torch.cat((c_indices, a_indices), dim=1) - - # target includes all sequence elements (no need to handle first one - # differently because we are conditioning) - target = z_indices - # make the prediction - logits, _ = self.transformer(cz_indices[:, :-1]) - # cut off conditioning outputs - output i corresponds to p(z_i | z_{ -1: - c = F.interpolate(c, size=(self.downsample_cond_size, self.downsample_cond_size)) - - #quant_c, _, info = self.cond_stage_model.encode(x) - #indices = info[2].view(quant_c.shape[0], -1) - #indices = self.permuter(indices) - quant_c, _, [_,_,indices] = self.cond_stage_model.encode(c) - if len(indices.shape) != 2: - indices = indices.view(c.shape[0], -1) - return quant_c, indices - - @torch.no_grad() - def decode_to_img(self, index, zshape): - index = self.permuter(index, reverse=True) - bhwc = (zshape[0],zshape[2],zshape[3],zshape[1]) - quant_z = self.first_stage_model.quantize.get_codebook_entry( - index.reshape(-1), shape=bhwc) - x = self.first_stage_model.decode(quant_z) - return x - - @torch.no_grad() - def log_images(self, batch, temperature=None, top_k=None, callback=None, lr_interface=False, **kwargs): - log = dict() - - N = 4 - if lr_interface: - x, c = self.get_xc(batch, N, diffuse=False, upsample_factor=8) - else: - x, c = self.get_xc(batch, N) - x = x.to(device=self.device) - c = c.to(device=self.device) - - quant_z, z_indices = self.encode_to_z(x) - quant_c, c_indices = self.encode_to_c(c) - - # create a "half"" sample - 
z_start_indices = z_indices[:,:z_indices.shape[1]//2] - index_sample = self.sample(z_start_indices, c_indices, - steps=z_indices.shape[1]-z_start_indices.shape[1], - temperature=temperature if temperature is not None else 1.0, - sample=True, - top_k=top_k if top_k is not None else 100, - callback=callback if callback is not None else lambda k: None) - x_sample = self.decode_to_img(index_sample, quant_z.shape) - - # sample - z_start_indices = z_indices[:, :0] - index_sample = self.sample(z_start_indices, c_indices, - steps=z_indices.shape[1], - temperature=temperature if temperature is not None else 1.0, - sample=True, - top_k=top_k if top_k is not None else 100, - callback=callback if callback is not None else lambda k: None) - x_sample_nopix = self.decode_to_img(index_sample, quant_z.shape) - - # det sample - z_start_indices = z_indices[:, :0] - index_sample = self.sample(z_start_indices, c_indices, - steps=z_indices.shape[1], - sample=False, - callback=callback if callback is not None else lambda k: None) - x_sample_det = self.decode_to_img(index_sample, quant_z.shape) - - # reconstruction - x_rec = self.decode_to_img(z_indices, quant_z.shape) - - log["inputs"] = x - log["reconstructions"] = x_rec - - if self.cond_stage_key != "image" or self.cond_stage_key != "nucleus" or self.cond_stage_key != "target": - cond_rec = self.cond_stage_model.decode(quant_c) - if self.cond_stage_key == "segmentation": - # get image from segmentation mask - num_classes = cond_rec.shape[1] - - c = torch.argmax(c, dim=1, keepdim=True) - c = F.one_hot(c, num_classes=num_classes) - c = c.squeeze(1).permute(0, 3, 1, 2).float() - c = self.cond_stage_model.to_rgb(c) - - cond_rec = torch.argmax(cond_rec, dim=1, keepdim=True) - cond_rec = F.one_hot(cond_rec, num_classes=num_classes) - cond_rec = cond_rec.squeeze(1).permute(0, 3, 1, 2).float() - cond_rec = self.cond_stage_model.to_rgb(cond_rec) - log["conditioning_rec"] = cond_rec - log["conditioning"] = c - - log["samples_half"] = x_sample - log["samples_nopix"] = x_sample_nopix - log["samples_det"] = x_sample_det - return log - - def get_input(self, key, batch): - x = batch[key] - if len(x.shape) == 3: - x = x[..., None] - #if len(x.shape) == 4: - # x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format) - if x.dtype == torch.double: - x = x.float() - return x - - def get_xc(self, batch, N=None): - x = self.get_input(self.first_stage_key, batch) - c = self.get_input(self.cond_stage_key, batch) - if N is not None: - x = x[:N] - c = c[:N] - return x, c - - def shared_step(self, batch): - x, c = self.get_xc(batch) - logits, target = self(x, c) - loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), target.reshape(-1)) - return loss - - def training_step(self, batch, batch_idx): - loss = self.shared_step(batch) - self.log("train/loss", loss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - return loss - - def validation_step(self, batch, batch_idx): - loss = self.shared_step(batch) - self.log("val/loss", loss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - return loss - - def configure_optimizers(self): - """ - Following minGPT: - This long function is unfortunately doing something very simple and is being very defensive: - We are separating out all parameters of the model into two buckets: those that will experience - weight decay for regularization and those that won't (biases, and layernorm/embedding weights). - We are then returning the PyTorch optimizer object. 
- """ - # separate out all parameters to those that will and won't experience regularizing weight decay - decay = set() - no_decay = set() - whitelist_weight_modules = (torch.nn.Linear, ) - blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding) - for mn, m in self.transformer.named_modules(): - for pn, p in m.named_parameters(): - fpn = '%s.%s' % (mn, pn) if mn else pn # full param name - - if pn.endswith('bias'): - # all biases will not be decayed - no_decay.add(fpn) - elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules): - # weights of whitelist modules will be weight decayed - decay.add(fpn) - elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules): - # weights of blacklist modules will NOT be weight decayed - no_decay.add(fpn) - - # special case the position embedding parameter in the root GPT module as not decayed - no_decay.add('pos_emb') - - # validate that we considered every parameter - param_dict = {pn: p for pn, p in self.transformer.named_parameters()} - inter_params = decay & no_decay - union_params = decay | no_decay - assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), ) - assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \ - % (str(param_dict.keys() - union_params), ) - - # create the pytorch optimizer object - optim_groups = [ - {"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": 0.01}, - {"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0}, - ] - optimizer = torch.optim.AdamW(optim_groups, lr=self.learning_rate, betas=(0.9, 0.95)) - return optimizer diff --git a/spaces/ICML2022/OFA/fairseq/.github/ISSUE_TEMPLATE/documentation.md b/spaces/ICML2022/OFA/fairseq/.github/ISSUE_TEMPLATE/documentation.md deleted file mode 100644 index 3a6e2e9ea4bb71102122c17ff53051eb3770cb5e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/.github/ISSUE_TEMPLATE/documentation.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -name: 📚 Documentation/Typos -about: Report an issue related to documentation or a typo -labels: 'documentation, needs triage' ---- - -## 📚 Documentation - -For typos and doc fixes, please go ahead and: - -1. Create an issue. -2. Fix the typo. -3. Submit a PR. - -Thanks! 
diff --git a/spaces/ICML2022/OFA/fairseq/examples/backtranslation/prepare-de-monolingual.sh b/spaces/ICML2022/OFA/fairseq/examples/backtranslation/prepare-de-monolingual.sh deleted file mode 100644 index 5e67b2b3bcf27d3436031453e796e58a0ae79ec4..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/backtranslation/prepare-de-monolingual.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt - - -BPE_CODE=wmt18_en_de/code -SUBSAMPLE_SIZE=25000000 -LANG=de - - -OUTDIR=wmt18_${LANG}_mono -orig=orig -tmp=$OUTDIR/tmp -mkdir -p $OUTDIR $tmp - - -URLS=( - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2007.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2008.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2009.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2010.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2011.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2012.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2013.de.shuffled.gz" - "http://www.statmt.org/wmt15/training-monolingual-news-crawl-v2/news.2014.de.shuffled.v2.gz" - "http://data.statmt.org/wmt16/translation-task/news.2015.de.shuffled.gz" - "http://data.statmt.org/wmt17/translation-task/news.2016.de.shuffled.gz" - "http://data.statmt.org/wmt18/translation-task/news.2017.de.shuffled.deduped.gz" -) -FILES=( - "news.2007.de.shuffled.gz" - "news.2008.de.shuffled.gz" - "news.2009.de.shuffled.gz" - "news.2010.de.shuffled.gz" - "news.2011.de.shuffled.gz" - "news.2012.de.shuffled.gz" - "news.2013.de.shuffled.gz" - "news.2014.de.shuffled.v2.gz" - "news.2015.de.shuffled.gz" - "news.2016.de.shuffled.gz" - "news.2017.de.shuffled.deduped.gz" -) - - -cd $orig -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - fi -done -cd .. 
- - -if [ -f $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found monolingual sample, skipping shuffle/sample/tokenize" -else - gzip -c -d -k $(for FILE in "${FILES[@]}"; do echo $orig/$FILE; done) \ - | shuf -n $SUBSAMPLE_SIZE \ - | perl $NORM_PUNC $LANG \ - | perl $REM_NON_PRINT_CHAR \ - | perl $TOKENIZER -threads 8 -a -l $LANG \ - > $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found BPE monolingual sample, skipping BPE step" -else - python $BPEROOT/apply_bpe.py -c $BPE_CODE \ - < $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} \ - > $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found deduplicated monolingual sample, skipping deduplication step" -else - python deduplicate_lines.py $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} \ - > $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $OUTDIR/bpe.monolingual.dedup.00.de ]; then - echo "found sharded data, skipping sharding step" -else - split --lines 1000000 --numeric-suffixes \ - --additional-suffix .${LANG} \ - $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} \ - $OUTDIR/bpe.monolingual.dedup. -fi diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/megatron_trainer.py b/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/megatron_trainer.py deleted file mode 100644 index 8ab4657f73c6cda91e95637921edb84ccb76b3d0..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/megatron_trainer.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Train a network across multiple GPUs. 
-""" - -from fairseq.dataclass.configs import FairseqConfig -from fairseq.distributed import utils as distributed_utils -from fairseq.trainer import Trainer - -try: - from fairseq.model_parallel.megatron.mpu import ( - get_data_parallel_rank, - get_data_parallel_world_size, - get_model_parallel_src_rank, - get_cuda_rng_tracker, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -class MegatronTrainer(Trainer): - """Main class for model parallel with data parallel training.""" - - def __init__(self, cfg: FairseqConfig, task, model, criterion, **kwargs): - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - super().__init__(cfg, task, model, criterion, **kwargs) - - def clip_grad_norm(self, clip_norm): - def _aggregate_model_parallel_grad_norm(total_norm): - total_norm = total_norm ** 2 - distributed_utils.all_reduce( - total_norm, group=distributed_utils.get_model_parallel_group() - ) - total_norm = total_norm ** 0.5 - return total_norm - - return self.optimizer.clip_grad_norm( - clip_norm, - aggregate_norm_fn=_aggregate_model_parallel_grad_norm, - ) - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - extra_state['rng_tracker_states'] \ - = get_cuda_rng_tracker().get_states() - super().save_checkpoint(filename, extra_state) - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - extra_state = super().load_checkpoint(filename, reset_optimizer=reset_optimizer, reset_lr_scheduler=reset_lr_scheduler, optimizer_overrides=optimizer_overrides, reset_meters=reset_meters) - if extra_state is not None and 'rng_tracker_states' in extra_state: - get_cuda_rng_tracker().set_states( - extra_state['rng_tracker_states']) - return extra_state diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn.h b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn.h deleted file mode 100644 index c7408eba007b424194618baa63726657e36875e3..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn.h +++ /dev/null @@ -1,64 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-#pragma once
-
-#include "ms_deform_attn_cpu.h"
-
-#ifdef WITH_CUDA
-#include "ms_deform_attn_cuda.h"
-#endif
-
-namespace groundingdino {
-
-at::Tensor
-ms_deform_attn_forward(
-    const at::Tensor &value,
-    const at::Tensor &spatial_shapes,
-    const at::Tensor &level_start_index,
-    const at::Tensor &sampling_loc,
-    const at::Tensor &attn_weight,
-    const int im2col_step)
-{
-    if (value.type().is_cuda())
-    {
-#ifdef WITH_CUDA
-        return ms_deform_attn_cuda_forward(
-            value, spatial_shapes, level_start_index, sampling_loc, attn_weight, im2col_step);
-#else
-        AT_ERROR("Not compiled with GPU support");
-#endif
-    }
-    AT_ERROR("Not implemented on the CPU");
-}
-
-std::vector<at::Tensor>
-ms_deform_attn_backward(
-    const at::Tensor &value,
-    const at::Tensor &spatial_shapes,
-    const at::Tensor &level_start_index,
-    const at::Tensor &sampling_loc,
-    const at::Tensor &attn_weight,
-    const at::Tensor &grad_output,
-    const int im2col_step)
-{
-    if (value.type().is_cuda())
-    {
-#ifdef WITH_CUDA
-        return ms_deform_attn_cuda_backward(
-            value, spatial_shapes, level_start_index, sampling_loc, attn_weight, grad_output, im2col_step);
-#else
-        AT_ERROR("Not compiled with GPU support");
-#endif
-    }
-    AT_ERROR("Not implemented on the CPU");
-}
-
-} // namespace groundingdino
\ No newline at end of file
diff --git a/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/modeling/mask_decoder.py b/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/modeling/mask_decoder.py
deleted file mode 100644
index 3e86f7cc9ad95582a08ef2531c68d03fa4af8d99..0000000000000000000000000000000000000000
--- a/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/modeling/mask_decoder.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from typing import List, Tuple, Type
-
-from .common import LayerNorm2d
-
-
-class MaskDecoder(nn.Module):
-    def __init__(
-        self,
-        *,
-        transformer_dim: int,
-        transformer: nn.Module,
-        num_multimask_outputs: int = 3,
-        activation: Type[nn.Module] = nn.GELU,
-        iou_head_depth: int = 3,
-        iou_head_hidden_dim: int = 256,
-    ) -> None:
-        """
-        Predicts masks given an image and prompt embeddings, using a
-        transformer architecture.
-
-        Arguments:
-          transformer_dim (int): the channel dimension of the transformer
-          transformer (nn.Module): the transformer used to predict masks
-          num_multimask_outputs (int): the number of masks to predict
-            when disambiguating masks
-          activation (nn.Module): the type of activation to use when
-            upscaling masks
-          iou_head_depth (int): the depth of the MLP used to predict
-            mask quality
-          iou_head_hidden_dim (int): the hidden dimension of the MLP
-            used to predict mask quality
-        """
-        super().__init__()
-        self.transformer_dim = transformer_dim
-        self.transformer = transformer
-
-        self.num_multimask_outputs = num_multimask_outputs
-
-        self.iou_token = nn.Embedding(1, transformer_dim)
-        self.num_mask_tokens = num_multimask_outputs + 1
-        self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim)
-
-        self.output_upscaling = nn.Sequential(
-            nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2),
-            LayerNorm2d(transformer_dim // 4),
-            activation(),
-            nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2),
-            activation(),
-        )
-        self.output_hypernetworks_mlps = nn.ModuleList(
-            [
-                MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3)
-                for i in range(self.num_mask_tokens)
-            ]
-        )
-
-        self.iou_prediction_head = MLP(
-            transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth
-        )
-
-    def forward(
-        self,
-        image_embeddings: torch.Tensor,
-        image_pe: torch.Tensor,
-        sparse_prompt_embeddings: torch.Tensor,
-        dense_prompt_embeddings: torch.Tensor,
-        multimask_output: bool,
-    ) -> Tuple[torch.Tensor, torch.Tensor]:
-        """
-        Predict masks given image and prompt embeddings.
-
-        Arguments:
-          image_embeddings (torch.Tensor): the embeddings from the image encoder
-          image_pe (torch.Tensor): positional encoding with the shape of image_embeddings
-          sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes
-          dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs
-          multimask_output (bool): Whether to return multiple masks or a single
-            mask.
-
-        Returns:
-          torch.Tensor: batched predicted masks
-          torch.Tensor: batched predictions of mask quality
-        """
-        masks, iou_pred = self.predict_masks(
-            image_embeddings=image_embeddings,
-            image_pe=image_pe,
-            sparse_prompt_embeddings=sparse_prompt_embeddings,
-            dense_prompt_embeddings=dense_prompt_embeddings,
-        )
-
-        # Select the correct mask or masks for output
-        if multimask_output:
-            mask_slice = slice(1, None)
-        else:
-            mask_slice = slice(0, 1)
-        masks = masks[:, mask_slice, :, :]
-        iou_pred = iou_pred[:, mask_slice]
-
-        # Prepare output
-        return masks, iou_pred
-
-    def predict_masks(
-        self,
-        image_embeddings: torch.Tensor,
-        image_pe: torch.Tensor,
-        sparse_prompt_embeddings: torch.Tensor,
-        dense_prompt_embeddings: torch.Tensor,
-    ) -> Tuple[torch.Tensor, torch.Tensor]:
-        """Predicts masks.
See 'forward' for more details.""" - # Concatenate output tokens - output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight], dim=0) - output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.size(0), -1, -1) - tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1) - - # Expand per-image data in batch direction to be per-mask - src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0) - src = src + dense_prompt_embeddings - pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0) - b, c, h, w = src.shape - - # Run the transformer - hs, src = self.transformer(src, pos_src, tokens) - iou_token_out = hs[:, 0, :] - mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :] - - # Upscale mask embeddings and predict masks using the mask tokens - src = src.transpose(1, 2).view(b, c, h, w) - upscaled_embedding = self.output_upscaling(src) - hyper_in_list: List[torch.Tensor] = [] - for i in range(self.num_mask_tokens): - hyper_in_list.append(self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :])) - hyper_in = torch.stack(hyper_in_list, dim=1) - b, c, h, w = upscaled_embedding.shape - masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w) - - # Generate mask quality predictions - iou_pred = self.iou_prediction_head(iou_token_out) - - return masks, iou_pred - - -# Lightly adapted from -# https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa -class MLP(nn.Module): - def __init__( - self, - input_dim: int, - hidden_dim: int, - output_dim: int, - num_layers: int, - sigmoid_output: bool = False, - ) -> None: - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - self.sigmoid_output = sigmoid_output - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - if self.sigmoid_output: - x = F.sigmoid(x) - return x diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py deleted file mode 100644 index 376fc038919aa2a5bd696141e7bb6025d4981306..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py +++ /dev/null @@ -1,112 +0,0 @@ -# -*- coding: utf8 -*- - -import torch.cuda as cuda -import torch.nn as nn -import torch -import collections -from torch.nn.parallel._functions import Gather - - -__all__ = ['UserScatteredDataParallel', 'user_scattered_collate', 'async_copy_to'] - - -def async_copy_to(obj, dev, main_stream=None): - if torch.is_tensor(obj): - v = obj.cuda(dev, non_blocking=True) - if main_stream is not None: - v.data.record_stream(main_stream) - return v - elif isinstance(obj, collections.Mapping): - return {k: async_copy_to(o, dev, main_stream) for k, o in obj.items()} - elif isinstance(obj, collections.Sequence): - return [async_copy_to(o, dev, main_stream) for o in obj] - else: - return obj - - -def dict_gather(outputs, target_device, dim=0): - """ - Gathers variables from different GPUs on a specified device - (-1 means the CPU), with dictionary support. 
- """ - def gather_map(outputs): - out = outputs[0] - if torch.is_tensor(out): - # MJY(20180330) HACK:: force nr_dims > 0 - if out.dim() == 0: - outputs = [o.unsqueeze(0) for o in outputs] - return Gather.apply(target_device, dim, *outputs) - elif out is None: - return None - elif isinstance(out, collections.Mapping): - return {k: gather_map([o[k] for o in outputs]) for k in out} - elif isinstance(out, collections.Sequence): - return type(out)(map(gather_map, zip(*outputs))) - return gather_map(outputs) - - -class DictGatherDataParallel(nn.DataParallel): - def gather(self, outputs, output_device): - return dict_gather(outputs, output_device, dim=self.dim) - - -class UserScatteredDataParallel(DictGatherDataParallel): - def scatter(self, inputs, kwargs, device_ids): - assert len(inputs) == 1 - inputs = inputs[0] - inputs = _async_copy_stream(inputs, device_ids) - inputs = [[i] for i in inputs] - assert len(kwargs) == 0 - kwargs = [{} for _ in range(len(inputs))] - - return inputs, kwargs - - -def user_scattered_collate(batch): - return batch - - -def _async_copy(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - for i, dev in zip(inputs, device_ids): - with cuda.device(dev): - outputs.append(async_copy_to(i, dev)) - - return tuple(outputs) - - -def _async_copy_stream(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - streams = [_get_stream(d) for d in device_ids] - for i, dev, stream in zip(inputs, device_ids, streams): - with cuda.device(dev): - main_stream = cuda.current_stream() - with cuda.stream(stream): - outputs.append(async_copy_to(i, dev, main_stream=main_stream)) - main_stream.wait_stream(stream) - - return outputs - - -"""Adapted from: torch/nn/parallel/_functions.py""" -# background streams used for copying -_streams = None - - -def _get_stream(device): - """Gets a background stream for copying between CPU and GPU""" - global _streams - if device == -1: - return None - if _streams is None: - _streams = [None] * cuda.device_count() - if _streams[device] is None: _streams[device] = cuda.Stream(device) - return _streams[device] diff --git a/spaces/Intoval/privateChatGPT/ChuanhuChatbot.py b/spaces/Intoval/privateChatGPT/ChuanhuChatbot.py deleted file mode 100644 index 45c7749733c29c5aa6231215277eca5d999ff1f9..0000000000000000000000000000000000000000 --- a/spaces/Intoval/privateChatGPT/ChuanhuChatbot.py +++ /dev/null @@ -1,452 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from modules import config -from modules.config import * -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.models import get_model - - -gr.Chatbot._postprocess_chat_messages = postprocess_chat_messages -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -def create_new_model(): - return get_model(model_name = MODELS[DEFAULT_MODEL], access_key = my_api_key)[0] - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - user_name = gr.State("") - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_question = gr.State("") - user_api_key = gr.State(my_api_key) - current_model = gr.State(create_new_model) - - topic = gr.State(i18n("未命名对话历史记录")) - - with 
gr.Row(): - gr.HTML(CHUANHU_TITLE, elem_id="app_title") - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - with gr.Row(elem_id="float_display"): - user_info = gr.Markdown(value="getting user info...", elem_id="user_info") - - # https://github.com/gradio-app/gradio/pull/3296 - def create_greeting(request: gr.Request): - if hasattr(request, "username") and request.username: # is not None or is not "" - logging.info(f"Get User Name: {request.username}") - return gr.Markdown.update(value=f"User: {request.username}"), request.username - else: - return gr.Markdown.update(value=f"User: default", visible=False), "" - demo.load(create_greeting, inputs=None, outputs=[user_info, user_name]) - - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(min_width=225, scale=12): - user_input = gr.Textbox( - elem_id="user_input_tb", - show_label=False, placeholder=i18n("在这里输入") - ).style(container=False) - with gr.Column(min_width=42, scale=1): - submitBtn = gr.Button(value="", variant="primary", elem_id="submit_btn") - cancelBtn = gr.Button(value="", variant="secondary", visible=False, elem_id="cancel_btn") - with gr.Row(): - emptyBtn = gr.Button( - i18n("🧹 新的对话"), - ) - retryBtn = gr.Button(i18n("🔄 重新生成")) - delFirstBtn = gr.Button(i18n("🗑️ 删除最旧对话")) - delLastBtn = gr.Button(i18n("🗑️ 删除最新对话")) - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label=i18n("模型")): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(user_api_key.value), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - if multi_api_key: - usageTxt = gr.Markdown(i18n("多账号模式已开启,无需输入key,可直接开始对话"), elem_id="usage_display", elem_classes="insert_block") - else: - usageTxt = gr.Markdown(i18n("**发送消息** 或 **提交key** 以显示额度"), elem_id="usage_display", elem_classes="insert_block") - model_select_dropdown = gr.Dropdown( - label=i18n("选择模型"), choices=MODELS, multiselect=False, value=MODELS[DEFAULT_MODEL], interactive=True - ) - lora_select_dropdown = gr.Dropdown( - label=i18n("选择LoRA模型"), choices=[], multiselect=False, interactive=True, visible=False - ) - with gr.Row(): - use_streaming_checkbox = gr.Checkbox( - label=i18n("实时传输回答"), value=True, visible=ENABLE_STREAMING_OPTION - ) - single_turn_checkbox = gr.Checkbox(label=i18n("单轮对话"), value=False) - use_websearch_checkbox = gr.Checkbox(label=i18n("使用在线搜索"), value=False) - language_select_dropdown = gr.Dropdown( - label=i18n("选择回复语言(针对搜索&索引功能)"), - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label=i18n("上传索引文件"), type="file") - two_column = gr.Checkbox(label=i18n("双栏pdf"), value=advance_docs["pdf"].get("two_column", False)) - # TODO: 公式ocr - # formula_ocr = gr.Checkbox(label=i18n("识别公式"), value=advance_docs["pdf"].get("formula_ocr", False)) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入System Prompt..."), - label="System prompt", - value=INITIAL_SYSTEM_PROMPT, - lines=10, - ).style(container=False) - with gr.Accordion(label=i18n("加载Prompt模板"), open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label=i18n("选择Prompt模板集合文件"), - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with 
gr.Column(scale=1): - templateRefreshBtn = gr.Button(i18n("🔄 刷新")) - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label=i18n("从Prompt模板中加载"), - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - ).style(container=False) - - with gr.Tab(label=i18n("保存/加载")): - with gr.Accordion(label=i18n("保存/加载对话历史记录"), open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label=i18n("从列表中加载对话"), - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button(i18n("🔄 刷新")) - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=i18n("设置文件名: 默认为.json,可选为.md"), - label=i18n("设置保存文件名"), - value=i18n("对话历史记录"), - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button(i18n("💾 保存对话")) - exportMarkdownBtn = gr.Button(i18n("📝 导出为Markdown")) - gr.Markdown(i18n("默认保存于history文件夹")) - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label=i18n("高级")): - gr.Markdown(i18n("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置")) - gr.HTML(APPEARANCE_SWITCHER, elem_classes="insert_block") - with gr.Accordion(i18n("参数"), open=False): - temperature_slider = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="temperature", - ) - top_p_slider = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="top-p", - ) - n_choices_slider = gr.Slider( - minimum=1, - maximum=10, - value=1, - step=1, - interactive=True, - label="n choices", - ) - stop_sequence_txt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入停止符,用英文逗号隔开..."), - label="stop", - value="", - lines=1, - ) - max_context_length_slider = gr.Slider( - minimum=1, - maximum=32768, - value=2000, - step=1, - interactive=True, - label="max context", - ) - max_generation_slider = gr.Slider( - minimum=1, - maximum=32768, - value=1000, - step=1, - interactive=True, - label="max generations", - ) - presence_penalty_slider = gr.Slider( - minimum=-2.0, - maximum=2.0, - value=0.0, - step=0.01, - interactive=True, - label="presence penalty", - ) - frequency_penalty_slider = gr.Slider( - minimum=-2.0, - maximum=2.0, - value=0.0, - step=0.01, - interactive=True, - label="frequency penalty", - ) - logit_bias_txt = gr.Textbox( - show_label=True, - placeholder=f"word:likelihood", - label="logit bias", - value="", - lines=1, - ) - user_identifier_txt = gr.Textbox( - show_label=True, - placeholder=i18n("用于定位滥用行为"), - label=i18n("用户名"), - value=user_name.value, - lines=1, - ) - - with gr.Accordion(i18n("网络设置"), open=False): - # 优先展示自定义的api_host - apihostTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入API-Host..."), - label="API-Host", - value=config.api_host or shared.API_HOST, - lines=1, - ) - changeAPIURLBtn = gr.Button(i18n("🔄 切换API地址")) - proxyTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入代理地址..."), - label=i18n("代理地址(示例:http://127.0.0.1:10809)"), - value="", - lines=2, - ) - changeProxyBtn = gr.Button(i18n("🔄 设置代理地址")) - default_btn = gr.Button(i18n("🔙 恢复默认设置")) - - gr.Markdown(CHUANHU_DESCRIPTION, elem_id="description") - gr.HTML(FOOTER.format(versions=versions_html()), elem_id="footer") - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - current_model, - user_question, - chatbot, - use_streaming_checkbox, - 
use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, status_display], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=billing_info, inputs=[current_model], outputs=[usageTxt], show_progress=False - ) - - load_history_from_file_args = dict( - fn=load_chat_history, - inputs=[current_model, historyFileSelectDropdown, chatbot, user_name], - outputs=[saveFileName, systemPromptTxt, chatbot] - ) - - - # Chatbot - cancelBtn.click(interrupt, [current_model], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - index_files.change(handle_file_upload, [current_model, index_files, chatbot], [index_files, chatbot, status_display]) - - emptyBtn.click( - reset, - inputs=[current_model], - outputs=[chatbot, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_textbox_args) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - current_model, - chatbot, - use_streaming_checkbox, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - [chatbot, status_display], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [current_model], - [status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [current_model, chatbot], - [chatbot, status_display], - show_progress=False - ) - - two_column.change(update_doc_config, [two_column], None) - - # LLM Models - keyTxt.change(set_key, [current_model, keyTxt], [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - single_turn_checkbox.change(set_single_turn, [current_model, single_turn_checkbox], None) - model_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt], [current_model, status_display, lora_select_dropdown], show_progress=True) - lora_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt], [current_model, status_display], show_progress=True) - - # Template - systemPromptTxt.change(set_system_prompt, [current_model, systemPromptTxt], None) - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [current_model, saveFileName, chatbot, user_name], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, [gr.State(False), user_name], 
[historyFileSelectDropdown])
    exportMarkdownBtn.click(
        export_markdown,
        [current_model, saveFileName, chatbot, user_name],
        downloadFile,
        show_progress=True,
    )
    historyRefreshBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown])
    historyFileSelectDropdown.change(**load_history_from_file_args)
    downloadFile.change(**load_history_from_file_args)

    # Advanced
    max_context_length_slider.change(set_token_upper_limit, [current_model, max_context_length_slider], None)
    temperature_slider.change(set_temperature, [current_model, temperature_slider], None)
    top_p_slider.change(set_top_p, [current_model, top_p_slider], None)
    n_choices_slider.change(set_n_choices, [current_model, n_choices_slider], None)
    stop_sequence_txt.change(set_stop_sequence, [current_model, stop_sequence_txt], None)
    max_generation_slider.change(set_max_tokens, [current_model, max_generation_slider], None)
    presence_penalty_slider.change(set_presence_penalty, [current_model, presence_penalty_slider], None)
    frequency_penalty_slider.change(set_frequency_penalty, [current_model, frequency_penalty_slider], None)
    logit_bias_txt.change(set_logit_bias, [current_model, logit_bias_txt], None)
    user_identifier_txt.change(set_user_identifier, [current_model, user_identifier_txt], None)

    default_btn.click(
        reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True
    )
    changeAPIURLBtn.click(
        change_api_host,
        [apihostTxt],
        [status_display],
        show_progress=True,
    )
    changeProxyBtn.click(
        change_proxy,
        [proxyTxt],
        [status_display],
        show_progress=True,
    )

logging.info(
    colorama.Back.GREEN
    + "\nChuanhu reminder: visit http://localhost:7860 to view the interface"
    + colorama.Style.RESET_ALL
)
# The local server is enabled by default, reachable directly via IP, and no public share link is created by default
demo.title = i18n("川虎Chat 🚀")

if __name__ == "__main__":
    reload_javascript()
    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
        auth=auth_list if authflag else None,
        favicon_path="./assets/favicon.ico",
        inbrowser=not dockerflag,  # do not open inbrowser when running under Docker
    )
    # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False)  # custom port
    # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("fill in username here", "fill in password here"))  # set a username and password
    # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("fill in username here", "fill in password here"))  # suitable for an Nginx reverse proxy
diff --git a/spaces/IsaacK/streamlit-test/pages/join.py b/spaces/IsaacK/streamlit-test/pages/join.py
deleted file mode 100644
index 5e2861f7cb2c839f0259386547532b2ddea1bc6e..0000000000000000000000000000000000000000
--- a/spaces/IsaacK/streamlit-test/pages/join.py
+++ /dev/null
@@ -1,63 +0,0 @@
-from pages.utils import empty
-import streamlit as st
-import sqlite3
-import datetime
-
-# Custom imports
-from pages.utils import *
-from authenticator import Hasher
-
-def app():
-
-    drop_table = False
-
-    DATABASE = db_path('quiz_maker.db')
-    c, conn = db_connect(DATABASE)
-
-    if drop_table:
-        st.write("User Table Dropped.")
-        query = "DROP TABLE IF EXISTS users"
-        c.execute(query)
-
-    query = "CREATE TABLE IF NOT EXISTS users(uct_iso, firstname, lastname, username, email, hashed_password)"
-    c.execute(query)
-
-    usernames = []
-    emails = []
-
-    query = "SELECT username, email FROM users"
-    for items in c.execute(query):
-        usernames.append(items[0])
-        emails.append(items[1])
-
-    st.markdown("## Join")
-
-    with st.form("join_form"):
-        first_name = st.text_input("First Name")
-        last_name = st.text_input("Last Name")
-        user_name = st.text_input("User Name")
-        email = st.text_input("Email")
-        password1 = st.text_input("Password", type="password")
-        password2 = st.text_input("Confirm Password", type="password")
-
-        submitted = st.form_submit_button("Submit")
-
-        if empty(first_name) or empty(last_name) or empty(user_name) or \
-           empty(email) or empty(password1) or empty(password2):
-            st.warning("Complete all inputs.")
-        elif submitted and password1.strip() != password2.strip():
-            st.warning("The passwords do not match.")
-        elif user_name in usernames:
-            st.warning("This user name already exists.")
-        elif email in emails:
-            st.warning("This email is already being used.")
-        else:
-            uct_iso = datetime.datetime.utcnow().isoformat()
-            hashed_password = Hasher(password1).generate()
-            st.write(first_name, last_name, user_name, email, hashed_password)
-            query = "INSERT INTO users(uct_iso, firstname, lastname, username, email, hashed_password) VALUES(?, ?, ?, ?, ?, ?)"
-            c.execute(query, (uct_iso, first_name, last_name, user_name, email, hashed_password))
-            conn.commit()
-            conn.close()
-            st.success("You have joined.")
\ No newline at end of file
diff --git a/spaces/JUNGU/VToonify/vtoonify/model/vgg.py b/spaces/JUNGU/VToonify/vtoonify/model/vgg.py
deleted file mode 100644
index a1043d5bd8bdd0d1484d2270ae0d33c29495856c..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/VToonify/vtoonify/model/vgg.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import torch
-import torch.nn as nn
-import torchvision

-# VGG architecture, used for the perceptual loss with a pretrained VGG network
-class VGG19(torch.nn.Module):
-    def __init__(self, requires_grad=False):
-        super().__init__()
-        vgg_pretrained_features = torchvision.models.vgg19(pretrained=True).features
-        self.slice1 = torch.nn.Sequential()
-        self.slice2 = torch.nn.Sequential()
-        self.slice3 = torch.nn.Sequential()
-        self.slice4 = torch.nn.Sequential()
-        self.slice5 = torch.nn.Sequential()
-        self.slice6 = torch.nn.Sequential()
-        for x in range(2):
-            self.slice1.add_module(str(x), vgg_pretrained_features[x])
-        for x in range(2, 7):
-            self.slice2.add_module(str(x), vgg_pretrained_features[x])
-        for x in range(7, 12):
-            self.slice3.add_module(str(x), vgg_pretrained_features[x])
-        for x in range(12, 21):
-            self.slice4.add_module(str(x), vgg_pretrained_features[x])
-        for x in range(21, 32):
-            self.slice5.add_module(str(x), vgg_pretrained_features[x])
-        for x in range(32, 36):
-            self.slice6.add_module(str(x), vgg_pretrained_features[x])
-        if not requires_grad:
-            for param in self.parameters():
-                param.requires_grad = False
-
-        self.pool = nn.AdaptiveAvgPool2d(output_size=1)
-
-        self.mean = torch.tensor([0.485, 0.456, 0.406]).view(1,-1, 1, 1).cuda() * 2 - 1
-        self.std = torch.tensor([0.229, 0.224, 0.225]).view(1,-1, 1, 1).cuda() * 2
-
-    def forward(self, X):  # relui_1
-        X = (X-self.mean)/self.std
-        h_relu1 = self.slice1(X)
-        h_relu2 = self.slice2(h_relu1)
-        h_relu3 = self.slice3(h_relu2)
-        h_relu4 = self.slice4(h_relu3)
-        h_relu5 = self.slice5[:-2](h_relu4)
-        out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5]
-        return out
-
-# Perceptual loss that uses a pretrained VGG network
-class VGGLoss(nn.Module):
-    def __init__(self):
-        super(VGGLoss, self).__init__()
-        self.vgg = VGG19().cuda()
-        self.criterion = nn.L1Loss()
-        self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0]
-
-    def forward(self, x, y):
-        x_vgg, y_vgg = self.vgg(x), self.vgg(y)
-        loss = 0
-        for i in range(len(x_vgg)):
-            loss += self.weights[i] * self.criterion(x_vgg[i],
y_vgg[i].detach()) - return loss \ No newline at end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_2d_condition.py b/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_2d_condition.py deleted file mode 100644 index f9d3402d0619847c5d218dadb6dea080a992ab98..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_2d_condition.py +++ /dev/null @@ -1,381 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.utils.checkpoint - -from ..configuration_utils import ConfigMixin, register_to_config -from ..modeling_utils import ModelMixin -from ..utils import BaseOutput, logging -from .embeddings import TimestepEmbedding, Timesteps -from .unet_2d_blocks import ( - CrossAttnDownBlock2D, - CrossAttnUpBlock2D, - DownBlock2D, - UNetMidBlock2DCrossAttn, - UpBlock2D, - get_down_block, - get_up_block, -) - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -class UNet2DConditionOutput(BaseOutput): - """ - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Hidden states conditioned on `encoder_hidden_states` input. Output of last layer of model. - """ - - sample: torch.FloatTensor - - -class UNet2DConditionModel(ModelMixin, ConfigMixin): - r""" - UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep - and returns sample shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the models (such as downloading or saving, etc.) - - Parameters: - sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): - Height and width of input/output sample. - in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. - center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. - flip_sin_to_cos (`bool`, *optional*, defaults to `False`): - Whether to flip the sin to cos in the time embedding. - freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`): - The tuple of downsample blocks to use. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)`): - The tuple of upsample blocks to use. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): - The tuple of output channels for each block. 
- layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. - downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. - mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. - norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. - cross_attention_dim (`int`, *optional*, defaults to 1280): The dimension of the cross attention features. - attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. - """ - - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - sample_size: Optional[int] = None, - in_channels: int = 4, - out_channels: int = 4, - center_input_sample: bool = False, - flip_sin_to_cos: bool = True, - freq_shift: int = 0, - down_block_types: Tuple[str] = ( - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "DownBlock2D", - ), - up_block_types: Tuple[str] = ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"), - only_cross_attention: Union[bool, Tuple[bool]] = False, - block_out_channels: Tuple[int] = (320, 640, 1280, 1280), - layers_per_block: int = 2, - downsample_padding: int = 1, - mid_block_scale_factor: float = 1, - act_fn: str = "silu", - norm_num_groups: int = 32, - norm_eps: float = 1e-5, - cross_attention_dim: int = 1280, - attention_head_dim: Union[int, Tuple[int]] = 8, - dual_cross_attention: bool = False, - use_linear_projection: bool = False, - num_class_embeds: Optional[int] = None, - ): - super().__init__() - - self.sample_size = sample_size - time_embed_dim = block_out_channels[0] * 4 - - # input - self.conv_in = nn.Conv2d(in_channels, block_out_channels[0], kernel_size=3, padding=(1, 1)) - - # time - self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift) - timestep_input_dim = block_out_channels[0] - - self.time_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim) - - # class embedding - if num_class_embeds is not None: - self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim) - - self.down_blocks = nn.ModuleList([]) - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - if isinstance(only_cross_attention, bool): - only_cross_attention = [only_cross_attention] * len(down_block_types) - - if isinstance(attention_head_dim, int): - attention_head_dim = (attention_head_dim,) * len(down_block_types) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - temb_channels=time_embed_dim, - add_downsample=not is_final_block, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim[i], - downsample_padding=downsample_padding, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - ) - self.down_blocks.append(down_block) - - # mid 
- self.mid_block = UNetMidBlock2DCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=time_embed_dim, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - resnet_time_scale_shift="default", - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim[-1], - resnet_groups=norm_num_groups, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - ) - - # count how many layers upsample the images - self.num_upsamplers = 0 - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - reversed_attention_head_dim = list(reversed(attention_head_dim)) - only_cross_attention = list(reversed(only_cross_attention)) - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - is_final_block = i == len(block_out_channels) - 1 - - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] - - # add upsample block for all BUT final layer - if not is_final_block: - add_upsample = True - self.num_upsamplers += 1 - else: - add_upsample = False - - up_block = get_up_block( - up_block_type, - num_layers=layers_per_block + 1, - in_channels=input_channel, - out_channels=output_channel, - prev_output_channel=prev_output_channel, - temb_channels=time_embed_dim, - add_upsample=add_upsample, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=reversed_attention_head_dim[i], - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps) - self.conv_act = nn.SiLU() - self.conv_out = nn.Conv2d(block_out_channels[0], out_channels, kernel_size=3, padding=1) - - def set_attention_slice(self, slice_size): - head_dims = self.config.attention_head_dim - head_dims = [head_dims] if isinstance(head_dims, int) else head_dims - if slice_size is not None and any(dim % slice_size != 0 for dim in head_dims): - raise ValueError( - f"Make sure slice_size {slice_size} is a common divisor of " - f"the number of heads used in cross_attention: {head_dims}" - ) - if slice_size is not None and slice_size > min(head_dims): - raise ValueError( - f"slice_size {slice_size} has to be smaller or equal to " - f"the lowest number of heads used in cross_attention: min({head_dims}) = {min(head_dims)}" - ) - - for block in self.down_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_attention_slice(slice_size) - - self.mid_block.set_attention_slice(slice_size) - - for block in self.up_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_attention_slice(slice_size) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (CrossAttnDownBlock2D, DownBlock2D, CrossAttnUpBlock2D, UpBlock2D)): - module.gradient_checkpointing = value - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - encoder_hidden_states: torch.Tensor, - class_labels: Optional[torch.Tensor] = None, - return_dict: bool = True, - ) -> Union[UNet2DConditionOutput, Tuple]: - r""" - Args: - 
sample (`torch.FloatTensor`): (batch, channel, height, width) noisy inputs tensor
-            timestep (`torch.FloatTensor` or `float` or `int`): (batch) timesteps
-            encoder_hidden_states (`torch.FloatTensor`): (batch, sequence_length, feature_dim) encoder hidden states
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple.
-
-        Returns:
-            [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`:
-            [`~models.unet_2d_condition.UNet2DConditionOutput`] if `return_dict` is True, otherwise a `tuple`. When
-            returning a tuple, the first element is the sample tensor.
-        """
-        # By default samples have to be at least a multiple of the overall upsampling factor.
-        # The overall upsampling factor is equal to 2 ** (# num of upsampling layers).
-        # However, the upsampling interpolation output size can be forced to fit any upsampling size
-        # on the fly if necessary.
-        default_overall_up_factor = 2**self.num_upsamplers
-
-        # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor`
-        forward_upsample_size = False
-        upsample_size = None
-
-        if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]):
-            logger.info("Forward upsample size to force interpolation output size.")
-            forward_upsample_size = True
-
-        # 0. center input if necessary
-        if self.config.center_input_sample:
-            sample = 2 * sample - 1.0
-
-        # 1. time
-        timesteps = timestep
-        if not torch.is_tensor(timesteps):
-            # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can
-            # This would be a good case for the `match` statement (Python 3.10+)
-            is_mps = sample.device.type == "mps"
-            if isinstance(timestep, float):
-                dtype = torch.float32 if is_mps else torch.float64
-            else:
-                dtype = torch.int32 if is_mps else torch.int64
-            timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device)
-        elif len(timesteps.shape) == 0:
-            timesteps = timesteps[None].to(sample.device)
-
-        # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
-        timesteps = timesteps.expand(sample.shape[0])
-
-        t_emb = self.time_proj(timesteps)
-
-        # timesteps does not contain any weights and will always return f32 tensors
-        # but time_embedding might actually be running in fp16. so we need to cast here.
-        # there might be better ways to encapsulate this.
-        t_emb = t_emb.to(dtype=self.dtype)
-        emb = self.time_embedding(t_emb)
-
-        if self.config.num_class_embeds is not None:
-            if class_labels is None:
-                raise ValueError("class_labels should be provided when num_class_embeds > 0")
-            class_emb = self.class_embedding(class_labels).to(dtype=self.dtype)
-            emb = emb + class_emb
-
-        # 2. pre-process
-        sample = self.conv_in(sample)
-
-        # 3. down
-        down_block_res_samples = (sample,)
-        for downsample_block in self.down_blocks:
-            if hasattr(downsample_block, "attentions") and downsample_block.attentions is not None:
-                sample, res_samples = downsample_block(
-                    hidden_states=sample,
-                    temb=emb,
-                    encoder_hidden_states=encoder_hidden_states,
-                )
-            else:
-                sample, res_samples = downsample_block(hidden_states=sample, temb=emb)
-
-            down_block_res_samples += res_samples
-
-        # 4. mid
-        sample = self.mid_block(sample, emb, encoder_hidden_states=encoder_hidden_states)
-
-        # 5. up
-        for i, upsample_block in enumerate(self.up_blocks):
-            is_final_block = i == len(self.up_blocks) - 1
-
-            res_samples = down_block_res_samples[-len(upsample_block.resnets) :]
-            down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)]
-
-            # if we have not reached the final block and need to forward the
-            # upsample size, we do it here
-            if not is_final_block and forward_upsample_size:
-                upsample_size = down_block_res_samples[-1].shape[2:]
-
-            if hasattr(upsample_block, "attentions") and upsample_block.attentions is not None:
-                sample = upsample_block(
-                    hidden_states=sample,
-                    temb=emb,
-                    res_hidden_states_tuple=res_samples,
-                    encoder_hidden_states=encoder_hidden_states,
-                    upsample_size=upsample_size,
-                )
-            else:
-                sample = upsample_block(
-                    hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size
-                )
-        # 6. post-process
-        sample = self.conv_norm_out(sample)
-        sample = self.conv_act(sample)
-        sample = self.conv_out(sample)
-
-        if not return_dict:
-            return (sample,)
-
-        return UNet2DConditionOutput(sample=sample)
diff --git a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
deleted file mode 100644
index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000
--- a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class HarvestF0Predictor(F0Predictor):
-    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
-        self.hop_length = hop_length
-        self.f0_min = f0_min
-        self.f0_max = f0_max
-        self.sampling_rate = sampling_rate
-
-    def interpolate_f0(self, f0):
-        """
-        Interpolate F0 across unvoiced frames and return a voiced/unvoiced mask.
-        """
-
-        data = np.reshape(f0, (f0.size, 1))
-
-        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
-        vuv_vector[data > 0.0] = 1.0
-        vuv_vector[data <= 0.0] = 0.0
-
-        ip_data = data
-
-        frame_number = data.size
-        last_value = 0.0
-        for i in range(frame_number):
-            if data[i] <= 0.0:
-                j = i + 1
-                for j in range(i + 1, frame_number):
-                    if data[j] > 0.0:
-                        break
-                if j < frame_number - 1:
-                    if last_value > 0.0:
-                        step = (data[j] - data[i - 1]) / float(j - i)
-                        for k in range(i, j):
-                            ip_data[k] = data[i - 1] + step * (k - i + 1)
-                    else:
-                        for k in range(i, j):
-                            ip_data[k] = data[j]
-                else:
-                    for k in range(i, frame_number):
-                        ip_data[k] = last_value
-            else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
-                last_value = data[i]
-
-        return ip_data[:, 0], vuv_vector[:, 0]
-
-    def resize_f0(self, x, target_len):
-        source = np.array(x)
-        source[source < 0.001] = np.nan
-        target = np.interp(
-            np.arange(0, len(source) * target_len, len(source)) / target_len,
-            np.arange(0, len(source)),
-            source,
-        )
-        res = np.nan_to_num(target)
-        return res
-
-    def compute_f0(self, wav, p_len=None):
-        if p_len is None:
-            p_len = wav.shape[0] // self.hop_length
-        # fs must be the audio sampling rate (the original passed hop_length here by mistake)
-        f0, t = pyworld.harvest(
-            wav.astype(np.double),
-            fs=self.sampling_rate,
-            f0_ceil=self.f0_max,
-            f0_floor=self.f0_min,
-            frame_period=1000 * self.hop_length / self.sampling_rate,
-        )
-        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
-        return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
-    def compute_f0_uv(self, wav, p_len=None):
-        if p_len is None:
-            p_len = wav.shape[0] // self.hop_length
-        f0, t = pyworld.harvest(
-            wav.astype(np.double),
-            fs=self.sampling_rate,
-            f0_floor=self.f0_min,
-            f0_ceil=self.f0_max,
-            frame_period=1000 * self.hop_length / self.sampling_rate,
-        )
-        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
-        return self.interpolate_f0(self.resize_f0(f0, p_len))
diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/data/audio.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/data/audio.py
deleted file mode 100644
index 1b81607fc064542017875c7993ce40ed6f72f06a..0000000000000000000000000000000000000000
--- a/spaces/JohnnyPittt/audio-styling/deepafx_st/data/audio.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import os
-import glob
-import torch
-import warnings
-import torchaudio
-import pyloudnorm as pyln
-
-
-class AudioFile(object):
-    def __init__(self, filepath, preload=False, half=False, target_loudness=None):
-        """Base class for audio files to handle metadata and loading.
-
-        Args:
-            filepath (str): Path to audio file to load from disk.
-            preload (bool, optional): If set, load audio data into RAM. Default: False
-            half (bool, optional): If set, store audio data as float16 to save space. Default: False
-            target_loudness (float, optional): Loudness normalize to dB LUFS value. Default: None
-        """
-        super().__init__()
-
-        self.filepath = filepath
-        self.half = half
-        self.target_loudness = target_loudness
-        self.loaded = False
-
-        if preload:
-            self.load()
-            num_frames = self.audio.shape[-1]
-            num_channels = self.audio.shape[0]
-        else:
-            metadata = torchaudio.info(filepath)
-            audio = None
-            self.sample_rate = metadata.sample_rate
-            num_frames = metadata.num_frames
-            num_channels = metadata.num_channels
-
-        self.num_frames = num_frames
-        self.num_channels = num_channels
-
-    def load(self):
-        audio, sr = torchaudio.load(self.filepath, normalize=True)
-        self.audio = audio
-        self.sample_rate = sr
-
-        if self.target_loudness is not None:
-            self.loudness_normalize()
-
-        if self.half:
-            self.audio = audio.half()
-
-        self.loaded = True
-
-    def loudness_normalize(self):
-        meter = pyln.Meter(self.sample_rate)
-
-        # convert mono to stereo
-        if self.audio.shape[0] == 1:
-            tmp_audio = self.audio.repeat(2, 1)
-        else:
-            tmp_audio = self.audio
-
-        # measure integrated loudness
-        input_loudness = meter.integrated_loudness(tmp_audio.numpy().T)
-
-        # compute and apply gain
-        gain_dB = self.target_loudness - input_loudness
-        gain_ln = 10 ** (gain_dB / 20.0)
-        self.audio *= gain_ln
-
-        # check for potentially clipped samples
-        if self.audio.abs().max() >= 1.0:
-            warnings.warn("Possible clipped samples in output.")
-
-
-class AudioFileDataset(torch.utils.data.Dataset):
-    """Base class for audio file datasets loaded from disk.
-
-    Datasets can be either paired or unpaired. A paired dataset requires passing the `target_dirs` paths.
-
-    Args:
-        input_dirs (List[str]): List of paths to the directories containing input audio files.
-        target_dirs (List[str], optional): List of paths to the directories containing corresponding audio files. Default: []
-        subset (str, optional): Dataset subset. One of ["train", "val", "test"]. Default: "train"
-        length (int, optional): Number of samples to load for each example. Default: 65536
-        normalize (bool, optional): Normalize audio amplitude to -1 to 1. Default: True
-        train_per (float, optional): Fraction of the files to use for training subset. Default: 0.8
-        val_per (float, optional): Fraction of the files to use for validation subset. Default: 0.1
-        preload (bool, optional): Read audio files into RAM at the start of training. Default: False
-        num_examples_per_epoch (int, optional): Define an epoch as a certain number of audio examples. Default: 10000
-        ext (str, optional): Expected audio file extension. Default: "wav"
-    """
-
-    def __init__(
-        self,
-        input_dirs,
-        target_dirs=[],
-        subset="train",
-        length=65536,
-        normalize=True,
-        train_per=0.8,
-        val_per=0.1,
-        preload=False,
-        num_examples_per_epoch=10000,
-        ext="wav",
-    ):
-        super().__init__()
-        self.input_dirs = input_dirs
-        self.target_dirs = target_dirs
-        self.subset = subset
-        self.length = length
-        self.normalize = normalize
-        self.train_per = train_per
-        self.val_per = val_per
-        self.preload = preload
-        self.num_examples_per_epoch = num_examples_per_epoch
-        self.ext = ext
-
-        self.input_filepaths = []
-        for input_dir in input_dirs:
-            search_path = os.path.join(input_dir, f"*.{ext}")
-            self.input_filepaths += glob.glob(search_path)
-        self.input_filepaths = sorted(self.input_filepaths)
-
-        self.target_filepaths = []
-        for target_dir in target_dirs:
-            search_path = os.path.join(target_dir, f"*.{ext}")
-            self.target_filepaths += glob.glob(search_path)
-        self.target_filepaths = sorted(self.target_filepaths)
-
-        # both sets must have the same number of files in a paired dataset
-        if len(target_dirs) > 0:
-            assert len(self.target_filepaths) == len(self.input_filepaths)
-
-        # get details about audio files
-        # note: AudioFile takes no `normalize` argument; torchaudio.load already normalizes
-        self.input_files = []
-        for input_filepath in self.input_filepaths:
-            self.input_files.append(
-                AudioFile(input_filepath, preload=preload)
-            )
-
-        self.target_files = []
-        for target_filepath in self.target_filepaths:
-            self.target_files.append(
-                AudioFile(target_filepath, preload=preload)
-            )
-
-    def __len__(self):
-        return self.num_examples_per_epoch
-
-    def __getitem__(self, idx):
-        """ """
-
-        # index the current audio file (the epoch-level index may exceed the file count)
-        idx = idx % len(self.input_files)
-        input_file = self.input_files[idx]
-
-        # load the audio data if needed
-        if not input_file.loaded:
-            input_file.load()
-
-        # get a random patch of size `self.length`
-        start_idx = int(torch.rand(1).item() * (input_file.num_frames - self.length))
-        stop_idx = start_idx + self.length
-        input_audio = input_file.audio[:, start_idx:stop_idx]
-
-        # if there is a target file, get it (and load)
-        if len(self.target_files) > 0:
-            target_file = self.target_files[idx]
-
-            if not target_file.loaded:
-                target_file.load()
-
-            # use the same cropping indices
-            target_audio = target_file.audio[:, start_idx:stop_idx]
-
-            return input_audio, target_audio
-        else:
-            return input_audio
diff --git a/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/index.html b/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/index.html
deleted file mode 100644
index 49a4838ed90eab718c348ce380c9c1b295296f3a..0000000000000000000000000000000000000000
--- a/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/index.html
+++ /dev/null
@@ -1,47 +0,0 @@
-Medical Image Classification with MONAI - Pathology Nuclei Segmentation Classification
-Kalbe Digital Lab
-
-Overview
-A simultaneous segmentation of nuclei within multitissue histology images.
-References: https://arxiv.org/abs/1812.06499
-
-Dataset
-The model is trained with multi-tissue histology images based on CoNSeP dataset.
-  • Target: Nuclei
-  • Task: Instance Segmentation
-  • Modality: RGB images
-
-Model Architecture
-Overview approach for simultaneous nuclear instance segmentation.
-[image: model-architecture]
-
-Demo
-Please select or upload a histology image to see nuclei segmentation capabilities of this model
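The page above is only the static front end of the Space; the model call itself is not part of this diff. As a rough, purely illustrative sketch of how such a nuclei-segmentation demo is typically wired, tiled (sliding-window) inference keeps large histology slides within memory. The tile size, model handle, and single-tensor output convention below are assumptions, not code from this repository; only `monai.inferers.sliding_window_inference` is a real MONAI API.

```python
# Illustrative sketch, not code from this Space: tiled inference over a histology image.
import torch
from monai.inferers import sliding_window_inference

def segment_nuclei(image: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    """image: (1, 3, H, W) float tensor in [0, 1]; returns a (1, H, W) label map."""
    model.eval()
    with torch.no_grad():
        # run the network tile-by-tile so arbitrarily large slides fit in memory
        logits = sliding_window_inference(
            inputs=image,
            roi_size=(256, 256),  # tile size is an assumption
            sw_batch_size=4,
            predictor=model,
            overlap=0.25,
        )
    return logits.argmax(dim=1)
```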
      - - diff --git a/spaces/Kreaols/ChuanhuChatGPT/modules/models/ChuanhuAgent.py b/spaces/Kreaols/ChuanhuChatGPT/modules/models/ChuanhuAgent.py deleted file mode 100644 index c3cb944d3d4a5f60f1402445dc52a3501f466916..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/modules/models/ChuanhuAgent.py +++ /dev/null @@ -1,216 +0,0 @@ -from langchain.chains.summarize import load_summarize_chain -from langchain import PromptTemplate, LLMChain -from langchain.chat_models import ChatOpenAI -from langchain.prompts import PromptTemplate -from langchain.text_splitter import TokenTextSplitter -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import FAISS -from langchain.chains import RetrievalQA -from langchain.agents import load_tools -from langchain.agents import initialize_agent -from langchain.agents import AgentType -from langchain.docstore.document import Document -from langchain.tools import BaseTool, StructuredTool, Tool, tool -from langchain.callbacks.stdout import StdOutCallbackHandler -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler -from langchain.callbacks.manager import BaseCallbackManager -from duckduckgo_search import DDGS -from itertools import islice - -from typing import Any, Dict, List, Optional, Union - -from langchain.callbacks.base import BaseCallbackHandler -from langchain.input import print_text -from langchain.schema import AgentAction, AgentFinish, LLMResult - -from pydantic import BaseModel, Field - -import requests -from bs4 import BeautifulSoup -from threading import Thread, Condition -from collections import deque - -from .base_model import BaseLLMModel, CallbackToIterator, ChuanhuCallbackHandler -from ..config import default_chuanhu_assistant_model -from ..presets import SUMMARIZE_PROMPT, i18n -from ..index_func import construct_index - -from langchain.callbacks import get_openai_callback -import os -import gradio as gr -import logging - -class GoogleSearchInput(BaseModel): - keywords: str = Field(description="keywords to search") - -class WebBrowsingInput(BaseModel): - url: str = Field(description="URL of a webpage") - -class WebAskingInput(BaseModel): - url: str = Field(description="URL of a webpage") - question: str = Field(description="Question that you want to know the answer to, based on the webpage's content.") - - -class ChuanhuAgent_Client(BaseLLMModel): - def __init__(self, model_name, openai_api_key, user_name="") -> None: - super().__init__(model_name=model_name, user=user_name) - self.text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30) - self.api_key = openai_api_key - self.llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name=default_chuanhu_assistant_model, openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - self.cheap_llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name="gpt-3.5-turbo", openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - PROMPT = PromptTemplate(template=SUMMARIZE_PROMPT, input_variables=["text"]) - self.summarize_chain = load_summarize_chain(self.cheap_llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT) - self.index_summary = None - self.index = None - if "Pro" in self.model_name: - self.tools = load_tools(["serpapi", "google-search-results-json", "llm-math", "arxiv", "wikipedia", "wolfram-alpha"], llm=self.llm) - else: - self.tools = load_tools(["ddg-search", "llm-math", "arxiv", "wikipedia"], llm=self.llm) - self.tools.append( 
-            Tool.from_function(
-                func=self.google_search_simple,
-                name="Google Search JSON",
-                description="useful when you need to search the web.",
-                args_schema=GoogleSearchInput
-            )
-        )
-
-        self.tools.append(
-            Tool.from_function(
-                func=self.summary_url,
-                name="Summary Webpage",
-                description="useful when you need to know the overall content of a webpage.",
-                args_schema=WebBrowsingInput
-            )
-        )
-
-        self.tools.append(
-            StructuredTool.from_function(
-                func=self.ask_url,
-                name="Ask Webpage",
-                description="useful when you need to ask detailed questions about a webpage.",
-                args_schema=WebAskingInput
-            )
-        )
-
-    def google_search_simple(self, query):
-        results = []
-        with DDGS() as ddgs:
-            # search for the caller's query (the original searched a hard-coded test string)
-            ddgs_gen = ddgs.text(query, backend="lite")
-            for r in islice(ddgs_gen, 10):
-                results.append({
-                    "title": r["title"],
-                    "link": r["href"],
-                    "snippet": r["body"]
-                })
-        return str(results)
-
-    def handle_file_upload(self, files, chatbot, language):
-        """Build a retrieval index from the uploaded files and summarize them."""
-        status = gr.Markdown.update()
-        if files:
-            index = construct_index(self.api_key, file_src=files)
-            assert index is not None, "Failed to build index"
-            self.index = index
-            status = i18n("索引构建完成")
-            # Summarize the document
-            logging.info(i18n("生成内容总结中……"))
-            with get_openai_callback() as cb:
-                os.environ["OPENAI_API_KEY"] = self.api_key
-                from langchain.chains.summarize import load_summarize_chain
-                from langchain.prompts import PromptTemplate
-                from langchain.chat_models import ChatOpenAI
-                prompt_template = "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN " + language + ":"
-                PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
-                llm = ChatOpenAI()
-                chain = load_summarize_chain(llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT)
-                summary = chain({"input_documents": list(index.docstore.__dict__["_dict"].values())}, return_only_outputs=True)["output_text"]
-                logging.info(f"Summary: {summary}")
-                self.index_summary = summary
-                chatbot.append((f"Uploaded {len(files)} files", summary))
-            logging.info(cb)
-        return gr.Files.update(), chatbot, status
-
-    def query_index(self, query):
-        if self.index is not None:
-            retriever = self.index.as_retriever()
-            qa = RetrievalQA.from_chain_type(llm=self.llm, chain_type="stuff", retriever=retriever)
-            return qa.run(query)
-        else:
-            # the original dropped this string without returning it
-            return "Error during query."
-
-    def summary(self, text):
-        texts = Document(page_content=text)
-        texts = self.text_splitter.split_documents([texts])
-        return self.summarize_chain({"input_documents": texts}, return_only_outputs=True)["output_text"]
-
-    def fetch_url_content(self, url):
-        response = requests.get(url)
-        soup = BeautifulSoup(response.text, 'html.parser')
-
-        # extract all paragraph text
-        text = ''.join(s.getText() for s in soup.find_all('p'))
-        logging.info(f"Extracted text from {url}")
-        return text
-
-    def summary_url(self, url):
-        text = self.fetch_url_content(url)
-        if text == "":
-            return "URL unavailable."
-        text_summary = self.summary(text)
-        url_content = "webpage content summary:\n" + text_summary
-
-        return url_content
-
-    def ask_url(self, url, question):
-        text = self.fetch_url_content(url)
-        if text == "":
-            return "URL unavailable."
- texts = Document(page_content=text) - texts = self.text_splitter.split_documents([texts]) - # use embedding - embeddings = OpenAIEmbeddings(openai_api_key=self.api_key, openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - - # create vectorstore - db = FAISS.from_documents(texts, embeddings) - retriever = db.as_retriever() - qa = RetrievalQA.from_chain_type(llm=self.cheap_llm, chain_type="stuff", retriever=retriever) - return qa.run(f"{question} Reply in 中文") - - def get_answer_at_once(self): - question = self.history[-1]["content"] - # llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo") - agent = initialize_agent(self.tools, self.llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) - reply = agent.run(input=f"{question} Reply in 简体中文") - return reply, -1 - - def get_answer_stream_iter(self): - question = self.history[-1]["content"] - it = CallbackToIterator() - manager = BaseCallbackManager(handlers=[ChuanhuCallbackHandler(it.callback)]) - def thread_func(): - tools = self.tools - if self.index is not None: - tools.append( - Tool.from_function( - func=self.query_index, - name="Query Knowledge Base", - description=f"useful when you need to know about: {self.index_summary}", - args_schema=WebBrowsingInput - ) - ) - agent = initialize_agent(self.tools, self.llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, callback_manager=manager) - try: - reply = agent.run(input=f"{question} Reply in 简体中文") - except Exception as e: - import traceback - traceback.print_exc() - reply = str(e) - it.callback(reply) - it.finish() - t = Thread(target=thread_func) - t.start() - partial_text = "" - for value in it: - partial_text += value - yield partial_text diff --git a/spaces/KyanChen/RSPrompter/mmpl/engine/logger/__init__.py b/spaces/KyanChen/RSPrompter/mmpl/engine/logger/__init__.py deleted file mode 100644 index 5a7d509cbaa213acccf34153ac8df157bbe3bb86..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/engine/logger/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .builder import PL_LOGGERS diff --git a/spaces/KyanChen/RSPrompter/mmpl/models/heads/semantic_seg_head.py b/spaces/KyanChen/RSPrompter/mmpl/models/heads/semantic_seg_head.py deleted file mode 100644 index 99acee3074b0475539635b6aab6d2505375bad59..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/models/heads/semantic_seg_head.py +++ /dev/null @@ -1,216 +0,0 @@ -from typing import List, Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.models.utils import multi_apply -from mmdet.utils import InstanceList, reduce_mean -from mmpl.registry import MODELS, TASK_UTILS -from mmengine.model import BaseModel -from einops import rearrange - -from mmpl.utils import ConfigType, OptConfigType - - -@MODELS.register_module() -class BinarySemanticSegHead(BaseModel): - def __init__( - self, - num_classes=1, - align_corners=False, - loss_mask: ConfigType = dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='mean', - loss_weight=5.0), - loss_dice=None, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - init_cfg: Optional[dict] = None): - super(BinarySemanticSegHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.align_corners = align_corners - - self.test_cfg = test_cfg - self.train_cfg = train_cfg - if train_cfg: - self.assigner = 
TASK_UTILS.build(self.train_cfg['assigner']) - self.sampler = TASK_UTILS.build( - self.train_cfg['sampler'], default_args=dict(context=self)) - self.num_points = self.train_cfg.get('num_points', 12544) - self.oversample_ratio = self.train_cfg.get('oversample_ratio', 3.0) - self.importance_sample_ratio = self.train_cfg.get( - 'importance_sample_ratio', 0.75) - - self.loss_mask = MODELS.build(loss_mask) - if loss_dice is not None: - self.loss_dice = MODELS.build(loss_dice) - - def forward(self, *args, **kwargs): - pass - return - - def loss(self, - mask_preds: Tensor, - seg_labels: Tensor, - ): - bs = mask_preds.size(0) - - # dice loss - if hasattr(self, 'loss_dice'): - loss_dice = self.loss_dice(mask_preds, seg_labels, avg_factor=bs) - else: - loss_dice = torch.zeros([]).to(mask_preds.device) - - # mask loss - # FocalLoss support input of shape (n, num_class) - h, w = mask_preds.shape[-2:] - # shape (num_total_gts, h, w) -> (num_total_gts * h * w, 1) - mask_preds = mask_preds.reshape(-1, 1) - # shape (num_total_gts, h, w) -> (num_total_gts * h * w) - mask_targets = seg_labels.reshape(-1, 1) - # target is (1 - mask_targets) !!! - loss_mask = self.loss_mask(mask_preds, mask_targets, avg_factor=h * w) - - loss_dict = dict() - loss_dict['loss_mask'] = loss_mask - loss_dict['loss_dice'] = loss_dice - return loss_dict - - def get_targets( - self, - cls_scores_list: List[Tensor], - mask_preds_list: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - return_sampling_results: bool = False - ) -> Tuple[List[Union[Tensor, int]]]: - """Compute classification and mask targets for all images for a decoder - layer. - - Args: - cls_scores_list (list[Tensor]): Mask score logits from a single - decoder layer for all images. Each with shape (num_queries, - cls_out_channels). - mask_preds_list (list[Tensor]): Mask logits from a single decoder - layer for all images. Each with shape (num_queries, h, w). - batch_gt_instances (list[obj:`InstanceData`]): each contains - ``labels`` and ``masks``. - batch_img_metas (list[dict]): List of image meta information. - return_sampling_results (bool): Whether to return the sampling - results. Defaults to False. - - Returns: - tuple: a tuple containing the following targets. - - - labels_list (list[Tensor]): Labels of all images.\ - Each with shape (num_queries, ). - - label_weights_list (list[Tensor]): Label weights\ - of all images. Each with shape (num_queries, ). - - mask_targets_list (list[Tensor]): Mask targets of\ - all images. Each with shape (num_queries, h, w). - - mask_weights_list (list[Tensor]): Mask weights of\ - all images. Each with shape (num_queries, ). - - avg_factor (int): Average factor that is used to average\ - the loss. When using sampling method, avg_factor is - usually the sum of positive and negative priors. When - using `MaskPseudoSampler`, `avg_factor` is usually equal - to the number of positive priors. - - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end. 
-        """
-        results = multi_apply(self._get_targets_single, cls_scores_list,
-                              mask_preds_list, batch_gt_instances,
-                              batch_img_metas)
-        (labels_list, label_weights_list, mask_targets_list, mask_weights_list,
-         pos_inds_list, neg_inds_list, sampling_results_list) = results[:7]
-        rest_results = list(results[7:])
-
-        avg_factor = sum(
-            [results.avg_factor for results in sampling_results_list])
-
-        res = (labels_list, label_weights_list, mask_targets_list,
-               mask_weights_list, avg_factor)
-        if return_sampling_results:
-            # note the trailing comma: without it the parentheses do not form a tuple
-            res = res + (sampling_results_list, )
-
-        return res + tuple(rest_results)
-
-    def _get_targets_single(self, cls_score: Tensor, mask_pred: Tensor,
-                            gt_instances: InstanceData,
-                            img_meta: dict) -> Tuple[Tensor]:
-        """Compute classification and mask targets for one image.
-
-        Args:
-            cls_score (Tensor): Mask score logits from a single decoder layer
-                for one image. Shape (num_queries, cls_out_channels).
-            mask_pred (Tensor): Mask logits for a single decoder layer for one
-                image. Shape (num_queries, h, w).
-            gt_instances (:obj:`InstanceData`): It contains ``labels`` and
-                ``masks``.
-            img_meta (dict): Image information.
-
-        Returns:
-            tuple: a tuple containing the following for one image.
-
-                - labels (Tensor): Labels of each image.
-                    shape (num_queries, ).
-                - label_weights (Tensor): Label weights of each image.
-                    shape (num_queries, ).
-                - mask_targets (Tensor): Mask targets of each image.
-                    shape (num_queries, h, w).
-                - mask_weights (Tensor): Mask weights of each image.
-                    shape (num_queries, ).
-                - pos_inds (Tensor): Sampled positive indices for each image.
-                - neg_inds (Tensor): Sampled negative indices for each image.
-                - sampling_result (:obj:`SamplingResult`): Sampling results.
-        """
-        gt_masks = gt_instances.masks
-        gt_labels = gt_instances.labels
-
-        target_shape = mask_pred.shape[-2:]
-        if gt_masks.shape[0] > 0:
-            gt_masks_downsampled = F.interpolate(
-                gt_masks.unsqueeze(1).float(), target_shape,
-                mode='nearest').squeeze(1).long()
-        else:
-            gt_masks_downsampled = gt_masks
-
-        pred_instances = InstanceData(scores=cls_score, masks=mask_pred)
-        downsampled_gt_instances = InstanceData(
-            labels=gt_labels, masks=gt_masks_downsampled)
-        # assign and sample # assign_result is the 1-based
-        assign_result = self.assigner.assign(
-            pred_instances=pred_instances,
-            gt_instances=downsampled_gt_instances,
-            img_meta=img_meta)
-        sampling_result = self.sampler.sample(
-            assign_result=assign_result,
-            pred_instances=pred_instances,
-            gt_instances=gt_instances)
-        pos_inds = sampling_result.pos_inds
-        neg_inds = sampling_result.neg_inds
-
-        # label target
-        # class 0 is the background
-        num_queries = pred_instances.scores.shape[0]
-        labels = gt_labels.new_full((num_queries, ),
-                                    0,
-                                    dtype=torch.long)
-        labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds]
-        label_weights = gt_labels.new_ones(num_queries)
-
-        # mask target
-        mask_targets = gt_masks[sampling_result.pos_assigned_gt_inds]
-        mask_weights = mask_pred.new_zeros((num_queries, ))
-        mask_weights[pos_inds] = 1.0
-
-        return (labels, label_weights, mask_targets, mask_weights, pos_inds,
-                neg_inds, sampling_result)
diff --git a/spaces/LDJA/iris/Dockerfile b/spaces/LDJA/iris/Dockerfile
deleted file mode 100644
index 2d5831b32b33224796ada175656d17bdaf448aae..0000000000000000000000000000000000000000
--- a/spaces/LDJA/iris/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM ubuntu:18.04
-
-LABEL Version="1.0"
-
-RUN apt-get update -y
-
-RUN apt-get install -y python3-pip python3-dev build-essential
-
-COPY ./app /app
-
-EXPOSE 7860
-
-WORKDIR /app
-
-RUN pip3 install --no-cache-dir --upgrade pip
-
-RUN pip3 install -r requirements.txt
-
-ENV FLASK_APP main
-
-ENTRYPOINT python3 main.py
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/textsnake/README.md b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/textsnake/README.md
deleted file mode 100644
index be7f3fe7bb15f5610669e937179adca7210039b8..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/textsnake/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# TextSnake
-
-> [TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes](https://arxiv.org/abs/1807.01544)
-
-## Abstract
-
-Driven by deep neural networks and large scale datasets, scene text detection methods have progressed substantially over the past years, continuously refreshing the performance records on various standard benchmarks. However, limited by the representations (axis-aligned rectangles, rotated rectangles or quadrangles) adopted to describe text, existing methods may fall short when dealing with much more free-form text instances, such as curved text, which are actually very common in real-world scenarios. To tackle this problem, we propose a more flexible representation for scene text, termed as TextSnake, which is able to effectively represent text instances in horizontal, oriented and curved forms. In TextSnake, a text instance is described as a sequence of ordered, overlapping disks centered at symmetric axes, each of which is associated with potentially variable radius and orientation. Such geometry attributes are estimated via a Fully Convolutional Network (FCN) model. In experiments, the text detector based on TextSnake achieves state-of-the-art or comparable performance on Total-Text and SCUT-CTW1500, the two newly published benchmarks with special emphasis on curved text in natural images, as well as the widely-used datasets ICDAR 2015 and MSRA-TD500. Specifically, TextSnake outperforms the baseline on Total-Text by more than 40% in F-measure.
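The abstract's disk-based representation is easy to make concrete: a text instance is an ordered sequence of overlapping disks along the symmetric axis, each carrying a local radius and orientation. The sketch below only illustrates that geometry; the class and function names are assumptions for exposition and are not part of MMOCR's API.

```python
# Illustrative sketch of the TextSnake geometry described above; not MMOCR API.
from dataclasses import dataclass
from typing import List, Tuple
import math

@dataclass
class Disk:
    cx: float      # center x on the text center line (symmetric axis)
    cy: float      # center y
    radius: float  # half the local stroke height
    theta: float   # local orientation of the center line, in radians

# A text instance is an ordered, overlapping sequence of disks.
TextInstance = List[Disk]

def next_center(d: Disk, step: float) -> Tuple[float, float]:
    """Walk along the symmetric axis by `step` pixels in the disk's direction."""
    return (d.cx + step * math.cos(d.theta), d.cy + step * math.sin(d.theta))

# Example: three disks tracing a gently curving word.
snake: TextInstance = [
    Disk(10.0, 20.0, 6.0, 0.00),
    Disk(16.0, 20.5, 6.2, 0.10),
    Disk(22.0, 21.6, 6.5, 0.20),
]
```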
      - -
      - -## Results and models - -### CTW1500 - -| Method | Pretrained Model | Training set | Test set | #epochs | Test size | Recall | Precision | Hmean | Download | -| :----------------------------------------------------------: | :--------------: | :-----------: | :----------: | :-----: | :-------: | :----: | :-------: | :---: | :-------------------------------------------------------------: | -| [TextSnake](/configs/textdet/textsnake/textsnake_r50_fpn_unet_600e_ctw1500.py) | ImageNet | CTW1500 Train | CTW1500 Test | 1200 | 736 | 0.795 | 0.840 | 0.817 | [model](https://download.openmmlab.com/mmocr/textdet/textsnake/textsnake_r50_fpn_unet_1200e_ctw1500-27f65b64.pth) \| [log](<>) | - -## Citation - -```bibtex -@article{long2018textsnake, - title={TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes}, - author={Long, Shangbang and Ruan, Jiaqiang and Zhang, Wenjie and He, Xin and Wu, Wenhao and Yao, Cong}, - booktitle={ECCV}, - pages={20-36}, - year={2018} -} -``` diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/replicate.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/replicate.py deleted file mode 100644 index b71c7b8ed51a1d6c55b1f753bdd8d90bad79bd06..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/replicate.py +++ /dev/null @@ -1,94 +0,0 @@ -# -*- coding: utf-8 -*- -# File : replicate.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import functools - -from torch.nn.parallel.data_parallel import DataParallel - -__all__ = [ - 'CallbackContext', - 'execute_replication_callbacks', - 'DataParallelWithCallback', - 'patch_replication_callback' -] - - -class CallbackContext(object): - pass - - -def execute_replication_callbacks(modules): - """ - Execute an replication callback `__data_parallel_replicate__` on each module created by original replication. - - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Note that, as all modules are isomorphism, we assign each sub-module with a context - (shared among multiple copies of this module on different devices). - Through this context, different copies can share some information. - - We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback - of any slave copies. - """ - master_copy = modules[0] - nr_modules = len(list(master_copy.modules())) - ctxs = [CallbackContext() for _ in range(nr_modules)] - - for i, module in enumerate(modules): - for j, m in enumerate(module.modules()): - if hasattr(m, '__data_parallel_replicate__'): - m.__data_parallel_replicate__(ctxs[j], i) - - -class DataParallelWithCallback(DataParallel): - """ - Data Parallel with a replication callback. - - An replication callback `__data_parallel_replicate__` of each module will be invoked after being created by - original `replicate` function. 
- The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - # sync_bn.__data_parallel_replicate__ will be invoked. - """ - - def replicate(self, module, device_ids): - modules = super(DataParallelWithCallback, self).replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - -def patch_replication_callback(data_parallel): - """ - Monkey-patch an existing `DataParallel` object. Add the replication callback. - Useful when you have customized `DataParallel` implementation. - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallel(sync_bn, device_ids=[0, 1]) - > patch_replication_callback(sync_bn) - # this is equivalent to - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - """ - - assert isinstance(data_parallel, DataParallel) - - old_replicate = data_parallel.replicate - - @functools.wraps(old_replicate) - def new_replicate(module, device_ids): - modules = old_replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - data_parallel.replicate = new_replicate diff --git a/spaces/MS19/TestSpaceFastAI/README.md b/spaces/MS19/TestSpaceFastAI/README.md deleted file mode 100644 index 096245ee7b7747ae3f9ea192d418615ff335e80f..0000000000000000000000000000000000000000 --- a/spaces/MS19/TestSpaceFastAI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TestSpaceFastAI -emoji: 🐨 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/transforms.py b/spaces/Mahiruoshi/MyGO_VIts-bert/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/MyGO_VIts-bert/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - 
tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if 
inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Manjushri/Instruct-Pix-2-Pix/app.py b/spaces/Manjushri/Instruct-Pix-2-Pix/app.py deleted file mode 100644 index d839a66fcc74851c48351cbb329c9859c8d8a4a1..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/Instruct-Pix-2-Pix/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr -import modin.pandas as pd -import torch -import numpy as np -from PIL import Image -from diffusers import StableDiffusionInstructPix2PixPipeline - -model_id = "timbrooks/instruct-pix2pix" -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16", safety_checker=None) if torch.cuda.is_available() else StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, safety_checker=None) -pipe = pipe.to(device) - -def resize(value,img): - img = Image.open(img) - img = img.resize((value,value)) - return img - -def infer(source_img, instructions, guide, steps, seed, Strength): - generator = torch.Generator(device).manual_seed(seed) - source_image = resize(512, source_img) - source_image.save('source.png') - image = pipe(instructions, image=source_image, - guidance_scale=guide, image_guidance_scale=Strength, - num_inference_steps=steps, generator=generator,).images[0] - return image - -gr.Interface(fn=infer, inputs=[gr.Image(source="upload", type="filepath", label="Raw Image. Must Be .png"), - gr.Textbox(label = 'Input Instructions. 
77 Token (Keyword or Symbol) Maximum'), - gr.Slider(2, 15, value = 7.5, label = 'Instructions Strength:'), - gr.Slider(1, 20, value = 5, step = 1, label = "Number of Iterations: More take longer, but aren't always better"), - gr.Slider(label = "Seed", minimum = 0, maximum = 987654321987654321, step = 1, randomize = True), - gr.Slider(label='Original Image Strength:', minimum = 1, maximum = 2, step = .25, value = 1.5)], - outputs = 'image', - title = "Instructions Picture to Picture", - description = "Simply upload an image you want to edit, MUST Be .PNG and 512x512 or 768x768, then enter a Prompt telling the AI how to change the image, then click submit. This version runs on GPU or CPU and is currently running on the free CPU tier. 10 Iterations takes ~240 seconds currently. This version has no NSFW filter.", - article = "Code Monkey: Manjushri").queue(max_size=5).launch(max_threads=True, debug=True) \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/base_module.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/base_module.py deleted file mode 100644 index 617fad9bb89f10a9a0911d962dfb3bc8f3a3628c..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/base_module.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings -from abc import ABCMeta -from collections import defaultdict -from logging import FileHandler - -import torch.nn as nn - -from annotator.uniformer.mmcv.runner.dist_utils import master_only -from annotator.uniformer.mmcv.utils.logging import get_logger, logger_initialized, print_log - - -class BaseModule(nn.Module, metaclass=ABCMeta): - """Base module for all modules in openmmlab. - - ``BaseModule`` is a wrapper of ``torch.nn.Module`` with additional - functionality of parameter initialization. Compared with - ``torch.nn.Module``, ``BaseModule`` mainly adds three attributes. - - - ``init_cfg``: the config to control the initialization. - - ``init_weights``: The function of parameter - initialization and recording initialization - information. - - ``_params_init_info``: Used to track the parameter - initialization information. This attribute only - exists during executing the ``init_weights``. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, init_cfg=None): - """Initialize BaseModule, inherited from `torch.nn.Module`""" - - # NOTE init_cfg can be defined in different levels, but init_cfg - # in low levels has a higher priority. 
- - super(BaseModule, self).__init__() - # define default value of init_cfg instead of hard code - # in init_weights() function - self._is_init = False - - self.init_cfg = copy.deepcopy(init_cfg) - - # Backward compatibility in derived classes - # if pretrained is not None: - # warnings.warn('DeprecationWarning: pretrained is a deprecated \ - # key, please consider using init_cfg') - # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @property - def is_init(self): - return self._is_init - - def init_weights(self): - """Initialize the weights.""" - - is_top_level_module = False - # check if it is top-level module - if not hasattr(self, '_params_init_info'): - # The `_params_init_info` is used to record the initialization - # information of the parameters - # the key should be the obj:`nn.Parameter` of model and the value - # should be a dict containing - # - init_info (str): The string that describes the initialization. - # - tmp_mean_value (FloatTensor): The mean of the parameter, - # which indicates whether the parameter has been modified. - # this attribute would be deleted after all parameters - # is initialized. - self._params_init_info = defaultdict(dict) - is_top_level_module = True - - # Initialize the `_params_init_info`, - # When detecting the `tmp_mean_value` of - # the corresponding parameter is changed, update related - # initialization information - for name, param in self.named_parameters(): - self._params_init_info[param][ - 'init_info'] = f'The value is the same before and ' \ - f'after calling `init_weights` ' \ - f'of {self.__class__.__name__} ' - self._params_init_info[param][ - 'tmp_mean_value'] = param.data.mean() - - # pass `params_init_info` to all submodules - # All submodules share the same `params_init_info`, - # so it will be updated when parameters are - # modified at any level of the model. - for sub_module in self.modules(): - sub_module._params_init_info = self._params_init_info - - # Get the initialized logger, if not exist, - # create a logger named `mmcv` - logger_names = list(logger_initialized.keys()) - logger_name = logger_names[0] if logger_names else 'mmcv' - - from ..cnn import initialize - from ..cnn.utils.weight_init import update_init_info - module_name = self.__class__.__name__ - if not self._is_init: - if self.init_cfg: - print_log( - f'initialize {module_name} with init_cfg {self.init_cfg}', - logger=logger_name) - initialize(self, self.init_cfg) - if isinstance(self.init_cfg, dict): - # prevent the parameters of - # the pre-trained model - # from being overwritten by - # the `init_weights` - if self.init_cfg['type'] == 'Pretrained': - return - - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights() - # users may overload the `init_weights` - update_init_info( - m, - init_info=f'Initialized by ' - f'user-defined `init_weights`' - f' in {m.__class__.__name__} ') - - self._is_init = True - else: - warnings.warn(f'init_weights of {self.__class__.__name__} has ' - f'been called more than once.') - - if is_top_level_module: - self._dump_init_info(logger_name) - - for sub_module in self.modules(): - del sub_module._params_init_info - - @master_only - def _dump_init_info(self, logger_name): - """Dump the initialization information to a file named - `initialization.log.json` in workdir. - - Args: - logger_name (str): The name of logger. 
- """ - - logger = get_logger(logger_name) - - with_file_handler = False - # dump the information to the logger file if there is a `FileHandler` - for handler in logger.handlers: - if isinstance(handler, FileHandler): - handler.stream.write( - 'Name of parameter - Initialization information\n') - for name, param in self.named_parameters(): - handler.stream.write( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n") - handler.stream.flush() - with_file_handler = True - if not with_file_handler: - for name, param in self.named_parameters(): - print_log( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n ", - logger=logger_name) - - def __repr__(self): - s = super().__repr__() - if self.init_cfg: - s += f'\ninit_cfg={self.init_cfg}' - return s - - -class Sequential(BaseModule, nn.Sequential): - """Sequential module in openmmlab. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, *args, init_cfg=None): - BaseModule.__init__(self, init_cfg) - nn.Sequential.__init__(self, *args) - - -class ModuleList(BaseModule, nn.ModuleList): - """ModuleList in openmmlab. - - Args: - modules (iterable, optional): an iterable of modules to add. - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, modules=None, init_cfg=None): - BaseModule.__init__(self, init_cfg) - nn.ModuleList.__init__(self, modules) diff --git a/spaces/MercurialAi/Embeddings_Chat/app.py b/spaces/MercurialAi/Embeddings_Chat/app.py deleted file mode 100644 index 627a7b117a3851daa8d39ad3030b6065bdfafdbf..0000000000000000000000000000000000000000 --- a/spaces/MercurialAi/Embeddings_Chat/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import os -os.system("pip -qq install openai") -os.system("pip -qq install langchain") -os.system("pip -qq install pinecone-client") -os.system("pip -qq install unstructured") -os.system("pip -qq install tiktoken") -os.system("pip -qq install pypdf") - -import gradio as gr -import openai -from embeddings_chat import embeddings_chat - -EX_Q1 = "What is the best course of treatment for a 65-year-old breast cancer patient with a non-ductal carcinoma and HER-2 positive status? Explain the reasoning behind it. " -EX_Q2 = "What is the best course of treatment for a 25-year-old non-metastatic breast cancer patient with a Nottingham grade of 5 and HER-2 negative status? Explain the reasoning behind it. " -EX_Q3 = "What testing must a patient candidate for poly ADP-ribose polymerase (PARP) inhibitor therapy undergo to determine their eligibility?" -EX_Q4 = "What criteria are used to determine a patient's eligibility for treatment with PARP inhibitors like olaparib and talazoparib for metastatic HER2-negative breast cancer?" -EX_Q5 = "What should be done before discharging a patient who had a mastectomy with axillary node clearance?" -EX_Q6 = "What is the best course of treatment for a 60-year-old metastatic breast cancer patient with a tumor size of 3.0 cm, HER-2 negative status, and a tumor grade of 2.0 based on nuclear characteristics? Explain the reasoning behind it. " -EX_Q7 = "What is the best course of treatment for a 50-year-old breast metastatic breast cancer patient with HER-2 positive status, a tumor size of 2 cm, and the involvement of 2 lymph nodes? Explain the reasoning behind it. 
" - -def get_response(Q): - - # clear cache before generating new response - os.system('huggingface-cli delete-cache') - - response, docs = embeddings_chat(Q) - docs = docs[0].page_content - - return response, str(docs) - -def bot(Q, history): - history = history or [] - c_history = list(sum(history, ())) - c_history.append(Q) - c_input = ' '.join(c_history) - output, docs = get_response(c_input) - history.append((Q, output)) - return history, history, docs - -def get_question_example(qe): - return qe - -with gr.Blocks() as iFace: - - chatbot = gr.Chatbot() - state = gr.State() - citation = gr.Text(show_label=False) - - Q = gr.Textbox(show_label=False, placeholder="I'm here to help.").style(container=False) - - question_example = gr.Radio(label="Inquiry Examples", choices=[EX_Q1, EX_Q2, EX_Q3, EX_Q4, EX_Q5, EX_Q6, EX_Q7]) - - Q.submit(bot, inputs=[Q, state], outputs=[chatbot, state, citation]) - question_example.change(get_question_example, inputs=[question_example], outputs=Q) - -iFace.launch() diff --git a/spaces/MilesCranmer/PySR/gen_example_data.py b/spaces/MilesCranmer/PySR/gen_example_data.py deleted file mode 100644 index 4eef2be2628f73d9ed6e2cb87515eec6809c9845..0000000000000000000000000000000000000000 --- a/spaces/MilesCranmer/PySR/gen_example_data.py +++ /dev/null @@ -1,17 +0,0 @@ -import pandas as pd -import numpy as np - -rand_between = lambda a, b, size: np.random.rand(*size) * (b - a) + a - -X = pd.DataFrame( - { - "T": rand_between(273, 373, (100,)), # Kelvin - "P": rand_between(100, 200, (100,)) * 1e3, # Pa - "n": rand_between(0, 10, (100,)), # mole - } -) - -R = 8.3144598 # J/mol/K -X["y"] = X["n"] * R * X["T"] / X["P"] - -X.to_csv("data.csv", index=False) \ No newline at end of file diff --git a/spaces/MirageML/fantasy-scene/app.py b/spaces/MirageML/fantasy-scene/app.py deleted file mode 100644 index 6bce1771e2b5fcf6e1b3d7d7ac8c055b72038df9..0000000000000000000000000000000000000000 --- a/spaces/MirageML/fantasy-scene/app.py +++ /dev/null @@ -1,155 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'MirageML/fantasy-scene' -prefix = 'fantasy_scene' - -scheduler = DPMSolverMultistepScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - trained_betas=None, - predict_epsilon=True, - thresholding=False, - algorithm_type="dpmsolver++", - solver_type="midpoint", - lower_order_final=True, -) - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, 
generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return replace_nsfw_images(result) - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return replace_nsfw_images(result) - -def replace_nsfw_images(results): - - for i in range(len(results.images)): - if results.nsfw_content_detected[i]: - results.images[i] = Image.open("nsfw.png") - return results.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
-            <div class="main-div">
-              <div>
-                <h1>Fantasy Scene</h1>
-              </div>
-              <p>
-                Demo for <a href="https://huggingface.co/MirageML/fantasy-scene">Fantasy Scene</a> Stable Diffusion model.<br>
-                {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-              </p>
-              Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"}<br><br>
-              <a href="https://huggingface.co/spaces/MirageML/fantasy-scene?duplicate=true">Duplicate Space</a>
-            </div>
      - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (fantasy_scene)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
-    <div>
-      <p>This space was created using SD Space Creator.</p>
-    </div>
      - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/Mr-Hacker/GenAiTest2/README.md b/spaces/Mr-Hacker/GenAiTest2/README.md deleted file mode 100644 index d4821db5e968c4b392bcc7b4abb9e645ebc0c3a6..0000000000000000000000000000000000000000 --- a/spaces/Mr-Hacker/GenAiTest2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GenAiTest2 -emoji: 🌍 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MrD05/text-generation-webui-space/run.py b/spaces/MrD05/text-generation-webui-space/run.py deleted file mode 100644 index 2c966a2f5691c6444c3329365c39e78b74fdbf95..0000000000000000000000000000000000000000 --- a/spaces/MrD05/text-generation-webui-space/run.py +++ /dev/null @@ -1,4 +0,0 @@ -import os -os.system('python download-model.py PygmalionAI/pygmalion-350m --branch main') -# os.system('python download-model.py waifu-workshop/pygmalion-6b --branch original-sharded') -os.system('python server.py --cpu --chat --model pygmalion-350m --no-stream --auto-devices') \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/unittest_utils_test.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/unittest_utils_test.py deleted file mode 100644 index c241387463720c16c6d6b96c236c15e709209ee7..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/unittest_utils_test.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== - -"""Tests for unittest_utils.""" - -import numpy as np -import io -from PIL import Image as PILImage -import tensorflow as tf - -from datasets import unittest_utils - - -class UnittestUtilsTest(tf.test.TestCase): - def test_creates_an_image_of_specified_shape(self): - image, _ = unittest_utils.create_random_image('PNG', (10, 20, 3)) - self.assertEqual(image.shape, (10, 20, 3)) - - def test_encoded_image_corresponds_to_numpy_array(self): - image, encoded = unittest_utils.create_random_image('PNG', (20, 10, 3)) - pil_image = PILImage.open(io.BytesIO(encoded)) - self.assertAllEqual(image, np.array(pil_image)) - - def test_created_example_has_correct_values(self): - example_serialized = unittest_utils.create_serialized_example({ - 'labels': [1, 2, 3], - 'data': [b'FAKE'] - }) - example = tf.train.Example() - example.ParseFromString(example_serialized) - self.assertProtoEquals(""" - features { - feature { - key: "labels" - value { int64_list { - value: 1 - value: 2 - value: 3 - }} - } - feature { - key: "data" - value { bytes_list { - value: "FAKE" - }} - } - } - """, example) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/cfgs/__init__.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/cfgs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/NikeZoldyck/green-screen-composition-transfer/utils/photo_wct.py b/spaces/NikeZoldyck/green-screen-composition-transfer/utils/photo_wct.py deleted file mode 100644 index 5b0f82a2bb6f36f1e53adf0834614b0dba777771..0000000000000000000000000000000000000000 --- a/spaces/NikeZoldyck/green-screen-composition-transfer/utils/photo_wct.py +++ /dev/null @@ -1,171 +0,0 @@ -""" -Copyright (C) 2018 NVIDIA Corporation. All rights reserved. -Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). 
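-
-PhotoWCT applies the whitening and coloring transform (WCT) to VGG features at
-four encoder/decoder levels to transfer the style of a reference photo onto a
-content photo, optionally guided by semantic segmentation masks.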
-""" - -import numpy as np -from PIL import Image -import torch -import torch.nn as nn -from models.models import VGGEncoder, VGGDecoder - - -class PhotoWCT(nn.Module): - def __init__(self): - super(PhotoWCT, self).__init__() - self.e1 = VGGEncoder(1) - self.d1 = VGGDecoder(1) - self.e2 = VGGEncoder(2) - self.d2 = VGGDecoder(2) - self.e3 = VGGEncoder(3) - self.d3 = VGGDecoder(3) - self.e4 = VGGEncoder(4) - self.d4 = VGGDecoder(4) - - def transform(self, cont_img, styl_img, cont_seg, styl_seg): - self.__compute_label_info(cont_seg, styl_seg) - - sF4, sF3, sF2, sF1 = self.e4.forward_multiple(styl_img) - - cF4, cpool_idx, cpool1, cpool_idx2, cpool2, cpool_idx3, cpool3 = self.e4(cont_img) - sF4 = sF4.data.squeeze(0) - cF4 = cF4.data.squeeze(0) - # print(cont_seg) - csF4 = self.__feature_wct(cF4, sF4, cont_seg, styl_seg) - Im4 = self.d4(csF4, cpool_idx, cpool1, cpool_idx2, cpool2, cpool_idx3, cpool3) - - cF3, cpool_idx, cpool1, cpool_idx2, cpool2 = self.e3(Im4) - sF3 = sF3.data.squeeze(0) - cF3 = cF3.data.squeeze(0) - csF3 = self.__feature_wct(cF3, sF3, cont_seg, styl_seg) - Im3 = self.d3(csF3, cpool_idx, cpool1, cpool_idx2, cpool2) - - cF2, cpool_idx, cpool = self.e2(Im3) - sF2 = sF2.data.squeeze(0) - cF2 = cF2.data.squeeze(0) - csF2 = self.__feature_wct(cF2, sF2, cont_seg, styl_seg) - Im2 = self.d2(csF2, cpool_idx, cpool) - - cF1 = self.e1(Im2) - sF1 = sF1.data.squeeze(0) - cF1 = cF1.data.squeeze(0) - csF1 = self.__feature_wct(cF1, sF1, cont_seg, styl_seg) - Im1 = self.d1(csF1) - return Im1 - - def __compute_label_info(self, cont_seg, styl_seg): - if cont_seg.size == False or styl_seg.size == False: - return - max_label = np.max(cont_seg) + 1 - self.label_set = np.unique(cont_seg) - self.label_indicator = np.zeros(max_label) - for l in self.label_set: - # if l==0: - # continue - is_valid = lambda a, b: a > 10 and b > 10 and a / b < 100 and b / a < 100 - o_cont_mask = np.where(cont_seg.reshape(cont_seg.shape[0] * cont_seg.shape[1]) == l) - o_styl_mask = np.where(styl_seg.reshape(styl_seg.shape[0] * styl_seg.shape[1]) == l) - self.label_indicator[l] = is_valid(o_cont_mask[0].size, o_styl_mask[0].size) - - def __feature_wct(self, cont_feat, styl_feat, cont_seg, styl_seg): - cont_c, cont_h, cont_w = cont_feat.size(0), cont_feat.size(1), cont_feat.size(2) - styl_c, styl_h, styl_w = styl_feat.size(0), styl_feat.size(1), styl_feat.size(2) - cont_feat_view = cont_feat.view(cont_c, -1).clone() - styl_feat_view = styl_feat.view(styl_c, -1).clone() - - if cont_seg.size == False or styl_seg.size == False: - target_feature = self.__wct_core(cont_feat_view, styl_feat_view) - else: - target_feature = cont_feat.view(cont_c, -1).clone() - if len(cont_seg.shape) == 2: - t_cont_seg = np.asarray(Image.fromarray(cont_seg).resize((cont_w, cont_h), Image.NEAREST)) - else: - t_cont_seg = np.asarray(Image.fromarray(cont_seg, mode='RGB').resize((cont_w, cont_h), Image.NEAREST)) - if len(styl_seg.shape) == 2: - t_styl_seg = np.asarray(Image.fromarray(styl_seg).resize((styl_w, styl_h), Image.NEAREST)) - else: - t_styl_seg = np.asarray(Image.fromarray(styl_seg, mode='RGB').resize((styl_w, styl_h), Image.NEAREST)) - - for l in self.label_set: - if self.label_indicator[l] == 0: - continue - cont_mask = np.where(t_cont_seg.reshape(t_cont_seg.shape[0] * t_cont_seg.shape[1]) == l) - styl_mask = np.where(t_styl_seg.reshape(t_styl_seg.shape[0] * t_styl_seg.shape[1]) == l) - if cont_mask[0].size <= 0 or styl_mask[0].size <= 0: - continue - - cont_indi = torch.LongTensor(cont_mask[0]) - styl_indi = torch.LongTensor(styl_mask[0]) 
- if self.is_cuda: - cont_indi = cont_indi.cuda(0) - styl_indi = styl_indi.cuda(0) - - cFFG = torch.index_select(cont_feat_view, 1, cont_indi) - sFFG = torch.index_select(styl_feat_view, 1, styl_indi) - # print(len(cont_indi)) - # print(len(styl_indi)) - tmp_target_feature = self.__wct_core(cFFG, sFFG) - # print(tmp_target_feature.size()) - if torch.__version__ >= "0.4.0": - # This seems to be a bug in PyTorch 0.4.0 to me. - new_target_feature = torch.transpose(target_feature, 1, 0) - new_target_feature.index_copy_(0, cont_indi, \ - torch.transpose(tmp_target_feature,1,0)) - target_feature = torch.transpose(new_target_feature, 1, 0) - else: - target_feature.index_copy_(1, cont_indi, tmp_target_feature) - - target_feature = target_feature.view_as(cont_feat) - ccsF = target_feature.float().unsqueeze(0) - return ccsF - - def __wct_core(self, cont_feat, styl_feat): - cFSize = cont_feat.size() - c_mean = torch.mean(cont_feat, 1) # c x (h x w) - c_mean = c_mean.unsqueeze(1).expand_as(cont_feat) - cont_feat = cont_feat - c_mean - - iden = torch.eye(cFSize[0]) # .double() - if self.is_cuda: - iden = iden.cuda() - - contentConv = torch.mm(cont_feat, cont_feat.t()).div(cFSize[1] - 1) + iden - # del iden - c_u, c_e, c_v = torch.svd(contentConv, some=False) - # c_e2, c_v = torch.eig(contentConv, True) - # c_e = c_e2[:,0] - - k_c = cFSize[0] - for i in range(cFSize[0] - 1, -1, -1): - if c_e[i] >= 0.00001: - k_c = i + 1 - break - - sFSize = styl_feat.size() - s_mean = torch.mean(styl_feat, 1) - styl_feat = styl_feat - s_mean.unsqueeze(1).expand_as(styl_feat) - styleConv = torch.mm(styl_feat, styl_feat.t()).div(sFSize[1] - 1) - s_u, s_e, s_v = torch.svd(styleConv, some=False) - - k_s = sFSize[0] - for i in range(sFSize[0] - 1, -1, -1): - if s_e[i] >= 0.00001: - k_s = i + 1 - break - - c_d = (c_e[0:k_c]).pow(-0.5) - step1 = torch.mm(c_v[:, 0:k_c], torch.diag(c_d)) - step2 = torch.mm(step1, (c_v[:, 0:k_c].t())) - whiten_cF = torch.mm(step2, cont_feat) - - s_d = (s_e[0:k_s]).pow(0.5) - targetFeature = torch.mm(torch.mm(torch.mm(s_v[:, 0:k_s], torch.diag(s_d)), (s_v[:, 0:k_s].t())), whiten_cF) - targetFeature = targetFeature + s_mean.unsqueeze(1).expand_as(targetFeature) - return targetFeature - - @property - def is_cuda(self): - return next(self.parameters()).is_cuda - - def forward(self, *input): - pass \ No newline at end of file diff --git a/spaces/NoCrypt/mikuTTS/lib/infer_pack/models.py b/spaces/NoCrypt/mikuTTS/lib/infer_pack/models.py deleted file mode 100644 index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000 --- a/spaces/NoCrypt/mikuTTS/lib/infer_pack/models.py +++ /dev/null @@ -1,1142 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = 
n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() 
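-        # the posterior encoder maps a linear spectrogram to the mean and
-        # log-variance of the latent distribution, conditioned on the speaker
-        # embedding through `gin_channels`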
- self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - 
voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def 
forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = 
filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = 
resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = 
upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - 
inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - 
norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/tools/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/tools/README.md deleted file mode 100644 index 61fcbbded80023f75eaec4b69ddfbbe4cc252e5b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/tools/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# GSLM Tools - -## Resynthesis -You can use the command line tool below to input an audio file and get the resynthesized audio. This tool implements the unsupervised method for resynthesis described in the paper. The way to invoke the command line tool is shown below. 
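-Set each of the variables below (fairseq checkout, acoustic model checkpoint and layer, k-means model, TTS model checkpoint, WaveGlow vocoder, and feature type) to the paths and values for your own setup before running the command.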
-``` -FAIRSEQ_ROOT= -TYPE= -ACOUSTIC_MODEL_PATH= -LAYER= -KM_MODEL_PATH= -TTS_MODEL_PATH= -WAVEGLOW_PATH= - -PYTHONPATH=${FAIRSEQ_ROOT}:${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech python ${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/tools/gen_speech.py \ - --feature_type $TYPE \ - --acoustic_model_path $ACOUSTIC_MODEL_PATH \ - --layer $LAYER \ - --kmeans_model_path $KM_MODEL_PATH \ - --tts_model_path $TTS_MODEL_PATH \ - --waveglow_path $WAVEGLOW_PATH \ - --max_decoder_steps 2000 -``` \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank_generate.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank_generate.py deleted file mode 100644 index daeeae059a677a9fcd7c370be087f1f5c189bc52..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank_generate.py +++ /dev/null @@ -1,397 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Generate n-best translations using a trained model. -""" - -import os -import subprocess -from contextlib import redirect_stdout - -from fairseq import options -from fairseq_cli import generate, preprocess - -from examples.noisychannel import rerank_options, rerank_utils - - -def gen_and_reprocess_nbest(args): - if args.score_dict_dir is None: - args.score_dict_dir = args.data - if args.prefix_len is not None: - assert ( - args.right_to_left1 is False - ), "prefix length not compatible with right to left models" - assert ( - args.right_to_left2 is False - ), "prefix length not compatible with right to left models" - - if args.nbest_list is not None: - assert args.score_model2 is None - - if args.backwards1: - scorer1_src = args.target_lang - scorer1_tgt = args.source_lang - else: - scorer1_src = args.source_lang - scorer1_tgt = args.target_lang - - store_data = ( - os.path.join(os.path.dirname(__file__)) + "/rerank_data/" + args.data_dir_name - ) - if not os.path.exists(store_data): - os.makedirs(store_data) - - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - args.shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - assert not ( - args.right_to_left1 and args.backwards1 - ), "backwards right to left not supported" - assert not ( - args.right_to_left2 and args.backwards2 - ), "backwards right to left not supported" - assert not ( - args.prefix_len is not None and args.target_prefix_frac is not None - ), "target prefix frac and target prefix len incompatible" - - # make directory to store generation results - if not os.path.exists(pre_gen): - os.makedirs(pre_gen) - - rerank1_is_gen = ( - args.gen_model == args.score_model1 and args.source_prefix_frac is None - ) - rerank2_is_gen = ( - args.gen_model == args.score_model2 and args.source_prefix_frac is None - ) - - if args.nbest_list is not None: - rerank2_is_gen = True - - # make directories to store preprossed nbest list for reranking - if not os.path.exists(left_to_right_preprocessed_dir): - os.makedirs(left_to_right_preprocessed_dir) - if not os.path.exists(right_to_left_preprocessed_dir): - 
os.makedirs(right_to_left_preprocessed_dir) - if not os.path.exists(lm_preprocessed_dir): - os.makedirs(lm_preprocessed_dir) - if not os.path.exists(backwards_preprocessed_dir): - os.makedirs(backwards_preprocessed_dir) - - score1_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model1_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards1, - ) - if args.score_model2 is not None: - score2_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model2_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards2, - ) - - predictions_bpe_file = pre_gen + "/generate_output_bpe.txt" - - using_nbest = args.nbest_list is not None - - if using_nbest: - print("Using predefined n-best list from interactive.py") - predictions_bpe_file = args.nbest_list - - else: - if not os.path.isfile(predictions_bpe_file): - print("STEP 1: generate predictions using the p(T|S) model with bpe") - print(args.data) - param1 = [ - args.data, - "--path", - args.gen_model, - "--shard-id", - str(args.shard_id), - "--num-shards", - str(args.num_shards), - "--nbest", - str(args.num_rescore), - "--batch-size", - str(args.batch_size), - "--beam", - str(args.num_rescore), - "--batch-size", - str(args.num_rescore), - "--gen-subset", - args.gen_subset, - "--source-lang", - args.source_lang, - "--target-lang", - args.target_lang, - ] - if args.sampling: - param1 += ["--sampling"] - - gen_parser = options.get_generation_parser() - input_args = options.parse_args_and_arch(gen_parser, param1) - - print(input_args) - with open(predictions_bpe_file, "w") as f: - with redirect_stdout(f): - generate.main(input_args) - - gen_output = rerank_utils.BitextOutputFromGen( - predictions_bpe_file, - bpe_symbol=args.post_process, - nbest=using_nbest, - prefix_len=args.prefix_len, - target_prefix_frac=args.target_prefix_frac, - ) - - if args.diff_bpe: - rerank_utils.write_reprocessed( - gen_output.no_bpe_source, - gen_output.no_bpe_hypo, - gen_output.no_bpe_target, - pre_gen + "/source_gen_bpe." + args.source_lang, - pre_gen + "/target_gen_bpe." + args.target_lang, - pre_gen + "/reference_gen_bpe." + args.target_lang, - ) - bitext_bpe = args.rescore_bpe_code - bpe_src_param = [ - "-c", - bitext_bpe, - "--input", - pre_gen + "/source_gen_bpe." + args.source_lang, - "--output", - pre_gen + "/rescore_data." + args.source_lang, - ] - bpe_tgt_param = [ - "-c", - bitext_bpe, - "--input", - pre_gen + "/target_gen_bpe." + args.target_lang, - "--output", - pre_gen + "/rescore_data." 
+ args.target_lang, - ] - - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_src_param, - shell=False, - ) - - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_tgt_param, - shell=False, - ) - - if (not os.path.isfile(score1_file) and not rerank1_is_gen) or ( - args.score_model2 is not None - and not os.path.isfile(score2_file) - and not rerank2_is_gen - ): - print( - "STEP 2: process the output of generate.py so we have clean text files with the translations" - ) - - rescore_file = "/rescore_data" - if args.prefix_len is not None: - prefix_len_rescore_file = rescore_file + "prefix" + str(args.prefix_len) - if args.target_prefix_frac is not None: - target_prefix_frac_rescore_file = ( - rescore_file + "target_prefix_frac" + str(args.target_prefix_frac) - ) - if args.source_prefix_frac is not None: - source_prefix_frac_rescore_file = ( - rescore_file + "source_prefix_frac" + str(args.source_prefix_frac) - ) - - if not args.right_to_left1 or not args.right_to_left2: - if not args.diff_bpe: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + rescore_file + "." + args.source_lang, - pre_gen + rescore_file + "." + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - ) - if args.prefix_len is not None: - bw_rescore_file = prefix_len_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + prefix_len_rescore_file + "." + args.source_lang, - pre_gen + prefix_len_rescore_file + "." + args.target_lang, - pre_gen + "/reference_file", - prefix_len=args.prefix_len, - bpe_symbol=args.post_process, - ) - elif args.target_prefix_frac is not None: - bw_rescore_file = target_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - target_prefix_frac=args.target_prefix_frac, - ) - else: - bw_rescore_file = rescore_file - - if args.source_prefix_frac is not None: - fw_rescore_file = source_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - source_prefix_frac=args.source_prefix_frac, - ) - else: - fw_rescore_file = rescore_file - - if args.right_to_left1 or args.right_to_left2: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + "/right_to_left_rescore_data." + args.source_lang, - pre_gen + "/right_to_left_rescore_data." 
+ args.target_lang, - pre_gen + "/right_to_left_reference_file", - right_to_left=True, - bpe_symbol=args.post_process, - ) - - print("STEP 3: binarize the translations") - if ( - not args.right_to_left1 - or args.score_model2 is not None - and not args.right_to_left2 - or not rerank1_is_gen - ): - - if args.backwards1 or args.backwards2: - if args.backwards_score_dict_dir is not None: - bw_dict = args.backwards_score_dict_dir - else: - bw_dict = args.score_dict_dir - bw_preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + bw_rescore_file, - "--srcdict", - bw_dict + "/dict." + scorer1_src + ".txt", - "--tgtdict", - bw_dict + "/dict." + scorer1_tgt + ".txt", - "--destdir", - backwards_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(bw_preprocess_param) - preprocess.main(input_args) - - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + fw_rescore_file, - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." + scorer1_tgt + ".txt", - "--destdir", - left_to_right_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - if args.right_to_left1 or args.right_to_left2: - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + "/right_to_left_rescore_data", - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." + scorer1_tgt + ".txt", - "--destdir", - right_to_left_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - return gen_output - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - gen_and_reprocess_nbest(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/datasets/prepare-librispeech.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/datasets/prepare-librispeech.sh deleted file mode 100644 index 9e9297f08947027685ff508bfa91ff26b0d8ea0c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/datasets/prepare-librispeech.sh +++ /dev/null @@ -1,88 +0,0 @@ -#!/usr/bin/env bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -# Prepare librispeech dataset - -base_url=www.openslr.org/resources/12 -train_dir=train_960 - -if [ "$#" -ne 2 ]; then - echo "Usage: $0 " - echo "e.g.: $0 /tmp/librispeech_raw/ ~/data/librispeech_final" - exit 1 -fi - -download_dir=${1%/} -out_dir=${2%/} - -fairseq_root=~/fairseq-py/ -mkdir -p ${out_dir} -cd ${out_dir} || exit - -nbpe=5000 -bpemode=unigram - -if [ ! -d "$fairseq_root" ]; then - echo "$0: Please set correct fairseq_root" - exit 1 -fi - -echo "Data Download" -for part in dev-clean test-clean dev-other test-other train-clean-100 train-clean-360 train-other-500; do - url=$base_url/$part.tar.gz - if ! 
wget -P $download_dir $url; then - echo "$0: wget failed for $url" - exit 1 - fi - if ! tar -C $download_dir -xvzf $download_dir/$part.tar.gz; then - echo "$0: error un-tarring archive $download_dir/$part.tar.gz" - exit 1 - fi -done - -echo "Merge all train packs into one" -mkdir -p ${download_dir}/LibriSpeech/${train_dir}/ -for part in train-clean-100 train-clean-360 train-other-500; do - mv ${download_dir}/LibriSpeech/${part}/* $download_dir/LibriSpeech/${train_dir}/ -done -echo "Merge train text" -find ${download_dir}/LibriSpeech/${train_dir}/ -name '*.txt' -exec cat {} \; >> ${download_dir}/LibriSpeech/${train_dir}/text - -# Use combined dev-clean and dev-other as validation set -find ${download_dir}/LibriSpeech/dev-clean/ ${download_dir}/LibriSpeech/dev-other/ -name '*.txt' -exec cat {} \; >> ${download_dir}/LibriSpeech/valid_text -find ${download_dir}/LibriSpeech/test-clean/ -name '*.txt' -exec cat {} \; >> ${download_dir}/LibriSpeech/test-clean/text -find ${download_dir}/LibriSpeech/test-other/ -name '*.txt' -exec cat {} \; >> ${download_dir}/LibriSpeech/test-other/text - - -dict=data/lang_char/${train_dir}_${bpemode}${nbpe}_units.txt -encoded=data/lang_char/${train_dir}_${bpemode}${nbpe}_encoded.txt -fairseq_dict=data/lang_char/${train_dir}_${bpemode}${nbpe}_fairseq_dict.txt -bpemodel=data/lang_char/${train_dir}_${bpemode}${nbpe} -echo "dictionary: ${dict}" -echo "Dictionary preparation" -mkdir -p data/lang_char/ -echo "<unk> 3" > ${dict} -echo "</s> 2" >> ${dict} -echo "<pad> 1" >> ${dict} -cut -f 2- -d" " ${download_dir}/LibriSpeech/${train_dir}/text > data/lang_char/input.txt -spm_train --input=data/lang_char/input.txt --vocab_size=${nbpe} --model_type=${bpemode} --model_prefix=${bpemodel} --input_sentence_size=100000000 --unk_id=3 --eos_id=2 --pad_id=1 --bos_id=-1 --character_coverage=1 -spm_encode --model=${bpemodel}.model --output_format=piece < data/lang_char/input.txt > ${encoded} -cat ${encoded} | tr ' ' '\n' | sort | uniq | awk '{print $0 " " NR+3}' >> ${dict} -cat ${encoded} | tr ' ' '\n' | sort | uniq -c | awk '{print $2 " " $1}' > ${fairseq_dict} -wc -l ${dict} - -echo "Prepare train and test jsons" -for part in train_960 test-other test-clean; do - python ${fairseq_root}/examples/speech_recognition/datasets/asr_prep_json.py --audio-dirs ${download_dir}/LibriSpeech/${part} --labels ${download_dir}/LibriSpeech/${part}/text --spm-model ${bpemodel}.model --audio-format flac --dictionary ${fairseq_dict} --output ${part}.json -done -# fairseq expects to find train.json and valid.json during training -mv train_960.json train.json - -echo "Prepare valid json" -python ${fairseq_root}/examples/speech_recognition/datasets/asr_prep_json.py --audio-dirs ${download_dir}/LibriSpeech/dev-clean ${download_dir}/LibriSpeech/dev-other --labels ${download_dir}/LibriSpeech/valid_text --spm-model ${bpemodel}.model --audio-format flac --dictionary ${fairseq_dict} --output valid.json - -cp ${fairseq_dict} ./dict.txt -cp ${bpemodel}.model ./spm.model diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/ltr_to_wrd.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/ltr_to_wrd.py deleted file mode 100644 index 36c85d1e2f60487494a92207feb4685e78db8aa2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/ltr_to_wrd.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - - -def main(): - for line in sys.stdin: - print(line.replace(" ", "").replace("|", " ").strip()) - - -if __name__ == "__main__": - main() diff --git a/spaces/Open-Orca/LlongOrca-7B-16k/app.py b/spaces/Open-Orca/LlongOrca-7B-16k/app.py deleted file mode 100644 index 38f87277c1ec5ea7752038e5406dbe959fd0cf4f..0000000000000000000000000000000000000000 --- a/spaces/Open-Orca/LlongOrca-7B-16k/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import os -import re -import logging -import gradio as gr -import openai - -print(os.environ) -openai.api_base = os.environ.get("OPENAI_API_BASE") -openai.api_key = os.environ.get("OPENAI_API_KEY") - -BASE_SYSTEM_MESSAGE = """You are an AI assistant named OpenOrcaChat. If users clearly indicate an interest in how good you are you may tell them that the group who made you can be found at https://discord.gg/mhFWVbXUDh. The groups involved are OpenChat and OpenOrca""" - -def make_prediction(prompt, max_tokens=None, temperature=None, top_p=None, top_k=None, repetition_penalty=None): - completion = openai.Completion.create(model="Open-Orca/LlongOrca-7B-16k", prompt=prompt, max_tokens=max_tokens, temperature=temperature, top_p=top_p, top_k=top_k, repetition_penalty=repetition_penalty, stream=True, stop=["", "<|im_end|>"]) - for chunk in completion: - yield chunk["choices"][0]["text"] - - -def clear_chat(chat_history_state, chat_message): - chat_history_state = [] - chat_message = '' - return chat_history_state, chat_message - - -def user(message, history): - history = history or [] - # Append the user's message to the conversation history - history.append([message, ""]) - return "", history - - -def chat(history, system_message, max_tokens, temperature, top_p, top_k, repetition_penalty): - history = history or [] - - messages = "<|im_start|>"+"system\n" + BASE_SYSTEM_MESSAGE + system_message.strip() + "<|im_end|>\n" + \ - "\n".join(["\n".join(["<|im_start|>"+"user\n"+item[0]+"<|im_end|>", "<|im_start|>assistant\n"+item[1]+"<|im_end|>"]) - for item in history]) - # strip the last `<|end_of_turn|>` from the messages - messages = messages.rstrip("<|im_end|>") - # remove last space from assistant, some models output a ZWSP if you leave a space - messages = messages.rstrip() - - prediction = make_prediction( - messages, - max_tokens=max_tokens, - temperature=temperature, - top_p=top_p, - top_k=top_k, - repetition_penalty=repetition_penalty, - ) - for tokens in prediction: - tokens = re.findall(r'(.*?)(\s|$)', tokens) - for subtoken in tokens: - subtoken = "".join(subtoken) - answer = subtoken - history[-1][1] += answer - # stream the response - yield history, history, "" - - -start_message = "" - -CSS =""" -.contain { display: flex; flex-direction: column; } -.gradio-container { height: 100vh !important; } -#component-0 { height: 100%; } -#chatbot { flex-grow: 1; overflow: auto; resize: vertical; } -""" - -#with gr.Blocks() as demo: -with gr.Blocks(css=CSS) as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown(f""" - ## This demo is an unquantized GPU chatbot of [OpenOrca LlongOrca-7B-16k](https://huggingface.co/Open-Orca/LlongOrca-7B-16k) - Brought to you by your friends at Alignment Lab AI, OpenChat, and Open Access AI Collective! - """) - with gr.Row(): - gr.Markdown("# 🐋 OpenOrca LlongOrca-7B-16k Playground Space! 
🐋") - with gr.Row(): - #chatbot = gr.Chatbot().style(height=500) - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - message = gr.Textbox( - label="What do you want to chat about?", - placeholder="Ask me anything.", - lines=3, - ) - with gr.Row(): - submit = gr.Button(value="Send message", variant="secondary").style(full_width=True) - clear = gr.Button(value="New topic", variant="secondary").style(full_width=False) - stop = gr.Button(value="Stop", variant="secondary").style(full_width=False) - with gr.Accordion("Show Model Parameters", open=False): - with gr.Row(): - with gr.Column(): - max_tokens = gr.Slider(20, 1000, label="Max Tokens", step=20, value=500) - temperature = gr.Slider(0.2, 2.0, label="Temperature", step=0.1, value=0.8) - top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.95) - top_k = gr.Slider(0, 100, label="Top K", step=1, value=40) - repetition_penalty = gr.Slider(0.0, 2.0, label="Repetition Penalty", step=0.1, value=1.1) - - system_msg = gr.Textbox( - start_message, label="System Message", interactive=True, visible=True, placeholder="System prompt. Provide instructions which you want the model to remember.", lines=5) - - chat_history_state = gr.State() - clear.click(clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False) - clear.click(lambda: None, None, chatbot, queue=False) - - submit_click_event = submit.click( - fn=user, inputs=[message, chat_history_state], outputs=[message, chat_history_state], queue=True - ).then( - fn=chat, inputs=[chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty], outputs=[chatbot, chat_history_state, message], queue=True - ) - stop.click(fn=None, inputs=None, outputs=None, cancels=[submit_click_event], queue=False) - -demo.queue(max_size=128, concurrency_count=48).launch(debug=True, server_name="0.0.0.0", server_port=7860) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/rcnn.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/rcnn.py deleted file mode 100644 index 7b45363e6eba306c519b5deeca2bc38d6535cec8..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/rcnn.py +++ /dev/null @@ -1,327 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -from typing import Dict, List, Optional, Tuple -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.data.detection_utils import convert_image_to_rgb -from detectron2.structures import ImageList, Instances -from detectron2.utils.events import get_event_storage -from detectron2.utils.logger import log_first_n - -from ..backbone import Backbone, build_backbone -from ..postprocessing import detector_postprocess -from ..proposal_generator import build_proposal_generator -from ..roi_heads import build_roi_heads -from .build import META_ARCH_REGISTRY - -__all__ = ["GeneralizedRCNN", "ProposalNetwork"] - - -@META_ARCH_REGISTRY.register() -class GeneralizedRCNN(nn.Module): - """ - Generalized R-CNN. Any models that contains the following three components: - 1. Per-image feature extraction (aka backbone) - 2. Region proposal generation - 3. 
Per-region feature extraction and prediction - """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - proposal_generator: nn.Module, - roi_heads: nn.Module, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - input_format: Optional[str] = None, - vis_period: int = 0, - ): - """ - Args: - backbone: a backbone module, must follow detectron2's backbone interface - proposal_generator: a module that generates proposals using backbone features - roi_heads: a ROI head that performs per-region computation - pixel_mean, pixel_std: list or tuple with #channels element, representing - the per-channel mean and std to be used to normalize the input image - input_format: describe the meaning of channels of input. Needed by visualization - vis_period: the period to run visualization. Set to 0 to disable. - """ - super().__init__() - self.backbone = backbone - self.proposal_generator = proposal_generator - self.roi_heads = roi_heads - - self.input_format = input_format - self.vis_period = vis_period - if vis_period > 0: - assert input_format is not None, "input_format is required for visualization!" - - self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False) - assert ( - self.pixel_mean.shape == self.pixel_std.shape - ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!" - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - return { - "backbone": backbone, - "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()), - "roi_heads": build_roi_heads(cfg, backbone.output_shape()), - "input_format": cfg.INPUT.FORMAT, - "vis_period": cfg.VIS_PERIOD, - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - } - - @property - def device(self): - return self.pixel_mean.device - - def visualize_training(self, batched_inputs, proposals): - """ - A function used to visualize images and proposals. It shows ground truth - bounding boxes on the original image and up to 20 top-scoring predicted - object proposals on the original image. Users can implement different - visualization functions for different models. - - Args: - batched_inputs (list): a list that contains input to the model. - proposals (list): a list that contains predicted proposals. Both - batched_inputs and proposals should have the same length. - """ - from detectron2.utils.visualizer import Visualizer - - storage = get_event_storage() - max_vis_prop = 20 - - for input, prop in zip(batched_inputs, proposals): - img = input["image"] - img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format) - v_gt = Visualizer(img, None) - v_gt = v_gt.overlay_instances(boxes=input["instances"].gt_boxes) - anno_img = v_gt.get_image() - box_size = min(len(prop.proposal_boxes), max_vis_prop) - v_pred = Visualizer(img, None) - v_pred = v_pred.overlay_instances( - boxes=prop.proposal_boxes[0:box_size].tensor.cpu().numpy() - ) - prop_img = v_pred.get_image() - vis_img = np.concatenate((anno_img, prop_img), axis=1) - vis_img = vis_img.transpose(2, 0, 1) - vis_name = "Left: GT bounding boxes; Right: Predicted proposals" - storage.put_image(vis_name, vis_img) - break # only visualize one image in a batch - - def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper` . - Each item in the list contains the inputs for one image. 
- For now, each item in the list is a dict that contains: - - * image: Tensor, image in (C, H, W) format. - * instances (optional): groundtruth :class:`Instances` - * proposals (optional): :class:`Instances`, precomputed proposals. - - Other information that's included in the original dicts, such as: - - * "height", "width" (int): the output resolution of the model, used in inference. - See :meth:`postprocess` for details. - - Returns: - list[dict]: - Each dict is the output for one input image. - The dict contains one key "instances" whose value is a :class:`Instances`. - The :class:`Instances` object has the following keys: - "pred_boxes", "pred_classes", "scores", "pred_masks", "pred_keypoints" - """ - if not self.training: - return self.inference(batched_inputs) - - images = self.preprocess_image(batched_inputs) - if "instances" in batched_inputs[0]: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - else: - gt_instances = None - - features = self.backbone(images.tensor) - - if self.proposal_generator is not None: - proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) - else: - assert "proposals" in batched_inputs[0] - proposals = [x["proposals"].to(self.device) for x in batched_inputs] - proposal_losses = {} - - _, detector_losses = self.roi_heads(images, features, proposals, gt_instances) - if self.vis_period > 0: - storage = get_event_storage() - if storage.iter % self.vis_period == 0: - self.visualize_training(batched_inputs, proposals) - - losses = {} - losses.update(detector_losses) - losses.update(proposal_losses) - return losses - - def inference( - self, - batched_inputs: List[Dict[str, torch.Tensor]], - detected_instances: Optional[List[Instances]] = None, - do_postprocess: bool = True, - ): - """ - Run inference on the given inputs. - - Args: - batched_inputs (list[dict]): same as in :meth:`forward` - detected_instances (None or list[Instances]): if not None, it - contains an `Instances` object per image. The `Instances` - object contains "pred_boxes" and "pred_classes" which are - known boxes in the image. - The inference will then skip the detection of bounding boxes, - and only predict other per-ROI outputs. - do_postprocess (bool): whether to apply post-processing on the outputs. - - Returns: - When do_postprocess=True, same as in :meth:`forward`. - Otherwise, a list[Instances] containing raw network outputs. - """ - assert not self.training - - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - - if detected_instances is None: - if self.proposal_generator is not None: - proposals, _ = self.proposal_generator(images, features, None) - else: - assert "proposals" in batched_inputs[0] - proposals = [x["proposals"].to(self.device) for x in batched_inputs] - - results, _ = self.roi_heads(images, features, proposals, None) - else: - detected_instances = [x.to(self.device) for x in detected_instances] - results = self.roi_heads.forward_with_given_boxes(features, detected_instances) - - if do_postprocess: - assert not torch.jit.is_scripting(), "Scripting is not supported for postprocess." - return GeneralizedRCNN._postprocess(results, batched_inputs, images.image_sizes) - else: - return results - - def preprocess_image(self, batched_inputs: List[Dict[str, torch.Tensor]]): - """ - Normalize, pad and batch the input images. 
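-        Normalization is (x - pixel_mean) / pixel_std per channel, and padding goes up to the smallest size divisible by self.backbone.size_divisibility (see the code below).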
- """ - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.backbone.size_divisibility) - return images - - @staticmethod - def _postprocess(instances, batched_inputs: List[Dict[str, torch.Tensor]], image_sizes): - """ - Rescale the output instances to the target size. - """ - # note: private function; subject to changes - processed_results = [] - for results_per_image, input_per_image, image_size in zip( - instances, batched_inputs, image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - r = detector_postprocess(results_per_image, height, width) - processed_results.append({"instances": r}) - return processed_results - - -@META_ARCH_REGISTRY.register() -class ProposalNetwork(nn.Module): - """ - A meta architecture that only predicts object proposals. - """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - proposal_generator: nn.Module, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - ): - """ - Args: - backbone: a backbone module, must follow detectron2's backbone interface - proposal_generator: a module that generates proposals using backbone features - pixel_mean, pixel_std: list or tuple with #channels element, representing - the per-channel mean and std to be used to normalize the input image - """ - super().__init__() - self.backbone = backbone - self.proposal_generator = proposal_generator - self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False) - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - return { - "backbone": backbone, - "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()), - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - } - - @property - def device(self): - return self.pixel_mean.device - - def forward(self, batched_inputs): - """ - Args: - Same as in :class:`GeneralizedRCNN.forward` - - Returns: - list[dict]: - Each dict is the output for one input image. - The dict contains one key "proposals" whose value is a - :class:`Instances` with keys "proposal_boxes" and "objectness_logits". - """ - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.backbone.size_divisibility) - features = self.backbone(images.tensor) - - if "instances" in batched_inputs[0]: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - elif "targets" in batched_inputs[0]: - log_first_n( - logging.WARN, "'targets' in the model inputs is now renamed to 'instances'!", n=10 - ) - gt_instances = [x["targets"].to(self.device) for x in batched_inputs] - else: - gt_instances = None - proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) - # In training, the proposals are not useful at all but we generate them anyway. - # This makes RPN-only models about 5% slower. 
- if self.training: - return proposal_losses - - processed_results = [] - for results_per_image, input_per_image, image_size in zip( - proposals, batched_inputs, images.image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - r = detector_postprocess(results_per_image, height, width) - processed_results.append({"proposals": r}) - return processed_results diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py deleted file mode 100644 index fd3a7b79b6b7a3608ad7cb3918de020a5a600d2f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import unittest -import torch - -from detectron2.modeling.box_regression import ( - Box2BoxTransform, - Box2BoxTransformLinear, - Box2BoxTransformRotated, -) -from detectron2.utils.testing import random_boxes - -logger = logging.getLogger(__name__) - - -class TestBox2BoxTransform(unittest.TestCase): - def test_reconstruction(self): - weights = (5, 5, 10, 10) - b2b_tfm = Box2BoxTransform(weights=weights) - src_boxes = random_boxes(10) - dst_boxes = random_boxes(10) - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_tfm.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_tfm.apply_deltas(deltas, src_boxes) - self.assertTrue(torch.allclose(dst_boxes, dst_boxes_reconstructed)) - - def test_apply_deltas_tracing(self): - weights = (5, 5, 10, 10) - b2b_tfm = Box2BoxTransform(weights=weights) - - with torch.no_grad(): - func = torch.jit.trace(b2b_tfm.apply_deltas, (torch.randn(10, 20), torch.randn(10, 4))) - - o = func(torch.randn(10, 20), torch.randn(10, 4)) - self.assertEqual(o.shape, (10, 20)) - o = func(torch.randn(5, 20), torch.randn(5, 4)) - self.assertEqual(o.shape, (5, 20)) - - -def random_rotated_boxes(mean_box, std_length, std_angle, N): - return torch.cat( - [torch.rand(N, 4) * std_length, torch.rand(N, 1) * std_angle], dim=1 - ) + torch.tensor(mean_box, dtype=torch.float) - - -class TestBox2BoxTransformRotated(unittest.TestCase): - def test_reconstruction(self): - weights = (5, 5, 10, 10, 1) - b2b_transform = Box2BoxTransformRotated(weights=weights) - src_boxes = random_rotated_boxes([10, 10, 20, 20, -30], 5, 60.0, 10) - dst_boxes = random_rotated_boxes([10, 10, 20, 20, -30], 5, 60.0, 10) - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_transform.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_transform.apply_deltas(deltas, src_boxes) - assert torch.allclose(dst_boxes[:, :4], dst_boxes_reconstructed[:, :4], atol=1e-5) - # angle difference has to be normalized - assert torch.allclose( - (dst_boxes[:, 4] - dst_boxes_reconstructed[:, 4] + 180.0) % 360.0 - 180.0, - torch.zeros_like(dst_boxes[:, 4]), - atol=1e-4, - ) - - -class TestBox2BoxTransformLinear(unittest.TestCase): - def test_reconstruction(self): - b2b_tfm = 
Box2BoxTransformLinear() - src_boxes = random_boxes(10) - dst_boxes = torch.tensor([0, 0, 101, 101] * 10).reshape(10, 4).float() - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_tfm.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_tfm.apply_deltas(deltas, src_boxes) - self.assertTrue(torch.allclose(dst_boxes, dst_boxes_reconstructed, atol=1e-3)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/PAIR/PAIR-Diffusion/ldm/modules/image_degradation/bsrgan_light.py b/spaces/PAIR/PAIR-Diffusion/ldm/modules/image_degradation/bsrgan_light.py deleted file mode 100644 index 808c7f882cb75e2ba2340d5b55881d11927351f0..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/ldm/modules/image_degradation/bsrgan_light.py +++ /dev/null @@ -1,651 +0,0 @@ -# -*- coding: utf-8 -*- -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] - - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. 
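        Example (illustrative values): anisotropic_Gaussian(ksize=15, theta=np.pi/4, l1=6, l2=1)
        gives a 15x15 kernel elongated along the 45-degree direction, since theta rotates the major axis.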
- Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = 
sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. 
- threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - - wd2 = wd2/4 - wd = wd/4 - - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random()) - img = ndimage.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. 
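-    # vals = 10 ** uniform(2, 4), i.e. 100 to 10000, acts as a photon count per unit
-    # intensity: np.random.poisson(img * vals) / vals has std ~= sqrt(img / vals),
-    # so larger vals means weaker shot noise.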
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(80, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] 
# nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None, up=False): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - # elif i == 1: - # image = add_blur(image, sf=sf) - - if i == 0: - pass - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.8: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] 
# nearest downsampling - - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - # - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - if up: - image = cv2.resize(image, (w1, h1), interpolation=cv2.INTER_CUBIC) # todo: random, as above? want to condition on it then - example = {"image": image} - return example - - - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_hq = img - img_lq = deg_fn(img)["image"] - img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), - (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') diff --git a/spaces/PKUWilliamYang/StyleGANEX/app.py b/spaces/PKUWilliamYang/StyleGANEX/app.py deleted file mode 100644 index 1b9e357a20cd0dd64fecd322851c1d0aebc691a4..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/app.py +++ /dev/null @@ -1,112 +0,0 @@ -from __future__ import annotations - -import argparse -import pathlib -import torch -import gradio as gr - -from webUI.app_task import * -from webUI.styleganex_model import Model - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - return parser.parse_args() - -DESCRIPTION = ''' -
-Face Manipulation with StyleGANEX
-For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. [Duplicate Space badge]
-[example image]
      -''' -ARTICLE = r""" -If StyleGANEX is helpful, please help to ⭐ the Github Repo. Thanks! -[![GitHub Stars](https://img.shields.io/github/stars/williamyang1991/StyleGANEX?style=social)](https://github.com/williamyang1991/StyleGANEX) ---- -📝 **Citation** -If our work is useful for your research, please consider citing: -```bibtex -@inproceedings{yang2023styleganex, - title = {StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces}, - author = {Yang, Shuai and Jiang, Liming and Liu, Ziwei and and Loy, Chen Change}, - booktitle = {ICCV}, - year = {2023}, -} -``` -📋 **License** -This project is licensed under S-Lab License 1.0. -Redistribution and use for non-commercial purposes should follow this license. - -📧 **Contact** -If you have any questions, please feel free to reach me out at williamyang@pku.edu.cn. -""" - -FOOTER = '
[visitor badge image]
      ' - -def main(): - args = parse_args() - args.device = 'cuda' if torch.cuda.is_available() else 'cpu' - print('*** Now using %s.'%(args.device)) - model = Model(device=args.device) - - - torch.hub.download_url_to_file('https://raw.githubusercontent.com/williamyang1991/StyleGANEX/main/data/234_sketch.jpg', - '234_sketch.jpg') - torch.hub.download_url_to_file('https://github.com/williamyang1991/StyleGANEX/raw/main/output/ILip77SbmOE_inversion.pt', - 'ILip77SbmOE_inversion.pt') - torch.hub.download_url_to_file('https://raw.githubusercontent.com/williamyang1991/StyleGANEX/main/data/ILip77SbmOE.png', - 'ILip77SbmOE.png') - torch.hub.download_url_to_file('https://raw.githubusercontent.com/williamyang1991/StyleGANEX/main/data/ILip77SbmOE_mask.png', - 'ILip77SbmOE_mask.png') - torch.hub.download_url_to_file('https://raw.githubusercontent.com/williamyang1991/StyleGANEX/main/data/pexels-daniel-xavier-1239291.jpg', - 'pexels-daniel-xavier-1239291.jpg') - torch.hub.download_url_to_file('https://github.com/williamyang1991/StyleGANEX/raw/main/data/529_2.mp4', - '529_2.mp4') - torch.hub.download_url_to_file('https://github.com/williamyang1991/StyleGANEX/raw/main/data/684.mp4', - '684.mp4') - torch.hub.download_url_to_file('https://github.com/williamyang1991/StyleGANEX/raw/main/data/pexels-anthony-shkraba-production-8136210.mp4', - 'pexels-anthony-shkraba-production-8136210.mp4') - - - with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - with gr.Tabs(): - with gr.TabItem('Inversion for Editing'): - create_demo_inversion(model.process_inversion, allow_optimization=False) - with gr.TabItem('Image Face Toonify'): - create_demo_toonify(model.process_toonify) - with gr.TabItem('Video Face Toonify'): - create_demo_vtoonify(model.process_vtoonify, max_frame_num=12) - with gr.TabItem('Image Face Editing'): - create_demo_editing(model.process_editing) - with gr.TabItem('Video Face Editing'): - create_demo_vediting(model.process_vediting, max_frame_num=12) - with gr.TabItem('Sketch2Face'): - create_demo_s2f(model.process_s2f) - with gr.TabItem('Mask2Face'): - create_demo_m2f(model.process_m2f) - with gr.TabItem('SR'): - create_demo_sr(model.process_sr) - gr.Markdown(ARTICLE) - gr.Markdown(FOOTER) - - demo.launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - -if __name__ == '__main__': - main() - diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/time.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/time.go deleted file mode 100644 index 0a0d8a253f4f19c327a53244181e7174605c7680..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/time.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/De-limiter/inference.py b/spaces/PeepDaSlan9/De-limiter/inference.py deleted file mode 100644 index 59e20d3182f2da8657c1d07f7f94e17b643ae4af..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/De-limiter/inference.py +++ /dev/null @@ -1,165 +0,0 @@ -import os -import json -import argparse -import glob - -import torch -import tqdm -import librosa -import soundfile as sf -import pyloudnorm as pyln -from dotmap import DotMap - -from models import load_model_with_args -from separate_func import ( - conv_tasnet_separate, -) -from utils import str2bool, db2linear - - -tqdm.monitor_interval = 0 - - -def separate_track_with_model( - args, model, device, track_audio, track_name, meter, augmented_gain -): - with 
torch.no_grad(): - if ( - args.model_loss_params.architecture == "conv_tasnet_mask_on_output" - or args.model_loss_params.architecture == "conv_tasnet" - ): - estimates = conv_tasnet_separate( - args, - model, - device, - track_audio, - track_name, - meter=meter, - augmented_gain=augmented_gain, - ) - - return estimates - - -def main(): - parser = argparse.ArgumentParser(description="model test.py") - parser.add_argument("--target", type=str, default="all") - parser.add_argument("--data_root", type=str, default="./input_data") - parser.add_argument("--weight_directory", type=str, default="./weight") - parser.add_argument("--output_directory", type=str, default="./output") - parser.add_argument("--use_gpu", type=str2bool, default=True) - parser.add_argument("--save_name_as_target", type=str2bool, default=False) - parser.add_argument( - "--loudnorm_input_lufs", - type=float, - default=None, - help="If you want to use loudnorm for input", - ) - parser.add_argument( - "--save_output_loudnorm", - type=float, - default=-14.0, - help="Save loudness normalized outputs or not. If you want to save, input target loudness", - ) - parser.add_argument( - "--save_mixed_output", - type=float, - default=None, - help="Save original+delimited-estimation mixed output with a ratio of default 0.5 (orginal) and 1 - 0.5 (estimation)", - ) - parser.add_argument( - "--save_16k_mono", - type=str2bool, - default=False, - help="Save 16k mono wav files for FAD evaluation.", - ) - parser.add_argument( - "--save_histogram", - type=str2bool, - default=False, - help="Save histogram of the output. Only valid when the task is 'delimit'", - ) - parser.add_argument( - "--use_singletrackset", - type=str2bool, - default=False, - help="Use SingleTrackSet if input data is too long.", - ) - - args, _ = parser.parse_known_args() - - with open(f"{args.weight_directory}/{args.target}.json", "r") as f: - args_dict = json.load(f) - args_dict = DotMap(args_dict) - - for key, value in args_dict["args"].items(): - if key in list(vars(args).keys()): - pass - else: - setattr(args, key, value) - - args.test_output_dir = f"{args.output_directory}" - os.makedirs(args.test_output_dir, exist_ok=True) - - device = torch.device( - "cuda" if torch.cuda.is_available() and args.use_gpu else "cpu" - ) - - ###################### Define Models ###################### - our_model = load_model_with_args(args) - our_model = our_model.to(device) - - target_model_path = f"{args.weight_directory}/{args.target}.pth" - checkpoint = torch.load(target_model_path, map_location=device) - our_model.load_state_dict(checkpoint) - - our_model.eval() - - meter = pyln.Meter(44100) - - test_tracks = glob.glob(f"{args.data_root}/*.wav") + glob.glob( - f"{args.data_root}/*.mp3" - ) - - for track in tqdm.tqdm(test_tracks): - track_name = os.path.basename(track).replace(".wav", "").replace(".mp3", "") - track_audio, sr = librosa.load(track, sr=None, mono=False) # sr should be 44100 - - orig_audio = track_audio.copy() - - if sr != 44100: - raise ValueError("Sample rate should be 44100") - augmented_gain = None - print("Now De-limiting : ", track_name) - - if args.loudnorm_input_lufs: # If you want to use loud-normalized input - track_lufs = meter.integrated_loudness(track_audio.T) - augmented_gain = args.loudnorm_input_lufs - track_lufs - track_audio = track_audio * db2linear(augmented_gain, eps=0.0) - - track_audio = ( - torch.as_tensor(track_audio, dtype=torch.float32).unsqueeze(0).to(device) - ) - - estimates = separate_track_with_model( - args, our_model, device, 
track_audio, track_name, meter, augmented_gain - ) - - if args.save_mixed_output: - track_lufs = meter.integrated_loudness(orig_audio.T) - augmented_gain = args.save_output_loudnorm - track_lufs - orig_audio = orig_audio * db2linear(augmented_gain, eps=0.0) - - mixed_output = orig_audio * args.save_mixed_output + estimates * ( - 1 - args.save_mixed_output - ) - - sf.write( - f"{args.test_output_dir}/{track_name}/{track_name}_mixed.wav", - mixed_output.T, - args.data_params.sample_rate, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/PeepDaSlan9/SDXL-artists-browser/index.js b/spaces/PeepDaSlan9/SDXL-artists-browser/index.js deleted file mode 100644 index cc6724b38bf87645c2118e36907ef63aa006380d..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/SDXL-artists-browser/index.js +++ /dev/null @@ -1,1552 +0,0 @@ -// -// -// -// -// global variables -var timer; -var artTypes = ['🎨','🧑','🏞️']; -var imgTypeShown = 0; -var log = ''; -var editMode = false; -var windowWidth = 0; -var gutterStartPosX, mouseStartPosX, gutterEndPercentX -var style, stylesheet, imgHoverRule; - -// -// -// -// functions -function startUp() { - updateFooter(); - insertArtists(); - insertCheckboxesFromArtistsData(); - insertCheckboxesFromCategories(); - loadCheckboxesState(); - showHideCategories(); - loadOptionsState(); - loadFavoritesState(); - hideAllArtists(); - unhideBasedOnPermissiveSetting(); - updateArtistsCountPerTag('start'); - rotatePromptsImages(); - sortArtists(); - sortTags(); - loadMostUsedTags(); - updateArtistsCountPerCategory(); - showHideLowCountTags(); - getStyleRuleForDrag(); -} - -function updateFooter() { - let proto = window.location.protocol; - if (proto.startsWith('http')) { - var footer = document.getElementsByTagName('footer')[0]; - var el1 = document.createElement('span'); - el1.textContent = ''; - // footer.classList.add('special'); - // footer.querySelectorAll('div')[0].prepend(el1); - } -} - -function insertArtists() { - // artistsData is defined in the artists_and_tags.js file - let missingFiles = ''; - var container = document.getElementById('image-container'); - let imagePromises = artistsData.map((artist) => { - var last = artist[0]; - var first = artist[1]; - var tags1 = artist[2].replaceAll('|', ' ').toLowerCase(); // for classes - var tags2 = artist[2].replaceAll('|', ', ').toLowerCase(); // for display - // class names can't start with a number, but some tags do - // in these cases we prepend the class with 'qqqq-' - tags1 = tags1.replace(/(^|\s)(\d)/g, '$1qqqq-$2'); - // artists can have a tag in the format of "added-YYYY-MM" - // we want that to show up as a filter, but not on the artist card - tags2 = tags2.replace(/, added-(\d|-)*/g,''); - var itemDiv = document.createElement('div'); - itemDiv.className = 'image-item ' + tags1; - if(artist[3]) { - itemDiv.dataset.deprecated = true; - } - var itemHeader = document.createElement('span'); - var h3 = document.createElement('h3'); - itemHeader.appendChild(h3); - var firstN = document.createElement('span'); - var lastN = document.createElement('span'); - firstN.className = 'firstN'; - lastN.className = 'lastN'; - firstN.textContent = `${first}`; - lastN.textContent = `${last}`; - h3.appendChild(firstN); - h3.appendChild(lastN); - h3.title = 'copy to clipboard'; - var h4 = document.createElement('h4'); - h4.textContent = tags2; - h4.title = 'check/uncheck these tags'; - itemHeader.appendChild(h4); - itemDiv.appendChild(itemHeader); - var box = document.createElement('div'); - var imgTools = 
document.createElement('div'); - imgTools.className = 'imgTools'; - var artPrev = document.createElement('div'); - artPrev.className = 'art_prev'; - var artPrevSpan = document.createElement('span'); - artPrevSpan.textContent = '🧑'; - artPrev.appendChild(artPrevSpan); - imgTools.appendChild(artPrev); - var artStar = document.createElement('div'); - artStar.className = 'art_star'; - var artStarSpan = document.createElement('span'); - artStarSpan.textContent = '⭐️'; - artStar.appendChild(artStarSpan); - imgTools.appendChild(artStar); - var artNext = document.createElement('div'); - artNext.className = 'art_next'; - var artNextSpan = document.createElement('span'); - artNextSpan.textContent = '🏞️'; - artNext.appendChild(artNextSpan); - imgTools.appendChild(artNext); - box.appendChild(imgTools); - var imgBox = document.createElement('div'); - imgBox.className = 'imgBox'; - var imgArtwork = document.createElement('img'); - var imgPortrait = document.createElement('img'); - var imgLandscape = document.createElement('img'); - imgArtwork.alt = `${first} ${last}` + ' - artwork'; - imgPortrait.alt = `${first} ${last}` + ' - portrait'; - imgLandscape.alt = `${first} ${last}` + ' - landscape'; - imgArtwork.className = 'img_artwork'; - imgPortrait.className = 'img_portrait hidden'; - imgLandscape.className = 'img_landscape hidden'; - let src = 'images/SDXL_1_0_thumbs/'; - if(first == '') { - src += last.replaceAll(' ', '_'); - } else { - src += first.replaceAll(' ', '_') + '_' + last.replaceAll(' ', '_'); - } - // files use accented characters and huggingface stores the files with this encoding - src = encodeURI(src.normalize("NFD")); - imgBox.appendChild(imgArtwork); - imgBox.appendChild(imgPortrait); - imgBox.appendChild(imgLandscape); - box.appendChild(imgBox); - itemDiv.appendChild(box); - container.appendChild(itemDiv); - if(artist[3]) { - var deprecatedSpan = document.createElement('span'); - deprecatedSpan.textContent = 'this artist is deprecated. hover to view anyway. more info in the help ⁉️'; - deprecatedSpan.className = 'deprecated'; - imgBox.appendChild(deprecatedSpan); - return Promise.allSettled([ - new Promise((resolve, reject) => { - imgArtwork.style.display = 'none'; - imgArtwork.src = 'images/SDXL_1_0_thumbs/1x1.webp'; - resolve(); // settle immediately so Promise.allSettled below can complete - }), - new Promise((resolve, reject) => { - imgPortrait.style.display = 'none'; - imgPortrait.src = 'images/SDXL_1_0_thumbs/1x1.webp'; - resolve(); - }), - new Promise((resolve, reject) => { - imgLandscape.style.display = 'none'; - imgLandscape.src = 'images/SDXL_1_0_thumbs/1x1.webp'; - resolve(); - }) - ]); - } else { - // if not flagged as deprecated - return Promise.allSettled([ - new Promise((resolve, reject) => { - imgArtwork.onload = resolve; - imgArtwork.onerror = () => { - missingFiles += '
<li>' + first + '_' + last + '-artwork.webp</li>'; - reject(); - }; - imgArtwork.src = src + '-artwork.webp'; - }), - new Promise((resolve, reject) => { - imgPortrait.onload = resolve; - imgPortrait.onerror = () => { - missingFiles += '<li>' + first + '_' + last + '-portrait.webp</li>'; - reject(); - }; - imgPortrait.src = src + '-portrait.webp'; - }), - new Promise((resolve, reject) => { - imgLandscape.onload = resolve; - imgLandscape.onerror = () => { - missingFiles += '<li>' + first + '_' + last + '-landscape.webp</li>'; - reject(); - }; - imgLandscape.src = src + '-landscape.webp'; - }) - ]); - } - }); - let report = document.getElementById('missing_images_report'); - Promise.allSettled(imagePromises).then(() => { - if(missingFiles.indexOf('webp')>0) { - report.innerHTML = missingFiles; - } else { - report.innerHTML = '<li>No thumbnail files are missing! Enlarged images are loaded on hover. If any are missing, they\'ll be listed here at that time.</li>'; - } - }); -} - -function insertCheckboxesFromArtistsData() { - var uniqueTags = new Set(); - artistsData.forEach(function(artist) { - var tags = artist[2].split('|'); - tags.forEach(function(tag) { - uniqueTags.add(tag.toLowerCase()); - }); - }); - var uTags = Array.from(uniqueTags); - var toggles = document.getElementById('toggles'); - for(i=0,il=uTags.length;i<il;i++) { - if(uTags[i].length > 0) { - // 👆 shouldn't need to sanitize the database, but just in case - var label = document.createElement('label'); - var el = document.createElement('i'); - el.className = 'most_used_indicator'; - el.textContent = '+'; - var input = document.createElement('input'); - input.type = 'checkbox'; - input.name = uTags[i]; - input.value = uTags[i]; - input.checked = true; - var span1 = document.createElement('span'); - span1.textContent = uTags[i]; - var span2 = document.createElement('span'); - span2.className = 'count'; - label.appendChild(el); - label.appendChild(input); - label.appendChild(span1); - label.appendChild(span2); - toggles.appendChild(label); - } - } -} - -function insertCheckboxesFromCategories() { - var useCategories = document.querySelector('input[name="use_categories"]').checked; - for(i=0,il=tagCategories.length;i 2) { imgTypeShown = 0; } - } - var links = document.getElementById('options_prompts').querySelectorAll('.link'); - links.forEach(function(link) { - link.classList.remove('selected'); - }); - if(imgTypeShown == 0) { - document.getElementById('promptA').classList.add('selected'); - doAlert('Showing artwork',0); - } else if(imgTypeShown == 1) { - document.getElementById('promptP').classList.add('selected'); - doAlert('Showing portraits',0); - } else if(imgTypeShown == 2) { - document.getElementById('promptL').classList.add('selected'); - doAlert('Showing landscapes',0); - } - } else { - if(selected == 'promptA') { - imgTypeShown = 0; - doAlert('Showing artwork',0); - } else if(selected == 'promptP') { - imgTypeShown = 1; - doAlert('Showing portraits',0); - } else if(selected == 'promptL') { - imgTypeShown = 2; - doAlert('Showing landscapes',0); - } - var links = document.getElementById(selected).parentNode.querySelectorAll('.link'); - links.forEach(function(link) { - link.classList.remove('selected'); - }); - document.getElementById(selected).classList.add('selected'); - } -} - -function storeOptionsState() { - let state = JSON.parse(localStorage.getItem('tagsChecked')) || {}; - if(document.getElementById('promptA').classList.contains('selected')) { - state['prompt'] = 'promptA'; - } else if(document.getElementById('promptP').classList.contains('selected')) { - state['prompt'] = 'promptP'; - } else { - state['prompt'] = 'promptL'; - } - if(document.getElementById('sortAR').classList.contains('selected')) { - state['artistSort'] = 'sortAR'; - } else { - state['artistSort'] = 'sortAA'; - } - if(document.getElementById('sortTC').classList.contains('selected')) { - state['tagSort'] = 'sortTC'; - } else { - state['tagSort'] = 'sortTA'; - } - localStorage.setItem('tagsChecked', JSON.stringify(state)); -} - -function rotatePromptsImages() { - // hide all images - let images = document.querySelectorAll('.imgBox img'); - images.forEach(function(image) { - image.classList.add('hidden'); - }); - // unhide images matching highlighted option (imgTypeShown) - if(imgTypeShown == 0) { - images = document.querySelectorAll('.img_artwork'); - } else if(imgTypeShown == 1) { - images = document.querySelectorAll('.img_portrait'); - } else if(imgTypeShown == 2) { - images = document.querySelectorAll('.img_landscape'); - } - 
images.forEach(function(image) { - image.classList.remove('hidden'); - }); - // switch prev and next button icons - let artIndex = 0; - artIndex = imgTypeShown-1; - if(artIndex < 0) { artIndex = 2; } - let prevButtons = document.querySelectorAll('.art_prev span'); - prevButtons.forEach(function(span) { - span.textContent = artTypes[artIndex]; - }); - artIndex = imgTypeShown+1; - if(artIndex > 2) { artIndex = 0; } - let nextButtons = document.querySelectorAll('.art_next span'); - nextButtons.forEach(function(span) { - span.textContent = artTypes[artIndex]; - }); -} - -function updateArtistsCountPerTag(whoCalled) { - var permissiveCheckbox = document.querySelector('input[name="mode"]'); - var checkboxes = document.querySelectorAll('input[type="checkbox"]'); - var divs = document.querySelectorAll('.image-item'); - var hiddenDivs = document.querySelectorAll('.image-item.hidden'); - if(permissiveCheckbox.checked || whoCalled == 'start') { - // on page load, we need to add all the counts first - checkboxes.forEach(function(checkbox) { - let isTop = checkbox.parentNode.classList.contains('top_control'); - if(!isTop) { - var theClass = checkbox.name.replace(/(^|\s)(\d)/g, '$1qqqq-$2'); - var matchingDivs = document.querySelectorAll('.image-item.' + theClass); - var count = matchingDivs.length; - checkbox.parentNode.classList.remove('no_matches'); - checkbox.parentNode.querySelector('input').disabled = false; - checkbox.parentNode.querySelector('.count').textContent = ' - ' + count.toLocaleString(); - } - }); - updateArtistsCountPerCategory(); - } - if(!permissiveCheckbox.checked) { - checkboxes.forEach(function(checkbox) { - let isTop = checkbox.parentNode.classList.contains('top_control'); - if(!isTop) { - var count = 0; - // class names can't start with a number, but some tags do - // in these cases we prepend the class with 'qqqq-' - var theClass = checkbox.name.replace(/(^|\s)(\d)/g, '$1qqqq-$2'); - if(!permissiveCheckbox.checked) { - // for strict mode, for each checkbox, only count artists whose classes match all checked checkboxes - var matchingDivs = document.querySelectorAll('.image-item.' + theClass + ':not(.hidden)'); - count = matchingDivs.length; - if(count == 0) { - checkbox.parentNode.classList.add('no_matches'); - checkbox.parentNode.querySelector('input').disabled = true; - } else { - checkbox.parentNode.classList.remove('no_matches'); - checkbox.parentNode.querySelector('input').disabled = false; - } - } - checkbox.parentNode.querySelector('.count').textContent = ' - ' + count.toLocaleString(); - } - }); - } - updateCountOfArtistsShown(divs, hiddenDivs); -} - -function updateArtistsCountPerCategory() { - var imageItems = document.querySelectorAll('.image-item'); - let counts = []; - for(i=0,il=tagCategories.length; i<il; i++) { - counts.push(0); - } - imageItems.forEach(function(imageItem) { - let classes = Array.from(imageItem.classList).map((className) => { - // class names can't start with a number, - // so some classes were prepended with 'qqqq-' - // which must be ignored - return className.replace(/^qqqq-/, ''); - }); - for(i=0,il=tagCategories.length; i<il; i++) { - if(tagCategories[i][1].map(c => c.toLowerCase()).some(c => classes.includes(c))) { - counts[i]++; - } - } - }); - for(i=0,il=tagCategories.length; i