diff --git a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/test.py b/spaces/101-5/gpt4free/g4f/.v1/gpt4free/test.py deleted file mode 100644 index b2516748041b8bbc12afa910c0eab98e944c45ce..0000000000000000000000000000000000000000 --- a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/test.py +++ /dev/null @@ -1,4 +0,0 @@ -import forefront -token = forefront.Account.create() -response = forefront.Completion.create(token=token, prompt='Hello!') -print(response) \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cubase 10.5 The Ultimate Music Production Software for Professionals and Beginners.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cubase 10.5 The Ultimate Music Production Software for Professionals and Beginners.md deleted file mode 100644 index cb1d3e50f63abb81efef4df17a8863f15446d4d5..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cubase 10.5 The Ultimate Music Production Software for Professionals and Beginners.md +++ /dev/null @@ -1,34 +0,0 @@ -
-

How to Download and Install Cubase 10.5

-

Cubase 10.5 is a powerful music production software that offers a range of features and enhancements for composing, recording, editing, mixing and mastering audio. Whether you are a professional producer, a hobbyist musician, or a beginner who wants to learn the basics of music creation, Cubase 10.5 can help you achieve your musical goals.

-

In this article, we will show you how to download and install Cubase 10.5 on your computer, as well as how to activate it with a license code or a USB-eLicenser. We will also provide some tips and tricks for getting started with Cubase 10.5 and making the most of its features.

-




- -

Downloading Cubase 10.5

-

The first step to install Cubase 10.5 is to download it from the official Steinberg website. You can choose between Cubase Pro 10.5, Cubase Artist 10.5, or Cubase Elements 10.5, depending on your needs and budget. Each version has different features and requirements, so make sure you check them before downloading.

-

To download Cubase 10.5, you will need to create a MySteinberg account or log in with an existing one. You will also need to register your product with a serial number or an activation code that you received when you purchased Cubase 10.5.

-

-

Once you have logged in and registered your product, you can download Cubase 10.5 using the Steinberg Download Assistant. This is a free application that allows you to download faster, more conveniently, and more reliably, thanks to a resume function and a download manager.

-

After you have downloaded the Steinberg Download Assistant, launch it and select Cubase 10.5 from the list of products. You will see different options for downloading the full installer or the update from a previous version of Cubase 10. Choose the option that suits your situation and click on the download button.

-

The download size of Cubase 10.5 varies depending on the version and the operating system you are using. For example, Cubase Pro 10.5 for Windows has a size of about 21 GB, while Cubase Elements 10.5 for Mac has a size of about 14 GB. Make sure you have enough space on your hard drive and a stable internet connection before downloading.
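Before starting a download of this size, it is worth checking your free disk space. Here is a minimal Python sketch using only the standard library; the path is a placeholder for the drive you plan to install to:

```python
import shutil

def free_gb(path: str = ".") -> float:
    """Return the free disk space at `path` in gigabytes."""
    return shutil.disk_usage(path).free / 1024**3

if __name__ == "__main__":
    # Compare the result against the roughly 21 GB full installer.
    print(f"Free space: {free_gb():.1f} GB")
```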

- -

Installing Cubase 10.5

-

After you have downloaded Cubase 10.5, you can proceed to install it on your computer. The installation process is similar for all versions of Cubase 10.5 and for both Mac and Windows operating systems.

-

To install Cubase 10.5, follow these steps:

-
    -
  1. Locate the downloaded file on your computer and double-click on it to start the installation.
  2. Follow the instructions on the screen and accept the license agreement.
  3. Select the components that you want to install, such as the core application, the plug-ins, the sound libraries, etc.
  4. Choose the destination folder where you want to install Cubase 10.5.
  5. Wait for the installation to complete and click on Finish.
-

Congratulations! You have successfully installed Cubase 10.5 on your computer.

- -

Activating Cubase 10.5

-

The final step to use Cubase 10.5 is to activate it with a license code or a USB-eLicenser. A license code is a unique number that allows you to activate Cubase 10.5 online using the eLicenser Control Center. A USB-eLicenser is a physical device that stores your license and allows you to use Cubase 10.5 on any computer by plugging it into a USB port. Depending on the version of Cubase 10.5 that you purchased, you may need one or the other method of activation.

-

To activate Cubase 10.5 with a license code, follow these steps:

-
    -
  1. Launch the eLicenser Control Center on your computer.
  2. Click on the green "Enter Activation Code" button and enter the activation code you received for Cubase 10.5.

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DoPDF Download Crack A Risky Way to Create PDF Files for Free.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DoPDF Download Crack A Risky Way to Create PDF Files for Free.md deleted file mode 100644 index 9f2896e6cf5813e46e45fcb63551cd5de70eade6..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DoPDF Download Crack A Risky Way to Create PDF Files for Free.md +++ /dev/null @@ -1,22 +0,0 @@ -
    -

    DoPDF Download Crack: How to Convert Any Document to PDF for Free

    -

    Do you need to convert your documents to PDF format for easy sharing, printing, or archiving? If so, you might be interested in DoPDF, a free and easy-to-use software that lets you create PDF files from any printable document. However, you might also be wondering if there is a way to get a DoPDF download crack and unlock its full features. In this article, we will explain why that is a bad idea and show you safe and legal alternatives.

    -




    -

    DoPDF is a software that acts as a virtual printer on your computer. This means that you can use it to create PDF files from any application that has a print option, such as Microsoft Word, Excel, PowerPoint, or even web browsers. You can also customize the output settings, such as the page size, orientation, resolution, and quality. DoPDF is compatible with Windows 10, 8, 7, Vista, and XP.

    -

    DoPDF is free for both personal and commercial use. However, it also has some limitations. For example, it does not support batch conversion, encryption, password protection, digital signatures, or watermarks. To access these features, you need to upgrade to novaPDF, which is a paid version of DoPDF. However, novaPDF costs $49.99 for a single license, which might be too expensive for some users.

    -

    That's why some users look for DoPDF download crack options online. A crack is a file or a program that modifies the original software and bypasses its security or activation mechanisms. By using a crack, you can get the full features of novaPDF without paying for it. However, this is not a good idea for several reasons: using a crack is illegal and violates the software license; cracked installers often carry malware or viruses that can harm your computer; and a cracked copy receives no updates or technical support.

    Therefore, we do not recommend using DoPDF download crack options. Instead, we suggest you use one of the following alternatives:

    -

    -
      -
    1. Use the free version of DoPDF. If you don't need the advanced features of novaPDF, you can simply use the free version of DoPDF and enjoy its basic functions. You can download it from the official website: https://www.dopdf.com/.
    2. Use an online PDF converter. If you need to convert your documents to PDF occasionally and don't want to install any software on your computer, you can use an online PDF converter service. There are many websites that offer this service for free or for a small fee. Some examples are Smallpdf, iLovePDF, and PDF2Go.
    3. Use an open-source PDF converter. If you need to convert your documents to PDF frequently and want to have more control over the output settings, you can use open-source PDF converter software. Open-source software is developed by a community of programmers and users who share their code and modifications freely. Some examples are LibreOffice, PDFCreator, and CutePDF Writer.
    -

    By using these alternatives, you can convert your documents to PDF format without using DoPDF download crack options. This way, you can save money, avoid legal issues, protect your computer, and support the software industry.

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dobry Konwerter Pdf Na Epub Download Free For Android.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dobry Konwerter Pdf Na Epub Download Free For Android.md deleted file mode 100644 index 380c80a6e9452a7bb376985fe965a5869b89f15c..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dobry Konwerter Pdf Na Epub Download Free For Android.md +++ /dev/null @@ -1,21 +0,0 @@ - -

    How to Convert PDF to EPUB on Android for Free

    -

    If you have a PDF document that you want to read on your e-reader or mobile device, you might need to convert it to EPUB format first. EPUB is a popular ebook format that is compatible with most devices and apps, such as Kindle, Kobo, Google Play Books, iBooks and more. EPUB files are also easier to adjust to different screen sizes and fonts than PDF files.

    -

    Fortunately, there are some free apps that can help you convert PDF to EPUB on Android without any hassle.

    -




    - -

    With these apps, you can easily convert PDF to EPUB on Android for free and enjoy reading your ebooks on any device. However, keep in mind that the conversion quality may vary depending on the original PDF file and the app settings. You may need to adjust some parameters or edit the EPUB file manually if you are not satisfied with the result.

    - -
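If you also work at a desktop, the same conversion can be scripted. Below is a minimal Python sketch that shells out to Calibre's free ebook-convert command-line tool; it assumes Calibre is installed and on your PATH, and the file name is a placeholder:

```python
import subprocess
from pathlib import Path

def pdf_to_epub(pdf_path: str) -> Path:
    """Convert a PDF to an EPUB next to it, using Calibre's ebook-convert CLI."""
    source = Path(pdf_path)
    target = source.with_suffix(".epub")
    # ebook-convert picks the input and output formats from the file extensions.
    subprocess.run(["ebook-convert", str(source), str(target)], check=True)
    return target

if __name__ == "__main__":
    print(pdf_to_epub("book.pdf"))  # "book.pdf" is a placeholder file name
```

As with the Android apps, the conversion quality still depends on the original PDF, so you may need to adjust options or edit the resulting EPUB manually.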

    If you want to learn more about how to convert PDF to EPUB on Android for free, you can also check out some of the online tutorials and guides on this topic.

    - -

    We hope that this article has helped you find the best app for converting PDF to EPUB on Android for free. If you have any questions or suggestions, please feel free to leave a comment below.

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Solidworks 2019 Full Crack Google Drive.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Solidworks 2019 Full Crack Google Drive.md deleted file mode 100644 index 0bb65449c3ad1ee5515162481cad0402074d96df..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Solidworks 2019 Full Crack Google Drive.md +++ /dev/null @@ -1,27 +0,0 @@ -
    -

    How to Download SolidWorks 2019 Full Crack Google Drive

    -

    SolidWorks 2019 is a powerful 3D CAD design software that helps you create innovative products faster and easier. Whether you are working on complex assemblies, sheet metal, weldments, or electrical design, SolidWorks 2019 has the tools you need to streamline your workflow and improve your productivity.

    -

    However, SolidWorks 2019 is not free software and requires a license to use. If you are looking for a way to download SolidWorks 2019 full crack Google Drive, you may be tempted by some websites that claim to offer cracked versions of the software. But beware: these websites are not only illegal but also risky. You may end up downloading malware, viruses, or spyware that can harm your computer and compromise your data.

    -

    download solidworks 2019 full crack google drive


    Download Ziphttps://byltly.com/2uKyfs



    -

    The best way to download SolidWorks 2019 full crack Google Drive is to avoid it altogether. Instead, you should consider the following options:

    1. Get a free trial of SolidWorks 2019 from the official website, which lets you evaluate the software before buying.
    2. Apply for a student or educator license if you qualify, which offers SolidWorks 2019 at little or no cost for learning purposes.
    3. Purchase a subscription of SolidWorks 2019 from an authorized reseller.

    By choosing one of these options, you can download SolidWorks 2019 legally and safely. You can also enjoy the benefits of technical support, updates, and online resources that come with a legitimate license of SolidWorks 2019.

    -

    Conclusion

    -

    Downloading SolidWorks 2019 full crack Google Drive is not worth the risk and hassle. You may end up with a corrupted or infected file that can damage your computer and data. Instead, you should consider getting a free trial, a student or educator license, or a subscription of SolidWorks 2019. These options will allow you to use SolidWorks 2019 without breaking the law or compromising your security.

    How to Install SolidWorks 2019

    -

    If you have decided to get a legitimate license of SolidWorks 2019, you may be wondering how to install the software on your computer. Here are the steps you need to follow:

    -
      -
    1. Download the SolidWorks 2019 installation file from the official website or the link provided by your reseller. You will need your serial number and your email address to download the file.
    2. Extract the downloaded file to a folder on your computer. You may need software like WinRAR or 7-Zip to extract the file.
    3. Run the setup.exe file from the extracted folder. This will launch the SolidWorks Installation Manager.
    4. Follow the instructions on the screen to select the type of installation, the products and features you want to install, and the destination folder. You may also need to accept the license agreement and enter your serial number.
    5. Click Install Now to start the installation process. This may take some time depending on your system configuration and internet speed.
    6. Once the installation is complete, click Finish to exit the Installation Manager. You may need to restart your computer for the changes to take effect.
    7. Launch SolidWorks 2019 from your desktop or start menu. You may need to activate your license online or offline depending on your license type.
    -

    Congratulations, you have successfully installed SolidWorks 2019 on your computer. You can now start creating and designing your projects with SolidWorks 2019.

    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Atrapada-Por-La-Mafia-Yakuza-Pdf-EXCLUSIVE.md b/spaces/1gistliPinn/ChatGPT4/Atrapada-Por-La-Mafia-Yakuza-Pdf-EXCLUSIVE.md deleted file mode 100644 index 35ebca9e2a8e79ab1509038b98a5bcc82c5548d6..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Atrapada-Por-La-Mafia-Yakuza-Pdf-EXCLUSIVE.md +++ /dev/null @@ -1,53 +0,0 @@ -## Atrapada Por La Mafia Yakuza Pdf - - - -**Download File ✵ [https://www.google.com/url?q=https%3A%2F%2Fcinurl.com%2F2twsIj&sa=D&sntz=1&usg=AOvVaw2fxsITDrwElGQYkdiAy3a6](https://www.google.com/url?q=https%3A%2F%2Fcinurl.com%2F2twsIj&sa=D&sntz=1&usg=AOvVaw2fxsITDrwElGQYkdiAy3a6)** - - - -# Atrapada Por La Mafia Yakuza: The True Story of a Colombian Woman Who Escaped from Human Trafficking - - - -Atrapada Por La Mafia Yakuza is a book written by Marcela Loaiza, a Colombian woman who was lured to Japan with the promise of a job as a dancer, but ended up being forced into prostitution by the Japanese mafia. The book tells her harrowing story of abuse, violence, and exploitation, as well as her courageous escape and recovery. - - - -The book was published in 2009 by Editorial Planeta Colombiana, and has been translated into several languages. It is available for free download in PDF and EPUB formats from the Internet Archive[^1^], or from other online sources[^2^]. The book is also adapted into a movie called Atrapada, directed by Felipe Cano and starring Marcela Mar and Juan Pablo Raba. - - - -Atrapada Por La Mafia Yakuza is a testimony of resilience and hope, as well as a denunciation of the global problem of human trafficking. Marcela Loaiza's story is an inspiration for anyone who has faced adversity and injustice, and a reminder of the importance of fighting for human rights and dignity. - - - -Human trafficking is a global crime that affects millions of people every year. According to the latest statistics from various sources, there are an estimated 40.3 million victims of trafficking worldwide[^1^], with 5.4 victims for every 1,000 people in the world[^1^]. Women and girls account for 71% of all human trafficking victims[^1^], while children make up one in four victims of modern slavery[^2^]. - - - -Human trafficking takes many forms, such as forced labor, sexual exploitation, forced marriage, organ removal, and child soldiering. The most common form of human trafficking is sexual exploitation, which accounts for 79% of all cases[^3^]. However, forced labor is also a significant problem, especially in sectors such as agriculture, construction, domestic work, and manufacturing[^3^]. Human trafficking is driven by various factors, such as poverty, inequality, conflict, corruption, and demand for cheap goods and services. - - - -Human trafficking is a violation of human rights and dignity that causes immense suffering and trauma to its victims. It also poses a threat to global security and development, as it fuels organized crime, undermines the rule of law, and fuels corruption. The international community has taken steps to combat human trafficking, such as adopting the United Nations Protocol against Trafficking in Persons in 2003[^4^], which provides a legal framework and guidance for states to prevent, prosecute, and protect victims of trafficking. However, more needs to be done to address the root causes and consequences of this heinous crime. - - - -There are many ways to prevent and counter human trafficking, both at the individual and collective levels. 
Some of the possible solutions include: - - - -- Raising awareness and educating the public about the signs and risks of human trafficking, as well as the rights and resources available for victims and survivors. This can be done through campaigns, trainings, events, media, and social networks. For example, the U.S. Department of State offers various resources and tools for awareness-raising on its website. - -- Supporting and empowering victims and survivors of human trafficking by providing them with safe shelter, medical care, legal assistance, counseling, education, and employment opportunities. This can be done by volunteering or donating to organizations that offer such services, or by becoming a mentor or advocate for someone in need. For example, UNICEF works with partners to prevent and respond to human trafficking, with a focus on protecting children. - -- Advocating for stronger laws and policies that protect the rights of victims and survivors, punish the perpetrators, and address the root causes of human trafficking. This can be done by contacting or writing to local, national, and international authorities and representatives, or by joining or supporting campaigns and movements that demand change. For example, the Global Alliance Against Traffic in Women (GAATW) is a network of organizations that advocates for the human rights of trafficked persons. - -- Promoting ethical and responsible consumption and production that do not exploit or harm people or the environment. This can be done by researching and choosing products and services that are free from forced labor or other forms of trafficking, or by encouraging companies to adopt transparent and accountable supply chains. For example, Responsible Sourcing Tool is a website that helps users identify risks of human trafficking in their supply chains. - -- Collaborating and cooperating with other stakeholders that are involved in preventing and countering human trafficking, such as governments, civil society, private sector, media, academia, and international organizations. This can be done by sharing information, best practices, resources, and expertise, or by participating in networks and platforms that facilitate dialogue and action. For example, the United Nations Office on Drugs and Crime (UNODC) is the guardian of the UN Protocol against Trafficking in Persons and supports states in its implementation. - - - - 1b8d091108 \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Chrysler Witech Software.rarl.md b/spaces/1gistliPinn/ChatGPT4/Examples/Chrysler Witech Software.rarl.md deleted file mode 100644 index 79190e46a14b6a5bf10a8e5fe446b5842db9da3d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Chrysler Witech Software.rarl.md +++ /dev/null @@ -1,10 +0,0 @@ -

    Chrysler Witech Software.rarl





    - -in service manuals and electronic device.all kind of chrysler witech software, link given below : - -Chrysler Witech Software.rarl alejwen. chrysler witech software download, chrysler witech software, chrysler witech diagnostic tool, chrysler witech . in service manuals and electronic device.all kind of chrysler witech software, link given below : - -Chrysler Witech Software.rarl alejwen. chrysler witech software download, chrysler witech software 4fefd39f24
    -
    -
    -

    diff --git a/spaces/1line/AutoGPT/autogpt/commands/analyze_code.py b/spaces/1line/AutoGPT/autogpt/commands/analyze_code.py deleted file mode 100644 index e02ea4c5b4ba53530e559d1cab7a07b8e3c7c638..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/commands/analyze_code.py +++ /dev/null @@ -1,25 +0,0 @@ -"""Code evaluation module.""" -from __future__ import annotations - -from autogpt.llm_utils import call_ai_function - - -def analyze_code(code: str) -> list[str]: - """ - A function that takes in a string and returns a response from create chat - completion api call. - - Parameters: - code (str): Code to be evaluated. - Returns: - A result string from create chat completion. A list of suggestions to - improve the code. - """ - - function_string = "def analyze_code(code: str) -> List[str]:" - args = [code] - description_string = ( - "Analyzes the given code and returns a list of suggestions" " for improvements." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy a Large Selection of Radio and TV Channels with Iris APK - No Subscription Required.md b/spaces/1phancelerku/anime-remove-background/Enjoy a Large Selection of Radio and TV Channels with Iris APK - No Subscription Required.md deleted file mode 100644 index eb70fc89e6fbdc51987ed100186f33d162cb2b2d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy a Large Selection of Radio and TV Channels with Iris APK - No Subscription Required.md +++ /dev/null @@ -1,119 +0,0 @@ -
    -

    Iris APK: What Is It and How to Use It?

    -

    If you are looking for a new and innovative way to communicate with your friends, family, or colleagues, you might want to try Iris APK. Iris APK is an Android app that lets you chat with an artificial intelligence (AI) assistant that can help you with various tasks and queries. In this article, we will explain what Iris APK is, why you should use it, how to download and install it, and how to use it.

    -

    Introduction

    -

    What is Iris APK?

    -

    Iris APK is an app that allows you to chat with Iris, an AI assistant that can understand natural language and respond accordingly. Iris is not just a chatbot, but a smart companion that can assist you with various aspects of your life, such as personal, professional, social, and educational. You can ask Iris anything, from simple questions like "What is the weather today?" to complex ones like "How can I improve my productivity?"

    -




    -

    Why use Iris APK?

    -

    There are many reasons why you might want to use Iris APK. Here are some of them:

    - Iris can assist you with personal, professional, social, and educational tasks, from answering simple questions to helping you plan your day.
    - It can perform practical tasks for you, such as booking a flight, ordering food, making a reservation, setting a reminder, or playing music.
    - It learns from your preferences and behavior and personalizes your experience accordingly.
    - You can chat with it by text, voice, or video, in many different languages.

    How to download and install Iris APK?

    -

    Download Iris APK from a trusted source

    -

    The first step to use Iris APK is to download it from a trusted source. You can find the latest version of Iris APK on APKCombo, a website that offers free and safe downloads of Android apps. You can also scan the QR code below to download Iris APK directly to your device.

    - QR code for downloading Iris APK -

    Enable unknown sources on your device

    -

    The next step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, follow these steps:

    -
      -
    1. Go to your device's settings and tap on security or privacy.
    2. Find the option that says "Unknown sources" or "Install unknown apps" and toggle it on.
    3. Confirm your choice by tapping on OK or Allow.
    -

    Install Iris APK and launch it

    -

    The final step is to install Iris APK and launch it. To do this, follow these steps:

    -
    1. Locate the downloaded Iris APK file on your device. Tap on it and select Install.
    2. Wait for the installation to complete and then tap on Open.
    3. Grant the necessary permissions to Iris APK, such as microphone, camera, contacts, and storage.
    -
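As an alternative to these on-device steps, you can also install the APK from a computer. Below is a minimal Python sketch that shells out to Android's standard adb tool; it assumes adb is installed, USB debugging is enabled on the device, and the APK file name is a placeholder:

```python
import subprocess

def install_apk(apk_path: str) -> None:
    """Install (or reinstall) an APK on a USB-connected Android device via adb."""
    # -r replaces an existing installation while keeping the app's data.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    install_apk("iris.apk")  # placeholder file name
```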

    Congratulations! You have successfully installed and launched Iris APK. You are now ready to chat with Iris and enjoy its features and benefits.

    -

    How to use Iris APK?

    -

    Choose your preferred mode of communication

    -

    One of the best things about Iris APK is that you can chat with Iris in different modes, depending on your preference and situation. You can choose from text, voice, or video mode. To switch between modes, just tap on the icons at the bottom of the screen. Here is a brief overview of each mode:

    - Text mode: type your messages and read Iris's replies on screen.
    - Voice mode: speak to Iris using your microphone and hear its replies out loud.
    - Video mode: talk to Iris face to face using your device's camera and microphone.

    Connect with Iris and start chatting

    -

    Once you have chosen your preferred mode of communication, you can start chatting with Iris. You can ask Iris anything you want, from casual topics to serious ones. Iris will try to understand your message and respond accordingly. You can also chat with Iris in different languages, such as English, Spanish, French, German, Chinese, Japanese, and more. To change the language, just tap on the globe icon at the top right corner of the screen and select your desired language.

    -

    Explore the features and benefits of Iris APK

    -

    As you chat with Iris, you will discover that Iris APK has many features and benefits that can make your life easier and more enjoyable. Here are some of them:

    -

    - It can answer questions on almost any topic, from casual to serious ones.
    - It can perform tasks for you, such as booking a flight, ordering food, making a reservation, setting a reminder, or playing music.
    - It learns from your preferences and behavior and personalizes your experience over time.
    - It supports many languages, including English, Spanish, French, German, Chinese, and Japanese.
    - -

    Conclusion

    -

    Summary of the main points

    -

    In conclusion, Iris APK is an amazing app that lets you chat with an AI assistant that can help you with various aspects of your life. You can download and install Iris APK from a trusted source and use it in different modes of communication. You can also chat with Iris in different languages and explore its features and benefits.

    -

    Call to action and recommendation

    -

    If you are interested in trying out Iris APK, we recommend that you download it today and start chatting with Iris. You will be amazed by how smart, helpful, fun, and friendly Iris is. You will also enjoy the convenience and satisfaction that Iris APK brings to your life.

    -

    To download Iris APK now, click here.

    -

    Frequently Asked Questions (FAQs)

    -
      -
    1. What is the difference between Iris APK and other chatbot apps?

    Iris APK is different from other chatbot apps because it is not just a chatbot, but an AI assistant that can understand natural language and respond accordingly. Iris APK can also perform tasks for you, such as booking a flight, ordering food, making a reservation, setting a reminder, playing music, and more. Iris APK can also learn from your preferences and behavior and personalize your experience accordingly.

    2. Is Iris APK safe and secure?

    Yes, Iris APK is safe and secure. Iris APK does not collect or store any personal or sensitive data from you. Iris APK also does not share or sell any information to third parties. Iris APK respects your privacy and security and only uses your data to provide you with the best service possible.

    3. How can I update Iris APK?

    You can update Iris APK by visiting APKCombo and downloading the latest version of the app. You can also enable automatic updates in your device settings to ensure that you always have the most updated version of Iris APK.

    4. How can I contact the developers of Iris APK?

    If you have any questions, suggestions, feedback, or issues regarding Iris APK, you can contact the developers by sending an email to iris@io.com. You can also visit their website at iris.io for more information.

    5. Can I use Iris APK on other devices besides Android?

    Currently, Iris APK is only available for Android devices. However, the developers of Iris APK are working hard to make it compatible with other devices and platforms, such as iOS, Windows, Mac, Linux, and more. Stay tuned for more updates on this matter.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Music of Westeros Game of Thrones Soundtrack Free Download Zip.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Music of Westeros Game of Thrones Soundtrack Free Download Zip.md deleted file mode 100644 index f34d965162731e24cb0c7a0c58819e7392cd25c3..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy the Music of Westeros Game of Thrones Soundtrack Free Download Zip.md +++ /dev/null @@ -1,118 +0,0 @@ - -

    How to Download the Game of Thrones Soundtrack for Free in Zip Format

    -

    Game of Thrones is one of the most popular and acclaimed TV shows of all time. Based on the fantasy novels by George R.R. Martin, the show features a rich and complex story, a vast and diverse cast of characters, and a stunning and immersive world. But one of the most memorable aspects of Game of Thrones is its epic and beautiful soundtrack, composed by Ramin Djawadi.

    -




    -

    The soundtrack of Game of Thrones captures the mood, tone, and emotion of each scene, character, and location. It ranges from sweeping orchestral pieces, to haunting vocal performances, to catchy folk songs. The soundtrack has won several awards, including two Emmys, and has inspired many fans and artists to create their own covers and remixes.

    -

    If you are a fan of Game of Thrones and its soundtrack, you might want to download it for free in zip format. A zip file is a common file format that compresses one or more files or folders together into a single archive, which reduces the file size and makes the collection easier to transport or store. By downloading the soundtrack in zip format, you can save storage space, download faster, and access all the files in one place.

    -

    In this article, we will show you how to find, download, and enjoy the Game of Thrones soundtrack for free in zip format. We will also give you some tips and recommendations on how to make the most out of your listening experience.

    -

    How to Find the Game of Thrones Soundtrack Online

    -

    There are many sources online where you can find the Game of Thrones soundtrack. Some are official, meaning they are authorized by HBO or Ramin Djawadi, while others are unofficial, meaning they are created by fans or other parties. Depending on your preferences, budget, and availability, you can choose from different options.

    -

    Official sources

    -

    If you want to support the original creators and get high-quality soundtracks, you can opt for official sources. These include:

    - Official streaming and download platforms such as Spotify, iTunes, and YouTube, which carry the soundtrack albums with permission from HBO and Ramin Djawadi.
    - Physical releases of the soundtrack albums, such as CDs and vinyl, sold through official music stores.

    Unofficial sources

    -

    If you want to explore more variety and creativity, you can opt for unofficial sources. These include:

    - Fan-made covers and remixes of the soundtrack, shared on video platforms and music communities.
    - Re-uploads of the soundtrack in zip format on file-sharing sites and forums, which vary widely in quality and safety.

    How to Download the Game of Thrones Soundtrack in Zip Format

    -

    Once you have found a source that offers the Game of Thrones soundtrack in zip format, you need to download it to your device. To do this, you need to have some requirements and follow some steps.

    -

    Requirements

    -

    To download and unzip the Game of Thrones soundtrack in zip format, you need to have:

    -

    - A device with enough free storage space for both the zip file and the extracted soundtrack files.
    - A stable internet connection for the download.
    - A software or tool that can unzip files, such as WinRAR or 7-Zip.
    - A music player or app that can play the extracted soundtrack files.

    - -

    Steps

    -

    To download and unzip the Game of Thrones soundtrack in zip format, you need to follow these steps:

    -
      -
  1. Choose a reliable and safe source for downloading. You need to make sure that the source you choose is trustworthy and secure, especially if you are using unofficial sources. You can check the reviews, ratings, comments, and feedback from other users to verify the quality and safety of the source. You can also use a VPN and antivirus software to protect your device from malware, viruses, and scams.
  2. Download the zip file to your device. You need to click on the download link or button on the source website or platform, and choose a location on your device where you want to save the zip file. You might need to wait for some time depending on your internet speed and the file size.
  3. Unzip the zip file and access the soundtrack files. You need to open the zip file with your software or tool that can unzip files, and extract the files to a folder on your device. You might need to enter a password if the zip file is encrypted. Once you have extracted the files, you can access them with your music player or app. (A scripted version of this step is sketched below.)
    -
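To make step 3 concrete, here is a minimal Python sketch that lists and extracts a zip archive with the standard-library zipfile module; the file and folder names are placeholders, and a password-protected archive would additionally need pwd= passed to extractall:

```python
import zipfile
from pathlib import Path

def unzip_soundtrack(zip_path: str, dest_dir: str = "soundtrack") -> None:
    """List the tracks inside a zip archive, then extract them all."""
    dest = Path(dest_dir)
    dest.mkdir(exist_ok=True)
    with zipfile.ZipFile(zip_path) as archive:
        for name in archive.namelist():
            print(name)           # show each file stored in the archive
        archive.extractall(dest)  # unpack everything into the folder

if __name__ == "__main__":
    unzip_soundtrack("got_soundtrack.zip")  # placeholder file name
```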

    How to Enjoy the Game of Thrones Soundtrack

    -

    Now that you have downloaded and unzipped the Game of Thrones soundtrack in zip format, you can enjoy it anytime and anywhere. Here are some tips and recommendations on how to make the most out of your listening experience.

    -

    Tips and tricks

    -

    To enhance your enjoyment of the Game of Thrones soundtrack, you can try these tips and tricks:

    - Listen with good headphones or speakers to pick up the details of the orchestral and vocal arrangements.
    - Organize the extracted files into playlists by season or by mood, so you can match the music to the moment.
    - Adjust your player's equalizer settings to suit the mix of sweeping orchestral pieces and quieter vocal tracks.

    Recommendations

    -

    To appreciate the beauty and diversity of the Game of Thrones soundtrack, you can try these recommendations:

    - Start with fan-favorite pieces such as "Main Title", "Light of the Seven", "The Rains of Castamere", "The Night King", and "The Iron Throne".
    - Compare how recurring themes evolve from season to season as the characters and the story develop.

    Conclusion

    -

    The soundtrack of Game of Thrones is one of the best aspects of the show. It is a masterpiece of music that captures the essence and spirit of the story, the characters, and the world. By downloading it for free in zip format, you can enjoy it anytime and anywhere, without any hassle or cost.

    -

    We hope this article has helped you learn how to find, download, and enjoy the Game of Thrones soundtrack in zip format. If you have any questions, comments, or suggestions, please feel free to share them with us below. And don't forget to share this article with your friends and fellow fans!

    -

    FAQs

    -

    Here are some frequently asked questions about downloading the Game of Thrones soundtrack in zip format:

    -
      -
    1. Is it legal to download the Game of Thrones soundtrack in zip format?

    It depends on the source and the country you are in. Generally speaking, it is legal to download the soundtrack from official sources that have permission from HBO or Ramin Djawadi. However, it is illegal to download the soundtrack from unofficial sources that do not have permission or license from HBO or Ramin Djawadi. It is also illegal to distribute or sell the downloaded soundtrack without permission or license from HBO or Ramin Djawadi.

    2. Is it safe to download the Game of Thrones soundtrack in zip format?

    It depends on the source and the software or tool you use. Generally speaking, it is safe to download the soundtrack from official sources that have security measures and encryption protocols. However, it is unsafe to download the soundtrack from unofficial sources that may contain malware, viruses, or scams. It is also unsafe to use software or tools that may harm your device or compromise your privacy.

    3. What is the best quality for downloading the Game of Thrones soundtrack in zip format?

    It depends on your preferences and device capabilities. Generally speaking, higher quality means a larger file size but better sound clarity and fidelity, while lower quality means a smaller file size but worse sound clarity and fidelity. The most common quality formats for downloading music are MP3 (low to medium quality), AAC (medium quality), FLAC (high quality), and WAV (very high quality).

    4. How long does it take to download the Game of Thrones soundtrack in zip format?

    It depends on your internet speed and the file size: a faster connection means a shorter download time, and a larger file means a longer one. The average internet speed in the US is about 50 Mbps; at that speed, a 500 MB zip file (500 × 8 = 4,000 megabits) takes roughly 4,000 ÷ 50 = 80 seconds to download.

    5. How can I play the Game of Thrones soundtrack in zip format on my device?

    You need to unzip the zip file and access the soundtrack files with your music player or app. You can use the software or tool that you used to unzip the file, or you can use another software or tool that can play music files. Some common examples are Windows Media Player, iTunes, VLC, and Google Play Music. You can also transfer the soundtrack files to your smartphone, tablet, or other devices that can play music.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/FS 14 Mod APK 2021 Everything You Need to Know About the Latest Version.md b/spaces/1phancelerku/anime-remove-background/FS 14 Mod APK 2021 Everything You Need to Know About the Latest Version.md deleted file mode 100644 index 822a99fb95ccead605eeb4c6059d768af3a96bc0..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/FS 14 Mod APK 2021 Everything You Need to Know About the Latest Version.md +++ /dev/null @@ -1,82 +0,0 @@ - -

    FS 14 Mod APK 2021: A Farming Simulator Game for Android

    -

    If you love farming and want to experience the life of a farmer, then you should try FS 14 Mod APK 2021. This is a modified version of the popular Farming Simulator 14 game that allows you to enjoy unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode. In this article, we will tell you what FS 14 Mod APK 2021 is, what its features are, how to download and install it, and what its pros and cons are.

    -

    What is FS 14 Mod APK 2021?

    -

    FS 14 Mod APK 2021 is a farming simulation game for Android devices that lets you step into the shoes of a farmer and take on the challenge of managing your own farm. You can grow crops, raise animals, sell your products, and run your farming business. You can also use various vehicles and machines to make your work easier and faster.

    -




    -

    FS 14 Mod APK 2021 is a modified version of the original Farming Simulator 14 game that gives you access to unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode. With unlimited money, you can buy any vehicle, machine, animal, or crop you want without worrying about the cost. With high-quality graphics, you can enjoy the stunning visuals of your farm and its surroundings. With realistic gameplay, you can feel the real physics and mechanics of farming. And with multiplayer mode, you can play with your friends online and share your farm with them.

    -

    Features of FS 14 Mod APK 2021

    -

    Unlimited money

    -

    One of the best features of FS 14 Mod APK 2021 is that it gives you unlimited money to spend on your farm. You can buy any vehicle, machine, animal, or crop you want without worrying about the cost. You can also upgrade your vehicles and machines to make them more efficient and powerful. You can also hire workers to help you with your tasks. With unlimited money, you can make your farm as big and as profitable as you want.

    -

    High-quality graphics

    -

    Another great feature of FS 14 Mod APK 2021 is that it has high-quality graphics that make the game more realistic and immersive. You can enjoy the stunning visuals of your farm and its surroundings, such as the fields, the trees, the sky, the weather, and the animals. You can also see the details of your vehicles and machines, such as their models, colors, textures, and sounds. You can also adjust the graphics settings to suit your device's performance.

    -

    Realistic gameplay

    -

    A third feature of FS 14 Mod APK 2021 is that it has realistic gameplay that makes you feel like a real farmer. You can experience the real physics and mechanics of farming, such as plowing, seeding, harvesting, feeding, milking, selling, and more. You can also interact with your animals and crops, such as petting them, watering them, harvesting them, and more. You can also face different challenges and situations on your farm, such as weather changes, pests, diseases, market fluctuations, and more.

    -

    Multiplayer mode

    -

    A fourth feature of FS 14 Mod APK 2021 is that it has multiplayer mode that lets you play with your friends online and share your farm with them. You can join or create a server and invite your friends to join you. You can also chat with them using voice or text messages. You can also cooperate with them or compete with them on your farming skills. You can also visit their farms and see how they are doing. Multiplayer mode adds more fun and excitement to the game.

    -

    How to download and install FS 14 Mod APK 2021?

    -

    If you want to download and install FS 14 Mod APK 2021 on your Android device, you need to follow these simple steps:

    -


    -

    Step 1: Enable unknown sources

    -

    Before you can install any APK file on your device, you need to enable unknown sources in your security settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on.

    -

    Step 2: Download the APK file

    -

    Next, you need to download the APK file of FS 14 Mod APK 2021 from a reliable source. You can use the link below to download it directly to your device. Alternatively, you can download it to your computer and transfer it to your device via USB cable or Bluetooth.

    -

    Download FS 14 Mod APK 2021 here

    -

    Step 3: Install the APK file

    -

    After you have downloaded the APK file, you need to locate it on your device and tap on it to start the installation process. You may see a warning message asking you to confirm the installation. Just tap on Install and wait for the installation to finish.
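Because modded APK files from unofficial sources are a common malware vector, it is also worth verifying the file's integrity before tapping Install. Here is a minimal Python sketch that computes a SHA-256 checksum to compare against one published by the download source; the file name and expected checksum are placeholders:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 8 KB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    expected = "checksum-published-by-the-download-source"  # placeholder
    actual = sha256_of("fs14-mod.apk")                      # placeholder file name
    print("OK" if actual == expected else "MISMATCH: " + actual)
```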

    -

    Step 4: Enjoy the game

    -

    Once the installation is done, you can launch the game from your app drawer or home screen. You can now enjoy FS 14 Mod APK 2021 with unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode.

    -

    Pros and cons of FS 14 Mod APK 2021

    -

    Like any other game, FS 14 Mod APK 2021 has its pros and cons. Here are some of them:

    -

    Pros

    - Unlimited money to buy and upgrade any vehicle, machine, animal, or crop you want.
    - High-quality graphics and realistic farming gameplay.
    - Multiplayer mode to play with your friends online.
    - Free to download and play.

    Cons

    - It is not an official version of the game, so it does not receive official updates or support.
    - It can be unsafe if downloaded from an unreliable source, as modified APK files may contain malware or viruses.
    - Unlimited money can make the game less challenging over time.

    Conclusion

    -

    In conclusion, FS 14 Mod APK 2021 is a farming simulation game for Android devices that lets you enjoy unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode. It is a modified version of the original Farming Simulator 14 game that gives you access to these features. If you love farming and want to experience the life of a farmer, then you should try FS 14 Mod APK 2021. However, you should also be aware of its pros and cons before downloading and installing it on your device.

    -

    We hope this article has helped you learn more about FS 14 Mod APK 2021. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading!

    -

    Frequently Asked Questions

    -

    Here are some of the most common questions that people ask about FS 14 Mod APK 2021:

    -
      -
    1. What is the difference between FS 14 Mod APK 2021 and Farming Simulator 14?

    The main difference between FS 14 Mod APK 2021 and Farming Simulator 14 is that FS 14 Mod APK 2021 is a modified version of the original game that gives you access to unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode. Farming Simulator 14 is the original game that does not have these features.

    2. Is FS 14 Mod APK 2021 safe to download and install?

    FS 14 Mod APK 2021 is generally safe to download and install as long as you get it from a reliable source. However, you should always be careful when downloading and installing any APK file on your device, as it may contain malware or viruses that can harm your device or steal your data.

      -
      -
      \ No newline at end of file diff --git a/spaces/2ndelement/voicevox/test/test_preset.py b/spaces/2ndelement/voicevox/test/test_preset.py deleted file mode 100644 index 3a162829c18798a704ef86d958efa87dbc1dca25..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/test/test_preset.py +++ /dev/null @@ -1,303 +0,0 @@ -from os import remove -from pathlib import Path -from shutil import copyfile -from tempfile import TemporaryDirectory -from unittest import TestCase - -from voicevox_engine.preset import Preset, PresetError, PresetManager - - -class TestPresetManager(TestCase): - def setUp(self): - self.tmp_dir = TemporaryDirectory() - self.tmp_dir_path = Path(self.tmp_dir.name) - - def tearDown(self): - self.tmp_dir.cleanup() - - def test_validation(self): - preset_manager = PresetManager(preset_path=Path("test/presets-test-1.yaml")) - presets = preset_manager.load_presets() - self.assertFalse(presets is None) - - def test_validation_same(self): - preset_manager = PresetManager(preset_path=Path("test/presets-test-1.yaml")) - presets = preset_manager.load_presets() - presets2 = preset_manager.load_presets() - self.assertFalse(presets is None) - self.assertEqual(presets, presets2) - - def test_validation_2(self): - preset_manager = PresetManager(preset_path=Path("test/presets-test-2.yaml")) - with self.assertRaises(PresetError, msg="プリセットの設定ファイルにミスがあります"): - preset_manager.load_presets() - - def test_preset_id(self): - preset_manager = PresetManager(preset_path=Path("test/presets-test-3.yaml")) - with self.assertRaises(PresetError, msg="プリセットのidに重複があります"): - preset_manager.load_presets() - - def test_empty_file(self): - preset_manager = PresetManager(preset_path=Path("test/presets-test-4.yaml")) - with self.assertRaises(PresetError, msg="プリセットの設定ファイルが空の内容です"): - preset_manager.load_presets() - - def test_not_exist_file(self): - preset_manager = PresetManager(preset_path=Path("test/presets-dummy.yaml")) - with self.assertRaises(PresetError, msg="プリセットの設定ファイルが見つかりません"): - preset_manager.load_presets() - - def test_add_preset(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - preset = Preset( - **{ - "id": 10, - "name": "test10", - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - "style_id": 2, - "speedScale": 1, - "pitchScale": 1, - "intonationScale": 0.5, - "volumeScale": 1, - "prePhonemeLength": 0.1, - "postPhonemeLength": 0.1, - } - ) - id = preset_manager.add_preset(preset) - self.assertEqual(id, 10) - self.assertEqual(len(preset_manager.presets), 3) - for _preset in preset_manager.presets: - if _preset.id == id: - self.assertEqual(_preset, preset) - remove(temp_path) - - def test_add_preset_load_failure(self): - preset_manager = PresetManager(preset_path=Path("test/presets-test-2.yaml")) - with self.assertRaises(PresetError, msg="プリセットの設定ファイルにミスがあります"): - preset_manager.add_preset( - Preset( - **{ - "id": 1, - "name": "", - "speaker_uuid": "", - "style_id": 0, - "speedScale": 0, - "pitchScale": 0, - "intonationScale": 0, - "volumeScale": 0, - "prePhonemeLength": 0, - "postPhonemeLength": 0, - } - ) - ) - - def test_add_preset_conflict_id(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - preset = Preset( - **{ - "id": 2, - "name": "test3", - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - 
"style_id": 2, - "speedScale": 1, - "pitchScale": 1, - "intonationScale": 0.5, - "volumeScale": 1, - "prePhonemeLength": 0.1, - "postPhonemeLength": 0.1, - } - ) - id = preset_manager.add_preset(preset) - self.assertEqual(id, 3) - self.assertEqual(len(preset_manager.presets), 3) - for _preset in preset_manager.presets: - if _preset.id == id: - self.assertEqual(_preset, preset) - remove(temp_path) - - def test_add_preset_conflict_id2(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - preset = Preset( - **{ - "id": -1, - "name": "test3", - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - "style_id": 2, - "speedScale": 1, - "pitchScale": 1, - "intonationScale": 0.5, - "volumeScale": 1, - "prePhonemeLength": 0.1, - "postPhonemeLength": 0.1, - } - ) - id = preset_manager.add_preset(preset) - self.assertEqual(id, 3) - self.assertEqual(len(preset_manager.presets), 3) - for _preset in preset_manager.presets: - if _preset.id == id: - self.assertEqual(_preset, preset) - remove(temp_path) - - def test_add_preset_write_failure(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - preset = Preset( - **{ - "id": 10, - "name": "test10", - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - "style_id": 2, - "speedScale": 1, - "pitchScale": 1, - "intonationScale": 0.5, - "volumeScale": 1, - "prePhonemeLength": 0.1, - "postPhonemeLength": 0.1, - } - ) - preset_manager.load_presets() - preset_manager.load_presets = lambda: [] - preset_manager.preset_path = "" - with self.assertRaises(PresetError, msg="プリセットの設定ファイルに書き込み失敗しました"): - preset_manager.add_preset(preset) - self.assertEqual(len(preset_manager.presets), 2) - remove(temp_path) - - def test_update_preset(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - preset = Preset( - **{ - "id": 1, - "name": "test1 new", - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - "style_id": 2, - "speedScale": 1, - "pitchScale": 1, - "intonationScale": 0.5, - "volumeScale": 1, - "prePhonemeLength": 0.1, - "postPhonemeLength": 0.1, - } - ) - id = preset_manager.update_preset(preset) - self.assertEqual(id, 1) - self.assertEqual(len(preset_manager.presets), 2) - for _preset in preset_manager.presets: - if _preset.id == id: - self.assertEqual(_preset, preset) - remove(temp_path) - - def test_update_preset_load_failure(self): - preset_manager = PresetManager(preset_path=Path("test/presets-test-2.yaml")) - with self.assertRaises(PresetError, msg="プリセットの設定ファイルにミスがあります"): - preset_manager.update_preset( - Preset( - **{ - "id": 1, - "name": "", - "speaker_uuid": "", - "style_id": 0, - "speedScale": 0, - "pitchScale": 0, - "intonationScale": 0, - "volumeScale": 0, - "prePhonemeLength": 0, - "postPhonemeLength": 0, - } - ) - ) - - def test_update_preset_not_found(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - preset = Preset( - **{ - "id": 10, - "name": "test1 new", - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - "style_id": 2, - "speedScale": 1, - "pitchScale": 1, - "intonationScale": 0.5, - "volumeScale": 1, - "prePhonemeLength": 0.1, - 
"postPhonemeLength": 0.1, - } - ) - with self.assertRaises(PresetError, msg="更新先のプリセットが存在しません"): - preset_manager.update_preset(preset) - self.assertEqual(len(preset_manager.presets), 2) - remove(temp_path) - - def test_update_preset_write_failure(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - preset = Preset( - **{ - "id": 1, - "name": "test1 new", - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - "style_id": 2, - "speedScale": 1, - "pitchScale": 1, - "intonationScale": 0.5, - "volumeScale": 1, - "prePhonemeLength": 0.1, - "postPhonemeLength": 0.1, - } - ) - preset_manager.load_presets() - preset_manager.load_presets = lambda: [] - preset_manager.preset_path = "" - with self.assertRaises(PresetError, msg="プリセットの設定ファイルに書き込み失敗しました"): - preset_manager.update_preset(preset) - self.assertEqual(len(preset_manager.presets), 2) - self.assertEqual(preset_manager.presets[0].name, "test") - remove(temp_path) - - def test_delete_preset(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - id = preset_manager.delete_preset(1) - self.assertEqual(id, 1) - self.assertEqual(len(preset_manager.presets), 1) - remove(temp_path) - - def test_delete_preset_load_failure(self): - preset_manager = PresetManager(preset_path=Path("test/presets-test-2.yaml")) - with self.assertRaises(PresetError, msg="プリセットの設定ファイルにミスがあります"): - preset_manager.delete_preset(10) - - def test_delete_preset_not_found(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - with self.assertRaises(PresetError, msg="削除対象のプリセットが存在しません"): - preset_manager.delete_preset(10) - self.assertEqual(len(preset_manager.presets), 2) - remove(temp_path) - - def test_delete_preset_write_failure(self): - temp_path = self.tmp_dir_path / "presets-test-temp.yaml" - copyfile(Path("test/presets-test-1.yaml"), temp_path) - preset_manager = PresetManager(preset_path=temp_path) - preset_manager.load_presets() - preset_manager.load_presets = lambda: [] - preset_manager.preset_path = "" - with self.assertRaises(PresetError, msg="プリセットの設定ファイルに書き込み失敗しました"): - preset_manager.delete_preset(1) - self.assertEqual(len(preset_manager.presets), 2) - remove(temp_path) diff --git a/spaces/2ndelement/voicevox/voicevox_engine/mora_list.py b/spaces/2ndelement/voicevox/voicevox_engine/mora_list.py deleted file mode 100644 index 5a49f4a3a434ef4832355fcc66c5192b1a4b3059..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/voicevox_engine/mora_list.py +++ /dev/null @@ -1,218 +0,0 @@ -""" -以下のモーラ対応表はOpenJTalkのソースコードから取得し、 -カタカナ表記とモーラが一対一対応するように改造した。 -ライセンス表記: ------------------------------------------------------------------ - The Japanese TTS System "Open JTalk" - developed by HTS Working Group - http://open-jtalk.sourceforge.net/ ------------------------------------------------------------------ - - Copyright (c) 2008-2014 Nagoya Institute of Technology - Department of Computer Science - -All rights reserved. 
- -Redistribution and use in source and binary forms, with or -without modification, are permitted provided that the following -conditions are met: - -- Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. -- Redistributions in binary form must reproduce the above - copyright notice, this list of conditions and the following - disclaimer in the documentation and/or other materials provided - with the distribution. -- Neither the name of the HTS working group nor the names of its - contributors may be used to endorse or promote products derived - from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND -CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, -INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF -MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS -BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, -EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED -TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY -OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. -""" -_mora_list_minimum = [ - ["ヴォ", "v", "o"], - ["ヴェ", "v", "e"], - ["ヴィ", "v", "i"], - ["ヴァ", "v", "a"], - ["ヴ", "v", "u"], - ["ン", "", "N"], - ["ワ", "w", "a"], - ["ロ", "r", "o"], - ["レ", "r", "e"], - ["ル", "r", "u"], - ["リョ", "ry", "o"], - ["リュ", "ry", "u"], - ["リャ", "ry", "a"], - ["リェ", "ry", "e"], - ["リ", "r", "i"], - ["ラ", "r", "a"], - ["ヨ", "y", "o"], - ["ユ", "y", "u"], - ["ヤ", "y", "a"], - ["モ", "m", "o"], - ["メ", "m", "e"], - ["ム", "m", "u"], - ["ミョ", "my", "o"], - ["ミュ", "my", "u"], - ["ミャ", "my", "a"], - ["ミェ", "my", "e"], - ["ミ", "m", "i"], - ["マ", "m", "a"], - ["ポ", "p", "o"], - ["ボ", "b", "o"], - ["ホ", "h", "o"], - ["ペ", "p", "e"], - ["ベ", "b", "e"], - ["ヘ", "h", "e"], - ["プ", "p", "u"], - ["ブ", "b", "u"], - ["フォ", "f", "o"], - ["フェ", "f", "e"], - ["フィ", "f", "i"], - ["ファ", "f", "a"], - ["フ", "f", "u"], - ["ピョ", "py", "o"], - ["ピュ", "py", "u"], - ["ピャ", "py", "a"], - ["ピェ", "py", "e"], - ["ピ", "p", "i"], - ["ビョ", "by", "o"], - ["ビュ", "by", "u"], - ["ビャ", "by", "a"], - ["ビェ", "by", "e"], - ["ビ", "b", "i"], - ["ヒョ", "hy", "o"], - ["ヒュ", "hy", "u"], - ["ヒャ", "hy", "a"], - ["ヒェ", "hy", "e"], - ["ヒ", "h", "i"], - ["パ", "p", "a"], - ["バ", "b", "a"], - ["ハ", "h", "a"], - ["ノ", "n", "o"], - ["ネ", "n", "e"], - ["ヌ", "n", "u"], - ["ニョ", "ny", "o"], - ["ニュ", "ny", "u"], - ["ニャ", "ny", "a"], - ["ニェ", "ny", "e"], - ["ニ", "n", "i"], - ["ナ", "n", "a"], - ["ドゥ", "d", "u"], - ["ド", "d", "o"], - ["トゥ", "t", "u"], - ["ト", "t", "o"], - ["デョ", "dy", "o"], - ["デュ", "dy", "u"], - ["デャ", "dy", "a"], - ["デェ", "dy", "e"], - ["ディ", "d", "i"], - ["デ", "d", "e"], - ["テョ", "ty", "o"], - ["テュ", "ty", "u"], - ["テャ", "ty", "a"], - ["ティ", "t", "i"], - ["テ", "t", "e"], - ["ツォ", "ts", "o"], - ["ツェ", "ts", "e"], - ["ツィ", "ts", "i"], - ["ツァ", "ts", "a"], - ["ツ", "ts", "u"], - ["ッ", "", "cl"], - ["チョ", "ch", "o"], - ["チュ", "ch", "u"], - ["チャ", "ch", "a"], - ["チェ", "ch", "e"], - ["チ", "ch", "i"], - ["ダ", "d", "a"], - ["タ", "t", "a"], - ["ゾ", "z", "o"], - ["ソ", "s", "o"], - ["ゼ", "z", "e"], - ["セ", "s", "e"], - ["ズィ", "z", "i"], - ["ズ", "z", "u"], - ["スィ", "s", "i"], - ["ス", "s", 
"u"], - ["ジョ", "j", "o"], - ["ジュ", "j", "u"], - ["ジャ", "j", "a"], - ["ジェ", "j", "e"], - ["ジ", "j", "i"], - ["ショ", "sh", "o"], - ["シュ", "sh", "u"], - ["シャ", "sh", "a"], - ["シェ", "sh", "e"], - ["シ", "sh", "i"], - ["ザ", "z", "a"], - ["サ", "s", "a"], - ["ゴ", "g", "o"], - ["コ", "k", "o"], - ["ゲ", "g", "e"], - ["ケ", "k", "e"], - ["グヮ", "gw", "a"], - ["グ", "g", "u"], - ["クヮ", "kw", "a"], - ["ク", "k", "u"], - ["ギョ", "gy", "o"], - ["ギュ", "gy", "u"], - ["ギャ", "gy", "a"], - ["ギェ", "gy", "e"], - ["ギ", "g", "i"], - ["キョ", "ky", "o"], - ["キュ", "ky", "u"], - ["キャ", "ky", "a"], - ["キェ", "ky", "e"], - ["キ", "k", "i"], - ["ガ", "g", "a"], - ["カ", "k", "a"], - ["オ", "", "o"], - ["エ", "", "e"], - ["ウォ", "w", "o"], - ["ウェ", "w", "e"], - ["ウィ", "w", "i"], - ["ウ", "", "u"], - ["イェ", "y", "e"], - ["イ", "", "i"], - ["ア", "", "a"], -] -_mora_list_additional = [ - ["ヴョ", "by", "o"], - ["ヴュ", "by", "u"], - ["ヴャ", "by", "a"], - ["ヲ", "", "o"], - ["ヱ", "", "e"], - ["ヰ", "", "i"], - ["ヮ", "w", "a"], - ["ョ", "y", "o"], - ["ュ", "y", "u"], - ["ヅ", "z", "u"], - ["ヂ", "j", "i"], - ["ヶ", "k", "e"], - ["ャ", "y", "a"], - ["ォ", "", "o"], - ["ェ", "", "e"], - ["ゥ", "", "u"], - ["ィ", "", "i"], - ["ァ", "", "a"], -] - -openjtalk_mora2text = { - consonant + vowel: text for [text, consonant, vowel] in _mora_list_minimum -} -openjtalk_text2mora = { - text: (consonant, vowel) - for [text, consonant, vowel] in _mora_list_minimum + _mora_list_additional -} diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/version.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/version.py deleted file mode 100644 index 3ced3581bb601ae91b1e1da4b8f4f520855a065e..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.2.1" diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py deleted file mode 100644 index aaac6df39ec06c2d52b2f0cabf967ab447f9b04a..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py +++ /dev/null @@ -1,1262 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" -import os -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops import rearrange, repeat -from contextlib import contextmanager -from functools import partial -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import 
DDIMSampler -from ldm.models.diffusion.ddpm import DDPM, disabled_train -from omegaconf import ListConfig - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -class LatentDiffusion_audio(DDPM): - """main class""" - def __init__(self, - first_stage_config, - cond_stage_config, - num_timesteps_cond=None, - mel_dim=80, - mel_length=848, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - *args, **kwargs): - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__': - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.mel_dim = mel_dim - self.mel_length = mel_length - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. 
/ z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - - @torch.no_grad() - def get_unconditional_conditioning(self, batch_size, null_label=None): - if null_label is not None: - xc = null_label - if isinstance(xc, ListConfig): - xc = list(xc) - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - if hasattr(xc, "to"): - xc = xc.to(self.device) - c = self.get_learned_conditioning(xc) - else: - if self.cond_stage_key in ["class_label", "cls"]: - xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device) - return self.get_learned_conditioning(xc) - else: - 
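- # no learned unconditional embedding is implemented for other conditioning types yet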
raise NotImplementedError("todo") - if isinstance(c, list): # in case the encoder gives us a list - for i in range(len(c)): - c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device) - else: - c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - with min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df),
**fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None): - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None: - if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ['caption', 'coordinates_bbox']: - xc = batch[cond_key] - elif cond_key == 'class_label': - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - # import pudb; pudb.set_trace() - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - # Testing # - if cond_key == 'masked_image': - mask = super().get_input(batch, "mask") - cc = torch.nn.functional.interpolate(mask, size=c.shape[-2:]) # [B, 1, 10, 106] - c = torch.cat((c, cc), dim=1) # [B, 5, 10, 106] - # Testing # - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. 
apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - # same as above but without decorator - def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. 
(128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - df = self.split_input_params["vqf"] - self.split_input_params['original_image_size'] = x.shape[-2:] - bs, nc, h, w = x.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df) - z = unfold(x) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - output_list = [self.first_stage_model.encode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) - o = o * weighting - - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization - return decoded - - else: - return self.first_stage_model.encode(x) - else: - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def test_step(self,batch,batch_idx): - cond = batch[self.cond_stage_key] * self.test_repeat - cond = self.get_learned_conditioning(cond) # c: string -> [B, T, Context_dim] - batch_size = len(cond) - enc_emb = self.sample(cond,batch_size,timesteps=self.test_numsteps)# shape = [batch_size,self.channels,self.mel_dim,self.mel_length] - xrec = self.decode_first_stage(enc_emb) - reconstructions = (xrec + 1)/2 # to mel scale - test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path) - savedir = os.path.join(self.trainer.log_dir,f'output_imgs_{test_ckpt_path}','fake_class') - if not os.path.exists(savedir): - os.makedirs(savedir) - - file_names = batch['f_name'] - nfiles = len(file_names) - reconstructions = reconstructions.cpu().numpy().squeeze(1) # squeeze channel dim - for k in range(reconstructions.shape[0]): - b,repeat = k % nfiles, k // nfiles - vname_num_split_index = file_names[b].rfind('_')# file_names[b]:video_name+'_'+num - v_n,num = file_names[b][:vname_num_split_index],file_names[b][vname_num_split_index+1:] - save_img_path = os.path.join(savedir,f'{v_n}_sample_{num}_{repeat}.npy')# the num_th caption, the repeat_th repetition - np.save(save_img_path,reconstructions[b]) - - return None - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) # c: string -> [B, T, Context_dim] - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset - def rescale_bbox(bbox): - x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) - y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) - w = min(bbox[2] / crop_coordinates[2], 1 - x0) - h = min(bbox[3] / crop_coordinates[3], 1 - y0) - return x0, y0, w, h - - return [rescale_bbox(b) for b in bboxes] - - def apply_model(self, x_noisy, t, cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is expected to be a dict -
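- # keys such as c_concat / c_crossattn are forwarded to the model unchanged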
pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - if hasattr(self, "split_input_params"): - assert len(cond) == 1 # todo can only deal with one conditioning atm - assert not return_ids - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - - h, w = x_noisy.shape[-2:] - - fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride) - - z = unfold(x_noisy) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])] - - if self.cond_stage_key in ["image", "LR_image", "segmentation", - 'bbox_img'] and self.model.conditioning_key: # todo check for completeness - c_key = next(iter(cond.keys())) # get key - c = next(iter(cond.values())) # get value - assert (len(c) == 1) # todo extend to list with more than one elem - c = c[0] # get element - - c = unfold(c) - c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])] - - elif self.cond_stage_key == 'coordinates_bbox': - assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size' - - # assuming padding of unfold is always 0 and its dilation is always 1 - n_patches_per_row = int((w - ks[0]) / stride[0] + 1) - full_img_h, full_img_w = self.split_input_params['original_image_size'] - # as we are operating on latents, we need the factor from the original image size to the - # spatial latent size to properly rescale the crops for regenerating the bbox annotations - num_downs = self.first_stage_model.encoder.num_resolutions - 1 - rescale_latent = 2 ** (num_downs) - - # get top left positions of patches as conforming for the bbox tokenizer, therefore we - # need to rescale the tl patch coordinates to be in between (0,1) - tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w, - rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h) - for patch_nr in range(z.shape[-1])] - - # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w) - patch_limits = [(x_tl, y_tl, - rescale_latent * ks[0] / full_img_w, - rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates] - # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates] - - # tokenize crop coordinates for the bounding boxes of the respective patches - patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device) - for bbox in patch_limits] # list of length l with tensors of shape (1, 2) - print(patch_limits_tknzd[0].shape) - # cut tknzd crop position from conditioning - assert isinstance(cond, dict), 'cond must be dict to be fed into model' - cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device) - print(cut_cond.shape) - - adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd]) - adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n') - print(adapted_cond.shape) - adapted_cond = self.get_learned_conditioning(adapted_cond) - print(adapted_cond.shape) - adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1]) - print(adapted_cond.shape) - - cond_list =
[{'c_crossattn': [e]} for e in adapted_cond] - - else: - cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient - - # apply model by loop over crops - output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])] - assert not isinstance(output_list[0], - tuple) # todo cant deal with multiple model outputs check this never happens - - o = torch.stack(output_list, axis=-1) - o = o * weighting - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - x_recon = fold(o) / normalization - - else: - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise 
NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) - if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, 
return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None,**kwargs): - if shape is None: - shape = (batch_size, self.channels, self.mel_dim, self.mel_length) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs): - - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.mel_dim, self.mel_length) - samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size, - shape,cond,verbose=False,**kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True,**kwargs) - - return samples, intermediates - - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, **kwargs): - - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc 
= self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode") and self.cond_stage_key != "masked_image": - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key == "masked_image": - log["mask"] = c[:, -1, :, :][:, None, :, :] - xc = self.cond_stage_model.decode(c[:, :self.cond_stage_model.embed_dim, :, :]) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption"]: - xc = log_txt_as_img((256, 256), batch["caption"]) - log["conditioning"] = xc - elif self.cond_stage_key == 'class_label': - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with self.ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta, - quantize_denoised=True) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] 
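- # the border (mask == 1) is kept from the encoded input; the zeroed center square is filled in by the sampler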
- with self.ema_scope("Plotting Inpaint"): - - samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask_inpainting"] = mask - - # outpaint - mask = 1 - mask - with self.ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - log["mask_outpainting"] = mask - - if plot_progressive_rows: - with self.ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.mel_dim, self.mel_length), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
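- # rescale the randomly projected channels to [-1, 1] for visualization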
- return x - - -class LatentFinetuneDiffusion(LatentDiffusion_audio): - """ - Basis for different finetunes, such as inpainting or depth2image - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys: tuple, - finetune_keys=("model.diffusion_model.input_blocks.0.0.weight", - "model_ema.diffusion_modelinput_blocks00weight" - ), - keep_finetune_dims=4, - # if model was trained without concat mode before and we would like to keep these channels - c_concat_log_start=None, # to log reconstruction of c_concat codes - c_concat_log_end=None, - *args, **kwargs - ): - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", list()) - super().__init__(*args, **kwargs) - self.finetune_keys = finetune_keys - self.concat_keys = concat_keys - self.keep_dims = keep_finetune_dims - self.c_concat_log_start = c_concat_log_start - self.c_concat_log_end = c_concat_log_end - - if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint' - if exists(ckpt_path): - self.init_from_ckpt(ckpt_path, ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - - # make it explicit, finetune by including extra input channels - if exists(self.finetune_keys) and k in self.finetune_keys: - new_entry = None - for name, param in self.named_parameters(): - if name in self.finetune_keys: - print( - f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only") - new_entry = torch.zeros_like(param) # zero init - assert exists(new_entry), 'did not find matching parameter to modify' - new_entry[:, :self.keep_dims, ...]
= sd[k] - sd[k] = new_entry - - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True) - c_cat, c = c["c_concat"][0], c["c_crossattn"][0] - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"]) - log["conditioning"] = xc - elif self.cond_stage_key == 'class_label': - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if not (self.c_concat_log_start is None and self.c_concat_log_end is None): - log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end]) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with self.ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label) - uc_cat = c_cat - uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]} - with self.ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - 
unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc_full, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - return log - - -class LatentInpaintDiffusion(LatentFinetuneDiffusion): - """ - can either run as pure inpainting model (only concat mode) or with mixed conditionings, - e.g. mask as concat and text via cross-attn. - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys=("mask", "masked_image"), - masked_image_key="masked_image", - *args, **kwargs - ): - super().__init__(concat_keys, *args, **kwargs) - self.masked_image_key = masked_image_key - assert self.masked_image_key in concat_keys - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - c_cat = list() - for ck in self.concat_keys: - if len(batch[ck].shape) == 3: - batch[ck] = batch[ck][..., None] - cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - bchw = z.shape - if ck != self.masked_image_key: - cc = torch.nn.functional.interpolate(cc, size=bchw[-2:]) - else: - cc = self.get_first_stage_encoding(self.encode_first_stage(cc)) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs) - log["masked_image"] = rearrange(args[0]["masked_image"], - 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - return log diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/__init__.py deleted file mode 100644 index aadad97ebc9ec23fdebab974a99e343de90f8afd..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from . import clap -from . import audio -from . import utils \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/__init__.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/__init__.py deleted file mode 100644 index a2318b63198250856809c0cb46210a4147b829bc..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. 
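- # re-exports the low-pass filtering, resampling and activation helpers used for alias-free activations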
- -from .filter import * -from .resample import * -from .act import * \ No newline at end of file diff --git a/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/README.md b/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/README.md deleted file mode 100644 index 339b6a2cacf2f349093d33dc90f04025f4578e49..0000000000000000000000000000000000000000 --- a/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 5 QuantumStreamlitAIDashboard SL -emoji: 📚 -colorFrom: blue -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Abdullahw72/bark-voice-cloning/hubert/hubert_manager.py b/spaces/Abdullahw72/bark-voice-cloning/hubert/hubert_manager.py deleted file mode 100644 index 857f2af29886fca6eb4df506853f446066af7c04..0000000000000000000000000000000000000000 --- a/spaces/Abdullahw72/bark-voice-cloning/hubert/hubert_manager.py +++ /dev/null @@ -1,33 +0,0 @@ -import os.path -import shutil -import urllib.request - -import huggingface_hub - - -class HuBERTManager: - @staticmethod - def make_sure_hubert_installed(download_url: str = 'https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt', file_name: str = 'hubert.pt'): - install_dir = os.path.join('data', 'models', 'hubert') - if not os.path.isdir(install_dir): - os.makedirs(install_dir, exist_ok=True) - install_file = os.path.join(install_dir, file_name) - if not os.path.isfile(install_file): - print('Downloading HuBERT base model') - urllib.request.urlretrieve(download_url, install_file) - print('Downloaded HuBERT') - return install_file - - - @staticmethod - def make_sure_tokenizer_installed(model: str = 'quantifier_hubert_base_ls960_14.pth', repo: str = 'GitMylo/bark-voice-cloning', local_file: str = 'tokenizer.pth'): - install_dir = os.path.join('data', 'models', 'hubert') - if not os.path.isdir(install_dir): - os.makedirs(install_dir, exist_ok=True) - install_file = os.path.join(install_dir, local_file) - if not os.path.isfile(install_file): - print('Downloading HuBERT custom tokenizer') - huggingface_hub.hf_hub_download(repo, model, local_dir=install_dir, local_dir_use_symlinks=False) - shutil.move(os.path.join(install_dir, model), install_file) - print('Downloaded tokenizer') - return install_file diff --git a/spaces/AchyuthGamer/Free-Accounts-Generator/js/d173ouchebag.js b/spaces/AchyuthGamer/Free-Accounts-Generator/js/d173ouchebag.js deleted file mode 100644 index d0dccddd32de1e92320e58c6401d9b95ad7cc525..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/Free-Accounts-Generator/js/d173ouchebag.js +++ /dev/null @@ -1,126 +0,0 @@ -var NumberOfWords = 70; -var words = new BuildArray(NumberOfWords); - -words[1] = "https://cuty.io/lGr08bYZ"; -words[2] = "https://cuty.io/XDhh2Wc"; -words[3] = "https://paste.fo/86ccdf634678"; -words[4] = "https://cuty.io/hoDXeQ"; -words[5] = "https://cuty.io/E1Fxf"; -words[6] = "https://cuty.io/VWr7ZHlT"; -words[7] = "https://cuty.io/on7fj7A4"; -words[8] = "https://cuty.io/6WW3NVQcO3"; -words[9] = "https://cuty.io/CsDFD"; -words[10] = "https://cuty.io/g2X4gi"; -words[11] = "https://cuty.io/gBT8OQ65izDV"; -words[12] = "https://cuty.io/eTrvUFxu"; -words[13] = "https://cuty.io/ybG3zeDBzR"; -words[14] = "https://cuty.io/abeLh0s"; -words[15] = "https://cuty.io/ulup4Lcf2TK"; -words[16] = "https://cuty.io/FRLEzh5cQ6n"; -words[17] = "https://cuty.io/OVw8vLInZB1"; -words[18] = 
"https://cuty.io/BMTXGK"; -words[19] = "https://cuty.io/DyJ597nu"; -words[20] = "https://cuty.io/iIjTxEQ"; -words[21] = "https://cuty.io/XcuNNaRzkSlU"; -words[22] = "https://cuty.io/bl3drKcIC"; -words[23] = "https://cuty.io/qEoVSk4mXW"; -words[24] = "https://cuty.io/7r7Uf7"; -words[25] = "https://cuty.io/CDHgWvu9YJQK"; -words[26] = "https://cuty.io/gBT8OQ65izDV"; -words[27] = "https://cuty.io/EZAdA"; -words[28] = "https://cuty.io/0QB7dK6CFZzD"; -words[29] = "https://cuty.io/HFWgHl13"; -words[30] = "https://cuty.io/FgRvVvR39W8"; -words[31] = "https://cuty.io/wrhTqogK"; -words[32] = "https://cuty.io/ja14WYP"; -words[33] = "https://cuty.io/c82NDl7"; -words[34] = "https://cuty.io/Lbc9"; -words[35] = "https://cuty.io/c82NDl7"; -words[36] = "https://cuty.io/GWJWHKNr"; -words[37] = "https://cuty.io/WWFnoKEFK"; -words[38] = "https://cuty.io/AJfqsQ"; -words[39] = "https://cuty.io/6vG5ZrSRj"; -words[40] = "https://cuty.io/9a58b"; -words[41] = "https://cuty.io/2xdqfIV1I"; -words[42] = "https://cuty.io/1wOL4ot"; -words[43] = "https://cuty.io/VqhEJXmt8l"; -words[44] = "https://cuty.io/18olD1"; -words[45] = "https://cuty.io/PZbp9g"; -words[46] = "https://cuty.io/cAzSIvt"; -words[47] = "https://cuty.io/6r9O3wCTrJyj"; -words[48] = "https://cuty.io/8IuhK0AQGnFq"; -words[49] = "https://cuty.io/wX0fxCJ"; -words[50] = "https://cuty.io/bbJB2Ur"; -words[51] = "https://cuty.io/G47WR"; -words[52] = "https://cuty.io/StzRBrb"; -words[53] = "https://cuty.io/63gzehv297E"; -words[54] = "https://cuty.io/HTXo"; -words[55] = "https://cuty.io/pwxPR"; -words[56] = "https://cuty.io/gPNQODT6w"; -words[57] = "https://cuty.io/FgiePQ"; -words[58] = "https://cuty.io/XtTXmu"; -words[59] = "https://cuty.io/QblM1FsmKO"; -words[60] = "https://cuty.io/pszHV"; -words[61] = "https://cuty.io/0sZRO"; -words[62] = "https://cuty.io/FgHPEnnFv"; -words[63] = "https://cuty.io/P59l3Nil3MUS"; -words[64] = "https://cuty.io/O1hK"; -words[65] = "https://cuty.io/4VyT2IvH"; -words[66] = "https://cuty.io/lSaRS19"; -words[67] = "https://cuty.io/z8VTwea"; -words[68] = "https://cuty.io/UapBE"; -words[69] = "https://cuty.io/vDzDerW9"; -words[70] = "https://cuty.io/Mgz9"; -words[71] = "https://cuty.io/kylJsPTjv"; -words[72] = "https://cuty.io/zgJHnFFoS"; -words[73] = ""; -words[74] = ""; -words[75] = ""; -words[76] = ""; -words[77] = ""; -words[78] = ""; -words[79] = ""; -words[80] = "https://cuty.io/8goK49PVX"; -words[81] = ""; -words[82] = "https://cuty.io/q8GEByLks"; -words[83] = ""; -words[84] = ""; -words[85] = "https://cuty.io/d5T06FdVy"; -words[86] = ""; -words[87] = ""; -words[88] = ""; -words[89] = "https://cuty.io/6ra2CHs"; -words[90] = ""; -words[91] = ""; -words[92] = ""; -words[93] = ""; -words[94] = ""; -words[95] = ""; -words[96] = ""; -words[97] = ""; -words[98] = ""; -words[99] = ""; -words[100] = ""; - -function BuildArray(size) { - this.length = size; - for (var i = 1; i <= size; i++) { - this[i] = null; - } - return this; -} - -function PickRandomWord(frm) { - // Generate a random number between 1 and NumberOfWords - var rnd = Math.ceil(Math.random() * NumberOfWords); - - // Display the word inside the text box - frm.WordBox.value = words[rnd]; -} - -function OpenGeneratedLink() { - var generatedLink = document.forms["yourFormName"]["WordBox"].value; - if (generatedLink) { - window.open(generatedLink, '_blank'); - } -} diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/PerplexityAi.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/PerplexityAi.py deleted file mode 100644 index 
f4f7171219664c50e0c90e214276c9b226c16d17..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/PerplexityAi.py +++ /dev/null @@ -1,97 +0,0 @@ -from __future__ import annotations - -import json -import time -import base64 -from curl_cffi.requests import AsyncSession - -from ..base_provider import AsyncProvider, format_prompt, get_cookies - - -class PerplexityAi(AsyncProvider): - url = "https://www.perplexity.ai" - working = False - supports_gpt_35_turbo = True - _sources = [] - - @classmethod - async def create_async( - cls, - model: str, - messages: list[dict[str, str]], - proxy: str = None, - **kwargs - ) -> str: - url = cls.url + "/socket.io/?EIO=4&transport=polling" - headers = { - "Referer": f"{cls.url}/" - } - async with AsyncSession(headers=headers, proxies={"https": proxy}, impersonate="chrome107") as session: - url_session = "https://www.perplexity.ai/api/auth/session" - response = await session.get(url_session) - response.raise_for_status() - - response = await session.get(url, params={"t": timestamp()}) - response.raise_for_status() - sid = json.loads(response.text[1:])["sid"] - - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - - data = '40{"jwt":"anonymous-ask-user"}' - response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data) - response.raise_for_status() - - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - - data = "424" + json.dumps([ - "perplexity_ask", - format_prompt(messages), - { - "version":"2.1", - "source":"default", - "language":"en", - "timezone": time.tzname[0], - "search_focus":"internet", - "mode":"concise" - } - ]) - response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data) - response.raise_for_status() - - while True: - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - for line in response.text.splitlines(): - if line.startswith("434"): - result = json.loads(json.loads(line[3:])[0]["text"]) - - cls._sources = [{ - "title": source["name"], - "url": source["url"], - "snippet": source["snippet"] - } for source in result["web_results"]] - - return result["answer"] - - @classmethod - def get_sources(cls): - return cls._sources - - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("proxy", "str"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" - - -def timestamp() -> str: - return base64.urlsafe_b64encode(int(time.time()-1407782612).to_bytes(4, 'big')).decode() \ No newline at end of file diff --git a/spaces/Adapter/CoAdapter/ldm/modules/encoders/adapter.py b/spaces/Adapter/CoAdapter/ldm/modules/encoders/adapter.py deleted file mode 100644 index 702d4706649695532dde6a2c9a22a01c9d28ca80..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/encoders/adapter.py +++ /dev/null @@ -1,339 +0,0 @@ -import torch -import torch.nn as nn -from collections import OrderedDict -from ldm.modules.extra_condition.api import ExtraCondition -from ldm.modules.diffusionmodules.util import zero_module - - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module.
- """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd( - dims, self.channels, self.out_channels, 3, stride=stride, padding=padding - ) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResnetBlock(nn.Module): - def __init__(self, in_c, out_c, down, ksize=3, sk=False, use_conv=True): - super().__init__() - ps = ksize // 2 - if in_c != out_c or sk == False: - self.in_conv = nn.Conv2d(in_c, out_c, ksize, 1, ps) - else: - # print('n_in') - self.in_conv = None - self.block1 = nn.Conv2d(out_c, out_c, 3, 1, 1) - self.act = nn.ReLU() - self.block2 = nn.Conv2d(out_c, out_c, ksize, 1, ps) - if sk == False: - self.skep = nn.Conv2d(in_c, out_c, ksize, 1, ps) - else: - self.skep = None - - self.down = down - if self.down == True: - self.down_opt = Downsample(in_c, use_conv=use_conv) - - def forward(self, x): - if self.down == True: - x = self.down_opt(x) - if self.in_conv is not None: # edit - x = self.in_conv(x) - - h = self.block1(x) - h = self.act(h) - h = self.block2(h) - if self.skep is not None: - return h + self.skep(x) - else: - return h + x - - -class Adapter(nn.Module): - def __init__(self, channels=[320, 640, 1280, 1280], nums_rb=3, cin=64, ksize=3, sk=False, use_conv=True): - super(Adapter, self).__init__() - self.unshuffle = nn.PixelUnshuffle(8) - self.channels = channels - self.nums_rb = nums_rb - self.body = [] - for i in range(len(channels)): - for j in range(nums_rb): - if (i != 0) and (j == 0): - self.body.append( - ResnetBlock(channels[i - 1], channels[i], down=True, ksize=ksize, sk=sk, use_conv=use_conv)) - else: - self.body.append( - ResnetBlock(channels[i], channels[i], down=False, ksize=ksize, sk=sk, use_conv=use_conv)) - self.body = nn.ModuleList(self.body) - self.conv_in = nn.Conv2d(cin, channels[0], 3, 1, 1) - - def forward(self, x): - # unshuffle - x = self.unshuffle(x) - # extract features - features = [] - x = self.conv_in(x) - for i in range(len(self.channels)): - for j in range(self.nums_rb): - idx = i * self.nums_rb + j - x = self.body[idx](x) - features.append(x) - - return features - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return 
ret.type(orig_type) - - -class QuickGELU(nn.Module): - - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential( - OrderedDict([("c_fc", nn.Linear(d_model, d_model * 4)), ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model))])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class StyleAdapter(nn.Module): - - def __init__(self, width=1024, context_dim=768, num_head=8, n_layes=3, num_token=4): - super().__init__() - - scale = width ** -0.5 - self.transformer_layes = nn.Sequential(*[ResidualAttentionBlock(width, num_head) for _ in range(n_layes)]) - self.num_token = num_token - self.style_embedding = nn.Parameter(torch.randn(1, num_token, width) * scale) - self.ln_post = LayerNorm(width) - self.ln_pre = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, context_dim)) - - def forward(self, x): - # x shape [N, HW+1, C] - style_embedding = self.style_embedding + torch.zeros( - (x.shape[0], self.num_token, self.style_embedding.shape[-1]), device=x.device) - x = torch.cat([x, style_embedding], dim=1) - x = self.ln_pre(x) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer_layes(x) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, -self.num_token:, :]) - x = x @ self.proj - - return x - - -class ResnetBlock_light(nn.Module): - def __init__(self, in_c): - super().__init__() - self.block1 = nn.Conv2d(in_c, in_c, 3, 1, 1) - self.act = nn.ReLU() - self.block2 = nn.Conv2d(in_c, in_c, 3, 1, 1) - - def forward(self, x): - h = self.block1(x) - h = self.act(h) - h = self.block2(h) - - return h + x - - -class extractor(nn.Module): - def __init__(self, in_c, inter_c, out_c, nums_rb, down=False): - super().__init__() - self.in_conv = nn.Conv2d(in_c, inter_c, 1, 1, 0) - self.body = [] - for _ in range(nums_rb): - self.body.append(ResnetBlock_light(inter_c)) - self.body = nn.Sequential(*self.body) - self.out_conv = nn.Conv2d(inter_c, out_c, 1, 1, 0) - self.down = down - if self.down == True: - self.down_opt = Downsample(in_c, use_conv=False) - - def forward(self, x): - if self.down == True: - x = self.down_opt(x) - x = self.in_conv(x) - x = self.body(x) - x = self.out_conv(x) - - return x - - -class Adapter_light(nn.Module): - def __init__(self, channels=[320, 640, 1280, 1280], nums_rb=3, cin=64): - super(Adapter_light, self).__init__() - self.unshuffle = nn.PixelUnshuffle(8) - self.channels = channels - self.nums_rb = nums_rb - self.body = [] - for i in range(len(channels)): - if i == 0: - self.body.append(extractor(in_c=cin, inter_c=channels[i]//4, out_c=channels[i], nums_rb=nums_rb, down=False)) - else: - self.body.append(extractor(in_c=channels[i-1], inter_c=channels[i]//4, out_c=channels[i], nums_rb=nums_rb, down=True)) - self.body = nn.ModuleList(self.body) - - def forward(self, x): - # unshuffle - x = self.unshuffle(x) - # extract features - features = [] - for i in range(len(self.channels)): - x = 
self.body[i](x) - features.append(x) - - return features - - -class CoAdapterFuser(nn.Module): - def __init__(self, unet_channels=[320, 640, 1280, 1280], width=768, num_head=8, n_layes=3): - super(CoAdapterFuser, self).__init__() - scale = width ** 0.5 - # 16, maybe large enough for the number of adapters? - self.task_embedding = nn.Parameter(scale * torch.randn(16, width)) - self.positional_embedding = nn.Parameter(scale * torch.randn(len(unet_channels), width)) - self.spatial_feat_mapping = nn.ModuleList() - for ch in unet_channels: - self.spatial_feat_mapping.append(nn.Sequential( - nn.SiLU(), - nn.Linear(ch, width), - )) - self.transformer_layes = nn.Sequential(*[ResidualAttentionBlock(width, num_head) for _ in range(n_layes)]) - self.ln_post = LayerNorm(width) - self.ln_pre = LayerNorm(width) - self.spatial_ch_projs = nn.ModuleList() - for ch in unet_channels: - self.spatial_ch_projs.append(zero_module(nn.Linear(width, ch))) - self.seq_proj = nn.Parameter(torch.zeros(width, width)) - - def forward(self, features): - if len(features) == 0: - return None, None - inputs = [] - for cond_name in features.keys(): - task_idx = getattr(ExtraCondition, cond_name).value - if not isinstance(features[cond_name], list): - inputs.append(features[cond_name] + self.task_embedding[task_idx]) - continue - - feat_seq = [] - for idx, feature_map in enumerate(features[cond_name]): - feature_vec = torch.mean(feature_map, dim=(2, 3)) - feature_vec = self.spatial_feat_mapping[idx](feature_vec) - feat_seq.append(feature_vec) - feat_seq = torch.stack(feat_seq, dim=1) # Nx4xC - feat_seq = feat_seq + self.task_embedding[task_idx] - feat_seq = feat_seq + self.positional_embedding - inputs.append(feat_seq) - - x = torch.cat(inputs, dim=1) # NxLxC - x = self.ln_pre(x) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer_layes(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_post(x) - - ret_feat_map = None - ret_feat_seq = None - cur_seq_idx = 0 - for cond_name in features.keys(): - if not isinstance(features[cond_name], list): - length = features[cond_name].size(1) - transformed_feature = features[cond_name] * ((x[:, cur_seq_idx:cur_seq_idx+length] @ self.seq_proj) + 1) - if ret_feat_seq is None: - ret_feat_seq = transformed_feature - else: - ret_feat_seq = torch.cat([ret_feat_seq, transformed_feature], dim=1) - cur_seq_idx += length - continue - - length = len(features[cond_name]) - transformed_feature_list = [] - for idx in range(length): - alpha = self.spatial_ch_projs[idx](x[:, cur_seq_idx+idx]) - alpha = alpha.unsqueeze(-1).unsqueeze(-1) + 1 - transformed_feature_list.append(features[cond_name][idx] * alpha) - if ret_feat_map is None: - ret_feat_map = transformed_feature_list - else: - ret_feat_map = list(map(lambda x, y: x + y, ret_feat_map, transformed_feature_list)) - cur_seq_idx += length - - assert cur_seq_idx == x.size(1) - - return ret_feat_map, ret_feat_seq diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/pokemon.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/pokemon.py deleted file mode 100644 index d62b32cf6395e077c0e20d9fb60adf230be30e32..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/pokemon.py +++ /dev/null @@ -1,222 +0,0 @@ -import asyncio -import datetime -import logging -from typing import Any, Dict, List, Optional, Set - -# from agentverse.agents.agent import Agent -from agentverse.agents.simulation_agent.conversation import BaseAgent - -# from 
agentverse.environments.simulation_env.rules.base import Rule -from agentverse.environments.simulation_env.rules.base import SimulationRule as Rule -from agentverse.message import Message - -from .. import env_registry as EnvironmentRegistry -from ..base import BaseEnvironment - - -@EnvironmentRegistry.register("pokemon") -class PokemonEnvironment(BaseEnvironment): - """ - An environment for Pokémon demo. - - Args: - agents: List of agents - locations: A dict of locations to agents within them - rule: Rule for the environment - max_turns: Maximum number of turns - cnt_turn: Current turn number - last_messages: Messages from last turn - rule_params: Variables set by the rule - """ - - agents: List[BaseAgent] - locations_to_agents: Dict[str, Set[str]] - # locations_descriptions: Dict[str, str] - time: datetime.datetime = datetime.datetime(2021, 1, 1, 8, 0, 0) - rule: Rule - max_turns: int = 10 - cnt_turn: int = 0 - last_messages: List[Message] = [] - rule_params: Dict = {} - - def __init__(self, rule, locations, **kwargs): - rule_config = rule - order_config = rule_config.get("order", {"type": "sequential"}) - visibility_config = rule_config.get("visibility", {"type": "all"}) - selector_config = rule_config.get("selector", {"type": "basic"}) - updater_config = rule_config.get("updater", {"type": "basic"}) - describer_config = rule_config.get("describer", {"type": "basic"}) - rule = Rule( - order_config, - visibility_config, - selector_config, - updater_config, - describer_config, - ) - locations_to_agents = {} - # locations_descriptions = {} - locations_config = locations - for loc in locations_config: - locations_to_agents[loc["name"]] = set(loc["init_agents"]) - # locations_descriptions[loc["name"]] = loc["description"] - super().__init__( - rule=rule, - locations_to_agents=locations_to_agents, - # locations_descriptions=locations_descriptions, - **kwargs, - ) - - async def step( - self, - is_player: bool = False, - player_content: str = None, - receiver: str = None, - receiver_id: Optional[int] = None, - agent_ids: Optional[List[int]] = None, - ) -> List[Message]: - """Run one step of the environment""" - - # Get the next agent index - # time.sleep(8) - # return [Message(content="Test", sender="May", receiver=["May"])] - if is_player: - return await self._respond_to_player(player_content, receiver, receiver_id) - else: - return await self._routine_step(agent_ids) - - async def _routine_step(self, agent_ids) -> List[Message]: - self.rule.update_visible_agents(self) - - # agent_ids = self.rule.get_next_agent_idx(self) - - # Generate current environment description - env_descriptions = self.rule.get_env_description(self) - - # Generate the next message - messages = await asyncio.gather( - *[self.agents[i].astep(env_descriptions[i]) for i in agent_ids] - ) - # messages = self.get_test_messages() - - # Some rules will select certain messages from all the messages - selected_messages = self.rule.select_message(self, messages) - - # Update the memory of the agents - self.last_messages = selected_messages - self.rule.update_memory(self) - self.print_messages(selected_messages) - - self.cnt_turn += 1 - self.time += datetime.timedelta(minutes=5) - - return selected_messages - - async def _respond_to_player( - self, - player_content: str = None, - receiver: str = None, - receiver_id: Optional[int] = None, - ) -> List[Message]: - if receiver_id is None: - for agent in self.agents: - if agent.name == receiver: - receiver_id = agent.agent_id - break - agent_ids = [receiver_id] - agent_name = receiver - 
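-        # Wrap the raw player text in a Message from the hard-coded trainer
-        # "Brenden", addressed only to the agent resolved above.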
player_message = Message( - sender="Brenden", content=player_content, receiver=[agent_name] - ) - - # Update the set of visible agents for each agent - self.rule.update_visible_agents(self) - - # Generate current environment description - env_descriptions = self.rule.get_env_description(self, player_content) - - # Generate the next message - messages = await asyncio.gather( - *[self.agents[i].astep(env_descriptions[i]) for i in agent_ids] - ) - - # Some rules will select certain messages from all the messages - # selected_messages = self.rule.select_message(self, messages) - - # Update the memory of the agents - self.last_messages = [player_message, *messages] - self.rule.update_memory(self) - self.print_messages(messages) - - self.cnt_turn += 1 - - return messages - - def update_state(self, agent_location: Dict[str, str]): - for agent_name, location in agent_location.items(): - # original_location = self.get_agent_to_location()[agent_name] - # self.locations_to_agents[original_location].remove(agent_name) - self.locations_to_agents[location].add(agent_name) - - def get_agent_to_location(self) -> Dict[str, str]: - ret = {} - for location, agent_names in self.locations_to_agents.items(): - for agent in agent_names: - ret[agent] = location - return ret - - def print_messages(self, messages: List[Message]) -> None: - for message in messages: - if message is not None: - logging.info(f"{message.sender}: {message.content}") - - def reset(self) -> None: - """Reset the environment""" - self.cnt_turn = 0 - self.rule.reset() - for agent in self.agents: - agent.reset() - - def is_done(self) -> bool: - """Check if the environment is done""" - return self.cnt_turn >= self.max_turns - - def get_test_messages(self) -> List[Message]: - messages = [ - Message( - content='{"to": "Birch", "action": "Speak", "text": "Hi!!!"}', - sender="May", - receiver={"May", "Birch"}, - tool_response=[], - ), - Message( - content='{"to": "May", "text": "Good morning, May! 
How is your research going?", "action": "Speak"}', - sender="Birch", - receiver={"May", "Birch"}, - tool_response=[], - ), - Message( - content='{"to": "Pokémon Center", "action": "MoveTo"}', - sender="Steven", - receiver={"Steven"}, - tool_response=[], - ), - Message( - content='{"to": "Shop", "last_time": "10 minutes", "action": "MoveTo"}', - sender="Maxie", - receiver={"Maxie"}, - tool_response=[], - ), - Message( - content='{"to": "Pok\\u00e9mon Center", "action": "MoveTo"}', - sender="Archie", - receiver={"Archie"}, - tool_response=[], - ), - Message( - content='{"to": "Shop", "action": "MoveTo"}', - sender="Joseph", - receiver={"Joseph"}, - tool_response=[], - ), - ] - return messages diff --git a/spaces/AlexWang/lama/saicinpainting/evaluation/masks/mask.py b/spaces/AlexWang/lama/saicinpainting/evaluation/masks/mask.py deleted file mode 100644 index 3e34d0675a781fba983cb542f18390255aaf2609..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/evaluation/masks/mask.py +++ /dev/null @@ -1,429 +0,0 @@ -import enum -from copy import deepcopy - -import numpy as np -from skimage import img_as_ubyte -from skimage.transform import rescale, resize -try: - from detectron2 import model_zoo - from detectron2.config import get_cfg - from detectron2.engine import DefaultPredictor - DETECTRON_INSTALLED = True -except: - print("Detectron v2 is not installed") - DETECTRON_INSTALLED = False - -from .countless.countless2d import zero_corrected_countless - - -class ObjectMask(): - def __init__(self, mask): - self.height, self.width = mask.shape - (self.up, self.down), (self.left, self.right) = self._get_limits(mask) - self.mask = mask[self.up:self.down, self.left:self.right].copy() - - @staticmethod - def _get_limits(mask): - def indicator_limits(indicator): - lower = indicator.argmax() - upper = len(indicator) - indicator[::-1].argmax() - return lower, upper - - vertical_indicator = mask.any(axis=1) - vertical_limits = indicator_limits(vertical_indicator) - - horizontal_indicator = mask.any(axis=0) - horizontal_limits = indicator_limits(horizontal_indicator) - - return vertical_limits, horizontal_limits - - def _clean(self): - self.up, self.down, self.left, self.right = 0, 0, 0, 0 - self.mask = np.empty((0, 0)) - - def horizontal_flip(self, inplace=False): - if not inplace: - flipped = deepcopy(self) - return flipped.horizontal_flip(inplace=True) - - self.mask = self.mask[:, ::-1] - return self - - def vertical_flip(self, inplace=False): - if not inplace: - flipped = deepcopy(self) - return flipped.vertical_flip(inplace=True) - - self.mask = self.mask[::-1, :] - return self - - def image_center(self): - y_center = self.up + (self.down - self.up) / 2 - x_center = self.left + (self.right - self.left) / 2 - return y_center, x_center - - def rescale(self, scaling_factor, inplace=False): - if not inplace: - scaled = deepcopy(self) - return scaled.rescale(scaling_factor, inplace=True) - - scaled_mask = rescale(self.mask.astype(float), scaling_factor, order=0) > 0.5 - (up, down), (left, right) = self._get_limits(scaled_mask) - self.mask = scaled_mask[up:down, left:right] - - y_center, x_center = self.image_center() - mask_height, mask_width = self.mask.shape - self.up = int(round(y_center - mask_height / 2)) - self.down = self.up + mask_height - self.left = int(round(x_center - mask_width / 2)) - self.right = self.left + mask_width - return self - - def crop_to_canvas(self, vertical=True, horizontal=True, inplace=False): - if not inplace: - cropped = deepcopy(self) - 
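-            # Work on a deep copy and delegate to the in-place path, so the
-            # original mask is left untouched.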
cropped.crop_to_canvas(vertical=vertical, horizontal=horizontal, inplace=True) - return cropped - - if vertical: - if self.up >= self.height or self.down <= 0: - self._clean() - else: - cut_up, cut_down = max(-self.up, 0), max(self.down - self.height, 0) - if cut_up != 0: - self.mask = self.mask[cut_up:] - self.up = 0 - if cut_down != 0: - self.mask = self.mask[:-cut_down] - self.down = self.height - - if horizontal: - if self.left >= self.width or self.right <= 0: - self._clean() - else: - cut_left, cut_right = max(-self.left, 0), max(self.right - self.width, 0) - if cut_left != 0: - self.mask = self.mask[:, cut_left:] - self.left = 0 - if cut_right != 0: - self.mask = self.mask[:, :-cut_right] - self.right = self.width - - return self - - def restore_full_mask(self, allow_crop=False): - cropped = self.crop_to_canvas(inplace=allow_crop) - mask = np.zeros((cropped.height, cropped.width), dtype=bool) - mask[cropped.up:cropped.down, cropped.left:cropped.right] = cropped.mask - return mask - - def shift(self, vertical=0, horizontal=0, inplace=False): - if not inplace: - shifted = deepcopy(self) - return shifted.shift(vertical=vertical, horizontal=horizontal, inplace=True) - - self.up += vertical - self.down += vertical - self.left += horizontal - self.right += horizontal - return self - - def area(self): - return self.mask.sum() - - -class RigidnessMode(enum.Enum): - soft = 0 - rigid = 1 - - -class SegmentationMask: - def __init__(self, confidence_threshold=0.5, rigidness_mode=RigidnessMode.rigid, - max_object_area=0.3, min_mask_area=0.02, downsample_levels=6, num_variants_per_mask=4, - max_mask_intersection=0.5, max_foreground_coverage=0.5, max_foreground_intersection=0.5, - max_hidden_area=0.2, max_scale_change=0.25, horizontal_flip=True, - max_vertical_shift=0.1, position_shuffle=True): - """ - :param confidence_threshold: float; threshold for confidence of the panoptic segmentator to allow for - the instance. - :param rigidness_mode: RigidnessMode object - when soft, checks intersection only with the object from which the mask_object was produced - when rigid, checks intersection with any foreground class object - :param max_object_area: float; allowed upper bound for to be considered as mask_object. 
- :param min_mask_area: float; lower bound for mask to be considered valid - :param downsample_levels: int; defines width of the resized segmentation to obtain shifted masks; - :param num_variants_per_mask: int; maximal number of the masks for the same object; - :param max_mask_intersection: float; maximum allowed area fraction of intersection for 2 masks - produced by horizontal shift of the same mask_object; higher value -> more diversity - :param max_foreground_coverage: float; maximum allowed area fraction of intersection for foreground object to be - covered by mask; lower value -> less the objects are covered - :param max_foreground_intersection: float; maximum allowed area of intersection for the mask with foreground - object; lower value -> mask is more on the background than on the objects - :param max_hidden_area: upper bound on part of the object hidden by shifting object outside the screen area; - :param max_scale_change: allowed scale change for the mask_object; - :param horizontal_flip: if horizontal flips are allowed; - :param max_vertical_shift: amount of vertical movement allowed; - :param position_shuffle: shuffle - """ - - assert DETECTRON_INSTALLED, 'Cannot use SegmentationMask without detectron2' - self.cfg = get_cfg() - self.cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")) - self.cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml") - self.cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = confidence_threshold - self.predictor = DefaultPredictor(self.cfg) - - self.rigidness_mode = RigidnessMode(rigidness_mode) - self.max_object_area = max_object_area - self.min_mask_area = min_mask_area - self.downsample_levels = downsample_levels - self.num_variants_per_mask = num_variants_per_mask - self.max_mask_intersection = max_mask_intersection - self.max_foreground_coverage = max_foreground_coverage - self.max_foreground_intersection = max_foreground_intersection - self.max_hidden_area = max_hidden_area - self.position_shuffle = position_shuffle - - self.max_scale_change = max_scale_change - self.horizontal_flip = horizontal_flip - self.max_vertical_shift = max_vertical_shift - - def get_segmentation(self, img): - im = img_as_ubyte(img) - panoptic_seg, segment_info = self.predictor(im)["panoptic_seg"] - return panoptic_seg, segment_info - - @staticmethod - def _is_power_of_two(n): - return (n != 0) and (n & (n-1) == 0) - - def identify_candidates(self, panoptic_seg, segments_info): - potential_mask_ids = [] - for segment in segments_info: - if not segment["isthing"]: - continue - mask = (panoptic_seg == segment["id"]).int().detach().cpu().numpy() - area = mask.sum().item() / np.prod(panoptic_seg.shape) - if area >= self.max_object_area: - continue - potential_mask_ids.append(segment["id"]) - return potential_mask_ids - - def downsample_mask(self, mask): - height, width = mask.shape - if not (self._is_power_of_two(height) and self._is_power_of_two(width)): - raise ValueError("Image sides are not power of 2.") - - num_iterations = width.bit_length() - 1 - self.downsample_levels - if num_iterations < 0: - raise ValueError(f"Width is lower than 2^{self.downsample_levels}.") - - if height.bit_length() - 1 < num_iterations: - raise ValueError("Height is too low to perform downsampling") - - downsampled = mask - for _ in range(num_iterations): - downsampled = zero_corrected_countless(downsampled) - - return downsampled - - def _augmentation_params(self): - scaling_factor 
= np.random.uniform(1 - self.max_scale_change, 1 + self.max_scale_change) - if self.horizontal_flip: - horizontal_flip = bool(np.random.choice(2)) - else: - horizontal_flip = False - vertical_shift = np.random.uniform(-self.max_vertical_shift, self.max_vertical_shift) - - return { - "scaling_factor": scaling_factor, - "horizontal_flip": horizontal_flip, - "vertical_shift": vertical_shift - } - - def _get_intersection(self, mask_array, mask_object): - intersection = mask_array[ - mask_object.up:mask_object.down, mask_object.left:mask_object.right - ] & mask_object.mask - return intersection - - def _check_masks_intersection(self, aug_mask, total_mask_area, prev_masks): - for existing_mask in prev_masks: - intersection_area = self._get_intersection(existing_mask, aug_mask).sum() - intersection_existing = intersection_area / existing_mask.sum() - intersection_current = 1 - (aug_mask.area() - intersection_area) / total_mask_area - if (intersection_existing > self.max_mask_intersection) or \ - (intersection_current > self.max_mask_intersection): - return False - return True - - def _check_foreground_intersection(self, aug_mask, foreground): - for existing_mask in foreground: - intersection_area = self._get_intersection(existing_mask, aug_mask).sum() - intersection_existing = intersection_area / existing_mask.sum() - if intersection_existing > self.max_foreground_coverage: - return False - intersection_mask = intersection_area / aug_mask.area() - if intersection_mask > self.max_foreground_intersection: - return False - return True - - def _move_mask(self, mask, foreground): - # Obtaining properties of the original mask_object: - orig_mask = ObjectMask(mask) - - chosen_masks = [] - chosen_parameters = [] - # to fix the case when resizing gives mask_object consisting only of False - scaling_factor_lower_bound = 0. - - for var_idx in range(self.num_variants_per_mask): - # Obtaining augmentation parameters and applying them to the downscaled mask_object - augmentation_params = self._augmentation_params() - augmentation_params["scaling_factor"] = min([ - augmentation_params["scaling_factor"], - 2 * min(orig_mask.up, orig_mask.height - orig_mask.down) / orig_mask.height + 1., - 2 * min(orig_mask.left, orig_mask.width - orig_mask.right) / orig_mask.width + 1. - ]) - augmentation_params["scaling_factor"] = max([ - augmentation_params["scaling_factor"], scaling_factor_lower_bound - ]) - - aug_mask = deepcopy(orig_mask) - aug_mask.rescale(augmentation_params["scaling_factor"], inplace=True) - if augmentation_params["horizontal_flip"]: - aug_mask.horizontal_flip(inplace=True) - total_aug_area = aug_mask.area() - if total_aug_area == 0: - scaling_factor_lower_bound = 1. 
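-                # Rescaling erased the mask entirely; forbid shrinking on
-                # later variants and move on.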
- continue - - # Fix if the element vertical shift is too strong and shown area is too small: - vertical_area = aug_mask.mask.sum(axis=1) / total_aug_area # share of area taken by rows - # number of rows which are allowed to be hidden from upper and lower parts of image respectively - max_hidden_up = np.searchsorted(vertical_area.cumsum(), self.max_hidden_area) - max_hidden_down = np.searchsorted(vertical_area[::-1].cumsum(), self.max_hidden_area) - # correcting vertical shift, so not too much area will be hidden - augmentation_params["vertical_shift"] = np.clip( - augmentation_params["vertical_shift"], - -(aug_mask.up + max_hidden_up) / aug_mask.height, - (aug_mask.height - aug_mask.down + max_hidden_down) / aug_mask.height - ) - # Applying vertical shift: - vertical_shift = int(round(aug_mask.height * augmentation_params["vertical_shift"])) - aug_mask.shift(vertical=vertical_shift, inplace=True) - aug_mask.crop_to_canvas(vertical=True, horizontal=False, inplace=True) - - # Choosing horizontal shift: - max_hidden_area = self.max_hidden_area - (1 - aug_mask.area() / total_aug_area) - horizontal_area = aug_mask.mask.sum(axis=0) / total_aug_area - max_hidden_left = np.searchsorted(horizontal_area.cumsum(), max_hidden_area) - max_hidden_right = np.searchsorted(horizontal_area[::-1].cumsum(), max_hidden_area) - allowed_shifts = np.arange(-max_hidden_left, aug_mask.width - - (aug_mask.right - aug_mask.left) + max_hidden_right + 1) - allowed_shifts = - (aug_mask.left - allowed_shifts) - - if self.position_shuffle: - np.random.shuffle(allowed_shifts) - - mask_is_found = False - for horizontal_shift in allowed_shifts: - aug_mask_left = deepcopy(aug_mask) - aug_mask_left.shift(horizontal=horizontal_shift, inplace=True) - aug_mask_left.crop_to_canvas(inplace=True) - - prev_masks = [mask] + chosen_masks - is_mask_suitable = self._check_masks_intersection(aug_mask_left, total_aug_area, prev_masks) & \ - self._check_foreground_intersection(aug_mask_left, foreground) - if is_mask_suitable: - aug_draw = aug_mask_left.restore_full_mask() - chosen_masks.append(aug_draw) - augmentation_params["horizontal_shift"] = horizontal_shift / aug_mask_left.width - chosen_parameters.append(augmentation_params) - mask_is_found = True - break - - if not mask_is_found: - break - - return chosen_parameters - - def _prepare_mask(self, mask): - height, width = mask.shape - target_width = width if self._is_power_of_two(width) else (1 << width.bit_length()) - target_height = height if self._is_power_of_two(height) else (1 << height.bit_length()) - - return resize(mask.astype('float32'), (target_height, target_width), order=0, mode='edge').round().astype('int32') - - def get_masks(self, im, return_panoptic=False): - panoptic_seg, segments_info = self.get_segmentation(im) - potential_mask_ids = self.identify_candidates(panoptic_seg, segments_info) - - panoptic_seg_scaled = self._prepare_mask(panoptic_seg.detach().cpu().numpy()) - downsampled = self.downsample_mask(panoptic_seg_scaled) - scene_objects = [] - for segment in segments_info: - if not segment["isthing"]: - continue - mask = downsampled == segment["id"] - if not np.any(mask): - continue - scene_objects.append(mask) - - mask_set = [] - for mask_id in potential_mask_ids: - mask = downsampled == mask_id - if not np.any(mask): - continue - - if self.rigidness_mode is RigidnessMode.soft: - foreground = [mask] - elif self.rigidness_mode is RigidnessMode.rigid: - foreground = scene_objects - else: - raise ValueError(f'Unexpected rigidness_mode: {rigidness_mode}') - - 
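-            # With the foreground set fixed by the rigidness mode, search for
-            # valid shifted/rescaled placements of this mask.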
masks_params = self._move_mask(mask, foreground) - - full_mask = ObjectMask((panoptic_seg == mask_id).detach().cpu().numpy()) - - for params in masks_params: - aug_mask = deepcopy(full_mask) - aug_mask.rescale(params["scaling_factor"], inplace=True) - if params["horizontal_flip"]: - aug_mask.horizontal_flip(inplace=True) - - vertical_shift = int(round(aug_mask.height * params["vertical_shift"])) - horizontal_shift = int(round(aug_mask.width * params["horizontal_shift"])) - aug_mask.shift(vertical=vertical_shift, horizontal=horizontal_shift, inplace=True) - aug_mask = aug_mask.restore_full_mask().astype('uint8') - if aug_mask.mean() <= self.min_mask_area: - continue - mask_set.append(aug_mask) - - if return_panoptic: - return mask_set, panoptic_seg.detach().cpu().numpy() - else: - return mask_set - - -def propose_random_square_crop(mask, min_overlap=0.5): - height, width = mask.shape - mask_ys, mask_xs = np.where(mask > 0.5) # mask==0 is known fragment and mask==1 is missing - - if height < width: - crop_size = height - obj_left, obj_right = mask_xs.min(), mask_xs.max() - obj_width = obj_right - obj_left - left_border = max(0, min(width - crop_size - 1, obj_left + obj_width * min_overlap - crop_size)) - right_border = max(left_border + 1, min(width - crop_size, obj_left + obj_width * min_overlap)) - start_x = np.random.randint(left_border, right_border) - return start_x, 0, start_x + crop_size, height - else: - crop_size = width - obj_top, obj_bottom = mask_ys.min(), mask_ys.max() - obj_height = obj_bottom - obj_top - top_border = max(0, min(height - crop_size - 1, obj_top + obj_height * min_overlap - crop_size)) - bottom_border = max(top_border + 1, min(height - crop_size, obj_top + obj_height * min_overlap)) - start_y = np.random.randint(top_border, bottom_border) - return 0, start_y, width, start_y + crop_size diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/app.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/app.py deleted file mode 100644 index c9bfb000af1af5ec0a745290b95431df58ad7a61..0000000000000000000000000000000000000000 --- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/app.py +++ /dev/null @@ -1,256 +0,0 @@ -import argparse -import json -import os -import re -import tempfile -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) -import librosa -import numpy as np -import torch -from torch import no_grad, LongTensor -import commons -import utils -import gradio as gr -import gradio.utils as gr_utils -import gradio.processing_utils as gr_processing_utils -import ONNXVITS_infer -import models -from text import text_to_sequence, _clean_text -from text.symbols import symbols -from mel_processing import spectrogram_torch -import psutil -from datetime import datetime - -language_marks = { - "Japanese": "", - "日本語": "[JA]", - "简体中文": "[ZH]", - "English": "[EN]", - "Mix": "", -} - -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - - -def create_tts_fn(model, hps, speaker_ids): - def tts_fn(text, speaker, language, speed, is_symbol): - if limitation: - text_len = len(re.sub("\[([A-Z]{2})\]", "", text)) - max_len = 150 - if is_symbol: - max_len *= 3 - if text_len > max_len: - return "Error: Text is too long", None - if language is not None: - text = language_marks[language] + text + language_marks[language] - speaker_id = speaker_ids[speaker] - stn_tst = get_text(text, hps, is_symbol) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - sid = 
LongTensor([speaker_id]) - audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, - length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy() - del stn_tst, x_tst, x_tst_lengths, sid - return "Success", (hps.data.sampling_rate, audio) - - return tts_fn - - -def create_vc_fn(model, hps, speaker_ids): - def vc_fn(original_speaker, target_speaker, input_audio): - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if limitation and duration > 30: - return "Error: Audio is too long", None - original_speaker_id = speaker_ids[original_speaker] - target_speaker_id = speaker_ids[target_speaker] - - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != hps.data.sampling_rate: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=hps.data.sampling_rate) - with no_grad(): - y = torch.FloatTensor(audio) - y = y.unsqueeze(0) - spec = spectrogram_torch(y, hps.data.filter_length, - hps.data.sampling_rate, hps.data.hop_length, hps.data.win_length, - center=False) - spec_lengths = LongTensor([spec.size(-1)]) - sid_src = LongTensor([original_speaker_id]) - sid_tgt = LongTensor([target_speaker_id]) - audio = model.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][ - 0, 0].data.cpu().float().numpy() - del y, spec, spec_lengths, sid_src, sid_tgt - return "Success", (hps.data.sampling_rate, audio) - - return vc_fn - - -def get_text(text, hps, is_symbol): - text_norm = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - -def create_to_symbol_fn(hps): - def to_symbol_fn(is_symbol_input, input_text, temp_text): - return (_clean_text(input_text, hps.data.text_cleaners), input_text) if is_symbol_input \ - else (temp_text, temp_text) - - return to_symbol_fn - - -models_tts = [] -models_vc = [] -models_info = [ - { - "title": "Trilingual", - "languages": ['日本語', '简体中文', 'English', 'Mix'], - "description": """ - This model is trained on a mix up of Umamusume, Genshin Impact, Sanoba Witch & VCTK voice data to learn multilanguage. 
- All characters can speak English, Chinese & Japanese.\n\n - To mix multiple languages in a single sentence, wrap the corresponding part with language tokens - ([JA] for Japanese, [ZH] for Chinese, [EN] for English), as shown in the examples.\n\n - 这个模型在赛马娘,原神,魔女的夜宴以及VCTK数据集上混合训练以学习多种语言。 - 所有角色均可说中日英三语。\n\n - 若需要在同一个句子中混合多种语言,使用相应的语言标记包裹句子。 - (日语用[JA], 中文用[ZH], 英文用[EN]),参考Examples中的示例。 - """, - "model_path": "./pretrained_models/G_trilingual.pth", - "config_path": "./configs/uma_trilingual.json", - "examples": [['你好,训练员先生,很高兴见到你。', '草上飞 Grass Wonder (Umamusume Pretty Derby)', '简体中文', 1, False], - ['To be honest, I have no idea what to say as examples.', '派蒙 Paimon (Genshin Impact)', 'English', - 1, False], - ['授業中に出しだら,学校生活終わるですわ。', '綾地 寧々 Ayachi Nene (Sanoba Witch)', '日本語', 1, False], - ['[JA]こんにちわ。[JA][ZH]你好![ZH][EN]Hello![EN]', '綾地 寧々 Ayachi Nene (Sanoba Witch)', 'Mix', 1, False]], - "onnx_dir": "./ONNX_net/G_trilingual/" - }, - { - "title": "Japanese", - "languages": ["Japanese"], - "description": """ - This model contains 87 characters from Umamusume: Pretty Derby, Japanese only.\n\n - 这个模型包含赛马娘的所有87名角色,只能合成日语。 - """, - "model_path": "./pretrained_models/G_jp.pth", - "config_path": "./configs/uma87.json", - "examples": [['お疲れ様です,トレーナーさん。', '无声铃鹿 Silence Suzuka (Umamusume Pretty Derby)', 'Japanese', 1, False], - ['張り切っていこう!', '北部玄驹 Kitasan Black (Umamusume Pretty Derby)', 'Japanese', 1, False], - ['何でこんなに慣れでんのよ,私のほが先に好きだっだのに。', '草上飞 Grass Wonder (Umamusume Pretty Derby)', 'Japanese', 1, False], - ['授業中に出しだら,学校生活終わるですわ。', '目白麦昆 Mejiro Mcqueen (Umamusume Pretty Derby)', 'Japanese', 1, False], - ['お帰りなさい,お兄様!', '米浴 Rice Shower (Umamusume Pretty Derby)', 'Japanese', 1, False], - ['私の処女をもらっでください!', '米浴 Rice Shower (Umamusume Pretty Derby)', 'Japanese', 1, False]], - "onnx_dir": "./ONNX_net/G_jp/" - }, -] - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - for info in models_info: - name = info['title'] - lang = info['languages'] - examples = info['examples'] - config_path = info['config_path'] - model_path = info['model_path'] - description = info['description'] - onnx_dir = info["onnx_dir"] - hps = utils.get_hparams_from_file(config_path) - model = ONNXVITS_infer.SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - ONNX_dir=onnx_dir, - **hps.model) - utils.load_checkpoint(model_path, model, None) - model.eval() - speaker_ids = hps.speakers - speakers = list(hps.speakers.keys()) - models_tts.append((name, description, speakers, lang, examples, - hps.symbols, create_tts_fn(model, hps, speaker_ids), - create_to_symbol_fn(hps))) - models_vc.append((name, description, speakers, create_vc_fn(model, hps, speaker_ids))) - app = gr.Blocks() - with app: - gr.Markdown("# English & Chinese & Japanese Anime TTS\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=Plachta.VITS-Umamusume-voice-synthesizer)\n\n" - "Including Japanese TTS & Trilingual TTS, speakers are all anime characters. 
\n\n包含一个纯日语TTS和一个中日英三语TTS模型,主要为二次元角色。\n\n" - "If you have any suggestions or bug reports, feel free to open discussion in [Community](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer/discussions).\n\n" - "若有bug反馈或建议,请在[Community](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer/discussions)下开启一个新的Discussion。 \n\n" - ) - with gr.Tabs(): - with gr.TabItem("TTS"): - with gr.Tabs(): - for i, (name, description, speakers, lang, example, symbols, tts_fn, to_symbol_fn) in enumerate( - models_tts): - with gr.TabItem(name): - gr.Markdown(description) - with gr.Row(): - with gr.Column(): - textbox = gr.TextArea(label="Text", - placeholder="Type your sentence here (Maximum 150 words)", - value="こんにちわ。", elem_id=f"tts-input") - with gr.Accordion(label="Phoneme Input", open=False): - temp_text_var = gr.Variable() - symbol_input = gr.Checkbox(value=False, label="Symbol input") - symbol_list = gr.Dataset(label="Symbol list", components=[textbox], - samples=[[x] for x in symbols], - elem_id=f"symbol-list") - symbol_list_json = gr.Json(value=symbols, visible=False) - symbol_input.change(to_symbol_fn, - [symbol_input, textbox, temp_text_var], - [textbox, temp_text_var]) - symbol_list.click(None, [symbol_list, symbol_list_json], textbox, - _js=f""" - (i, symbols, text) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#tts-input").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + symbols[i].length; - text_input.selectionEnd = startPos + symbols[i].length; - text_input.blur(); - window.scrollTo(x, y); - - text = text_input.value; - - return text; - }}""") - # select character - char_dropdown = gr.Dropdown(choices=speakers, value=speakers[0], label='character') - language_dropdown = gr.Dropdown(choices=lang, value=lang[0], label='language') - duration_slider = gr.Slider(minimum=0.1, maximum=5, value=1, step=0.1, - label='速度 Speed') - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio", elem_id="tts-audio") - btn = gr.Button("Generate!") - btn.click(tts_fn, - inputs=[textbox, char_dropdown, language_dropdown, duration_slider, - symbol_input], - outputs=[text_output, audio_output]) - gr.Examples( - examples=example, - inputs=[textbox, char_dropdown, language_dropdown, - duration_slider, symbol_input], - outputs=[text_output, audio_output], - fn=tts_fn - ) - app.queue(concurrency_count=3).launch(show_api=False, share=args.share) \ No newline at end of file diff --git a/spaces/AmmarHuggingFaces/intro-to-hugging-face/app.py b/spaces/AmmarHuggingFaces/intro-to-hugging-face/app.py deleted file mode 100644 index 02d8e5e1ff6c81f155e9dcca3353082cc0cf7175..0000000000000000000000000000000000000000 --- a/spaces/AmmarHuggingFaces/intro-to-hugging-face/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr -from transformers import pipeline -sentiment = pipeline("sentiment-analysis") -def get_sentiment(input_text): - return sentiment(input_text) -iface = gr.Interface(fn = get_sentiment, inputs = "text", outputs = ["text"], title="Sentiment Analysis", description="Get Sentiment Negative / Positive for 
the given input" ) -iface.launch(inline=False) \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/installation.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/installation.md deleted file mode 100644 index 50df14be3f776abb2f4e029dad5ee578ea2401bc..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/installation.md +++ /dev/null @@ -1,146 +0,0 @@ - - -# Installation - -Install 🤗 Diffusers for whichever deep learning library you're working with. - -🤗 Diffusers is tested on Python 3.7+, PyTorch 1.7.0+ and Flax. Follow the installation instructions below for the deep learning library you are using: - -- [PyTorch](https://pytorch.org/get-started/locally/) installation instructions. -- [Flax](https://flax.readthedocs.io/en/latest/) installation instructions. - -## Install with pip - -You should install 🤗 Diffusers in a [virtual environment](https://docs.python.org/3/library/venv.html). -If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). -A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. - -Start by creating a virtual environment in your project directory: - -```bash -python -m venv .env -``` - -Activate the virtual environment: - -```bash -source .env/bin/activate -``` - -🤗 Diffusers also relies on the 🤗 Transformers library, and you can install both with the following command: - - - -```bash -pip install diffusers["torch"] transformers -``` - - -```bash -pip install diffusers["flax"] transformers -``` - - - -## Install from source - -Before installing 🤗 Diffusers from source, make sure you have `torch` and 🤗 Accelerate installed. - -For `torch` installation, refer to the `torch` [installation](https://pytorch.org/get-started/locally/#start-locally) guide. - -To install 🤗 Accelerate: - -```bash -pip install accelerate -``` - -Install 🤗 Diffusers from source with the following command: - -```bash -pip install git+https://github.com/huggingface/diffusers -``` - -This command installs the bleeding edge `main` version rather than the latest `stable` version. -The `main` version is useful for staying up-to-date with the latest developments. -For instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. -However, this means the `main` version may not always be stable. -We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. -If you run into a problem, please open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose), so we can fix it even sooner! - -## Editable install - -You will need an editable install if you'd like to: - -* Use the `main` version of the source code. -* Contribute to 🤗 Diffusers and need to test changes in the code. - -Clone the repository and install 🤗 Diffusers with the following commands: - -```bash -git clone https://github.com/huggingface/diffusers.git -cd diffusers -``` - - - -```bash -pip install -e ".[torch]" -``` - - -```bash -pip install -e ".[flax]" -``` - - - -These commands will link the folder you cloned the repository to and your Python library paths. -Python will now look inside the folder you cloned to in addition to the normal library paths. 
-For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the `~/diffusers/` folder you cloned to. - - - -You must keep the `diffusers` folder if you want to keep using the library. - - - -Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: - -```bash -cd ~/diffusers/ -git pull -``` - -Your Python environment will find the `main` version of 🤗 Diffusers on the next run. - -## Notice on telemetry logging - -Our library gathers telemetry information during `from_pretrained()` requests. -This data includes the version of Diffusers and PyTorch/Flax, the requested model or pipeline class, -and the path to a pre-trained checkpoint if it is hosted on the Hub. -This usage data helps us debug issues and prioritize new features. -Telemetry is only sent when loading models and pipelines from the HuggingFace Hub, -and is not collected during local usage. - -We understand that not everyone wants to share additional information, and we respect your privacy, -so you can disable telemetry collection by setting the `DISABLE_TELEMETRY` environment variable from your terminal: - -On Linux/MacOS: -```bash -export DISABLE_TELEMETRY=YES -``` - -On Windows: -```bash -set DISABLE_TELEMETRY=YES -``` diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_consistency_models.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_consistency_models.py deleted file mode 100644 index fb296054d65b804af281dc99d940c8f0ba50e01b..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_consistency_models.py +++ /dev/null @@ -1,380 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput, logging, randn_tensor -from .scheduling_utils import SchedulerMixin - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -class CMStochasticIterativeSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - """ - - prev_sample: torch.FloatTensor - - -class CMStochasticIterativeScheduler(SchedulerMixin, ConfigMixin): - """ - Multistep and onestep sampling for consistency models from Song et al. 2023 [1]. This implements Algorithm 1 in the - paper [1]. - - [1] Song, Yang and Dhariwal, Prafulla and Chen, Mark and Sutskever, Ilya. 
"Consistency Models" - https://arxiv.org/pdf/2303.01469 [2] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based - Generative Models." https://arxiv.org/abs/2206.00364 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - sigma_min (`float`): - Minimum noise magnitude in the sigma schedule. This was set to 0.002 in the original implementation. - sigma_max (`float`): - Maximum noise magnitude in the sigma schedule. This was set to 80.0 in the original implementation. - sigma_data (`float`): - The standard deviation of the data distribution, following the EDM paper [2]. This was set to 0.5 in the - original implementation, which is also the original value suggested in the EDM paper. - s_noise (`float`): - The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, - 1.011]. This was set to 1.0 in the original implementation. - rho (`float`): - The rho parameter used for calculating the Karras sigma schedule, introduced in the EDM paper [2]. This was - set to 7.0 in the original implementation, which is also the original value suggested in the EDM paper. - clip_denoised (`bool`): - Whether to clip the denoised outputs to `(-1, 1)`. Defaults to `True`. - timesteps (`List` or `np.ndarray` or `torch.Tensor`, *optional*): - Optionally, an explicit timestep schedule can be specified. The timesteps are expected to be in increasing - order. - """ - - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 40, - sigma_min: float = 0.002, - sigma_max: float = 80.0, - sigma_data: float = 0.5, - s_noise: float = 1.0, - rho: float = 7.0, - clip_denoised: bool = True, - ): - # standard deviation of the initial noise distribution - self.init_noise_sigma = sigma_max - - ramp = np.linspace(0, 1, num_train_timesteps) - sigmas = self._convert_to_karras(ramp) - timesteps = self.sigma_to_t(sigmas) - - # setable values - self.num_inference_steps = None - self.sigmas = torch.from_numpy(sigmas) - self.timesteps = torch.from_numpy(timesteps) - self.custom_timesteps = False - self.is_scale_input_called = False - - def index_for_timestep(self, timestep, schedule_timesteps=None): - if schedule_timesteps is None: - schedule_timesteps = self.timesteps - - indices = (schedule_timesteps == timestep).nonzero() - return indices.item() - - def scale_model_input( - self, sample: torch.FloatTensor, timestep: Union[float, torch.FloatTensor] - ) -> torch.FloatTensor: - """ - Scales the consistency model input by `(sigma**2 + sigma_data**2) ** 0.5`, following the EDM model. 
- - Args: - sample (`torch.FloatTensor`): input sample - timestep (`float` or `torch.FloatTensor`): the current timestep in the diffusion chain - Returns: - `torch.FloatTensor`: scaled input sample - """ - # Get sigma corresponding to timestep - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - step_idx = self.index_for_timestep(timestep) - sigma = self.sigmas[step_idx] - - sample = sample / ((sigma**2 + self.config.sigma_data**2) ** 0.5) - - self.is_scale_input_called = True - return sample - - def sigma_to_t(self, sigmas: Union[float, np.ndarray]): - """ - Gets scaled timesteps from the Karras sigmas, for input to the consistency model. - - Args: - sigmas (`float` or `np.ndarray`): single Karras sigma or array of Karras sigmas - Returns: - `float` or `np.ndarray`: scaled input timestep or scaled input timestep array - """ - if not isinstance(sigmas, np.ndarray): - sigmas = np.array(sigmas, dtype=np.float64) - - timesteps = 1000 * 0.25 * np.log(sigmas + 1e-44) - - return timesteps - - def set_timesteps( - self, - num_inference_steps: Optional[int] = None, - device: Union[str, torch.device] = None, - timesteps: Optional[List[int]] = None, - ): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved. If `None`, the timesteps are not moved. - timesteps (`List[int]`, optional): - custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default - timestep spacing strategy of equal spacing between timesteps is used. If passed, `num_inference_steps` - must be `None`. - """ - if num_inference_steps is None and timesteps is None: - raise ValueError("Exactly one of `num_inference_steps` or `timesteps` must be supplied.") - - if num_inference_steps is not None and timesteps is not None: - raise ValueError("Can only pass one of `num_inference_steps` or `timesteps`.") - - # Follow DDPMScheduler custom timesteps logic - if timesteps is not None: - for i in range(1, len(timesteps)): - if timesteps[i] >= timesteps[i - 1]: - raise ValueError("`timesteps` must be in descending order.") - - if timesteps[0] >= self.config.num_train_timesteps: - raise ValueError( - f"`timesteps` must start before `self.config.num_train_timesteps`:" - f" {self.config.num_train_timesteps}." - ) - - timesteps = np.array(timesteps, dtype=np.int64) - self.custom_timesteps = True - else: - if num_inference_steps > self.config.num_train_timesteps: - raise ValueError( - f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.num_train_timesteps`:" - f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle" - f" at most {self.config.num_train_timesteps} timesteps."
- ) - - self.num_inference_steps = num_inference_steps - - step_ratio = self.config.num_train_timesteps // self.num_inference_steps - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64) - self.custom_timesteps = False - - # Map timesteps to Karras sigmas directly for multistep sampling - # See https://github.com/openai/consistency_models/blob/main/cm/karras_diffusion.py#L675 - num_train_timesteps = self.config.num_train_timesteps - ramp = timesteps[::-1].copy() - ramp = ramp / (num_train_timesteps - 1) - sigmas = self._convert_to_karras(ramp) - timesteps = self.sigma_to_t(sigmas) - - sigmas = np.concatenate([sigmas, [self.sigma_min]]).astype(np.float32) - self.sigmas = torch.from_numpy(sigmas).to(device=device) - - if str(device).startswith("mps"): - # mps does not support float64 - self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32) - else: - self.timesteps = torch.from_numpy(timesteps).to(device=device) - - # Modified _convert_to_karras implementation that takes in ramp as argument - def _convert_to_karras(self, ramp): - """Constructs the noise schedule of Karras et al. (2022).""" - - sigma_min: float = self.config.sigma_min - sigma_max: float = self.config.sigma_max - - rho = self.config.rho - min_inv_rho = sigma_min ** (1 / rho) - max_inv_rho = sigma_max ** (1 / rho) - sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho - return sigmas - - def get_scalings(self, sigma): - sigma_data = self.config.sigma_data - - c_skip = sigma_data**2 / (sigma**2 + sigma_data**2) - c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5 - return c_skip, c_out - - def get_scalings_for_boundary_condition(self, sigma): - """ - Gets the scalings used in the consistency model parameterization, following Appendix C of the original paper. - This enforces the consistency model boundary condition. - - Note that `epsilon` in the equations for c_skip and c_out is set to sigma_min. - - Args: - sigma (`torch.FloatTensor`): - The current sigma in the Karras sigma schedule. - Returns: - `tuple`: - A two-element tuple where c_skip (which weights the current sample) is the first element and c_out - (which weights the consistency model output) is the second element. - """ - sigma_min = self.config.sigma_min - sigma_data = self.config.sigma_data - - c_skip = sigma_data**2 / ((sigma - sigma_min) ** 2 + sigma_data**2) - c_out = (sigma - sigma_min) * sigma_data / (sigma**2 + sigma_data**2) ** 0.5 - return c_skip, c_out - - def step( - self, - model_output: torch.FloatTensor, - timestep: Union[float, torch.FloatTensor], - sample: torch.FloatTensor, - generator: Optional[torch.Generator] = None, - return_dict: bool = True, - ) -> Union[CMStochasticIterativeSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`float`): current timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - generator (`torch.Generator`, *optional*): Random number generator. 
- return_dict (`bool`): option for returning a tuple rather than a CMStochasticIterativeSchedulerOutput class - Returns: - [`~schedulers.scheduling_utils.CMStochasticIterativeSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.CMStochasticIterativeSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - """ - - if ( - isinstance(timestep, int) - or isinstance(timestep, torch.IntTensor) - or isinstance(timestep, torch.LongTensor) - ): - raise ValueError( - ( - "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to" - f" `{self.__class__}.step()` is not supported. Make sure to pass" - " one of the `scheduler.timesteps` as a timestep." - ), - ) - - if not self.is_scale_input_called: - logger.warning( - "The `scale_model_input` function should be called before `step` to ensure correct denoising. " - "See `StableDiffusionPipeline` for a usage example." - ) - - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - - sigma_min = self.config.sigma_min - sigma_max = self.config.sigma_max - - step_index = self.index_for_timestep(timestep) - - # sigma_next corresponds to next_t in original implementation - sigma = self.sigmas[step_index] - if step_index + 1 < self.config.num_train_timesteps: - sigma_next = self.sigmas[step_index + 1] - else: - # Set sigma_next to sigma_min - sigma_next = self.sigmas[-1] - - # Get scalings for boundary conditions - c_skip, c_out = self.get_scalings_for_boundary_condition(sigma) - - # 1. Denoise model output using boundary conditions - denoised = c_out * model_output + c_skip * sample - if self.config.clip_denoised: - denoised = denoised.clamp(-1, 1) - - # 2. Sample z ~ N(0, s_noise^2 * I) - # Noise is not used for onestep sampling. - if len(self.timesteps) > 1: - noise = randn_tensor( - model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator - ) - else: - noise = torch.zeros_like(model_output) - z = noise * self.config.s_noise - - sigma_hat = sigma_next.clamp(min=sigma_min, max=sigma_max) - - # 3.
Return noisy sample - # tau = sigma_hat, eps = sigma_min - prev_sample = denoised + z * (sigma_hat**2 - sigma_min**2) ** 0.5 - - if not return_dict: - return (prev_sample,) - - return CMStochasticIterativeSchedulerOutput(prev_sample=prev_sample) - - # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.add_noise - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.FloatTensor, - ) -> torch.FloatTensor: - # Make sure sigmas and timesteps have the same device and dtype as original_samples - sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) - if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): - # mps does not support float64 - schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) - timesteps = timesteps.to(original_samples.device, dtype=torch.float32) - else: - schedule_timesteps = self.timesteps.to(original_samples.device) - timesteps = timesteps.to(original_samples.device) - - step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps] - - sigma = sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py deleted file mode 100644 index df8009dd0e27ec81dfbf4779904d6a6cfc0679f6..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py +++ /dev/null @@ -1,1127 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. 
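-# Each class below mirrors a public pipeline/model name and raises an informative -# ImportError (via `requires_backends`) on instantiation, `from_config`, or -# `from_pretrained`, so that `import diffusers` still succeeds when the optional -# "torch" and "transformers" backends are not installed.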
-from ..utils import DummyObject, requires_backends - - -class AltDiffusionImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class AltDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class AudioLDMPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class CycleDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class IFImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class IFImg2ImgSuperResolutionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class IFInpaintingPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class IFInpaintingSuperResolutionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class IFPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, 
["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class IFSuperResolutionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class ImageTextPipelineOutput(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyCombinedPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyImg2ImgCombinedPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyInpaintCombinedPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyInpaintPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - 
requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyPriorPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22CombinedPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22ControlnetImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22ControlnetPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22Img2ImgCombinedPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22Img2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22InpaintCombinedPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22InpaintPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22Pipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def 
__init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22PriorEmb2EmbPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class KandinskyV22PriorPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class LDMTextToImagePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class PaintByExamplePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class SemanticStableDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class ShapEImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class ShapEPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionAdapterPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - 
@classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionAttendAndExcitePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionControlNetImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionControlNetInpaintPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionControlNetPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionDepth2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionDiffEditPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionImageVariationPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class 
StableDiffusionInpaintPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionInpaintPipelineLegacy(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionInstructPix2PixPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionLatentUpscalePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionLDM3DPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionModelEditingPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionPanoramaPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionParadigmsPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - 
requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionPipelineSafe(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionPix2PixZeroPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionSAGPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionUpscalePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionXLControlNetPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionXLImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionXLInpaintPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionXLInstructPix2PixPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", 
"transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionXLPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableUnCLIPImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableUnCLIPPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class TextToVideoSDPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class TextToVideoZeroPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class UnCLIPImageVariationPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class UnCLIPPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class UniDiffuserModel(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class UniDiffuserPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - 
- def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class UniDiffuserTextDecoder(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VersatileDiffusionDualGuidedPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VersatileDiffusionImageVariationPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VersatileDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VersatileDiffusionTextToImagePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VideoToVideoSDPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VQDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/stale.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/stale.py deleted file mode 100644 index 12932f31c243f44566fb65daf80b0b3637cc8a95..0000000000000000000000000000000000000000 --- 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/stale.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright 2023 The HuggingFace Team, the AllenNLP library authors. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Script to close stale issue. Taken in part from the AllenNLP repository. -https://github.com/allenai/allennlp. -""" -import os -from datetime import datetime as dt - -from github import Github - - -LABELS_TO_EXEMPT = [ - "good first issue", - "good second issue", - "good difficult issue", - "enhancement", - "new pipeline/model", - "new scheduler", - "wip", -] - - -def main(): - g = Github(os.environ["GITHUB_TOKEN"]) - repo = g.get_repo("huggingface/diffusers") - open_issues = repo.get_issues(state="open") - - for issue in open_issues: - comments = sorted(issue.get_comments(), key=lambda i: i.created_at, reverse=True) - last_comment = comments[0] if len(comments) > 0 else None - if ( - last_comment is not None - and last_comment.user.login == "github-actions[bot]" - and (dt.utcnow() - issue.updated_at).days > 7 - and (dt.utcnow() - issue.created_at).days >= 30 - and not any(label.name.lower() in LABELS_TO_EXEMPT for label in issue.get_labels()) - ): - # Closes the issue after 7 days of inactivity since the Stalebot notification. - issue.edit(state="closed") - elif ( - "stale" in issue.get_labels() - and last_comment is not None - and last_comment.user.login != "github-actions[bot]" - ): - # Opens the issue if someone other than Stalebot commented. - issue.edit(state="open") - issue.remove_from_labels("stale") - elif ( - (dt.utcnow() - issue.updated_at).days > 23 - and (dt.utcnow() - issue.created_at).days >= 30 - and not any(label.name.lower() in LABELS_TO_EXEMPT for label in issue.get_labels()) - ): - # Post a Stalebot notification after 23 days of inactivity. - issue.create_comment( - "This issue has been automatically marked as stale because it has not had " - "recent activity. If you think this still needs to be addressed " - "please comment on this thread.\n\nPlease note that issues that do not follow the " - "[contributing guidelines](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md) " - "are likely to be ignored." 
- ) - issue.add_to_labels("stale") - - -if __name__ == "__main__": - main() diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco.py deleted file mode 100644 index 1afeeef1212db831dd1f097d30b0354e459daa97..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './cascade_rcnn_r50_fpn_20e_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py deleted file mode 100644 index a01df33c94e1f8b5f51a51a780b30a77ce99b2c0..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/README.md b/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/README.md deleted file mode 100644 index c19dee36e441f2f6a8330ab8c6d94e7408ec9fe6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/README.md +++ /dev/null @@ -1,26 +0,0 @@ -# Mask Scoring R-CNN - -## Introduction - -[ALGORITHM] - -``` -@inproceedings{huang2019msrcnn, - title={Mask Scoring R-CNN}, - author={Zhaojin Huang and Lichao Huang and Yongchao Gong and Chang Huang and Xinggang Wang}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - year={2019}, -} -``` - -## Results and Models - -| Backbone | style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -|:-------------:|:----------:|:-------:|:--------:|:--------------:|:------:|:-------:|:------:|:--------:| -| R-50-FPN | caffe | 1x | 4.5 | | 38.2 | 36.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco/ms_rcnn_r50_caffe_fpn_1x_coco_20200702_180848-61c9355e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco/ms_rcnn_r50_caffe_fpn_1x_coco_20200702_180848.log.json) | -| R-50-FPN | caffe | 2x | - | - | 38.8 | 36.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_r50_caffe_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_2x_coco/ms_rcnn_r50_caffe_fpn_2x_coco_bbox_mAP-0.388__segm_mAP-0.363_20200506_004738-ee87b137.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_2x_coco/ms_rcnn_r50_caffe_fpn_2x_coco_20200506_004738.log.json) | -| R-101-FPN | caffe | 1x | 6.5 | | 40.4 | 37.6 | 
[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_r101_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r101_caffe_fpn_1x_coco/ms_rcnn_r101_caffe_fpn_1x_coco_bbox_mAP-0.404__segm_mAP-0.376_20200506_004755-b9b12a37.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r101_caffe_fpn_1x_coco/ms_rcnn_r101_caffe_fpn_1x_coco_20200506_004755.log.json) | -| R-101-FPN | caffe | 2x | - | - | 41.1 | 38.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_r101_caffe_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r101_caffe_fpn_2x_coco/ms_rcnn_r101_caffe_fpn_2x_coco_bbox_mAP-0.411__segm_mAP-0.381_20200506_011134-5f3cc74f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r101_caffe_fpn_2x_coco/ms_rcnn_r101_caffe_fpn_2x_coco_20200506_011134.log.json) | -| R-X101-32x4d | pytorch | 2x | 7.9 | 11.0 | 41.8 | 38.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco/ms_rcnn_x101_32x4d_fpn_1x_coco_20200206-81fd1740.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco/ms_rcnn_x101_32x4d_fpn_1x_coco_20200206_100113.log.json) | -| R-X101-64x4d | pytorch | 1x | 11.0 | 8.0 | 43.0 | 39.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco/ms_rcnn_x101_64x4d_fpn_1x_coco_20200206-86ba88d2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco/ms_rcnn_x101_64x4d_fpn_1x_coco_20200206_091744.log.json) | -| R-X101-64x4d | pytorch | 2x | 11.0 | 8.0 | 42.6 | 39.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco/ms_rcnn_x101_64x4d_fpn_2x_coco_20200308-02a445e2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco/ms_rcnn_x101_64x4d_fpn_2x_coco_20200308_012247.log.json) | diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 990a085eda2f2dc47f1a1289bfbf2726ad8c9c4f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/image_sample.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/image_sample.py deleted file mode 100644 index 8bcfd463dcbe86ce42e6708892d81e24d549583d..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/image_sample.py +++ /dev/null @@ -1,108 +0,0 @@ -""" -Generate a large batch of image samples from a model and save them as a large -numpy 
array. This can be used to produce samples for FID evaluation. -""" - -import argparse -import os - -import numpy as np -import torch as th -import torch.distributed as dist - -from guided_diffusion import dist_util, logger -from guided_diffusion.script_util import ( - NUM_CLASSES, - model_and_diffusion_defaults, - create_model_and_diffusion, - add_dict_to_argparser, - args_to_dict, -) - - -def main(): - args = create_argparser().parse_args() - - dist_util.setup_dist() - logger.configure() - - logger.log("creating model and diffusion...") - model, diffusion = create_model_and_diffusion( - **args_to_dict(args, model_and_diffusion_defaults().keys()) - ) - model.load_state_dict( - dist_util.load_state_dict(args.model_path, map_location="cpu") - ) - model.to(dist_util.dev()) - if args.use_fp16: - model.convert_to_fp16() - model.eval() - - logger.log("sampling...") - all_images = [] - all_labels = [] - while len(all_images) * args.batch_size < args.num_samples: - model_kwargs = {} - if args.class_cond: - classes = th.randint( - low=0, high=NUM_CLASSES, size=(args.batch_size,), device=dist_util.dev() - ) - model_kwargs["y"] = classes - sample_fn = ( - diffusion.p_sample_loop if not args.use_ddim else diffusion.ddim_sample_loop - ) - sample = sample_fn( - model, - (args.batch_size, 3, args.image_size, args.image_size), - clip_denoised=args.clip_denoised, - model_kwargs=model_kwargs, - ) - sample = ((sample + 1) * 127.5).clamp(0, 255).to(th.uint8) - sample = sample.permute(0, 2, 3, 1) - sample = sample.contiguous() - - gathered_samples = [th.zeros_like(sample) for _ in range(dist.get_world_size())] - dist.all_gather(gathered_samples, sample) # gather not supported with NCCL - all_images.extend([sample.cpu().numpy() for sample in gathered_samples]) - if args.class_cond: - gathered_labels = [ - th.zeros_like(classes) for _ in range(dist.get_world_size()) - ] - dist.all_gather(gathered_labels, classes) - all_labels.extend([labels.cpu().numpy() for labels in gathered_labels]) - logger.log(f"created {len(all_images) * args.batch_size} samples") - - arr = np.concatenate(all_images, axis=0) - arr = arr[: args.num_samples] - if args.class_cond: - label_arr = np.concatenate(all_labels, axis=0) - label_arr = label_arr[: args.num_samples] - if dist.get_rank() == 0: - shape_str = "x".join([str(x) for x in arr.shape]) - out_path = os.path.join(logger.get_dir(), f"samples_{shape_str}.npz") - logger.log(f"saving to {out_path}") - if args.class_cond: - np.savez(out_path, arr, label_arr) - else: - np.savez(out_path, arr) - - dist.barrier() - logger.log("sampling complete") - - -def create_argparser(): - defaults = dict( - clip_denoised=True, - num_samples=10000, - batch_size=16, - use_ddim=False, - model_path="", - ) - defaults.update(model_and_diffusion_defaults()) - parser = argparse.ArgumentParser() - add_dict_to_argparser(parser, defaults) - return parser - - -if __name__ == "__main__": - main() diff --git a/spaces/AntiUser/DeepDanbooru_string/README.md b/spaces/AntiUser/DeepDanbooru_string/README.md deleted file mode 100644 index 4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000 --- a/spaces/AntiUser/DeepDanbooru_string/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: DeepDanbooru String -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -duplicated_from: NoCrypt/DeepDanbooru_string ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) 
- -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/ArkanDash/rvc-models-new/config.py b/spaces/ArkanDash/rvc-models-new/config.py deleted file mode 100644 index b6de7523991c6384178ad96b5fe0c8932c1b5688..0000000000000000000000000000000000000000 --- a/spaces/ArkanDash/rvc-models-new/config.py +++ /dev/null @@ -1,99 +0,0 @@ -import argparse -import sys -import torch -from multiprocessing import cpu_count - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.share, - self.api, - self.unsupported - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--share", action="store_true", help="Launch with public link") - parser.add_argument("--api", action="store_true", help="Launch with api") - parser.add_argument("--unsupported", action="store_true", help="Enable unsupported feature") - cmd_opts = parser.parse_args() - - return ( - cmd_opts.share, - cmd_opts.api, - cmd_opts.unsupported - ) - - # has_mps is only available in nightly pytorch (for now) and macOS 12.3+. 
- # check `getattr` and try it for compatibility - @staticmethod - def has_mps() -> bool: - if not torch.backends.mps.is_available(): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("INFO: Found GPU", self.gpu_name, ", force to fp32") - self.is_half = False - else: - print("INFO: Found GPU", self.gpu_name) - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - elif self.has_mps(): - print("INFO: No supported Nvidia GPU found, use MPS instead") - self.device = "mps" - self.is_half = False - else: - print("INFO: No supported Nvidia GPU found, use CPU instead") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # settings for 6 GB of VRAM - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # settings for 5 GB of VRAM - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem is not None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_tiny.py b/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_tiny.py deleted file mode 100644 index 5220de2f2e6760d5c9a966d5dd397aad721fc60a..0000000000000000000000000000000000000000 --- a/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_tiny.py +++ /dev/null @@ -1,20 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. - -import os - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 0.33 - self.width = 0.375 - self.input_size = (416, 416) - self.mosaic_scale = (0.5, 1.5) - self.random_size = (10, 20) - self.test_size = (416, 416) - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - self.enable_mixup = False diff --git a/spaces/Bagus/speaker-verification-demo/app.py b/spaces/Bagus/speaker-verification-demo/app.py deleted file mode 100644 index 7acb9d26caf1555f045593bdb37c74564c3cd97a..0000000000000000000000000000000000000000 --- a/spaces/Bagus/speaker-verification-demo/app.py +++ /dev/null @@ -1,120 +0,0 @@ -import gradio as gr -import torch -import torchaudio -# from torchaudio.sox_effects import apply_effects_file -from transformers import AutoFeatureExtractor, AutoModelForAudioXVector - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -STYLE = """ - -""" -OUTPUT_OK = ( - STYLE - + """ -
-<div>The speakers are</div>
-<div>{:.1f}%</div>
-<div>similar</div>
-<div>Welcome, human!</div>
-<div>(You must get at least 80% to be considered the same person)</div>
      -""" -) -OUTPUT_FAIL = ( - STYLE - + """ -
-<div>The speakers are</div>
-<div>{:.1f}%</div>
-<div>similar</div>
-<div>You shall not pass!</div>
-<div>(You must get at least 80% to be considered the same person)</div>
      -""" -) - -EFFECTS = [ - ["remix", "-"], - ["channels", "1"], - ["rate", "16000"], - ["gain", "-1.0"], - ["silence", "1", "0.1", "0.1%", "-1", "0.1", "0.1%"], - ["trim", "0", "10"], -] - -THRESHOLD = 0.80 - -model_name = "microsoft/wavlm-base-plus-sv" -feature_extractor = AutoFeatureExtractor.from_pretrained(model_name) -model = AutoModelForAudioXVector.from_pretrained(model_name).to(device) -cosine_sim = torch.nn.CosineSimilarity(dim=-1) - - -def similarity_fn(path1, path2): - if not (path1 and path2): - return 'ERROR: Please record audio for *both* speakers!' - - # wav1, _ = apply_effects_file(path1, EFFECTS) - # wav2, _ = apply_effects_file(path2, EFFECTS) - wav1, _ = torchaudio.load(path1) - wav2, _ = torchaudio.load(path2) - print(wav1.shape, wav2.shape) - - input1 = feature_extractor(wav1.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device) - input2 = feature_extractor(wav2.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device) - - with torch.no_grad(): - emb1 = model(input1).embeddings - emb2 = model(input2).embeddings - emb1 = torch.nn.functional.normalize(emb1, dim=-1).cpu() - emb2 = torch.nn.functional.normalize(emb2, dim=-1).cpu() - similarity = cosine_sim(emb1, emb2).numpy()[0] - - if similarity >= THRESHOLD: - output = OUTPUT_OK.format(similarity * 100) - else: - output = OUTPUT_FAIL.format(similarity * 100) - - return output - - -inputs = [ - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #1"), - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #2"), -] -output = gr.outputs.HTML(label="") - - -description = ( - "This demo will compare two speech samples and determine if they are from the same speaker. " - "Try it with your own voice!" -) -article = ( - "

      " - "🎙️ Learn more about WavLM | " - "📚 WavLM paper | " - "📚 X-Vector paper" - "

      " -) -examples = [ - ["samples/denzel_washington.mp3", "samples/denzel_washington.mp3"], - ["samples/heath_ledger_2.mp3", "samples/heath_ledger_3.mp3"], - ["samples/heath_ledger_3.mp3", "samples/denzel_washington.mp3"], - ["samples/denzel_washington.mp3", "samples/heath_ledger_2.mp3"], -] - -interface = gr.Interface( - fn=similarity_fn, - inputs=inputs, - outputs=output, - title="Voice Authentication with WavLM + X-Vectors", - description=description, - article=article, - layout="horizontal", - theme="huggingface", - allow_flagging=False, - live=False, - examples=examples, -) -interface.launch(enable_queue=True) diff --git a/spaces/Banbri/zcvzcv/src/components/icons/full-screen.tsx b/spaces/Banbri/zcvzcv/src/components/icons/full-screen.tsx deleted file mode 100644 index 34ec93bbab4b8359868737dbab9c6f7f6d594e03..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/components/icons/full-screen.tsx +++ /dev/null @@ -1,16 +0,0 @@ -export function FullScreenIcon() { - return ( - - - - - - - - - - - - - ) -} \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/tools/dlmodels.bat b/spaces/Bart92/RVC_HF/tools/dlmodels.bat deleted file mode 100644 index 5d80f50369b1f3ed37c045d07a9e2ce8954f09d4..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/tools/dlmodels.bat +++ /dev/null @@ -1,348 +0,0 @@ -@echo off && chcp 65001 - -echo working dir is %cd% -echo downloading requirement aria2 check. -echo= -dir /a:d/b | findstr "aria2" > flag.txt -findstr "aria2" flag.txt >nul -if %errorlevel% ==0 ( - echo aria2 checked. - echo= -) else ( - echo failed. please downloading aria2 from webpage! - echo unzip it and put in this directory! - timeout /T 5 - start https://github.com/aria2/aria2/releases/tag/release-1.36.0 - echo= - goto end -) - -echo envfiles checking start. 
-echo= - -for /f %%x in ('findstr /i /c:"aria2" "flag.txt"') do (set aria2=%%x)&goto endSch -:endSch - -set d32=f0D32k.pth -set d40=f0D40k.pth -set d48=f0D48k.pth -set g32=f0G32k.pth -set g40=f0G40k.pth -set g48=f0G48k.pth - -set d40v2=f0D40k.pth -set g40v2=f0G40k.pth - -set dld32=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -set dld40=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -set dld48=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -set dlg32=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -set dlg40=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -set dlg48=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth - -set dld40v2=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -set dlg40v2=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth - -set hp2_all=HP2_all_vocals.pth -set hp3_all=HP3_all_vocals.pth -set hp5_only=HP5_only_main_vocal.pth -set VR_DeEchoAggressive=VR-DeEchoAggressive.pth -set VR_DeEchoDeReverb=VR-DeEchoDeReverb.pth -set VR_DeEchoNormal=VR-DeEchoNormal.pth -set onnx_dereverb=vocals.onnx - -set dlhp2_all=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2_all_vocals.pth -set dlhp3_all=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP3_all_vocals.pth -set dlhp5_only=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5_only_main_vocal.pth -set dlVR_DeEchoAggressive=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoAggressive.pth -set dlVR_DeEchoDeReverb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoDeReverb.pth -set dlVR_DeEchoNormal=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoNormal.pth -set dlonnx_dereverb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/onnx_dereverb_By_FoxJoy/vocals.onnx - -set hb=hubert_base.pt - -set dlhb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt - -echo dir check start. -echo= - -if exist "%~dp0assets\pretrained" ( - echo dir .\assets\pretrained checked. - ) else ( - echo failed. generating dir .\assets\pretrained. - mkdir pretrained - ) -if exist "%~dp0assets\pretrained_v2" ( - echo dir .\assets\pretrained_v2 checked. - ) else ( - echo failed. generating dir .\assets\pretrained_v2. - mkdir pretrained_v2 - ) -if exist "%~dp0assets\uvr5_weights" ( - echo dir .\assets\uvr5_weights checked. - ) else ( - echo failed. generating dir .\assets\uvr5_weights. - mkdir uvr5_weights - ) -if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy" ( - echo dir .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy checked. - ) else ( - echo failed. generating dir .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy. - mkdir uvr5_weights\onnx_dereverb_By_FoxJoy - ) - -echo= -echo dir check finished. - -echo= -echo required files check start. - -echo checking D32k.pth -if exist "%~dp0assets\pretrained\D32k.pth" ( - echo D32k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. 
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d %~dp0assets\pretrained -o D32k.pth - if exist "%~dp0assets\pretrained\D32k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D40k.pth -if exist "%~dp0assets\pretrained\D40k.pth" ( - echo D40k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d %~dp0assets\pretrained -o D40k.pth - if exist "%~dp0assets\pretrained\D40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D40k.pth -if exist "%~dp0assets\pretrained_v2\D40k.pth" ( - echo D40k.pth in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d %~dp0assets\pretrained_v2 -o D40k.pth - if exist "%~dp0assets\pretrained_v2\D40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D48k.pth -if exist "%~dp0assets\pretrained\D48k.pth" ( - echo D48k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d %~dp0assets\pretrained -o D48k.pth - if exist "%~dp0assets\pretrained\D48k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G32k.pth -if exist "%~dp0assets\pretrained\G32k.pth" ( - echo G32k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d %~dp0assets\pretrained -o G32k.pth - if exist "%~dp0assets\pretrained\G32k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G40k.pth -if exist "%~dp0assets\pretrained\G40k.pth" ( - echo G40k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d %~dp0assets\pretrained -o G40k.pth - if exist "%~dp0assets\pretrained\G40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G40k.pth -if exist "%~dp0assets\pretrained_v2\G40k.pth" ( - echo G40k.pth in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d %~dp0assets\pretrained_v2 -o G40k.pth - if exist "%~dp0assets\pretrained_v2\G40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G48k.pth -if exist "%~dp0assets\pretrained\G48k.pth" ( - echo G48k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. 
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d %~dp0assets\pretrained -o G48k.pth - if exist "%~dp0assets\pretrained\G48k.pth" (echo download successful.) else (echo please try again! - echo=) - ) - -echo checking %d32% -if exist "%~dp0assets\pretrained\%d32%" ( - echo %d32% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld32% -d %~dp0assets\pretrained -o %d32% - if exist "%~dp0assets\pretrained\%d32%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d40% -if exist "%~dp0assets\pretrained\%d40%" ( - echo %d40% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld40% -d %~dp0assets\pretrained -o %d40% - if exist "%~dp0assets\pretrained\%d40%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d40v2% -if exist "%~dp0assets\pretrained_v2\%d40v2%" ( - echo %d40v2% in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld40v2% -d %~dp0assets\pretrained_v2 -o %d40v2% - if exist "%~dp0assets\pretrained_v2\%d40v2%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d48% -if exist "%~dp0assets\pretrained\%d48%" ( - echo %d48% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld48% -d %~dp0assets\pretrained -o %d48% - if exist "%~dp0assets\pretrained\%d48%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g32% -if exist "%~dp0assets\pretrained\%g32%" ( - echo %g32% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg32% -d %~dp0assets\pretrained -o %g32% - if exist "%~dp0assets\pretrained\%g32%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g40% -if exist "%~dp0assets\pretrained\%g40%" ( - echo %g40% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg40% -d %~dp0assets\pretrained -o %g40% - if exist "%~dp0assets\pretrained\%g40%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g40v2% -if exist "%~dp0assets\pretrained_v2\%g40v2%" ( - echo %g40v2% in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg40v2% -d %~dp0assets\pretrained_v2 -o %g40v2% - if exist "%~dp0assets\pretrained_v2\%g40v2%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g48% -if exist "%~dp0assets\pretrained\%g48%" ( - echo %g48% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg48% -d %~dp0assets\pretrained -o %g48% - if exist "%~dp0assets\pretrained\%g48%" (echo download successful.) 
else (echo please try again! - echo=) - ) - -echo checking %hp2_all% -if exist "%~dp0assets\uvr5_weights\%hp2_all%" ( - echo %hp2_all% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp2_all% -d %~dp0assets\uvr5_weights -o %hp2_all% - if exist "%~dp0assets\uvr5_weights\%hp2_all%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %hp3_all% -if exist "%~dp0assets\uvr5_weights\%hp3_all%" ( - echo %hp3_all% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp3_all% -d %~dp0assets\uvr5_weights -o %hp3_all% - if exist "%~dp0assets\uvr5_weights\%hp3_all%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %hp5_only% -if exist "%~dp0assets\uvr5_weights\%hp5_only%" ( - echo %hp5_only% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp5_only% -d %~dp0assets\uvr5_weights -o %hp5_only% - if exist "%~dp0assets\uvr5_weights\%hp5_only%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %VR_DeEchoAggressive% -if exist "%~dp0assets\uvr5_weights\%VR_DeEchoAggressive%" ( - echo %VR_DeEchoAggressive% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoAggressive% -d %~dp0assets\uvr5_weights -o %VR_DeEchoAggressive% - if exist "%~dp0assets\uvr5_weights\%VR_DeEchoAggressive%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %VR_DeEchoDeReverb% -if exist "%~dp0assets\uvr5_weights\%VR_DeEchoDeReverb%" ( - echo %VR_DeEchoDeReverb% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoDeReverb% -d %~dp0assets\uvr5_weights -o %VR_DeEchoDeReverb% - if exist "%~dp0assets\uvr5_weights\%VR_DeEchoDeReverb%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %VR_DeEchoNormal% -if exist "%~dp0assets\uvr5_weights\%VR_DeEchoNormal%" ( - echo %VR_DeEchoNormal% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoNormal% -d %~dp0assets\uvr5_weights -o %VR_DeEchoNormal% - if exist "%~dp0assets\uvr5_weights\%VR_DeEchoNormal%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %onnx_dereverb% -if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy\%onnx_dereverb%" ( - echo %onnx_dereverb% in .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlonnx_dereverb% -d %~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy -o %onnx_dereverb% - if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy\%onnx_dereverb%" (echo download successful.) else (echo please try again! - echo=) - ) - -echo checking %hb% -if exist "%~dp0assets\hubert\%hb%" ( - echo %hb% in .\assets\hubert\pretrained checked. - echo= - ) else ( - echo failed. 
starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhb% -d %~dp0assets\hubert\ -o %hb% - if exist "%~dp0assets\hubert\%hb%" (echo download successful.) else (echo please try again! - echo=) - ) - -echo required files check finished. -echo envfiles check complete. -pause -:end -del flag.txt diff --git a/spaces/Benson/text-generation/Examples/Agar.io Apk Mod Money.md b/spaces/Benson/text-generation/Examples/Agar.io Apk Mod Money.md deleted file mode 100644 index 41a05d7d8ebcd2476c9b662c36fd780ee9635785..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Agar.io Apk Mod Money.md +++ /dev/null @@ -1,74 +0,0 @@ - -

Agar.io Apk Mod Money: How to Download and Play the Popular Online Game

Have you ever wanted to play a simple but addictive online game where you can compete with millions of players from around the world? If so, you may have heard of Agar.io, a game that has been downloaded more than 100 million times from the Google Play Store. But what if you want unlimited money and every skin and feature in the game unlocked? That is where Agar.io Apk Mod Money comes in. In this article, we will tell you what Agar.io is, what Agar.io Apk Mod Money is, how to download and install it, and how to play it safely and effectively.

What is Agar.io?

Agar.io is an online multiplayer game released in 2015 by Miniclip. The game is inspired by agar, a substance used to culture bacteria in Petri dishes. In the game, you control a cell that moves around and eats other cells to grow. The game has two modes: FFA (Free For All) and Teams. In FFA mode, you can play alone or with friends and try to become the biggest cell on the map. In Teams mode, you join one of three teams (red, blue, or green) and cooperate with your teammates to dominate the map.

agar.io apk mod money

Download Zip ✔✔✔ https://bltlly.com/2v6LcR

The Gameplay of Agar.io

The gameplay of Agar.io is simple but challenging. You start as a small cell that you move with your mouse or finger. You can eat smaller cells, or the pellets scattered around the map, to grow bigger; at the same time, you have to avoid larger cells that can eat you. You can also split your cell in two by pressing the spacebar or tapping the screen, which can help you escape predators or catch prey, although splitting also makes you more vulnerable to being eaten. Finally, you can eject some mass from your cell by pressing the W key or tapping the eject button, which lets you feed your teammates or bait your enemies.

The Features of Agar.io

Beyond the basic gameplay, Agar.io offers several features:

- You can customize your cell with different skins, colors, and names.
- You can chat with other players using emojis and text messages.
- You can use various power-ups and boosters to improve your game.
- You can join or create private rooms to play with your friends.
- You can take part in daily missions and events to earn rewards.
- You can climb the global leaderboard and compete with other players.

What is Agar.io Apk Mod Money?

Agar.io Apk Mod Money is a modified version of the original Agar.io game that gives you unlimited money and unlocks all the skins and features in the game. With this mod you can play Agar.io without limits or restrictions: buy any power-up or booster you want, customize your cell with any skin or color you like, and access all the private rooms and events in the game.

The Benefits of Agar.io Apk Mod Money

Some of the benefits of using Agar.io Apk Mod Money are:

- You save time and money by not having to watch ads or make in-app purchases.
- You can have more fun playing with unlimited resources and options.
- You gain an edge over other players by using the best power-ups and boosters in the game.
- You can experiment with different strategies and tactics by trying out different skins and features.

The Risks of Agar.io Apk Mod Money

However, using Agar.io Apk Mod Money also comes with risks you should be aware of:

- You may be banned from the game if the developers detect that you are running a modified version.
- You may pick up viruses or malware if you download the mod from an untrustworthy source.
- You may lose your progress or data if the mod is not compatible with the latest version of the game.

How to Download and Install Agar.io Apk Mod Money

If you want to try Agar.io Apk Mod Money, you need to download and install it on your device. Here is how:

The Steps to Download and Install Agar.io Apk Mod Money

1. Go to a reliable website that offers Agar.io Apk Mod Money for free. You can search on Google or use one of these links: .
2. Download the mod file to your device. Make sure you have enough storage space and a stable internet connection.
3. Enable installation of apps from unknown sources on your device by going to Settings > Security > Unknown Sources and turning it on.
4. Locate the mod file on your device and tap it to install. Follow the on-screen instructions and wait for the installation to finish.
5. Launch the game and enjoy playing with unlimited money and features.

Tips for Playing Agar.io Apk Mod Money Safely and Effectively

To play Agar.io Apk Mod Money without problems, follow these tips:

- Do not use the mod in public or ranked rooms, where other players or moderators could report or ban you.
- Do not abuse the mod with too many power-ups or boosters, which the anti-cheat system can detect and which ruins the balance of the game.
- Do not download the mod from suspicious or unknown websites, which may carry viruses or malware that can damage your device or steal your data.
- Do not update the game from the Play Store, as you could lose the mod or hit compatibility problems. Instead, wait for the mod developer to release a new version that matches the latest version of the game.
- Do not forget to have fun: that is the main point of playing Agar.io.

Conclusion

Agar.io Apk Mod Money lets you enjoy Agar.io with unlimited money and every skin and feature unlocked, but it also carries real risks: bans, malware, and lost progress. Download it only from trusted sources, use it sparingly, and keep the game fun for everyone.

FAQs

Here are some frequently asked questions about Agar.io Apk Mod Money:

1. What is the difference between Agar.io Apk Mod Money and an Agar.io hack?

Agar.io Apk Mod Money is a modified version of the original game that gives you unlimited money and unlocks all skins and features. An Agar.io hack is a tool or piece of software that manipulates or cheats the game itself, for example by changing your size, speed, mass, or position.

2. Is Agar.io Apk Mod Money safe to use?

It is reasonably safe if you download it from a trusted source and take precautions. However, there is always a risk of being banned or infected when using any modified or hacked app, so use it at your own discretion.

3. Can I play Agar.io Apk Mod Money online with other players?

Yes, but avoid public or ranked rooms, where you could be reported or banned by other players or moderators. You can play in private rooms with friends or other mod users, taking care not to abuse the mod or spoil the game for others.

4. How can I get more skins and features in Agar.io Apk Mod Money?

Use the money the mod gives you: any skin or feature can be bought from the shop or the settings menu. You can also unlock some skins and features by completing missions and events in the game.

5. How can I update Agar.io Apk Mod Money?

Do not update the game from the Play Store, or you may lose the mod. Instead, wait for the mod developer to release a new version that matches the latest version of the game.

64aa2da5cf
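The safety tips and FAQ above repeatedly warn against mod files from untrusted sources. As one concrete precaution, here is a minimal Python sketch (not part of the original article) that checks a downloaded file against a SHA-256 checksum published by the distributor; the file name and expected digest below are placeholder values you would replace with your own.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a large APK never has to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: substitute the real file name and the checksum the site publishes.
apk = Path("agario-mod.apk")
expected = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of(apk)
if actual == expected:
    print("Checksum matches: the file was not corrupted or swapped in transit.")
else:
    print(f"Checksum mismatch ({actual}): do not install this file.")
```

Note that a matching digest only proves the download arrived intact; it says nothing about whether the publisher's file itself is clean, so the advice above about sticking to trusted sources still applies.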
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Belkede Rust.md b/spaces/Benson/text-generation/Examples/Belkede Rust.md deleted file mode 100644 index 26b2755b9114a2546e2eb91baf0c37800796e900..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Belkede Rust.md +++ /dev/null @@ -1,207 +0,0 @@ - -

Roya Belkede Yukle: How to Download and Enjoy Roya's Popular Song

If you are a fan of Azerbaijani pop music, you have probably heard of Roya and her song Belkede. But do you know how to download and enjoy this song on your device? In this article, we will show you how to do it in a few easy steps. We will also tell you more about Roya and Belkede, and why they are so popular with music lovers. Let's get started!

Introduction

Who is Roya and what is Belkede?

Roya is a famous Azerbaijani singer, actress, and model who has been active in the music industry since 1999. She is known for her powerful voice, her original style, her successful theater performances, and her beauty. She is often called the Rihanna of Azerbaijan because of her resemblance and popularity. She has released several albums and singles, both in Azerbaijan and in Turkey, where she currently lives and works.

Belkede is one of Roya's most popular songs, released in 2014. The title means "Maybe" in Azerbaijani, and the song is about longing for a lost love. The lyrics were written by Leyli Erol Israfilova and the music was composed by Perviz Mahmudov. The song has a catchy melody, a romantic mood, and a beautiful vocal performance by Roya. It has received millions of views on YouTube and other platforms, and has been praised by critics and fans alike.

belkede rust

Download ✯✯✯ https://bltlly.com/2v6JIT

Why is Belkede so popular, and how can you download it?

Belkede is popular because it appeals to a wide range of listeners who can relate to its theme of love and longing, and it shows off Roya's talent and charisma as a singer and performer. The song has a universal appeal that crosses language barriers and cultural differences; it can touch your heart and make you feel something.

How to Download Belkede from Different Platforms

YouTube

Steps to download Belkede from YouTube

YouTube is one of the most popular platforms for watching the official Belkede video in full visual and audio quality. To download the song from YouTube, however, you will need a third-party tool or app that converts YouTube videos into audio files you can save on your device. Here are the steps using a web-based tool called Y2mate:

1. Open your browser and go to the YouTube website or app.
2. Search for Belkede by Roya and click the video you want to download.
3. Copy the video URL from the address bar or the share button.
4. Open a new tab and go to the Y2mate website.
5. Paste the video URL into the search box and click the start button.
6. Select the format and quality you want, such as MP3, MP4, or M4A.
7. Click the download button and wait for the file to be converted and saved to your device.

Pros and cons of downloading from YouTube

Downloading Belkede from YouTube has pros and cons to weigh before choosing this option:

| Pros | Cons |
| --- | --- |
| You can watch the official Belkede video in full visual and audio quality. | You depend on a third-party converter tool or app, which may not be safe or reliable. |
| You can pick from different formats and qualities to suit your device and preferences. | You may lose some of the song's original quality and sound in the video-to-audio conversion. |
| You can reach a huge variety of other songs and videos by Roya and other artists on YouTube. | |

Musixmatch

Steps to download Belkede from Musixmatch

Musixmatch is another popular platform where you can listen to Belkede by Roya along with its lyrics and translations. To download the song from Musixmatch you need a premium subscription, which allows offline downloads. Here are the steps using the Musixmatch app:

1. Open your browser and go to the Musixmatch website or app.
2. Sign up for a premium subscription or log in with your existing account.
3. Search for Belkede by Roya and tap the song you want to download.
4. Tap the three-dot icon in the top right corner of the screen and select Download Offline.
5. Wait for the song to download and save to your device.

Pros and cons of downloading from Musixmatch

Downloading Belkede from Musixmatch also has pros and cons:

| Pros | Cons |
| --- | --- |
| You can listen to Belkede with its lyrics and translations in different languages. | You need a paid premium subscription, which may not be available in your region or currency. |
| You can download songs and listen offline, without an internet connection or ads. | You may not be able to download songs in high quality or in your preferred format. |
| You get access to Musixmatch's large library of songs and lyrics by Roya and other artists. | You may not be able to share or transfer downloaded songs to other devices or platforms. |

Other platforms

Some examples of other platforms that offer Belkede downloads

Beyond YouTube and Musixmatch, several other platforms offer Belkede. When comparing them, consider:

- The availability and accessibility of the platform in your region or country.
- The quality and quantity of songs and artists you can find on the platform.
- The cost and payment methods of the platform's subscription or service.
- How easy and convenient it is to download songs for offline or online listening.
- The compatibility and security of the platform with your device and system.
- The features and functions that improve your listening and downloading experience.

Some examples of other platforms that offer Belkede downloads are:

| Platform | Features | Price |
| --- | --- | --- |
| Spotify | A leading music streaming service with millions of songs and podcasts. | Free with ads, or $9.99/month for ad-free premium with offline downloads. |
| Apple Music | A music streaming service that integrates with iTunes and Apple devices. | $9.99/month for an individual plan or $14.99/month for a family plan, with offline downloads. |
| Amazon Music | A music streaming service with access to Amazon's catalog of songs and albums. | Free with a Prime membership, or $9.99/month for ad-free unlimited with offline downloads. |
| Deezer | A music streaming service with personalized recommendations and playlists. | Free with ads, or $9.99/month for ad-free premium with offline downloads. |
| Fizy | A music streaming service with Turkish and international songs and videos. | Free with ads, or 9.99 TL/month for ad-free premium with offline downloads. |
| Muud | A music streaming service with Turkish and international songs and podcasts. | |

Tips for choosing the best platform for your needs

To pick the platform that suits you best:

- Research the platforms that offer Belkede and compare their features, prices, reviews, and ratings.
- Try the free versions of the platforms that interest you and see how they work for you.
- Read each platform's terms and conditions and make sure you agree with them.
- Check the availability and quality of Belkede on each platform and make sure it meets your expectations.
- Choose the platform that best fits your budget, preferences, needs, and device.

How to Enjoy Belkede After Downloading It

How to listen to Belkede offline

Benefits of listening to Belkede offline

Listening to Belkede offline has many benefits:

- You can listen anytime, anywhere, with no internet connection or data usage.
- You avoid the ad interruptions and buffering that can spoil online listening.
- You save battery and storage by not streaming or re-downloading songs over and over.
- You get more control over your playlist and playback, independent of online platforms.
- You hear the song in high quality and its original sound, without online compression or conversion.

Tips to improve your offline listening experience

- Use a good-quality device and headphones or speakers to listen to Belkede offline.
- Adjust the volume and sound settings to your taste and comfort level.
- Build a playlist of your favorite songs and add Belkede to it.
- Listen carefully and attentively to discover new facets and meanings in the song.

How to sing along with Belkede

Benefits of singing along with Belkede

Singing along with Belkede has many benefits:

- You can express your emotions through the song and connect with its message.
- You can build vocal skill and confidence by practicing and performing it.
- You can learn a new language and culture by singing in Azerbaijani and understanding the lyrics and their translations.
- You can simply have fun singing it with passion and enthusiasm.
- You can bond with others who love the song and share your musical tastes and interests.

Tips for learning the lyrics and pronunciation of Belkede

- Listen to the song repeatedly and try to memorize its words and melody.
- Read the lyrics and translations, online or offline, and work out their meaning and context.
- Watch the video and observe how Roya sings and pronounces the words.
- Use a karaoke app or website that carries the lyrics and music of Belkede, such as Musixmatch, Smule, or SingSnap.
- Sing the song out loud or in your head, with or without music, alone or with others, until you master it.

How to share Belkede with others

Benefits of sharing Belkede with others

Sharing Belkede with others has many benefits:

- You spread appreciation for Roya and her music to more people.
- You support Roya's career and success by growing her fan base and popularity.
- You make new friends and connections who share your passion for Belkede and Azerbaijani pop music.
- You express yourself and your personality by sharing a favorite song.

Tips for sharing Belkede on social media and other platforms

- Follow Roya's official accounts on Instagram, Facebook, Twitter, and other networks, and like, comment on, share, or repost her posts about Belkede and her other songs.
- Create your own posts about Belkede (photos, videos, stories, reels, tweets) and tag Roya or use hashtags related to her or the song.
- Send Belkede as a message or a gift to friends or family on WhatsApp, Telegram, Messenger, or similar apps, and tell them why you like the song or why you think they will like it too.
- Join online communities or groups dedicated to Roya or Azerbaijani pop music on Reddit, Quora, Discord, and the like, and take part in discussions or activities around Belkede and her other songs.
- Recommend Belkede to people looking for new songs or artists on YouTube, Musixmatch, Spotify, Apple Music, Amazon Music, Deezer, Fizy, Muud, and other platforms, and explain what makes the song special.

Conclusion

Belkede is a song worth discovering: a catchy, romantic pop track carried by one of Azerbaijan's best-known voices. Whether you download it from YouTube, Musixmatch, or another platform, and whether you listen offline, sing along, or share it with others, we hope this guide helps you enjoy it to the fullest.

FAQs

Here are some frequently asked questions about Belkede and Roya:

1. Where can I find the lyrics and translations of Belkede?

You can find them on Musixmatch, LyricsTranslate, Genius, or other websites that publish song lyrics and translations.

2. What does the word Belkede mean?

Belkede means "Maybe" in Azerbaijani, and it is the title of Roya's song. The word repeats throughout the chorus, expressing the singer's uncertainty and hope about her lost love.

3. How can I watch Roya's live performances of Belkede?

You can watch them on YouTube or other platforms that host concert and live-show videos. You can also follow Roya's social media accounts for updates on her upcoming events and tours.

4. What other songs by Roya can I listen to?

Some other songs by Roya are Ayxan, Gel Danis, Seni Seviyorum, Yandim, and Ay Ureyim. You can find them on YouTube, Musixmatch, Spotify, Apple Music, Amazon Music, Deezer, Fizy, Muud, and other streaming and download platforms.

5. How can I contact Roya or send her feedback?

Through her official website or her social media accounts on Instagram, Facebook, Twitter, and elsewhere. You can also leave comments on her posts or videos, or send her messages or emails.

64aa2da5cf
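The YouTube walkthrough above goes through the Y2mate website. For readers who prefer a scriptable route, here is a minimal Python sketch (not part of the original article) of the same video-to-MP3 extraction using the open-source yt-dlp library instead of Y2mate; the URL is a placeholder for the link copied in step 3, ffmpeg must be installed for the conversion step, and you should only download audio you have the right to keep.

```python
# Requires: pip install yt-dlp, plus ffmpeg on the PATH for the MP3 conversion.
import yt_dlp

options = {
    "format": "bestaudio/best",      # pick the best available audio stream
    "outtmpl": "%(title)s.%(ext)s",  # name the output file after the video title
    "postprocessors": [
        {
            "key": "FFmpegExtractAudio",  # convert the download to audio only
            "preferredcodec": "mp3",
            "preferredquality": "192",    # kbps, comparable to the MP3 option in step 6
        }
    ],
}

# Placeholder URL: paste the link copied from YouTube's address bar or share button.
url = "https://www.youtube.com/watch?v=VIDEO_ID"

with yt_dlp.YoutubeDL(options) as downloader:
    downloader.download([url])
```

Running the conversion on your own machine also sidesteps the table's warning that third-party converter sites may not be safe or reliable.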
      \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/backbone.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/backbone.py deleted file mode 100644 index 66dee4a6565e6c45ed17d0880fcc37eac8f75c3a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/backbone.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from abc import ABCMeta, abstractmethod -import torch.nn as nn - -from detectron2.layers import ShapeSpec - -__all__ = ["Backbone"] - - -class Backbone(nn.Module, metaclass=ABCMeta): - """ - Abstract base class for network backbones. - """ - - def __init__(self): - """ - The `__init__` method of any subclass can specify its own set of arguments. - """ - super().__init__() - - @abstractmethod - def forward(self): - """ - Subclasses must override this method, but adhere to the same return type. - - Returns: - dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor - """ - pass - - @property - def size_divisibility(self): - """ - Some backbones require the input height and width to be divisible by a - specific integer. This is typically true for encoder / decoder type networks - with lateral connection (e.g., FPN) for which feature maps need to match - dimension in the "bottom up" and "top down" paths. Set to 0 if no specific - input size divisibility is required. - """ - return 0 - - def output_shape(self): - """ - Returns: - dict[str->ShapeSpec] - """ - # this is a backward-compatible default - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } diff --git a/spaces/CVPR/GFPGAN-example/experiments/pretrained_models/README.md b/spaces/CVPR/GFPGAN-example/experiments/pretrained_models/README.md deleted file mode 100644 index 3401a5ca9b393e0033f58c5af8905961565826d9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/GFPGAN-example/experiments/pretrained_models/README.md +++ /dev/null @@ -1,7 +0,0 @@ -# Pre-trained Models and Other Data - -Download pre-trained models and other data. Put them in this folder. - -1. [Pretrained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth) -1. [Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/FFHQ_eye_mouth_landmarks_512.pth) -1. 
[A simple ArcFace model: arcface_resnet18.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/arcface_resnet18.pth) diff --git a/spaces/CVPR/LIVE/__init__.py b/spaces/CVPR/LIVE/__init__.py deleted file mode 100644 index b871b92efc87bfec551a82ef42a7963f168b2b1b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -__author__ = "Xu Ma" -__email__ = "ma.xu1@northeastern.edu" diff --git a/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/Makefile b/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/Makefile deleted file mode 100644 index 7165d93320c0d45af4e6aadc7c7f96af22c89d97..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/Makefile +++ /dev/null @@ -1,125 +0,0 @@ -#/****************************************************************************** -# * Copyright (c) 2011, Duane Merrill. All rights reserved. -# * Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved. -# * -# * Redistribution and use in source and binary forms, with or without -# * modification, are permitted provided that the following conditions are met: -# * * Redistributions of source code must retain the above copyright -# * notice, this list of conditions and the following disclaimer. -# * * Redistributions in binary form must reproduce the above copyright -# * notice, this list of conditions and the following disclaimer in the -# * documentation and/or other materials provided with the distribution. -# * * Neither the name of the NVIDIA CORPORATION nor the -# * names of its contributors may be used to endorse or promote products -# * derived from this software without specific prior written permission. -# * -# * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -# * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -# * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY -# * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -# * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -# * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-# * -#******************************************************************************/ - -#------------------------------------------------------------------------------- -# -# Makefile usage -# -# make [sm=] [cdp=<0|1>] [force32=<0|1>] [abi=<0|1>] [open64=<0|1>] [verbose=<0|1>] [keep=<0|1>] [quicktest=<0|1>] -# -#------------------------------------------------------------------------------- - -include ../common.mk - -#------------------------------------------------------------------------------- -# Commandline Options -#------------------------------------------------------------------------------- - -# [mkl=<0|1>] compile against Intel MKL -ifeq ($(mkl), 1) - DEFINES += -DCUB_MKL - -ifeq (WIN_NT, $(findstring WIN_NT, $(OSUPPER))) - LIBS += mkl_intel_lp64.lib mkl_intel_thread.lib mkl_core.lib libiomp5md.lib - NVCCFLAGS += -Xcompiler /openmp -else - LIBS += -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm - NVCCFLAGS += -Xcompiler -fopenmp - -endif - -endif - - -#------------------------------------------------------------------------------- -# Compiler and compilation platform -#------------------------------------------------------------------------------- - -# Includes -INC += -I$(CUB_DIR) -I$(CUB_DIR)test - -# detect OS -OSUPPER = $(shell uname -s 2>/dev/null | tr [:lower:] [:upper:]) - -#------------------------------------------------------------------------------- -# Dependency Lists -#------------------------------------------------------------------------------- - -exp_rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d)) - -EXP_DEPS = $(call rwildcard, ./,*.cuh) \ - $(call rwildcard, ./,*.h) - -DEPS = $(CUB_DEPS) \ - $(EXP_DEPS) \ - $(CUB_DIR)test/Makefile \ - $(CUB_DIR)test/test_util.h \ - $(CUB_DIR)test/mersenne.h \ - - - -#------------------------------------------------------------------------------- -# make default -#------------------------------------------------------------------------------- - -default: - - -#------------------------------------------------------------------------------- -# make clean -#------------------------------------------------------------------------------- - -clean : - rm -f bin/*$(CPU_ARCH_SUFFIX)* - rm -f *.i* *.cubin *.cu.c *.cudafe* *.fatbin.c *.ptx *.hash *.cu.cpp *.o - - - -#------------------------------------------------------------------------------- -# make histogram_compare -#------------------------------------------------------------------------------- - -histogram_compare: bin/histogram_compare_$(BIN_SUFFIX) - -bin/histogram_compare_$(BIN_SUFFIX) : histogram_compare.cu $(DEPS) - mkdir -p bin - $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/histogram_compare_$(BIN_SUFFIX) histogram_compare.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3 - - - -#------------------------------------------------------------------------------- -# make spmv_compare -#------------------------------------------------------------------------------- - -spmv_compare: bin/spmv_compare_$(BIN_SUFFIX) - -bin/spmv_compare_$(BIN_SUFFIX) : spmv_compare.cu $(DEPS) - mkdir -p bin - $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/spmv_compare_$(BIN_SUFFIX) spmv_compare.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -lcusparse $(MKL_LIBS) -O3 - - diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/wrappers.py b/spaces/CVPR/regionclip-demo/detectron2/layers/wrappers.py deleted file mode 100644 index 5bb4e7c1a1334c5501a6c492ddfa836dadf0beab..0000000000000000000000000000000000000000 --- 
a/spaces/CVPR/regionclip-demo/detectron2/layers/wrappers.py +++ /dev/null @@ -1,110 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-"""
-Wrappers around some nn functions, mainly to support empty tensors.
-
-Ideally, add support directly in PyTorch for empty tensors in those functions.
-
-These can be removed once https://github.com/pytorch/pytorch/issues/12013
-is implemented
-"""
-
-from typing import List
-import torch
-from torch.nn import functional as F
-
-
-def cat(tensors: List[torch.Tensor], dim: int = 0):
-    """
-    Efficient version of torch.cat that avoids a copy if there is only a single element in a list
-    """
-    assert isinstance(tensors, (list, tuple))
-    if len(tensors) == 1:
-        return tensors[0]
-    return torch.cat(tensors, dim)
-
-
-def cross_entropy(input, target, *, reduction="mean", **kwargs):
-    """
-    Same as `torch.nn.functional.cross_entropy`, but returns 0 (instead of nan)
-    for empty inputs.
-    """
-    if target.numel() == 0 and reduction == "mean":
-        return input.sum() * 0.0  # connect the gradient
-    # pass `reduction` through; otherwise non-default reductions would be silently dropped
-    return F.cross_entropy(input, target, reduction=reduction, **kwargs)
-
-
-class _NewEmptyTensorOp(torch.autograd.Function):
-    @staticmethod
-    def forward(ctx, x, new_shape):
-        ctx.shape = x.shape
-        return x.new_empty(new_shape)
-
-    @staticmethod
-    def backward(ctx, grad):
-        shape = ctx.shape
-        return _NewEmptyTensorOp.apply(grad, shape), None
-
-
-class Conv2d(torch.nn.Conv2d):
-    """
-    A wrapper around :class:`torch.nn.Conv2d` to support empty inputs and more features.
-    """
-
-    def __init__(self, *args, **kwargs):
-        """
-        Extra keyword arguments supported in addition to those in `torch.nn.Conv2d`:
-
-        Args:
-            norm (nn.Module, optional): a normalization layer
-            activation (callable(Tensor) -> Tensor): a callable activation function
-
-        It assumes that the norm layer is used before the activation.
-        """
-        norm = kwargs.pop("norm", None)
-        activation = kwargs.pop("activation", None)
-        super().__init__(*args, **kwargs)
-
-        self.norm = norm
-        self.activation = activation
-
-    def forward(self, x):
-        # torchscript does not support SyncBatchNorm yet
-        # https://github.com/pytorch/pytorch/issues/40507
-        # and we skip this code in torchscript since:
-        # 1. currently we only support torchscript in evaluation mode
-        # 2. features needed for exporting a module to torchscript were added in PyTorch 1.6
-        #    or later; `Conv2d` in those PyTorch versions already supports empty inputs.
-        if not torch.jit.is_scripting():
-            if x.numel() == 0 and self.training:
-                # https://github.com/pytorch/pytorch/issues/12013
-                assert not isinstance(
-                    self.norm, torch.nn.SyncBatchNorm
-                ), "SyncBatchNorm does not support empty inputs!"
-
-        x = F.conv2d(
-            x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups
-        )
-        if self.norm is not None:
-            x = self.norm(x)
-        if self.activation is not None:
-            x = self.activation(x)
-        return x
-
-
-ConvTranspose2d = torch.nn.ConvTranspose2d
-BatchNorm2d = torch.nn.BatchNorm2d
-interpolate = F.interpolate
-Linear = torch.nn.Linear
-
-
-def nonzero_tuple(x):
-    """
-    An 'as_tuple=True' version of torch.nonzero to support torchscript.
- because of https://github.com/pytorch/pytorch/issues/38718 - """ - if torch.jit.is_scripting(): - if x.dim() == 0: - return x.unsqueeze(0).nonzero().unbind(1) - return x.nonzero().unbind(1) - else: - return x.nonzero(as_tuple=True) diff --git a/spaces/CofAI/chat.b4/client/css/field.css b/spaces/CofAI/chat.b4/client/css/field.css deleted file mode 100644 index 914425a75d9e62e6428bdb8f5de2c66c91f10d33..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/client/css/field.css +++ /dev/null @@ -1,11 +0,0 @@ -.field { - display: flex; - align-items: center; - padding: 4px; -} - -@media screen and (max-width: 990px) { - .field { - flex-wrap: nowrap; - } -} diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/detector/detectors.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/detector/detectors.py deleted file mode 100644 index af2100cac15830cd60be5911aa15d0d7c9309a17..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/detector/detectors.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from .generalized_rcnn import GeneralizedRCNN - - -_DETECTION_META_ARCHITECTURES = {"GeneralizedRCNN": GeneralizedRCNN} - - -def build_detection_model(cfg): - meta_arch = _DETECTION_META_ARCHITECTURES[cfg.MODEL.META_ARCHITECTURE] - return meta_arch(cfg) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/otlLib/optimize/__main__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/otlLib/optimize/__main__.py deleted file mode 100644 index b0ae9081ca8dac338bcf085c71adad87805e3bad..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/otlLib/optimize/__main__.py +++ /dev/null @@ -1,6 +0,0 @@ -import sys -from fontTools.otlLib.optimize import main - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otBase.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otBase.py deleted file mode 100644 index 9c80400e9420577f0d9d6f747e15b83e49f68e49..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otBase.py +++ /dev/null @@ -1,1458 +0,0 @@ -from fontTools.config import OPTIONS -from fontTools.misc.textTools import Tag, bytesjoin -from .DefaultTable import DefaultTable -from enum import IntEnum -import sys -import array -import struct -import logging -from functools import lru_cache -from typing import Iterator, NamedTuple, Optional, Tuple - -log = logging.getLogger(__name__) - -have_uharfbuzz = False -try: - import uharfbuzz as hb - - # repack method added in uharfbuzz >= 0.23; if uharfbuzz *can* be - # imported but repack method is missing, behave as if uharfbuzz - # is not available (fallback to the slower Python implementation) - have_uharfbuzz = callable(getattr(hb, "repack", None)) -except ImportError: - pass - -USE_HARFBUZZ_REPACKER = OPTIONS[f"{__name__}:USE_HARFBUZZ_REPACKER"] - - -class OverflowErrorRecord(object): - def __init__(self, overflowTuple): - self.tableType = overflowTuple[0] - self.LookupListIndex = overflowTuple[1] - self.SubTableIndex = overflowTuple[2] - self.itemName = overflowTuple[3] - self.itemIndex = overflowTuple[4] - - def __repr__(self): - return str( - ( - self.tableType, - "LookupIndex:", - self.LookupListIndex, - 
"SubTableIndex:", - self.SubTableIndex, - "ItemName:", - self.itemName, - "ItemIndex:", - self.itemIndex, - ) - ) - - -class OTLOffsetOverflowError(Exception): - def __init__(self, overflowErrorRecord): - self.value = overflowErrorRecord - - def __str__(self): - return repr(self.value) - - -class RepackerState(IntEnum): - # Repacking control flow is implemnted using a state machine. The state machine table: - # - # State | Packing Success | Packing Failed | Exception Raised | - # ------------+-----------------+----------------+------------------+ - # PURE_FT | Return result | PURE_FT | Return failure | - # HB_FT | Return result | HB_FT | FT_FALLBACK | - # FT_FALLBACK | HB_FT | FT_FALLBACK | Return failure | - - # Pack only with fontTools, don't allow sharing between extensions. - PURE_FT = 1 - - # Attempt to pack with harfbuzz (allowing sharing between extensions) - # use fontTools to attempt overflow resolution. - HB_FT = 2 - - # Fallback if HB/FT packing gets stuck. Pack only with fontTools, don't allow sharing between - # extensions. - FT_FALLBACK = 3 - - -class BaseTTXConverter(DefaultTable): - - """Generic base class for TTX table converters. It functions as an - adapter between the TTX (ttLib actually) table model and the model - we use for OpenType tables, which is necessarily subtly different. - """ - - def decompile(self, data, font): - """Create an object from the binary data. Called automatically on access.""" - from . import otTables - - reader = OTTableReader(data, tableTag=self.tableTag) - tableClass = getattr(otTables, self.tableTag) - self.table = tableClass() - self.table.decompile(reader, font) - - def compile(self, font): - """Compiles the table into binary. Called automatically on save.""" - - # General outline: - # Create a top-level OTTableWriter for the GPOS/GSUB table. - # Call the compile method for the the table - # for each 'converter' record in the table converter list - # call converter's write method for each item in the value. - # - For simple items, the write method adds a string to the - # writer's self.items list. - # - For Struct/Table/Subtable items, it add first adds new writer to the - # to the writer's self.items, then calls the item's compile method. - # This creates a tree of writers, rooted at the GUSB/GPOS writer, with - # each writer representing a table, and the writer.items list containing - # the child data strings and writers. - # call the getAllData method - # call _doneWriting, which removes duplicates - # call _gatherTables. This traverses the tables, adding unique occurences to a flat list of tables - # Traverse the flat list of tables, calling getDataLength on each to update their position - # Traverse the flat list of tables again, calling getData each get the data in the table, now that - # pos's and offset are known. - - # If a lookup subtable overflows an offset, we have to start all over. - overflowRecord = None - # this is 3-state option: default (None) means automatically use hb.repack or - # silently fall back if it fails; True, use it and raise error if not possible - # or it errors out; False, don't use it, even if you can. 
- use_hb_repack = font.cfg[USE_HARFBUZZ_REPACKER] - if self.tableTag in ("GSUB", "GPOS"): - if use_hb_repack is False: - log.debug( - "hb.repack disabled, compiling '%s' with pure-python serializer", - self.tableTag, - ) - elif not have_uharfbuzz: - if use_hb_repack is True: - raise ImportError("No module named 'uharfbuzz'") - else: - assert use_hb_repack is None - log.debug( - "uharfbuzz not found, compiling '%s' with pure-python serializer", - self.tableTag, - ) - - if ( - use_hb_repack in (None, True) - and have_uharfbuzz - and self.tableTag in ("GSUB", "GPOS") - ): - state = RepackerState.HB_FT - else: - state = RepackerState.PURE_FT - - hb_first_error_logged = False - lastOverflowRecord = None - while True: - try: - writer = OTTableWriter(tableTag=self.tableTag) - self.table.compile(writer, font) - if state == RepackerState.HB_FT: - return self.tryPackingHarfbuzz(writer, hb_first_error_logged) - elif state == RepackerState.PURE_FT: - return self.tryPackingFontTools(writer) - elif state == RepackerState.FT_FALLBACK: - # Run packing with FontTools only, but don't return the result as it will - # not be optimally packed. Once a successful packing has been found, state is - # changed back to harfbuzz packing to produce the final, optimal, packing. - self.tryPackingFontTools(writer) - log.debug( - "Re-enabling sharing between extensions and switching back to " - "harfbuzz+fontTools packing." - ) - state = RepackerState.HB_FT - - except OTLOffsetOverflowError as e: - hb_first_error_logged = True - ok = self.tryResolveOverflow(font, e, lastOverflowRecord) - lastOverflowRecord = e.value - - if ok: - continue - - if state is RepackerState.HB_FT: - log.debug( - "Harfbuzz packing out of resolutions, disabling sharing between extensions and " - "switching to fontTools only packing." - ) - state = RepackerState.FT_FALLBACK - else: - raise - - def tryPackingHarfbuzz(self, writer, hb_first_error_logged): - try: - log.debug("serializing '%s' with hb.repack", self.tableTag) - return writer.getAllDataUsingHarfbuzz(self.tableTag) - except (ValueError, MemoryError, hb.RepackerError) as e: - # Only log hb repacker errors the first time they occur in - # the offset-overflow resolution loop, they are just noisy. - # Maybe we can revisit this if/when uharfbuzz actually gives - # us more info as to why hb.repack failed... - if not hb_first_error_logged: - error_msg = f"{type(e).__name__}" - if str(e) != "": - error_msg += f": {e}" - log.warning( - "hb.repack failed to serialize '%s', attempting fonttools resolutions " - "; the error message was: %s", - self.tableTag, - error_msg, - ) - hb_first_error_logged = True - return writer.getAllData(remove_duplicate=False) - - def tryPackingFontTools(self, writer): - return writer.getAllData() - - def tryResolveOverflow(self, font, e, lastOverflowRecord): - ok = 0 - if lastOverflowRecord == e.value: - # Oh well... - return ok - - overflowRecord = e.value - log.info("Attempting to fix OTLOffsetOverflowError %s", e) - - if overflowRecord.itemName is None: - from .otTables import fixLookupOverFlows - - ok = fixLookupOverFlows(font, overflowRecord) - else: - from .otTables import fixSubTableOverFlows - - ok = fixSubTableOverFlows(font, overflowRecord) - - if ok: - return ok - - # Try upgrading lookup to Extension and hope - # that cross-lookup sharing not happening would - # fix overflow... 
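-        # (Extension lookups reference their subtables through 32-bit offsets, so
-        # promoting a lookup to Extension relieves 16-bit offset pressure even when
-        # no sharing is possible.)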
- from .otTables import fixLookupOverFlows - - return fixLookupOverFlows(font, overflowRecord) - - def toXML(self, writer, font): - self.table.toXML2(writer, font) - - def fromXML(self, name, attrs, content, font): - from . import otTables - - if not hasattr(self, "table"): - tableClass = getattr(otTables, self.tableTag) - self.table = tableClass() - self.table.fromXML(name, attrs, content, font) - self.table.populateDefaults() - - def ensureDecompiled(self, recurse=True): - self.table.ensureDecompiled(recurse=recurse) - - -# https://github.com/fonttools/fonttools/pull/2285#issuecomment-834652928 -assert len(struct.pack("i", 0)) == 4 -assert array.array("i").itemsize == 4, "Oops, file a bug against fonttools." - - -class OTTableReader(object): - - """Helper class to retrieve data from an OpenType table.""" - - __slots__ = ("data", "offset", "pos", "localState", "tableTag") - - def __init__(self, data, localState=None, offset=0, tableTag=None): - self.data = data - self.offset = offset - self.pos = offset - self.localState = localState - self.tableTag = tableTag - - def advance(self, count): - self.pos += count - - def seek(self, pos): - self.pos = pos - - def copy(self): - other = self.__class__(self.data, self.localState, self.offset, self.tableTag) - other.pos = self.pos - return other - - def getSubReader(self, offset): - offset = self.offset + offset - return self.__class__(self.data, self.localState, offset, self.tableTag) - - def readValue(self, typecode, staticSize): - pos = self.pos - newpos = pos + staticSize - (value,) = struct.unpack(f">{typecode}", self.data[pos:newpos]) - self.pos = newpos - return value - - def readArray(self, typecode, staticSize, count): - pos = self.pos - newpos = pos + count * staticSize - value = array.array(typecode, self.data[pos:newpos]) - if sys.byteorder != "big": - value.byteswap() - self.pos = newpos - return value.tolist() - - def readInt8(self): - return self.readValue("b", staticSize=1) - - def readInt8Array(self, count): - return self.readArray("b", staticSize=1, count=count) - - def readShort(self): - return self.readValue("h", staticSize=2) - - def readShortArray(self, count): - return self.readArray("h", staticSize=2, count=count) - - def readLong(self): - return self.readValue("i", staticSize=4) - - def readLongArray(self, count): - return self.readArray("i", staticSize=4, count=count) - - def readUInt8(self): - return self.readValue("B", staticSize=1) - - def readUInt8Array(self, count): - return self.readArray("B", staticSize=1, count=count) - - def readUShort(self): - return self.readValue("H", staticSize=2) - - def readUShortArray(self, count): - return self.readArray("H", staticSize=2, count=count) - - def readULong(self): - return self.readValue("I", staticSize=4) - - def readULongArray(self, count): - return self.readArray("I", staticSize=4, count=count) - - def readUInt24(self): - pos = self.pos - newpos = pos + 3 - (value,) = struct.unpack(">l", b"\0" + self.data[pos:newpos]) - self.pos = newpos - return value - - def readUInt24Array(self, count): - return [self.readUInt24() for _ in range(count)] - - def readTag(self): - pos = self.pos - newpos = pos + 4 - value = Tag(self.data[pos:newpos]) - assert len(value) == 4, value - self.pos = newpos - return value - - def readData(self, count): - pos = self.pos - newpos = pos + count - value = self.data[pos:newpos] - self.pos = newpos - return value - - def __setitem__(self, name, value): - state = self.localState.copy() if self.localState else dict() - state[name] = value - 
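-        # (copy-on-write: the possibly shared localState dict is never mutated in
-        # place, so other readers holding the old state are unaffected)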
self.localState = state - - def __getitem__(self, name): - return self.localState and self.localState[name] - - def __contains__(self, name): - return self.localState and name in self.localState - - -class OTTableWriter(object): - - """Helper class to gather and assemble data for OpenType tables.""" - - def __init__(self, localState=None, tableTag=None, offsetSize=2): - self.items = [] - self.pos = None - self.localState = localState - self.tableTag = tableTag - self.offsetSize = offsetSize - self.parent = None - - # DEPRECATED: 'longOffset' is kept as a property for backward compat with old code. - # You should use 'offsetSize' instead (2, 3 or 4 bytes). - @property - def longOffset(self): - return self.offsetSize == 4 - - @longOffset.setter - def longOffset(self, value): - self.offsetSize = 4 if value else 2 - - def __setitem__(self, name, value): - state = self.localState.copy() if self.localState else dict() - state[name] = value - self.localState = state - - def __getitem__(self, name): - return self.localState[name] - - def __delitem__(self, name): - del self.localState[name] - - # assembler interface - - def getDataLength(self): - """Return the length of this table in bytes, without subtables.""" - l = 0 - for item in self.items: - if hasattr(item, "getCountData"): - l += item.size - elif hasattr(item, "getData"): - l += item.offsetSize - else: - l = l + len(item) - return l - - def getData(self): - """Assemble the data for this writer/table, without subtables.""" - items = list(self.items) # make a shallow copy - pos = self.pos - numItems = len(items) - for i in range(numItems): - item = items[i] - - if hasattr(item, "getData"): - if item.offsetSize == 4: - items[i] = packULong(item.pos - pos) - elif item.offsetSize == 2: - try: - items[i] = packUShort(item.pos - pos) - except struct.error: - # provide data to fix overflow problem. - overflowErrorRecord = self.getOverflowErrorRecord(item) - - raise OTLOffsetOverflowError(overflowErrorRecord) - elif item.offsetSize == 3: - items[i] = packUInt24(item.pos - pos) - else: - raise ValueError(item.offsetSize) - - return bytesjoin(items) - - def getDataForHarfbuzz(self): - """Assemble the data for this writer/table with all offset field set to 0""" - items = list(self.items) - packFuncs = {2: packUShort, 3: packUInt24, 4: packULong} - for i, item in enumerate(items): - if hasattr(item, "getData"): - # Offset value is not needed in harfbuzz repacker, so setting offset to 0 to avoid overflow here - if item.offsetSize in packFuncs: - items[i] = packFuncs[item.offsetSize](0) - else: - raise ValueError(item.offsetSize) - - return bytesjoin(items) - - def __hash__(self): - # only works after self._doneWriting() has been called - return hash(self.items) - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.offsetSize == other.offsetSize and self.items == other.items - - def _doneWriting(self, internedTables, shareExtension=False): - # Convert CountData references to data string items - # collapse duplicate table references to a unique entry - # "tables" are OTTableWriter objects. - - # For Extension Lookup types, we can - # eliminate duplicates only within the tree under the Extension Lookup, - # as offsets may exceed 64K even between Extension LookupTable subtables. 
-        isExtension = hasattr(self, "Extension")
-
-        # Certain versions of Uniscribe reject the font if the GSUB/GPOS top-level
-        # arrays (ScriptList, FeatureList, LookupList) point to the same, possibly
-        # empty, array. So, we don't share those.
-        # See: https://github.com/fonttools/fonttools/issues/518
-        dontShare = hasattr(self, "DontShare")
-
-        if isExtension and not shareExtension:
-            internedTables = {}
-
-        items = self.items
-        for i in range(len(items)):
-            item = items[i]
-            if hasattr(item, "getCountData"):
-                items[i] = item.getCountData()
-            elif hasattr(item, "getData"):
-                item._doneWriting(internedTables, shareExtension=shareExtension)
-                # At this point, all subwriters are hashable based on their items.
-                # (See hash and comparison magic methods above.) So the ``setdefault``
-                # call here will return the first writer object we've seen with
-                # equal content, or store it in the dictionary if it's not been
-                # seen yet. We therefore replace the subwriter object with an equivalent
-                # object, which deduplicates the tree.
-                if not dontShare:
-                    items[i] = item = internedTables.setdefault(item, item)
-        self.items = tuple(items)
-
-    def _gatherTables(self, tables, extTables, done):
-        # Convert table references in self.items tree to a flat
-        # list of tables in depth-first traversal order.
-        # "tables" are OTTableWriter objects.
-        # We do the traversal in reverse order at each level, in order to
-        # resolve duplicate references to be the last reference in the list of tables.
-        # For extension lookups, duplicate references can be merged only within the
-        # writer tree under the extension lookup.
-
-        done[id(self)] = True
-
-        numItems = len(self.items)
-        iRange = list(range(numItems))
-        iRange.reverse()
-
-        isExtension = hasattr(self, "Extension")
-
-        selfTables = tables
-
-        if isExtension:
-            assert (
-                extTables is not None
-            ), "Program or XML editing error. Extension subtables cannot contain extension subtables"
-            tables, extTables, done = extTables, None, {}
-
-        # add Coverage table if it is sorted last.
-        sortCoverageLast = False
-        if hasattr(self, "sortCoverageLast"):
-            # Find coverage table
-            for i in range(numItems):
-                item = self.items[i]
-                if getattr(item, "name", None) == "Coverage":
-                    sortCoverageLast = True
-                    break
-            if id(item) not in done:
-                item._gatherTables(tables, extTables, done)
-            else:
-                # We're a new parent of item
-                pass
-
-        for i in iRange:
-            item = self.items[i]
-            if not hasattr(item, "getData"):
-                continue
-
-            if (
-                sortCoverageLast
-                and (i == 1)
-                and getattr(item, "name", None) == "Coverage"
-            ):
-                # we've already 'gathered' it above
-                continue
-
-            if id(item) not in done:
-                item._gatherTables(tables, extTables, done)
-            else:
-                # Item is already written out by other parent
-                pass
-
-        selfTables.append(self)
-
-    def _gatherGraphForHarfbuzz(self, tables, obj_list, done, objidx, virtual_edges):
-        real_links = []
-        virtual_links = []
-        item_idx = objidx
-
-        # Merge virtual_links from parent
-        for idx in virtual_edges:
-            virtual_links.append((0, 0, idx))
-
-        sortCoverageLast = False
-        coverage_idx = 0
-        if hasattr(self, "sortCoverageLast"):
-            # Find coverage table
-            for i, item in enumerate(self.items):
-                if getattr(item, "name", None) == "Coverage":
-                    sortCoverageLast = True
-                    if id(item) not in done:
-                        coverage_idx = item_idx = item._gatherGraphForHarfbuzz(
-                            tables, obj_list, done, item_idx, virtual_edges
-                        )
-                    else:
-                        coverage_idx = done[id(item)]
-                    virtual_edges.append(coverage_idx)
-                    break
-
-        child_idx = 0
-        offset_pos = 0
-        for i, item in enumerate(self.items):
-            if hasattr(item, "getData"):
-                pos = offset_pos
-            elif hasattr(item, "getCountData"):
-                offset_pos += item.size
-                continue
-            else:
-                offset_pos = offset_pos + len(item)
-                continue
-
-            if id(item) not in done:
-                child_idx = item_idx = item._gatherGraphForHarfbuzz(
-                    tables, obj_list, done, item_idx, virtual_edges
-                )
-            else:
-                child_idx = done[id(item)]
-
-            real_edge = (pos, item.offsetSize, child_idx)
-            real_links.append(real_edge)
-            offset_pos += item.offsetSize
-
-        tables.append(self)
-        obj_list.append((real_links, virtual_links))
-        item_idx += 1
-        done[id(self)] = item_idx
-        if sortCoverageLast:
-            virtual_edges.pop()
-
-        return item_idx
-
-    def getAllDataUsingHarfbuzz(self, tableTag):
-        """The whole table is represented as a graph.
-        Assemble the graph data and call the Harfbuzz repacker to pack the table.
-        The Harfbuzz repacker is faster and retains as much sub-table sharing as possible, see also:
-        https://github.com/harfbuzz/harfbuzz/blob/main/docs/repacker.md
-        The input format for the hb.repack() method is explained here:
-        https://github.com/harfbuzz/uharfbuzz/blob/main/src/uharfbuzz/_harfbuzz.pyx#L1149
-        """
-        internedTables = {}
-        self._doneWriting(internedTables, shareExtension=True)
-        tables = []
-        obj_list = []
-        done = {}
-        objidx = 0
-        virtual_edges = []
-        self._gatherGraphForHarfbuzz(tables, obj_list, done, objidx, virtual_edges)
-        # Gather all data in two passes: the absolute positions of all
-        # subtables are needed before the actual data can be assembled.
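-        # (pass 1 below assigns each table its absolute position; pass 2 serializes,
-        # writing every offset field as 0 since hb.repack recomputes them anyway)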
-        pos = 0
-        for table in tables:
-            table.pos = pos
-            pos = pos + table.getDataLength()
-
-        data = []
-        for table in tables:
-            tableData = table.getDataForHarfbuzz()
-            data.append(tableData)
-
-        if hasattr(hb, "repack_with_tag"):
-            return hb.repack_with_tag(str(tableTag), data, obj_list)
-        else:
-            return hb.repack(data, obj_list)
-
-    def getAllData(self, remove_duplicate=True):
-        """Assemble all data, including all subtables."""
-        if remove_duplicate:
-            internedTables = {}
-            self._doneWriting(internedTables)
-        tables = []
-        extTables = []
-        done = {}
-        self._gatherTables(tables, extTables, done)
-        tables.reverse()
-        extTables.reverse()
-        # Gather all data in two passes: the absolute positions of all
-        # subtables are needed before the actual data can be assembled.
-        pos = 0
-        for table in tables:
-            table.pos = pos
-            pos = pos + table.getDataLength()
-
-        for table in extTables:
-            table.pos = pos
-            pos = pos + table.getDataLength()
-
-        data = []
-        for table in tables:
-            tableData = table.getData()
-            data.append(tableData)
-
-        for table in extTables:
-            tableData = table.getData()
-            data.append(tableData)
-
-        return bytesjoin(data)
-
-    # interface for gathering data, as used by table.compile()
-
-    def getSubWriter(self, offsetSize=2):
-        subwriter = self.__class__(
-            self.localState, self.tableTag, offsetSize=offsetSize
-        )
-        # Because some subtables have identical values, we discard the duplicates
-        # under the getAllData method. Hence some subtable writers can have more
-        # than one parent writer. But we just care about the first one right now.
-        subwriter.parent = self
-        return subwriter
-
-    def writeValue(self, typecode, value):
-        self.items.append(struct.pack(f">{typecode}", value))
-
-    def writeArray(self, typecode, values):
-        a = array.array(typecode, values)
-        if sys.byteorder != "big":
-            a.byteswap()
-        self.items.append(a.tobytes())
-
-    def writeInt8(self, value):
-        assert -128 <= value < 128, value
-        self.items.append(struct.pack(">b", value))
-
-    def writeInt8Array(self, values):
-        self.writeArray("b", values)
-
-    def writeShort(self, value):
-        assert -32768 <= value < 32768, value
-        self.items.append(struct.pack(">h", value))
-
-    def writeShortArray(self, values):
-        self.writeArray("h", values)
-
-    def writeLong(self, value):
-        self.items.append(struct.pack(">i", value))
-
-    def writeLongArray(self, values):
-        self.writeArray("i", values)
-
-    def writeUInt8(self, value):
-        assert 0 <= value < 256, value
-        self.items.append(struct.pack(">B", value))
-
-    def writeUInt8Array(self, values):
-        self.writeArray("B", values)
-
-    def writeUShort(self, value):
-        assert 0 <= value < 0x10000, value
-        self.items.append(struct.pack(">H", value))
-
-    def writeUShortArray(self, values):
-        self.writeArray("H", values)
-
-    def writeULong(self, value):
-        self.items.append(struct.pack(">I", value))
-
-    def writeULongArray(self, values):
-        self.writeArray("I", values)
-
-    def writeUInt24(self, value):
-        assert 0 <= value < 0x1000000, value
-        b = struct.pack(">L", value)
-        self.items.append(b[1:])
-
-    def writeUInt24Array(self, values):
-        for value in values:
-            self.writeUInt24(value)
-
-    def writeTag(self, tag):
-        tag = Tag(tag).tobytes()
-        assert len(tag) == 4, tag
-        self.items.append(tag)
-
-    def writeSubTable(self, subWriter):
-        self.items.append(subWriter)
-
-    def writeCountReference(self, table, name, size=2, value=None):
-        ref = CountReference(table, name, size=size, value=value)
-        self.items.append(ref)
-        return ref
-
-    def writeStruct(self, format, values):
-        data = struct.pack(*(format,) +
values) - self.items.append(data) - - def writeData(self, data): - self.items.append(data) - - def getOverflowErrorRecord(self, item): - LookupListIndex = SubTableIndex = itemName = itemIndex = None - if self.name == "LookupList": - LookupListIndex = item.repeatIndex - elif self.name == "Lookup": - LookupListIndex = self.repeatIndex - SubTableIndex = item.repeatIndex - else: - itemName = getattr(item, "name", "") - if hasattr(item, "repeatIndex"): - itemIndex = item.repeatIndex - if self.name == "SubTable": - LookupListIndex = self.parent.repeatIndex - SubTableIndex = self.repeatIndex - elif self.name == "ExtSubTable": - LookupListIndex = self.parent.parent.repeatIndex - SubTableIndex = self.parent.repeatIndex - else: # who knows how far below the SubTable level we are! Climb back up to the nearest subtable. - itemName = ".".join([self.name, itemName]) - p1 = self.parent - while p1 and p1.name not in ["ExtSubTable", "SubTable"]: - itemName = ".".join([p1.name, itemName]) - p1 = p1.parent - if p1: - if p1.name == "ExtSubTable": - LookupListIndex = p1.parent.parent.repeatIndex - SubTableIndex = p1.parent.repeatIndex - else: - LookupListIndex = p1.parent.repeatIndex - SubTableIndex = p1.repeatIndex - - return OverflowErrorRecord( - (self.tableTag, LookupListIndex, SubTableIndex, itemName, itemIndex) - ) - - -class CountReference(object): - """A reference to a Count value, not a count of references.""" - - def __init__(self, table, name, size=None, value=None): - self.table = table - self.name = name - self.size = size - if value is not None: - self.setValue(value) - - def setValue(self, value): - table = self.table - name = self.name - if table[name] is None: - table[name] = value - else: - assert table[name] == value, (name, table[name], value) - - def getValue(self): - return self.table[self.name] - - def getCountData(self): - v = self.table[self.name] - if v is None: - v = 0 - return {1: packUInt8, 2: packUShort, 4: packULong}[self.size](v) - - -def packUInt8(value): - return struct.pack(">B", value) - - -def packUShort(value): - return struct.pack(">H", value) - - -def packULong(value): - assert 0 <= value < 0x100000000, value - return struct.pack(">I", value) - - -def packUInt24(value): - assert 0 <= value < 0x1000000, value - return struct.pack(">I", value)[1:] - - -class BaseTable(object): - - """Generic base class for all OpenType (sub)tables.""" - - def __getattr__(self, attr): - reader = self.__dict__.get("reader") - if reader: - del self.reader - font = self.font - del self.font - self.decompile(reader, font) - return getattr(self, attr) - - raise AttributeError(attr) - - def ensureDecompiled(self, recurse=False): - reader = self.__dict__.get("reader") - if reader: - del self.reader - font = self.font - del self.font - self.decompile(reader, font) - if recurse: - for subtable in self.iterSubTables(): - subtable.value.ensureDecompiled(recurse) - - def __getstate__(self): - # before copying/pickling 'lazy' objects, make a shallow copy of OTTableReader - # https://github.com/fonttools/fonttools/issues/2965 - if "reader" in self.__dict__: - state = self.__dict__.copy() - state["reader"] = self.__dict__["reader"].copy() - return state - return self.__dict__ - - @classmethod - def getRecordSize(cls, reader): - totalSize = 0 - for conv in cls.converters: - size = conv.getRecordSize(reader) - if size is NotImplemented: - return NotImplemented - countValue = 1 - if conv.repeat: - if conv.repeat in reader: - countValue = reader[conv.repeat] + conv.aux - else: - return NotImplemented - 
totalSize += size * countValue - return totalSize - - def getConverters(self): - return self.converters - - def getConverterByName(self, name): - return self.convertersByName[name] - - def populateDefaults(self, propagator=None): - for conv in self.getConverters(): - if conv.repeat: - if not hasattr(self, conv.name): - setattr(self, conv.name, []) - countValue = len(getattr(self, conv.name)) - conv.aux - try: - count_conv = self.getConverterByName(conv.repeat) - setattr(self, conv.repeat, countValue) - except KeyError: - # conv.repeat is a propagated count - if propagator and conv.repeat in propagator: - propagator[conv.repeat].setValue(countValue) - else: - if conv.aux and not eval(conv.aux, None, self.__dict__): - continue - if hasattr(self, conv.name): - continue # Warn if it should NOT be present?! - if hasattr(conv, "writeNullOffset"): - setattr(self, conv.name, None) # Warn? - # elif not conv.isCount: - # # Warn? - # pass - if hasattr(conv, "DEFAULT"): - # OptionalValue converters (e.g. VarIndex) - setattr(self, conv.name, conv.DEFAULT) - - def decompile(self, reader, font): - self.readFormat(reader) - table = {} - self.__rawTable = table # for debugging - for conv in self.getConverters(): - if conv.name == "SubTable": - conv = conv.getConverter(reader.tableTag, table["LookupType"]) - if conv.name == "ExtSubTable": - conv = conv.getConverter(reader.tableTag, table["ExtensionLookupType"]) - if conv.name == "FeatureParams": - conv = conv.getConverter(reader["FeatureTag"]) - if conv.name == "SubStruct": - conv = conv.getConverter(reader.tableTag, table["MorphType"]) - try: - if conv.repeat: - if isinstance(conv.repeat, int): - countValue = conv.repeat - elif conv.repeat in table: - countValue = table[conv.repeat] - else: - # conv.repeat is a propagated count - countValue = reader[conv.repeat] - countValue += conv.aux - table[conv.name] = conv.readArray(reader, font, table, countValue) - else: - if conv.aux and not eval(conv.aux, None, table): - continue - table[conv.name] = conv.read(reader, font, table) - if conv.isPropagated: - reader[conv.name] = table[conv.name] - except Exception as e: - name = conv.name - e.args = e.args + (name,) - raise - - if hasattr(self, "postRead"): - self.postRead(table, font) - else: - self.__dict__.update(table) - - del self.__rawTable # succeeded, get rid of debugging info - - def compile(self, writer, font): - self.ensureDecompiled() - # TODO Following hack to be removed by rewriting how FormatSwitching tables - # are handled. - # https://github.com/fonttools/fonttools/pull/2238#issuecomment-805192631 - if hasattr(self, "preWrite"): - deleteFormat = not hasattr(self, "Format") - table = self.preWrite(font) - deleteFormat = deleteFormat and hasattr(self, "Format") - else: - deleteFormat = False - table = self.__dict__.copy() - - # some count references may have been initialized in a custom preWrite; we set - # these in the writer's state beforehand (instead of sequentially) so they will - # be propagated to all nested subtables even if the count appears in the current - # table only *after* the offset to the subtable that it is counting. 
- for conv in self.getConverters(): - if conv.isCount and conv.isPropagated: - value = table.get(conv.name) - if isinstance(value, CountReference): - writer[conv.name] = value - - if hasattr(self, "sortCoverageLast"): - writer.sortCoverageLast = 1 - - if hasattr(self, "DontShare"): - writer.DontShare = True - - if hasattr(self.__class__, "LookupType"): - writer["LookupType"].setValue(self.__class__.LookupType) - - self.writeFormat(writer) - for conv in self.getConverters(): - value = table.get( - conv.name - ) # TODO Handle defaults instead of defaulting to None! - if conv.repeat: - if value is None: - value = [] - countValue = len(value) - conv.aux - if isinstance(conv.repeat, int): - assert len(value) == conv.repeat, "expected %d values, got %d" % ( - conv.repeat, - len(value), - ) - elif conv.repeat in table: - CountReference(table, conv.repeat, value=countValue) - else: - # conv.repeat is a propagated count - writer[conv.repeat].setValue(countValue) - try: - conv.writeArray(writer, font, table, value) - except Exception as e: - e.args = e.args + (conv.name + "[]",) - raise - elif conv.isCount: - # Special-case Count values. - # Assumption: a Count field will *always* precede - # the actual array(s). - # We need a default value, as it may be set later by a nested - # table. We will later store it here. - # We add a reference: by the time the data is assembled - # the Count value will be filled in. - # We ignore the current count value since it will be recomputed, - # unless it's a CountReference that was already initialized in a custom preWrite. - if isinstance(value, CountReference): - ref = value - ref.size = conv.staticSize - writer.writeData(ref) - table[conv.name] = ref.getValue() - else: - ref = writer.writeCountReference(table, conv.name, conv.staticSize) - table[conv.name] = None - if conv.isPropagated: - writer[conv.name] = ref - elif conv.isLookupType: - # We make sure that subtables have the same lookup type, - # and that the type is the same as the one set on the - # Lookup object, if any is set. - if conv.name not in table: - table[conv.name] = None - ref = writer.writeCountReference( - table, conv.name, conv.staticSize, table[conv.name] - ) - writer["LookupType"] = ref - else: - if conv.aux and not eval(conv.aux, None, table): - continue - try: - conv.write(writer, font, table, value) - except Exception as e: - name = value.__class__.__name__ if value is not None else conv.name - e.args = e.args + (name,) - raise - if conv.isPropagated: - writer[conv.name] = value - - if deleteFormat: - del self.Format - - def readFormat(self, reader): - pass - - def writeFormat(self, writer): - pass - - def toXML(self, xmlWriter, font, attrs=None, name=None): - tableName = name if name else self.__class__.__name__ - if attrs is None: - attrs = [] - if hasattr(self, "Format"): - attrs = attrs + [("Format", self.Format)] - xmlWriter.begintag(tableName, attrs) - xmlWriter.newline() - self.toXML2(xmlWriter, font) - xmlWriter.endtag(tableName) - xmlWriter.newline() - - def toXML2(self, xmlWriter, font): - # Simpler variant of toXML, *only* for the top level tables (like GPOS, GSUB). - # This is because in TTX our parent writes our main tag, and in otBase.py we - # do it ourselves. I think I'm getting schizophrenic... 
-        for conv in self.getConverters():
-            if conv.repeat:
-                value = getattr(self, conv.name, [])
-                for i in range(len(value)):
-                    item = value[i]
-                    conv.xmlWrite(xmlWriter, font, item, conv.name, [("index", i)])
-            else:
-                if conv.aux and not eval(conv.aux, None, vars(self)):
-                    continue
-                value = getattr(
-                    self, conv.name, None
-                )  # TODO Handle defaults instead of defaulting to None!
-                conv.xmlWrite(xmlWriter, font, value, conv.name, [])
-
-    def fromXML(self, name, attrs, content, font):
-        try:
-            conv = self.getConverterByName(name)
-        except KeyError:
-            raise  # XXX on KeyError, raise nice error
-        value = conv.xmlRead(attrs, content, font)
-        if conv.repeat:
-            seq = getattr(self, conv.name, None)
-            if seq is None:
-                seq = []
-                setattr(self, conv.name, seq)
-            seq.append(value)
-        else:
-            setattr(self, conv.name, value)
-
-    def __ne__(self, other):
-        result = self.__eq__(other)
-        return result if result is NotImplemented else not result
-
-    def __eq__(self, other):
-        if type(self) != type(other):
-            return NotImplemented
-
-        self.ensureDecompiled()
-        other.ensureDecompiled()
-
-        return self.__dict__ == other.__dict__
-
-    class SubTableEntry(NamedTuple):
-        """See BaseTable.iterSubTables()"""
-
-        name: str
-        value: "BaseTable"
-        index: Optional[int] = None  # index into given array, None for single values
-
-    def iterSubTables(self) -> Iterator[SubTableEntry]:
-        """Yield (name, value, index) namedtuples for all subtables of the current table.
-
-        A sub-table is an instance of BaseTable (or subclass thereof) that is a child
-        of self, the current parent table.
-        The tuples also contain the attribute name (str) of the parent table used to get
-        a subtable, and optionally, for lists of subtables (i.e. attributes associated
-        with a converter that has a 'repeat'), an index into the list containing the
-        given subtable value.
-        This method can be useful to traverse trees of otTables.
-        """
-        for conv in self.getConverters():
-            name = conv.name
-            value = getattr(self, name, None)
-            if value is None:
-                continue
-            if isinstance(value, BaseTable):
-                yield self.SubTableEntry(name, value)
-            elif isinstance(value, list):
-                yield from (
-                    self.SubTableEntry(name, v, index=i)
-                    for i, v in enumerate(value)
-                    if isinstance(v, BaseTable)
-                )
-
-    # instance (not @class)method for consistency with FormatSwitchingBaseTable
-    def getVariableAttrs(self):
-        return getVariableAttrs(self.__class__)
-
-
-class FormatSwitchingBaseTable(BaseTable):
-
-    """Minor specialization of BaseTable, for tables that have multiple
-    formats, e.g. CoverageFormat1 vs. CoverageFormat2."""
-
-    @classmethod
-    def getRecordSize(cls, reader):
-        return NotImplemented
-
-    def getConverters(self):
-        try:
-            fmt = self.Format
-        except AttributeError:
-            # Some FormatSwitchingBaseTables (e.g. Coverage) no longer have a 'Format'
-            # attribute once fully decompiled; they only gain one in preWrite before being
-            # recompiled. In the decompiled state, these hand-coded classes defined in
-            # otTables.py lose their format-specific nature and gain more high-level
-            # attributes that are not tied to converters.
- return [] - return self.converters.get(self.Format, []) - - def getConverterByName(self, name): - return self.convertersByName[self.Format][name] - - def readFormat(self, reader): - self.Format = reader.readUShort() - - def writeFormat(self, writer): - writer.writeUShort(self.Format) - - def toXML(self, xmlWriter, font, attrs=None, name=None): - BaseTable.toXML(self, xmlWriter, font, attrs, name) - - def getVariableAttrs(self): - return getVariableAttrs(self.__class__, self.Format) - - -class UInt8FormatSwitchingBaseTable(FormatSwitchingBaseTable): - def readFormat(self, reader): - self.Format = reader.readUInt8() - - def writeFormat(self, writer): - writer.writeUInt8(self.Format) - - -formatSwitchingBaseTables = { - "uint16": FormatSwitchingBaseTable, - "uint8": UInt8FormatSwitchingBaseTable, -} - - -def getFormatSwitchingBaseTableClass(formatType): - try: - return formatSwitchingBaseTables[formatType] - except KeyError: - raise TypeError(f"Unsupported format type: {formatType!r}") - - -# memoize since these are parsed from otData.py, thus stay constant -@lru_cache() -def getVariableAttrs(cls: BaseTable, fmt: Optional[int] = None) -> Tuple[str]: - """Return sequence of variable table field names (can be empty). - - Attributes are deemed "variable" when their otData.py's description contain - 'VarIndexBase + {offset}', e.g. COLRv1 PaintVar* tables. - """ - if not issubclass(cls, BaseTable): - raise TypeError(cls) - if issubclass(cls, FormatSwitchingBaseTable): - if fmt is None: - raise TypeError(f"'fmt' is required for format-switching {cls.__name__}") - converters = cls.convertersByName[fmt] - else: - converters = cls.convertersByName - # assume if no 'VarIndexBase' field is present, table has no variable fields - if "VarIndexBase" not in converters: - return () - varAttrs = {} - for name, conv in converters.items(): - offset = conv.getVarIndexOffset() - if offset is not None: - varAttrs[name] = offset - return tuple(sorted(varAttrs, key=varAttrs.__getitem__)) - - -# -# Support for ValueRecords -# -# This data type is so different from all other OpenType data types that -# it requires quite a bit of code for itself. It even has special support -# in OTTableReader and OTTableWriter... 
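-# A worked reading of the valueRecordFormat table below: a ValueFormat of 0x0005
-# sets the XPlacement (0x0001) and XAdvance (0x0004) bits, so the corresponding
-# ValueRecord is packed as two signed 16-bit fields and carries no device tables.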
-# - -valueRecordFormat = [ - # Mask Name isDevice signed - (0x0001, "XPlacement", 0, 1), - (0x0002, "YPlacement", 0, 1), - (0x0004, "XAdvance", 0, 1), - (0x0008, "YAdvance", 0, 1), - (0x0010, "XPlaDevice", 1, 0), - (0x0020, "YPlaDevice", 1, 0), - (0x0040, "XAdvDevice", 1, 0), - (0x0080, "YAdvDevice", 1, 0), - # reserved: - (0x0100, "Reserved1", 0, 0), - (0x0200, "Reserved2", 0, 0), - (0x0400, "Reserved3", 0, 0), - (0x0800, "Reserved4", 0, 0), - (0x1000, "Reserved5", 0, 0), - (0x2000, "Reserved6", 0, 0), - (0x4000, "Reserved7", 0, 0), - (0x8000, "Reserved8", 0, 0), -] - - -def _buildDict(): - d = {} - for mask, name, isDevice, signed in valueRecordFormat: - d[name] = mask, isDevice, signed - return d - - -valueRecordFormatDict = _buildDict() - - -class ValueRecordFactory(object): - - """Given a format code, this object convert ValueRecords.""" - - def __init__(self, valueFormat): - format = [] - for mask, name, isDevice, signed in valueRecordFormat: - if valueFormat & mask: - format.append((name, isDevice, signed)) - self.format = format - - def __len__(self): - return len(self.format) - - def readValueRecord(self, reader, font): - format = self.format - if not format: - return None - valueRecord = ValueRecord() - for name, isDevice, signed in format: - if signed: - value = reader.readShort() - else: - value = reader.readUShort() - if isDevice: - if value: - from . import otTables - - subReader = reader.getSubReader(value) - value = getattr(otTables, name)() - value.decompile(subReader, font) - else: - value = None - setattr(valueRecord, name, value) - return valueRecord - - def writeValueRecord(self, writer, font, valueRecord): - for name, isDevice, signed in self.format: - value = getattr(valueRecord, name, 0) - if isDevice: - if value: - subWriter = writer.getSubWriter() - writer.writeSubTable(subWriter) - value.compile(subWriter, font) - else: - writer.writeUShort(0) - elif signed: - writer.writeShort(value) - else: - writer.writeUShort(value) - - -class ValueRecord(object): - - # see ValueRecordFactory - - def __init__(self, valueFormat=None, src=None): - if valueFormat is not None: - for mask, name, isDevice, signed in valueRecordFormat: - if valueFormat & mask: - setattr(self, name, None if isDevice else 0) - if src is not None: - for key, val in src.__dict__.items(): - if not hasattr(self, key): - continue - setattr(self, key, val) - elif src is not None: - self.__dict__ = src.__dict__.copy() - - def getFormat(self): - format = 0 - for name in self.__dict__.keys(): - format = format | valueRecordFormatDict[name][0] - return format - - def getEffectiveFormat(self): - format = 0 - for name, value in self.__dict__.items(): - if value: - format = format | valueRecordFormatDict[name][0] - return format - - def toXML(self, xmlWriter, font, valueName, attrs=None): - if attrs is None: - simpleItems = [] - else: - simpleItems = list(attrs) - for mask, name, isDevice, format in valueRecordFormat[:4]: # "simple" values - if hasattr(self, name): - simpleItems.append((name, getattr(self, name))) - deviceItems = [] - for mask, name, isDevice, format in valueRecordFormat[4:8]: # device records - if hasattr(self, name): - device = getattr(self, name) - if device is not None: - deviceItems.append((name, device)) - if deviceItems: - xmlWriter.begintag(valueName, simpleItems) - xmlWriter.newline() - for name, deviceRecord in deviceItems: - if deviceRecord is not None: - deviceRecord.toXML(xmlWriter, font, name=name) - xmlWriter.endtag(valueName) - xmlWriter.newline() - else: - 
xmlWriter.simpletag(valueName, simpleItems) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - from . import otTables - - for k, v in attrs.items(): - setattr(self, k, int(v)) - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - value = getattr(otTables, name)() - for elem2 in content: - if not isinstance(elem2, tuple): - continue - name2, attrs2, content2 = elem2 - value.fromXML(name2, attrs2, content2, font) - setattr(self, name, value) - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/fonts.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/fonts.py deleted file mode 100644 index d51dbbfdf4990358e9094cc887c47ae6cd8b0440..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/fonts.py +++ /dev/null @@ -1,50 +0,0 @@ -from __future__ import annotations - -import json -from typing import Iterable - - -class FontEncoder(json.JSONEncoder): - def default(self, obj): - if isinstance(obj, Font): - return { - "__gradio_font__": True, - "name": obj.name, - "class": "google" if isinstance(obj, GoogleFont) else "font", - } - # Let the base class default method raise the TypeError - return json.JSONEncoder.default(self, obj) - - -def as_font(dct): - if "__gradio_font__" in dct: - name = dct["name"] - return GoogleFont(name) if dct["class"] == "google" else Font(name) - return dct - - -class Font: - def __init__(self, name: str): - self.name = name - - def __str__(self) -> str: - return ( - self.name - if self.name in ["sans-serif", "serif", "monospace", "cursive", "fantasy"] - else f"'{self.name}'" - ) - - def stylesheet(self) -> str: - return None - - def __eq__(self, other: Font) -> bool: - return self.name == other.name and self.stylesheet() == other.stylesheet() - - -class GoogleFont(Font): - def __init__(self, name: str, weights: Iterable[int] = (400, 600)): - self.name = name - self.weights = weights - - def stylesheet(self) -> str: - return f'https://fonts.googleapis.com/css2?family={self.name.replace(" ", "+")}:wght@{";".join(str(weight) for weight in self.weights)}&display=swap' diff --git a/spaces/Dao3/ChatGLM-6B/README.md b/spaces/Dao3/ChatGLM-6B/README.md deleted file mode 100644 index 9dcd06a3a9d809fff427363d5f7b71673b4463d3..0000000000000000000000000000000000000000 --- a/spaces/Dao3/ChatGLM-6B/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatGLM 6B -emoji: 📚 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -duplicated_from: xlon3/ChatGLM-6B ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Datasculptor/MusicGen/audiocraft/models/__init__.py b/spaces/Datasculptor/MusicGen/audiocraft/models/__init__.py deleted file mode 100644 index 92c7a48a200eba455044cd66e0d2c1efe6494f5c..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/MusicGen/audiocraft/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .musicgen import MusicGen -from .lm import LMModel -from .encodec import CompressionModel, EncodecModel diff --git a/spaces/Dauzy/whisper-webui/src/download.py b/spaces/Dauzy/whisper-webui/src/download.py deleted file mode 100644 index 473d27a0d279821edc3d398da8a33424da42da2a..0000000000000000000000000000000000000000 --- a/spaces/Dauzy/whisper-webui/src/download.py +++ /dev/null @@ -1,118 +0,0 @@ -from tempfile import mkdtemp -from typing import List -from yt_dlp import YoutubeDL -from urllib.request import urlopen, urlretrieve -import urllib.parse -import progressbar -import cgi - -import yt_dlp -from yt_dlp.postprocessor import PostProcessor - -class FilenameCollectorPP(PostProcessor): - def __init__(self): - super(FilenameCollectorPP, self).__init__(None) - self.filenames = [] - - def run(self, information): - self.filenames.append(information["filepath"]) - return [], information - -def download_url(url: str, maxDuration: int = None, destinationDirectory: str = None, playlistItems: str = "1") -> List[str]: - if "dora.starh.top" in url: - return _perform_download_with_urllib(url, destinationDirectory=destinationDirectory) - try: - return _perform_download(url, maxDuration=maxDuration, outputTemplate=None, destinationDirectory=destinationDirectory, playlistItems=playlistItems) - except yt_dlp.utils.DownloadError as e: - # In case of an OS error, try again with a different output template - if e.msg and e.msg.find("[Errno 36] File name too long") >= 0: - return _perform_download(url, maxDuration=maxDuration, outputTemplate="%(title).10s %(id)s.%(ext)s") - pass - -class MyProgressBar(): - def __init__(self): - self.pbar = None - - def __call__(self, block_num, block_size, total_size): - if not self.pbar: - self.pbar=progressbar.ProgressBar(maxval=total_size) - self.pbar.start() - - downloaded = block_num * block_size - if downloaded < total_size: - self.pbar.update(downloaded) - else: - self.pbar.finish() - -def _perform_download_with_urllib(url: str, destinationDirectory: str = None): - if destinationDirectory is None: - destinationDirectory = mkdtemp() - remotefile = urlopen(url) - contentdisposition = remotefile.info()['Content-Disposition'] - _, params = cgi.parse_header(contentdisposition) - filename = url.split('/')[-1] - if "filename" in params: - filename = params["filename"] - elif "filename*" in params: - filename = params["filename*"].replace("UTF-8''", "") - filename = urllib.parse.unquote(filename) - filename = destinationDirectory + "/" + filename - urlretrieve(url, filename=filename, reporthook=MyProgressBar()) - result = [] - result.append(filename) - print("Downloaded " + filename) - return result - -def _perform_download(url: str, maxDuration: int = None, outputTemplate: str = None, destinationDirectory: str = None, playlistItems: str = "1"): - # Create a temporary directory to store the downloaded files - if destinationDirectory is None: - destinationDirectory = mkdtemp() - - ydl_opts = { - "format": "bestaudio/best", - 'paths': { - 'home': destinationDirectory - } - } - if (playlistItems): - ydl_opts['playlist_items'] = playlistItems - - # Add output template if specified - if outputTemplate: - ydl_opts['outtmpl'] = outputTemplate - - filename_collector = FilenameCollectorPP() - - with YoutubeDL(ydl_opts) as ydl: - if maxDuration and maxDuration > 0: - info = ydl.extract_info(url, download=False) - entries = "entries" in 
info and info["entries"] or [info] - - total_duration = 0 - - # Compute total duration - for entry in entries: - total_duration += float(entry["duration"]) - - if total_duration >= maxDuration: - raise ExceededMaximumDuration(videoDuration=total_duration, maxDuration=maxDuration, message="Video is too long") - - ydl.add_post_processor(filename_collector) - ydl.download([url]) - - if len(filename_collector.filenames) <= 0: - raise Exception("Cannot download " + url) - - result = [] - - for filename in filename_collector.filenames: - result.append(filename) - print("Downloaded " + filename) - - return result - -class ExceededMaximumDuration(Exception): - def __init__(self, videoDuration, maxDuration, message): - self.videoDuration = videoDuration - self.maxDuration = maxDuration - super().__init__(message) \ No newline at end of file diff --git a/spaces/Dauzy/whisper-webui/src/hooks/whisperProgressHook.py b/spaces/Dauzy/whisper-webui/src/hooks/whisperProgressHook.py deleted file mode 100644 index aa09958a05e0b3c54736f7209f8a05a94912752e..0000000000000000000000000000000000000000 --- a/spaces/Dauzy/whisper-webui/src/hooks/whisperProgressHook.py +++ /dev/null @@ -1,91 +0,0 @@ -import sys -import threading -from typing import List, Union -import tqdm - -from src.hooks.progressListener import ProgressListener - -class ProgressListenerHandle: - def __init__(self, listener: ProgressListener): - self.listener = listener - - def __enter__(self): - register_thread_local_progress_listener(self.listener) - - def __exit__(self, exc_type, exc_val, exc_tb): - unregister_thread_local_progress_listener(self.listener) - - if exc_type is None: - self.listener.on_finished() - -class _CustomProgressBar(tqdm.tqdm): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._current = self.n # Set the initial value - - def update(self, n): - super().update(n) - # Because the progress bar might be disabled, we need to manually update the progress - self._current += n - - # Inform listeners - listeners = _get_thread_local_listeners() - - for listener in listeners: - listener.on_progress(self._current, self.total) - -_thread_local = threading.local() - -def _get_thread_local_listeners(): - if not hasattr(_thread_local, 'listeners'): - _thread_local.listeners = [] - return _thread_local.listeners - -_hooked = False - -def init_progress_hook(): - global _hooked - - if _hooked: - return - - # Inject into tqdm.tqdm of Whisper, so we can see progress - import whisper.transcribe - transcribe_module = sys.modules['whisper.transcribe'] - transcribe_module.tqdm.tqdm = _CustomProgressBar - _hooked = True - -def register_thread_local_progress_listener(progress_listener: ProgressListener): - # This is a workaround for the fact that the progress bar is not exposed in the API - init_progress_hook() - - listeners = _get_thread_local_listeners() - listeners.append(progress_listener) - -def unregister_thread_local_progress_listener(progress_listener: ProgressListener): - listeners = _get_thread_local_listeners() - - if progress_listener in listeners: - listeners.remove(progress_listener) - -def create_progress_listener_handle(progress_listener: ProgressListener): - return ProgressListenerHandle(progress_listener) - -# Example usage -if __name__ == '__main__': - class PrintingProgressListener: - def on_progress(self, current: Union[int, float], total: Union[int, float]): - print(f"Progress: {current}/{total}") - - def on_finished(self): - print("Finished") - - import whisper - model = 
whisper.load_model("medium") - - with create_progress_listener_handle(PrintingProgressListener()) as listener: - # Set verbose to None to disable the progress bar, as we are using our own - result = model.transcribe("J:\\Dev\\OpenAI\\whisper\\tests\\Noriko\\out.mka", language="Japanese", fp16=False, verbose=None) - print(result) - - print("Done") \ No newline at end of file diff --git a/spaces/Deon07/prompthero-openjourney/app.py b/spaces/Deon07/prompthero-openjourney/app.py deleted file mode 100644 index 2193905172b6fb6d868bff88cc8311f491ec13b3..0000000000000000000000000000000000000000 --- a/spaces/Deon07/prompthero-openjourney/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/prompthero/openjourney").launch() \ No newline at end of file diff --git a/spaces/Dimalker/Faceswapper/roop/predicter.py b/spaces/Dimalker/Faceswapper/roop/predicter.py deleted file mode 100644 index 7ebc2b62e4152c12ce41e55d718222ca9c8a8b7f..0000000000000000000000000000000000000000 --- a/spaces/Dimalker/Faceswapper/roop/predicter.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy -import opennsfw2 -from PIL import Image - -from roop.typing import Frame - -MAX_PROBABILITY = 0.85 - - -def predict_frame(target_frame: Frame) -> bool: - image = Image.fromarray(target_frame) - image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO) - model = opennsfw2.make_open_nsfw_model() - views = numpy.expand_dims(image, axis=0) - _, probability = model.predict(views)[0] - return probability > MAX_PROBABILITY - - -def predict_image(target_path: str) -> bool: - return opennsfw2.predict_image(target_path) > MAX_PROBABILITY - - -def predict_video(target_path: str) -> bool: - _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100) - return any(probability > MAX_PROBABILITY for probability in probabilities) diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/model.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/model.py deleted file mode 100644 index a230961c4d1bf0bd2d1efe7972b4baa33c5d7013..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/model.py +++ /dev/null @@ -1,456 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from collections import OrderedDict -from pathlib import Path -import requests -import pickle -import sys - -import numpy as np - -# Reimplementation of StyleGAN in PyTorch -# Source: https://github.com/lernapparat/lernapparat/blob/master/style_gan/pytorch_style_gan.ipynb - -class MyLinear(nn.Module): - """Linear layer with equalized learning rate and custom learning rate multiplier.""" - def __init__(self, input_size, output_size, gain=2**(0.5), use_wscale=False, lrmul=1, bias=True): - super().__init__() - he_std = gain * input_size**(-0.5) # He init - # Equalized learning rate and custom learning rate multiplier. 
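        # [Editor's note] A worked example of the equalized-learning-rate bookkeeping above,
        # using the values G_mapping passes in (input_size=512, gain=sqrt(2), lrmul=0.01,
        # use_wscale=True): he_std = sqrt(2) * 512**-0.5 = 1/16 = 0.0625, so
        # init_std = 1/0.01 = 100 and w_mul = 0.0625 * 0.01 = 0.000625. The stored weight
        # is drawn with std 100, but forward() multiplies by w_mul, so the effective weight
        # std is 100 * 0.000625 = 0.0625 = he_std, while gradient steps on the stored
        # weight are effectively rescaled by lrmul.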
- if use_wscale: - init_std = 1.0 / lrmul - self.w_mul = he_std * lrmul - else: - init_std = he_std / lrmul - self.w_mul = lrmul - self.weight = torch.nn.Parameter(torch.randn(output_size, input_size) * init_std) - if bias: - self.bias = torch.nn.Parameter(torch.zeros(output_size)) - self.b_mul = lrmul - else: - self.bias = None - - def forward(self, x): - bias = self.bias - if bias is not None: - bias = bias * self.b_mul - return F.linear(x, self.weight * self.w_mul, bias) - -class MyConv2d(nn.Module): - """Conv layer with equalized learning rate and custom learning rate multiplier.""" - def __init__(self, input_channels, output_channels, kernel_size, gain=2**(0.5), use_wscale=False, lrmul=1, bias=True, - intermediate=None, upscale=False): - super().__init__() - if upscale: - self.upscale = Upscale2d() - else: - self.upscale = None - he_std = gain * (input_channels * kernel_size ** 2) ** (-0.5) # He init - self.kernel_size = kernel_size - if use_wscale: - init_std = 1.0 / lrmul - self.w_mul = he_std * lrmul - else: - init_std = he_std / lrmul - self.w_mul = lrmul - self.weight = torch.nn.Parameter(torch.randn(output_channels, input_channels, kernel_size, kernel_size) * init_std) - if bias: - self.bias = torch.nn.Parameter(torch.zeros(output_channels)) - self.b_mul = lrmul - else: - self.bias = None - self.intermediate = intermediate - - def forward(self, x): - bias = self.bias - if bias is not None: - bias = bias * self.b_mul - - have_convolution = False - if self.upscale is not None and min(x.shape[2:]) * 2 >= 128: - # this is the fused upscale + conv from StyleGAN, sadly this seems incompatible with the non-fused way - # this really needs to be cleaned up and go into the conv... - w = self.weight * self.w_mul - w = w.permute(1, 0, 2, 3) - # probably applying a conv on w would be more efficient. also this quadruples the weight (average)?! - w = F.pad(w, (1,1,1,1)) - w = w[:, :, 1:, 1:]+ w[:, :, :-1, 1:] + w[:, :, 1:, :-1] + w[:, :, :-1, :-1] - x = F.conv_transpose2d(x, w, stride=2, padding=(w.size(-1)-1)//2) - have_convolution = True - elif self.upscale is not None: - x = self.upscale(x) - - if not have_convolution and self.intermediate is None: - return F.conv2d(x, self.weight * self.w_mul, bias, padding=self.kernel_size//2) - elif not have_convolution: - x = F.conv2d(x, self.weight * self.w_mul, None, padding=self.kernel_size//2) - - if self.intermediate is not None: - x = self.intermediate(x) - if bias is not None: - x = x + bias.view(1, -1, 1, 1) - return x - -class NoiseLayer(nn.Module): - """adds noise. noise is per pixel (constant over channels) with per-channel weight""" - def __init__(self, channels): - super().__init__() - self.weight = nn.Parameter(torch.zeros(channels)) - self.noise = None - - def forward(self, x, noise=None): - if noise is None and self.noise is None: - noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device, dtype=x.dtype) - elif noise is None: - # here is a little trick: if you get all the noiselayers and set each - # modules .noise attribute, you can have pre-defined noise. 
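        # [Editor's sketch] One way to pin the noise as the comment above describes,
        # assuming `g` is a built generator containing these NoiseLayer modules and
        # `fixed_noise_for` is a hypothetical helper returning a (1, 1, H, W) tensor
        # matching each layer's feature-map resolution:
        #
        #   for m in g.modules():
        #       if isinstance(m, NoiseLayer):
        #           m.noise = fixed_noise_for(m)  # reused on every forward pass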
- # Very useful for analysis - noise = self.noise - x = x + self.weight.view(1, -1, 1, 1) * noise - return x - -class StyleMod(nn.Module): - def __init__(self, latent_size, channels, use_wscale): - super(StyleMod, self).__init__() - self.lin = MyLinear(latent_size, - channels * 2, - gain=1.0, use_wscale=use_wscale) - - def forward(self, x, latent): - style = self.lin(latent) # style => [batch_size, n_channels*2] - shape = [-1, 2, x.size(1)] + (x.dim() - 2) * [1] - style = style.view(shape) # [batch_size, 2, n_channels, ...] - x = x * (style[:, 0] + 1.) + style[:, 1] - return x - -class PixelNormLayer(nn.Module): - def __init__(self, epsilon=1e-8): - super().__init__() - self.epsilon = epsilon - def forward(self, x): - return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + self.epsilon) - -class BlurLayer(nn.Module): - def __init__(self, kernel=[1, 2, 1], normalize=True, flip=False, stride=1): - super(BlurLayer, self).__init__() - kernel=[1, 2, 1] - kernel = torch.tensor(kernel, dtype=torch.float32) - kernel = kernel[:, None] * kernel[None, :] - kernel = kernel[None, None] - if normalize: - kernel = kernel / kernel.sum() - if flip: - kernel = kernel[:, :, ::-1, ::-1] - self.register_buffer('kernel', kernel) - self.stride = stride - - def forward(self, x): - # expand kernel channels - kernel = self.kernel.expand(x.size(1), -1, -1, -1) - x = F.conv2d( - x, - kernel, - stride=self.stride, - padding=int((self.kernel.size(2)-1)/2), - groups=x.size(1) - ) - return x - -def upscale2d(x, factor=2, gain=1): - assert x.dim() == 4 - if gain != 1: - x = x * gain - if factor != 1: - shape = x.shape - x = x.view(shape[0], shape[1], shape[2], 1, shape[3], 1).expand(-1, -1, -1, factor, -1, factor) - x = x.contiguous().view(shape[0], shape[1], factor * shape[2], factor * shape[3]) - return x - -class Upscale2d(nn.Module): - def __init__(self, factor=2, gain=1): - super().__init__() - assert isinstance(factor, int) and factor >= 1 - self.gain = gain - self.factor = factor - def forward(self, x): - return upscale2d(x, factor=self.factor, gain=self.gain) - -class G_mapping(nn.Sequential): - def __init__(self, nonlinearity='lrelu', use_wscale=True): - act, gain = {'relu': (torch.relu, np.sqrt(2)), - 'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity] - layers = [ - ('pixel_norm', PixelNormLayer()), - ('dense0', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)), - ('dense0_act', act), - ('dense1', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)), - ('dense1_act', act), - ('dense2', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)), - ('dense2_act', act), - ('dense3', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)), - ('dense3_act', act), - ('dense4', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)), - ('dense4_act', act), - ('dense5', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)), - ('dense5_act', act), - ('dense6', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)), - ('dense6_act', act), - ('dense7', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)), - ('dense7_act', act) - ] - super().__init__(OrderedDict(layers)) - - def forward(self, x): - return super().forward(x) - -class Truncation(nn.Module): - def __init__(self, avg_latent, max_layer=8, threshold=0.7): - super().__init__() - self.max_layer = max_layer - self.threshold = threshold - self.register_buffer('avg_latent', avg_latent) - def forward(self, x): - assert x.dim() == 3 - interp = 
torch.lerp(self.avg_latent, x, self.threshold) - do_trunc = (torch.arange(x.size(1)) < self.max_layer).view(1, -1, 1) - return torch.where(do_trunc, interp, x) - -class LayerEpilogue(nn.Module): - """Things to do at the end of each layer.""" - def __init__(self, channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer): - super().__init__() - layers = [] - if use_noise: - layers.append(('noise', NoiseLayer(channels))) - layers.append(('activation', activation_layer)) - if use_pixel_norm: - layers.append(('pixel_norm', PixelNorm())) - if use_instance_norm: - layers.append(('instance_norm', nn.InstanceNorm2d(channels))) - self.top_epi = nn.Sequential(OrderedDict(layers)) - if use_styles: - self.style_mod = StyleMod(dlatent_size, channels, use_wscale=use_wscale) - else: - self.style_mod = None - def forward(self, x, dlatents_in_slice=None): - x = self.top_epi(x) - if self.style_mod is not None: - x = self.style_mod(x, dlatents_in_slice) - else: - assert dlatents_in_slice is None - return x - - -class InputBlock(nn.Module): - def __init__(self, nf, dlatent_size, const_input_layer, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer): - super().__init__() - self.const_input_layer = const_input_layer - self.nf = nf - if self.const_input_layer: - # called 'const' in tf - self.const = nn.Parameter(torch.ones(1, nf, 4, 4)) - self.bias = nn.Parameter(torch.ones(nf)) - else: - self.dense = MyLinear(dlatent_size, nf*16, gain=gain/4, use_wscale=use_wscale) # tweak gain to match the official implementation of Progressing GAN - self.epi1 = LayerEpilogue(nf, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer) - self.conv = MyConv2d(nf, nf, 3, gain=gain, use_wscale=use_wscale) - self.epi2 = LayerEpilogue(nf, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer) - - def forward(self, dlatents_in_range): - batch_size = dlatents_in_range.size(0) - if self.const_input_layer: - x = self.const.expand(batch_size, -1, -1, -1) - x = x + self.bias.view(1, -1, 1, 1) - else: - x = self.dense(dlatents_in_range[:, 0]).view(batch_size, self.nf, 4, 4) - x = self.epi1(x, dlatents_in_range[:, 0]) - x = self.conv(x) - x = self.epi2(x, dlatents_in_range[:, 1]) - return x - - -class GSynthesisBlock(nn.Module): - def __init__(self, in_channels, out_channels, blur_filter, dlatent_size, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer): - # 2**res x 2**res # res = 3..resolution_log2 - super().__init__() - if blur_filter: - blur = BlurLayer(blur_filter) - else: - blur = None - self.conv0_up = MyConv2d(in_channels, out_channels, kernel_size=3, gain=gain, use_wscale=use_wscale, - intermediate=blur, upscale=True) - self.epi1 = LayerEpilogue(out_channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer) - self.conv1 = MyConv2d(out_channels, out_channels, kernel_size=3, gain=gain, use_wscale=use_wscale) - self.epi2 = LayerEpilogue(out_channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer) - - def forward(self, x, dlatents_in_range): - x = self.conv0_up(x) - x = self.epi1(x, dlatents_in_range[:, 0]) - x = self.conv1(x) - x = self.epi2(x, dlatents_in_range[:, 1]) - return x - -class G_synthesis(nn.Module): - def __init__(self, - dlatent_size = 512, # Disentangled latent (W) dimensionality. 
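        # [Editor's note] In LayerEpilogue above, the use_pixel_norm branch instantiates
        # PixelNorm(), but the class defined in this file is PixelNormLayer, so enabling
        # use_pixel_norm=True would raise a NameError. The defaults leave it False, so the
        # path is never exercised as shipped.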
- num_channels = 3, # Number of output color channels. - resolution = 1024, # Output resolution. - fmap_base = 8192, # Overall multiplier for the number of feature maps. - fmap_decay = 1.0, # log2 feature map reduction when doubling the resolution. - fmap_max = 512, # Maximum number of feature maps in any layer. - use_styles = True, # Enable style inputs? - const_input_layer = True, # First layer is a learned constant? - use_noise = True, # Enable noise inputs? - randomize_noise = True, # True = randomize noise inputs every time (non-deterministic), False = read noise inputs from variables. - nonlinearity = 'lrelu', # Activation function: 'relu', 'lrelu' - use_wscale = True, # Enable equalized learning rate? - use_pixel_norm = False, # Enable pixelwise feature vector normalization? - use_instance_norm = True, # Enable instance normalization? - dtype = torch.float32, # Data type to use for activations and outputs. - blur_filter = [1,2,1], # Low-pass filter to apply when resampling activations. None = no filtering. - ): - - super().__init__() - def nf(stage): - return min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max) - self.dlatent_size = dlatent_size - resolution_log2 = int(np.log2(resolution)) - assert resolution == 2**resolution_log2 and resolution >= 4 - - act, gain = {'relu': (torch.relu, np.sqrt(2)), - 'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity] - num_layers = resolution_log2 * 2 - 2 - num_styles = num_layers if use_styles else 1 - torgbs = [] - blocks = [] - for res in range(2, resolution_log2 + 1): - channels = nf(res-1) - name = '{s}x{s}'.format(s=2**res) - if res == 2: - blocks.append((name, - InputBlock(channels, dlatent_size, const_input_layer, gain, use_wscale, - use_noise, use_pixel_norm, use_instance_norm, use_styles, act))) - - else: - blocks.append((name, - GSynthesisBlock(last_channels, channels, blur_filter, dlatent_size, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, act))) - last_channels = channels - self.torgb = MyConv2d(channels, num_channels, 1, gain=1, use_wscale=use_wscale) - self.blocks = nn.ModuleDict(OrderedDict(blocks)) - - def forward(self, dlatents_in): - # Input: Disentangled latents (W) [minibatch, num_layers, dlatent_size]. 
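        # [Editor's note] Concretely, for resolution=1024: resolution_log2 = 10, so
        # num_layers = 2*10 - 2 = 18 and dlatents_in has shape [N, 18, 512]. Block i in
        # the loop below consumes the pair dlatents_in[:, 2*i:2*i+2] — the 4x4 InputBlock
        # first, then each upsampling GSynthesisBlock in turn.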
- # lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0), trainable=False), dtype) - batch_size = dlatents_in.size(0) - for i, m in enumerate(self.blocks.values()): - if i == 0: - x = m(dlatents_in[:, 2*i:2*i+2]) - else: - x = m(x, dlatents_in[:, 2*i:2*i+2]) - rgb = self.torgb(x) - return rgb - - -class StyleGAN_G(nn.Sequential): - def __init__(self, resolution, truncation=1.0): - self.resolution = resolution - self.layers = OrderedDict([ - ('g_mapping', G_mapping()), - #('truncation', Truncation(avg_latent)), - ('g_synthesis', G_synthesis(resolution=resolution)), - ]) - super().__init__(self.layers) - - def forward(self, x, latent_is_w=False): - if isinstance(x, list): - assert len(x) == 18, 'Must provide 1 or 18 latents' - if not latent_is_w: - x = [self.layers['g_mapping'].forward(l) for l in x] - x = torch.stack(x, dim=1) - else: - if not latent_is_w: - x = self.layers['g_mapping'].forward(x) - x = x.unsqueeze(1).expand(-1, 18, -1) - - x = self.layers['g_synthesis'].forward(x) - - return x - - # From: https://github.com/lernapparat/lernapparat/releases/download/v2019-02-01/ - def load_weights(self, checkpoint): - self.load_state_dict(torch.load(checkpoint)) - - def export_from_tf(self, pickle_path): - module_path = Path(__file__).parent / 'stylegan_tf' - sys.path.append(str(module_path.resolve())) - - import dnnlib, dnnlib.tflib, pickle, torch, collections - dnnlib.tflib.init_tf() - - weights = pickle.load(open(pickle_path,'rb')) - weights_pt = [collections.OrderedDict([(k, torch.from_numpy(v.value().eval())) for k,v in w.trainables.items()]) for w in weights] - #torch.save(weights_pt, pytorch_name) - - # then on the PyTorch side run - state_G, state_D, state_Gs = weights_pt #torch.load('./karras2019stylegan-ffhq-1024x1024.pt') - def key_translate(k): - k = k.lower().split('/') - if k[0] == 'g_synthesis': - if not k[1].startswith('torgb'): - k.insert(1, 'blocks') - k = '.'.join(k) - k = (k.replace('const.const','const').replace('const.bias','bias').replace('const.stylemod','epi1.style_mod.lin') - .replace('const.noise.weight','epi1.top_epi.noise.weight') - .replace('conv.noise.weight','epi2.top_epi.noise.weight') - .replace('conv.stylemod','epi2.style_mod.lin') - .replace('conv0_up.noise.weight', 'epi1.top_epi.noise.weight') - .replace('conv0_up.stylemod','epi1.style_mod.lin') - .replace('conv1.noise.weight', 'epi2.top_epi.noise.weight') - .replace('conv1.stylemod','epi2.style_mod.lin') - .replace('torgb_lod0','torgb')) - else: - k = '.'.join(k) - return k - - def weight_translate(k, w): - k = key_translate(k) - if k.endswith('.weight'): - if w.dim() == 2: - w = w.t() - elif w.dim() == 1: - pass - else: - assert w.dim() == 4 - w = w.permute(3, 2, 0, 1) - return w - - # we delete the useless torgb filters - param_dict = {key_translate(k) : weight_translate(k, v) for k,v in state_Gs.items() if 'torgb_lod' not in key_translate(k)} - if 1: - sd_shapes = {k : v.shape for k,v in self.state_dict().items()} - param_shapes = {k : v.shape for k,v in param_dict.items() } - - for k in list(sd_shapes)+list(param_shapes): - pds = param_shapes.get(k) - sds = sd_shapes.get(k) - if pds is None: - print ("sd only", k, sds) - elif sds is None: - print ("pd only", k, pds) - elif sds != pds: - print ("mismatch!", k, pds, sds) - - self.load_state_dict(param_dict, strict=False) # needed for the blur kernels - torch.save(self.state_dict(), Path(pickle_path).with_suffix('.pt')) \ No newline at end of file diff --git 
a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/model.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/model.py deleted file mode 100644 index 6ca30efb2baa3159f1bc1954fe3b882ae4e48d12..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/model.py +++ /dev/null @@ -1,689 +0,0 @@ -import math -import random -import torch -from torch import nn -from torch.nn import functional as F - -from .op.fused_act import FusedLeakyReLU, fused_leaky_relu -from .op.upfirdn2d import upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, - down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, - down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = 
fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=( - pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d( - input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, 
_, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size // 2)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d( - in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res // 2] - self.noises.register_buffer( - "noise_{}".format(layer_idx), torch.randn(*shape) - ) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = 
[torch.randn(1, 1, 2 ** 2, 2 ** 2 // 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn( - 1, 1, 2 ** i, 2 ** i // 2, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - return_features=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * - (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - # latent = styles[0].unsqueeze(0) - # if latent.shape[1] == 1: - # latent = latent.repeat(1, inject_index, 1) - # else: - # latent = latent[:, :inject_index, :] - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat( - 1, self.n_latent - inject_index, 1) - # latent = styles[0][:, :inject_index, :] - # latent2 = styles[1][:, inject_index:, :] - latent = torch.cat([latent, latent2], 1) - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - elif return_features: - return image, out - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = 
self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4 // 2, - channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out diff --git a/spaces/DragGan/DragGan/torch_utils/__init__.py b/spaces/DragGan/DragGan/torch_utils/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/torch_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -# empty diff --git a/spaces/Duskfallcrew/EpicMix_Realism_WebUi/app.py b/spaces/Duskfallcrew/EpicMix_Realism_WebUi/app.py deleted file mode 100644 index 59f9d6798d3fcb3fa9263d4d372af2d1d72d5386..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/EpicMix_Realism_WebUi/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'Duskfallcrew/EpicMix_Realism' -prefix = '' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
- <div class="main-div">
- <div>
- <h1>Epicmix Realism</h1>
- </div>
- <p>
- Demo for Epicmix Realism Stable Diffusion model. Running on free CPU; if there's a queue, make sure you duplicate the space to your own, and if you've got the funds, upgrade to GPU. No prefix tokens. If you like what you see, consider donating here: Ko-Fi Duskfallcrew<br>
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
- </p>
- Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space<br><br>
- Duplicate Space
- </div>
      - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
- <div class="footer">
- <p>This space was created using SD Space Creator.</p>
- </div>
      - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/Egrt/GCycleGAN/app.py b/spaces/Egrt/GCycleGAN/app.py deleted file mode 100644 index 5a356c343c0374fa5cedae4bc7da2fa3c0324d20..0000000000000000000000000000000000000000 --- a/spaces/Egrt/GCycleGAN/app.py +++ /dev/null @@ -1,39 +0,0 @@ -''' -Author: Egrt -Date: 2022-01-13 13:34:10 -LastEditors: Egrt -LastEditTime: 2022-10-31 17:08:34 -FilePath: \MaskGAN\app.py -''' -from cyclegan import CYCLEGAN -import gradio as gr -import os - -cyclegan = CYCLEGAN() - -# --------模型推理---------- # -''' -description: -param {*} img 戴眼镜的人脸图片 Image -return {*} r_image 去遮挡的人脸图片 Image -''' -def inference(img): - r_image = cyclegan.detect_image(img) - return r_image - -# --------网页信息---------- # -title = "融合无监督的戴眼镜遮挡人脸重建" -description = "使用生成对抗网络对戴眼镜遮挡人脸重建,能够有效地去除眼镜遮挡。 @西南科技大学智能控制与图像处理研究室" -article = "

<p style='text-align: center;'>DeMaskGAN: Face Restoration Using Swin Transformer</p>
      " -example_img_dir = 'img' -example_img_name = os.listdir(example_img_dir) -examples=[os.path.join(example_img_dir, image_path) for image_path in example_img_name if image_path.endswith(('.jpg','.jpeg'))] -gr.Interface( - inference, - gr.inputs.Image(type="pil", label="Input", tool="editor"), - gr.outputs.Image(type="pil", label="Output").style(height=242), - title=title, - description=description, - article=article, - examples=examples - ).launch() diff --git a/spaces/EyanAn/vits-uma-genshin-honkai/text/__init__.py b/spaces/EyanAn/vits-uma-genshin-honkai/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/EyanAn/vits-uma-genshin-honkai/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/app.py b/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/app.py deleted file mode 100644 index 602f7c4bb1db3438a00519a61a4a484862c5fa98..0000000000000000000000000000000000000000 --- a/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/app.py +++ /dev/null @@ -1,200 +0,0 @@ -# Importing required Libraries -import streamlit as st -import pandas as pd -import numpy as np -import os, pickle -from sklearn import preprocessing - -# Setting up page configuration and directory path -st.set_page_config(page_title="Sales Forecasting App", page_icon="🐞", layout="centered") -DIRPATH = os.path.dirname(os.path.realpath(__file__)) - -# Setting background image -import base64 -def add_bg_from_local(image_file): - with open(image_file, "rb") as image_file: - encoded_string = base64.b64encode(image_file.read()) - st.markdown( - f""" - - """, - unsafe_allow_html=True - ) -add_bg_from_local('background.jpg') - -# Setting up logo -left1, mid, right1 = st.columns(3) -with mid: - st.image("logo.jpg", use_column_width=True) - -# Setting up Sidebar -social_acc = ['Data Field Description', 'EDA', 'About App'] -social_acc_nav = st.sidebar.radio('**INFORMATION SECTION**', social_acc) - -if social_acc_nav == 'Data Field Description': - st.sidebar.markdown("

<h2 style='text-align: center;'>Data Field Description</h2>
      ", unsafe_allow_html=True) - st.sidebar.markdown("**Date:** The date you want to predict sales for") - st.sidebar.markdown("**Family:** identifies the type of product sold") - st.sidebar.markdown("**Onpromotion:** gives the total number of items in a product family that are being promoted at a store at a given date") - st.sidebar.markdown("**Store Number:** identifies the store at which the products are sold") - st.sidebar.markdown("**Holiday Locale:** provide information about the locale where holiday is celebrated") - -elif social_acc_nav == 'EDA': - st.sidebar.markdown("

<h2 style='text-align: center;'>Exploratory Data Analysis</h2>
      ", unsafe_allow_html=True) - st.sidebar.markdown('''---''') - st.sidebar.markdown('''The exploratory data analysis of this project can be find in a Jupyter notebook from the linl below''') - st.sidebar.markdown("[Open Notebook](https://github.com/Kyei-frank/Regression-Project-Store-Sales--Time-Series-Forecasting/blob/main/project_workflow.ipynb)") - -elif social_acc_nav == 'About App': - st.sidebar.markdown("

<h2 style='text-align: center;'>Sales Forecasting App</h2>
      ", unsafe_allow_html=True) - st.sidebar.markdown('''---''') - st.sidebar.markdown("This App predicts the sales for product families sold at Favorita stores using regression model.") - st.sidebar.markdown("") - st.sidebar.markdown("[ Visit Github Repository for more information](https://github.com/Kyei-frank/Regression-Project-Store-Sales--Time-Series-Forecasting)") - -# Loading Machine Learning Objects -@st.cache() -def load_saved_objects(file_path = 'ML_items'): - # Function to load saved objects - with open('ML_items', 'rb') as file: - loaded_object = pickle.load(file) - - return loaded_object - -# Instantiating ML_items -Loaded_object = load_saved_objects(file_path = 'ML_items') -pipeline, train_data, stores, holidays_event = Loaded_object['pipeline'], Loaded_object['train_data'], Loaded_object['stores'], Loaded_object['holidays_event'] - -# Setting Function for extracting Calendar features -@st.cache() -def getDateFeatures(df, date): - df['date'] = pd.to_datetime(df['date']) - df['month'] = df.date.dt.month - df['day_of_month'] = df.date.dt.day - df['day_of_year'] = df.date.dt.dayofyear - df['week_of_year'] = df.date.dt.isocalendar().week - df['day_of_week'] = df.date.dt.dayofweek - df['year'] = df.date.dt.year - df['is_weekend']= np.where(df['day_of_week'] > 4, 1, 0) - df['is_month_start']= df.date.dt.is_month_start.astype(int) - df['is_month_end']= df.date.dt.is_month_end.astype(int) - df['quarter']= df.date.dt.quarter - df['is_quarter_start']= df.date.dt.is_quarter_start.astype(int) - df['is_quarter_end']= df.date.dt.is_quarter_end.astype(int) - df['is_year_start']= df.date.dt.is_year_start.astype(int) - - return df - -# Setting up variables for input data -@st.cache() -def setup(tmp_df_file): - "Setup the required elements like files, models, global variables, etc" - pd.DataFrame( - dict( - date=[], - store_nbr=[], - family=[], - onpromotion=[], - city=[], - state=[], - store_type=[], - cluster=[], - day_type=[], - locale=[], - locale_name=[], - ) - ).to_csv(tmp_df_file, index=False) - -# Setting up a file to save our input data -tmp_df_file = os.path.join(DIRPATH, "tmp", "data.csv") -setup(tmp_df_file) - -# setting Title for forms -st.markdown("

<h1 style='text-align: center;'>Sales Prediction</h1>
      ", unsafe_allow_html=True) -st.markdown(" Fill in the details below and click on SUBMIT button to make a prediction for a specific date and item ", unsafe_allow_html=True) - -# Creating columns for for input data(forms) -left_col, mid_col, right_col = st.columns(3) - -# Developing forms to collect input data -with st.form(key="information", clear_on_submit=True): - - # Setting up input data for 1st column - left_col.markdown("**PRODUCT DATA**") - date = left_col.date_input("Prediction Date:") - family = left_col.selectbox("Item family:", options= list(train_data["family"].unique())) - onpromotion = left_col.selectbox("Onpromotion code:", options= set(train_data["onpromotion"].unique())) - store_nbr = left_col.selectbox("Store Number:", options= set(stores["store_nbr"].unique())) - - # Setting up input data for 2nd column - mid_col.markdown("**STORE DATA**") - city = mid_col.selectbox("City:", options= set(stores["city"].unique())) - state = mid_col.selectbox("State:", options= list(stores["state"].unique())) - cluster = mid_col.selectbox("Store Cluster:", options= list(stores["cluster"].unique())) - store_type = mid_col.radio("Store Type:", options= set(stores["store_type"].unique()), horizontal = True) - - # Setting up input data for 3rd column - right_col.markdown("**ADDITIONAL DATA**") - check= right_col.checkbox("Is it a Holiday or weekend?") - if check: - right_col.write('Fill the following information on Day Type') - day_type = right_col.selectbox("Holiday:", options= ('Holiday','Special Day:Transfered/Additional Holiday','No Work/Weekend')) - locale= right_col.selectbox("Holiday Locale:", options= list(holidays_event["locale"].unique())) - locale_name= right_col.selectbox("Locale Name:", options= list(holidays_event["locale_name"].unique())) - else: - day_type = 'Workday' - locale = 'National' - locale_name= 'Ecuador' - - submitted = st.form_submit_button(label="Submit") - -# Setting up background operations after submitting forms -if submitted: - # Saving input data as csv file after submission - pd.read_csv(tmp_df_file).append( - dict( - date = date, - store_nbr = store_nbr, - family=family, - onpromotion= onpromotion, - city=city, - state=state, - store_type=store_type, - cluster=cluster, - day_type=day_type, - locale=locale, - locale_name=locale_name - ), - ignore_index=True, - ).to_csv(tmp_df_file, index=False) - st.balloons() - - # Converting input data to a dataframe for prediction - df = pd.read_csv(tmp_df_file) - df= df.copy() - - # Getting date Features - processed_data= getDateFeatures(df, 'date') - processed_data= processed_data.drop(columns=['date']) - - # Making predictions - prediction = pipeline.predict(processed_data) - df['Sales']= prediction - - # Displaying prediction results - st.markdown('''---''') - st.markdown("

<h2 style='text-align: center;'>Prediction Results</h2>
      ", unsafe_allow_html=True) - st.success(f"Predicted Sales: {prediction[-1]}") - st.markdown('''---''') - - # Making expander to view all records - expander = st.expander("See all records") - with expander: - df = pd.read_csv(tmp_df_file) - df['Sales']= prediction - st.dataframe(df) diff --git a/spaces/Faridmaruf/RVCV2MODEL/README.md b/spaces/Faridmaruf/RVCV2MODEL/README.md deleted file mode 100644 index 0f1f1bd01815847d73817285f9cca4b534813f1a..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/RVCV2MODEL/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: RVC V2 Genshin Impact -emoji: 🎤 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: true -license: mit -duplicated_from: ArkanDash/rvc-models-new ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FoxMeo/fire-detector/utils/general.py b/spaces/FoxMeo/fire-detector/utils/general.py deleted file mode 100644 index decdcc64ecd72927bc6c185683977854e593711d..0000000000000000000000000000000000000000 --- a/spaces/FoxMeo/fire-detector/utils/general.py +++ /dev/null @@ -1,892 +0,0 @@ -# YOLOR general utils - -import glob -import logging -import math -import os -import platform -import random -import re -import subprocess -import time -from pathlib import Path - -import cv2 -import numpy as np -import pandas as pd -import torch -import torchvision -import yaml - -from utils.google_utils import gsutil_getsize -from utils.metrics import fitness -from utils.torch_utils import init_torch_seeds - -# Settings -torch.set_printoptions(linewidth=320, precision=5, profile='long') -np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 -pd.options.display.max_columns = 10 -cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader) -os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads - - -def set_logging(rank=-1): - logging.basicConfig( - format="%(message)s", - level=logging.INFO if rank in [-1, 0] else logging.WARN) - - -def init_seeds(seed=0): - # Initialize random number generator (RNG) seeds - random.seed(seed) - np.random.seed(seed) - init_torch_seeds(seed) - - -def get_latest_run(search_dir='.'): - # Return path to most recent 'last.pt' in /runs (i.e. 
to --resume from)
- last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(last_list, key=os.path.getctime) if last_list else ''
-
-
-def isdocker():
- # Is environment a Docker container
- return Path('/workspace').exists() # or Path('/.dockerenv').exists()
-
-
-def emojis(str=''):
- # Return platform-dependent emoji-safe version of string
- return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str
-
-
-def check_online():
- # Check internet connectivity
- import socket
- try:
- socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility
- return True
- except OSError:
- return False
-
-
-def check_git_status():
- # Recommend 'git pull' if code is out of date
- print(colorstr('github: '), end='')
- try:
- assert Path('.git').exists(), 'skipping check (not a git repository)'
- assert not isdocker(), 'skipping check (Docker image)'
- assert check_online(), 'skipping check (offline)'
-
- cmd = 'git fetch && git config --get remote.origin.url'
- url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url
- branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
- n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind
- if n > 0:
- s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \
- f"Use 'git pull' to update or 'git clone {url}' to download latest."
- else:
- s = f'up to date with {url} ✅'
- print(emojis(s)) # emoji-safe
- except Exception as e:
- print(e)
-
-
-def check_requirements(requirements='requirements.txt', exclude=()):
- # Check installed dependencies meet requirements (pass *.txt file or list of packages)
- import pkg_resources as pkg
- prefix = colorstr('red', 'bold', 'requirements:')
- if isinstance(requirements, (str, Path)): # requirements.txt file
- file = Path(requirements)
- if not file.exists():
- print(f"{prefix} {file.resolve()} not found, check failed.")
- return
- requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude]
- else: # list or tuple of packages
- requirements = [x for x in requirements if x not in exclude]
-
- n = 0 # number of package updates
- for r in requirements:
- try:
- pkg.require(r)
- except Exception as e: # DistributionNotFound or VersionConflict if requirements not met
- n += 1
- print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...")
- print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode())
-
- if n: # if packages updated
- source = file.resolve() if 'file' in locals() else requirements
- s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
- f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
- print(emojis(s)) # emoji-safe
-
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
- if new_size != img_size:
- print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size))
- return new_size
-
-
-def check_imshow():
- # Check if environment supports image displays
- try:
- assert not isdocker(), 'cv2.imshow() is disabled in Docker environments'
- cv2.imshow('test', np.zeros((1, 1, 3)))
- cv2.waitKey(1)
- cv2.destroyAllWindows()
- cv2.waitKey(1)
- return True
- except Exception as e:
- print(f'WARNING: 
Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}') - return False - - -def check_file(file): - # Search for file if not found - if Path(file).is_file() or file == '': - return file - else: - files = glob.glob('./**/' + file, recursive=True) # find file - assert len(files), f'File Not Found: {file}' # assert file was found - assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique - return files[0] # return file - - -def check_dataset(dict): - # Download dataset if not found locally - val, s = dict.get('val'), dict.get('download') - if val and len(val): - val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path - if not all(x.exists() for x in val): - print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()]) - if s and len(s): # download script - print('Downloading %s ...' % s) - if s.startswith('http') and s.endswith('.zip'): # URL - f = Path(s).name # filename - torch.hub.download_url_to_file(s, f) - r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip - else: # bash script - r = os.system(s) - print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value - else: - raise Exception('Dataset not found.') - - -def make_divisible(x, divisor): - # Returns x evenly divisible by divisor - return math.ceil(x / divisor) * divisor - - -def clean_str(s): - # Cleans a string by replacing special characters with underscore _ - return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s) - - -def one_cycle(y1=0.0, y2=1.0, steps=100): - # lambda function for sinusoidal ramp from y1 to y2 - return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1 - - -def colorstr(*input): - # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. 
colorstr('blue', 'hello world') - *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string - colors = {'black': '\033[30m', # basic colors - 'red': '\033[31m', - 'green': '\033[32m', - 'yellow': '\033[33m', - 'blue': '\033[34m', - 'magenta': '\033[35m', - 'cyan': '\033[36m', - 'white': '\033[37m', - 'bright_black': '\033[90m', # bright colors - 'bright_red': '\033[91m', - 'bright_green': '\033[92m', - 'bright_yellow': '\033[93m', - 'bright_blue': '\033[94m', - 'bright_magenta': '\033[95m', - 'bright_cyan': '\033[96m', - 'bright_white': '\033[97m', - 'end': '\033[0m', # misc - 'bold': '\033[1m', - 'underline': '\033[4m'} - return ''.join(colors[x] for x in args) + f'{string}' + colors['end'] - - -def labels_to_class_weights(labels, nc=80): - # Get class weights (inverse frequency) from training labels - if labels[0] is None: # no labels loaded - return torch.Tensor() - - labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO - classes = labels[:, 0].astype(np.int32) # labels = [class xywh] - weights = np.bincount(classes, minlength=nc) # occurrences per class - - # Prepend gridpoint count (for uCE training) - # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image - # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start - - weights[weights == 0] = 1 # replace empty bins with 1 - weights = 1 / weights # number of targets per class - weights /= weights.sum() # normalize - return torch.from_numpy(weights) - - -def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): - # Produces image weights based on class_weights and image contents - class_counts = np.array([np.bincount(x[:, 0].astype(np.int32), minlength=nc) for x in labels]) - image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1) - # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample - return image_weights - - -def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) - # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ - # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n') - # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') - # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco - # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet - x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, - 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, - 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] - return x - - -def xyxy2xywh(x): - # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - -def xywh2xyxy(x): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = x[:, 1] + x[:, 3] / 2 # 
bottom right y
- return y
-
-
-def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
- # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
- y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
- y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
- y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
- return y
-
-
-def xyn2xy(x, w=640, h=640, padw=0, padh=0):
- # Convert normalized segments into pixel segments, shape (n,2)
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * x[:, 0] + padw # top left x
- y[:, 1] = h * x[:, 1] + padh # top left y
- return y
-
-
-def segment2box(segment, width=640, height=640):
- # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
- x, y = segment.T # segment xy
- inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
- x, y = x[inside], y[inside]
- return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
-
-
-def segments2boxes(segments):
- # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
- boxes = []
- for s in segments:
- x, y = s.T # segment xy
- boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
- return xyxy2xywh(np.array(boxes)) # cls, xywh
-
-
-def resample_segments(segments, n=1000):
- # Up-sample an (n,2) segment
- for i, s in enumerate(segments):
- s = np.concatenate((s, s[0:1, :]), axis=0)
- x = np.linspace(0, len(s) - 1, n)
- xp = np.arange(len(s))
- segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy
- return segments
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
-
-
-def clip_coords(boxes, img_shape):
- # Clip xyxy bounding boxes to image shape (height, width)
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns the IoU of box1 to box2. 
box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- iou = inter / union
-
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU
- else:
- return iou # IoU
-
-
-
-
-def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9):
- # Returns the IoU of box1 to box2. 
box1 is 4, box2 is nx4 - box2 = box2.T - - # Get the coordinates of bounding boxes - if x1y1x2y2: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - else: # transform from xywh to xyxy - b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 - b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 - b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 - b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - union = w1 * h1 + w2 * h2 - inter + eps - - # change iou into pow(iou+eps) - # iou = inter / union - iou = torch.pow(inter/union + eps, alpha) - # beta = 2 * alpha - if GIoU or DIoU or CIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal - rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2) - rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2) - rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance - if DIoU: - return iou - rho2 / c2 # DIoU - elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - with torch.no_grad(): - alpha_ciou = v / ((1 + eps) - inter / union + v) - # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU - return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU - else: # GIoU https://arxiv.org/pdf/1902.09630.pdf - # c_area = cw * ch + eps # convex area - # return iou - (c_area - union) / c_area # GIoU - c_area = torch.max(cw * ch + eps, union) # convex area - return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU - else: - return iou # torch.log(iou+eps) or iou - - -def box_iou(box1, box2): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter) - - -def wh_iou(wh1, wh2): - # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter) - - -def box_giou(box1, box2): - """ - Return generalized intersection-over-union (Jaccard index) between two sets of boxes. 
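- GIoU subtracts from the IoU a penalty for the empty area of the smallest enclosing box: GIoU = IoU - (enclosing_area - union) / enclosing_area.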
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - Returns: - Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values - for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - union = (area1[:, None] + area2 - inter) - - iou = inter / union - - lti = torch.min(box1[:, None, :2], box2[:, :2]) - rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) - - whi = (rbi - lti).clamp(min=0) # [N,M,2] - areai = whi[:, :, 0] * whi[:, :, 1] - - return iou - (areai - union) / areai - - -def box_ciou(box1, box2, eps: float = 1e-7): - """ - Return complete intersection-over-union (Jaccard index) between two sets of boxes. - Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - eps (float, optional): small number to prevent division by zero. Default: 1e-7 - Returns: - Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values - for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - union = (area1[:, None] + area2 - inter) - - iou = inter / union - - lti = torch.min(box1[:, None, :2], box2[:, :2]) - rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) - - whi = (rbi - lti).clamp(min=0) # [N,M,2] - diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps - - # centers of boxes - x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2 - y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2 - x_g = (box2[:, 0] + box2[:, 2]) / 2 - y_g = (box2[:, 1] + box2[:, 3]) / 2 - # The distance between boxes' centers squared. - centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2 - - w_pred = box1[:, None, 2] - box1[:, None, 0] - h_pred = box1[:, None, 3] - box1[:, None, 1] - - w_gt = box2[:, 2] - box2[:, 0] - h_gt = box2[:, 3] - box2[:, 1] - - v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2) - with torch.no_grad(): - alpha = v / (1 - iou + v + eps) - return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v - - -def box_diou(box1, box2, eps: float = 1e-7): - """ - Return distance intersection-over-union (Jaccard index) between two sets of boxes. - Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - eps (float, optional): small number to prevent division by zero. 
Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- # The distance IoU is the IoU penalized by a normalized
- # distance between boxes' centers squared.
- return iou - (centers_distance_squared / diagonal_distance_squared)
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=()):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain, process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- if nc == 1:
- x[:, 5:] = x[:, 4:5] # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
- # so there is no need to multiply. 
- else: - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - elif n > max_nms: # excess boxes - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - print(f'WARNING: NMS time limit {time_limit}s exceeded') - break # time limit exceeded - - return output - - -def non_max_suppression_kpt(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, - labels=(), kpt_label=False, nc=None, nkpt=None): - """Runs Non-Maximum Suppression (NMS) on inference results - - Returns: - list of detections, on (n,6) tensor per image [xyxy, conf, cls] - """ - if nc is None: - nc = prediction.shape[2] - 5 if not kpt_label else prediction.shape[2] - 56 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Settings - min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - max_det = 300 # maximum number of detections per image - max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.time() - output = [torch.zeros((0,6), device=prediction.device)] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - l = labels[xi] - v = torch.zeros((len(l), nc + 5), device=x.device) - v[:, :4] = l[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:5+nc] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > 
conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - if not kpt_label: - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - else: - kpts = x[:, 6:] - conf, j = x[:, 5:6].max(1, keepdim=True) - x = torch.cat((box, conf, j.float(), kpts), 1)[conf.view(-1) > conf_thres] - - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - elif n > max_nms: # excess boxes - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - print(f'WARNING: NMS time limit {time_limit}s exceeded') - break # time limit exceeded - - return output - - -def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer() - # Strip optimizer from 'f' to finalize training, optionally save as 's' - x = torch.load(f, map_location=torch.device('cpu')) - if x.get('ema'): - x['model'] = x['ema'] # replace model with ema - for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys - x[k] = None - x['epoch'] = -1 - x['model'].half() # to FP16 - for p in x['model'].parameters(): - p.requires_grad = False - torch.save(x, s or f) - mb = os.path.getsize(s or f) / 1E6 # filesize - print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB") - - -def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''): - # Print mutation results to evolve.txt (for use with train.py --evolve) - a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys - b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c)) - - if bucket: - url = 'gs://%s/evolve.txt' % bucket - if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0): - os.system('gsutil cp %s .' 
% url) # download evolve.txt if larger than local - - with open('evolve.txt', 'a') as f: # append result - f.write(c + b + '\n') - x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows - x = x[np.argsort(-fitness(x))] # sort - np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness - - # Save yaml - for i, k in enumerate(hyp.keys()): - hyp[k] = float(x[0, i + 7]) - with open(yaml_file, 'w') as f: - results = tuple(x[0, :7]) - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n') - yaml.dump(hyp, f, sort_keys=False) - - if bucket: - os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload - - -def apply_classifier(x, model, img, im0): - # applies a second stage classifier to yolo outputs - im0 = [im0] if isinstance(im0, np.ndarray) else im0 - for i, d in enumerate(x): # per image - if d is not None and len(d): - d = d.clone() - - # Reshape and pad cutouts - b = xyxy2xywh(d[:, :4]) # boxes - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square - b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad - d[:, :4] = xywh2xyxy(b).long() - - # Rescale boxes from img_size to im0 size - scale_coords(img.shape[2:], d[:, :4], im0[i].shape) - - # Classes - pred_cls1 = d[:, 5].long() - ims = [] - for j, a in enumerate(d): # per item - cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] - im = cv2.resize(cutout, (224, 224)) # BGR - # cv2.imwrite('test%i.jpg' % j, cutout) - - im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32 - im /= 255.0 # 0 - 255 to 0.0 - 1.0 - ims.append(im) - - pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction - x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections - - return x - - -def increment_path(path, exist_ok=True, sep=''): - # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc. 
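- # Returns the path unchanged if it is free (or exist_ok=True); otherwise appends the next free index, starting at 2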
- path = Path(path) # os-agnostic - if (path.exists() and exist_ok) or (not path.exists()): - return str(path) - else: - dirs = glob.glob(f"{path}{sep}*") # similar paths - matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs] - i = [int(m.groups()[0]) for m in matches if m] # indices - n = max(i) + 1 if i else 2 # increment number - return f"{path}{sep}{n}" # update path diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Frantz103/CaptionQuest/README.md b/spaces/Frantz103/CaptionQuest/README.md deleted file mode 100644 index 096b0f7a56a2ea04bd7a7372f59bfa87fb94d028..0000000000000000000000000000000000000000 --- a/spaces/Frantz103/CaptionQuest/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CaptionQuest -emoji: ⚡ -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/preset/__init__.py b/spaces/GaenKoki/voicevox/voicevox_engine/preset/__init__.py deleted file mode 100644 index 8c485e2fbfbcdd660d869ccc36483d6ace6272ec..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/preset/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .Preset import Preset -from .PresetError import PresetError -from .PresetManager import PresetManager - -__all__ = [ - "Preset", - "PresetManager", - "PresetError", -] diff --git a/spaces/Galax/schafter_x_billy/app.py b/spaces/Galax/schafter_x_billy/app.py deleted file mode 100644 index e2ae149fc951bc3e1620139a5bc6864670e4795e..0000000000000000000000000000000000000000 --- a/spaces/Galax/schafter_x_billy/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import gradio as gr -from transformers import pipeline -from huggingface_hub import login -import os - -api_key = os.getenv("api_key_read") -model = os.getenv("model_repo") -login(api_key) -pipe = pipeline( - "audio-classification", - model=model, - chunk_length_s = 30, - stride_length_s = 5, - batch_size = 1, - api_key = api_key, -) - -examples = [] -for file in os.listdir("examples"): - examples.append(f'examples//{file}') - -def classify_audio(filepath): - preds = pipe(filepath) - outputs = {} - for p in preds: - outputs[p["label"]] = p["score"] - return outputs - -import gradio as gr - -demo = gr.Interface( - fn=classify_audio, inputs=gr.Audio(type="filepath"),examples = examples, outputs=gr.outputs.Label() -) -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/GeorgeOrville/bingo/src/components/toaster.tsx b/spaces/GeorgeOrville/bingo/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/README.md deleted file mode 100644 index 9960dcf9c16038db3d8379ab910d2cfbe85d22de..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/README.md +++ /dev/null @@ -1,35 +0,0 @@ -# Probabilistic Anchor Assignment with IoU 
Prediction for Object Detection - -[ALGORITHM] - -```latex -@inproceedings{paa-eccv2020, - title={Probabilistic Anchor Assignment with IoU Prediction for Object Detection}, - author={Kim, Kang and Lee, Hee Seok}, - booktitle = {ECCV}, - year={2020} -} -``` - -## Results and Models - -We provide config files to reproduce the object detection results in the -ECCV 2020 paper for Probabilistic Anchor Assignment with IoU -Prediction for Object Detection. - -| Backbone | Lr schd | Mem (GB) | Score voting | box AP | Config | Download | -|:-----------:|:-------:|:--------:|:------------:|:------:|:------:|:--------:| -| R-50-FPN | 12e | 3.7 | True | 40.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1x_coco/paa_r50_fpn_1x_coco_20200821-936edec3.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1x_coco/paa_r50_fpn_1x_coco_20200821-936edec3.log.json) | -| R-50-FPN | 12e | 3.7 | False | 40.2 | - | -| R-50-FPN | 18e | 3.7 | True | 41.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r50_fpn_1.5x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1.5x_coco/paa_r50_fpn_1.5x_coco_20200823-805d6078.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1.5x_coco/paa_r50_fpn_1.5x_coco_20200823-805d6078.log.json) | -| R-50-FPN | 18e | 3.7 | False | 41.2 | - | -| R-50-FPN | 24e | 3.7 | True | 41.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r50_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_2x_coco/paa_r50_fpn_2x_coco_20200821-c98bfc4e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_2x_coco/paa_r50_fpn_2x_coco_20200821-c98bfc4e.log.json) | -| R-50-FPN | 36e | 3.7 | True | 43.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r50_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_mstrain_3x_coco/paa_r50_fpn_mstrain_3x_coco_20210121_145722-06a6880b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_mstrain_3x_coco/paa_r50_fpn_mstrain_3x_coco_20210121_145722.log.json) | -| R-101-FPN | 12e | 6.2 | True | 42.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_1x_coco/paa_r101_fpn_1x_coco_20200821-0a1825a4.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_1x_coco/paa_r101_fpn_1x_coco_20200821-0a1825a4.log.json) | -| R-101-FPN | 12e | 6.2 | False | 42.4 | - | -| R-101-FPN | 24e | 6.2 | True | 43.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r101_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_2x_coco/paa_r101_fpn_2x_coco_20200821-6829f96b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_2x_coco/paa_r101_fpn_2x_coco_20200821-6829f96b.log.json) | -| R-101-FPN | 36e | 6.2 | True | 45.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r101_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_mstrain_3x_coco/paa_r101_fpn_mstrain_3x_coco_20210122_084202-83250d22.pth) | 
[log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_mstrain_3x_coco/paa_r101_fpn_mstrain_3x_coco_20210122_084202.log.json) | - -**Note**: - -1. We find that the performance is unstable with 1x setting and may fluctuate by about 0.2 mAP. We report the best results. diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/README.md b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/README.md deleted file mode 100644 index b89ac6d7b2ed2da1788b7400a121f6509774baf8..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Adaptive Pyramid Context Network for Semantic Segmentation - -## Introduction - - - -```latex -@InProceedings{He_2019_CVPR, -author = {He, Junjun and Deng, Zhongying and Zhou, Lei and Wang, Yali and Qiao, Yu}, -title = {Adaptive Pyramid Context Network for Semantic Segmentation}, -booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, -month = {June}, -year = {2019} -} -``` - -## Results and models - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | --------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| APCNet | R-50-D8 | 512x1024 | 40000 | 7.7 | 3.57 | 78.02 | 79.26 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x1024_40k_cityscapes/apcnet_r50-d8_512x1024_40k_cityscapes_20201214_115717-5e88fa33.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x1024_40k_cityscapes/apcnet_r50-d8_512x1024_40k_cityscapes-20201214_115717.log.json) | -| APCNet | R-101-D8 | 512x1024 | 40000 | 11.2 | 2.15 | 79.08 | 80.34 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x1024_40k_cityscapes/apcnet_r101-d8_512x1024_40k_cityscapes_20201214_115716-abc9d111.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x1024_40k_cityscapes/apcnet_r101-d8_512x1024_40k_cityscapes-20201214_115716.log.json) | -| APCNet | R-50-D8 | 769x769 | 40000 | 8.7 | 1.52 | 77.89 | 79.75 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_769x769_40k_cityscapes/apcnet_r50-d8_769x769_40k_cityscapes_20201214_115717-2a2628d7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_769x769_40k_cityscapes/apcnet_r50-d8_769x769_40k_cityscapes-20201214_115717.log.json) | -| APCNet | R-101-D8 | 769x769 | 40000 | 12.7 | 1.03 | 77.96 | 79.24 | 
[config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_769x769_40k_cityscapes/apcnet_r101-d8_769x769_40k_cityscapes_20201214_115718-b650de90.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_769x769_40k_cityscapes/apcnet_r101-d8_769x769_40k_cityscapes-20201214_115718.log.json) | -| APCNet | R-50-D8 | 512x1024 | 80000 | - | - | 78.96 | 79.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes/apcnet_r50-d8_512x1024_80k_cityscapes_20201214_115716-987f51e3.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes/apcnet_r50-d8_512x1024_80k_cityscapes-20201214_115716.log.json) | -| APCNet | R-101-D8 | 512x1024 | 80000 | - | - | 79.64 | 80.61 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x1024_80k_cityscapes/apcnet_r101-d8_512x1024_80k_cityscapes_20201214_115705-b1ff208a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x1024_80k_cityscapes/apcnet_r101-d8_512x1024_80k_cityscapes-20201214_115705.log.json) | -| APCNet | R-50-D8 | 769x769 | 80000 | - | - | 78.79 | 80.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_769x769_80k_cityscapes/apcnet_r50-d8_769x769_80k_cityscapes_20201214_115718-7ea9fa12.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_769x769_80k_cityscapes/apcnet_r50-d8_769x769_80k_cityscapes-20201214_115718.log.json) | -| APCNet | R-101-D8 | 769x769 | 80000 | - | - | 78.45 | 79.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_769x769_80k_cityscapes/apcnet_r101-d8_769x769_80k_cityscapes_20201214_115716-a7fbc2ab.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_769x769_80k_cityscapes/apcnet_r101-d8_769x769_80k_cityscapes-20201214_115716.log.json) | - -### ADE20K - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| APCNet | R-50-D8 | 512x512 | 80000 | 10.1 | 19.61 | 42.20 | 43.30 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_512x512_80k_ade20k.py) | 
[model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x512_80k_ade20k/apcnet_r50-d8_512x512_80k_ade20k_20201214_115705-a8626293.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x512_80k_ade20k/apcnet_r50-d8_512x512_80k_ade20k-20201214_115705.log.json) | -| APCNet | R-101-D8 | 512x512 | 80000 | 13.6 | 13.10 | 45.54 | 46.65 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x512_80k_ade20k/apcnet_r101-d8_512x512_80k_ade20k_20201214_115704-c656c3fb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x512_80k_ade20k/apcnet_r101-d8_512x512_80k_ade20k-20201214_115704.log.json) | -| APCNet | R-50-D8 | 512x512 | 160000 | - | - | 43.40 | 43.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x512_160k_ade20k/apcnet_r50-d8_512x512_160k_ade20k_20201214_115706-25fb92c2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x512_160k_ade20k/apcnet_r50-d8_512x512_160k_ade20k-20201214_115706.log.json) | -| APCNet | R-101-D8 | 512x512 | 160000 | - | - | 45.41 | 46.63 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x512_160k_ade20k/apcnet_r101-d8_512x512_160k_ade20k_20201214_115705-73f9a8d7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x512_160k_ade20k/apcnet_r101-d8_512x512_160k_ade20k-20201214_115705.log.json) | diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_160k_ade20k.py deleted file mode 100644 index 1ad94d8988bb822c1571816255464126d9d5b95d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/ccnet_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -model = dict( - decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/pascal_context.py deleted file mode 100644 index dc49ab7ad8fd359c458ec4b6190ed61851426031..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/pascal_context.py +++ /dev/null @@ -1,86 +0,0 @@ -import argparse -import os.path as osp -from functools import partial - -import mmcv -import numpy as np -from detail import Detail -from PIL import Image - -_mapping = np.sort( - np.array([ - 0, 2, 259, 260, 415, 324, 9, 258, 144, 18, 19, 22, 23, 397, 25, 284, - 158, 159, 416, 33, 162, 420, 454, 295, 296, 427, 44, 45, 46, 308, 59, - 440, 445, 31, 232, 65, 354, 424, 68, 326, 72, 458, 34, 207, 80, 355, - 85, 347, 220, 349, 360, 98, 187, 104, 105, 366, 189, 368, 113, 115 - ])) -_key = 
np.array(range(len(_mapping))).astype('uint8')
-
-
-def generate_labels(img_id, detail, out_dir):
-
- def _class_to_index(mask, _mapping, _key):
- # assert the values
- values = np.unique(mask)
- for i in range(len(values)):
- assert (values[i] in _mapping)
- index = np.digitize(mask.ravel(), _mapping, right=True)
- return _key[index].reshape(mask.shape)
-
- mask = Image.fromarray(
- _class_to_index(detail.getMask(img_id), _mapping=_mapping, _key=_key))
- filename = img_id['file_name']
- mask.save(osp.join(out_dir, filename.replace('jpg', 'png')))
- return osp.splitext(osp.basename(filename))[0]
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Convert PASCAL VOC annotations to mmsegmentation format')
- parser.add_argument('devkit_path', help='pascal voc devkit path')
- parser.add_argument('json_path', help='annotation json filepath')
- parser.add_argument('-o', '--out_dir', help='output path')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
- devkit_path = args.devkit_path
- if args.out_dir is None:
- out_dir = osp.join(devkit_path, 'VOC2010', 'SegmentationClassContext')
- else:
- out_dir = args.out_dir
- json_path = args.json_path
- mmcv.mkdir_or_exist(out_dir)
- img_dir = osp.join(devkit_path, 'VOC2010', 'JPEGImages')
-
- train_detail = Detail(json_path, img_dir, 'train')
- train_ids = train_detail.getImgs()
-
- val_detail = Detail(json_path, img_dir, 'val')
- val_ids = val_detail.getImgs()
-
- mmcv.mkdir_or_exist(
- osp.join(devkit_path, 'VOC2010/ImageSets/SegmentationContext'))
-
- train_list = mmcv.track_progress(
- partial(generate_labels, detail=train_detail, out_dir=out_dir),
- train_ids)
- with open(
- osp.join(devkit_path, 'VOC2010/ImageSets/SegmentationContext',
- 'train.txt'), 'w') as f:
- f.writelines(line + '\n' for line in sorted(train_list))
-
- val_list = mmcv.track_progress(
- partial(generate_labels, detail=val_detail, out_dir=out_dir), val_ids)
- with open(
- osp.join(devkit_path, 'VOC2010/ImageSets/SegmentationContext',
- 'val.txt'), 'w') as f:
- f.writelines(line + '\n' for line in sorted(val_list))
-
- print('Done!')
-
-
-if __name__ == '__main__':
- main() diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/tests/utils/__init__.py b/spaces/GrandaddyShmax/MusicGen_Plus/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree. 
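For reference, the `np.digitize`-based remapping in `_class_to_index` above can be exercised in isolation. A minimal sketch under toy assumptions — the `_mapping`/`_key` arrays below are small stand-ins, not the real 60-entry PASCAL-Context tables:

```python
import numpy as np

# Toy stand-ins for the real _mapping/_key tables in pascal_context.py
_mapping = np.sort(np.array([0, 2, 9, 18, 25]))        # raw label ids that may appear in a mask
_key = np.array(range(len(_mapping))).astype('uint8')  # contiguous train ids 0..len(_mapping)-1


def class_to_index(mask):
    # every value present in the mask must be a known raw id
    assert np.isin(np.unique(mask), _mapping).all()
    # np.digitize(..., right=True) maps each pixel value to its position in the sorted table
    index = np.digitize(mask.ravel(), _mapping, right=True)
    return _key[index].reshape(mask.shape)


mask = np.array([[0, 2], [25, 9]])
print(class_to_index(mask))  # [[0 1]
                             #  [4 2]]
```

Because `_mapping` is sorted and every mask value is asserted to be present, `digitize(..., right=True)` returns exactly each value's index in the table, so the lookup behaves like a dict-based remap but stays vectorized.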
diff --git a/spaces/Hallucinate/demo/taming/data/image_transforms.py b/spaces/Hallucinate/demo/taming/data/image_transforms.py deleted file mode 100644 index 657ac332174e0ac72f68315271ffbd757b771a0f..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/taming/data/image_transforms.py +++ /dev/null @@ -1,132 +0,0 @@ -import random -import warnings -from typing import Union - -import torch -from torch import Tensor -from torchvision.transforms import RandomCrop, functional as F, CenterCrop, RandomHorizontalFlip, PILToTensor -from torchvision.transforms.functional import _get_image_size as get_image_size - -from taming.data.helper_types import BoundingBox, Image - -pil_to_tensor = PILToTensor() - - -def convert_pil_to_tensor(image: Image) -> Tensor: - with warnings.catch_warnings(): - # to filter PyTorch UserWarning as described here: https://github.com/pytorch/vision/issues/2194 - warnings.simplefilter("ignore") - return pil_to_tensor(image) - - -class RandomCrop1dReturnCoordinates(RandomCrop): - def forward(self, img: Image) -> (BoundingBox, Image): - """ - Additionally to cropping, returns the relative coordinates of the crop bounding box. - Args: - img (PIL Image or Tensor): Image to be cropped. - - Returns: - Bounding box: x0, y0, w, h - PIL Image or Tensor: Cropped image. - - Based on: - torchvision.transforms.RandomCrop, torchvision 1.7.0 - """ - if self.padding is not None: - img = F.pad(img, self.padding, self.fill, self.padding_mode) - - width, height = get_image_size(img) - # pad the width if needed - if self.pad_if_needed and width < self.size[1]: - padding = [self.size[1] - width, 0] - img = F.pad(img, padding, self.fill, self.padding_mode) - # pad the height if needed - if self.pad_if_needed and height < self.size[0]: - padding = [0, self.size[0] - height] - img = F.pad(img, padding, self.fill, self.padding_mode) - - i, j, h, w = self.get_params(img, self.size) - bbox = (j / width, i / height, w / width, h / height) # x0, y0, w, h - return bbox, F.crop(img, i, j, h, w) - - -class Random2dCropReturnCoordinates(torch.nn.Module): - """ - Additionally to cropping, returns the relative coordinates of the crop bounding box. - Args: - img (PIL Image or Tensor): Image to be cropped. - - Returns: - Bounding box: x0, y0, w, h - PIL Image or Tensor: Cropped image. - - Based on: - torchvision.transforms.RandomCrop, torchvision 1.7.0 - """ - - def __init__(self, min_size: int): - super().__init__() - self.min_size = min_size - - def forward(self, img: Image) -> (BoundingBox, Image): - width, height = get_image_size(img) - max_size = min(width, height) - if max_size <= self.min_size: - size = max_size - else: - size = random.randint(self.min_size, max_size) - top = random.randint(0, height - size) - left = random.randint(0, width - size) - bbox = left / width, top / height, size / width, size / height - return bbox, F.crop(img, top, left, size, size) - - -class CenterCropReturnCoordinates(CenterCrop): - @staticmethod - def get_bbox_of_center_crop(width: int, height: int) -> BoundingBox: - if width > height: - w = height / width - h = 1.0 - x0 = 0.5 - w / 2 - y0 = 0. - else: - w = 1.0 - h = width / height - x0 = 0. - y0 = 0.5 - h / 2 - return x0, y0, w, h - - def forward(self, img: Union[Image, Tensor]) -> (BoundingBox, Union[Image, Tensor]): - """ - Additionally to cropping, returns the relative coordinates of the crop bounding box. - Args: - img (PIL Image or Tensor): Image to be cropped. - - Returns: - Bounding box: x0, y0, w, h - PIL Image or Tensor: Cropped image. 
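- The bounding box (x0, y0, w, h) is expressed as fractions of the input width and height, in [0, 1].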
- Based on: - torchvision.transforms.RandomHorizontalFlip (version 1.7.0) - """ - width, height = get_image_size(img) - return self.get_bbox_of_center_crop(width, height), F.center_crop(img, self.size) - - -class RandomHorizontalFlipReturn(RandomHorizontalFlip): - def forward(self, img: Image) -> (bool, Image): - """ - Additionally to flipping, returns a boolean whether it was flipped or not. - Args: - img (PIL Image or Tensor): Image to be flipped. - - Returns: - flipped: whether the image was flipped or not - PIL Image or Tensor: Randomly flipped image. - - Based on: - torchvision.transforms.RandomHorizontalFlip (version 1.7.0) - """ - if torch.rand(1) < self.p: - return True, F.hflip(img) - return False, img diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/utils.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/utils.py deleted file mode 100644 index 71e9b2c99e053e2d4239074a67d64b834898c348..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/utils.py +++ /dev/null @@ -1,57 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm - -matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + "????????") - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/data/resample.sh b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/data/resample.sh deleted file mode 100644 index 8489b0a0056d46a93d24db8dba173ad7a4b8a44a..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/data/resample.sh +++ /dev/null @@ -1,14 +0,0 @@ -input_wav_path='/home/harveen/en/iitm_data/english/wav/' -output_wav_path='/home/harveen/en/iitm_data/english/wav_22k/' -output_sample_rate=22050 - -####################### - -dir=$PWD -parentdir="$(dirname "$dir")" -parentdir="$(dirname "$parentdir")" - -mkdir -p $output_wav_path -python $parentdir/utils/data/resample.py -i $input_wav_path -o $output_wav_path -s $output_sample_rate - -python $parentdir/utils/data/duration.py $output_wav_path diff --git a/spaces/Hina4867/bingo/src/lib/bots/bing/tts.ts b/spaces/Hina4867/bingo/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 
+0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/Hoolbo/bing/README.md b/spaces/Hoolbo/bing/README.md deleted file mode 100644 index aff5a96b89652a3d743dbbc827ae76a1daffd206..0000000000000000000000000000000000000000 --- a/spaces/Hoolbo/bing/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bing稳定版 -emoji: 🦀 -colorFrom: gray -colorTo: indigo -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -稳定版,不一定是最新版 -https://huggingface.co/docs/hub/spaces-config-referenceCheck out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_iitb.sh b/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_iitb.sh deleted file mode 100644 index a884e20839e2a41a57405cb6af362e37bd16ab6f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_iitb.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." 
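As an aside on the TTS class above: its doSpeek loop only ever vocalizes text up to the last sentence boundary of the streamed string. A hedged Python sketch of that chunking rule (helper name and boundary set are illustrative, and it assumes the already-spoken text is a prefix of the current text):

SENTENCE_ENDS = "。；、?\n"

def next_chunk(current_text: str, already_spoken: str) -> str:
    # Return the next unspoken span ending at a sentence boundary, or ''.
    end = max(current_text.rfind(ch) for ch in SENTENCE_ENDS)
    start = len(already_spoken)
    return current_text[start:end] if end > start else ""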
- exit -fi - -IITB=$WORKDIR_ROOT/IITB -mkdir -p $IITB -pushd $IITB - -wget http://www.cfilt.iitb.ac.in/~moses/iitb_en_hi_parallel/iitb_corpus_download/parallel.tgz -tar -xvzf parallel.tgz - -wget http://www.cfilt.iitb.ac.in/~moses/iitb_en_hi_parallel/iitb_corpus_download/dev_test.tgz -tar -xvzf dev_test.tgz - -DESTDIR=${WORKDIR_ROOT}/ML50/raw/ - -cp parallel/IITB.en-hi.en $DESTDIR/train.hi_IN-en_XX.en_XX -cp parallel/IITB.en-hi.hi $DESTDIR/train.hi_IN-en_XX.hi_IN - -cp dev_test/dev.en $DESTDIR/valid.hi_IN-en_XX.en_XX -cp dev_test/dev.hi $DESTDIR/valid.hi_IN-en_XX.hi_IN - -cp dev_test/test.en $DESTDIR/test.hi_IN-en_XX.en_XX -cp dev_test/test.hi $DESTDIR/test.hi_IN-en_XX.hi_IN -popd \ No newline at end of file diff --git a/spaces/ICML2022/distilgpt2-finetuned-wikitext103/app.py b/spaces/ICML2022/distilgpt2-finetuned-wikitext103/app.py deleted file mode 100644 index f5eebfde48af70b4a56cd16329c34f92a030b62d..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/distilgpt2-finetuned-wikitext103/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/neulab/distilgpt2-finetuned-wikitext103").launch() \ No newline at end of file diff --git a/spaces/ICML2022/resefa/utils/__init__.py b/spaces/ICML2022/resefa/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ICML2023/ICML2023_papers/style.css b/spaces/ICML2023/ICML2023_papers/style.css deleted file mode 100644 index e2b871457d13980ddfbbc35bf5da02a75ece292e..0000000000000000000000000000000000000000 --- a/spaces/ICML2023/ICML2023_papers/style.css +++ /dev/null @@ -1,22 +0,0 @@ -h1 { - text-align: center; -} -table a { - background-color: transparent; - color: #58a6ff; - text-decoration: none; -} -a:active, -a:hover { - outline-width: 0; -} -a:hover { - text-decoration: underline; -} -table, th, td { - border: 1px solid; -} -img#visitor-badge { - display: block; - margin: auto; -} diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/arch_util.py b/spaces/Iceclear/StableSR/StableSR/basicsr/archs/arch_util.py deleted file mode 100644 index 4f2af24b73c37d3da0664d33a313651be6e33e8f..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/arch_util.py +++ /dev/null @@ -1,352 +0,0 @@ -import collections.abc -import math -import torch -import torchvision -import warnings -from distutils.version import LooseVersion -from itertools import repeat -from torch import nn as nn -from torch.nn import functional as F -from torch.nn import init as init -from torch.nn.modules.batchnorm import _BatchNorm - -from basicsr.ops.dcn import ModulatedDeformConvPack, modulated_deform_conv -from basicsr.utils import get_root_logger - - -@torch.no_grad() -def default_init_weights(module_list, scale=1, bias_fill=0, **kwargs): - """Initialize network weights. - - Args: - module_list (list[nn.Module] | nn.Module): Modules to be initialized. - scale (float): Scale initialized weights, especially for residual - blocks. Default: 1. - bias_fill (float): The value to fill bias. Default: 0 - kwargs (dict): Other arguments for initialization function. 
- """ - if not isinstance(module_list, list): - module_list = [module_list] - for module in module_list: - for m in module.modules(): - if isinstance(m, nn.Conv2d): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, nn.Linear): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, _BatchNorm): - init.constant_(m.weight, 1) - if m.bias is not None: - m.bias.data.fill_(bias_fill) - - -def make_layer(basic_block, num_basic_block, **kwarg): - """Make layers by stacking the same blocks. - - Args: - basic_block (nn.module): nn.module class for basic block. - num_basic_block (int): number of blocks. - - Returns: - nn.Sequential: Stacked blocks in nn.Sequential. - """ - layers = [] - for _ in range(num_basic_block): - layers.append(basic_block(**kwarg)) - return nn.Sequential(*layers) - -class PixelShufflePack(nn.Module): - """Pixel Shuffle upsample layer. - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Upsample ratio. - upsample_kernel (int): Kernel size of Conv layer to expand channels. - Returns: - Upsampled feature map. - """ - - def __init__(self, in_channels, out_channels, scale_factor, - upsample_kernel): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.scale_factor = scale_factor - self.upsample_kernel = upsample_kernel - self.upsample_conv = nn.Conv2d( - self.in_channels, - self.out_channels * scale_factor * scale_factor, - self.upsample_kernel, - padding=(self.upsample_kernel - 1) // 2) - self.init_weights() - - def init_weights(self): - """Initialize weights for PixelShufflePack.""" - default_init_weights(self, 1) - - def forward(self, x): - """Forward function for PixelShufflePack. - Args: - x (Tensor): Input tensor with shape (n, c, h, w). - Returns: - Tensor: Forward results. - """ - x = self.upsample_conv(x) - x = F.pixel_shuffle(x, self.scale_factor) - return x - -class ResidualBlockNoBN(nn.Module): - """Residual block without BN. - - Args: - num_feat (int): Channel number of intermediate features. - Default: 64. - res_scale (float): Residual scale. Default: 1. - pytorch_init (bool): If set to True, use pytorch default init, - otherwise, use default_init_weights. Default: False. - """ - - def __init__(self, num_feat=64, res_scale=1, pytorch_init=False): - super(ResidualBlockNoBN, self).__init__() - self.res_scale = res_scale - self.conv1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.conv2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.relu = nn.ReLU(inplace=True) - - if not pytorch_init: - default_init_weights([self.conv1, self.conv2], 0.1) - - def forward(self, x): - identity = x - out = self.conv2(self.relu(self.conv1(x))) - return identity + out * self.res_scale - - -class Upsample(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. 
Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - - -def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros', align_corners=True): - """Warp an image or feature map with optical flow. - - Args: - x (Tensor): Tensor with size (n, c, h, w). - flow (Tensor): Tensor with size (n, h, w, 2), normal value. - interp_mode (str): 'nearest' or 'bilinear'. Default: 'bilinear'. - padding_mode (str): 'zeros' or 'border' or 'reflection'. - Default: 'zeros'. - align_corners (bool): Before pytorch 1.3, the default value is - align_corners=True. After pytorch 1.3, the default value is - align_corners=False. Here, we use the True as default. - - Returns: - Tensor: Warped image or feature map. - """ - assert x.size()[-2:] == flow.size()[1:3] - _, _, h, w = x.size() - # create mesh grid - grid_y, grid_x = torch.meshgrid(torch.arange(0, h).type_as(x), torch.arange(0, w).type_as(x)) - grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2 - grid.requires_grad = False - - vgrid = grid + flow - # scale grid to [-1,1] - vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(w - 1, 1) - 1.0 - vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(h - 1, 1) - 1.0 - vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3) - output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode, align_corners=align_corners) - - # TODO, what if align_corners=False - return output - - -def resize_flow(flow, size_type, sizes, interp_mode='bilinear', align_corners=False): - """Resize a flow according to ratio or shape. - - Args: - flow (Tensor): Precomputed flow. shape [N, 2, H, W]. - size_type (str): 'ratio' or 'shape'. - sizes (list[int | float]): the ratio for resizing or the final output - shape. - 1) The order of ratio should be [ratio_h, ratio_w]. For - downsampling, the ratio should be smaller than 1.0 (i.e., ratio - < 1.0). For upsampling, the ratio should be larger than 1.0 (i.e., - ratio > 1.0). - 2) The order of output_size should be [out_h, out_w]. - interp_mode (str): The mode of interpolation for resizing. - Default: 'bilinear'. - align_corners (bool): Whether align corners. Default: False. - - Returns: - Tensor: Resized flow. - """ - _, _, flow_h, flow_w = flow.size() - if size_type == 'ratio': - output_h, output_w = int(flow_h * sizes[0]), int(flow_w * sizes[1]) - elif size_type == 'shape': - output_h, output_w = sizes[0], sizes[1] - else: - raise ValueError(f'Size type should be ratio or shape, but got type {size_type}.') - - input_flow = flow.clone() - ratio_h = output_h / flow_h - ratio_w = output_w / flow_w - input_flow[:, 0, :, :] *= ratio_w - input_flow[:, 1, :, :] *= ratio_h - resized_flow = F.interpolate( - input=input_flow, size=(output_h, output_w), mode=interp_mode, align_corners=align_corners) - return resized_flow - - -# TODO: may write a cpp file -def pixel_unshuffle(x, scale): - """ Pixel unshuffle. - - Args: - x (Tensor): Input feature with shape (b, c, hh, hw). - scale (int): Downsample ratio. - - Returns: - Tensor: the pixel unshuffled feature. - """ - b, c, hh, hw = x.size() - out_channel = c * (scale**2) - assert hh % scale == 0 and hw % scale == 0 - h = hh // scale - w = hw // scale - x_view = x.view(b, c, h, scale, w, scale) - return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w) - - -class DCNv2Pack(ModulatedDeformConvPack): - """Modulated deformable conv for deformable alignment. 
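A quick, hedged sanity check for flow_warp above (assuming this module is importable): a zero flow field should reproduce the input up to interpolation error, since the sampling grid is then the identity.

import torch

x = torch.rand(1, 3, 8, 8)
flow = torch.zeros(1, 8, 8, 2)   # (n, h, w, 2): zero displacement everywhere
out = flow_warp(x, flow)         # flow_warp as defined above
assert torch.allclose(out, x, atol=1e-5)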
- - Different from the official DCNv2Pack, which generates offsets and masks - from the preceding features, this DCNv2Pack takes another different - features to generate offsets and masks. - - ``Paper: Delving Deep into Deformable Alignment in Video Super-Resolution`` - """ - - def forward(self, x, feat): - out = self.conv_offset(feat) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - - offset_absmean = torch.mean(torch.abs(offset)) - if offset_absmean > 50: - logger = get_root_logger() - logger.warning(f'Offset abs mean is {offset_absmean}, larger than 50.') - - if LooseVersion(torchvision.__version__) >= LooseVersion('0.9.0'): - return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias, self.stride, self.padding, - self.dilation, mask) - else: - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups, self.deformable_groups) - - -def _no_grad_trunc_normal_(tensor, mean, std, a, b): - # From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py - # Cut & paste from PyTorch official master until it's in a few official releases - RW - # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - low = norm_cdf((a - mean) / std) - up = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [low, up], then translate to - # [2l-1, 2u-1]. - tensor.uniform_(2 * low - 1, 2 * up - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. - - From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py - - The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. 
- - Args: - tensor: an n-dimensional `torch.Tensor` - mean: the mean of the normal distribution - std: the standard deviation of the normal distribution - a: the minimum cutoff value - b: the maximum cutoff value - - Examples: - >>> w = torch.empty(3, 5) - >>> nn.init.trunc_normal_(w) - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) - - -# From PyTorch -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/dependency_versions_check.py b/spaces/Jackflack09/diffuse-custom/diffusers/dependency_versions_check.py deleted file mode 100644 index bbf863222a52fd60a15a95be0fbd6391acd3ba6d..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/dependency_versions_check.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import sys - -from .dependency_versions_table import deps -from .utils.versions import require_version, require_version_core - - -# define which module versions we always want to check at run time -# (usually the ones defined in `install_requires` in setup.py) -# -# order specific notes: -# - tqdm must be checked before tokenizers - -pkgs_to_check_at_runtime = "python tqdm regex requests packaging filelock numpy tokenizers".split() -if sys.version_info < (3, 7): - pkgs_to_check_at_runtime.append("dataclasses") -if sys.version_info < (3, 8): - pkgs_to_check_at_runtime.append("importlib_metadata") - -for pkg in pkgs_to_check_at_runtime: - if pkg in deps: - if pkg == "tokenizers": - # must be loaded here, or else tqdm check may fail - from .utils import is_tokenizers_available - - if not is_tokenizers_available(): - continue # not required, check version only if installed - - require_version_core(deps[pkg]) - else: - raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py") - - -def dep_version_check(pkg, hint=None): - require_version(deps[pkg], hint) diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipeline_utils.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipeline_utils.py deleted file mode 100644 index e65d55e20cd9faa5396ed116efcc28656079e972..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipeline_utils.py +++ /dev/null @@ -1,841 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
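Two quick, illustrative checks of the arch_util helpers above (assuming they are importable from basicsr.archs.arch_util): trunc_normal_ keeps every value inside [a, b], and to_2tuple broadcasts scalars while passing iterables through.

import torch
from basicsr.archs.arch_util import to_2tuple, trunc_normal_

w = torch.empty(3, 5)
trunc_normal_(w, mean=0.0, std=0.02, a=-0.04, b=0.04)
assert w.min() >= -0.04 and w.max() <= 0.04   # values clamped into [a, b]

assert to_2tuple(3) == (3, 3)        # scalar broadcast to a pair
assert to_2tuple((3, 5)) == (3, 5)   # iterable passed through unchanged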
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import importlib -import inspect -import os -from dataclasses import dataclass -from pathlib import Path -from typing import Any, Dict, List, Optional, Union - -import numpy as np -import torch - -import diffusers -import PIL -from huggingface_hub import model_info, snapshot_download -from packaging import version -from PIL import Image -from tqdm.auto import tqdm - -from .configuration_utils import ConfigMixin -from .dynamic_modules_utils import get_class_from_dynamic_module -from .hub_utils import http_user_agent -from .modeling_utils import _LOW_CPU_MEM_USAGE_DEFAULT -from .schedulers.scheduling_utils import SCHEDULER_CONFIG_NAME -from .utils import ( - CONFIG_NAME, - DIFFUSERS_CACHE, - ONNX_WEIGHTS_NAME, - WEIGHTS_NAME, - BaseOutput, - deprecate, - is_accelerate_available, - is_safetensors_available, - is_torch_version, - is_transformers_available, - logging, -) - - -if is_transformers_available(): - import transformers - from transformers import PreTrainedModel - - -INDEX_FILE = "diffusion_pytorch_model.bin" -CUSTOM_PIPELINE_FILE_NAME = "pipeline.py" -DUMMY_MODULES_FOLDER = "diffusers.utils" -TRANSFORMERS_DUMMY_MODULES_FOLDER = "transformers.utils" - - -logger = logging.get_logger(__name__) - - -LOADABLE_CLASSES = { - "diffusers": { - "ModelMixin": ["save_pretrained", "from_pretrained"], - "SchedulerMixin": ["save_pretrained", "from_pretrained"], - "DiffusionPipeline": ["save_pretrained", "from_pretrained"], - "OnnxRuntimeModel": ["save_pretrained", "from_pretrained"], - }, - "transformers": { - "PreTrainedTokenizer": ["save_pretrained", "from_pretrained"], - "PreTrainedTokenizerFast": ["save_pretrained", "from_pretrained"], - "PreTrainedModel": ["save_pretrained", "from_pretrained"], - "FeatureExtractionMixin": ["save_pretrained", "from_pretrained"], - "ProcessorMixin": ["save_pretrained", "from_pretrained"], - "ImageProcessingMixin": ["save_pretrained", "from_pretrained"], - }, - "onnxruntime.training": { - "ORTModule": ["save_pretrained", "from_pretrained"], - }, -} - -ALL_IMPORTABLE_CLASSES = {} -for library in LOADABLE_CLASSES: - ALL_IMPORTABLE_CLASSES.update(LOADABLE_CLASSES[library]) - - -@dataclass -class ImagePipelineOutput(BaseOutput): - """ - Output class for image pipelines. - - Args: - images (`List[PIL.Image.Image]` or `np.ndarray`) - List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, - num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline. - """ - - images: Union[List[PIL.Image.Image], np.ndarray] - - -@dataclass -class AudioPipelineOutput(BaseOutput): - """ - Output class for audio pipelines. - - Args: - audios (`np.ndarray`) - List of denoised samples of shape `(batch_size, num_channels, sample_rate)`. Numpy array present the - denoised audio samples of the diffusion pipeline. 
- """ - - audios: np.ndarray - - -def is_safetensors_compatible(info) -> bool: - filenames = set(sibling.rfilename for sibling in info.siblings) - pt_filenames = set(filename for filename in filenames if filename.endswith(".bin")) - is_safetensors_compatible = any(file.endswith(".safetensors") for file in filenames) - for pt_filename in pt_filenames: - prefix, raw = os.path.split(pt_filename) - if raw == "pytorch_model.bin": - # transformers specific - sf_filename = os.path.join(prefix, "model.safetensors") - else: - sf_filename = pt_filename[: -len(".bin")] + ".safetensors" - if is_safetensors_compatible and sf_filename not in filenames: - logger.warning(f"{sf_filename} not found") - is_safetensors_compatible = False - return is_safetensors_compatible - - -class DiffusionPipeline(ConfigMixin): - r""" - Base class for all models. - - [`DiffusionPipeline`] takes care of storing all components (models, schedulers, processors) for diffusion pipelines - and handles methods for loading, downloading and saving models as well as a few methods common to all pipelines to: - - - move all PyTorch modules to the device of your choice - - enabling/disabling the progress bar for the denoising iteration - - Class attributes: - - - **config_name** (`str`) -- name of the config file that will store the class and module names of all - components of the diffusion pipeline. - - **_optional_components** (List[`str`]) -- list of all components that are optional so they don't have to be - passed for the pipeline to function (should be overridden by subclasses). - """ - config_name = "model_index.json" - _optional_components = [] - - def register_modules(self, **kwargs): - # import it here to avoid circular import - from diffusers import pipelines - - for name, module in kwargs.items(): - # retrieve library - if module is None: - register_dict = {name: (None, None)} - else: - library = module.__module__.split(".")[0] - - # check if the module is a pipeline module - pipeline_dir = module.__module__.split(".")[-2] if len(module.__module__.split(".")) > 2 else None - path = module.__module__.split(".") - is_pipeline_module = pipeline_dir in path and hasattr(pipelines, pipeline_dir) - - # if library is not in LOADABLE_CLASSES, then it is a custom module. - # Or if it's a pipeline module, then the module is inside the pipeline - # folder so we set the library to module name. - if library not in LOADABLE_CLASSES or is_pipeline_module: - library = pipeline_dir - - # retrieve class_name - class_name = module.__class__.__name__ - - register_dict = {name: (library, class_name)} - - # save model index config - self.register_to_config(**register_dict) - - # set models - setattr(self, name, module) - - def save_pretrained( - self, - save_directory: Union[str, os.PathLike], - safe_serialization: bool = False, - ): - """ - Save all variables of the pipeline that can be saved and loaded as well as the pipelines configuration file to - a directory. A pipeline variable can be saved and loaded if its class implements both a save and loading - method. The pipeline can easily be re-loaded using the `[`~DiffusionPipeline.from_pretrained`]` class method. - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to which to save. Will be created if it doesn't exist. - safe_serialization (`bool`, *optional*, defaults to `False`): - Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`). 
- """ - self.save_config(save_directory) - - model_index_dict = dict(self.config) - model_index_dict.pop("_class_name") - model_index_dict.pop("_diffusers_version") - model_index_dict.pop("_module", None) - - expected_modules, optional_kwargs = self._get_signature_keys(self) - - def is_saveable_module(name, value): - if name not in expected_modules: - return False - if name in self._optional_components and value[0] is None: - return False - return True - - model_index_dict = {k: v for k, v in model_index_dict.items() if is_saveable_module(k, v)} - - for pipeline_component_name in model_index_dict.keys(): - sub_model = getattr(self, pipeline_component_name) - model_cls = sub_model.__class__ - - save_method_name = None - # search for the model's base class in LOADABLE_CLASSES - for library_name, library_classes in LOADABLE_CLASSES.items(): - library = importlib.import_module(library_name) - for base_class, save_load_methods in library_classes.items(): - class_candidate = getattr(library, base_class, None) - if class_candidate is not None and issubclass(model_cls, class_candidate): - # if we found a suitable base class in LOADABLE_CLASSES then grab its save method - save_method_name = save_load_methods[0] - break - if save_method_name is not None: - break - - save_method = getattr(sub_model, save_method_name) - - # Call the save method with the argument safe_serialization only if it's supported - save_method_signature = inspect.signature(save_method) - save_method_accept_safe = "safe_serialization" in save_method_signature.parameters - if save_method_accept_safe: - save_method( - os.path.join(save_directory, pipeline_component_name), safe_serialization=safe_serialization - ) - else: - save_method(os.path.join(save_directory, pipeline_component_name)) - - def to(self, torch_device: Optional[Union[str, torch.device]] = None): - if torch_device is None: - return self - - module_names, _, _ = self.extract_init_dict(dict(self.config)) - for name in module_names.keys(): - module = getattr(self, name) - if isinstance(module, torch.nn.Module): - if module.dtype == torch.float16 and str(torch_device) in ["cpu"]: - logger.warning( - "Pipelines loaded with `torch_dtype=torch.float16` cannot run with `cpu` device. It" - " is not recommended to move them to `cpu` as running them will fail. Please make" - " sure to use an accelerator to run the pipeline in inference, due to the lack of" - " support for`float16` operations on this device in PyTorch. Please, remove the" - " `torch_dtype=torch.float16` argument, or use another device for inference." - ) - module.to(torch_device) - return self - - @property - def device(self) -> torch.device: - r""" - Returns: - `torch.device`: The torch device on which the pipeline is located. - """ - module_names, _, _ = self.extract_init_dict(dict(self.config)) - for name in module_names.keys(): - module = getattr(self, name) - if isinstance(module, torch.nn.Module): - return module.device - return torch.device("cpu") - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs): - r""" - Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights. - - The pipeline is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). - - The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come - pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning - task. 
- - The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those - weights are discarded. - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *repo id* of a pretrained pipeline hosted inside a model repo on - https://huggingface.co/ Valid repo ids have to be located under a user or organization name, like - `CompVis/ldm-text2im-large-256`. - - A path to a *directory* containing pipeline weights saved using - [`~DiffusionPipeline.save_pretrained`], e.g., `./my_pipeline_directory/`. - torch_dtype (`str` or `torch.dtype`, *optional*): - Override the default `torch.dtype` and load the model under this dtype. If `"auto"` is passed the dtype - will be automatically derived from the model's weights. - custom_pipeline (`str`, *optional*): - - - - This is an experimental feature and is likely to change in the future. - - - - Can be either: - - - A string, the *repo id* of a custom pipeline hosted inside a model repo on - https://huggingface.co/. Valid repo ids have to be located under a user or organization name, - like `hf-internal-testing/diffusers-dummy-pipeline`. - - - - It is required that the model repo has a file, called `pipeline.py` that defines the custom - pipeline. - - - - - A string, the *file name* of a community pipeline hosted on GitHub under - https://github.com/huggingface/diffusers/tree/main/examples/community. Valid file names have to - match exactly the file name without `.py` located under the above link, *e.g.* - `clip_guided_stable_diffusion`. - - - - Community pipelines are always loaded from the current `main` branch of GitHub. - - - - - A path to a *directory* containing a custom pipeline, e.g., `./my_pipeline_directory/`. - - - - It is required that the directory has a file, called `pipeline.py` that defines the custom - pipeline. - - - - For more information on how to load and create custom pipelines, please have a look at [Loading and - Adding Custom - Pipelines](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview) - - torch_dtype (`str` or `torch.dtype`, *optional*): - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (i.e., do not try to download the model). - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated - when running `huggingface-cli login` (stored in `~/.huggingface`). - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - mirror (`str`, *optional*): - Mirror source to accelerate downloads in China. If you are from China and have an accessibility - problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. - Please refer to the mirror site for more information. specify the folder name here. - device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): - A map that specifies where each submodule should go. It doesn't need to be refined to each - parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the - same device. - - To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For - more information about each option see [designing a device - map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). - low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): - Speed up model loading by not initializing the weights and only loading the pre-trained weights. This - also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the - model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, - setting this argument to `True` will raise an error. - return_cached_folder (`bool`, *optional*, defaults to `False`): - If set to `True`, path to downloaded cached folder will be returned in addition to loaded pipeline. - kwargs (remaining dictionary of keyword arguments, *optional*): - Can be used to overwrite load - and saveable variables - *i.e.* the pipeline components - of the - specific pipeline class. The overwritten components are then directly passed to the pipelines - `__init__` method. See example below for more information. - - - - It is required to be logged in (`huggingface-cli login`) when you want to use private or [gated - models](https://huggingface.co/docs/hub/models-gated#gated-models), *e.g.* `"runwayml/stable-diffusion-v1-5"` - - - - - - Activate the special ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use - this method in a firewalled environment. - - - - Examples: - - ```py - >>> from diffusers import DiffusionPipeline - - >>> # Download pipeline from huggingface.co and cache. 
- >>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") - - >>> # Download pipeline that requires an authorization token - >>> # For more information on access tokens, please refer to this section - >>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) - >>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") - - >>> # Use a different scheduler - >>> from diffusers import LMSDiscreteScheduler - - >>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) - >>> pipeline.scheduler = scheduler - ``` - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - resume_download = kwargs.pop("resume_download", False) - force_download = kwargs.pop("force_download", False) - proxies = kwargs.pop("proxies", None) - local_files_only = kwargs.pop("local_files_only", False) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - torch_dtype = kwargs.pop("torch_dtype", None) - custom_pipeline = kwargs.pop("custom_pipeline", None) - provider = kwargs.pop("provider", None) - sess_options = kwargs.pop("sess_options", None) - device_map = kwargs.pop("device_map", None) - low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT) - return_cached_folder = kwargs.pop("return_cached_folder", False) - - # 1. Download the checkpoints and configs - # use snapshot download here to get it working from from_pretrained - if not os.path.isdir(pretrained_model_name_or_path): - config_dict = cls.load_config( - pretrained_model_name_or_path, - cache_dir=cache_dir, - resume_download=resume_download, - force_download=force_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - ) - # make sure we only download sub-folders and `diffusers` filenames - folder_names = [k for k in config_dict.keys() if not k.startswith("_")] - allow_patterns = [os.path.join(k, "*") for k in folder_names] - allow_patterns += [WEIGHTS_NAME, SCHEDULER_CONFIG_NAME, CONFIG_NAME, ONNX_WEIGHTS_NAME, cls.config_name] - - # make sure we don't download flax weights - ignore_patterns = ["*.msgpack"] - - if custom_pipeline is not None: - allow_patterns += [CUSTOM_PIPELINE_FILE_NAME] - - if cls != DiffusionPipeline: - requested_pipeline_class = cls.__name__ - else: - requested_pipeline_class = config_dict.get("_class_name", cls.__name__) - user_agent = {"pipeline_class": requested_pipeline_class} - if custom_pipeline is not None: - user_agent["custom_pipeline"] = custom_pipeline - user_agent = http_user_agent(user_agent) - - if is_safetensors_available(): - info = model_info( - pretrained_model_name_or_path, - use_auth_token=use_auth_token, - revision=revision, - ) - if is_safetensors_compatible(info): - ignore_patterns.append("*.bin") - - # download all allow_patterns - cached_folder = snapshot_download( - pretrained_model_name_or_path, - cache_dir=cache_dir, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - user_agent=user_agent, - ) - else: - cached_folder = pretrained_model_name_or_path - - config_dict = cls.load_config(cached_folder) - - # 2. 
Load the pipeline class, if using custom module then load it from the hub - # if we load from explicit class, let's use it - if custom_pipeline is not None: - if custom_pipeline.endswith(".py"): - path = Path(custom_pipeline) - # decompose into folder & file - file_name = path.name - custom_pipeline = path.parent.absolute() - else: - file_name = CUSTOM_PIPELINE_FILE_NAME - - pipeline_class = get_class_from_dynamic_module( - custom_pipeline, module_file=file_name, cache_dir=custom_pipeline - ) - elif cls != DiffusionPipeline: - pipeline_class = cls - else: - diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) - pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) - - # To be removed in 1.0.0 - if pipeline_class.__name__ == "StableDiffusionInpaintPipeline" and version.parse( - version.parse(config_dict["_diffusers_version"]).base_version - ) <= version.parse("0.5.1"): - from diffusers import StableDiffusionInpaintPipeline, StableDiffusionInpaintPipelineLegacy - - pipeline_class = StableDiffusionInpaintPipelineLegacy - - deprecation_message = ( - "You are using a legacy checkpoint for inpainting with Stable Diffusion, therefore we are loading the" - f" {StableDiffusionInpaintPipelineLegacy} class instead of {StableDiffusionInpaintPipeline}. For" - " better inpainting results, we strongly suggest using Stable Diffusion's official inpainting" - " checkpoint: https://huggingface.co/runwayml/stable-diffusion-inpainting instead or adapting your" - f" checkpoint {pretrained_model_name_or_path} to the format of" - " https://huggingface.co/runwayml/stable-diffusion-inpainting. Note that we do not actively maintain" - " the {StableDiffusionInpaintPipelineLegacy} class and will likely remove it in version 1.0.0." - ) - deprecate("StableDiffusionInpaintPipelineLegacy", "1.0.0", deprecation_message, standard_warn=False) - - # some modules can be passed directly to the init - # in this case they are already instantiated in `kwargs` - # extract them here - expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class) - passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} - passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs} - - init_dict, unused_kwargs, _ = pipeline_class.extract_init_dict(config_dict, **kwargs) - - # define init kwargs - init_kwargs = {k: init_dict.pop(k) for k in optional_kwargs if k in init_dict} - init_kwargs = {**init_kwargs, **passed_pipe_kwargs} - - # remove `null` components - def load_module(name, value): - if value[0] is None: - return False - if name in passed_class_obj and passed_class_obj[name] is None: - return False - return True - - init_dict = {k: v for k, v in init_dict.items() if load_module(k, v)} - - if len(unused_kwargs) > 0: - logger.warning( - f"Keyword arguments {unused_kwargs} are not expected by {pipeline_class.__name__} and will be ignored." - ) - - if low_cpu_mem_usage and not is_accelerate_available(): - low_cpu_mem_usage = False - logger.warning( - "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the" - " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install" - " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip" - " install accelerate\n```\n." - ) - - if device_map is not None and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Loading and dispatching requires torch >= 1.9.0. 
Please either update your PyTorch version or set" - " `device_map=None`." - ) - - if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `low_cpu_mem_usage=False`." - ) - - if low_cpu_mem_usage is False and device_map is not None: - raise ValueError( - f"You cannot set `low_cpu_mem_usage` to False while using device_map={device_map} for loading and" - " dispatching. Please make sure to set `low_cpu_mem_usage=True`." - ) - - # import it here to avoid circular import - from diffusers import pipelines - - # 3. Load each module in the pipeline - for name, (library_name, class_name) in init_dict.items(): - # 3.1 - now that JAX/Flax is an official framework of the library, we might load from Flax names - if class_name.startswith("Flax"): - class_name = class_name[4:] - - is_pipeline_module = hasattr(pipelines, library_name) - loaded_sub_model = None - - # if the model is in a pipeline module, then we load it from the pipeline - if name in passed_class_obj: - # 1. check that passed_class_obj has correct parent class - if not is_pipeline_module: - library = importlib.import_module(library_name) - class_obj = getattr(library, class_name) - importable_classes = LOADABLE_CLASSES[library_name] - class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()} - - expected_class_obj = None - for class_name, class_candidate in class_candidates.items(): - if class_candidate is not None and issubclass(class_obj, class_candidate): - expected_class_obj = class_candidate - - if not issubclass(passed_class_obj[name].__class__, expected_class_obj): - raise ValueError( - f"{passed_class_obj[name]} is of type: {type(passed_class_obj[name])}, but should be" - f" {expected_class_obj}" - ) - else: - logger.warning( - f"You have passed a non-standard module {passed_class_obj[name]}. We cannot verify whether it" - " has the correct type" - ) - - # set passed class object - loaded_sub_model = passed_class_obj[name] - elif is_pipeline_module: - pipeline_module = getattr(pipelines, library_name) - class_obj = getattr(pipeline_module, class_name) - importable_classes = ALL_IMPORTABLE_CLASSES - class_candidates = {c: class_obj for c in importable_classes.keys()} - else: - # else we just import it from the library. - library = importlib.import_module(library_name) - - class_obj = getattr(library, class_name) - importable_classes = LOADABLE_CLASSES[library_name] - class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()} - - if loaded_sub_model is None: - load_method_name = None - for class_name, class_candidate in class_candidates.items(): - if class_candidate is not None and issubclass(class_obj, class_candidate): - load_method_name = importable_classes[class_name][1] - - if load_method_name is None: - none_module = class_obj.__module__ - is_dummy_path = none_module.startswith(DUMMY_MODULES_FOLDER) or none_module.startswith( - TRANSFORMERS_DUMMY_MODULES_FOLDER - ) - if is_dummy_path and "dummy" in none_module: - # call class_obj for nice error message of missing requirements - class_obj() - - raise ValueError( - f"The component {class_obj} of {pipeline_class} cannot be loaded as it does not seem to have" - f" any of the loading methods defined in {ALL_IMPORTABLE_CLASSES}." 
- ) - - load_method = getattr(class_obj, load_method_name) - loading_kwargs = {} - - if issubclass(class_obj, torch.nn.Module): - loading_kwargs["torch_dtype"] = torch_dtype - if issubclass(class_obj, diffusers.OnnxRuntimeModel): - loading_kwargs["provider"] = provider - loading_kwargs["sess_options"] = sess_options - - is_diffusers_model = issubclass(class_obj, diffusers.ModelMixin) - is_transformers_model = ( - is_transformers_available() - and issubclass(class_obj, PreTrainedModel) - and version.parse(version.parse(transformers.__version__).base_version) >= version.parse("4.20.0") - ) - - # When loading a transformers model, if the device_map is None, the weights will be initialized as opposed to diffusers. - # To make default loading faster we set the `low_cpu_mem_usage=low_cpu_mem_usage` flag which is `True` by default. - # This makes sure that the weights won't be initialized which significantly speeds up loading. - if is_diffusers_model or is_transformers_model: - loading_kwargs["device_map"] = device_map - loading_kwargs["low_cpu_mem_usage"] = low_cpu_mem_usage - - # check if the module is in a subdirectory - if os.path.isdir(os.path.join(cached_folder, name)): - loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs) - else: - # else load from the root directory - loaded_sub_model = load_method(cached_folder, **loading_kwargs) - - init_kwargs[name] = loaded_sub_model # UNet(...), # DiffusionSchedule(...) - - # 4. Potentially add passed objects if expected - missing_modules = set(expected_modules) - set(init_kwargs.keys()) - passed_modules = list(passed_class_obj.keys()) - optional_modules = pipeline_class._optional_components - if len(missing_modules) > 0 and missing_modules <= set(passed_modules + optional_modules): - for module in missing_modules: - init_kwargs[module] = passed_class_obj.get(module, None) - elif len(missing_modules) > 0: - passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs - raise ValueError( - f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed." - ) - - # 5. Instantiate the pipeline - model = pipeline_class(**init_kwargs) - - if return_cached_folder: - return model, cached_folder - return model - - @staticmethod - def _get_signature_keys(obj): - parameters = inspect.signature(obj.__init__).parameters - required_parameters = {k: v for k, v in parameters.items() if v.default == inspect._empty} - optional_parameters = set({k for k, v in parameters.items() if v.default != inspect._empty}) - expected_modules = set(required_parameters.keys()) - set(["self"]) - return expected_modules, optional_parameters - - @property - def components(self) -> Dict[str, Any]: - r""" - - The `self.components` property can be useful to run different pipelines with the same weights and - configurations to not have to re-allocate memory. - - Examples: - - ```py - >>> from diffusers import ( - ... StableDiffusionPipeline, - ... StableDiffusionImg2ImgPipeline, - ... StableDiffusionInpaintPipeline, - ... ) - - >>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") - >>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) - >>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) - ``` - - Returns: - A dictionaly containing all the modules needed to initialize the pipeline. 
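To make _get_signature_keys above concrete, a self-contained toy (the class name is illustrative): required __init__ arguments become expected modules, while defaulted ones become optional keyword arguments.

import inspect

class ToyPipeline:
    def __init__(self, unet, scheduler, safety_checker=None):
        pass

params = inspect.signature(ToyPipeline.__init__).parameters
required = {k for k, v in params.items() if v.default is inspect._empty} - {"self"}
optional = {k for k, v in params.items() if v.default is not inspect._empty}
assert required == {"unet", "scheduler"}
assert optional == {"safety_checker"}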
- """ - expected_modules, optional_parameters = self._get_signature_keys(self) - components = { - k: getattr(self, k) for k in self.config.keys() if not k.startswith("_") and k not in optional_parameters - } - - if set(components.keys()) != expected_modules: - raise ValueError( - f"{self} has been incorrectly initialized or {self.__class__} is incorrectly implemented. Expected" - f" {expected_modules} to be defined, but {components} are defined." - ) - - return components - - @staticmethod - def numpy_to_pil(images): - """ - Convert a numpy image or a batch of images to a PIL image. - """ - if images.ndim == 3: - images = images[None, ...] - images = (images * 255).round().astype("uint8") - if images.shape[-1] == 1: - # special case for grayscale (single channel) images - pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images] - else: - pil_images = [Image.fromarray(image) for image in images] - - return pil_images - - def progress_bar(self, iterable=None, total=None): - if not hasattr(self, "_progress_bar_config"): - self._progress_bar_config = {} - elif not isinstance(self._progress_bar_config, dict): - raise ValueError( - f"`self._progress_bar_config` should be of type `dict`, but is {type(self._progress_bar_config)}." - ) - - if iterable is not None: - return tqdm(iterable, **self._progress_bar_config) - elif total is not None: - return tqdm(total=total, **self._progress_bar_config) - else: - raise ValueError("Either `total` or `iterable` has to be defined.") - - def set_progress_bar_config(self, **kwargs): - self._progress_bar_config = kwargs - - def enable_xformers_memory_efficient_attention(self): - r""" - Enable memory efficient attention as implemented in xformers. - - When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference - time. Speed up at training time is not guaranteed. - - Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention - is used. - """ - self.set_use_memory_efficient_attention_xformers(True) - - def disable_xformers_memory_efficient_attention(self): - r""" - Disable memory efficient attention as implemented in xformers. - """ - self.set_use_memory_efficient_attention_xformers(False) - - def set_use_memory_efficient_attention_xformers(self, valid: bool) -> None: - # Recursively walk through all the children. 
- # Any children which exposes the set_use_memory_efficient_attention_xformers method - # gets the message - def fn_recursive_set_mem_eff(module: torch.nn.Module): - if hasattr(module, "set_use_memory_efficient_attention_xformers"): - module.set_use_memory_efficient_attention_xformers(valid) - - for child in module.children(): - fn_recursive_set_mem_eff(child) - - module_names, _, _ = self.extract_init_dict(dict(self.config)) - for module_name in module_names: - module = getattr(self, module_name) - if isinstance(module, torch.nn.Module): - fn_recursive_set_mem_eff(module) diff --git a/spaces/JeffJing/ZookChatBot/steamship/plugin/outputs/__init__.py b/spaces/JeffJing/ZookChatBot/steamship/plugin/outputs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/index_func.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/index_func.py deleted file mode 100644 index ac128668c2920b6b4b945e0de3dcd745fe141200..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/index_func.py +++ /dev/null @@ -1,136 +0,0 @@ -import os -import logging - -import hashlib -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * -from modules.config import local_embedding - - -def get_documents(file_src): - from langchain.schema import Document - from langchain.text_splitter import TokenTextSplitter - text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30) - - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - filepath = file.name - filename = os.path.basename(filepath) - file_type = os.path.splitext(filename)[1] - logging.info(f"loading file: {filename}") - try: - if file_type == ".pdf": - logging.debug("Loading PDF...") - try: - from modules.pdf_func import parse_pdf - from modules.config import advance_docs - - two_column = advance_docs["pdf"].get("two_column", False) - pdftext = parse_pdf(filepath, two_column).text - except: - pdftext = "" - with open(filepath, "rb") as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - texts = [Document(page_content=pdftext, - metadata={"source": filepath})] - elif file_type == ".docx": - logging.debug("Loading Word...") - from langchain.document_loaders import UnstructuredWordDocumentLoader - loader = UnstructuredWordDocumentLoader(filepath) - texts = loader.load() - elif file_type == ".pptx": - logging.debug("Loading PowerPoint...") - from langchain.document_loaders import UnstructuredPowerPointLoader - loader = UnstructuredPowerPointLoader(filepath) - texts = loader.load() - elif file_type == ".epub": - logging.debug("Loading EPUB...") - from langchain.document_loaders import UnstructuredEPubLoader - loader = UnstructuredEPubLoader(filepath) - texts = loader.load() - elif file_type == ".xlsx": - logging.debug("Loading Excel...") - text_list = excel_to_string(filepath) - texts = [] - for elem in text_list: - texts.append(Document(page_content=elem, - metadata={"source": filepath})) - else: - logging.debug("Loading text file...") - from langchain.document_loaders import TextLoader - loader = TextLoader(filepath, "utf8") - texts = loader.load() - except Exception as e: - import traceback - logging.error(f"Error loading file: {filename}") - traceback.print_exc() - - texts = 
text_splitter.split_documents(texts) - documents.extend(texts) - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", -): - from langchain.chat_models import ChatOpenAI - from langchain.vectorstores import FAISS - - if api_key: - os.environ["OPENAI_API_KEY"] = api_key - else: - # 由于一个依赖的愚蠢的设计,这里必须要有一个API KEY - os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx" - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - index_name = get_file_hash(file_src) - index_path = f"./index/{index_name}" - if local_embedding: - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - embeddings = HuggingFaceEmbeddings( - model_name="sentence-transformers/distiluse-base-multilingual-cased-v2") - else: - from langchain.embeddings import OpenAIEmbeddings - if os.environ.get("OPENAI_API_TYPE", "openai") == "openai": - embeddings = OpenAIEmbeddings(openai_api_base=os.environ.get( - "OPENAI_API_BASE", None), openai_api_key=os.environ.get("OPENAI_EMBEDDING_API_KEY", api_key)) - else: - embeddings = OpenAIEmbeddings(deployment=os.environ["AZURE_EMBEDDING_DEPLOYMENT_NAME"], openai_api_key=os.environ["AZURE_OPENAI_API_KEY"], - model=os.environ["AZURE_EMBEDDING_MODEL_NAME"], openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"], openai_api_type="azure") - if os.path.exists(index_path): - logging.info("找到了缓存的索引文件,加载中……") - return FAISS.load_local(index_path, embeddings) - else: - try: - documents = get_documents(file_src) - logging.info("构建索引中……") - with retrieve_proxy(): - index = FAISS.from_documents(documents, embeddings) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_local(index_path) - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - import traceback - logging.error("索引构建失败!%s", e) - traceback.print_exc() - return None diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py deleted file mode 100644 index bdb64e0c78cc3520f92d79db3124c85fc3cfb9b4..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py +++ /dev/null @@ -1,1184 +0,0 @@ -import torch -import torch.nn.functional as F -import math - - -class NoiseScheduleVP: - def __init__( - self, - schedule='discrete', - betas=None, - alphas_cumprod=None, - continuous_beta_0=0.1, - continuous_beta_1=20., - ): - """Create a wrapper class for the forward SDE (VP type). - - *** - Update: We support discrete-time diffusion models by implementing a picewise linear interpolation for log_alpha_t. - We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images. - *** - - The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ). - We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper). - Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. 
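A condensed, hedged sketch of the local-embedding path through construct_index above, using the same langchain calls and embedding model as the file (`documents` stands in for the output of get_documents, and the index path is a placeholder for the content-hash path the file derives):

from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/distiluse-base-multilingual-cased-v2"
)
index = FAISS.from_documents(documents, embeddings)  # documents: List[Document]
index_path = "./index/example"                       # file uses f"./index/{file_hash}"
index.save_local(index_path)                         # cache for later reloads
hits = index.similarity_search("query text", k=4)    # retrieve top matches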
-
-            log_alpha_t = self.marginal_log_mean_coeff(t)
-            sigma_t = self.marginal_std(t)
-            lambda_t = self.marginal_lambda(t)
-
-        Moreover, as lambda(t) is an invertible function, we also support its inverse function:
-
-            t = self.inverse_lambda(lambda_t)
-
-        ===============================================================
-
-        We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]).
-
-        1. For discrete-time DPMs:
-
-            For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by:
-                t_i = (i + 1) / N
-            e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1.
-            We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3.
-
-            Args:
-                betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details)
-                alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details)
-
-            Note that we always have alphas_cumprod = cumprod(1 - betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`.
-
-            **Important**: Please pay special attention to the argument `alphas_cumprod`:
-                The `alphas_cumprod` is the \hat{alpha_n} array in the notation of DDPM. Specifically, DDPMs assume that
-                    q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ).
-                Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have
-                    alpha_{t_n} = \sqrt{\hat{alpha_n}},
-                and
-                    log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}).
-
-
-        2. For continuous-time DPMs:
-
-            We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise
-            schedule are the default settings in DDPM and improved-DDPM:
-
-            Args:
-                beta_min: A `float` number. The smallest beta for the linear schedule.
-                beta_max: A `float` number. The largest beta for the linear schedule.
-                cosine_s: A `float` number. The hyperparameter in the cosine schedule.
-                cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule.
-                T: A `float` number. The ending time of the forward process.
-
-        ===============================================================
-
-        Args:
-            schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs,
-                    'linear' or 'cosine' for continuous-time DPMs.
-        Returns:
-            A wrapper object of the forward SDE (VP type).
-
-        ===============================================================
-
-        Example:
-
-        # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1):
-        >>> ns = NoiseScheduleVP('discrete', betas=betas)
-
-        # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1):
-        >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)
-
-        # For continuous-time DPMs (VPSDE), linear schedule:
-        >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)
-
-        """
-
-        if schedule not in ['discrete', 'linear', 'cosine']:
-            raise ValueError("Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format(schedule))
-
-        self.schedule = schedule
-        if schedule == 'discrete':
-            if betas is not None:
-                log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0)
-            else:
-                assert alphas_cumprod is not None
-                log_alphas = 0.5 * torch.log(alphas_cumprod)
-            self.total_N = len(log_alphas)
-            self.T = 1.
- self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1)) - self.log_alpha_array = log_alphas.reshape((1, -1,)) - else: - self.total_N = 1000 - self.beta_0 = continuous_beta_0 - self.beta_1 = continuous_beta_1 - self.cosine_s = 0.008 - self.cosine_beta_max = 999. - self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * (1. + self.cosine_s) / math.pi - self.cosine_s - self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.)) - self.schedule = schedule - if schedule == 'cosine': - # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T. - # Note that T = 0.9946 may be not the optimal setting. However, we find it works well. - self.T = 0.9946 - else: - self.T = 1. - - def marginal_log_mean_coeff(self, t): - """ - Compute log(alpha_t) of a given continuous-time label t in [0, T]. - """ - if self.schedule == 'discrete': - return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device), self.log_alpha_array.to(t.device)).reshape((-1)) - elif self.schedule == 'linear': - return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0 - elif self.schedule == 'cosine': - log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.)) - log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0 - return log_alpha_t - - def marginal_alpha(self, t): - """ - Compute alpha_t of a given continuous-time label t in [0, T]. - """ - return torch.exp(self.marginal_log_mean_coeff(t)) - - def marginal_std(self, t): - """ - Compute sigma_t of a given continuous-time label t in [0, T]. - """ - return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t))) - - def marginal_lambda(self, t): - """ - Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T]. - """ - log_mean_coeff = self.marginal_log_mean_coeff(t) - log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff)) - return log_mean_coeff - log_std - - def inverse_lambda(self, lamb): - """ - Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t. - """ - if self.schedule == 'linear': - tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - Delta = self.beta_0**2 + tmp - return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0) - elif self.schedule == 'discrete': - log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb) - t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]), torch.flip(self.t_array.to(lamb.device), [1])) - return t.reshape((-1,)) - else: - log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * (1. + self.cosine_s) / math.pi - self.cosine_s - t = t_fn(log_alpha) - return t - - -def model_wrapper( - model, - noise_schedule, - model_type="noise", - model_kwargs={}, - guidance_type="uncond", - condition=None, - unconditional_condition=None, - guidance_scale=1., - classifier_fn=None, - classifier_kwargs={}, -): - """Create a wrapper function for the noise prediction model. - - DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to - firstly wrap the model function to a noise prediction model that accepts the continuous time as the input. 
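-
-    For example, wrapping a discrete-time noise prediction model with classifier-free
-    guidance might look like the sketch below (illustrative only; `model`, `cond` and
-    `uncond` are assumed to be defined by the caller, and the guidance options are
-    explained in the following sections):
-
-    >>> model_fn = model_wrapper(model, noise_schedule, model_type="noise",
-            guidance_type="classifier-free", condition=cond,
-            unconditional_condition=uncond, guidance_scale=7.5)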
- - We support four types of the diffusion model by setting `model_type`: - - 1. "noise": noise prediction model. (Trained by predicting noise). - - 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0). - - 3. "v": velocity prediction model. (Trained by predicting the velocity). - The "v" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2]. - - [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models." - arXiv preprint arXiv:2202.00512 (2022). - [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models." - arXiv preprint arXiv:2210.02303 (2022). - - 4. "score": marginal score function. (Trained by denoising score matching). - Note that the score function and the noise prediction model follows a simple relationship: - ``` - noise(x_t, t) = -sigma_t * score(x_t, t) - ``` - - We support three types of guided sampling by DPMs by setting `guidance_type`: - 1. "uncond": unconditional sampling by DPMs. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - - 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - - The input `classifier_fn` has the following format: - `` - classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond) - `` - - [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis," - in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794. - - 3. "classifier-free": classifier-free guidance sampling by conditional DPMs. - The input `model` has the following format: - `` - model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score - `` - And if cond == `unconditional_condition`, the model output is the unconditional DPM output. - - [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." - arXiv preprint arXiv:2207.12598 (2022). - - - The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999) - or continuous-time labels (i.e. epsilon to T). - - We wrap the model function to accept only `x` and `t_continuous` as inputs, and outputs the predicted noise: - `` - def model_fn(x, t_continuous) -> noise: - t_input = get_model_input_time(t_continuous) - return noise_pred(model, x, t_input, **model_kwargs) - `` - where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver. - - =============================================================== - - Args: - model: A diffusion model with the corresponding format described above. - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - model_type: A `str`. The parameterization type of the diffusion model. - "noise" or "x_start" or "v" or "score". - model_kwargs: A `dict`. A dict for the other inputs of the model function. - guidance_type: A `str`. The type of the guidance for sampling. - "uncond" or "classifier" or "classifier-free". - condition: A pytorch tensor. The condition for the guided sampling. - Only used for "classifier" or "classifier-free" guidance type. - unconditional_condition: A pytorch tensor. The condition for the unconditional sampling. - Only used for "classifier-free" guidance type. - guidance_scale: A `float`. The scale for the guided sampling. 
-        classifier_fn: A classifier function. Only used for the classifier guidance.
-        classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function.
-    Returns:
-        A noise prediction model that accepts the noised data and the continuous time as the inputs.
-    """
-
-    def get_model_input_time(t_continuous):
-        """
-        Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time.
-        For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N].
-        For continuous-time DPMs, we just use `t_continuous`.
-        """
-        if noise_schedule.schedule == 'discrete':
-            return (t_continuous - 1. / noise_schedule.total_N) * 1000.
-        else:
-            return t_continuous
-
-    def noise_pred_fn(x, t_continuous, cond=None):
-        if t_continuous.reshape((-1,)).shape[0] == 1:
-            t_continuous = t_continuous.expand((x.shape[0]))
-        t_input = get_model_input_time(t_continuous)
-        if cond is None:
-            output = model(x, t_input, **model_kwargs)
-        else:
-            output = model(x, t_input, cond, **model_kwargs)
-        if model_type == "noise":
-            return output
-        elif model_type == "x_start":
-            alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
-            dims = x.dim()
-            return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims)
-        elif model_type == "v":
-            alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
-            dims = x.dim()
-            return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x
-        elif model_type == "score":
-            sigma_t = noise_schedule.marginal_std(t_continuous)
-            dims = x.dim()
-            return -expand_dims(sigma_t, dims) * output
-
-    def cond_grad_fn(x, t_input):
-        """
-        Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t).
-        """
-        with torch.enable_grad():
-            x_in = x.detach().requires_grad_(True)
-            log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs)
-            return torch.autograd.grad(log_prob.sum(), x_in)[0]
-
-    def model_fn(x, t_continuous):
-        """
-        The noise prediction model function that is used for DPM-Solver.
-        """
-        if t_continuous.reshape((-1,)).shape[0] == 1:
-            t_continuous = t_continuous.expand((x.shape[0]))
-        if guidance_type == "uncond":
-            return noise_pred_fn(x, t_continuous)
-        elif guidance_type == "classifier":
-            assert classifier_fn is not None
-            t_input = get_model_input_time(t_continuous)
-            cond_grad = cond_grad_fn(x, t_input)
-            sigma_t = noise_schedule.marginal_std(t_continuous)
-            noise = noise_pred_fn(x, t_continuous)
-            return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad
-        elif guidance_type == "classifier-free":
-            if guidance_scale == 1. or unconditional_condition is None:
-                return noise_pred_fn(x, t_continuous, cond=condition)
-            else:
-                x_in = torch.cat([x] * 2)
-                t_in = torch.cat([t_continuous] * 2)
-                c_in = torch.cat([unconditional_condition, condition])
-                noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2)
-                return noise_uncond + guidance_scale * (noise - noise_uncond)
-
-    # "score" is also a supported parameterization; it is converted to a noise
-    # prediction inside `noise_pred_fn` above.
-    assert model_type in ["noise", "x_start", "v", "score"]
-    assert guidance_type in ["uncond", "classifier", "classifier-free"]
-    return model_fn
-
-
-class DPM_Solver:
-    def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.):
-        """Construct a DPM-Solver.
-
-        We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0").
- If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver). - If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++). - In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True. - The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales. - - Args: - model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]): - `` - def model_fn(x, t_continuous): - return noise - `` - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model. - thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1]. - max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding. - - [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b. - """ - self.model = model_fn - self.noise_schedule = noise_schedule - self.predict_x0 = predict_x0 - self.thresholding = thresholding - self.max_val = max_val - - def noise_prediction_fn(self, x, t): - """ - Return the noise prediction model. - """ - return self.model(x, t) - - def data_prediction_fn(self, x, t): - """ - Return the data prediction model (with thresholding). - """ - noise = self.noise_prediction_fn(x, t) - dims = x.dim() - alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t) - x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims) - if self.thresholding: - p = 0.995 # A hyperparameter in the paper of "Imagen" [1]. - s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1) - s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims) - x0 = torch.clamp(x0, -s, s) / s - return x0 - - def model_fn(self, x, t): - """ - Convert the model to the noise prediction model or the data prediction model. - """ - if self.predict_x0: - return self.data_prediction_fn(x, t) - else: - return self.noise_prediction_fn(x, t) - - def get_time_steps(self, skip_type, t_T, t_0, N, device): - """Compute the intermediate time steps for sampling. - - Args: - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - N: A `int`. The total number of the spacing of the time steps. - device: A torch device. - Returns: - A pytorch tensor of the time steps, with the shape (N + 1,). 
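-
-        Example (an illustrative sketch; `dpm_solver` is assumed to be a DPM_Solver instance):
-        >>> ts = dpm_solver.get_time_steps('time_uniform', t_T=1., t_0=1e-3, N=10, device='cpu')
-        >>> ts.shape
-        torch.Size([11])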
- """ - if skip_type == 'logSNR': - lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device)) - lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device)) - logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device) - return self.noise_schedule.inverse_lambda(logSNR_steps) - elif skip_type == 'time_uniform': - return torch.linspace(t_T, t_0, N + 1).to(device) - elif skip_type == 'time_quadratic': - t_order = 2 - t = torch.linspace(t_T**(1. / t_order), t_0**(1. / t_order), N + 1).pow(t_order).to(device) - return t - else: - raise ValueError("Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type)) - - def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device): - """ - Get the order of each step for sampling by the singlestep DPM-Solver. - - We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast". - Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is: - - If order == 1: - We take `steps` of DPM-Solver-1 (i.e. DDIM). - - If order == 2: - - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of DPM-Solver-2. - - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If order == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2. - - ============================================ - Args: - order: A `int`. The max order for the solver (2 or 3). - steps: A `int`. The total number of function evaluations (NFE). - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - device: A torch device. - Returns: - orders: A list of the solver order of each step. - """ - if order == 3: - K = steps // 3 + 1 - if steps % 3 == 0: - orders = [3,] * (K - 2) + [2, 1] - elif steps % 3 == 1: - orders = [3,] * (K - 1) + [1] - else: - orders = [3,] * (K - 1) + [2] - elif order == 2: - if steps % 2 == 0: - K = steps // 2 - orders = [2,] * K - else: - K = steps // 2 + 1 - orders = [2,] * (K - 1) + [1] - elif order == 1: - K = 1 - orders = [1,] * steps - else: - raise ValueError("'order' must be '1' or '2' or '3'.") - if skip_type == 'logSNR': - # To reproduce the results in DPM-Solver paper - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device) - else: - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[torch.cumsum(torch.tensor([0,] + orders)).to(device)] - return timesteps_outer, orders - - def denoise_to_zero_fn(self, x, s): - """ - Denoise at the final step, which is equivalent to solve the ODE from lambda_s to infty by first-order discretization. 
- """ - return self.data_prediction_fn(x, s) - - def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False): - """ - DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - if self.predict_x0: - phi_1 = torch.expm1(-h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - else: - phi_1 = torch.expm1(h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - - def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False, solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-2 from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the second-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s` and `s1` (the intermediate time). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
- """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 0.5 - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - s1 = ns.inverse_lambda(lambda_s1) - log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t) - alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_1 = torch.expm1(-h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * (model_s1 - model_s) - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_1 = torch.expm1(h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s) - ) - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1} - else: - return x_t - - def singlestep_dpm_solver_third_update(self, x, s, t, r1=1./3., r2=2./3., model_s=None, model_s1=None, return_intermediate=False, solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-3 from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`). - If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. 
- Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 1. / 3. - if r2 is None: - r2 = 2. / 3. - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - lambda_s2 = lambda_s + r2 * h - s1 = ns.inverse_lambda(lambda_s1) - s2 = ns.inverse_lambda(lambda_s2) - log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(s2), ns.marginal_std(t) - alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_12 = torch.expm1(-r2 * h) - phi_1 = torch.expm1(-h) - phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1. - phi_2 = phi_1 / h + 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(sigma_s2 / sigma_s, dims) * x - - expand_dims(alpha_s2 * phi_12, dims) * model_s - + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + expand_dims(alpha_t * phi_2, dims) * D1 - - expand_dims(alpha_t * phi_3, dims) * D2 - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_12 = torch.expm1(r2 * h) - phi_1 = torch.expm1(h) - phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1. - phi_2 = phi_1 / h - 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x - - expand_dims(sigma_s2 * phi_12, dims) * model_s - - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. 
* (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - expand_dims(sigma_t * phi_2, dims) * D1 - - expand_dims(sigma_t * phi_3, dims) * D2 - ) - - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2} - else: - return x_t - - def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"): - """ - Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - ns = self.noise_schedule - dims = x.dim() - model_prev_1, model_prev_0 = model_prev_list - t_prev_1, t_prev_0 = t_prev_list - lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0 = h_0 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - if self.predict_x0: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0 - ) - else: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0 - ) - return x_t - - def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'): - """ - Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. 
We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - model_prev_2, model_prev_1, model_prev_0 = model_prev_list - t_prev_2, t_prev_1, t_prev_0 = t_prev_list - lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda(t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_1 = lambda_prev_1 - lambda_prev_2 - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0, r1 = h_0 / h, h_1 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2) - D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1) - D2 = expand_dims(1. / (r0 + r1), dims) * (D1_0 - D1_1) - if self.predict_x0: - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1 - - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h**2 - 0.5), dims) * D2 - ) - else: - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1 - - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h**2 - 0.5), dims) * D2 - ) - return x_t - - def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None, r2=None): - """ - Singlestep DPM-Solver with the order `order` from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - r1: A `float`. The hyperparameter of the second-order or third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if order == 1: - return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate) - elif order == 2: - return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate, solver_type=solver_type, r1=r1) - elif order == 3: - return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate, solver_type=solver_type, r1=r1, r2=r2) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'): - """ - Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. 
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if order == 1: - return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1]) - elif order == 2: - return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - elif order == 3: - return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5, solver_type='dpm_solver'): - """ - The adaptive step size solver based on singlestep DPM-Solver. - - Args: - x: A pytorch tensor. The initial value at time `t_T`. - order: A `int`. The (higher) order of the solver. We only support order == 2 or 3. - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - h_init: A `float`. The initial step size (for logSNR). - atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, followed [1]. - rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05. - theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, followed [1]. - t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the - current time and `t_0` is less than `t_err`. The default setting is 1e-5. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_0: A pytorch tensor. The approximated solution at time `t_0`. - - [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021. - """ - ns = self.noise_schedule - s = t_T * torch.ones((x.shape[0],)).to(x) - lambda_s = ns.marginal_lambda(s) - lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x)) - h = h_init * torch.ones_like(s).to(x) - x_prev = x - nfe = 0 - if order == 2: - r1 = 0.5 - lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, solver_type=solver_type, **kwargs) - elif order == 3: - r1, r2 = 1. / 3., 2. / 3. 
- lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, return_intermediate=True, solver_type=solver_type) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2, solver_type=solver_type, **kwargs) - else: - raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order)) - while torch.abs((s - t_0)).mean() > t_err: - t = ns.inverse_lambda(lambda_s + h) - x_lower, lower_noise_kwargs = lower_update(x, s, t) - x_higher = higher_update(x, s, t, **lower_noise_kwargs) - delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev))) - norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True)) - E = norm_fn((x_higher - x_lower) / delta).max() - if torch.all(E <= 1.): - x = x_higher - s = t - x_prev = x_lower - lambda_s = ns.marginal_lambda(s) - h = torch.min(theta * h * torch.float_power(E, -1. / order).float(), lambda_0 - lambda_s) - nfe += order - print('adaptive solver nfe', nfe) - return x - - def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform', - method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver', - atol=0.0078, rtol=0.05, - ): - """ - Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`. - - ===================================================== - - We support the following algorithms for both noise prediction model and data prediction model: - - 'singlestep': - Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver. - We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps). - The total number of function evaluations (NFE) == `steps`. - Given a fixed NFE == `steps`, the sampling procedure is: - - If `order` == 1: - - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2. - - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If `order` == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2. - - 'multistep': - Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`. - We initialize the first `order` values by lower order multistep solvers. - Given a fixed NFE == `steps`, the sampling procedure is: - Denote K = steps. - - If `order` == 1: - - We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - We firstly use 1 step of DPM-Solver-1, then use (K - 1) step of multistep DPM-Solver-2. - - If `order` == 3: - - We firstly use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) step of multistep DPM-Solver-3. - - 'singlestep_fixed': - Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3). 
-            We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE.
-        - 'adaptive': Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper).
-            We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`.
-            You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computation costs
-            (NFE) and the sample quality.
-            - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2.
-            - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3.
-
-        =====================================================
-
-        Some advice for choosing the algorithm:
-            - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs:
-                Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`.
-                e.g.
-                >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False)
-                >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3,
-                        skip_type='time_uniform', method='singlestep')
-            - For **guided sampling with large guidance scale** by DPMs:
-                Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`.
-                e.g.
-                >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True)
-                >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2,
-                        skip_type='time_uniform', method='multistep')
-
-        We support three types of `skip_type`:
-            - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images**
-            - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**.
-            - 'time_quadratic': quadratic time for the time steps.
-
-        =====================================================
-        Args:
-            x: A pytorch tensor. The initial value at time `t_start`
-                e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution.
-            steps: A `int`. The total number of function evaluations (NFE).
-            t_start: A `float`. The starting time of the sampling.
-                If `t_start` is None, we use self.noise_schedule.T (default is 1.0).
-            t_end: A `float`. The ending time of the sampling.
-                If `t_end` is None, we use 1. / self.noise_schedule.total_N.
-                e.g. if total_N == 1000, we have `t_end` == 1e-3.
-                For discrete-time DPMs:
-                    - We recommend `t_end` == 1. / self.noise_schedule.total_N.
-                For continuous-time DPMs:
-                    - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15.
-            order: A `int`. The order of DPM-Solver.
-            skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'.
-            method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'.
-            denoise_to_zero: A `bool`. Whether to denoise to time 0 at the final step.
-                Default is `False`. If `denoise_to_zero` is `True`, the total NFE is (`steps` + 1).
-
-                This trick was first proposed by DDPM (https://arxiv.org/abs/2006.11239) and
-                score_sde (https://arxiv.org/abs/2011.13456). It can improve the FID
-                for diffusion models sampling by diffusion SDEs for low-resolutional images
-                (such as CIFAR-10). However, we observed that this trick does not matter for
-                high-resolutional images. As it needs an additional NFE, we do not recommend
-                it for high-resolutional images.
-            lower_order_final: A `bool`. Whether to use lower order solvers at the final steps.
- Only valid for `method=multistep` and `steps < 15`. We empirically find that - this trick is a key to stabilizing the sampling by DPM-Solver with very few steps - (especially for steps <= 10). So we recommend to set it to be `True`. - solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`. - atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - Returns: - x_end: A pytorch tensor. The approximated solution at time `t_end`. - - """ - t_0 = 1. / self.noise_schedule.total_N if t_end is None else t_end - t_T = self.noise_schedule.T if t_start is None else t_start - device = x.device - if method == 'adaptive': - with torch.no_grad(): - x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol, solver_type=solver_type) - elif method == 'multistep': - assert steps >= order - timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device) - assert timesteps.shape[0] - 1 == steps - with torch.no_grad(): - vec_t = timesteps[0].expand((x.shape[0])) - model_prev_list = [self.model_fn(x, vec_t)] - t_prev_list = [vec_t] - # Init the first `order` values by lower order multistep DPM-Solver. - for init_order in range(1, order): - vec_t = timesteps[init_order].expand(x.shape[0]) - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order, solver_type=solver_type) - model_prev_list.append(self.model_fn(x, vec_t)) - t_prev_list.append(vec_t) - # Compute the remaining values by `order`-th order multistep DPM-Solver. - for step in range(order, steps + 1): - vec_t = timesteps[step].expand(x.shape[0]) - if lower_order_final and steps < 15: - step_order = min(order, steps + 1 - step) - else: - step_order = order - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order, solver_type=solver_type) - for i in range(order - 1): - t_prev_list[i] = t_prev_list[i + 1] - model_prev_list[i] = model_prev_list[i + 1] - t_prev_list[-1] = vec_t - # We do not need to evaluate the final model value. 
- if step < steps: - model_prev_list[-1] = self.model_fn(x, vec_t) - elif method in ['singlestep', 'singlestep_fixed']: - if method == 'singlestep': - timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order, skip_type=skip_type, t_T=t_T, t_0=t_0, device=device) - elif method == 'singlestep_fixed': - K = steps // order - orders = [order,] * K - timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device) - for i, order in enumerate(orders): - t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1] - timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(), N=order, device=device) - lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner) - vec_s, vec_t = t_T_inner.tile(x.shape[0]), t_0_inner.tile(x.shape[0]) - h = lambda_inner[-1] - lambda_inner[0] - r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h - r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h - x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2) - if denoise_to_zero: - x = self.denoise_to_zero_fn(x, torch.ones((x.shape[0],)).to(device) * t_0) - return x - - - -############################################################# -# other utility functions -############################################################# - -def interpolate_fn(x, xp, yp): - """ - A piecewise linear function y = f(x), using xp and yp as keypoints. - We implement f(x) in a differentiable way (i.e. applicable for autograd). - The function f(x) is well-defined for all x-axis. (For x beyond the bounds of xp, we use the outmost points of xp to define the linear function.) - - Args: - x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver). - xp: PyTorch tensor with shape [C, K], where K is the number of keypoints. - yp: PyTorch tensor with shape [C, K]. - Returns: - The function values f(x), with shape [N, C]. - """ - N, K = x.shape[0], xp.shape[1] - all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2) - sorted_all_x, x_indices = torch.sort(all_x, dim=2) - x_idx = torch.argmin(x_indices, dim=2) - cand_start_idx = x_idx - 1 - start_idx = torch.where( - torch.eq(x_idx, 0), - torch.tensor(1, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1) - start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2) - end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2) - start_idx2 = torch.where( - torch.eq(x_idx, 0), - torch.tensor(0, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1) - start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2) - end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2) - cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x) - return cand - - -def expand_dims(v, dims): - """ - Expand the tensor `v` to the dim `dims`. - - Args: - `v`: a PyTorch tensor with shape [N]. - `dim`: a `int`. - Returns: - a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`. 
- """ - return v[(...,) + (None,)*(dims - 1)] \ No newline at end of file diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/image_degradation/utils_image.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/image_degradation/utils_image.py deleted file mode 100644 index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/image_degradation/utils_image.py +++ /dev/null @@ -1,916 +0,0 @@ -import os -import math -import random -import numpy as np -import torch -import cv2 -from torchvision.utils import make_grid -from datetime import datetime -#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py - - -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - - -''' -# -------------------------------------------- -# Kai Zhang (github: https://github.com/cszn) -# 03/Mar/2019 -# -------------------------------------------- -# https://github.com/twhui/SRGAN-pyTorch -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif'] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def get_timestamp(): - return datetime.now().strftime('%y%m%d-%H%M%S') - - -def imshow(x, title=None, cbar=False, figsize=None): - plt.figure(figsize=figsize) - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - - -def surf(Z, cmap='rainbow', figsize=None): - plt.figure(figsize=figsize) - ax3 = plt.axes(projection='3d') - - w, h = Z.shape[:2] - xx = np.arange(0,w,1) - yy = np.arange(0,h,1) - X, Y = np.meshgrid(xx, yy) - ax3.plot_surface(X,Y,Z,cmap=cmap) - #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap) - plt.show() - - -''' -# -------------------------------------------- -# get image pathes -# -------------------------------------------- -''' - - -def get_image_paths(dataroot): - paths = None # return None if dataroot is None - if dataroot is not None: - paths = sorted(_get_paths_from_images(dataroot)) - return paths - - -def _get_paths_from_images(path): - assert os.path.isdir(path), '{:s} is not a valid directory'.format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, '{:s} has no valid image file'.format(path) - return images - - -''' -# -------------------------------------------- -# split large images into small images -# -------------------------------------------- -''' - - -def patches_from_image(img, p_size=512, p_overlap=64, p_max=800): - w, h = img.shape[:2] - patches = [] - if w > p_max and h > p_max: - w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int)) - h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int)) - w1.append(w-p_size) - h1.append(h-p_size) -# print(w1) -# print(h1) - for i in w1: - for j in h1: - patches.append(img[i:i+p_size, j:j+p_size,:]) - else: - patches.append(img) - - return patches - - -def imssave(imgs, img_path): - """ - imgs: list, N images of size WxHxC - """ - img_name, ext = os.path.splitext(os.path.basename(img_path)) - - for i, img in enumerate(imgs): - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - new_path = os.path.join(os.path.dirname(img_path), 
img_name+str('_s{:04d}'.format(i))+'.png') - cv2.imwrite(new_path, img) - - -def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000): - """ - split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size), - and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max) - will be splitted. - Args: - original_dataroot: - taget_dataroot: - p_size: size of small images - p_overlap: patch size in training is a good choice - p_max: images with smaller size than (p_max)x(p_max) keep unchanged. - """ - paths = get_image_paths(original_dataroot) - for img_path in paths: - # img_name, ext = os.path.splitext(os.path.basename(img_path)) - img = imread_uint(img_path, n_channels=n_channels) - patches = patches_from_image(img, p_size, p_overlap, p_max) - imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path))) - #if original_dataroot == taget_dataroot: - #del img_path - -''' -# -------------------------------------------- -# makedir -# -------------------------------------------- -''' - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + '_archived_' + get_timestamp() - print('Path already exists. Rename it to [{:s}]'.format(new_name)) - os.rename(path, new_name) - os.makedirs(path) - - -''' -# -------------------------------------------- -# read image from path -# opencv is fast, but read BGR numpy image -# -------------------------------------------- -''' - - -# -------------------------------------------- -# get uint8 image of size HxWxn_channles (RGB) -# -------------------------------------------- -def imread_uint(path, n_channels=3): - # input: path - # output: HxWx3(RGB or GGG), or HxWx1 (G) - if n_channels == 1: - img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE - img = np.expand_dims(img, axis=2) # HxWx1 - elif n_channels == 3: - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG - else: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB - return img - - -# -------------------------------------------- -# matlab's imwrite -# -------------------------------------------- -def imsave(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - -def imwrite(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - - - -# -------------------------------------------- -# get single image of size HxWxn_channles (BGR) -# -------------------------------------------- -def read_img(path): - # read image by cv2 - # return: Numpy float32, HWC, BGR, [0,1] - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE - img = img.astype(np.float32) / 255. 
- if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) - - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) - - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), 
normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. -# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img 
= np.copy(img_in)
-    if img.ndim == 2:
-        H, W = img.shape
-        H_r, W_r = H % scale, W % scale
-        img = img[:H - H_r, :W - W_r]
-    elif img.ndim == 3:
-        H, W, C = img.shape
-        H_r, W_r = H % scale, W % scale
-        img = img[:H - H_r, :W - W_r, :]
-    else:
-        raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
-    return img
-
-
-def shave(img_in, border=0):
-    # img_in: Numpy, HWC or HW
-    img = np.copy(img_in)
-    h, w = img.shape[:2]
-    img = img[border:h-border, border:w-border]
-    return img
-
-
-'''
-# --------------------------------------------
-# image processing on numpy images
-# channel_convert(in_c, tar_type, img_list):
-# rgb2ycbcr(img, only_y=True):
-# bgr2ycbcr(img, only_y=True):
-# ycbcr2rgb(img):
-# --------------------------------------------
-'''
-
-
-def rgb2ycbcr(img, only_y=True):
-    '''same as matlab rgb2ycbcr
-    only_y: only return Y channel
-    Input:
-        uint8, [0, 255]
-        float, [0, 1]
-    '''
-    in_img_type = img.dtype
-    img = img.astype(np.float32)
-    if in_img_type != np.uint8:
-        img *= 255.
-    # convert
-    if only_y:
-        rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
-    else:
-        rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
-                              [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
-    if in_img_type == np.uint8:
-        rlt = rlt.round()
-    else:
-        rlt /= 255.
-    return rlt.astype(in_img_type)
-
-
-def ycbcr2rgb(img):
-    '''same as matlab ycbcr2rgb
-    Input:
-        uint8, [0, 255]
-        float, [0, 1]
-    '''
-    in_img_type = img.dtype
-    img = img.astype(np.float32)
-    if in_img_type != np.uint8:
-        img *= 255.
-    # convert
-    rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
-                          [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
-    if in_img_type == np.uint8:
-        rlt = rlt.round()
-    else:
-        rlt /= 255.
-    return rlt.astype(in_img_type)
-
-
-def bgr2ycbcr(img, only_y=True):
-    '''bgr version of rgb2ycbcr
-    only_y: only return Y channel
-    Input:
-        uint8, [0, 255]
-        float, [0, 1]
-    '''
-    in_img_type = img.dtype
-    img = img.astype(np.float32)
-    if in_img_type != np.uint8:
-        img *= 255.
-    # convert
-    if only_y:
-        rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
-    else:
-        rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
-                              [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
-    if in_img_type == np.uint8:
-        rlt = rlt.round()
-    else:
-        rlt /= 255.
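-    # cast back to the caller's dtype: uint8 stays in [0, 255], float stays in [0, 1]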
- return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, 
antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. - weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. 
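-    # weights_H[i] / indices_H[i] list, for output row i, the contributing input
-    # rows and their cubic (optionally antialiased) weights; likewise for W below.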
- - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. 
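-    # Same bicubic resize as imresize() above, but for HWC/HW numpy arrays in [0, 1];
-    # illustrative call (assumed): lr = imresize_np(hr, 1/4)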
- - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/display.py b/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/display.py deleted file mode 100644 index 956880722a3f05613ebd06f5686b3d8a59642e92..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/display.py +++ /dev/null @@ -1,120 +0,0 @@ -import matplotlib.pyplot as plt -import time -import numpy as np -import sys - - -def progbar(i, n, size=16): - done = (i * size) // n - bar = '' - for i in range(size): - bar += '█' if i <= done else '░' - return bar - - -def stream(message) : - try: - sys.stdout.write("\r{%s}" % message) - except: - #Remove non-ASCII characters from message - message = ''.join(i for i in message if ord(i)<128) - sys.stdout.write("\r{%s}" % message) - - -def simple_table(item_tuples) : - - border_pattern = '+---------------------------------------' - whitespace = ' ' - - headings, cells, = [], [] - - for item in item_tuples : - - heading, cell = str(item[0]), str(item[1]) - - pad_head = True if len(heading) < len(cell) else False - - pad = abs(len(heading) - len(cell)) - pad = whitespace[:pad] - - pad_left = pad[:len(pad)//2] - pad_right = pad[len(pad)//2:] - - if pad_head : - heading = pad_left + heading + pad_right - else : - cell = pad_left + 
cell + pad_right - - headings += [heading] - cells += [cell] - - border, head, body = '', '', '' - - for i in range(len(item_tuples)) : - - temp_head = f'| {headings[i]} ' - temp_body = f'| {cells[i]} ' - - border += border_pattern[:len(temp_head)] - head += temp_head - body += temp_body - - if i == len(item_tuples) - 1 : - head += '|' - body += '|' - border += '+' - - print(border) - print(head) - print(border) - print(body) - print(border) - print(' ') - - -def time_since(started) : - elapsed = time.time() - started - m = int(elapsed // 60) - s = int(elapsed % 60) - if m >= 60 : - h = int(m // 60) - m = m % 60 - return f'{h}h {m}m {s}s' - else : - return f'{m}m {s}s' - - -def save_attention(attn, path) : - fig = plt.figure(figsize=(12, 6)) - plt.imshow(attn.T, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - plt.close(fig) - - -def save_spectrogram(M, path, length=None) : - M = np.flip(M, axis=0) - if length : M = M[:, :length] - fig = plt.figure(figsize=(12, 6)) - plt.imshow(M, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - plt.close(fig) - - -def plot(array) : - fig = plt.figure(figsize=(30, 5)) - ax = fig.add_subplot(111) - ax.xaxis.label.set_color('grey') - ax.yaxis.label.set_color('grey') - ax.xaxis.label.set_fontsize(23) - ax.yaxis.label.set_fontsize(23) - ax.tick_params(axis='x', colors='grey', labelsize=23) - ax.tick_params(axis='y', colors='grey', labelsize=23) - plt.plot(array) - - -def plot_spec(M) : - M = np.flip(M, axis=0) - plt.figure(figsize=(18,4)) - plt.imshow(M, interpolation='nearest', aspect='auto') - plt.show() - diff --git a/spaces/Kreaols/ChuanhuChatGPT/assets/external-scripts.js b/spaces/Kreaols/ChuanhuChatGPT/assets/external-scripts.js deleted file mode 100644 index 8d0352669045537af5698b1824dbc1dba21df478..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/assets/external-scripts.js +++ /dev/null @@ -1,2 +0,0 @@ - -// external javascript here diff --git a/spaces/Kuachi/ai-voice/README.md b/spaces/Kuachi/ai-voice/README.md deleted file mode 100644 index 62dee36e0b30f5e99a6eea4122deb42189651e4e..0000000000000000000000000000000000000000 --- a/spaces/Kuachi/ai-voice/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Voice Ai -emoji: 🗿 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: kuachi/voice ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py deleted file mode 100644 index d20beb2975a563f03e7b6b2afcef287cb41af05a..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Tuple - -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmengine.config import ConfigDict -from mmengine.model import BaseModule -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.utils import MultiConfig, OptConfigType - - -@MODELS.register_module() -class FusedSemanticHead(BaseModule): - r"""Multi-level fused semantic segmentation head. - - .. 
code-block:: none - - in_1 -> 1x1 conv --- - | - in_2 -> 1x1 conv -- | - || - in_3 -> 1x1 conv - || - ||| /-> 1x1 conv (mask prediction) - in_4 -> 1x1 conv -----> 3x3 convs (*4) - | \-> 1x1 conv (feature) - in_5 -> 1x1 conv --- - """ # noqa: W605 - - def __init__( - self, - num_ins: int, - fusion_level: int, - seg_scale_factor=1 / 8, - num_convs: int = 4, - in_channels: int = 256, - conv_out_channels: int = 256, - num_classes: int = 183, - conv_cfg: OptConfigType = None, - norm_cfg: OptConfigType = None, - ignore_label: int = None, - loss_weight: float = None, - loss_seg: ConfigDict = dict( - type='CrossEntropyLoss', ignore_index=255, loss_weight=0.2), - init_cfg: MultiConfig = dict( - type='Kaiming', override=dict(name='conv_logits')) - ) -> None: - super().__init__(init_cfg=init_cfg) - self.num_ins = num_ins - self.fusion_level = fusion_level - self.seg_scale_factor = seg_scale_factor - self.num_convs = num_convs - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.num_classes = num_classes - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - - self.lateral_convs = nn.ModuleList() - for i in range(self.num_ins): - self.lateral_convs.append( - ConvModule( - self.in_channels, - self.in_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - inplace=False)) - - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = self.in_channels if i == 0 else conv_out_channels - self.convs.append( - ConvModule( - in_channels, - conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.conv_embedding = ConvModule( - conv_out_channels, - conv_out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.conv_logits = nn.Conv2d(conv_out_channels, self.num_classes, 1) - if ignore_label: - loss_seg['ignore_index'] = ignore_label - if loss_weight: - loss_seg['loss_weight'] = loss_weight - if ignore_label or loss_weight: - warnings.warn('``ignore_label`` and ``loss_weight`` would be ' - 'deprecated soon. Please set ``ingore_index`` and ' - '``loss_weight`` in ``loss_seg`` instead.') - self.criterion = MODELS.build(loss_seg) - - def forward(self, feats: Tuple[Tensor]) -> Tuple[Tensor]: - """Forward function. - - Args: - feats (tuple[Tensor]): Multi scale feature maps. - - Returns: - tuple[Tensor]: - - - mask_preds (Tensor): Predicted mask logits. - - x (Tensor): Fused feature. - """ - x = self.lateral_convs[self.fusion_level](feats[self.fusion_level]) - fused_size = tuple(x.shape[-2:]) - for i, feat in enumerate(feats): - if i != self.fusion_level: - feat = F.interpolate( - feat, size=fused_size, mode='bilinear', align_corners=True) - # fix runtime error of "+=" inplace operation in PyTorch 1.10 - x = x + self.lateral_convs[i](feat) - - for i in range(self.num_convs): - x = self.convs[i](x) - - mask_preds = self.conv_logits(x) - x = self.conv_embedding(x) - return mask_preds, x - - def loss(self, mask_preds: Tensor, labels: Tensor) -> Tensor: - """Loss function. - - Args: - mask_preds (Tensor): Predicted mask logits. - labels (Tensor): Ground truth. - - Returns: - Tensor: Semantic segmentation loss. 
- """ - labels = F.interpolate( - labels.float(), scale_factor=self.seg_scale_factor, mode='nearest') - labels = labels.squeeze(1).long() - loss_semantic_seg = self.criterion(mask_preds, labels) - return loss_semantic_seg diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py deleted file mode 100644 index fbaacc19b19f6f8284eb65c7d2d2aa95e8051427..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_600e.py', - '../../_base_/det_models/psenet_r50_fpnf.py', - '../../_base_/det_datasets/icdar2015.py', - '../../_base_/det_pipelines/psenet_pipeline.py' -] - -model = {{_base_.model_quad}} - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}} - -data = dict( - samples_per_gpu=8, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/MA9149210776/CrucibleAI-ControlNetMediaPipeFace/app.py b/spaces/MA9149210776/CrucibleAI-ControlNetMediaPipeFace/app.py deleted file mode 100644 index 52058acccc89fabb676263590dd45e3c16ea72cc..0000000000000000000000000000000000000000 --- a/spaces/MA9149210776/CrucibleAI-ControlNetMediaPipeFace/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/CrucibleAI/ControlNetMediaPipeFace").launch() \ No newline at end of file diff --git a/spaces/Mahiruoshi/vits-chatbot/modules.py b/spaces/Mahiruoshi/vits-chatbot/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/vits-chatbot/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
-
-        self.conv_layers = nn.ModuleList()
-        self.norm_layers = nn.ModuleList()
-        self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-        self.norm_layers.append(LayerNorm(hidden_channels))
-        self.relu_drop = nn.Sequential(
-            nn.ReLU(),
-            nn.Dropout(p_dropout))
-        for _ in range(n_layers-1):
-            self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-            self.norm_layers.append(LayerNorm(hidden_channels))
-        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-        self.proj.weight.data.zero_()
-        self.proj.bias.data.zero_()
-
-    def forward(self, x, x_mask):
-        x_org = x
-        for i in range(self.n_layers):
-            x = self.conv_layers[i](x * x_mask)
-            x = self.norm_layers[i](x)
-            x = self.relu_drop(x)
-        x = x_org + self.proj(x)
-        return x * x_mask
-
-
-class DDSConv(nn.Module):
-    """
-    Dilated and Depth-Separable Convolution
-    """
-    def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
-        super().__init__()
-        self.channels = channels
-        self.kernel_size = kernel_size
-        self.n_layers = n_layers
-        self.p_dropout = p_dropout
-
-        self.drop = nn.Dropout(p_dropout)
-        self.convs_sep = nn.ModuleList()
-        self.convs_1x1 = nn.ModuleList()
-        self.norms_1 = nn.ModuleList()
-        self.norms_2 = nn.ModuleList()
-        for i in range(n_layers):
-            dilation = kernel_size ** i
-            padding = (kernel_size * dilation - dilation) // 2
-            self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
-                groups=channels, dilation=dilation, padding=padding
-            ))
-            self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-            self.norms_1.append(LayerNorm(channels))
-            self.norms_2.append(LayerNorm(channels))
-
-    def forward(self, x, x_mask, g=None):
-        if g is not None:
-            x = x + g
-        for i in range(self.n_layers):
-            y = self.convs_sep[i](x * x_mask)
-            y = self.norms_1[i](y)
-            y = F.gelu(y)
-            y = self.convs_1x1[i](y)
-            y = self.norms_2[i](y)
-            y = F.gelu(y)
-            y = self.drop(y)
-            x = x + y
-        return x * x_mask
-
-
-class WN(torch.nn.Module):
-    def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
-        super(WN, self).__init__()
-        assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.gin_channels = gin_channels
-        self.p_dropout = p_dropout
-
-        self.in_layers = torch.nn.ModuleList()
-        self.res_skip_layers = torch.nn.ModuleList()
-        self.drop = nn.Dropout(p_dropout)
-
-        if gin_channels != 0:
-            cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
-            self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
-        for i in range(n_layers):
-            dilation = dilation_rate ** i
-            padding = int((kernel_size * dilation - dilation) / 2)
-            in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
-                dilation=dilation, padding=padding)
-            in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
-            self.in_layers.append(in_layer)
-
-            # last one is not necessary
-            if i < n_layers - 1:
-                res_skip_channels = 2 * hidden_channels
-            else:
-                res_skip_channels = hidden_channels
-
-            res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
-            res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
-            self.res_skip_layers.append(res_skip_layer)
-
-    def forward(self, x, x_mask, g=None, **kwargs):
-        output = torch.zeros_like(x)
-        n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
-        if g is not None:
-            g = self.cond_layer(g)
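-            # g: (B, 2*hidden_channels*n_layers, T); each layer slices its own chunk below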
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
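-        # h holds the raw spline parameters per element of x1:
-        # num_bins widths, num_bins heights and num_bins - 1 derivatives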
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Makiing/coolb-in-gtest/src/components/ui/separator.tsx b/spaces/Makiing/coolb-in-gtest/src/components/ui/separator.tsx deleted file mode 100644 index 6c55e0b2ca8e2436658a06748aadbff7cd700db0..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/components/ui/separator.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SeparatorPrimitive from '@radix-ui/react-separator' - -import { cn } from '@/lib/utils' - -const Separator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->( - ( - { className, orientation = 'horizontal', decorative = true, ...props }, - ref - ) => ( - - ) -) -Separator.displayName = SeparatorPrimitive.Root.displayName - -export { Separator } diff --git a/spaces/Marshalls/testmtd/analysis/remove_bad_beginnings.py b/spaces/Marshalls/testmtd/analysis/remove_bad_beginnings.py deleted file mode 100644 index dcd505596253e4401b999df4bad2ed4bca525106..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/remove_bad_beginnings.py +++ /dev/null @@ -1,39 +0,0 @@ -from analysis.pymo.parsers import BVHParser -from analysis.pymo.data import Joint, MocapData -from analysis.pymo.preprocessing import * -from analysis.pymo.viz_tools import * -from analysis.pymo.writers import * -from sklearn.pipeline import Pipeline -from pathlib import Path -import sys -path = sys.argv[1] - -from feature_extraction.utils import distribute_tasks -from mpi4py import MPI -comm = MPI.COMM_WORLD -rank = comm.Get_rank() -size = comm.Get_size() - -path = Path(path) -candidate_audio_files = sorted(path.glob('**/*.bvh'), key=lambda path: path.parent.__str__()) -tasks = distribute_tasks(candidate_audio_files,rank,size) - -p = BVHParser() -datas = [] -filenames = [] -for i in tasks: - f = candidate_audio_files[i] - print(f) - filenames.append(f) - datas.append(p.parse(f)) - -with open("to_check"+str(rank),"w") as f: - for i,data in enumerate(datas): - bad_ones = data.values[(data.values["Hips_Xposition"] > 100000) | (data.values["Hips_Xposition"] < -100000)] - if len(bad_ones) > 0: - last_index = bad_ones.index[-1] - data.values = data.values.loc[last_index:].iloc[1:] - writer = BVHWriter() - - with open(filenames[i],'w') as out_f: - writer.write(data, out_f) diff --git a/spaces/Marshalls/testmtd/feature_extraction/audio_feature_extraction_test.sh b/spaces/Marshalls/testmtd/feature_extraction/audio_feature_extraction_test.sh deleted file mode 100644 index b2f93c1d3ecec3b131e868b790753e1e317b7938..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/audio_feature_extraction_test.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash - -folder=$1 - -py=python3 -n=$(nproc) #get Get the maximum number of processes on the computer -#n=6 - -#find $1 -type f -iname "*.mp3" -exec basename \{\} .mp3 \; > $1/base_filenames.txt - -fps=20 -format=wav #the format of the audio - 
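-# expected invocation (assumed from the code): ./audio_feature_extraction_test.sh <dataset_folder> [extra args passed through via $@]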
-###SEQUENCE TO PROCESS DATA WHEN NEEDING TO COMPUTE NORMALIZATION TRANSFORMS -#mpirun -n $n $py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names multi_mel --mel_feature_size 80 --fps 100 # fps=100 coz thats what ddc expects -#mpirun -n $n $py ./feature_extraction/generate_ddc_features.py $@ --audio_format $format --experiment_name block_placement_ddc2 --checkpoint 130000 --checkpoints_dir feature_extraction --fps $fps -#mpirun -n $n $py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names mel,envelope,madmombeats --mel_feature_size 80 --fps $fps --combined_feature_name audio_feats -#mpirun -n 1 $py ./feature_extraction/extract_transform.py $1 --feature_name ${format}_multi_mel_80.npy_ddc_hidden --transforms pca_transform -#mpirun -n $n $py ./feature_extraction/apply_transforms.py $@ --feature_name ${format}_multi_mel_80.npy_ddc_hidden --transform_name pca_transform --pca_dims 2 --new_feature_name ddcpca -#./feature_extraction/script_to_list_filenames $1 $format -#mpirun -n $n $py ./feature_extraction/combine_feats.py $@ $1/base_filenames.txt --feature_names ${format}_audio_feats,ddcpca --new_feature_name feats_ddcpca -#mpirun -n 1 $py ./feature_extraction/extract_transform2.py $1 --feature_name feats_ddcpca --transforms scaler -#mpirun -n $n $py ./feature_extraction/apply_transforms.py $@ --feature_name feats_ddcpca --transform_name scaler --new_feature_name audio_feats_scaled_${fps} - -###SEQUENCE WHEN USING EXISTING TRANSFORMS (SO NO NEED T RECOMPUTE THEM) -#mpirun -n $n $py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names multi_mel --mel_feature_size 80 --fps 100 # fps=100 coz thats what ddc expects -#mpirun -n $n $py ./feature_extraction/generate_ddc_features.py $@ --audio_format $format --experiment_name block_placement_ddc2 --checkpoint 130000 --checkpoints_dir feature_extraction --fps $fps -#mpirun -n $n $py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names mel,envelope,madmombeats --mel_feature_size 80 --fps $fps --combined_feature_name audio_feats -#mpirun -n $n $py ./feature_extraction/apply_transforms.py $@ --feature_name ${format}_multi_mel_80.npy_ddc_hidden --transform_name pca_transform --pca_dims 2 --new_feature_name ddcpca -#./feature_extraction/script_to_list_filenames $1 $format -#mpirun -n $n $py ./feature_extraction/combine_feats.py $@ $1/base_filenames.txt --feature_names ${format}_audio_feats,ddcpca --new_feature_name feats_ddcpca -#mpirun -n $n $py ./feature_extraction/apply_transforms.py $@ --feature_name feats_ddcpca --transform_name scaler --new_feature_name audio_feats_scaled_${fps} - -###NOMPI -chmod +x ./feature_extraction/process_audio.py -chmod +x ./feature_extraction/generate_ddc_features.py -chmod +x ./feature_extraction/process_audio.py -chmod +x ./feature_extraction/apply_transforms.py -chmod +x ./feature_extraction/combine_feats.py -chmod +x ./feature_extraction/apply_transforms.py -$py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names multi_mel --mel_feature_size 80 --fps 100 # fps=100 coz thats what ddc expects -$py ./feature_extraction/generate_ddc_features.py $@ --audio_format $format --experiment_name block_placement_ddc2 --checkpoint 130000 --checkpoints_dir feature_extraction --fps $fps -$py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names mel,envelope,madmombeats --mel_feature_size 80 --fps $fps --combined_feature_name audio_feats -$py 
./feature_extraction/apply_transforms.py $@ --feature_name ${format}_multi_mel_80.npy_ddc_hidden --transform_name pca_transform --pca_dims 2 --new_feature_name ddcpca -./feature_extraction/script_to_list_filenames $1 $format -$py ./feature_extraction/combine_feats.py $@ $1/base_filenames.txt --feature_names ${format}_audio_feats,ddcpca --new_feature_name feats_ddcpca -$py ./feature_extraction/apply_transforms.py $@ --feature_name feats_ddcpca --transform_name scaler --new_feature_name audio_feats_scaled_${fps} \ No newline at end of file diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/token_counter.py b/spaces/MetaWabbit/Auto-GPT/autogpt/token_counter.py deleted file mode 100644 index 338fe6be4d47a679f2bf0815685edeb3dce66936..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/autogpt/token_counter.py +++ /dev/null @@ -1,73 +0,0 @@ -"""Functions for counting the number of tokens in a message or string.""" -from __future__ import annotations - -import tiktoken - -from autogpt.logs import logger - - -def count_message_tokens( - messages: list[dict[str, str]], model: str = "gpt-3.5-turbo-0301" -) -> int: - """ - Returns the number of tokens used by a list of messages. - - Args: - messages (list): A list of messages, each of which is a dictionary - containing the role and content of the message. - model (str): The name of the model to use for tokenization. - Defaults to "gpt-3.5-turbo-0301". - - Returns: - int: The number of tokens used by the list of messages. - """ - try: - encoding = tiktoken.encoding_for_model(model) - except KeyError: - logger.warn("Warning: model not found. Using cl100k_base encoding.") - encoding = tiktoken.get_encoding("cl100k_base") - if model == "gpt-3.5-turbo": - # !Note: gpt-3.5-turbo may change over time. - # Returning num tokens assuming gpt-3.5-turbo-0301.") - return count_message_tokens(messages, model="gpt-3.5-turbo-0301") - elif model == "gpt-4": - # !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.") - return count_message_tokens(messages, model="gpt-4-0314") - elif model == "gpt-3.5-turbo-0301": - tokens_per_message = ( - 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n - ) - tokens_per_name = -1 # if there's a name, the role is omitted - elif model == "gpt-4-0314": - tokens_per_message = 3 - tokens_per_name = 1 - else: - raise NotImplementedError( - f"num_tokens_from_messages() is not implemented for model {model}.\n" - " See https://github.com/openai/openai-python/blob/main/chatml.md for" - " information on how messages are converted to tokens." - ) - num_tokens = 0 - for message in messages: - num_tokens += tokens_per_message - for key, value in message.items(): - num_tokens += len(encoding.encode(value)) - if key == "name": - num_tokens += tokens_per_name - num_tokens += 3 # every reply is primed with <|start|>assistant<|message|> - return num_tokens - - -def count_string_tokens(string: str, model_name: str) -> int: - """ - Returns the number of tokens in a text string. - - Args: - string (str): The text string. - model_name (str): The name of the encoding to use. (e.g., "gpt-3.5-turbo") - - Returns: - int: The number of tokens in the text string. 
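-
-    Example (illustrative; the exact count depends on the tokenizer):
-        >>> count_string_tokens("hello world", "gpt-3.5-turbo")
-        2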
- """ - encoding = tiktoken.encoding_for_model(model_name) - return len(encoding.encode(string)) diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/losses/dice_loss.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/losses/dice_loss.py deleted file mode 100644 index 37d2d3d1926263e85c4fd4b98c8f98087405686e..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/losses/dice_loss.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional - -import torch -import torch.nn as nn - -from mmocr.registry import MODELS - - -@MODELS.register_module() -class MaskedDiceLoss(nn.Module): - """Masked dice loss. - - Args: - eps (float, optional): Eps to avoid zero-divison error. Defaults to - 1e-6. - """ - - def __init__(self, eps: float = 1e-6) -> None: - super().__init__() - assert isinstance(eps, float) - self.eps = eps - - def forward(self, - pred: torch.Tensor, - gt: torch.Tensor, - mask: Optional[torch.Tensor] = None) -> torch.Tensor: - """Forward function. - - Args: - pred (torch.Tensor): The prediction in any shape. - gt (torch.Tensor): The learning target of the prediction in the - same shape as pred. - mask (torch.Tensor, optional): Binary mask in the same shape of - pred, indicating positive regions to calculate the loss. Whole - region will be taken into account if not provided. Defaults to - None. - - Returns: - torch.Tensor: The loss value. - """ - - assert pred.size() == gt.size() and gt.numel() > 0 - if mask is None: - mask = torch.ones_like(gt) - assert mask.size() == gt.size() - - pred = pred.contiguous().view(pred.size(0), -1) - gt = gt.contiguous().view(gt.size(0), -1) - - mask = mask.contiguous().view(mask.size(0), -1) - pred = pred * mask - gt = gt * mask - - dice_coeff = (2 * (pred * gt).sum()) / ( - pred.sum() + gt.sum() + self.eps) - - return 1 - dice_coeff - - -@MODELS.register_module() -class MaskedSquareDiceLoss(nn.Module): - """Masked square dice loss. - - Args: - eps (float, optional): Eps to avoid zero-divison error. Defaults to - 1e-3. - """ - - def __init__(self, eps: float = 1e-3) -> None: - super().__init__() - assert isinstance(eps, float) - self.eps = eps - - def forward(self, - pred: torch.Tensor, - gt: torch.Tensor, - mask: Optional[torch.Tensor] = None) -> torch.Tensor: - """Forward function. - - Args: - pred (torch.Tensor): The prediction in any shape. - gt (torch.Tensor): The learning target of the prediction in the - same shape as pred. - mask (torch.Tensor, optional): Binary mask in the same shape of - pred, indicating positive regions to calculate the loss. Whole - region will be taken into account if not provided. Defaults to - None. - - Returns: - torch.Tensor: The loss value. 
- """ - assert pred.size() == gt.size() and gt.numel() > 0 - if mask is None: - mask = torch.ones_like(gt) - assert mask.size() == gt.size() - batch_size = pred.size(0) - pred = pred.contiguous().view(batch_size, -1) - gt = gt.contiguous().view(batch_size, -1).float() - mask = mask.contiguous().view(batch_size, -1).float() - - pred = pred * mask - gt = gt * mask - - a = torch.sum(pred * gt, dim=1) - b = torch.sum(pred * pred, dim=1) + self.eps - c = torch.sum(gt * gt, dim=1) + self.eps - d = (2 * a) / (b + c) - loss = 1 - d - - loss = torch.mean(loss) - return loss diff --git a/spaces/MrBodean/VoiceClone/encoder/model.py b/spaces/MrBodean/VoiceClone/encoder/model.py deleted file mode 100644 index e050d3204d8f1becdf0f8b3133470708e5420cea..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/encoder/model.py +++ /dev/null @@ -1,135 +0,0 @@ -from encoder.params_model import * -from encoder.params_data import * -from scipy.interpolate import interp1d -from sklearn.metrics import roc_curve -from torch.nn.utils import clip_grad_norm_ -from scipy.optimize import brentq -from torch import nn -import numpy as np -import torch - - -class SpeakerEncoder(nn.Module): - def __init__(self, device, loss_device): - super().__init__() - self.loss_device = loss_device - - # Network defition - self.lstm = nn.LSTM(input_size=mel_n_channels, - hidden_size=model_hidden_size, - num_layers=model_num_layers, - batch_first=True).to(device) - self.linear = nn.Linear(in_features=model_hidden_size, - out_features=model_embedding_size).to(device) - self.relu = torch.nn.ReLU().to(device) - - # Cosine similarity scaling (with fixed initial parameter values) - self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device) - self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device) - - # Loss - self.loss_fn = nn.CrossEntropyLoss().to(loss_device) - - def do_gradient_ops(self): - # Gradient scale - self.similarity_weight.grad *= 0.01 - self.similarity_bias.grad *= 0.01 - - # Gradient clipping - clip_grad_norm_(self.parameters(), 3, norm_type=2) - - def forward(self, utterances, hidden_init=None): - """ - Computes the embeddings of a batch of utterance spectrograms. - - :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape - (batch_size, n_frames, n_channels) - :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers, - batch_size, hidden_size). Will default to a tensor of zeros if None. - :return: the embeddings as a tensor of shape (batch_size, embedding_size) - """ - # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state - # and the final cell state. - out, (hidden, cell) = self.lstm(utterances, hidden_init) - - # We take only the hidden state of the last layer - embeds_raw = self.relu(self.linear(hidden[-1])) - - # L2-normalize it - embeds = embeds_raw / (torch.norm(embeds_raw, dim=1, keepdim=True) + 1e-5) - - return embeds - - def similarity_matrix(self, embeds): - """ - Computes the similarity matrix according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the similarity matrix as a tensor of shape (speakers_per_batch, - utterances_per_speaker, speakers_per_batch) - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Inclusive centroids (1 per speaker). 
Cloning is needed for reverse differentiation - centroids_incl = torch.mean(embeds, dim=1, keepdim=True) - centroids_incl = centroids_incl.clone() / (torch.norm(centroids_incl, dim=2, keepdim=True) + 1e-5) - - # Exclusive centroids (1 per utterance) - centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds) - centroids_excl /= (utterances_per_speaker - 1) - centroids_excl = centroids_excl.clone() / (torch.norm(centroids_excl, dim=2, keepdim=True) + 1e-5) - - # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot - # product of these vectors (which is just an element-wise multiplication reduced by a sum). - # We vectorize the computation for efficiency. - sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker, - speakers_per_batch).to(self.loss_device) - mask_matrix = 1 - np.eye(speakers_per_batch, dtype=np.int) - for j in range(speakers_per_batch): - mask = np.where(mask_matrix[j])[0] - sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2) - sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1) - - ## Even more vectorized version (slower maybe because of transpose) - # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker - # ).to(self.loss_device) - # eye = np.eye(speakers_per_batch, dtype=np.int) - # mask = np.where(1 - eye) - # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2) - # mask = np.where(eye) - # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2) - # sim_matrix2 = sim_matrix2.transpose(1, 2) - - sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias - return sim_matrix - - def loss(self, embeds): - """ - Computes the softmax loss according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the loss and the EER for this batch of embeddings. - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Loss - sim_matrix = self.similarity_matrix(embeds) - sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker, - speakers_per_batch)) - ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker) - target = torch.from_numpy(ground_truth).long().to(self.loss_device) - loss = self.loss_fn(sim_matrix, target) - - # EER (not backpropagated) - with torch.no_grad(): - inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=np.int)[0] - labels = np.array([inv_argmax(i) for i in ground_truth]) - preds = sim_matrix.detach().cpu().numpy() - - # Snippet from https://yangcha.github.io/EER-ROC/ - fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten()) - eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.) 
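The GE2E loss above reshapes the similarity matrix into one row of speaker logits per utterance and uses the speaker index as the class label. Below is a simplified sketch of that construction using inclusive centroids only (the per-utterance exclusive-centroid diagonal is omitted for brevity); note that `dtype=np.int` in the code above relies on an alias removed in NumPy 1.24, so the builtin `int` should be used there instead:

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

speakers, utterances, dim = 4, 5, 16
embeds = F.normalize(torch.randn(speakers, utterances, dim), dim=2)

# One inclusive centroid per speaker, L2-normalized like the embeddings.
centroids = F.normalize(embeds.mean(dim=1), dim=1)    # (S, D)
sim = torch.einsum('sud,td->sut', embeds, centroids)  # (S, U, S)

# Each utterance's correct class is its own speaker index.
target = torch.from_numpy(np.repeat(np.arange(speakers), utterances)).long()
loss = nn.CrossEntropyLoss()(sim.reshape(-1, speakers), target)
print(loss.item())
```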
- - return loss, eer diff --git a/spaces/MrZak/LearnUp-4.1/README.md b/spaces/MrZak/LearnUp-4.1/README.md deleted file mode 100644 index ef315dda009ea71dd6e5630fadc7729d14b5ad2b..0000000000000000000000000000000000000000 --- a/spaces/MrZak/LearnUp-4.1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: LearnUp 4.1 -emoji: 🚀 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MrlolDev/Explore_llamav2_with_TGI/app.py b/spaces/MrlolDev/Explore_llamav2_with_TGI/app.py deleted file mode 100644 index f837be1401301d80ce2fde42441f97006a36a658..0000000000000000000000000000000000000000 --- a/spaces/MrlolDev/Explore_llamav2_with_TGI/app.py +++ /dev/null @@ -1,41 +0,0 @@ -import json -import gradio as gr -import os -import requests - -hf_token = os.getenv('HF_TOKEN') -api_url = os.getenv('API_URL') -headers = { - 'Content-Type': 'application/json', -} - -system_message = "\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information." -title = "Llama2 70B Chatbot" -description = """This Space demonstrates model [Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) by Meta, running on Inference Endpoints using text-generation-inference. To have your own dedicated endpoint, you can [deploy it on Inference Endpoints](https://ui.endpoints.huggingface.co/). 
""" - - -def predict(message, chatbot): - - print(f"Logging: message is - {message}") - print(f"Logging: chatbot is - {chatbot}") - - input_prompt = f"[INST]<>\n{system_message}\n<>\n\n " - for interaction in chatbot: - input_prompt = input_prompt + interaction[0] + " [/INST] " + interaction[1] + " [INST] " - - input_prompt = input_prompt + message + " [/INST] " - - print(f"Logging: input_prompt is - {input_prompt}") - data = { - "inputs": input_prompt, - "parameters": {"max_new_tokens":256} - } - - response = requests.post(api_url, headers=headers, data=json.dumps(data), auth=('hf', hf_token)) - - print(f'Logging: API response is - {response.text}') - response_json_object = json.loads(response.text) - return response_json_object[0]['generated_text'] - - -gr.ChatInterface(predict, title=title, description=description).queue().launch(debug= True) diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/resnet_utils.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/resnet_utils.py deleted file mode 100644 index e1df171ab75700352333f6af5d59f751819b57f6..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/resnet_utils.py +++ /dev/null @@ -1,27 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -class myResnet(nn.Module): - def __init__(self, resnet): - super(myResnet, self).__init__() - self.resnet = resnet - - def forward(self, img, att_size=14): - x = img.unsqueeze(0) - - x = self.resnet.conv1(x) - x = self.resnet.bn1(x) - x = self.resnet.relu(x) - x = self.resnet.maxpool(x) - - x = self.resnet.layer1(x) - x = self.resnet.layer2(x) - x = self.resnet.layer3(x) - x = self.resnet.layer4(x) - - fc = x.mean(3).mean(2).squeeze() - att = F.adaptive_avg_pool2d(x,[att_size,att_size]).squeeze().permute(1, 2, 0) - - return fc, att - diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_performance.py b/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_performance.py deleted file mode 100644 index cc5840f95e1ea26697951d1b78fe847526d5859b..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_performance.py +++ /dev/null @@ -1,289 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Register flags for optimizing performance.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import multiprocessing - -from absl import flags # pylint: disable=g-bad-import-order -import tensorflow as tf # pylint: disable=g-bad-import-order - -from official.utils.flags._conventions import help_wrap - - -# Map string to TensorFlow dtype -DTYPE_MAP = { - "fp16": tf.float16, - "bf16": tf.bfloat16, - "fp32": tf.float32, -} - - -def get_tf_dtype(flags_obj): - if getattr(flags_obj, "fp16_implementation", None) == "graph_rewrite": - # If the graph_rewrite is used, we build the graph with fp32, and let the - # graph rewrite change ops to fp16. - return tf.float32 - return DTYPE_MAP[flags_obj.dtype] - - -def get_loss_scale(flags_obj, default_for_fp16): - dtype = get_tf_dtype(flags_obj) - if flags_obj.loss_scale == "dynamic": - return flags_obj.loss_scale - elif flags_obj.loss_scale is not None: - return float(flags_obj.loss_scale) - elif dtype == tf.float32 or dtype == tf.bfloat16: - return 1 # No loss scaling is needed for fp32 - else: - assert dtype == tf.float16 - return default_for_fp16 - - -def define_performance(num_parallel_calls=False, inter_op=False, intra_op=False, - synthetic_data=False, max_train_steps=False, dtype=False, - all_reduce_alg=False, num_packs=False, - tf_gpu_thread_mode=False, - datasets_num_private_threads=False, - datasets_num_parallel_batches=False, - dynamic_loss_scale=False, fp16_implementation=False, - loss_scale=False, - tf_data_experimental_slack=False, enable_xla=False, - training_dataset_cache=False): - """Register flags for specifying performance tuning arguments. - - Args: - num_parallel_calls: Create a flag to specify parallelism of data loading. - inter_op: Create a flag to allow specification of inter op threads. - intra_op: Create a flag to allow specification of intra op threads. - synthetic_data: Create a flag to allow the use of synthetic data. - max_train_steps: Create a flags to allow specification of maximum number - of training steps - dtype: Create flags for specifying dtype. - all_reduce_alg: If set forces a specific algorithm for multi-gpu. - num_packs: If set provides number of packs for MirroredStrategy's cross - device ops. - tf_gpu_thread_mode: gpu_private triggers us of private thread pool. - datasets_num_private_threads: Number of private threads for datasets. - datasets_num_parallel_batches: Determines how many batches to process in - parallel when using map and batch from tf.data. - dynamic_loss_scale: Allow the "loss_scale" flag to take on the value - "dynamic". Only valid if `dtype` is True. - fp16_implementation: Create fp16_implementation flag. - loss_scale: Controls the loss scaling, normally for mixed-precision - training. Can only be turned on if dtype is also True. - tf_data_experimental_slack: Determines whether to enable tf.data's - `experimental_slack` option. - enable_xla: Determines if XLA (auto clustering) is turned on. - training_dataset_cache: Whether to cache the training dataset on workers. - Typically used to improve training performance when training data is in - remote storage and can fit into worker memory. - - Returns: - A list of flags for core.py to marks as key flags. 
- """ - - key_flags = [] - if num_parallel_calls: - flags.DEFINE_integer( - name="num_parallel_calls", short_name="npc", - default=multiprocessing.cpu_count(), - help=help_wrap("The number of records that are processed in parallel " - "during input processing. This can be optimized per " - "data set but for generally homogeneous data sets, " - "should be approximately the number of available CPU " - "cores. (default behavior)")) - - if inter_op: - flags.DEFINE_integer( - name="inter_op_parallelism_threads", short_name="inter", default=0, - help=help_wrap("Number of inter_op_parallelism_threads to use for CPU. " - "See TensorFlow config.proto for details.") - ) - - if intra_op: - flags.DEFINE_integer( - name="intra_op_parallelism_threads", short_name="intra", default=0, - help=help_wrap("Number of intra_op_parallelism_threads to use for CPU. " - "See TensorFlow config.proto for details.")) - - if synthetic_data: - flags.DEFINE_bool( - name="use_synthetic_data", short_name="synth", default=False, - help=help_wrap( - "If set, use fake data (zeroes) instead of a real dataset. " - "This mode is useful for performance debugging, as it removes " - "input processing steps, but will not learn anything.")) - - if max_train_steps: - flags.DEFINE_integer( - name="max_train_steps", short_name="mts", default=None, help=help_wrap( - "The model will stop training if the global_step reaches this " - "value. If not set, training will run until the specified number " - "of epochs have run as usual. It is generally recommended to set " - "--train_epochs=1 when using this flag." - )) - - if dtype: - flags.DEFINE_enum( - name="dtype", short_name="dt", default="fp32", - enum_values=DTYPE_MAP.keys(), - help=help_wrap("The TensorFlow datatype used for calculations. " - "Variables may be cast to a higher precision on a " - "case-by-case basis for numerical stability.")) - - loss_scale_help_text = ( - "The amount to scale the loss by when the model is run. {}. Before " - "gradients are computed, the loss is multiplied by the loss scale, " - "making all gradients loss_scale times larger. To adjust for this, " - "gradients are divided by the loss scale before being applied to " - "variables. This is mathematically equivalent to training without " - "a loss scale, but the loss scale helps avoid some intermediate " - "gradients from underflowing to zero. If not provided the default " - "for fp16 is 128 and 1 for all other dtypes.{}" - ) - if dynamic_loss_scale: - loss_scale_help_text = loss_scale_help_text.format( - "This can be an int/float or the string 'dynamic'", - " The string 'dynamic' can be used to dynamically determine the " - "optimal loss scale during training, but currently this " - "significantly slows down performance") - loss_scale_validation_msg = ("loss_scale should be a positive int/float " - "or the string 'dynamic'.") - else: - loss_scale_help_text = loss_scale_help_text.format( - "This must be an int/float", "") - loss_scale_validation_msg = "loss_scale should be a positive int/float." 
- if loss_scale: - flags.DEFINE_string( - name="loss_scale", short_name="ls", default=None, - help=help_wrap(loss_scale_help_text)) - - @flags.validator(flag_name="loss_scale", - message=loss_scale_validation_msg) - def _check_loss_scale(loss_scale): # pylint: disable=unused-variable - """Validator to check the loss scale flag is valid.""" - if loss_scale is None: - return True # null case is handled in get_loss_scale() - - if loss_scale == "dynamic" and dynamic_loss_scale: - return True - - try: - loss_scale = float(loss_scale) - except ValueError: - return False - - return loss_scale > 0 - - if fp16_implementation: - flags.DEFINE_enum( - name="fp16_implementation", default="keras", - enum_values=("keras", "graph_rewrite"), - help=help_wrap( - "When --dtype=fp16, how fp16 should be implemented. This has no " - "impact on correctness. 'keras' uses the " - "tf.keras.mixed_precision API. 'graph_rewrite' uses the " - "tf.train.experimental.enable_mixed_precision_graph_rewrite " - "API.")) - - @flags.multi_flags_validator(["fp16_implementation", "dtype", - "loss_scale"]) - def _check_fp16_implementation(flags_dict): - """Validator to check fp16_implementation flag is valid.""" - if (flags_dict["fp16_implementation"] == "graph_rewrite" and - flags_dict["dtype"] != "fp16"): - raise flags.ValidationError("--fp16_implementation should not be " - "specified unless --dtype=fp16") - return True - - if all_reduce_alg: - flags.DEFINE_string( - name="all_reduce_alg", short_name="ara", default=None, - help=help_wrap("Defines the algorithm to use for performing all-reduce. " - "When specified with MirroredStrategy for single " - "worker, this controls " - "tf.contrib.distribute.AllReduceCrossTowerOps. When " - "specified with MultiWorkerMirroredStrategy, this " - "controls " - "tf.distribute.experimental.CollectiveCommunication; " - "valid options are `ring` and `nccl`.")) - - if num_packs: - flags.DEFINE_integer( - name="num_packs", default=1, - help=help_wrap("Sets `num_packs` in the cross device ops used in " - "MirroredStrategy. For details, see " - "tf.distribute.NcclAllReduce.")) - - if tf_gpu_thread_mode: - flags.DEFINE_string( - name="tf_gpu_thread_mode", short_name="gt_mode", default=None, - help=help_wrap( - "Whether and how the GPU device uses its own threadpool.") - ) - - flags.DEFINE_integer( - name="per_gpu_thread_count", short_name="pgtc", default=0, - help=help_wrap( - "The number of threads to use for GPU. Only valid when " - "tf_gpu_thread_mode is not global.") - ) - - if datasets_num_private_threads: - flags.DEFINE_integer( - name="datasets_num_private_threads", - default=None, - help=help_wrap( - "Number of threads for a private threadpool created for all " - "datasets computation.") - ) - - if datasets_num_parallel_batches: - flags.DEFINE_integer( - name="datasets_num_parallel_batches", - default=None, - help=help_wrap( - "Determines how many batches to process in parallel when using " - "map and batch from tf.data.") - ) - - if training_dataset_cache: - flags.DEFINE_boolean( - name="training_dataset_cache", - default=False, - help=help_wrap( - "Determines whether to cache the training dataset on workers. 
" - "Typically used to improve training performance when training " - "data is in remote storage and can fit into worker memory.") - ) - - if tf_data_experimental_slack: - flags.DEFINE_boolean( - name="tf_data_experimental_slack", - default=False, - help=help_wrap( - "Whether to enable tf.data's `experimental_slack` option.") - ) - - if enable_xla: - flags.DEFINE_boolean( - name="enable_xla", default=False, - help="Whether to enable XLA auto jit compilation") - - return key_flags diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/misc/__init__.py b/spaces/NCTCMumbai/NCTC/models/official/utils/misc/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/mel_features.py b/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/mel_features.py deleted file mode 100644 index ac58fb5427f772fcced9cbd3cec3373ffbe5908c..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/mel_features.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Defines routines to compute mel spectrogram features from audio waveform.""" - -import numpy as np - - -def frame(data, window_length, hop_length): - """Convert array into a sequence of successive possibly overlapping frames. - - An n-dimensional array of shape (num_samples, ...) is converted into an - (n+1)-D array of shape (num_frames, window_length, ...), where each frame - starts hop_length points after the preceding one. - - This is accomplished using stride_tricks, so the original data is not - copied. However, there is no zero-padding, so any incomplete frames at the - end are not included. - - Args: - data: np.array of dimension N >= 1. - window_length: Number of samples in each frame. - hop_length: Advance (in samples) between each window. - - Returns: - (N+1)-D np.array with as many rows as there are complete frames that can be - extracted. - """ - num_samples = data.shape[0] - num_frames = 1 + int(np.floor((num_samples - window_length) / hop_length)) - shape = (num_frames, window_length) + data.shape[1:] - strides = (data.strides[0] * hop_length,) + data.strides - return np.lib.stride_tricks.as_strided(data, shape=shape, strides=strides) - - -def periodic_hann(window_length): - """Calculate a "periodic" Hann window. - - The classic Hann window is defined as a raised cosine that starts and - ends on zero, and where every value appears twice, except the middle - point for an odd-length window. Matlab calls this a "symmetric" window - and np.hanning() returns it. However, for Fourier analysis, this - actually represents just over one cycle of a period N-1 cosine, and - thus is not compactly expressed on a length-N Fourier basis. 
Instead, - it's better to use a raised cosine that ends just before the final - zero value - i.e. a complete cycle of a period-N cosine. Matlab - calls this a "periodic" window. This routine calculates it. - - Args: - window_length: The number of points in the returned window. - - Returns: - A 1D np.array containing the periodic hann window. - """ - return 0.5 - (0.5 * np.cos(2 * np.pi / window_length * - np.arange(window_length))) - - -def stft_magnitude(signal, fft_length, - hop_length=None, - window_length=None): - """Calculate the short-time Fourier transform magnitude. - - Args: - signal: 1D np.array of the input time-domain signal. - fft_length: Size of the FFT to apply. - hop_length: Advance (in samples) between each frame passed to FFT. - window_length: Length of each block of samples to pass to FFT. - - Returns: - 2D np.array where each row contains the magnitudes of the fft_length/2+1 - unique values of the FFT for the corresponding frame of input samples. - """ - frames = frame(signal, window_length, hop_length) - # Apply frame window to each frame. We use a periodic Hann (cosine of period - # window_length) instead of the symmetric Hann of np.hanning (period - # window_length-1). - window = periodic_hann(window_length) - windowed_frames = frames * window - return np.abs(np.fft.rfft(windowed_frames, int(fft_length))) - - -# Mel spectrum constants and functions. -_MEL_BREAK_FREQUENCY_HERTZ = 700.0 -_MEL_HIGH_FREQUENCY_Q = 1127.0 - - -def hertz_to_mel(frequencies_hertz): - """Convert frequencies to mel scale using HTK formula. - - Args: - frequencies_hertz: Scalar or np.array of frequencies in hertz. - - Returns: - Object of same size as frequencies_hertz containing corresponding values - on the mel scale. - """ - return _MEL_HIGH_FREQUENCY_Q * np.log( - 1.0 + (frequencies_hertz / _MEL_BREAK_FREQUENCY_HERTZ)) - - -def spectrogram_to_mel_matrix(num_mel_bins=20, - num_spectrogram_bins=129, - audio_sample_rate=8000, - lower_edge_hertz=125.0, - upper_edge_hertz=3800.0): - """Return a matrix that can post-multiply spectrogram rows to make mel. - - Returns a np.array matrix A that can be used to post-multiply a matrix S of - spectrogram values (STFT magnitudes) arranged as frames x bins to generate a - "mel spectrogram" M of frames x num_mel_bins. M = S A. - - The classic HTK algorithm exploits the complementarity of adjacent mel bands - to multiply each FFT bin by only one mel weight, then add it, with positive - and negative signs, to the two adjacent mel bands to which that bin - contributes. Here, by expressing this operation as a matrix multiply, we go - from num_fft multiplies per frame (plus around 2*num_fft adds) to around - num_fft^2 multiplies and adds. However, because these are all presumably - accomplished in a single call to np.dot(), it's not clear which approach is - faster in Python. The matrix multiplication has the attraction of being more - general and flexible, and much easier to read. - - Args: - num_mel_bins: How many bands in the resulting mel spectrum. This is - the number of columns in the output matrix. - num_spectrogram_bins: How many bins there are in the source spectrogram - data, which is understood to be fft_size/2 + 1, i.e. the spectrogram - only contains the nonredundant FFT bins. - audio_sample_rate: Samples per second of the audio at the input to the - spectrogram. We need this to figure out the actual frequencies for - each spectrogram bin, which dictates how they are mapped into mel. 
- lower_edge_hertz: Lower bound on the frequencies to be included in the mel - spectrum. This corresponds to the lower edge of the lowest triangular - band. - upper_edge_hertz: The desired top edge of the highest frequency band. - - Returns: - An np.array with shape (num_spectrogram_bins, num_mel_bins). - - Raises: - ValueError: if frequency edges are incorrectly ordered or out of range. - """ - nyquist_hertz = audio_sample_rate / 2. - if lower_edge_hertz < 0.0: - raise ValueError("lower_edge_hertz %.1f must be >= 0" % lower_edge_hertz) - if lower_edge_hertz >= upper_edge_hertz: - raise ValueError("lower_edge_hertz %.1f >= upper_edge_hertz %.1f" % - (lower_edge_hertz, upper_edge_hertz)) - if upper_edge_hertz > nyquist_hertz: - raise ValueError("upper_edge_hertz %.1f is greater than Nyquist %.1f" % - (upper_edge_hertz, nyquist_hertz)) - spectrogram_bins_hertz = np.linspace(0.0, nyquist_hertz, num_spectrogram_bins) - spectrogram_bins_mel = hertz_to_mel(spectrogram_bins_hertz) - # The i'th mel band (starting from i=1) has center frequency - # band_edges_mel[i], lower edge band_edges_mel[i-1], and higher edge - # band_edges_mel[i+1]. Thus, we need num_mel_bins + 2 values in - # the band_edges_mel arrays. - band_edges_mel = np.linspace(hertz_to_mel(lower_edge_hertz), - hertz_to_mel(upper_edge_hertz), num_mel_bins + 2) - # Matrix to post-multiply feature arrays whose rows are num_spectrogram_bins - # of spectrogram values. - mel_weights_matrix = np.empty((num_spectrogram_bins, num_mel_bins)) - for i in range(num_mel_bins): - lower_edge_mel, center_mel, upper_edge_mel = band_edges_mel[i:i + 3] - # Calculate lower and upper slopes for every spectrogram bin. - # Line segments are linear in the *mel* domain, not hertz. - lower_slope = ((spectrogram_bins_mel - lower_edge_mel) / - (center_mel - lower_edge_mel)) - upper_slope = ((upper_edge_mel - spectrogram_bins_mel) / - (upper_edge_mel - center_mel)) - # .. then intersect them with each other and zero. - mel_weights_matrix[:, i] = np.maximum(0.0, np.minimum(lower_slope, - upper_slope)) - # HTK excludes the spectrogram DC bin; make sure it always gets a zero - # coefficient. - mel_weights_matrix[0, :] = 0.0 - return mel_weights_matrix - - -def log_mel_spectrogram(data, - audio_sample_rate=8000, - log_offset=0.0, - window_length_secs=0.025, - hop_length_secs=0.010, - **kwargs): - """Convert waveform to a log magnitude mel-frequency spectrogram. - - Args: - data: 1D np.array of waveform data. - audio_sample_rate: The sampling rate of data. - log_offset: Add this to values when taking log to avoid -Infs. - window_length_secs: Duration of each window to analyze. - hop_length_secs: Advance between successive analysis windows. - **kwargs: Additional arguments to pass to spectrogram_to_mel_matrix. - - Returns: - 2D np.array of (num_frames, num_mel_bins) consisting of log mel filterbank - magnitudes for successive frames. 
- """ - window_length_samples = int(round(audio_sample_rate * window_length_secs)) - hop_length_samples = int(round(audio_sample_rate * hop_length_secs)) - fft_length = 2 ** int(np.ceil(np.log(window_length_samples) / np.log(2.0))) - spectrogram = stft_magnitude( - data, - fft_length=fft_length, - hop_length=hop_length_samples, - window_length=window_length_samples) - mel_spectrogram = np.dot(spectrogram, spectrogram_to_mel_matrix( - num_spectrogram_bins=spectrogram.shape[1], - audio_sample_rate=audio_sample_rate, **kwargs)) - return np.log(mel_spectrogram + log_offset) diff --git a/spaces/Nunchakuka/FrenchAnonymizer/README.md b/spaces/Nunchakuka/FrenchAnonymizer/README.md deleted file mode 100644 index 42de351be9279a3acd70d30fcbefdfbde8757dec..0000000000000000000000000000000000000000 --- a/spaces/Nunchakuka/FrenchAnonymizer/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: French Anonymizer -emoji: ⚡ -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: OlaWod/FreeVC ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/OAOA/DifFace/basicsr/archs/tof_arch.py b/spaces/OAOA/DifFace/basicsr/archs/tof_arch.py deleted file mode 100644 index a90a64d89386e19f92c987bbe2133472991d764a..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/archs/tof_arch.py +++ /dev/null @@ -1,172 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F - -from basicsr.utils.registry import ARCH_REGISTRY -from .arch_util import flow_warp - - -class BasicModule(nn.Module): - """Basic module of SPyNet. - - Note that unlike the architecture in spynet_arch.py, the basic module - here contains batch normalization. - """ - - def __init__(self): - super(BasicModule, self).__init__() - self.basic_module = nn.Sequential( - nn.Conv2d(in_channels=8, out_channels=32, kernel_size=7, stride=1, padding=3, bias=False), - nn.BatchNorm2d(32), nn.ReLU(inplace=True), - nn.Conv2d(in_channels=32, out_channels=64, kernel_size=7, stride=1, padding=3, bias=False), - nn.BatchNorm2d(64), nn.ReLU(inplace=True), - nn.Conv2d(in_channels=64, out_channels=32, kernel_size=7, stride=1, padding=3, bias=False), - nn.BatchNorm2d(32), nn.ReLU(inplace=True), - nn.Conv2d(in_channels=32, out_channels=16, kernel_size=7, stride=1, padding=3, bias=False), - nn.BatchNorm2d(16), nn.ReLU(inplace=True), - nn.Conv2d(in_channels=16, out_channels=2, kernel_size=7, stride=1, padding=3)) - - def forward(self, tensor_input): - """ - Args: - tensor_input (Tensor): Input tensor with shape (b, 8, h, w). - 8 channels contain: - [reference image (3), neighbor image (3), initial flow (2)]. - - Returns: - Tensor: Estimated flow with shape (b, 2, h, w) - """ - return self.basic_module(tensor_input) - - -class SPyNetTOF(nn.Module): - """SPyNet architecture for TOF. - - Note that this implementation is specifically for TOFlow. Please use :file:`spynet_arch.py` for general use. - They differ in the following aspects: - - 1. The basic modules here contain BatchNorm. - 2. Normalization and denormalization are not done here, as they are done in TOFlow. - - ``Paper: Optical Flow Estimation using a Spatial Pyramid Network`` - - Reference: https://github.com/Coldog2333/pytoflow - - Args: - load_path (str): Path for pretrained SPyNet. Default: None. 
- """ - - def __init__(self, load_path=None): - super(SPyNetTOF, self).__init__() - - self.basic_module = nn.ModuleList([BasicModule() for _ in range(4)]) - if load_path: - self.load_state_dict(torch.load(load_path, map_location=lambda storage, loc: storage)['params']) - - def forward(self, ref, supp): - """ - Args: - ref (Tensor): Reference image with shape of (b, 3, h, w). - supp: The supporting image to be warped: (b, 3, h, w). - - Returns: - Tensor: Estimated optical flow: (b, 2, h, w). - """ - num_batches, _, h, w = ref.size() - ref = [ref] - supp = [supp] - - # generate downsampled frames - for _ in range(3): - ref.insert(0, F.avg_pool2d(input=ref[0], kernel_size=2, stride=2, count_include_pad=False)) - supp.insert(0, F.avg_pool2d(input=supp[0], kernel_size=2, stride=2, count_include_pad=False)) - - # flow computation - flow = ref[0].new_zeros(num_batches, 2, h // 16, w // 16) - for i in range(4): - flow_up = F.interpolate(input=flow, scale_factor=2, mode='bilinear', align_corners=True) * 2.0 - flow = flow_up + self.basic_module[i]( - torch.cat([ref[i], flow_warp(supp[i], flow_up.permute(0, 2, 3, 1)), flow_up], 1)) - return flow - - -@ARCH_REGISTRY.register() -class TOFlow(nn.Module): - """PyTorch implementation of TOFlow. - - In TOFlow, the LR frames are pre-upsampled and have the same size with the GT frames. - - ``Paper: Video Enhancement with Task-Oriented Flow`` - - Reference: https://github.com/anchen1011/toflow - - Reference: https://github.com/Coldog2333/pytoflow - - Args: - adapt_official_weights (bool): Whether to adapt the weights translated - from the official implementation. Set to false if you want to - train from scratch. Default: False - """ - - def __init__(self, adapt_official_weights=False): - super(TOFlow, self).__init__() - self.adapt_official_weights = adapt_official_weights - self.ref_idx = 0 if adapt_official_weights else 3 - - self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)) - self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)) - - # flow estimation module - self.spynet = SPyNetTOF() - - # reconstruction module - self.conv_1 = nn.Conv2d(3 * 7, 64, 9, 1, 4) - self.conv_2 = nn.Conv2d(64, 64, 9, 1, 4) - self.conv_3 = nn.Conv2d(64, 64, 1) - self.conv_4 = nn.Conv2d(64, 3, 1) - - # activation function - self.relu = nn.ReLU(inplace=True) - - def normalize(self, img): - return (img - self.mean) / self.std - - def denormalize(self, img): - return img * self.std + self.mean - - def forward(self, lrs): - """ - Args: - lrs: Input lr frames: (b, 7, 3, h, w). - - Returns: - Tensor: SR frame: (b, 3, h, w). 
- """ - # In the official implementation, the 0-th frame is the reference frame - if self.adapt_official_weights: - lrs = lrs[:, [3, 0, 1, 2, 4, 5, 6], :, :, :] - - num_batches, num_lrs, _, h, w = lrs.size() - - lrs = self.normalize(lrs.view(-1, 3, h, w)) - lrs = lrs.view(num_batches, num_lrs, 3, h, w) - - lr_ref = lrs[:, self.ref_idx, :, :, :] - lr_aligned = [] - for i in range(7): # 7 frames - if i == self.ref_idx: - lr_aligned.append(lr_ref) - else: - lr_supp = lrs[:, i, :, :, :] - flow = self.spynet(lr_ref, lr_supp) - lr_aligned.append(flow_warp(lr_supp, flow.permute(0, 2, 3, 1))) - - # reconstruction - hr = torch.stack(lr_aligned, dim=1) - hr = hr.view(num_batches, -1, h, w) - hr = self.relu(self.conv_1(hr)) - hr = self.relu(self.conv_2(hr)) - hr = self.relu(self.conv_3(hr)) - hr = self.conv_4(hr) + lr_ref - - return self.denormalize(hr) diff --git a/spaces/OAOA/DifFace/basicsr/utils/diffjpeg.py b/spaces/OAOA/DifFace/basicsr/utils/diffjpeg.py deleted file mode 100644 index 65f96b44f9e7f3f8a589668f0003adf328cc5742..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/utils/diffjpeg.py +++ /dev/null @@ -1,515 +0,0 @@ -""" -Modified from https://github.com/mlomnitz/DiffJPEG - -For images not divisible by 8 -https://dsp.stackexchange.com/questions/35339/jpeg-dct-padding/35343#35343 -""" -import itertools -import numpy as np -import torch -import torch.nn as nn -from torch.nn import functional as F - -# ------------------------ utils ------------------------# -y_table = np.array( - [[16, 11, 10, 16, 24, 40, 51, 61], [12, 12, 14, 19, 26, 58, 60, 55], [14, 13, 16, 24, 40, 57, 69, 56], - [14, 17, 22, 29, 51, 87, 80, 62], [18, 22, 37, 56, 68, 109, 103, 77], [24, 35, 55, 64, 81, 104, 113, 92], - [49, 64, 78, 87, 103, 121, 120, 101], [72, 92, 95, 98, 112, 100, 103, 99]], - dtype=np.float32).T -y_table = nn.Parameter(torch.from_numpy(y_table)) -c_table = np.empty((8, 8), dtype=np.float32) -c_table.fill(99) -c_table[:4, :4] = np.array([[17, 18, 24, 47], [18, 21, 26, 66], [24, 26, 56, 99], [47, 66, 99, 99]]).T -c_table = nn.Parameter(torch.from_numpy(c_table)) - - -def diff_round(x): - """ Differentiable rounding function - """ - return torch.round(x) + (x - torch.round(x))**3 - - -def quality_to_factor(quality): - """ Calculate factor corresponding to quality - - Args: - quality(float): Quality for jpeg compression. - - Returns: - float: Compression factor. - """ - if quality < 50: - quality = 5000. / quality - else: - quality = 200. - quality * 2 - return quality / 100. 
- - -# ------------------------ compression ------------------------# -class RGB2YCbCrJpeg(nn.Module): - """ Converts RGB image to YCbCr - """ - - def __init__(self): - super(RGB2YCbCrJpeg, self).__init__() - matrix = np.array([[0.299, 0.587, 0.114], [-0.168736, -0.331264, 0.5], [0.5, -0.418688, -0.081312]], - dtype=np.float32).T - self.shift = nn.Parameter(torch.tensor([0., 128., 128.])) - self.matrix = nn.Parameter(torch.from_numpy(matrix)) - - def forward(self, image): - """ - Args: - image(Tensor): batch x 3 x height x width - - Returns: - Tensor: batch x height x width x 3 - """ - image = image.permute(0, 2, 3, 1) - result = torch.tensordot(image, self.matrix, dims=1) + self.shift - return result.view(image.shape) - - -class ChromaSubsampling(nn.Module): - """ Chroma subsampling on CbCr channels - """ - - def __init__(self): - super(ChromaSubsampling, self).__init__() - - def forward(self, image): - """ - Args: - image(tensor): batch x height x width x 3 - - Returns: - y(tensor): batch x height x width - cb(tensor): batch x height/2 x width/2 - cr(tensor): batch x height/2 x width/2 - """ - image_2 = image.permute(0, 3, 1, 2).clone() - cb = F.avg_pool2d(image_2[:, 1, :, :].unsqueeze(1), kernel_size=2, stride=(2, 2), count_include_pad=False) - cr = F.avg_pool2d(image_2[:, 2, :, :].unsqueeze(1), kernel_size=2, stride=(2, 2), count_include_pad=False) - cb = cb.permute(0, 2, 3, 1) - cr = cr.permute(0, 2, 3, 1) - return image[:, :, :, 0], cb.squeeze(3), cr.squeeze(3) - - -class BlockSplitting(nn.Module): - """ Splitting image into patches - """ - - def __init__(self): - super(BlockSplitting, self).__init__() - self.k = 8 - - def forward(self, image): - """ - Args: - image(tensor): batch x height x width - - Returns: - Tensor: batch x h*w/64 x h x w - """ - height, _ = image.shape[1:3] - batch_size = image.shape[0] - image_reshaped = image.view(batch_size, height // self.k, self.k, -1, self.k) - image_transposed = image_reshaped.permute(0, 1, 3, 2, 4) - return image_transposed.contiguous().view(batch_size, -1, self.k, self.k) - - -class DCT8x8(nn.Module): - """ Discrete Cosine Transformation - """ - - def __init__(self): - super(DCT8x8, self).__init__() - tensor = np.zeros((8, 8, 8, 8), dtype=np.float32) - for x, y, u, v in itertools.product(range(8), repeat=4): - tensor[x, y, u, v] = np.cos((2 * x + 1) * u * np.pi / 16) * np.cos((2 * y + 1) * v * np.pi / 16) - alpha = np.array([1. 
/ np.sqrt(2)] + [1] * 7) - self.tensor = nn.Parameter(torch.from_numpy(tensor).float()) - self.scale = nn.Parameter(torch.from_numpy(np.outer(alpha, alpha) * 0.25).float()) - - def forward(self, image): - """ - Args: - image(tensor): batch x height x width - - Returns: - Tensor: batch x height x width - """ - image = image - 128 - result = self.scale * torch.tensordot(image, self.tensor, dims=2) - result.view(image.shape) - return result - - -class YQuantize(nn.Module): - """ JPEG Quantization for Y channel - - Args: - rounding(function): rounding function to use - """ - - def __init__(self, rounding): - super(YQuantize, self).__init__() - self.rounding = rounding - self.y_table = y_table - - def forward(self, image, factor=1): - """ - Args: - image(tensor): batch x height x width - - Returns: - Tensor: batch x height x width - """ - if isinstance(factor, (int, float)): - image = image.float() / (self.y_table * factor) - else: - b = factor.size(0) - table = self.y_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1) - image = image.float() / table - image = self.rounding(image) - return image - - -class CQuantize(nn.Module): - """ JPEG Quantization for CbCr channels - - Args: - rounding(function): rounding function to use - """ - - def __init__(self, rounding): - super(CQuantize, self).__init__() - self.rounding = rounding - self.c_table = c_table - - def forward(self, image, factor=1): - """ - Args: - image(tensor): batch x height x width - - Returns: - Tensor: batch x height x width - """ - if isinstance(factor, (int, float)): - image = image.float() / (self.c_table * factor) - else: - b = factor.size(0) - table = self.c_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1) - image = image.float() / table - image = self.rounding(image) - return image - - -class CompressJpeg(nn.Module): - """Full JPEG compression algorithm - - Args: - rounding(function): rounding function to use - """ - - def __init__(self, rounding=torch.round): - super(CompressJpeg, self).__init__() - self.l1 = nn.Sequential(RGB2YCbCrJpeg(), ChromaSubsampling()) - self.l2 = nn.Sequential(BlockSplitting(), DCT8x8()) - self.c_quantize = CQuantize(rounding=rounding) - self.y_quantize = YQuantize(rounding=rounding) - - def forward(self, image, factor=1): - """ - Args: - image(tensor): batch x 3 x height x width - - Returns: - dict(tensor): Compressed tensor with batch x h*w/64 x 8 x 8. 
- """ - y, cb, cr = self.l1(image * 255) - components = {'y': y, 'cb': cb, 'cr': cr} - for k in components.keys(): - comp = self.l2(components[k]) - if k in ('cb', 'cr'): - comp = self.c_quantize(comp, factor=factor) - else: - comp = self.y_quantize(comp, factor=factor) - - components[k] = comp - - return components['y'], components['cb'], components['cr'] - - -# ------------------------ decompression ------------------------# - - -class YDequantize(nn.Module): - """Dequantize Y channel - """ - - def __init__(self): - super(YDequantize, self).__init__() - self.y_table = y_table - - def forward(self, image, factor=1): - """ - Args: - image(tensor): batch x height x width - - Returns: - Tensor: batch x height x width - """ - if isinstance(factor, (int, float)): - out = image * (self.y_table * factor) - else: - b = factor.size(0) - table = self.y_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1) - out = image * table - return out - - -class CDequantize(nn.Module): - """Dequantize CbCr channel - """ - - def __init__(self): - super(CDequantize, self).__init__() - self.c_table = c_table - - def forward(self, image, factor=1): - """ - Args: - image(tensor): batch x height x width - - Returns: - Tensor: batch x height x width - """ - if isinstance(factor, (int, float)): - out = image * (self.c_table * factor) - else: - b = factor.size(0) - table = self.c_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1) - out = image * table - return out - - -class iDCT8x8(nn.Module): - """Inverse discrete Cosine Transformation - """ - - def __init__(self): - super(iDCT8x8, self).__init__() - alpha = np.array([1. / np.sqrt(2)] + [1] * 7) - self.alpha = nn.Parameter(torch.from_numpy(np.outer(alpha, alpha)).float()) - tensor = np.zeros((8, 8, 8, 8), dtype=np.float32) - for x, y, u, v in itertools.product(range(8), repeat=4): - tensor[x, y, u, v] = np.cos((2 * u + 1) * x * np.pi / 16) * np.cos((2 * v + 1) * y * np.pi / 16) - self.tensor = nn.Parameter(torch.from_numpy(tensor).float()) - - def forward(self, image): - """ - Args: - image(tensor): batch x height x width - - Returns: - Tensor: batch x height x width - """ - image = image * self.alpha - result = 0.25 * torch.tensordot(image, self.tensor, dims=2) + 128 - result.view(image.shape) - return result - - -class BlockMerging(nn.Module): - """Merge patches into image - """ - - def __init__(self): - super(BlockMerging, self).__init__() - - def forward(self, patches, height, width): - """ - Args: - patches(tensor) batch x height*width/64, height x width - height(int) - width(int) - - Returns: - Tensor: batch x height x width - """ - k = 8 - batch_size = patches.shape[0] - image_reshaped = patches.view(batch_size, height // k, width // k, k, k) - image_transposed = image_reshaped.permute(0, 1, 3, 2, 4) - return image_transposed.contiguous().view(batch_size, height, width) - - -class ChromaUpsampling(nn.Module): - """Upsample chroma layers - """ - - def __init__(self): - super(ChromaUpsampling, self).__init__() - - def forward(self, y, cb, cr): - """ - Args: - y(tensor): y channel image - cb(tensor): cb channel - cr(tensor): cr channel - - Returns: - Tensor: batch x height x width x 3 - """ - - def repeat(x, k=2): - height, width = x.shape[1:3] - x = x.unsqueeze(-1) - x = x.repeat(1, 1, k, k) - x = x.view(-1, height * k, width * k) - return x - - cb = repeat(cb) - cr = repeat(cr) - return torch.cat([y.unsqueeze(3), cb.unsqueeze(3), cr.unsqueeze(3)], dim=3) - - -class YCbCr2RGBJpeg(nn.Module): - """Converts YCbCr image to RGB JPEG - """ - - def __init__(self): - 
super(YCbCr2RGBJpeg, self).__init__() - - matrix = np.array([[1., 0., 1.402], [1, -0.344136, -0.714136], [1, 1.772, 0]], dtype=np.float32).T - self.shift = nn.Parameter(torch.tensor([0, -128., -128.])) - self.matrix = nn.Parameter(torch.from_numpy(matrix)) - - def forward(self, image): - """ - Args: - image(tensor): batch x height x width x 3 - - Returns: - Tensor: batch x 3 x height x width - """ - result = torch.tensordot(image + self.shift, self.matrix, dims=1) - return result.view(image.shape).permute(0, 3, 1, 2) - - -class DeCompressJpeg(nn.Module): - """Full JPEG decompression algorithm - - Args: - rounding(function): rounding function to use - """ - - def __init__(self, rounding=torch.round): - super(DeCompressJpeg, self).__init__() - self.c_dequantize = CDequantize() - self.y_dequantize = YDequantize() - self.idct = iDCT8x8() - self.merging = BlockMerging() - self.chroma = ChromaUpsampling() - self.colors = YCbCr2RGBJpeg() - - def forward(self, y, cb, cr, imgh, imgw, factor=1): - """ - Args: - compressed(dict(tensor)): batch x h*w/64 x 8 x 8 - imgh(int) - imgw(int) - factor(float) - - Returns: - Tensor: batch x 3 x height x width - """ - components = {'y': y, 'cb': cb, 'cr': cr} - for k in components.keys(): - if k in ('cb', 'cr'): - comp = self.c_dequantize(components[k], factor=factor) - height, width = int(imgh / 2), int(imgw / 2) - else: - comp = self.y_dequantize(components[k], factor=factor) - height, width = imgh, imgw - comp = self.idct(comp) - components[k] = self.merging(comp, height, width) - # - image = self.chroma(components['y'], components['cb'], components['cr']) - image = self.colors(image) - - image = torch.min(255 * torch.ones_like(image), torch.max(torch.zeros_like(image), image)) - return image / 255 - - -# ------------------------ main DiffJPEG ------------------------ # - - -class DiffJPEG(nn.Module): - """This JPEG algorithm result is slightly different from cv2. - DiffJPEG supports batch processing. - - Args: - differentiable(bool): If True, uses custom differentiable rounding function, if False, uses standard torch.round - """ - - def __init__(self, differentiable=True): - super(DiffJPEG, self).__init__() - if differentiable: - rounding = diff_round - else: - rounding = torch.round - - self.compress = CompressJpeg(rounding=rounding) - self.decompress = DeCompressJpeg(rounding=rounding) - - def forward(self, x, quality): - """ - Args: - x (Tensor): Input image, bchw, rgb, [0, 1] - quality(float): Quality factor for jpeg compression scheme. - """ - factor = quality - if isinstance(factor, (int, float)): - factor = quality_to_factor(factor) - else: - for i in range(factor.size(0)): - factor[i] = quality_to_factor(factor[i]) - h, w = x.size()[-2:] - h_pad, w_pad = 0, 0 - # why should use 16 - if h % 16 != 0: - h_pad = 16 - h % 16 - if w % 16 != 0: - w_pad = 16 - w % 16 - x = F.pad(x, (0, w_pad, 0, h_pad), mode='constant', value=0) - - y, cb, cr = self.compress(x, factor=factor) - recovered = self.decompress(y, cb, cr, (h + h_pad), (w + w_pad), factor=factor) - recovered = recovered[:, :, 0:h, 0:w] - return recovered - - -if __name__ == '__main__': - import cv2 - - from basicsr.utils import img2tensor, tensor2img - - img_gt = cv2.imread('test.png') / 255. 
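To answer the `# why should use 16` comment in `DiffJPEG.forward` above: chroma is subsampled by a factor of 2, so a padded height `h` produces chroma planes of height `h / 2`, and those must still split evenly into 8x8 DCT blocks; hence `h` (and `w`) must be multiples of 2 * 8 = 16. A small arithmetic check:

```python
h = 100
h_pad = (16 - h % 16) % 16          # 12, so the padded height is 112
assert (h + h_pad) % 16 == 0
assert ((h + h_pad) // 2) % 8 == 0  # chroma plane still tiles into 8x8 blocks
print(h + h_pad)                    # 112
```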
- - # -------------- cv2 -------------- # - encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 20] - _, encimg = cv2.imencode('.jpg', img_gt * 255., encode_param) - img_lq = np.float32(cv2.imdecode(encimg, 1)) - cv2.imwrite('cv2_JPEG_20.png', img_lq) - - # -------------- DiffJPEG -------------- # - jpeger = DiffJPEG(differentiable=False).cuda() - img_gt = img2tensor(img_gt) - img_gt = torch.stack([img_gt, img_gt]).cuda() - quality = img_gt.new_tensor([20, 40]) - out = jpeger(img_gt, quality=quality) - - cv2.imwrite('pt_JPEG_20.png', tensor2img(out[0])) - cv2.imwrite('pt_JPEG_40.png', tensor2img(out[1])) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/CODE_OF_CONDUCT.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/CODE_OF_CONDUCT.md deleted file mode 100644 index a0cbeaab7650bf08267fbdbc9bb54e845c88f392..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,77 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or - advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic - address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . 
All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq - diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/data_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/data_utils.py deleted file mode 100644 index f43a4a90046fb9ee4944dc06ba377c1faade141d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/data_utils.py +++ /dev/null @@ -1,320 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os -from pathlib import Path -from typing import Optional, List, Dict -import zipfile -import tempfile -from dataclasses import dataclass -from itertools import groupby - -import torch -import torch.nn.functional as F -import numpy as np -from tqdm import tqdm - -from examples.speech_to_text.data_utils import load_tsv_to_dicts -from fairseq.data.audio.audio_utils import TTSSpectrogram, TTSMelScale - - -def trim_or_pad_to_target_length( - data_1d_or_2d: np.ndarray, target_length: int -) -> np.ndarray: - assert len(data_1d_or_2d.shape) in {1, 2} - delta = data_1d_or_2d.shape[0] - target_length - if delta >= 0: # trim if being longer - data_1d_or_2d = data_1d_or_2d[: target_length] - else: # pad if being shorter - if len(data_1d_or_2d.shape) == 1: - data_1d_or_2d = np.concatenate( - [data_1d_or_2d, np.zeros(-delta)], axis=0 - ) - else: - data_1d_or_2d = np.concatenate( - [data_1d_or_2d, np.zeros((-delta, data_1d_or_2d.shape[1]))], - axis=0 - ) - return data_1d_or_2d - - -def extract_logmel_spectrogram( - waveform: torch.Tensor, sample_rate: int, - output_path: Optional[Path] = None, win_length: int = 1024, - hop_length: int = 256, n_fft: int = 1024, - win_fn: callable = torch.hann_window, n_mels: int = 80, - f_min: float = 0., f_max: float = 8000, eps: float = 1e-5, - overwrite: bool = False, target_length: Optional[int] = None -): - if output_path is not None and output_path.is_file() and not overwrite: - return - - spectrogram_transform = TTSSpectrogram( - n_fft=n_fft, win_length=win_length, hop_length=hop_length, - window_fn=win_fn - ) - mel_scale_transform = TTSMelScale( - n_mels=n_mels, sample_rate=sample_rate, f_min=f_min, f_max=f_max, - n_stft=n_fft // 2 + 1 - ) - spectrogram = spectrogram_transform(waveform) - mel_spec = mel_scale_transform(spectrogram) - logmel_spec = torch.clamp(mel_spec, min=eps).log() - assert len(logmel_spec.shape) == 3 and logmel_spec.shape[0] == 1 - logmel_spec = logmel_spec.squeeze().t() # D x T -> T x D - if target_length is not None: - trim_or_pad_to_target_length(logmel_spec, target_length) - - if 
output_path is not None: - np.save(output_path.as_posix(), logmel_spec) - else: - return logmel_spec - - -def extract_pitch( - waveform: torch.Tensor, sample_rate: int, - output_path: Optional[Path] = None, hop_length: int = 256, - log_scale: bool = True, phoneme_durations: Optional[List[int]] = None -): - if output_path is not None and output_path.is_file(): - return - - try: - import pyworld - except ImportError: - raise ImportError("Please install PyWORLD: pip install pyworld") - - _waveform = waveform.squeeze(0).double().numpy() - pitch, t = pyworld.dio( - _waveform, sample_rate, frame_period=hop_length / sample_rate * 1000 - ) - pitch = pyworld.stonemask(_waveform, pitch, t, sample_rate) - - if phoneme_durations is not None: - pitch = trim_or_pad_to_target_length(pitch, sum(phoneme_durations)) - try: - from scipy.interpolate import interp1d - except ImportError: - raise ImportError("Please install SciPy: pip install scipy") - nonzero_ids = np.where(pitch != 0)[0] - interp_fn = interp1d( - nonzero_ids, - pitch[nonzero_ids], - fill_value=(pitch[nonzero_ids[0]], pitch[nonzero_ids[-1]]), - bounds_error=False, - ) - pitch = interp_fn(np.arange(0, len(pitch))) - d_cumsum = np.cumsum(np.concatenate([np.array([0]), phoneme_durations])) - pitch = np.array( - [ - np.mean(pitch[d_cumsum[i-1]: d_cumsum[i]]) - for i in range(1, len(d_cumsum)) - ] - ) - assert len(pitch) == len(phoneme_durations) - - if log_scale: - pitch = np.log(pitch + 1) - - if output_path is not None: - np.save(output_path.as_posix(), pitch) - else: - return pitch - - -def extract_energy( - waveform: torch.Tensor, output_path: Optional[Path] = None, - hop_length: int = 256, n_fft: int = 1024, log_scale: bool = True, - phoneme_durations: Optional[List[int]] = None -): - if output_path is not None and output_path.is_file(): - return - - assert len(waveform.shape) == 2 and waveform.shape[0] == 1 - waveform = waveform.view(1, 1, waveform.shape[1]) - waveform = F.pad( - waveform.unsqueeze(1), [n_fft // 2, n_fft // 2, 0, 0], - mode="reflect" - ) - waveform = waveform.squeeze(1) - - fourier_basis = np.fft.fft(np.eye(n_fft)) - cutoff = int((n_fft / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])] - ) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - forward_transform = F.conv1d( - waveform, forward_basis, stride=hop_length, padding=0 - ) - - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2) - energy = torch.norm(magnitude, dim=1).squeeze(0).numpy() - - if phoneme_durations is not None: - energy = trim_or_pad_to_target_length(energy, sum(phoneme_durations)) - d_cumsum = np.cumsum(np.concatenate([np.array([0]), phoneme_durations])) - energy = np.array( - [ - np.mean(energy[d_cumsum[i - 1]: d_cumsum[i]]) - for i in range(1, len(d_cumsum)) - ] - ) - assert len(energy) == len(phoneme_durations) - - if log_scale: - energy = np.log(energy + 1) - - if output_path is not None: - np.save(output_path.as_posix(), energy) - else: - return energy - - -def get_global_cmvn(feature_root: Path, output_path: Optional[Path] = None): - mean_x, mean_x2, n_frames = None, None, 0 - feature_paths = feature_root.glob("*.npy") - for p in tqdm(feature_paths): - with open(p, 'rb') as f: - frames = np.load(f).squeeze() - - n_frames += frames.shape[0] - - cur_mean_x = frames.sum(axis=0) - if mean_x is None: - mean_x = cur_mean_x - else: - mean_x += cur_mean_x - - cur_mean_x2 = (frames 
** 2).sum(axis=0) - if mean_x2 is None: - mean_x2 = cur_mean_x2 - else: - mean_x2 += cur_mean_x2 - - mean_x /= n_frames - mean_x2 /= n_frames - var_x = mean_x2 - mean_x ** 2 - std_x = np.sqrt(np.maximum(var_x, 1e-10)) - - if output_path is not None: - with open(output_path, 'wb') as f: - np.savez(f, mean=mean_x, std=std_x) - else: - return {"mean": mean_x, "std": std_x} - - -def ipa_phonemize(text, lang="en-us", use_g2p=False): - if use_g2p: - assert lang == "en-us", "g2pE phonemizer only works for en-us" - try: - from g2p_en import G2p - g2p = G2p() - return " ".join("|" if p == " " else p for p in g2p(text)) - except ImportError: - raise ImportError( - "Please install phonemizer: pip install g2p_en" - ) - else: - try: - from phonemizer import phonemize - from phonemizer.separator import Separator - return phonemize( - text, backend='espeak', language=lang, - separator=Separator(word="| ", phone=" ") - ) - except ImportError: - raise ImportError( - "Please install phonemizer: pip install phonemizer" - ) - - -@dataclass -class ForceAlignmentInfo(object): - tokens: List[str] - frame_durations: List[int] - start_sec: Optional[float] - end_sec: Optional[float] - - -def get_mfa_alignment_by_sample_id( - textgrid_zip_path: str, sample_id: str, sample_rate: int, - hop_length: int, silence_phones: List[str] = ("sil", "sp", "spn") -) -> ForceAlignmentInfo: - try: - import tgt - except ImportError: - raise ImportError("Please install TextGridTools: pip install tgt") - - filename = f"{sample_id}.TextGrid" - out_root = Path(tempfile.gettempdir()) - tgt_path = out_root / filename - with zipfile.ZipFile(textgrid_zip_path) as f_zip: - f_zip.extract(filename, path=out_root) - textgrid = tgt.io.read_textgrid(tgt_path.as_posix()) - os.remove(tgt_path) - - phones, frame_durations = [], [] - start_sec, end_sec, end_idx = 0, 0, 0 - for t in textgrid.get_tier_by_name("phones")._objects: - s, e, p = t.start_time, t.end_time, t.text - # Trim leading silences - if len(phones) == 0: - if p in silence_phones: - continue - else: - start_sec = s - phones.append(p) - if p not in silence_phones: - end_sec = e - end_idx = len(phones) - r = sample_rate / hop_length - frame_durations.append(int(np.round(e * r) - np.round(s * r))) - # Trim tailing silences - phones = phones[:end_idx] - frame_durations = frame_durations[:end_idx] - - return ForceAlignmentInfo( - tokens=phones, frame_durations=frame_durations, start_sec=start_sec, - end_sec=end_sec - ) - - -def get_mfa_alignment( - textgrid_zip_path: str, sample_ids: List[str], sample_rate: int, - hop_length: int -) -> Dict[str, ForceAlignmentInfo]: - return { - i: get_mfa_alignment_by_sample_id( - textgrid_zip_path, i, sample_rate, hop_length - ) for i in tqdm(sample_ids) - } - - -def get_unit_alignment( - id_to_unit_tsv_path: str, sample_ids: List[str] -) -> Dict[str, ForceAlignmentInfo]: - id_to_units = { - e["id"]: e["units"] for e in load_tsv_to_dicts(id_to_unit_tsv_path) - } - id_to_units = {i: id_to_units[i].split() for i in sample_ids} - id_to_units_collapsed = { - i: [uu for uu, _ in groupby(u)] for i, u in id_to_units.items() - } - id_to_durations = { - i: [len(list(g)) for _, g in groupby(u)] for i, u in id_to_units.items() - } - - return { - i: ForceAlignmentInfo( - tokens=id_to_units_collapsed[i], frame_durations=id_to_durations[i], - start_sec=None, end_sec=None - ) - for i in sample_ids - } diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/masked_lm_dictionary.py 
b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/masked_lm_dictionary.py deleted file mode 100644 index dee88f7a3ed72ea465ea4e8ffe7b1c01ff6f57f1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/masked_lm_dictionary.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.data import Dictionary - - -class MaskedLMDictionary(Dictionary): - """ - Dictionary for Masked Language Modelling tasks. This extends Dictionary by - adding the mask symbol. - """ - - def __init__( - self, - pad="<pad>", - eos="</s>", - unk="<unk>", - mask="<mask>", - ): - super().__init__(pad=pad, eos=eos, unk=unk) - self.mask_word = mask - self.mask_index = self.add_symbol(mask) - self.nspecial = len(self.symbols) - - def mask(self): - """Helper to get index of mask symbol""" - return self.mask_index - - -class BertDictionary(MaskedLMDictionary): - """ - Dictionary for BERT task. This extends MaskedLMDictionary by adding support - for cls and sep symbols. - """ - - def __init__( - self, - pad="<pad>", - eos="</s>", - unk="<unk>", - mask="<mask>", - cls="<cls>", - sep="<sep>", - ): - super().__init__(pad=pad, eos=eos, unk=unk, mask=mask) - self.cls_word = cls - self.sep_word = sep - self.cls_index = self.add_symbol(cls) - self.sep_index = self.add_symbol(sep) - self.nspecial = len(self.symbols) - - def cls(self): - """Helper to get index of cls symbol""" - return self.cls_index - - def sep(self): - """Helper to get index of sep symbol""" - return self.sep_index diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/fairseq_dropout.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/fairseq_dropout.py deleted file mode 100644 index 3cddca77186f5ddd5cfb9c0ed6def9bafdf3bf1e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/fairseq_dropout.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree.
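Usage note for the MaskedLMDictionary/BertDictionary classes above — a minimal sketch, illustrative only, assuming a working fairseq installation (the sample words are arbitrary):

from fairseq.data.legacy.masked_lm_dictionary import BertDictionary

d = BertDictionary()  # registers <pad>, </s>, <unk>, <mask>, <cls>, <sep>
for word in ("hello", "world"):
    d.add_symbol(word)
# Wrap a token sequence in BERT-style cls/sep markers, working with indices.
ids = [d.cls()] + [d.index(w) for w in ("hello", "world")] + [d.sep()]
assert d.index("<mask>") == d.mask()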
- -import logging -from typing import List, Optional - -import torch.nn as nn -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -class FairseqDropout(nn.Module): - def __init__(self, p, module_name=None): - super().__init__() - self.p = p - self.module_name = module_name - self.apply_during_inference = False - - def forward(self, x, inplace: bool = False): - if self.p > 0 and (self.training or self.apply_during_inference): - return F.dropout(x, p=self.p, training=True, inplace=inplace) - else: - return x - - def make_generation_fast_( - self, - name: str, - retain_dropout: bool = False, - retain_dropout_modules: Optional[List[str]] = None, - **kwargs - ): - if retain_dropout: - if retain_dropout_modules is not None and self.module_name is None: - logger.warning( - "Cannot enable dropout during inference for module {} " - "because module_name was not set".format(name) - ) - elif ( - retain_dropout_modules is None # if None, apply to all modules - or self.module_name in retain_dropout_modules - ): - logger.info( - "Enabling dropout during inference for module: {}".format(name) - ) - self.apply_during_inference = True - else: - logger.info("Disabling dropout for module: {}".format(name)) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/composite.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/composite.py deleted file mode 100644 index a5366d62434a4400ba9cc524f4286f99f733d121..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/composite.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from collections import defaultdict -from dataclasses import dataclass, field -from typing import Dict, Any, List, Optional - -import torch.optim -from fairseq.dataclass import FairseqDataclass -from fairseq.optim import FairseqOptimizer, register_optimizer, _build_optimizer -from fairseq.optim.lr_scheduler import FairseqLRScheduler, build_lr_scheduler -from omegaconf import II, open_dict - - -logger = logging.getLogger(__name__) - - -@dataclass -class OptimizerAndSchedulerConfig(FairseqDataclass): - optimizer: Any = None - lr_scheduler: Optional[Any] = None - lr: List = II("optimization.lr") - lr_float: Optional[float] = None # this makes it easier to sweep on learning rate with auto sweepers - - -@dataclass -class CompositeOptimizerConfig(FairseqDataclass): - groups: Dict[str, Any] = field( - default_factory=lambda: {}, - metadata={ - "help": "optimizer name -> optimizer OptimizerAndSchedulerConfig. 
" - "Configures a different optimizer and (optionally) lr scheduler for each parameter group" - }, - ) - - -@register_optimizer("composite", dataclass=CompositeOptimizerConfig) -class FairseqCompositeOptimizer(FairseqOptimizer): - - optimizers: Dict[str, FairseqOptimizer] = {} - lr_schedulers: Dict[str, FairseqLRScheduler] = {} - lr_scheduler: FairseqLRScheduler = None - _optimizer: torch.optim.Optimizer - - def __init__(self, cfg: CompositeOptimizerConfig, params): - super().__init__(cfg) - - assert ( - len(params) > 1 - ), "Composite optimizer only works when there are multiple parameter groups (try fp16_no_flatten_grads: true)" - - groupped_params = defaultdict(list) - for p in params: - group = getattr(p, "param_group", "default") - groupped_params[group].append(p) - - assert groupped_params.keys() == cfg.groups.keys(), ( - f"Parameter groups {groupped_params.keys()} and optimizer groups {cfg.groups.keys()} are not the same! " - "Try setting 'param_group' on your parameters in the model." - ) - - for group, group_params in groupped_params.items(): - group_cfg = cfg.groups[group] - with open_dict(group_cfg): - if group_cfg.lr_float is not None: - group_cfg.optimizer.lr = [group_cfg.lr_float] - group_cfg.lr_scheduler.lr = [group_cfg.lr_float] - else: - group_cfg.optimizer.lr = group_cfg.lr - group_cfg.lr_scheduler.lr = group_cfg.lr - self.optimizers[group] = _build_optimizer(group_cfg.optimizer, group_params) - if group_cfg.lr_scheduler is not None: - self.lr_schedulers[group] = build_lr_scheduler( - group_cfg.lr_scheduler, self.optimizers[group] - ) - - if len(self.lr_schedulers) > 0: - assert len(self.lr_schedulers) == len(self.optimizers), ( - f"Please provide an lr scheduler for each optimizer to use pass_through scheduler. " - f"Optimizers: {self.optimizers}; Lr scheds: {self.lr_schedulers}" - ) - self.lr_scheduler = CompositeLRScheduler(self.lr_schedulers) - - self._optimizer = CompositeOptimizer(self.optimizers) - - @property - def supports_groups(self): - return True - - @property - def param_groups(self): - for opt in self.optimizers.values(): - for group in opt.param_groups: - yield group - - def get_lr(self): - """Return the current learning rate.""" - k = ( - "default" - if "default" in self.optimizers - else next(iter(self.optimizers.keys())) - ) - return self.optimizers[k].param_groups[0]["lr"] - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {k: s.state_dict() for k, s in self.optimizers.items()} - - def load_state_dict(self, state_dict, optimizer_overrides=None): - """Load an LR scheduler state dict.""" - for k, state in state_dict.items(): - if k not in self.optimizers: - # skip extra keys like "loss_scale" added by fp16 optimizer - continue - - overrides = ( - optimizer_overrides[k] - if isinstance(optimizer_overrides, dict) and k in optimizer_overrides - else None - ) - self.optimizers[k].load_state_dict(state, optimizer_overrides=overrides) - - -class CompositeOptimizer(torch.optim.Optimizer): - def __init__(self, optimizers: Dict[str, FairseqOptimizer]): - self.optimizers = optimizers - - @property - def supports_memory_efficient_fp16(self): - return all(o.supports_memory_efficient_fp16 for o in self.optimizers.values()) - - @property - def supports_flat_params(self): - return all(o.supports_flat_params for o in self.optimizers.values()) - - def step(self, closure=None, groups=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. 
- """ - loss = None - if closure is not None: - loss = closure() - - for k, opt in self.optimizers.items(): - if groups is None or k in groups: - opt.step() - - return loss - - def zero_grad(self): - for opt in self.optimizers.values(): - opt.zero_grad() - - -class CompositeLRScheduler(FairseqLRScheduler): - def __init__(self, lr_schedulers): - super().__init__(None, None) - - self.lr_schedulers = lr_schedulers - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {k: s.state_dict() for k, s in self.lr_schedulers.items()} - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - for k, state in state_dict.items(): - self.lr_schedulers[k].load_state_dict(state) - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - for s in self.lr_schedulers.values(): - s.step_begin_epoch(epoch) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - for s in self.lr_schedulers.values(): - s.step(epoch) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return {k: s.step_update(num_updates) for k, s in self.lr_schedulers.items()} diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_sequence_scorer.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_sequence_scorer.py deleted file mode 100644 index 42f9447b599bcd7a9913aec37d94ea5078ff43a3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_sequence_scorer.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import unittest - -import tests.utils as test_utils -import torch -from fairseq.sequence_scorer import SequenceScorer - - -class TestSequenceScorer(unittest.TestCase): - def test_sequence_scorer(self): - # construct dummy dictionary - d = test_utils.dummy_dictionary(vocab_size=2) - self.assertEqual(d.pad(), 1) - self.assertEqual(d.eos(), 2) - self.assertEqual(d.unk(), 3) - eos = d.eos() - w1 = 4 - w2 = 5 - - # construct dataloader - data = [ - { - "source": torch.LongTensor([w1, w2, eos]), - "target": torch.LongTensor([w1, w2, w1, eos]), - }, - { - "source": torch.LongTensor([w2, eos]), - "target": torch.LongTensor([w2, w1, eos]), - }, - { - "source": torch.LongTensor([w2, eos]), - "target": torch.LongTensor([w2, eos]), - }, - ] - data_itr = test_utils.dummy_dataloader(data) - - # specify expected output probabilities - args = argparse.Namespace() - unk = 0.0 - args.beam_probs = [ - # step 0: - torch.FloatTensor( - [ - # eos w1 w2 - [0.0, unk, 0.6, 0.4], # sentence 1 - [0.0, unk, 0.4, 0.6], # sentence 2 - [0.0, unk, 0.7, 0.3], # sentence 3 - ] - ), - # step 1: - torch.FloatTensor( - [ - # eos w1 w2 - [0.0, unk, 0.2, 0.7], # sentence 1 - [0.0, unk, 0.8, 0.2], # sentence 2 - [0.7, unk, 0.1, 0.2], # sentence 3 - ] - ), - # step 2: - torch.FloatTensor( - [ - # eos w1 w2 - [0.10, unk, 0.50, 0.4], # sentence 1 - [0.15, unk, 0.15, 0.7], # sentence 2 - [0.00, unk, 0.00, 0.0], # sentence 3 - ] - ), - # step 3: - torch.FloatTensor( - [ - # eos w1 w2 - [0.9, unk, 0.05, 0.05], # sentence 1 - [0.0, unk, 0.00, 0.0], # sentence 2 - [0.0, unk, 0.00, 0.0], # sentence 3 - ] - ), - ] - expected_scores = [ - [0.6, 0.7, 0.5, 0.9], # sentence 1 - [0.6, 0.8, 0.15], # sentence 2 - [0.3, 0.7], # sentence 3 - ] - - task = test_utils.TestTranslationTask.setup_task(args, d, d) - model = task.build_model(args) - scorer = SequenceScorer(task.target_dictionary) - for sample in data_itr: - hypos = task.inference_step(scorer, [model], sample) - for id, hypos_id in zip(sample["id"].tolist(), hypos): - self.assertHypoTokens(hypos_id[0], data[id]["target"]) - self.assertHypoScore(hypos_id[0], expected_scores[id]) - - def assertHypoTokens(self, hypo, tokens): - self.assertTensorEqual(hypo["tokens"], torch.LongTensor(tokens)) - - def assertHypoScore(self, hypo, pos_probs, normalized=True, lenpen=1.0): - pos_scores = torch.FloatTensor(pos_probs).log() - self.assertAlmostEqual(hypo["positional_scores"], pos_scores) - self.assertEqual(pos_scores.numel(), hypo["tokens"].numel()) - score = pos_scores.sum() - if normalized: - score /= pos_scores.numel() ** lenpen - self.assertLess(abs(score - hypo["score"]), 1e-6) - - def assertAlmostEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertLess((t1 - t2).abs().max(), 1e-4) - - def assertTensorEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertEqual(t1.ne(t2).long().sum(), 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/multi_modality_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/multi_modality_dataset.py deleted file mode 100644 index 69d23d31c1eb66803fa5062b5991a7c34ab07dc7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/multi_modality_dataset.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) 2021-present, Facebook, Inc. -# All rights reserved. 
-# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -import math -from typing import List, Optional, NamedTuple - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - LanguagePairDataset, - FileAudioDataset, - data_utils, -) -from fairseq.data import FairseqDataset - -logger = logging.getLogger(__name__) - - -class ModalityDatasetItem(NamedTuple): - datasetname: str - dataset: any - max_positions: List[int] - max_tokens: Optional[int] = None - max_sentences: Optional[int] = None - -# MultiModalityDataset: it concate multiple datasets with different modalities. -# Compared with ConcatDataset it can 1) sample data given the ratios for different datasets -# 2) it adds mode to indicate what type of the data samples come from. -# It will be used with GroupedEpochBatchIterator together to generate mini-batch with samples -# from the same type of dataset -# If only one dataset is used, it will perform like the original dataset with mode added -class MultiModalityDataset(ConcatDataset): - def __init__(self, datasets: List[ModalityDatasetItem]): - id_to_mode = [] - dsets = [] - max_tokens = [] - max_sentences = [] - max_positions = [] - for dset in datasets: - id_to_mode.append(dset.datasetname) - dsets.append(dset.dataset) - max_tokens.append(dset.max_tokens) - max_positions.append(dset.max_positions) - max_sentences.append(dset.max_sentences) - weights = [1.0 for s in dsets] - super().__init__(dsets, weights) - self.max_tokens = max_tokens - self.max_positions = max_positions - self.max_sentences = max_sentences - self.id_to_mode = id_to_mode - self.raw_sub_batch_samplers = [] - self._cur_epoch = 0 - - def set_epoch(self, epoch): - super().set_epoch(epoch) - self._cur_epoch = epoch - - def __getitem__(self, idx): - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - sample = self.datasets[dataset_idx][sample_idx] - return (dataset_idx, sample) - - def collater(self, samples): - if len(samples) == 0: - return {} - dataset_idx = samples[0][0] - # make sure all samples in samples are from same dataset - assert sum([0 if dataset_idx == s[0] else 1 for s in samples]) == 0 - samples = self.datasets[dataset_idx].collater([x[1] for x in samples]) - # add mode - samples["net_input"]["mode"] = self.id_to_mode[dataset_idx] - - return samples - - def size(self, index: int): - if len(self.datasets) == 1: - return self.datasets[0].size(index) - return super().size(index) - - @property - def sizes(self): - if len(self.datasets) == 1: - return self.datasets[0].sizes - super().sizes - - def ordered_indices(self): - """ - Returns indices sorted by length. So less padding is needed. - """ - if len(self.datasets) == 1: - return self.datasets[0].ordered_indices() - indices_group = [] - for d_idx, ds in enumerate(self.datasets): - sample_num = self.cumulative_sizes[d_idx] - if d_idx > 0: - sample_num = sample_num - self.cumulative_sizes[d_idx - 1] - assert sample_num == len(ds) - indices_group.append(ds.ordered_indices()) - return indices_group - - def get_raw_batch_samplers(self, required_batch_size_multiple, seed): - if len(self.raw_sub_batch_samplers) > 0: - logger.info(" raw_sub_batch_samplers exists. 
No action is taken") - return - with data_utils.numpy_seed(seed): - indices = self.ordered_indices() - for i, ds in enumerate(self.datasets): - indices[i] = ds.filter_indices_by_size( - indices[i], - self.max_positions[i], - )[0] - sub_batch_sampler = ds.batch_by_size( - indices[i], - max_tokens=self.max_tokens[i], - max_sentences=self.max_sentences[i], - required_batch_size_multiple=required_batch_size_multiple, - ) - self.raw_sub_batch_samplers.append(sub_batch_sampler) - - def get_batch_samplers(self, mult_ratios, required_batch_size_multiple, seed): - self.get_raw_batch_samplers(required_batch_size_multiple, seed) - batch_samplers = [] - for i, _ in enumerate(self.datasets): - if i > 0: - sub_batch_sampler = [ - [y + self.cumulative_sizes[i - 1] for y in x] - for x in self.raw_sub_batch_samplers[i] - ] - else: - sub_batch_sampler = list(self.raw_sub_batch_samplers[i]) - smp_r = mult_ratios[i] - if smp_r != 1: - is_increase = "increased" if smp_r > 1 else "decreased" - logger.info( - "number of batch for the dataset {} is {} from {} to {}".format( - self.id_to_mode[i], - is_increase, - len(sub_batch_sampler), - int(len(sub_batch_sampler) * smp_r), - ) - ) - mul_samplers = [] - for _ in range(math.floor(smp_r)): - mul_samplers = mul_samplers + sub_batch_sampler - if math.floor(smp_r) != smp_r: - with data_utils.numpy_seed(seed + self._cur_epoch): - np.random.shuffle(sub_batch_sampler) - smp_num = int( - (smp_r - math.floor(smp_r)) * len(sub_batch_sampler) - ) - mul_samplers = mul_samplers + sub_batch_sampler[:smp_num] - sub_batch_sampler = mul_samplers - else: - logger.info( - "dataset {} batch number is {} ".format( - self.id_to_mode[i], len(sub_batch_sampler) - ) - ) - batch_samplers.append(sub_batch_sampler) - - return batch_samplers - - -class LangPairMaskDataset(FairseqDataset): - def __init__( - self, - dataset: LanguagePairDataset, - src_eos: int, - src_bos: Optional[int] = None, - noise_id: Optional[int] = -1, - mask_ratio: Optional[float] = 0, - mask_type: Optional[str] = "random", - ): - self.dataset = dataset - self.src_eos = src_eos - self.src_bos = src_bos - self.noise_id = noise_id - self.mask_ratio = mask_ratio - self.mask_type = mask_type - assert mask_type in ("random", "tail") - - @property - def src_sizes(self): - return self.dataset.src_sizes - - @property - def tgt_sizes(self): - return self.dataset.tgt_sizes - - @property - def sizes(self): - # dataset.sizes can be a dynamically computed sizes: - return self.dataset.sizes - - def get_batch_shapes(self): - return self.dataset.buckets - - def num_tokens_vec(self, indices): - return self.dataset.num_tokens_vec(indices) - - def __len__(self): - return len(self.dataset) - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) - - def mask_src_tokens(self, sample): - src_item = sample["source"] - mask = None - if self.mask_type == "random": - mask = torch.rand(len(src_item)).le(self.mask_ratio) - else: - mask = torch.ones(len(src_item)) - mask[: int(len(src_item) * (1 - self.mask_ratio))] = 0 - mask = mask.eq(1) - if src_item[0] == self.src_bos: - mask[0] = False - if src_item[-1] == self.src_eos: - mask[-1] = False - mask_src_item = src_item.masked_fill(mask, self.noise_id) - smp = {"id": sample["id"], 
"source": mask_src_item, "target": sample["target"]} - return smp - - def __getitem__(self, index): - sample = self.dataset[index] - if self.mask_ratio > 0: - sample = self.mask_src_tokens(sample) - return sample - - def collater(self, samples, pad_to_length=None): - return self.dataset.collater(samples, pad_to_length) - - -class FileAudioDatasetWrapper(FileAudioDataset): - def collater(self, samples): - samples = super().collater(samples) - if len(samples) == 0: - return {} - samples["net_input"]["src_tokens"] = samples["net_input"]["source"] - samples["net_input"]["prev_output_tokens"] = None - del samples["net_input"]["source"] - samples["net_input"]["src_lengths"] = None - samples["net_input"]["alignment"] = None - return samples diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/transforms/custom_augmentation_impl.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/transforms/custom_augmentation_impl.py deleted file mode 100644 index 6b9637f3ad41e3ba513636219e49371296d9ab9f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/transforms/custom_augmentation_impl.py +++ /dev/null @@ -1,52 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# Part of the code is from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/transforms.py -# Modified by Xingyi Zhou -# The original code is under Apache-2.0 License -import numpy as np -from PIL import Image - -from detectron2.data.transforms.augmentation import Augmentation -from .custom_transform import EfficientDetResizeCropTransform - -__all__ = [ - "EfficientDetResizeCrop", -] - - -class EfficientDetResizeCrop(Augmentation): - """ - Scale the shorter edge to the given size, with a limit of `max_size` on the longer edge. - If `max_size` is reached, then downscale so that the longer edge does not exceed max_size. - """ - - def __init__( - self, size, scale, interp=Image.BILINEAR - ): - """ - """ - super().__init__() - self.target_size = (size, size) - self.scale = scale - self.interp = interp - - def get_transform(self, img): - # Select a random scale factor. - scale_factor = np.random.uniform(*self.scale) - scaled_target_height = scale_factor * self.target_size[0] - scaled_target_width = scale_factor * self.target_size[1] - # Recompute the accurate scale_factor using rounded scaled image size. 
- width, height = img.shape[1], img.shape[0] - img_scale_y = scaled_target_height / height - img_scale_x = scaled_target_width / width - img_scale = min(img_scale_y, img_scale_x) - - # Select non-zero random offset (x, y) if scaled image is larger than target size - scaled_h = int(height * img_scale) - scaled_w = int(width * img_scale) - offset_y = scaled_h - self.target_size[0] - offset_x = scaled_w - self.target_size[1] - offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1)) - offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1)) - return EfficientDetResizeCropTransform( - scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py deleted file mode 100644 index 41395bdd53b67b7a7111f06564c3a2d2b63a7cdc..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py +++ /dev/null @@ -1,394 +0,0 @@ -from detectron2.data.datasets.register_coco import register_coco_instances -import os - -categories_v1 = [ -{'id': 164, 'name': 'cutting/chopping board'} , -{'id': 49, 'name': 'tie'} , -{'id': 306, 'name': 'crosswalk sign'} , -{'id': 145, 'name': 'gun'} , -{'id': 14, 'name': 'street lights'} , -{'id': 223, 'name': 'bar soap'} , -{'id': 74, 'name': 'wild bird'} , -{'id': 219, 'name': 'ice cream'} , -{'id': 37, 'name': 'stool'} , -{'id': 25, 'name': 'storage box'} , -{'id': 153, 'name': 'giraffe'} , -{'id': 52, 'name': 'pen/pencil'} , -{'id': 61, 'name': 'high heels'} , -{'id': 340, 'name': 'mangosteen'} , -{'id': 22, 'name': 'bracelet'} , -{'id': 155, 'name': 'piano'} , -{'id': 162, 'name': 'vent'} , -{'id': 75, 'name': 'laptop'} , -{'id': 236, 'name': 'toaster'} , -{'id': 231, 'name': 'fire truck'} , -{'id': 42, 'name': 'basket'} , -{'id': 150, 'name': 'zebra'} , -{'id': 124, 'name': 'head phone'} , -{'id': 90, 'name': 'sheep'} , -{'id': 322, 'name': 'steak'} , -{'id': 39, 'name': 'couch'} , -{'id': 209, 'name': 'toothbrush'} , -{'id': 59, 'name': 'bicycle'} , -{'id': 336, 'name': 'red cabbage'} , -{'id': 228, 'name': 'golf ball'} , -{'id': 120, 'name': 'tomato'} , -{'id': 132, 'name': 'computer box'} , -{'id': 8, 'name': 'cup'} , -{'id': 183, 'name': 'basketball'} , -{'id': 298, 'name': 'butterfly'} , -{'id': 250, 'name': 'garlic'} , -{'id': 12, 'name': 'desk'} , -{'id': 141, 'name': 'microwave'} , -{'id': 171, 'name': 'strawberry'} , -{'id': 200, 'name': 'kettle'} , -{'id': 63, 'name': 'van'} , -{'id': 300, 'name': 'cheese'} , -{'id': 215, 'name': 'marker'} , -{'id': 100, 'name': 'blackboard/whiteboard'} , -{'id': 186, 'name': 'printer'} , -{'id': 333, 'name': 'bread/bun'} , -{'id': 243, 'name': 'penguin'} , -{'id': 364, 'name': 'iron'} , -{'id': 180, 'name': 'ladder'} , -{'id': 34, 'name': 'flag'} , -{'id': 78, 'name': 'cell phone'} , -{'id': 97, 'name': 'fan'} , -{'id': 224, 'name': 'scale'} , -{'id': 151, 'name': 'duck'} , -{'id': 319, 'name': 'flute'} , -{'id': 156, 'name': 'stop sign'} , -{'id': 290, 'name': 'rickshaw'} , -{'id': 128, 'name': 'sailboat'} , -{'id': 165, 'name': 'tennis racket'} , -{'id': 241, 'name': 'cigar'} , -{'id': 101, 'name': 'balloon'} , -{'id': 308, 'name': 'hair drier'} , -{'id': 167, 'name': 'skating and skiing shoes'} , -{'id': 237, 
'name': 'helicopter'} , -{'id': 65, 'name': 'sink'} , -{'id': 129, 'name': 'tangerine'} , -{'id': 330, 'name': 'crab'} , -{'id': 320, 'name': 'measuring cup'} , -{'id': 260, 'name': 'fishing rod'} , -{'id': 346, 'name': 'saw'} , -{'id': 216, 'name': 'ship'} , -{'id': 46, 'name': 'coffee table'} , -{'id': 194, 'name': 'facial mask'} , -{'id': 281, 'name': 'stapler'} , -{'id': 118, 'name': 'refrigerator'} , -{'id': 40, 'name': 'belt'} , -{'id': 349, 'name': 'starfish'} , -{'id': 87, 'name': 'hanger'} , -{'id': 116, 'name': 'baseball glove'} , -{'id': 261, 'name': 'cherry'} , -{'id': 334, 'name': 'baozi'} , -{'id': 267, 'name': 'screwdriver'} , -{'id': 158, 'name': 'converter'} , -{'id': 335, 'name': 'lion'} , -{'id': 170, 'name': 'baseball'} , -{'id': 111, 'name': 'skis'} , -{'id': 136, 'name': 'broccoli'} , -{'id': 342, 'name': 'eraser'} , -{'id': 337, 'name': 'polar bear'} , -{'id': 139, 'name': 'shovel'} , -{'id': 193, 'name': 'extension cord'} , -{'id': 284, 'name': 'goldfish'} , -{'id': 174, 'name': 'pepper'} , -{'id': 138, 'name': 'stroller'} , -{'id': 328, 'name': 'yak'} , -{'id': 83, 'name': 'clock'} , -{'id': 235, 'name': 'tricycle'} , -{'id': 248, 'name': 'parking meter'} , -{'id': 274, 'name': 'trophy'} , -{'id': 324, 'name': 'binoculars'} , -{'id': 51, 'name': 'traffic light'} , -{'id': 314, 'name': 'donkey'} , -{'id': 45, 'name': 'barrel/bucket'} , -{'id': 292, 'name': 'pomegranate'} , -{'id': 13, 'name': 'handbag'} , -{'id': 262, 'name': 'tablet'} , -{'id': 68, 'name': 'apple'} , -{'id': 226, 'name': 'cabbage'} , -{'id': 23, 'name': 'flower'} , -{'id': 58, 'name': 'faucet'} , -{'id': 206, 'name': 'tong'} , -{'id': 291, 'name': 'trombone'} , -{'id': 160, 'name': 'carrot'} , -{'id': 172, 'name': 'bow tie'} , -{'id': 122, 'name': 'tent'} , -{'id': 163, 'name': 'cookies'} , -{'id': 115, 'name': 'remote'} , -{'id': 175, 'name': 'coffee machine'} , -{'id': 238, 'name': 'green beans'} , -{'id': 233, 'name': 'cello'} , -{'id': 28, 'name': 'wine glass'} , -{'id': 295, 'name': 'mushroom'} , -{'id': 344, 'name': 'scallop'} , -{'id': 125, 'name': 'lantern'} , -{'id': 123, 'name': 'shampoo/shower gel'} , -{'id': 285, 'name': 'meat balls'} , -{'id': 266, 'name': 'key'} , -{'id': 296, 'name': 'calculator'} , -{'id': 168, 'name': 'scissors'} , -{'id': 103, 'name': 'cymbal'} , -{'id': 6, 'name': 'bottle'} , -{'id': 264, 'name': 'nuts'} , -{'id': 234, 'name': 'notepaper'} , -{'id': 211, 'name': 'mango'} , -{'id': 287, 'name': 'toothpaste'} , -{'id': 196, 'name': 'chopsticks'} , -{'id': 140, 'name': 'baseball bat'} , -{'id': 244, 'name': 'hurdle'} , -{'id': 195, 'name': 'tennis ball'} , -{'id': 144, 'name': 'surveillance camera'} , -{'id': 271, 'name': 'volleyball'} , -{'id': 94, 'name': 'keyboard'} , -{'id': 339, 'name': 'seal'} , -{'id': 11, 'name': 'picture/frame'} , -{'id': 348, 'name': 'okra'} , -{'id': 191, 'name': 'sausage'} , -{'id': 166, 'name': 'candy'} , -{'id': 62, 'name': 'ring'} , -{'id': 311, 'name': 'dolphin'} , -{'id': 273, 'name': 'eggplant'} , -{'id': 84, 'name': 'drum'} , -{'id': 143, 'name': 'surfboard'} , -{'id': 288, 'name': 'antelope'} , -{'id': 204, 'name': 'clutch'} , -{'id': 207, 'name': 'slide'} , -{'id': 43, 'name': 'towel/napkin'} , -{'id': 352, 'name': 'durian'} , -{'id': 276, 'name': 'board eraser'} , -{'id': 315, 'name': 'electric drill'} , -{'id': 312, 'name': 'sushi'} , -{'id': 198, 'name': 'pie'} , -{'id': 106, 'name': 'pickup truck'} , -{'id': 176, 'name': 'bathtub'} , -{'id': 26, 'name': 'vase'} , -{'id': 133, 'name': 'elephant'} , -{'id': 256, 'name': 
'sandwich'} , -{'id': 327, 'name': 'noodles'} , -{'id': 10, 'name': 'glasses'} , -{'id': 109, 'name': 'airplane'} , -{'id': 95, 'name': 'tripod'} , -{'id': 247, 'name': 'CD'} , -{'id': 121, 'name': 'machinery vehicle'} , -{'id': 365, 'name': 'flashlight'} , -{'id': 53, 'name': 'microphone'} , -{'id': 270, 'name': 'pliers'} , -{'id': 362, 'name': 'chainsaw'} , -{'id': 259, 'name': 'bear'} , -{'id': 197, 'name': 'electronic stove and gas stove'} , -{'id': 89, 'name': 'pot/pan'} , -{'id': 220, 'name': 'tape'} , -{'id': 338, 'name': 'lighter'} , -{'id': 177, 'name': 'snowboard'} , -{'id': 214, 'name': 'violin'} , -{'id': 217, 'name': 'chicken'} , -{'id': 2, 'name': 'sneakers'} , -{'id': 161, 'name': 'washing machine'} , -{'id': 131, 'name': 'kite'} , -{'id': 354, 'name': 'rabbit'} , -{'id': 86, 'name': 'bus'} , -{'id': 275, 'name': 'dates'} , -{'id': 282, 'name': 'camel'} , -{'id': 88, 'name': 'nightstand'} , -{'id': 179, 'name': 'grapes'} , -{'id': 229, 'name': 'pine apple'} , -{'id': 56, 'name': 'necklace'} , -{'id': 18, 'name': 'leather shoes'} , -{'id': 358, 'name': 'hoverboard'} , -{'id': 345, 'name': 'pencil case'} , -{'id': 359, 'name': 'pasta'} , -{'id': 157, 'name': 'radiator'} , -{'id': 201, 'name': 'hamburger'} , -{'id': 268, 'name': 'globe'} , -{'id': 332, 'name': 'barbell'} , -{'id': 329, 'name': 'mop'} , -{'id': 252, 'name': 'horn'} , -{'id': 350, 'name': 'eagle'} , -{'id': 169, 'name': 'folder'} , -{'id': 137, 'name': 'toilet'} , -{'id': 5, 'name': 'lamp'} , -{'id': 27, 'name': 'bench'} , -{'id': 249, 'name': 'swan'} , -{'id': 76, 'name': 'knife'} , -{'id': 341, 'name': 'comb'} , -{'id': 64, 'name': 'watch'} , -{'id': 105, 'name': 'telephone'} , -{'id': 3, 'name': 'chair'} , -{'id': 33, 'name': 'boat'} , -{'id': 107, 'name': 'orange'} , -{'id': 60, 'name': 'bread'} , -{'id': 147, 'name': 'cat'} , -{'id': 135, 'name': 'gas stove'} , -{'id': 307, 'name': 'papaya'} , -{'id': 227, 'name': 'router/modem'} , -{'id': 357, 'name': 'asparagus'} , -{'id': 73, 'name': 'motorcycle'} , -{'id': 77, 'name': 'traffic sign'} , -{'id': 67, 'name': 'fish'} , -{'id': 326, 'name': 'radish'} , -{'id': 213, 'name': 'egg'} , -{'id': 203, 'name': 'cucumber'} , -{'id': 17, 'name': 'helmet'} , -{'id': 110, 'name': 'luggage'} , -{'id': 80, 'name': 'truck'} , -{'id': 199, 'name': 'frisbee'} , -{'id': 232, 'name': 'peach'} , -{'id': 1, 'name': 'person'} , -{'id': 29, 'name': 'boots'} , -{'id': 310, 'name': 'chips'} , -{'id': 142, 'name': 'skateboard'} , -{'id': 44, 'name': 'slippers'} , -{'id': 4, 'name': 'hat'} , -{'id': 178, 'name': 'suitcase'} , -{'id': 24, 'name': 'tv'} , -{'id': 119, 'name': 'train'} , -{'id': 82, 'name': 'power outlet'} , -{'id': 245, 'name': 'swing'} , -{'id': 15, 'name': 'book'} , -{'id': 294, 'name': 'jellyfish'} , -{'id': 192, 'name': 'fire extinguisher'} , -{'id': 212, 'name': 'deer'} , -{'id': 181, 'name': 'pear'} , -{'id': 347, 'name': 'table tennis paddle'} , -{'id': 113, 'name': 'trolley'} , -{'id': 91, 'name': 'guitar'} , -{'id': 202, 'name': 'golf club'} , -{'id': 221, 'name': 'wheelchair'} , -{'id': 254, 'name': 'saxophone'} , -{'id': 117, 'name': 'paper towel'} , -{'id': 303, 'name': 'race car'} , -{'id': 240, 'name': 'carriage'} , -{'id': 246, 'name': 'radio'} , -{'id': 318, 'name': 'parrot'} , -{'id': 251, 'name': 'french fries'} , -{'id': 98, 'name': 'dog'} , -{'id': 112, 'name': 'soccer'} , -{'id': 355, 'name': 'french horn'} , -{'id': 79, 'name': 'paddle'} , -{'id': 283, 'name': 'lettuce'} , -{'id': 9, 'name': 'car'} , -{'id': 258, 'name': 'kiwi fruit'} , -{'id': 325, 
'name': 'llama'} , -{'id': 187, 'name': 'billiards'} , -{'id': 210, 'name': 'facial cleanser'} , -{'id': 81, 'name': 'cow'} , -{'id': 331, 'name': 'microscope'} , -{'id': 148, 'name': 'lemon'} , -{'id': 302, 'name': 'pomelo'} , -{'id': 85, 'name': 'fork'} , -{'id': 154, 'name': 'pumpkin'} , -{'id': 289, 'name': 'shrimp'} , -{'id': 71, 'name': 'teddy bear'} , -{'id': 184, 'name': 'potato'} , -{'id': 102, 'name': 'air conditioner'} , -{'id': 208, 'name': 'hot dog'} , -{'id': 222, 'name': 'plum'} , -{'id': 316, 'name': 'spring rolls'} , -{'id': 230, 'name': 'crane'} , -{'id': 149, 'name': 'liquid soap'} , -{'id': 55, 'name': 'canned'} , -{'id': 35, 'name': 'speaker'} , -{'id': 108, 'name': 'banana'} , -{'id': 297, 'name': 'treadmill'} , -{'id': 99, 'name': 'spoon'} , -{'id': 104, 'name': 'mouse'} , -{'id': 182, 'name': 'american football'} , -{'id': 299, 'name': 'egg tart'} , -{'id': 127, 'name': 'cleaning products'} , -{'id': 313, 'name': 'urinal'} , -{'id': 286, 'name': 'medal'} , -{'id': 239, 'name': 'brush'} , -{'id': 96, 'name': 'hockey'} , -{'id': 279, 'name': 'dumbbell'} , -{'id': 32, 'name': 'umbrella'} , -{'id': 272, 'name': 'hammer'} , -{'id': 16, 'name': 'plate'} , -{'id': 21, 'name': 'potted plant'} , -{'id': 242, 'name': 'earphone'} , -{'id': 70, 'name': 'candle'} , -{'id': 185, 'name': 'paint brush'} , -{'id': 48, 'name': 'toy'} , -{'id': 130, 'name': 'pizza'} , -{'id': 255, 'name': 'trumpet'} , -{'id': 361, 'name': 'hotair balloon'} , -{'id': 188, 'name': 'fire hydrant'} , -{'id': 50, 'name': 'bed'} , -{'id': 253, 'name': 'avocado'} , -{'id': 293, 'name': 'coconut'} , -{'id': 257, 'name': 'cue'} , -{'id': 280, 'name': 'hamimelon'} , -{'id': 66, 'name': 'horse'} , -{'id': 173, 'name': 'pigeon'} , -{'id': 190, 'name': 'projector'} , -{'id': 69, 'name': 'camera'} , -{'id': 30, 'name': 'bowl'} , -{'id': 269, 'name': 'broom'} , -{'id': 343, 'name': 'pitaya'} , -{'id': 305, 'name': 'tuba'} , -{'id': 309, 'name': 'green onion'} , -{'id': 363, 'name': 'lobster'} , -{'id': 225, 'name': 'watermelon'} , -{'id': 47, 'name': 'suv'} , -{'id': 31, 'name': 'dining table'} , -{'id': 54, 'name': 'sandals'} , -{'id': 351, 'name': 'monkey'} , -{'id': 218, 'name': 'onion'} , -{'id': 36, 'name': 'trash bin/can'} , -{'id': 20, 'name': 'glove'} , -{'id': 277, 'name': 'rice'} , -{'id': 152, 'name': 'sports car'} , -{'id': 360, 'name': 'target'} , -{'id': 205, 'name': 'blender'} , -{'id': 19, 'name': 'pillow'} , -{'id': 72, 'name': 'cake'} , -{'id': 93, 'name': 'tea pot'} , -{'id': 353, 'name': 'game board'} , -{'id': 38, 'name': 'backpack'} , -{'id': 356, 'name': 'ambulance'} , -{'id': 146, 'name': 'life saver'} , -{'id': 189, 'name': 'goose'} , -{'id': 278, 'name': 'tape measure/ruler'} , -{'id': 92, 'name': 'traffic cone'} , -{'id': 134, 'name': 'toiletries'} , -{'id': 114, 'name': 'oven'} , -{'id': 317, 'name': 'tortoise/turtle'} , -{'id': 265, 'name': 'corn'} , -{'id': 126, 'name': 'donut'} , -{'id': 57, 'name': 'mirror'} , -{'id': 7, 'name': 'cabinet/shelf'} , -{'id': 263, 'name': 'green vegetables'} , -{'id': 159, 'name': 'tissue '} , -{'id': 321, 'name': 'shark'} , -{'id': 301, 'name': 'pig'} , -{'id': 41, 'name': 'carpet'} , -{'id': 304, 'name': 'rice cooker'} , -{'id': 323, 'name': 'poker card'} , -] - -def _get_builtin_metadata(version): - if version == 'v1': - id_to_name = {x['id']: x['name'] for x in categories_v1} - else: - assert 0, version - thing_dataset_id_to_contiguous_id = {i + 1: i for i in range(365)} - thing_classes = [id_to_name[k] for k in sorted(id_to_name)] - return { - 
"thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes} - -_PREDEFINED_SPLITS_OBJECTS365 = { - "objects365_train": ("objects365/train", "objects365/annotations/objects365_train.json"), - "objects365_val": ("objects365/val", "objects365/annotations/objects365_val.json"), -} - -for key, (image_root, json_file) in _PREDEFINED_SPLITS_OBJECTS365.items(): - register_coco_instances( - key, - _get_builtin_metadata('v1'), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/dla.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/dla.py deleted file mode 100644 index 9f15f840355571b6d02d5534fa8a9b6b8cb22c70..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/dla.py +++ /dev/null @@ -1,479 +0,0 @@ -import numpy as np -import math -from os.path import join -import fvcore.nn.weight_init as weight_init -import torch -import torch.nn.functional as F -from torch import nn -import torch.utils.model_zoo as model_zoo - -from detectron2.modeling.backbone.resnet import ( - BasicStem, BottleneckBlock, DeformBottleneckBlock) -from detectron2.layers import ( - Conv2d, - DeformConv, - FrozenBatchNorm2d, - ModulatedDeformConv, - ShapeSpec, - get_norm, -) - -from detectron2.modeling.backbone.backbone import Backbone -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from detectron2.modeling.backbone.fpn import FPN - -__all__ = [ - "BottleneckBlock", - "DeformBottleneckBlock", - "BasicStem", -] - -DCNV1 = False - -HASH = { - 34: 'ba72cf86', - 60: '24839fc4', -} - -def get_model_url(data, name, hash): - return join('http://dl.yf.io/dla/models', data, '{}-{}.pth'.format(name, hash)) - -class BasicBlock(nn.Module): - def __init__(self, inplanes, planes, stride=1, dilation=1, norm='BN'): - super(BasicBlock, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=3, - stride=stride, padding=dilation, - bias=False, dilation=dilation) - self.bn1 = get_norm(norm, planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, - stride=1, padding=dilation, - bias=False, dilation=dilation) - self.bn2 = get_norm(norm, planes) - self.stride = stride - - def forward(self, x, residual=None): - if residual is None: - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - out += residual - out = self.relu(out) - - return out - -class Bottleneck(nn.Module): - expansion = 2 - - def __init__(self, inplanes, planes, stride=1, dilation=1, norm='BN'): - super(Bottleneck, self).__init__() - expansion = Bottleneck.expansion - bottle_planes = planes // expansion - self.conv1 = nn.Conv2d(inplanes, bottle_planes, - kernel_size=1, bias=False) - self.bn1 = get_norm(norm, bottle_planes) - self.conv2 = nn.Conv2d(bottle_planes, bottle_planes, kernel_size=3, - stride=stride, padding=dilation, - bias=False, dilation=dilation) - self.bn2 = get_norm(norm, bottle_planes) - self.conv3 = nn.Conv2d(bottle_planes, planes, - kernel_size=1, bias=False) - self.bn3 = get_norm(norm, planes) - self.relu = nn.ReLU(inplace=True) - self.stride = stride - - def forward(self, x, 
residual=None): - if residual is None: - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - out += residual - out = self.relu(out) - - return out - -class Root(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, residual, norm='BN'): - super(Root, self).__init__() - self.conv = nn.Conv2d( - in_channels, out_channels, 1, - stride=1, bias=False, padding=(kernel_size - 1) // 2) - self.bn = get_norm(norm, out_channels) - self.relu = nn.ReLU(inplace=True) - self.residual = residual - - def forward(self, *x): - children = x - x = self.conv(torch.cat(x, 1)) - x = self.bn(x) - if self.residual: - x += children[0] - x = self.relu(x) - - return x - - -class Tree(nn.Module): - def __init__(self, levels, block, in_channels, out_channels, stride=1, - level_root=False, root_dim=0, root_kernel_size=1, - dilation=1, root_residual=False, norm='BN'): - super(Tree, self).__init__() - if root_dim == 0: - root_dim = 2 * out_channels - if level_root: - root_dim += in_channels - if levels == 1: - self.tree1 = block(in_channels, out_channels, stride, - dilation=dilation, norm=norm) - self.tree2 = block(out_channels, out_channels, 1, - dilation=dilation, norm=norm) - else: - self.tree1 = Tree(levels - 1, block, in_channels, out_channels, - stride, root_dim=0, - root_kernel_size=root_kernel_size, - dilation=dilation, root_residual=root_residual, - norm=norm) - self.tree2 = Tree(levels - 1, block, out_channels, out_channels, - root_dim=root_dim + out_channels, - root_kernel_size=root_kernel_size, - dilation=dilation, root_residual=root_residual, - norm=norm) - if levels == 1: - self.root = Root(root_dim, out_channels, root_kernel_size, - root_residual, norm=norm) - self.level_root = level_root - self.root_dim = root_dim - self.downsample = None - self.project = None - self.levels = levels - if stride > 1: - self.downsample = nn.MaxPool2d(stride, stride=stride) - if in_channels != out_channels: - self.project = nn.Sequential( - nn.Conv2d(in_channels, out_channels, - kernel_size=1, stride=1, bias=False), - get_norm(norm, out_channels) - ) - - def forward(self, x, residual=None, children=None): - children = [] if children is None else children - bottom = self.downsample(x) if self.downsample else x - residual = self.project(bottom) if self.project else bottom - if self.level_root: - children.append(bottom) - x1 = self.tree1(x, residual) - if self.levels == 1: - x2 = self.tree2(x1) - x = self.root(x2, x1, *children) - else: - children.append(x1) - x = self.tree2(x1, children=children) - return x - -class DLA(nn.Module): - def __init__(self, num_layers, levels, channels, - block=BasicBlock, residual_root=False, norm='BN'): - """ - Args: - """ - super(DLA, self).__init__() - self.norm = norm - self.channels = channels - self.base_layer = nn.Sequential( - nn.Conv2d(3, channels[0], kernel_size=7, stride=1, - padding=3, bias=False), - get_norm(self.norm, channels[0]), - nn.ReLU(inplace=True)) - self.level0 = self._make_conv_level( - channels[0], channels[0], levels[0]) - self.level1 = self._make_conv_level( - channels[0], channels[1], levels[1], stride=2) - self.level2 = Tree(levels[2], block, channels[1], channels[2], 2, - level_root=False, - root_residual=residual_root, norm=norm) - self.level3 = Tree(levels[3], block, channels[2], channels[3], 2, - level_root=True, root_residual=residual_root, - norm=norm) - self.level4 = Tree(levels[4], block, 
channels[3], channels[4], 2, - level_root=True, root_residual=residual_root, - norm=norm) - self.level5 = Tree(levels[5], block, channels[4], channels[5], 2, - level_root=True, root_residual=residual_root, - norm=norm) - self.load_pretrained_model( - data='imagenet', name='dla{}'.format(num_layers), - hash=HASH[num_layers]) - - def load_pretrained_model(self, data, name, hash): - model_url = get_model_url(data, name, hash) - model_weights = model_zoo.load_url(model_url) - num_classes = len(model_weights[list(model_weights.keys())[-1]]) - self.fc = nn.Conv2d( - self.channels[-1], num_classes, - kernel_size=1, stride=1, padding=0, bias=True) - print('Loading pretrained') - self.load_state_dict(model_weights, strict=False) - - def _make_conv_level(self, inplanes, planes, convs, stride=1, dilation=1): - modules = [] - for i in range(convs): - modules.extend([ - nn.Conv2d(inplanes, planes, kernel_size=3, - stride=stride if i == 0 else 1, - padding=dilation, bias=False, dilation=dilation), - get_norm(self.norm, planes), - nn.ReLU(inplace=True)]) - inplanes = planes - return nn.Sequential(*modules) - - def forward(self, x): - y = [] - x = self.base_layer(x) - for i in range(6): - x = getattr(self, 'level{}'.format(i))(x) - y.append(x) - return y - - -def fill_up_weights(up): - w = up.weight.data - f = math.ceil(w.size(2) / 2) - c = (2 * f - 1 - f % 2) / (2. * f) - for i in range(w.size(2)): - for j in range(w.size(3)): - w[0, 0, i, j] = \ - (1 - math.fabs(i / f - c)) * (1 - math.fabs(j / f - c)) - for c in range(1, w.size(0)): - w[c, 0, :, :] = w[0, 0, :, :] - - -class _DeformConv(nn.Module): - def __init__(self, chi, cho, norm='BN'): - super(_DeformConv, self).__init__() - self.actf = nn.Sequential( - get_norm(norm, cho), - nn.ReLU(inplace=True) - ) - if DCNV1: - self.offset = Conv2d( - chi, 18, kernel_size=3, stride=1, - padding=1, dilation=1) - self.conv = DeformConv( - chi, cho, kernel_size=(3,3), stride=1, padding=1, - dilation=1, deformable_groups=1) - else: - self.offset = Conv2d( - chi, 27, kernel_size=3, stride=1, - padding=1, dilation=1) - self.conv = ModulatedDeformConv( - chi, cho, kernel_size=3, stride=1, padding=1, - dilation=1, deformable_groups=1) - nn.init.constant_(self.offset.weight, 0) - nn.init.constant_(self.offset.bias, 0) - - def forward(self, x): - if DCNV1: - offset = self.offset(x) - x = self.conv(x, offset) - else: - offset_mask = self.offset(x) - offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1) - offset = torch.cat((offset_x, offset_y), dim=1) - mask = mask.sigmoid() - x = self.conv(x, offset, mask) - x = self.actf(x) - return x - - -class IDAUp(nn.Module): - def __init__(self, o, channels, up_f, norm='BN'): - super(IDAUp, self).__init__() - for i in range(1, len(channels)): - c = channels[i] - f = int(up_f[i]) - proj = _DeformConv(c, o, norm=norm) - node = _DeformConv(o, o, norm=norm) - - up = nn.ConvTranspose2d(o, o, f * 2, stride=f, - padding=f // 2, output_padding=0, - groups=o, bias=False) - fill_up_weights(up) - - setattr(self, 'proj_' + str(i), proj) - setattr(self, 'up_' + str(i), up) - setattr(self, 'node_' + str(i), node) - - - def forward(self, layers, startp, endp): - for i in range(startp + 1, endp): - upsample = getattr(self, 'up_' + str(i - startp)) - project = getattr(self, 'proj_' + str(i - startp)) - layers[i] = upsample(project(layers[i])) - node = getattr(self, 'node_' + str(i - startp)) - layers[i] = node(layers[i] + layers[i - 1]) - - -class DLAUp(nn.Module): - def __init__(self, startp, channels, scales, in_channels=None, 
norm='BN'): - super(DLAUp, self).__init__() - self.startp = startp - if in_channels is None: - in_channels = channels - self.channels = channels - channels = list(channels) - scales = np.array(scales, dtype=int) - for i in range(len(channels) - 1): - j = -i - 2 - setattr(self, 'ida_{}'.format(i), - IDAUp(channels[j], in_channels[j:], - scales[j:] // scales[j], norm=norm)) - scales[j + 1:] = scales[j] - in_channels[j + 1:] = [channels[j] for _ in channels[j + 1:]] - - def forward(self, layers): - out = [layers[-1]] # start with 32 - for i in range(len(layers) - self.startp - 1): - ida = getattr(self, 'ida_{}'.format(i)) - ida(layers, len(layers) -i - 2, len(layers)) - out.insert(0, layers[-1]) - return out - -DLA_CONFIGS = { - 34: ([1, 1, 1, 2, 2, 1], [16, 32, 64, 128, 256, 512], BasicBlock), - 60: ([1, 1, 1, 2, 3, 1], [16, 32, 128, 256, 512, 1024], Bottleneck) -} - - -class DLASeg(Backbone): - def __init__(self, num_layers, out_features, use_dla_up=True, - ms_output=False, norm='BN'): - super(DLASeg, self).__init__() - # depth = 34 - levels, channels, Block = DLA_CONFIGS[num_layers] - self.base = DLA(num_layers=num_layers, - levels=levels, channels=channels, block=Block, norm=norm) - down_ratio = 4 - self.first_level = int(np.log2(down_ratio)) - self.ms_output = ms_output - self.last_level = 5 if not self.ms_output else 6 - channels = self.base.channels - scales = [2 ** i for i in range(len(channels[self.first_level:]))] - self.use_dla_up = use_dla_up - if self.use_dla_up: - self.dla_up = DLAUp( - self.first_level, channels[self.first_level:], scales, - norm=norm) - out_channel = channels[self.first_level] - if not self.ms_output: # stride 4 DLA - self.ida_up = IDAUp( - out_channel, channels[self.first_level:self.last_level], - [2 ** i for i in range(self.last_level - self.first_level)], - norm=norm) - self._out_features = out_features - self._out_feature_channels = { - 'dla{}'.format(i): channels[i] for i in range(6)} - self._out_feature_strides = { - 'dla{}'.format(i): 2 ** i for i in range(6)} - self._size_divisibility = 32 - - @property - def size_divisibility(self): - return self._size_divisibility - - def forward(self, x): - x = self.base(x) - if self.use_dla_up: - x = self.dla_up(x) - if not self.ms_output: # stride 4 dla - y = [] - for i in range(self.last_level - self.first_level): - y.append(x[i].clone()) - self.ida_up(y, 0, len(y)) - ret = {} - for i in range(self.last_level - self.first_level): - out_feature = 'dla{}'.format(i) - if out_feature in self._out_features: - ret[out_feature] = y[i] - else: - ret = {} - st = self.first_level if self.use_dla_up else 0 - for i in range(self.last_level - st): - out_feature = 'dla{}'.format(i + st) - if out_feature in self._out_features: - ret[out_feature] = x[i] - - return ret - - -@BACKBONE_REGISTRY.register() -def build_dla_backbone(cfg, input_shape): - """ - Create a ResNet instance from config. - - Returns: - ResNet: a :class:`ResNet` instance. - """ - return DLASeg( - out_features=cfg.MODEL.DLA.OUT_FEATURES, - num_layers=cfg.MODEL.DLA.NUM_LAYERS, - use_dla_up=cfg.MODEL.DLA.USE_DLA_UP, - ms_output=cfg.MODEL.DLA.MS_OUTPUT, - norm=cfg.MODEL.DLA.NORM) - -class LastLevelP6P7(nn.Module): - """ - This module is used in RetinaNet to generate extra layers, P6 and P7 from - C5 feature. 
- """ - - def __init__(self, in_channels, out_channels): - super().__init__() - self.num_levels = 2 - self.in_feature = "dla5" - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - for module in [self.p6, self.p7]: - weight_init.c2_xavier_fill(module) - - def forward(self, c5): - p6 = self.p6(c5) - p7 = self.p7(F.relu(p6)) - return [p6, p7] - -@BACKBONE_REGISTRY.register() -def build_retinanet_dla_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_dla_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - in_channels_p6p7 = bottom_up.output_shape()['dla5'].channels - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7(in_channels_p6p7, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/analyze_model.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/analyze_model.py deleted file mode 100644 index 8e38f8b71eb3b8d1e2b670e7f01a796ec2ea4b7e..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/analyze_model.py +++ /dev/null @@ -1,159 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -from collections import Counter -import tqdm -from fvcore.nn import flop_count_table # can also try flop_count_str - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import CfgNode, LazyConfig, get_cfg, instantiate -from detectron2.data import build_detection_test_loader -from detectron2.engine import default_argument_parser -from detectron2.modeling import build_model -from detectron2.utils.analysis import ( - FlopCountAnalysis, - activation_count_operators, - parameter_count_table, -) -from detectron2.utils.logger import setup_logger - -logger = logging.getLogger("detectron2") - - -def setup(args): - if args.config_file.endswith(".yaml"): - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.DATALOADER.NUM_WORKERS = 0 - cfg.merge_from_list(args.opts) - cfg.freeze() - else: - cfg = LazyConfig.load(args.config_file) - cfg = LazyConfig.apply_overrides(cfg, args.opts) - setup_logger(name="fvcore") - setup_logger() - return cfg - - -def do_flop(cfg): - if isinstance(cfg, CfgNode): - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - else: - data_loader = instantiate(cfg.dataloader.test) - model = instantiate(cfg.model) - model.to(cfg.train.device) - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - model.eval() - - counts = Counter() - total_flops = [] - for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa - flops = FlopCountAnalysis(model, data) - if idx > 0: - flops.unsupported_ops_warnings(False).uncalled_modules_warnings(False) - counts += flops.by_operator() - total_flops.append(flops.total()) - - logger.info("Flops table computed from only one input sample:\n" + flop_count_table(flops)) - logger.info( - "Average GFlops for each type of operators:\n" - + str([(k, v / (idx + 1) / 1e9) for k, 
v in counts.items()]) - ) - logger.info( - "Total GFlops: {:.1f}±{:.1f}".format(np.mean(total_flops) / 1e9, np.std(total_flops) / 1e9) - ) - - -def do_activation(cfg): - if isinstance(cfg, CfgNode): - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - else: - data_loader = instantiate(cfg.dataloader.test) - model = instantiate(cfg.model) - model.to(cfg.train.device) - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - model.eval() - - counts = Counter() - total_activations = [] - for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa - count = activation_count_operators(model, data) - counts += count - total_activations.append(sum(count.values())) - logger.info( - "(Million) Activations for Each Type of Operators:\n" - + str([(k, v / idx) for k, v in counts.items()]) - ) - logger.info( - "Total (Million) Activations: {}±{}".format( - np.mean(total_activations), np.std(total_activations) - ) - ) - - -def do_parameter(cfg): - if isinstance(cfg, CfgNode): - model = build_model(cfg) - else: - model = instantiate(cfg.model) - logger.info("Parameter Count:\n" + parameter_count_table(model, max_depth=5)) - - -def do_structure(cfg): - if isinstance(cfg, CfgNode): - model = build_model(cfg) - else: - model = instantiate(cfg.model) - logger.info("Model Structure:\n" + str(model)) - - -if __name__ == "__main__": - parser = default_argument_parser( - epilog=""" -Examples: - -To show parameters of a model: -$ ./analyze_model.py --tasks parameter \\ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml - -Flops and activations are data-dependent, therefore inputs and model weights -are needed to count them: - -$ ./analyze_model.py --num-inputs 100 --tasks flop \\ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \\ - MODEL.WEIGHTS /path/to/model.pkl -""" - ) - parser.add_argument( - "--tasks", - choices=["flop", "activation", "parameter", "structure"], - required=True, - nargs="+", - ) - parser.add_argument( - "-n", - "--num-inputs", - default=100, - type=int, - help="number of inputs used to compute statistics for flops/activations, " - "both are data dependent.", - ) - args = parser.parse_args() - assert not args.eval_only - assert args.num_gpus == 1 - - cfg = setup(args) - - for task in args.tasks: - { - "flop": do_flop, - "activation": do_activation, - "parameter": do_parameter, - "structure": do_structure, - }[task](cfg) diff --git a/spaces/PAIR/PAIR-Diffusion/cldm/logger.py b/spaces/PAIR/PAIR-Diffusion/cldm/logger.py deleted file mode 100644 index fd2798af63e43c8d048c043aa83dc140925e2dea..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/cldm/logger.py +++ /dev/null @@ -1,233 +0,0 @@ -import os - -import numpy as np -import torch -import torchvision -from PIL import Image -from pytorch_lightning.callbacks import Callback -import pytorch_lightning as pl -from pytorch_lightning.utilities.distributed import rank_zero_only -from omegaconf import OmegaConf - -# class ImageLogger(Callback): -# def __init__(self, batch_frequency=2000, max_images=4, clamp=True, increase_log_steps=True, -# rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False, -# log_images_kwargs=None): -# super().__init__() -# self.rescale = rescale -# self.batch_freq = batch_frequency -# self.max_images = max_images -# if not increase_log_steps: -# self.log_steps = [self.batch_freq] -# self.clamp = clamp -# 
self.disabled = disabled -# self.log_on_batch_idx = log_on_batch_idx -# self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {} -# self.log_first_step = log_first_step - -# @rank_zero_only -# def log_local(self, save_dir, split, images, global_step, current_epoch, batch_idx): -# root = os.path.join(save_dir, "image_log", split) -# for k in images: -# grid = torchvision.utils.make_grid(images[k], nrow=4) -# if self.rescale: -# grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w -# grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1) -# grid = grid.numpy() -# grid = (grid * 255).astype(np.uint8) -# filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(k, global_step, current_epoch, batch_idx) -# path = os.path.join(root, filename) -# os.makedirs(os.path.split(path)[0], exist_ok=True) -# Image.fromarray(grid).save(path) - -# def log_img(self, pl_module, batch, batch_idx, split="train"): -# check_idx = batch_idx # if self.log_on_batch_idx else pl_module.global_step -# if (self.check_frequency(check_idx) and # batch_idx % self.batch_freq == 0 -# hasattr(pl_module, "log_images") and -# callable(pl_module.log_images) and -# self.max_images > 0): -# logger = type(pl_module.logger) - -# is_train = pl_module.training -# if is_train: -# pl_module.eval() - -# with torch.no_grad(): -# images = pl_module.log_images(batch, split=split, **self.log_images_kwargs) - -# for k in images: -# N = min(images[k].shape[0], self.max_images) -# images[k] = images[k][:N] -# if isinstance(images[k], torch.Tensor): -# images[k] = images[k].detach().cpu() -# if self.clamp: -# images[k] = torch.clamp(images[k], -1., 1.) - -# self.log_local(pl_module.logger.save_dir, split, images, -# pl_module.global_step, pl_module.current_epoch, batch_idx) - -# if is_train: -# pl_module.train() - -# def check_frequency(self, check_idx): -# return check_idx % self.batch_freq == 0 - -# def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): -# if not self.disabled: -# self.log_img(pl_module, batch, batch_idx, split="train") - - -class SetupCallback(Callback): - def __init__(self, resume, now, logdir, ckptdir, cfgdir, config, lightning_config): - super().__init__() - self.resume = resume - self.now = now - self.logdir = logdir - self.ckptdir = ckptdir - self.cfgdir = cfgdir - self.config = config - self.lightning_config = lightning_config - - def on_keyboard_interrupt(self, trainer, pl_module): - if trainer.global_rank == 0: - print("Summoning checkpoint.") - ckpt_path = os.path.join(self.ckptdir, "last.ckpt") - trainer.save_checkpoint(ckpt_path) - - def on_pretrain_routine_start(self, trainer, pl_module): - if trainer.global_rank == 0: - # Create logdirs and save configs - os.makedirs(self.logdir, exist_ok=True) - os.makedirs(self.ckptdir, exist_ok=True) - os.makedirs(self.cfgdir, exist_ok=True) - - if "callbacks" in self.lightning_config: - if 'metrics_over_trainsteps_checkpoint' in self.lightning_config['callbacks']: - os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True) - print("Project config") - print(OmegaConf.to_yaml(self.config)) - OmegaConf.save(self.config, - os.path.join(self.cfgdir, "{}-project.yaml".format(self.now))) - - print("Lightning config") - print(OmegaConf.to_yaml(self.lightning_config)) - OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}), - os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now))) - - else: - # ModelCheckpoint callback created log directory --- remove it - if not self.resume and 
os.path.exists(self.logdir): - dst, name = os.path.split(self.logdir) - dst = os.path.join(dst, "child_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - try: - os.rename(self.logdir, dst) - except FileNotFoundError: - pass - - -class ImageLogger(Callback): - def __init__(self, batch_frequency, max_images, clamp=True, increase_log_steps=True, - rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False, - log_images_kwargs=None): - super().__init__() - self.rescale = rescale - self.batch_freq = batch_frequency - self.max_images = max_images - self.logger_log_images = { - pl.loggers.TestTubeLogger: self._testtube, - } - self.log_steps = [2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)] - if not increase_log_steps: - self.log_steps = [self.batch_freq] - self.clamp = clamp - self.disabled = disabled - self.log_on_batch_idx = log_on_batch_idx - self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {} - self.log_first_step = log_first_step - - @rank_zero_only - def _testtube(self, pl_module, images, batch_idx, split): - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w - - tag = f"{split}/{k}" - pl_module.logger.experiment.add_image( - tag, grid, - global_step=pl_module.global_step) - - @rank_zero_only - def log_local(self, save_dir, split, images, - global_step, current_epoch, batch_idx): - root = os.path.join(save_dir, "images", split) - for k in images: - grid = torchvision.utils.make_grid(images[k], nrow=4) - if self.rescale: - grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w - grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1) - grid = grid.numpy() - grid = (grid * 255).astype(np.uint8) - filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format( - k, - global_step, - current_epoch, - batch_idx) - path = os.path.join(root, filename) - os.makedirs(os.path.split(path)[0], exist_ok=True) - Image.fromarray(grid).save(path) - - def log_img(self, pl_module, batch, batch_idx, split="train"): - check_idx = batch_idx if self.log_on_batch_idx else pl_module.global_step - if (self.check_frequency(check_idx) and # batch_idx % self.batch_freq == 0 - hasattr(pl_module, "log_images") and - callable(pl_module.log_images) and - self.max_images > 0): - logger = type(pl_module.logger) - - is_train = pl_module.training - if is_train: - pl_module.eval() - - with torch.no_grad(): - images = pl_module.log_images(batch, split=split, **self.log_images_kwargs) - - for k in images: - N = min(images[k].shape[0], self.max_images) - images[k] = images[k][:N] - if isinstance(images[k], torch.Tensor): - images[k] = images[k].detach().cpu() - if self.clamp: - images[k] = torch.clamp(images[k], -1., 1.) 
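            # The clamp above pins every logged tensor to [-1, 1]; the
            # log_local() call below (when rescale=True) and the TestTube
            # logger hook map that range back to [0, 1] via (grid + 1) / 2
            # before images are saved as PNGs or sent to TensorBoard.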
- - self.log_local(pl_module.logger.save_dir, split, images, - pl_module.global_step, pl_module.current_epoch, batch_idx) - - logger_log_images = self.logger_log_images.get(logger, lambda *args, **kwargs: None) - logger_log_images(pl_module, images, pl_module.global_step, split) - - if is_train: - pl_module.train() - - def check_frequency(self, check_idx): - if ((check_idx % self.batch_freq) == 0 or (check_idx in self.log_steps)) and ( - check_idx > 0 or self.log_first_step): - try: - self.log_steps.pop(0) - except IndexError as e: - print(e) - pass - return True - return False - - def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - if not self.disabled and (pl_module.global_step > 0 or self.log_first_step): - self.log_img(pl_module, batch, batch_idx, split="train") - - def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - # if not self.disabled and pl_module.global_step > 0: - # self.log_img(pl_module, batch, batch_idx, split="val") - # if hasattr(pl_module, 'calibrate_grad_norm'): - # if (pl_module.calibrate_grad_norm and batch_idx % 25 == 0) and batch_idx > 0: - # self.log_gradients(trainer, pl_module, batch_idx=batch_idx) - pass \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/pixel_group.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/pixel_group.py deleted file mode 100644 index 2143c75f835a467c802fc3c37ecd3ac0f85bcda4..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/pixel_group.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['pixel_group']) - - -def pixel_group(score, mask, embedding, kernel_label, kernel_contour, - kernel_region_num, distance_threshold): - """Group pixels into text instances, which is widely used text detection - methods. - - Arguments: - score (np.array or Tensor): The foreground score with size hxw. - mask (np.array or Tensor): The foreground mask with size hxw. - embedding (np.array or Tensor): The embedding with size hxwxc to - distinguish instances. - kernel_label (np.array or Tensor): The instance kernel index with - size hxw. - kernel_contour (np.array or Tensor): The kernel contour with size hxw. - kernel_region_num (int): The instance kernel region number. - distance_threshold (float): The embedding distance threshold between - kernel and pixel in one instance. - - Returns: - pixel_assignment (List[List[float]]): The instance coordinate list. - Each element consists of averaged confidence, pixel number, and - coordinates (x_i, y_i for all pixels) in order. 
- """ - assert isinstance(score, (torch.Tensor, np.ndarray)) - assert isinstance(mask, (torch.Tensor, np.ndarray)) - assert isinstance(embedding, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_label, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_contour, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_region_num, int) - assert isinstance(distance_threshold, float) - - if isinstance(score, np.ndarray): - score = torch.from_numpy(score) - if isinstance(mask, np.ndarray): - mask = torch.from_numpy(mask) - if isinstance(embedding, np.ndarray): - embedding = torch.from_numpy(embedding) - if isinstance(kernel_label, np.ndarray): - kernel_label = torch.from_numpy(kernel_label) - if isinstance(kernel_contour, np.ndarray): - kernel_contour = torch.from_numpy(kernel_contour) - - if torch.__version__ == 'parrots': - label = ext_module.pixel_group( - score, - mask, - embedding, - kernel_label, - kernel_contour, - kernel_region_num=kernel_region_num, - distance_threshold=distance_threshold) - label = label.tolist() - label = label[0] - list_index = kernel_region_num - pixel_assignment = [] - for x in range(kernel_region_num): - pixel_assignment.append( - np.array( - label[list_index:list_index + int(label[x])], - dtype=np.float)) - list_index = list_index + int(label[x]) - else: - pixel_assignment = ext_module.pixel_group(score, mask, embedding, - kernel_label, kernel_contour, - kernel_region_num, - distance_threshold) - return pixel_assignment diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/dm_head.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/dm_head.py deleted file mode 100644 index 19c963923126b53ce22f60813540a35badf24b3d..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/dm_head.py +++ /dev/null @@ -1,140 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule, build_activation_layer, build_norm_layer - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class DCM(nn.Module): - """Dynamic Convolutional Module used in DMNet. - - Args: - filter_size (int): The filter size of generated convolution kernel - used in Dynamic Convolutional Module. - fusion (bool): Add one conv to fuse DCM output feature. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict): Config of activation layers. 
- """ - - def __init__(self, filter_size, fusion, in_channels, channels, conv_cfg, - norm_cfg, act_cfg): - super(DCM, self).__init__() - self.filter_size = filter_size - self.fusion = fusion - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.filter_gen_conv = nn.Conv2d(self.in_channels, self.channels, 1, 1, - 0) - - self.input_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - if self.norm_cfg is not None: - self.norm = build_norm_layer(self.norm_cfg, self.channels)[1] - else: - self.norm = None - self.activate = build_activation_layer(self.act_cfg) - - if self.fusion: - self.fusion_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, x): - """Forward function.""" - generated_filter = self.filter_gen_conv( - F.adaptive_avg_pool2d(x, self.filter_size)) - x = self.input_redu_conv(x) - b, c, h, w = x.shape - # [1, b * c, h, w], c = self.channels - x = x.view(1, b * c, h, w) - # [b * c, 1, filter_size, filter_size] - generated_filter = generated_filter.view(b * c, 1, self.filter_size, - self.filter_size) - pad = (self.filter_size - 1) // 2 - if (self.filter_size - 1) % 2 == 0: - p2d = (pad, pad, pad, pad) - else: - p2d = (pad + 1, pad, pad + 1, pad) - x = F.pad(input=x, pad=p2d, mode='constant', value=0) - # [1, b * c, h, w] - output = F.conv2d(input=x, weight=generated_filter, groups=b * c) - # [b, c, h, w] - output = output.view(b, c, h, w) - if self.norm is not None: - output = self.norm(output) - output = self.activate(output) - - if self.fusion: - output = self.fusion_conv(output) - - return output - - -@HEADS.register_module() -class DMHead(BaseDecodeHead): - """Dynamic Multi-scale Filters for Semantic Segmentation. - - This head is the implementation of - `DMNet `_. - - Args: - filter_sizes (tuple[int]): The size of generated convolutional filters - used in Dynamic Convolutional Module. Default: (1, 3, 5, 7). - fusion (bool): Add one conv to fuse DCM output feature. 
- """ - - def __init__(self, filter_sizes=(1, 3, 5, 7), fusion=False, **kwargs): - super(DMHead, self).__init__(**kwargs) - assert isinstance(filter_sizes, (list, tuple)) - self.filter_sizes = filter_sizes - self.fusion = fusion - dcm_modules = [] - for filter_size in self.filter_sizes: - dcm_modules.append( - DCM(filter_size, - self.fusion, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.dcm_modules = nn.ModuleList(dcm_modules) - self.bottleneck = ConvModule( - self.in_channels + len(filter_sizes) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - dcm_outs = [x] - for dcm_module in self.dcm_modules: - dcm_outs.append(dcm_module(x)) - dcm_outs = torch.cat(dcm_outs, dim=1) - output = self.bottleneck(dcm_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/Paaz/gpt2-lyrics/style.css b/spaces/Paaz/gpt2-lyrics/style.css deleted file mode 100644 index 00f89aa902e0b52ca76e7a3e0679172790f9568c..0000000000000000000000000000000000000000 --- a/spaces/Paaz/gpt2-lyrics/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/bytevectors.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/bytevectors.go deleted file mode 100644 index 76fb555b727336762d172abf3d91380984411e5d..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/bytevectors.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-111.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-111.go deleted file mode 100644 index a3194cd912525493d9a021f7c7d61737277f77ed..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-111.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/modal-transforms.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/modal-transforms.go deleted file mode 100644 index 2ef44a69600f29cc6a823b97356b4f84aa3aee2f..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/modal-transforms.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/model_fine.py b/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/model_fine.py deleted file mode 100644 index 6179a851319692b10df0d69b00910ad36cee8685..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/model_fine.py +++ /dev/null @@ -1,149 +0,0 @@ -""" -Much of this code is adapted from Andrej Karpathy's NanoGPT -(https://github.com/karpathy/nanoGPT) -""" -from dataclasses import dataclass -import math - -import torch -import torch.nn as nn -from torch.nn import functional as F - -from .model import GPT, GPTConfig, MLP - - -class NonCausalSelfAttention(nn.Module): - def __init__(self, config): - super().__init__() - assert config.n_embd % config.n_head == 0 - # key, query, value projections for all heads, but in a batch - self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, 
bias=config.bias) - # output projection - self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.bias) - # regularization - self.attn_dropout = nn.Dropout(config.dropout) - self.resid_dropout = nn.Dropout(config.dropout) - self.n_head = config.n_head - self.n_embd = config.n_embd - self.dropout = config.dropout - # flash attention make GPU go brrrrr but support is only in PyTorch nightly and still a bit scary - self.flash = ( - hasattr(torch.nn.functional, "scaled_dot_product_attention") and self.dropout == 0.0 - ) - - def forward(self, x): - B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd) - - # calculate query, key, values for all heads in batch and move head forward to be the batch dim - q, k, v = self.c_attn(x).split(self.n_embd, dim=2) - k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - - # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T) - if self.flash: - # efficient attention using Flash Attention CUDA kernels - y = torch.nn.functional.scaled_dot_product_attention( - q, k, v, attn_mask=None, dropout_p=self.dropout, is_causal=False - ) - else: - # manual implementation of attention - att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))) - att = F.softmax(att, dim=-1) - att = self.attn_dropout(att) - y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs) - y = ( - y.transpose(1, 2).contiguous().view(B, T, C) - ) # re-assemble all head outputs side by side - - # output projection - y = self.resid_dropout(self.c_proj(y)) - return y - - -class FineBlock(nn.Module): - def __init__(self, config): - super().__init__() - self.ln_1 = nn.LayerNorm(config.n_embd) - self.attn = NonCausalSelfAttention(config) - self.ln_2 = nn.LayerNorm(config.n_embd) - self.mlp = MLP(config) - - def forward(self, x): - x = x + self.attn(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class FineGPT(GPT): - def __init__(self, config): - super().__init__(config) - del self.lm_head - self.config = config - self.n_codes_total = config.n_codes_total - self.transformer = nn.ModuleDict( - dict( - wtes=nn.ModuleList( - [ - nn.Embedding(config.input_vocab_size, config.n_embd) - for _ in range(config.n_codes_total) - ] - ), - wpe=nn.Embedding(config.block_size, config.n_embd), - drop=nn.Dropout(config.dropout), - h=nn.ModuleList([FineBlock(config) for _ in range(config.n_layer)]), - ln_f=nn.LayerNorm(config.n_embd), - ) - ) - self.lm_heads = nn.ModuleList( - [ - nn.Linear(config.n_embd, config.output_vocab_size, bias=False) - for _ in range(config.n_codes_given, self.n_codes_total) - ] - ) - for i in range(self.n_codes_total - config.n_codes_given): - self.transformer.wtes[i + 1].weight = self.lm_heads[i].weight - - def forward(self, pred_idx, idx): - device = idx.device - b, t, codes = idx.size() - assert ( - t <= self.config.block_size - ), f"Cannot forward sequence of length {t}, block size is only {self.config.block_size}" - assert pred_idx > 0, "cannot predict 0th codebook" - assert codes == self.n_codes_total, (b, t, codes) - pos = torch.arange(0, t, dtype=torch.long, device=device).unsqueeze(0) # shape (1, t) - - # forward the GPT model itself - tok_embs = [ - wte(idx[:, :, i]).unsqueeze(-1) for i, wte in enumerate(self.transformer.wtes) - ] # token embeddings of shape (b, t, n_embd) - tok_emb = 
torch.cat(tok_embs, dim=-1) - pos_emb = self.transformer.wpe(pos) # position embeddings of shape (1, t, n_embd) - x = tok_emb[:, :, :, : pred_idx + 1].sum(dim=-1) - x = self.transformer.drop(x + pos_emb) - for block in self.transformer.h: - x = block(x) - x = self.transformer.ln_f(x) - logits = self.lm_heads[pred_idx - self.config.n_codes_given](x) - return logits - - def get_num_params(self, non_embedding=True): - """ - Return the number of parameters in the model. - For non-embedding count (default), the position embeddings get subtracted. - The token embeddings would too, except due to the parameter sharing these - params are actually used as weights in the final layer, so we include them. - """ - n_params = sum(p.numel() for p in self.parameters()) - if non_embedding: - for wte in self.transformer.wtes: - n_params -= wte.weight.numel() - n_params -= self.transformer.wpe.weight.numel() - return n_params - - -@dataclass -class FineGPTConfig(GPTConfig): - n_codes_total: int = 8 - n_codes_given: int = 1 diff --git a/spaces/Pengyey/bingo-chuchu/src/components/ui/input.tsx b/spaces/Pengyey/bingo-chuchu/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/Pengyey/bingo-chuchu/src/lib/bots/bing/index.ts b/spaces/Pengyey/bingo-chuchu/src/lib/bots/bing/index.ts deleted file mode 100644 index c75c69f94af8c3db92d4c90d465c219a2af72a4d..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,432 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? 
cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'ActionRequest', - 'Chat', - 'Context', - 'InternalSearchQuery', - 'InternalSearchResult', - 'Disengaged', - 'InternalLoaderMessage', - 'Progress', - 'RenderCardRequest', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('你的 VPS 或代理可能被封禁,如有疑问,请前往 https://github.com/weaigc/bingo 咨询', ErrorCode.BING_IP_FORBIDDEN) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - if (/fetch failed/i.test(message || '')) { - throw new ChatError(errorMsg, ErrorCode.BING_IP_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'TryLater') { - throw new ChatError(errorMsg, ErrorCode.BING_TRY_LATER) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await 
this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) - }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/PhotoPranab/Joeythemonster-anything-midjourney-v-4-1/README.md b/spaces/PhotoPranab/Joeythemonster-anything-midjourney-v-4-1/README.md deleted file mode 100644 index cf31fa376ebc4f713058b1d98bcab4c16e69f88e..0000000000000000000000000000000000000000 --- a/spaces/PhotoPranab/Joeythemonster-anything-midjourney-v-4-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Joeythemonster Anything Midjourney V 4 1 -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups_test.py b/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups_test.py deleted file mode 100644 index f3edf15811b5035ee82f21e54e87b7e87ce413eb..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups_test.py +++ /dev/null @@ -1,138 +0,0 @@ - -import os -import shutil -import hashlib -import time - -LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' -WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' -GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' - -def import_google_drive_backup(): - print("Importing Google Drive backup...") - GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' # change this to your Google Drive path - LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' - WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' - weights_exist = False - files_to_copy = [] - weights_to_copy = [] - - def handle_files(root, files, is_weight_files=False): - for filename in files: - filepath = os.path.join(root, filename) - if filename.endswith('.pth') and is_weight_files: - weights_exist = True - backup_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - else: - backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created folder: {backup_folderpath}', flush=True) - if is_weight_files: - weights_to_copy.append((filepath, backup_filepath)) - else: - files_to_copy.append((filepath, backup_filepath)) - - for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'logs')): - handle_files(root, files) - - for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'weights')): - 
handle_files(root, files, True) - - # Copy files in batches - total_files = len(files_to_copy) - start_time = time.time() - for i, (source, dest) in enumerate(files_to_copy, start=1): - with open(source, 'rb') as src, open(dest, 'wb') as dst: - shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size - # Report progress every 5 seconds or after every 100 files, whichever is less frequent - if time.time() - start_time > 5 or i % 100 == 0: - print(f'\rCopying file {i} of {total_files} ({i * 100 / total_files:.2f}%)', end="") - start_time = time.time() - print(f'\nImported {len(files_to_copy)} files from Google Drive backup') - - # Copy weights in batches - total_weights = len(weights_to_copy) - start_time = time.time() - for i, (source, dest) in enumerate(weights_to_copy, start=1): - with open(source, 'rb') as src, open(dest, 'wb') as dst: - shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size - # Report progress every 5 seconds or after every 100 files, whichever is less frequent - if time.time() - start_time > 5 or i % 100 == 0: - print(f'\rCopying weight file {i} of {total_weights} ({i * 100 / total_weights:.2f}%)', end="") - start_time = time.time() - if weights_exist: - print(f'\nImported {len(weights_to_copy)} weight files') - print("Copied weights from Google Drive backup to local weights folder.") - else: - print("\nNo weights found in Google Drive backup.") - print("Google Drive backup import completed.") - -def backup_files(): - print("\n Starting backup loop...") - last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt') - fully_updated = False # boolean to track if all files are up to date - try: - with open(last_backup_timestamps_path, 'r') as f: - last_backup_timestamps = dict(line.strip().split(':') for line in f) - except: - last_backup_timestamps = {} - - while True: - updated = False - files_to_copy = [] - files_to_delete = [] - - for root, dirs, files in os.walk(LOGS_FOLDER): - for filename in files: - if filename != 'last_backup_timestamps.txt': - filepath = os.path.join(root, filename) - if os.path.isfile(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - backup_folderpath = os.path.dirname(backup_filepath) - - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - - # check if file has changed since last backup - last_backup_timestamp = last_backup_timestamps.get(filepath) - current_timestamp = os.path.getmtime(filepath) - if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp: - files_to_copy.append((filepath, backup_filepath)) # add to list of files to copy - last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp - updated = True - fully_updated = False # if a file is updated, all files are not up to date - - # check if any files were deleted in Colab and delete them from the backup drive - for filepath in list(last_backup_timestamps.keys()): - if not os.path.exists(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - if os.path.exists(backup_filepath): - files_to_delete.append(backup_filepath) # add to list of files to delete - del last_backup_timestamps[filepath] - updated = True - fully_updated = False # if a file is deleted, all files are not up to date - - # Copy files in batches - if files_to_copy: - for source, dest in files_to_copy: - shutil.copy2(source, dest) - 
print(f'Copied or updated {len(files_to_copy)} files') - - # Delete files in batches - if files_to_delete: - for file in files_to_delete: - os.remove(file) - print(f'Deleted {len(files_to_delete)} files') - - if not updated and not fully_updated: - print("Files are up to date.") - fully_updated = True # if all files are up to date, set the boolean to True - copy_weights_folder_to_drive() - - with open(last_backup_timestamps_path, 'w') as f: - for filepath, timestamp in last_backup_timestamps.items(): - f.write(f'{filepath}:{timestamp}\n') - time.sleep(15) # wait for 15 seconds before checking again diff --git a/spaces/Ramse/TTS_Hindi/modules/commons/.ipynb_checkpoints/ssim-checkpoint.py b/spaces/Ramse/TTS_Hindi/modules/commons/.ipynb_checkpoints/ssim-checkpoint.py deleted file mode 100644 index 0d0241f267ef58b24979e022b05f2a9adf768826..0000000000000000000000000000000000000000 --- a/spaces/Ramse/TTS_Hindi/modules/commons/.ipynb_checkpoints/ssim-checkpoint.py +++ /dev/null @@ -1,391 +0,0 @@ -# ''' -# https://github.com/One-sixth/ms_ssim_pytorch/blob/master/ssim.py -# ''' -# -# import torch -# import torch.jit -# import torch.nn.functional as F -# -# -# @torch.jit.script -# def create_window(window_size: int, sigma: float, channel: int): -# ''' -# Create 1-D gauss kernel -# :param window_size: the size of gauss kernel -# :param sigma: sigma of normal distribution -# :param channel: input channel -# :return: 1D kernel -# ''' -# coords = torch.arange(window_size, dtype=torch.float) -# coords -= window_size // 2 -# -# g = torch.exp(-(coords ** 2) / (2 * sigma ** 2)) -# g /= g.sum() -# -# g = g.reshape(1, 1, 1, -1).repeat(channel, 1, 1, 1) -# return g -# -# -# @torch.jit.script -# def _gaussian_filter(x, window_1d, use_padding: bool): -# ''' -# Blur input with 1-D kernel -# :param x: batch of tensors to be blured -# :param window_1d: 1-D gauss kernel -# :param use_padding: padding image before conv -# :return: blured tensors -# ''' -# C = x.shape[1] -# padding = 0 -# if use_padding: -# window_size = window_1d.shape[3] -# padding = window_size // 2 -# out = F.conv2d(x, window_1d, stride=1, padding=(0, padding), groups=C) -# out = F.conv2d(out, window_1d.transpose(2, 3), stride=1, padding=(padding, 0), groups=C) -# return out -# -# -# @torch.jit.script -# def ssim(X, Y, window, data_range: float, use_padding: bool = False): -# ''' -# Calculate ssim index for X and Y -# :param X: images [B, C, H, N_bins] -# :param Y: images [B, C, H, N_bins] -# :param window: 1-D gauss kernel -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param use_padding: padding image before conv -# :return: -# ''' -# -# K1 = 0.01 -# K2 = 0.03 -# compensation = 1.0 -# -# C1 = (K1 * data_range) ** 2 -# C2 = (K2 * data_range) ** 2 -# -# mu1 = _gaussian_filter(X, window, use_padding) -# mu2 = _gaussian_filter(Y, window, use_padding) -# sigma1_sq = _gaussian_filter(X * X, window, use_padding) -# sigma2_sq = _gaussian_filter(Y * Y, window, use_padding) -# sigma12 = _gaussian_filter(X * Y, window, use_padding) -# -# mu1_sq = mu1.pow(2) -# mu2_sq = mu2.pow(2) -# mu1_mu2 = mu1 * mu2 -# -# sigma1_sq = compensation * (sigma1_sq - mu1_sq) -# sigma2_sq = compensation * (sigma2_sq - mu2_sq) -# sigma12 = compensation * (sigma12 - mu1_mu2) -# -# cs_map = (2 * sigma12 + C2) / (sigma1_sq + sigma2_sq + C2) -# # Fixed the issue that the negative value of cs_map caused ms_ssim to output Nan. -# cs_map = cs_map.clamp_min(0.) 
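#     The next line completes the standard SSIM decomposition: the luminance
#     term (2*mu1*mu2 + C1) / (mu1^2 + mu2^2 + C1) is multiplied by the
#     contrast/structure term cs_map computed above, i.e.
#     SSIM = l(x, y) * cs(x, y) (Wang et al., 2004).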
-# ssim_map = ((2 * mu1_mu2 + C1) / (mu1_sq + mu2_sq + C1)) * cs_map -# -# ssim_val = ssim_map.mean(dim=(1, 2, 3)) # reduce along CHW -# cs = cs_map.mean(dim=(1, 2, 3)) -# -# return ssim_val, cs -# -# -# @torch.jit.script -# def ms_ssim(X, Y, window, data_range: float, weights, use_padding: bool = False, eps: float = 1e-8): -# ''' -# interface of ms-ssim -# :param X: a batch of images, (N,C,H,W) -# :param Y: a batch of images, (N,C,H,W) -# :param window: 1-D gauss kernel -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param weights: weights for different levels -# :param use_padding: padding image before conv -# :param eps: use for avoid grad nan. -# :return: -# ''' -# levels = weights.shape[0] -# cs_vals = [] -# ssim_vals = [] -# for _ in range(levels): -# ssim_val, cs = ssim(X, Y, window=window, data_range=data_range, use_padding=use_padding) -# # Use for fix a issue. When c = a ** b and a is 0, c.backward() will cause the a.grad become inf. -# ssim_val = ssim_val.clamp_min(eps) -# cs = cs.clamp_min(eps) -# cs_vals.append(cs) -# -# ssim_vals.append(ssim_val) -# padding = (X.shape[2] % 2, X.shape[3] % 2) -# X = F.avg_pool2d(X, kernel_size=2, stride=2, padding=padding) -# Y = F.avg_pool2d(Y, kernel_size=2, stride=2, padding=padding) -# -# cs_vals = torch.stack(cs_vals, dim=0) -# ms_ssim_val = torch.prod((cs_vals[:-1] ** weights[:-1].unsqueeze(1)) * (ssim_vals[-1] ** weights[-1]), dim=0) -# return ms_ssim_val -# -# -# class SSIM(torch.jit.ScriptModule): -# __constants__ = ['data_range', 'use_padding'] -# -# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False): -# ''' -# :param window_size: the size of gauss kernel -# :param window_sigma: sigma of normal distribution -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param channel: input channels (default: 3) -# :param use_padding: padding image before conv -# ''' -# super().__init__() -# assert window_size % 2 == 1, 'Window size must be odd.' -# window = create_window(window_size, window_sigma, channel) -# self.register_buffer('window', window) -# self.data_range = data_range -# self.use_padding = use_padding -# -# @torch.jit.script_method -# def forward(self, X, Y): -# r = ssim(X, Y, window=self.window, data_range=self.data_range, use_padding=self.use_padding) -# return r[0] -# -# -# class MS_SSIM(torch.jit.ScriptModule): -# __constants__ = ['data_range', 'use_padding', 'eps'] -# -# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False, weights=None, -# levels=None, eps=1e-8): -# ''' -# class for ms-ssim -# :param window_size: the size of gauss kernel -# :param window_sigma: sigma of normal distribution -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param channel: input channels -# :param use_padding: padding image before conv -# :param weights: weights for different levels. (default [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]) -# :param levels: number of downsampling -# :param eps: Use for fix a issue. When c = a ** b and a is 0, c.backward() will cause the a.grad become inf. -# ''' -# super().__init__() -# assert window_size % 2 == 1, 'Window size must be odd.' 
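#     An odd window size gives the Gaussian a single centre tap: create_window()
#     builds its coordinate grid with `coords -= window_size // 2`, which is
#     only symmetric around zero when window_size is odd.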
-# self.data_range = data_range -# self.use_padding = use_padding -# self.eps = eps -# -# window = create_window(window_size, window_sigma, channel) -# self.register_buffer('window', window) -# -# if weights is None: -# weights = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333] -# weights = torch.tensor(weights, dtype=torch.float) -# -# if levels is not None: -# weights = weights[:levels] -# weights = weights / weights.sum() -# -# self.register_buffer('weights', weights) -# -# @torch.jit.script_method -# def forward(self, X, Y): -# return ms_ssim(X, Y, window=self.window, data_range=self.data_range, weights=self.weights, -# use_padding=self.use_padding, eps=self.eps) -# -# -# if __name__ == '__main__': -# print('Simple Test') -# im = torch.randint(0, 255, (5, 3, 256, 256), dtype=torch.float, device='cuda') -# img1 = im / 255 -# img2 = img1 * 0.5 -# -# losser = SSIM(data_range=1.).cuda() -# loss = losser(img1, img2).mean() -# -# losser2 = MS_SSIM(data_range=1.).cuda() -# loss2 = losser2(img1, img2).mean() -# -# print(loss.item()) -# print(loss2.item()) -# -# if __name__ == '__main__': -# print('Training Test') -# import cv2 -# import torch.optim -# import numpy as np -# import imageio -# import time -# -# out_test_video = False -# # 最好不要直接输出gif图,会非常大,最好先输出mkv文件后用ffmpeg转换到GIF -# video_use_gif = False -# -# im = cv2.imread('test_img1.jpg', 1) -# t_im = torch.from_numpy(im).cuda().permute(2, 0, 1).float()[None] / 255. -# -# if out_test_video: -# if video_use_gif: -# fps = 0.5 -# out_wh = (im.shape[1] // 2, im.shape[0] // 2) -# suffix = '.gif' -# else: -# fps = 5 -# out_wh = (im.shape[1], im.shape[0]) -# suffix = '.mkv' -# video_last_time = time.perf_counter() -# video = imageio.get_writer('ssim_test' + suffix, fps=fps) -# -# # 测试ssim -# print('Training SSIM') -# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255. -# rand_im.requires_grad = True -# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8) -# losser = SSIM(data_range=1., channel=t_im.shape[1]).cuda() -# ssim_score = 0 -# while ssim_score < 0.999: -# optim.zero_grad() -# loss = losser(rand_im, t_im) -# (-loss).sum().backward() -# ssim_score = loss.item() -# optim.step() -# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0] -# r_im = cv2.putText(r_im, 'ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2) -# -# if out_test_video: -# if time.perf_counter() - video_last_time > 1. / fps: -# video_last_time = time.perf_counter() -# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB) -# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA) -# if isinstance(out_frame, cv2.UMat): -# out_frame = out_frame.get() -# video.append_data(out_frame) -# -# cv2.imshow('ssim', r_im) -# cv2.setWindowTitle('ssim', 'ssim %f' % ssim_score) -# cv2.waitKey(1) -# -# if out_test_video: -# video.close() -# -# # 测试ms_ssim -# if out_test_video: -# if video_use_gif: -# fps = 0.5 -# out_wh = (im.shape[1] // 2, im.shape[0] // 2) -# suffix = '.gif' -# else: -# fps = 5 -# out_wh = (im.shape[1], im.shape[0]) -# suffix = '.mkv' -# video_last_time = time.perf_counter() -# video = imageio.get_writer('ms_ssim_test' + suffix, fps=fps) -# -# print('Training MS_SSIM') -# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255. 
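#     Same procedure as the SSIM loop above: gradient ascent on the (MS-)SSIM
#     score pushes the random image towards test_img1.jpg until the score
#     passes 0.999.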
-# rand_im.requires_grad = True -# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8) -# losser = MS_SSIM(data_range=1., channel=t_im.shape[1]).cuda() -# ssim_score = 0 -# while ssim_score < 0.999: -# optim.zero_grad() -# loss = losser(rand_im, t_im) -# (-loss).sum().backward() -# ssim_score = loss.item() -# optim.step() -# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0] -# r_im = cv2.putText(r_im, 'ms_ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2) -# -# if out_test_video: -# if time.perf_counter() - video_last_time > 1. / fps: -# video_last_time = time.perf_counter() -# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB) -# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA) -# if isinstance(out_frame, cv2.UMat): -# out_frame = out_frame.get() -# video.append_data(out_frame) -# -# cv2.imshow('ms_ssim', r_im) -# cv2.setWindowTitle('ms_ssim', 'ms_ssim %f' % ssim_score) -# cv2.waitKey(1) -# -# if out_test_video: -# video.close() - -""" -Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim -""" - -import torch -import torch.nn.functional as F -from torch.autograd import Variable -import numpy as np -from math import exp - - -def gaussian(window_size, sigma): - gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)]) - return gauss / gauss.sum() - - -def create_window(window_size, channel): - _1D_window = gaussian(window_size, 1.5).unsqueeze(1) - _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0) - window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous()) - return window - - -def _ssim(img1, img2, window, window_size, channel, size_average=True): - mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel) - mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel) - - mu1_sq = mu1.pow(2) - mu2_sq = mu2.pow(2) - mu1_mu2 = mu1 * mu2 - - sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq - sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq - sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2 - - C1 = 0.01 ** 2 - C2 = 0.03 ** 2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - - if size_average: - return ssim_map.mean() - else: - return ssim_map.mean(1) - - -class SSIM(torch.nn.Module): - def __init__(self, window_size=11, size_average=True): - super(SSIM, self).__init__() - self.window_size = window_size - self.size_average = size_average - self.channel = 1 - self.window = create_window(window_size, self.channel) - - def forward(self, img1, img2): - (_, channel, _, _) = img1.size() - - if channel == self.channel and self.window.data.type() == img1.data.type(): - window = self.window - else: - window = create_window(self.window_size, channel) - - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - - self.window = window - self.channel = channel - - return _ssim(img1, img2, window, self.window_size, channel, self.size_average) - - -window = None - - -def ssim(img1, img2, window_size=11, size_average=True): - (_, channel, _, _) = img1.size() - global window - if window is None: - window = create_window(window_size, channel) - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - return 
_ssim(img1, img2, window, window_size, channel, size_average) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/wheel.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/wheel.py deleted file mode 100644 index c79941398a2c1d502e60cd0dd0703d8c0530a30f..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/wheel.py +++ /dev/null @@ -1,738 +0,0 @@ -"""Support for installing and building the "wheel" binary package format. -""" - -import collections -import compileall -import contextlib -import csv -import importlib -import logging -import os.path -import re -import shutil -import sys -import warnings -from base64 import urlsafe_b64encode -from email.message import Message -from itertools import chain, filterfalse, starmap -from typing import ( - IO, - TYPE_CHECKING, - Any, - BinaryIO, - Callable, - Dict, - Generator, - Iterable, - Iterator, - List, - NewType, - Optional, - Sequence, - Set, - Tuple, - Union, - cast, -) -from zipfile import ZipFile, ZipInfo - -from pip._vendor.distlib.scripts import ScriptMaker -from pip._vendor.distlib.util import get_export_entry -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.exceptions import InstallationError -from pip._internal.locations import get_major_minor_version -from pip._internal.metadata import ( - BaseDistribution, - FilesystemWheel, - get_wheel_distribution, -) -from pip._internal.models.direct_url import DIRECT_URL_METADATA_NAME, DirectUrl -from pip._internal.models.scheme import SCHEME_KEYS, Scheme -from pip._internal.utils.filesystem import adjacent_tmp_file, replace -from pip._internal.utils.misc import captured_stdout, ensure_dir, hash_file, partition -from pip._internal.utils.unpacking import ( - current_umask, - is_within_directory, - set_extracted_file_to_default_mode_plus_executable, - zip_item_is_executable, -) -from pip._internal.utils.wheel import parse_wheel - -if TYPE_CHECKING: - from typing import Protocol - - class File(Protocol): - src_record_path: "RecordPath" - dest_path: str - changed: bool - - def save(self) -> None: - pass - - -logger = logging.getLogger(__name__) - -RecordPath = NewType("RecordPath", str) -InstalledCSVRow = Tuple[RecordPath, str, Union[int, str]] - - -def rehash(path: str, blocksize: int = 1 << 20) -> Tuple[str, str]: - """Return (encoded_digest, length) for path using hashlib.sha256()""" - h, length = hash_file(path, blocksize) - digest = "sha256=" + urlsafe_b64encode(h.digest()).decode("latin1").rstrip("=") - return (digest, str(length)) - - -def csv_io_kwargs(mode: str) -> Dict[str, Any]: - """Return keyword arguments to properly open a CSV file - in the given mode. - """ - return {"mode": mode, "newline": "", "encoding": "utf-8"} - - -def fix_script(path: str) -> bool: - """Replace #!python with #!/path/to/python - Return True if file was changed. - """ - # XXX RECORD hashes will need to be updated - assert os.path.isfile(path) - - with open(path, "rb") as script: - firstline = script.readline() - if not firstline.startswith(b"#!python"): - return False - exename = sys.executable.encode(sys.getfilesystemencoding()) - firstline = b"#!" 
+ exename + os.linesep.encode("ascii") - rest = script.read() - with open(path, "wb") as script: - script.write(firstline) - script.write(rest) - return True - - -def wheel_root_is_purelib(metadata: Message) -> bool: - return metadata.get("Root-Is-Purelib", "").lower() == "true" - - -def get_entrypoints(dist: BaseDistribution) -> Tuple[Dict[str, str], Dict[str, str]]: - console_scripts = {} - gui_scripts = {} - for entry_point in dist.iter_entry_points(): - if entry_point.group == "console_scripts": - console_scripts[entry_point.name] = entry_point.value - elif entry_point.group == "gui_scripts": - gui_scripts[entry_point.name] = entry_point.value - return console_scripts, gui_scripts - - -def message_about_scripts_not_on_PATH(scripts: Sequence[str]) -> Optional[str]: - """Determine if any scripts are not on PATH and format a warning. - Returns a warning message if one or more scripts are not on PATH, - otherwise None. - """ - if not scripts: - return None - - # Group scripts by the path they were installed in - grouped_by_dir: Dict[str, Set[str]] = collections.defaultdict(set) - for destfile in scripts: - parent_dir = os.path.dirname(destfile) - script_name = os.path.basename(destfile) - grouped_by_dir[parent_dir].add(script_name) - - # We don't want to warn for directories that are on PATH. - not_warn_dirs = [ - os.path.normcase(i).rstrip(os.sep) - for i in os.environ.get("PATH", "").split(os.pathsep) - ] - # If an executable sits with sys.executable, we don't warn for it. - # This covers the case of venv invocations without activating the venv. - not_warn_dirs.append(os.path.normcase(os.path.dirname(sys.executable))) - warn_for: Dict[str, Set[str]] = { - parent_dir: scripts - for parent_dir, scripts in grouped_by_dir.items() - if os.path.normcase(parent_dir) not in not_warn_dirs - } - if not warn_for: - return None - - # Format a message - msg_lines = [] - for parent_dir, dir_scripts in warn_for.items(): - sorted_scripts: List[str] = sorted(dir_scripts) - if len(sorted_scripts) == 1: - start_text = "script {} is".format(sorted_scripts[0]) - else: - start_text = "scripts {} are".format( - ", ".join(sorted_scripts[:-1]) + " and " + sorted_scripts[-1] - ) - - msg_lines.append( - "The {} installed in '{}' which is not on PATH.".format( - start_text, parent_dir - ) - ) - - last_line_fmt = ( - "Consider adding {} to PATH or, if you prefer " - "to suppress this warning, use --no-warn-script-location." - ) - if len(msg_lines) == 1: - msg_lines.append(last_line_fmt.format("this directory")) - else: - msg_lines.append(last_line_fmt.format("these directories")) - - # Add a note if any directory starts with ~ - warn_for_tilde = any( - i[0] == "~" for i in os.environ.get("PATH", "").split(os.pathsep) if i - ) - if warn_for_tilde: - tilde_warning_msg = ( - "NOTE: The current PATH contains path(s) starting with `~`, " - "which may not be expanded by all applications." - ) - msg_lines.append(tilde_warning_msg) - - # Returns the formatted multiline message - return "\n".join(msg_lines) - - -def _normalized_outrows( - outrows: Iterable[InstalledCSVRow], -) -> List[Tuple[str, str, str]]: - """Normalize the given rows of a RECORD file. - - Items in each row are converted into str. Rows are then sorted to make - the value more predictable for tests. - - Each row is a 3-tuple (path, hash, size) and corresponds to a record of - a RECORD file (see PEP 376 and PEP 427 for details). For the rows - passed to this function, the size can be an integer as an int or string, - or the empty string. 
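-    For example, a row ("pkg/mod.py", "sha256=abc", 1404) is emitted as
-    ("pkg/mod.py", "sha256=abc", "1404").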
- """ - # Normally, there should only be one row per path, in which case the - # second and third elements don't come into play when sorting. - # However, in cases in the wild where a path might happen to occur twice, - # we don't want the sort operation to trigger an error (but still want - # determinism). Since the third element can be an int or string, we - # coerce each element to a string to avoid a TypeError in this case. - # For additional background, see-- - # https://github.com/pypa/pip/issues/5868 - return sorted( - (record_path, hash_, str(size)) for record_path, hash_, size in outrows - ) - - -def _record_to_fs_path(record_path: RecordPath, lib_dir: str) -> str: - return os.path.join(lib_dir, record_path) - - -def _fs_to_record_path(path: str, lib_dir: str) -> RecordPath: - # On Windows, do not handle relative paths if they belong to different - # logical disks - if os.path.splitdrive(path)[0].lower() == os.path.splitdrive(lib_dir)[0].lower(): - path = os.path.relpath(path, lib_dir) - - path = path.replace(os.path.sep, "/") - return cast("RecordPath", path) - - -def get_csv_rows_for_installed( - old_csv_rows: List[List[str]], - installed: Dict[RecordPath, RecordPath], - changed: Set[RecordPath], - generated: List[str], - lib_dir: str, -) -> List[InstalledCSVRow]: - """ - :param installed: A map from archive RECORD path to installation RECORD - path. - """ - installed_rows: List[InstalledCSVRow] = [] - for row in old_csv_rows: - if len(row) > 3: - logger.warning("RECORD line has more than three elements: %s", row) - old_record_path = cast("RecordPath", row[0]) - new_record_path = installed.pop(old_record_path, old_record_path) - if new_record_path in changed: - digest, length = rehash(_record_to_fs_path(new_record_path, lib_dir)) - else: - digest = row[1] if len(row) > 1 else "" - length = row[2] if len(row) > 2 else "" - installed_rows.append((new_record_path, digest, length)) - for f in generated: - path = _fs_to_record_path(f, lib_dir) - digest, length = rehash(f) - installed_rows.append((path, digest, length)) - for installed_record_path in installed.values(): - installed_rows.append((installed_record_path, "", "")) - return installed_rows - - -def get_console_script_specs(console: Dict[str, str]) -> List[str]: - """ - Given the mapping from entrypoint name to callable, return the relevant - console script specs. - """ - # Don't mutate caller's version - console = console.copy() - - scripts_to_generate = [] - - # Special case pip and setuptools to generate versioned wrappers - # - # The issue is that some projects (specifically, pip and setuptools) use - # code in setup.py to create "versioned" entry points - pip2.7 on Python - # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into - # the wheel metadata at build time, and so if the wheel is installed with - # a *different* version of Python the entry points will be wrong. The - # correct fix for this is to enhance the metadata to be able to describe - # such versioned entry points, but that won't happen till Metadata 2.0 is - # available. - # In the meantime, projects using versioned entry points will either have - # incorrect versioned entry points, or they will not be able to distribute - # "universal" wheels (i.e., they will need a wheel per Python version). - # - # Because setuptools and pip are bundled with _ensurepip and virtualenv, - # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we - # override the versioned entry points in the wheel and generate the - # correct ones. 
This code is purely a short-term measure until Metadata 2.0 - # is available. - # - # To add the level of hack in this section of code, in order to support - # ensurepip this code will look for an ``ENSUREPIP_OPTIONS`` environment - # variable which will control which version scripts get installed. - # - # ENSUREPIP_OPTIONS=altinstall - # - Only pipX.Y and easy_install-X.Y will be generated and installed - # ENSUREPIP_OPTIONS=install - # - pipX.Y, pipX, easy_install-X.Y will be generated and installed. Note - # that this option is technically if ENSUREPIP_OPTIONS is set and is - # not altinstall - # DEFAULT - # - The default behavior is to install pip, pipX, pipX.Y, easy_install - # and easy_install-X.Y. - pip_script = console.pop("pip", None) - if pip_script: - if "ENSUREPIP_OPTIONS" not in os.environ: - scripts_to_generate.append("pip = " + pip_script) - - if os.environ.get("ENSUREPIP_OPTIONS", "") != "altinstall": - scripts_to_generate.append( - "pip{} = {}".format(sys.version_info[0], pip_script) - ) - - scripts_to_generate.append(f"pip{get_major_minor_version()} = {pip_script}") - # Delete any other versioned pip entry points - pip_ep = [k for k in console if re.match(r"pip(\d+(\.\d+)?)?$", k)] - for k in pip_ep: - del console[k] - easy_install_script = console.pop("easy_install", None) - if easy_install_script: - if "ENSUREPIP_OPTIONS" not in os.environ: - scripts_to_generate.append("easy_install = " + easy_install_script) - - scripts_to_generate.append( - "easy_install-{} = {}".format( - get_major_minor_version(), easy_install_script - ) - ) - # Delete any other versioned easy_install entry points - easy_install_ep = [ - k for k in console if re.match(r"easy_install(-\d+\.\d+)?$", k) - ] - for k in easy_install_ep: - del console[k] - - # Generate the console entry points specified in the wheel - scripts_to_generate.extend(starmap("{} = {}".format, console.items())) - - return scripts_to_generate - - -class ZipBackedFile: - def __init__( - self, src_record_path: RecordPath, dest_path: str, zip_file: ZipFile - ) -> None: - self.src_record_path = src_record_path - self.dest_path = dest_path - self._zip_file = zip_file - self.changed = False - - def _getinfo(self) -> ZipInfo: - return self._zip_file.getinfo(self.src_record_path) - - def save(self) -> None: - # directory creation is lazy and after file filtering - # to ensure we don't install empty dirs; empty dirs can't be - # uninstalled. - parent_dir = os.path.dirname(self.dest_path) - ensure_dir(parent_dir) - - # When we open the output file below, any existing file is truncated - # before we start writing the new contents. This is fine in most - # cases, but can cause a segfault if pip has loaded a shared - # object (e.g. from pyopenssl through its vendored urllib3) - # Since the shared object is mmap'd an attempt to call a - # symbol in it will then cause a segfault. Unlinking the file - # allows writing of new contents while allowing the process to - # continue to use the old copy. 
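-        # (On POSIX, the unlinked inode stays readable through existing open
-        # handles and mmaps, so code still using the old copy keeps working.)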
- if os.path.exists(self.dest_path): - os.unlink(self.dest_path) - - zipinfo = self._getinfo() - - with self._zip_file.open(zipinfo) as f: - with open(self.dest_path, "wb") as dest: - shutil.copyfileobj(f, dest) - - if zip_item_is_executable(zipinfo): - set_extracted_file_to_default_mode_plus_executable(self.dest_path) - - -class ScriptFile: - def __init__(self, file: "File") -> None: - self._file = file - self.src_record_path = self._file.src_record_path - self.dest_path = self._file.dest_path - self.changed = False - - def save(self) -> None: - self._file.save() - self.changed = fix_script(self.dest_path) - - -class MissingCallableSuffix(InstallationError): - def __init__(self, entry_point: str) -> None: - super().__init__( - "Invalid script entry point: {} - A callable " - "suffix is required. Cf https://packaging.python.org/" - "specifications/entry-points/#use-for-scripts for more " - "information.".format(entry_point) - ) - - -def _raise_for_invalid_entrypoint(specification: str) -> None: - entry = get_export_entry(specification) - if entry is not None and entry.suffix is None: - raise MissingCallableSuffix(str(entry)) - - -class PipScriptMaker(ScriptMaker): - def make( - self, specification: str, options: Optional[Dict[str, Any]] = None - ) -> List[str]: - _raise_for_invalid_entrypoint(specification) - return super().make(specification, options) - - -def _install_wheel( - name: str, - wheel_zip: ZipFile, - wheel_path: str, - scheme: Scheme, - pycompile: bool = True, - warn_script_location: bool = True, - direct_url: Optional[DirectUrl] = None, - requested: bool = False, -) -> None: - """Install a wheel. - - :param name: Name of the project to install - :param wheel_zip: open ZipFile for wheel being installed - :param scheme: Distutils scheme dictating the install directories - :param req_description: String used in place of the requirement, for - logging - :param pycompile: Whether to byte-compile installed Python files - :param warn_script_location: Whether to check that scripts are installed - into a directory on PATH - :raises UnsupportedWheel: - * when the directory holds an unpacked wheel with incompatible - Wheel-Version - * when the .dist-info dir does not match the wheel - """ - info_dir, metadata = parse_wheel(wheel_zip, name) - - if wheel_root_is_purelib(metadata): - lib_dir = scheme.purelib - else: - lib_dir = scheme.platlib - - # Record details of the files moved - # installed = files copied from the wheel to the destination - # changed = files changed while installing (scripts #! 
line typically) - # generated = files newly generated during the install (script wrappers) - installed: Dict[RecordPath, RecordPath] = {} - changed: Set[RecordPath] = set() - generated: List[str] = [] - - def record_installed( - srcfile: RecordPath, destfile: str, modified: bool = False - ) -> None: - """Map archive RECORD paths to installation RECORD paths.""" - newpath = _fs_to_record_path(destfile, lib_dir) - installed[srcfile] = newpath - if modified: - changed.add(newpath) - - def is_dir_path(path: RecordPath) -> bool: - return path.endswith("/") - - def assert_no_path_traversal(dest_dir_path: str, target_path: str) -> None: - if not is_within_directory(dest_dir_path, target_path): - message = ( - "The wheel {!r} has a file {!r} trying to install" - " outside the target directory {!r}" - ) - raise InstallationError( - message.format(wheel_path, target_path, dest_dir_path) - ) - - def root_scheme_file_maker( - zip_file: ZipFile, dest: str - ) -> Callable[[RecordPath], "File"]: - def make_root_scheme_file(record_path: RecordPath) -> "File": - normed_path = os.path.normpath(record_path) - dest_path = os.path.join(dest, normed_path) - assert_no_path_traversal(dest, dest_path) - return ZipBackedFile(record_path, dest_path, zip_file) - - return make_root_scheme_file - - def data_scheme_file_maker( - zip_file: ZipFile, scheme: Scheme - ) -> Callable[[RecordPath], "File"]: - scheme_paths = {key: getattr(scheme, key) for key in SCHEME_KEYS} - - def make_data_scheme_file(record_path: RecordPath) -> "File": - normed_path = os.path.normpath(record_path) - try: - _, scheme_key, dest_subpath = normed_path.split(os.path.sep, 2) - except ValueError: - message = ( - "Unexpected file in {}: {!r}. .data directory contents" - " should be named like: '/'." - ).format(wheel_path, record_path) - raise InstallationError(message) - - try: - scheme_path = scheme_paths[scheme_key] - except KeyError: - valid_scheme_keys = ", ".join(sorted(scheme_paths)) - message = ( - "Unknown scheme key used in {}: {} (for file {!r}). 
.data" - " directory contents should be in subdirectories named" - " with a valid scheme key ({})" - ).format(wheel_path, scheme_key, record_path, valid_scheme_keys) - raise InstallationError(message) - - dest_path = os.path.join(scheme_path, dest_subpath) - assert_no_path_traversal(scheme_path, dest_path) - return ZipBackedFile(record_path, dest_path, zip_file) - - return make_data_scheme_file - - def is_data_scheme_path(path: RecordPath) -> bool: - return path.split("/", 1)[0].endswith(".data") - - paths = cast(List[RecordPath], wheel_zip.namelist()) - file_paths = filterfalse(is_dir_path, paths) - root_scheme_paths, data_scheme_paths = partition(is_data_scheme_path, file_paths) - - make_root_scheme_file = root_scheme_file_maker(wheel_zip, lib_dir) - files: Iterator[File] = map(make_root_scheme_file, root_scheme_paths) - - def is_script_scheme_path(path: RecordPath) -> bool: - parts = path.split("/", 2) - return len(parts) > 2 and parts[0].endswith(".data") and parts[1] == "scripts" - - other_scheme_paths, script_scheme_paths = partition( - is_script_scheme_path, data_scheme_paths - ) - - make_data_scheme_file = data_scheme_file_maker(wheel_zip, scheme) - other_scheme_files = map(make_data_scheme_file, other_scheme_paths) - files = chain(files, other_scheme_files) - - # Get the defined entry points - distribution = get_wheel_distribution( - FilesystemWheel(wheel_path), - canonicalize_name(name), - ) - console, gui = get_entrypoints(distribution) - - def is_entrypoint_wrapper(file: "File") -> bool: - # EP, EP.exe and EP-script.py are scripts generated for - # entry point EP by setuptools - path = file.dest_path - name = os.path.basename(path) - if name.lower().endswith(".exe"): - matchname = name[:-4] - elif name.lower().endswith("-script.py"): - matchname = name[:-10] - elif name.lower().endswith(".pya"): - matchname = name[:-4] - else: - matchname = name - # Ignore setuptools-generated scripts - return matchname in console or matchname in gui - - script_scheme_files: Iterator[File] = map( - make_data_scheme_file, script_scheme_paths - ) - script_scheme_files = filterfalse(is_entrypoint_wrapper, script_scheme_files) - script_scheme_files = map(ScriptFile, script_scheme_files) - files = chain(files, script_scheme_files) - - for file in files: - file.save() - record_installed(file.src_record_path, file.dest_path, file.changed) - - def pyc_source_file_paths() -> Generator[str, None, None]: - # We de-duplicate installation paths, since there can be overlap (e.g. - # file in .data maps to same location as file in wheel root). - # Sorting installation paths makes it easier to reproduce and debug - # issues related to permissions on existing files. 
- for installed_path in sorted(set(installed.values())): - full_installed_path = os.path.join(lib_dir, installed_path) - if not os.path.isfile(full_installed_path): - continue - if not full_installed_path.endswith(".py"): - continue - yield full_installed_path - - def pyc_output_path(path: str) -> str: - """Return the path the pyc file would have been written to.""" - return importlib.util.cache_from_source(path) - - # Compile all of the pyc files for the installed files - if pycompile: - with captured_stdout() as stdout: - with warnings.catch_warnings(): - warnings.filterwarnings("ignore") - for path in pyc_source_file_paths(): - success = compileall.compile_file(path, force=True, quiet=True) - if success: - pyc_path = pyc_output_path(path) - assert os.path.exists(pyc_path) - pyc_record_path = cast( - "RecordPath", pyc_path.replace(os.path.sep, "/") - ) - record_installed(pyc_record_path, pyc_path) - logger.debug(stdout.getvalue()) - - maker = PipScriptMaker(None, scheme.scripts) - - # Ensure old scripts are overwritten. - # See https://github.com/pypa/pip/issues/1800 - maker.clobber = True - - # Ensure we don't generate any variants for scripts because this is almost - # never what somebody wants. - # See https://bitbucket.org/pypa/distlib/issue/35/ - maker.variants = {""} - - # This is required because otherwise distlib creates scripts that are not - # executable. - # See https://bitbucket.org/pypa/distlib/issue/32/ - maker.set_mode = True - - # Generate the console and GUI entry points specified in the wheel - scripts_to_generate = get_console_script_specs(console) - - gui_scripts_to_generate = list(starmap("{} = {}".format, gui.items())) - - generated_console_scripts = maker.make_multiple(scripts_to_generate) - generated.extend(generated_console_scripts) - - generated.extend(maker.make_multiple(gui_scripts_to_generate, {"gui": True})) - - if warn_script_location: - msg = message_about_scripts_not_on_PATH(generated_console_scripts) - if msg is not None: - logger.warning(msg) - - generated_file_mode = 0o666 & ~current_umask() - - @contextlib.contextmanager - def _generate_file(path: str, **kwargs: Any) -> Generator[BinaryIO, None, None]: - with adjacent_tmp_file(path, **kwargs) as f: - yield f - os.chmod(f.name, generated_file_mode) - replace(f.name, path) - - dest_info_dir = os.path.join(lib_dir, info_dir) - - # Record pip as the installer - installer_path = os.path.join(dest_info_dir, "INSTALLER") - with _generate_file(installer_path) as installer_file: - installer_file.write(b"pip\n") - generated.append(installer_path) - - # Record the PEP 610 direct URL reference - if direct_url is not None: - direct_url_path = os.path.join(dest_info_dir, DIRECT_URL_METADATA_NAME) - with _generate_file(direct_url_path) as direct_url_file: - direct_url_file.write(direct_url.to_json().encode("utf-8")) - generated.append(direct_url_path) - - # Record the REQUESTED file - if requested: - requested_path = os.path.join(dest_info_dir, "REQUESTED") - with open(requested_path, "wb"): - pass - generated.append(requested_path) - - record_text = distribution.read_text("RECORD") - record_rows = list(csv.reader(record_text.splitlines())) - - rows = get_csv_rows_for_installed( - record_rows, - installed=installed, - changed=changed, - generated=generated, - lib_dir=lib_dir, - ) - - # Record details of all files installed - record_path = os.path.join(dest_info_dir, "RECORD") - - with _generate_file(record_path, **csv_io_kwargs("w")) as record_file: - # Explicitly cast to typing.IO[str] as a workaround for the mypy 
error: - # "writer" has incompatible type "BinaryIO"; expected "_Writer" - writer = csv.writer(cast("IO[str]", record_file)) - writer.writerows(_normalized_outrows(rows)) - - -@contextlib.contextmanager -def req_error_context(req_description: str) -> Generator[None, None, None]: - try: - yield - except InstallationError as e: - message = "For req: {}. {}".format(req_description, e.args[0]) - raise InstallationError(message) from e - - -def install_wheel( - name: str, - wheel_path: str, - scheme: Scheme, - req_description: str, - pycompile: bool = True, - warn_script_location: bool = True, - direct_url: Optional[DirectUrl] = None, - requested: bool = False, -) -> None: - with ZipFile(wheel_path, allowZip64=True) as z: - with req_error_context(req_description): - _install_wheel( - name=name, - wheel_zip=z, - wheel_path=wheel_path, - scheme=scheme, - pycompile=pycompile, - warn_script_location=warn_script_location, - direct_url=direct_url, - requested=requested, - ) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py deleted file mode 100644 index 194564e761ddae165b39ef6598877e2e3820af0a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py +++ /dev/null @@ -1,16 +0,0 @@ -from typing import List, TypeVar - -T = TypeVar("T") - - -class Stack(List[T]): - """A small shim over builtin list.""" - - @property - def top(self) -> T: - """Get top of stack.""" - return self[-1] - - def push(self, item: T) -> None: - """Push an item on to the stack (append in stack nomenclature).""" - self.append(item) diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/__init__.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ricecake123/RVC-demo/infer-web.py b/spaces/Ricecake123/RVC-demo/infer-web.py deleted file mode 100644 index 7de75cc5ac0624b0b66acf62eb330222cc5a5d6a..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/infer-web.py +++ /dev/null @@ -1,1991 +0,0 @@ -import os -import shutil -import sys - -now_dir = os.getcwd() -sys.path.append(now_dir) -import traceback, pdb -import warnings - -import numpy as np -import torch - -os.environ["OPENBLAS_NUM_THREADS"] = "1" -os.environ["no_proxy"] = "localhost, 127.0.0.1, ::1" -import logging -import threading -from random import shuffle -from subprocess import Popen -from time import sleep - -import faiss -import ffmpeg -import gradio as gr -import soundfile as sf -from config import Config -from fairseq import checkpoint_utils -from i18n import I18nAuto -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM -from infer_uvr5 import _audio_pre_, _audio_pre_new -from my_utils import load_audio -from train.process_ckpt import change_info, extract_small_model, merge, show_info -from vc_infer_pipeline import VC -from sklearn.cluster import MiniBatchKMeans - -logging.getLogger("numba").setLevel(logging.WARNING) - - -tmp = os.path.join(now_dir, "TEMP") -shutil.rmtree(tmp, ignore_errors=True) -shutil.rmtree( - "%s/runtime/Lib/site-packages/lib.infer_pack" % (now_dir), 
ignore_errors=True -) -shutil.rmtree("%s/runtime/Lib/site-packages/uvr5_pack" % (now_dir), ignore_errors=True) -os.makedirs(tmp, exist_ok=True) -os.makedirs(os.path.join(now_dir, "logs"), exist_ok=True) -os.makedirs(os.path.join(now_dir, "weights"), exist_ok=True) -os.environ["TEMP"] = tmp -warnings.filterwarnings("ignore") -torch.manual_seed(114514) - - -config = Config() -i18n = I18nAuto() -i18n.print() -# 判断是否有能用来训练和加速推理的N卡 -ngpu = torch.cuda.device_count() -gpu_infos = [] -mem = [] -if_gpu_ok = False - -if torch.cuda.is_available() or ngpu != 0: - for i in range(ngpu): - gpu_name = torch.cuda.get_device_name(i) - if any( - value in gpu_name.upper() - for value in [ - "10", - "16", - "20", - "30", - "40", - "A2", - "A3", - "A4", - "P4", - "A50", - "500", - "A60", - "70", - "80", - "90", - "M4", - "T4", - "TITAN", - ] - ): - # A10#A100#V100#A40#P40#M40#K80#A4500 - if_gpu_ok = True # 至少有一张能用的N卡 - gpu_infos.append("%s\t%s" % (i, gpu_name)) - mem.append( - int( - torch.cuda.get_device_properties(i).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - ) -if if_gpu_ok and len(gpu_infos) > 0: - gpu_info = "\n".join(gpu_infos) - default_batch_size = min(mem) // 2 -else: - gpu_info = i18n("很遗憾您这没有能用的显卡来支持您训练") - default_batch_size = 1 -gpus = "-".join([i[0] for i in gpu_infos]) - - -class ToolButton(gr.Button, gr.components.FormComponent): - """Small button with single emoji as text, fits inside gradio forms""" - - def __init__(self, **kwargs): - super().__init__(variant="tool", **kwargs) - - def get_block_name(self): - return "button" - - -hubert_model = None - - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - - -weight_root = "weights" -weight_uvr5_root = "uvr5_weights" -index_root = "logs" -names = [] -for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) -index_paths = [] -for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) -uvr5_names = [] -for name in os.listdir(weight_uvr5_root): - if name.endswith(".pth") or "onnx" in name: - uvr5_names.append(name.replace(".pth", "")) - - -def vc_single( - sid, - input_audio_path, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, -): # spk_item, input_audio0, vc_transform0,f0_file,f0method0 - global tgt_sr, net_g, vc, hubert_model, version - if input_audio_path is None: - return "You need to upload an audio", None - f0_up_key = int(f0_up_key) - try: - audio = load_audio(input_audio_path, 16000) - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - if not hubert_model: - load_hubert() - if_f0 = cpt.get("f0", 1) - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - # file_big_npy = ( - # file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - # ) - audio_opt = vc.pipeline( - hubert_model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # 
file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=f0_file, - ) - if tgt_sr != resample_sr >= 16000: - tgt_sr = resample_sr - index_info = ( - "Using index:%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % ( - index_info, - times[0], - times[1], - times[2], - ), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - - -def vc_multi( - sid, - dir_path, - opt_root, - paths, - f0_up_key, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - format1, -): - try: - dir_path = ( - dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - os.makedirs(opt_root, exist_ok=True) - try: - if dir_path != "": - paths = [os.path.join(dir_path, name) for name in os.listdir(dir_path)] - else: - paths = [path.name for path in paths] - except: - traceback.print_exc() - paths = [path.name for path in paths] - infos = [] - for path in paths: - info, opt = vc_single( - sid, - path, - f0_up_key, - None, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ) - if "Success" in info: - try: - tgt_sr, audio_opt = opt - if format1 in ["wav", "flac"]: - sf.write( - "%s/%s.%s" % (opt_root, os.path.basename(path), format1), - audio_opt, - tgt_sr, - ) - else: - path = "%s/%s.wav" % (opt_root, os.path.basename(path)) - sf.write( - path, - audio_opt, - tgt_sr, - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format1) - ) - except: - info += traceback.format_exc() - infos.append("%s->%s" % (os.path.basename(path), info)) - yield "\n".join(infos) - yield "\n".join(infos) - except: - yield traceback.format_exc() - - -def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0): - infos = [] - try: - inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - save_root_vocal = ( - save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - save_root_ins = ( - save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - if model_name == "onnx_dereverb_By_FoxJoy": - from MDXNet import MDXNetDereverb - - pre_fun = MDXNetDereverb(15) - else: - func = _audio_pre_ if "DeEcho" not in model_name else _audio_pre_new - pre_fun = func( - agg=int(agg), - model_path=os.path.join(weight_uvr5_root, model_name + ".pth"), - device=config.device, - is_half=config.is_half, - ) - if inp_root != "": - paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)] - else: - paths = [path.name for path in paths] - for path in paths: - inp_path = os.path.join(inp_root, path) - need_reformat = 1 - done = 0 - try: - info = ffmpeg.probe(inp_path, cmd="ffprobe") - if ( - info["streams"][0]["channels"] == 2 - and info["streams"][0]["sample_rate"] == "44100" - ): - need_reformat = 0 - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - done = 1 - except: - need_reformat = 1 - traceback.print_exc() - if need_reformat == 1: - tmp_path = "%s/%s.reformatted.wav" % (tmp, os.path.basename(inp_path)) - os.system( - "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y" - % (inp_path, tmp_path) - ) - inp_path 
= tmp_path - try: - if done == 0: - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - infos.append("%s->Success" % (os.path.basename(inp_path))) - yield "\n".join(infos) - except: - infos.append( - "%s->%s" % (os.path.basename(inp_path), traceback.format_exc()) - ) - yield "\n".join(infos) - except: - infos.append(traceback.format_exc()) - yield "\n".join(infos) - finally: - try: - if model_name == "onnx_dereverb_By_FoxJoy": - del pre_fun.pred.model - del pre_fun.pred.model_ - else: - del pre_fun.model - del pre_fun - except: - traceback.print_exc() - print("clean_empty_cache") - if torch.cuda.is_available(): - torch.cuda.empty_cache() - yield "\n".join(infos) - - -# 一个选项卡全局只能有一个音色 -def get_vc(sid, to_return_protect0, to_return_protect1): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model is not None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return {"visible": False, "__type__": "update"} - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 0: - to_return_protect0 = to_return_protect1 = { - "visible": False, - "value": 0.5, - "__type__": "update", - } - else: - to_return_protect0 = { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - } - to_return_protect1 = { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - } - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - return ( - {"visible": True, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1, - ) - - -def change_choices(): - names = [] - for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) - index_paths = [] - for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) - return {"choices": sorted(names), "__type__": "update"}, { - "choices": sorted(index_paths), - "__type__": "update", - } - - -def 
clean(): - return {"value": "", "__type__": "update"} - - -sr_dict = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -def if_done(done, p): - while 1: - if p.poll() is None: - sleep(0.5) - else: - break - done[0] = True - - -def if_done_multi(done, ps): - while 1: - # poll==None代表进程未结束 - # 只要有一个进程未结束都不停 - flag = 1 - for p in ps: - if p.poll() is None: - flag = 0 - sleep(0.5) - break - if flag == 1: - break - done[0] = True - - -def preprocess_dataset(trainset_dir, exp_dir, sr, n_p): - sr = sr_dict[sr] - os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True) - f = open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "w") - f.close() - cmd = ( - config.python_cmd - + " trainset_preprocess_pipeline_print.py %s %s %s %s/logs/%s " - % (trainset_dir, sr, n_p, now_dir, exp_dir) - + str(config.noparallel) - ) - print(cmd) - p = Popen(cmd, shell=True) # , stdin=PIPE, stdout=PIPE,stderr=PIPE,cwd=now_dir - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done, - args=( - done, - p, - ), - ).start() - while 1: - with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f: - yield (f.read()) - sleep(1) - if done[0]: - break - with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - - -# but2.click(extract_f0,[gpus6,np7,f0method8,if_f0_3,trainset_dir4],[info2]) -def extract_f0_feature(gpus, n_p, f0method, if_f0, exp_dir, version19): - gpus = gpus.split("-") - os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True) - f = open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "w") - f.close() - if if_f0: - cmd = config.python_cmd + " extract_f0_print.py %s/logs/%s %s %s" % ( - now_dir, - exp_dir, - n_p, - f0method, - ) - print(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) # , stdin=PIPE, stdout=PIPE,stderr=PIPE - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done, - args=( - done, - p, - ), - ).start() - while 1: - with open( - "%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r" - ) as f: - yield (f.read()) - sleep(1) - if done[0]: - break - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - ####对不同part分别开多进程 - """ - n_part=int(sys.argv[1]) - i_part=int(sys.argv[2]) - i_gpu=sys.argv[3] - exp_dir=sys.argv[4] - os.environ["CUDA_VISIBLE_DEVICES"]=str(i_gpu) - """ - leng = len(gpus) - ps = [] - for idx, n_g in enumerate(gpus): - cmd = ( - config.python_cmd - + " extract_feature_print.py %s %s %s %s %s/logs/%s %s" - % ( - config.device, - leng, - idx, - n_g, - now_dir, - exp_dir, - version19, - ) - ) - print(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - ps.append(p) - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done_multi, - args=( - done, - ps, - ), - ).start() - while 1: - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - yield (f.read()) - sleep(1) - if done[0]: - break - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - - -def change_sr2(sr2, if_f0_3, version19): - path_str = "" if version19 == "v1" else "_v2" - f0_str = "f0" if if_f0_3 else "" - if_pretrained_generator_exist = os.access( - "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK - ) - 
if_pretrained_discriminator_exist = os.access( - "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK - ) - if not if_pretrained_generator_exist: - print( - "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), - "not exist, will not use pretrained model", - ) - if not if_pretrained_discriminator_exist: - print( - "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), - "not exist, will not use pretrained model", - ) - return ( - "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2) - if if_pretrained_generator_exist - else "", - "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2) - if if_pretrained_discriminator_exist - else "", - ) - - -def change_version19(sr2, if_f0_3, version19): - path_str = "" if version19 == "v1" else "_v2" - if sr2 == "32k" and version19 == "v1": - sr2 = "40k" - to_return_sr2 = ( - {"choices": ["40k", "48k"], "__type__": "update", "value": sr2} - if version19 == "v1" - else {"choices": ["40k", "48k", "32k"], "__type__": "update", "value": sr2} - ) - f0_str = "f0" if if_f0_3 else "" - if_pretrained_generator_exist = os.access( - "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK - ) - if_pretrained_discriminator_exist = os.access( - "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK - ) - if not if_pretrained_generator_exist: - print( - "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), - "not exist, will not use pretrained model", - ) - if not if_pretrained_discriminator_exist: - print( - "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), - "not exist, will not use pretrained model", - ) - return ( - "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2) - if if_pretrained_generator_exist - else "", - "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2) - if if_pretrained_discriminator_exist - else "", - to_return_sr2, - ) - - -def change_f0(if_f0_3, sr2, version19): # f0method8,pretrained_G14,pretrained_D15 - path_str = "" if version19 == "v1" else "_v2" - if_pretrained_generator_exist = os.access( - "pretrained%s/f0G%s.pth" % (path_str, sr2), os.F_OK - ) - if_pretrained_discriminator_exist = os.access( - "pretrained%s/f0D%s.pth" % (path_str, sr2), os.F_OK - ) - if not if_pretrained_generator_exist: - print( - "pretrained%s/f0G%s.pth" % (path_str, sr2), - "not exist, will not use pretrained model", - ) - if not if_pretrained_discriminator_exist: - print( - "pretrained%s/f0D%s.pth" % (path_str, sr2), - "not exist, will not use pretrained model", - ) - if if_f0_3: - return ( - {"visible": True, "__type__": "update"}, - "pretrained%s/f0G%s.pth" % (path_str, sr2) - if if_pretrained_generator_exist - else "", - "pretrained%s/f0D%s.pth" % (path_str, sr2) - if if_pretrained_discriminator_exist - else "", - ) - return ( - {"visible": False, "__type__": "update"}, - ("pretrained%s/G%s.pth" % (path_str, sr2)) - if if_pretrained_generator_exist - else "", - ("pretrained%s/D%s.pth" % (path_str, sr2)) - if if_pretrained_discriminator_exist - else "", - ) - - -# but3.click(click_train,[exp_dir1,sr2,if_f0_3,save_epoch10,total_epoch11,batch_size12,if_save_latest13,pretrained_G14,pretrained_D15,gpus16]) -def click_train( - exp_dir1, - sr2, - if_f0_3, - spk_id5, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, -): - # 生成filelist - exp_dir = "%s/logs/%s" % (now_dir, exp_dir1) - os.makedirs(exp_dir, exist_ok=True) - gt_wavs_dir = "%s/0_gt_wavs" % (exp_dir) - feature_dir = ( - "%s/3_feature256" % (exp_dir) - if version19 == "v1" - else 
"%s/3_feature768" % (exp_dir) - ) - if if_f0_3: - f0_dir = "%s/2a_f0" % (exp_dir) - f0nsf_dir = "%s/2b-f0nsf" % (exp_dir) - names = ( - set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) - & set([name.split(".")[0] for name in os.listdir(feature_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)]) - ) - else: - names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set( - [name.split(".")[0] for name in os.listdir(feature_dir)] - ) - opt = [] - for name in names: - if if_f0_3: - opt.append( - "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - f0_dir.replace("\\", "\\\\"), - name, - f0nsf_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - else: - opt.append( - "%s/%s.wav|%s/%s.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - fea_dim = 256 if version19 == "v1" else 768 - if if_f0_3: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5) - ) - else: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, spk_id5) - ) - shuffle(opt) - with open("%s/filelist.txt" % exp_dir, "w") as f: - f.write("\n".join(opt)) - print("write filelist done") - # 生成config#无需生成config - # cmd = python_cmd + " train_nsf_sim_cache_sid_load_pretrain.py -e mi-test -sr 40k -f0 1 -bs 4 -g 0 -te 10 -se 5 -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 1 -c 0" - print("use gpus:", gpus16) - if pretrained_G14 == "": - print("no pretrained Generator") - if pretrained_D15 == "": - print("no pretrained Discriminator") - if gpus16: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - gpus16, - total_epoch11, - save_epoch10, - "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "", - "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "", - 1 if if_save_latest13 == i18n("是") else 0, - 1 if if_cache_gpu17 == i18n("是") else 0, - 1 if if_save_every_weights18 == i18n("是") else 0, - version19, - ) - ) - else: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - total_epoch11, - save_epoch10, - "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "\b", - "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "\b", - 1 if if_save_latest13 == i18n("是") else 0, - 1 if if_cache_gpu17 == i18n("是") else 0, - 1 if if_save_every_weights18 == i18n("是") else 0, - version19, - ) - ) - print(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - p.wait() - return "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log" - - -# but4.click(train_index, [exp_dir1], info3) -def train_index(exp_dir1, version19): - exp_dir = "%s/logs/%s" % (now_dir, exp_dir1) - os.makedirs(exp_dir, exist_ok=True) - feature_dir = ( - "%s/3_feature256" % (exp_dir) - if version19 == "v1" - else "%s/3_feature768" % (exp_dir) - ) - if not os.path.exists(feature_dir): - return "请先进行特征提取!" 
- listdir_res = list(os.listdir(feature_dir)) - if len(listdir_res) == 0: - return "请先进行特征提取!" - infos = [] - npys = [] - for name in sorted(listdir_res): - phone = np.load("%s/%s" % (feature_dir, name)) - npys.append(phone) - big_npy = np.concatenate(npys, 0) - big_npy_idx = np.arange(big_npy.shape[0]) - np.random.shuffle(big_npy_idx) - big_npy = big_npy[big_npy_idx] - if big_npy.shape[0] > 2e5: - # if(1): - infos.append("Trying doing kmeans %s shape to 10k centers." % big_npy.shape[0]) - yield "\n".join(infos) - try: - big_npy = ( - MiniBatchKMeans( - n_clusters=10000, - verbose=True, - batch_size=256 * config.n_cpu, - compute_labels=False, - init="random", - ) - .fit(big_npy) - .cluster_centers_ - ) - except: - info = traceback.format_exc() - print(info) - infos.append(info) - yield "\n".join(infos) - - np.save("%s/total_fea.npy" % exp_dir, big_npy) - n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39) - infos.append("%s,%s" % (big_npy.shape, n_ivf)) - yield "\n".join(infos) - index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf) - # index = faiss.index_factory(256if version19=="v1"else 768, "IVF%s,PQ128x4fs,RFlat"%n_ivf) - infos.append("training") - yield "\n".join(infos) - index_ivf = faiss.extract_index_ivf(index) # - index_ivf.nprobe = 1 - index.train(big_npy) - faiss.write_index( - index, - "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - # faiss.write_index(index, '%s/trained_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19)) - infos.append("adding") - yield "\n".join(infos) - batch_size_add = 8192 - for i in range(0, big_npy.shape[0], batch_size_add): - index.add(big_npy[i : i + batch_size_add]) - faiss.write_index( - index, - "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - infos.append( - "成功构建索引,added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (n_ivf, index_ivf.nprobe, exp_dir1, version19) - ) - # faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19)) - # infos.append("成功构建索引,added_IVF%s_Flat_FastScan_%s.index"%(n_ivf,version19)) - yield "\n".join(infos) - - -# but5.click(train1key, [exp_dir1, sr2, if_f0_3, trainset_dir4, spk_id5, gpus6, np7, f0method8, save_epoch10, total_epoch11, batch_size12, if_save_latest13, pretrained_G14, pretrained_D15, gpus16, if_cache_gpu17], info3) -def train1key( - exp_dir1, - sr2, - if_f0_3, - trainset_dir4, - spk_id5, - np7, - f0method8, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, -): - infos = [] - - def get_info_str(strr): - infos.append(strr) - return "\n".join(infos) - - model_log_dir = "%s/logs/%s" % (now_dir, exp_dir1) - preprocess_log_path = "%s/preprocess.log" % model_log_dir - extract_f0_feature_log_path = "%s/extract_f0_feature.log" % model_log_dir - gt_wavs_dir = "%s/0_gt_wavs" % model_log_dir - feature_dir = ( - "%s/3_feature256" % model_log_dir - if version19 == "v1" - else "%s/3_feature768" % model_log_dir - ) - - os.makedirs(model_log_dir, exist_ok=True) - #########step1:处理数据 - open(preprocess_log_path, "w").close() - cmd = ( - config.python_cmd - + " trainset_preprocess_pipeline_print.py %s %s %s %s " - % (trainset_dir4, sr_dict[sr2], np7, model_log_dir) - + str(config.noparallel) - ) - yield get_info_str(i18n("step1:正在处理数据")) - yield get_info_str(cmd) - p = Popen(cmd, shell=True) - p.wait() 
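-    # Preprocessing is done; echo its log, then continue with pitch (f0)
-    # extraction, feature extraction, model training and index building.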
- with open(preprocess_log_path, "r") as f: - print(f.read()) - #########step2a:提取音高 - open(extract_f0_feature_log_path, "w") - if if_f0_3: - yield get_info_str("step2a:正在提取音高") - cmd = config.python_cmd + " extract_f0_print.py %s %s %s" % ( - model_log_dir, - np7, - f0method8, - ) - yield get_info_str(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - p.wait() - with open(extract_f0_feature_log_path, "r") as f: - print(f.read()) - else: - yield get_info_str(i18n("step2a:无需提取音高")) - #######step2b:提取特征 - yield get_info_str(i18n("step2b:正在提取特征")) - gpus = gpus16.split("-") - leng = len(gpus) - ps = [] - for idx, n_g in enumerate(gpus): - cmd = config.python_cmd + " extract_feature_print.py %s %s %s %s %s %s" % ( - config.device, - leng, - idx, - n_g, - model_log_dir, - version19, - ) - yield get_info_str(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - ps.append(p) - for p in ps: - p.wait() - with open(extract_f0_feature_log_path, "r") as f: - print(f.read()) - #######step3a:训练模型 - yield get_info_str(i18n("step3a:正在训练模型")) - # 生成filelist - if if_f0_3: - f0_dir = "%s/2a_f0" % model_log_dir - f0nsf_dir = "%s/2b-f0nsf" % model_log_dir - names = ( - set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) - & set([name.split(".")[0] for name in os.listdir(feature_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)]) - ) - else: - names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set( - [name.split(".")[0] for name in os.listdir(feature_dir)] - ) - opt = [] - for name in names: - if if_f0_3: - opt.append( - "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - f0_dir.replace("\\", "\\\\"), - name, - f0nsf_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - else: - opt.append( - "%s/%s.wav|%s/%s.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - fea_dim = 256 if version19 == "v1" else 768 - if if_f0_3: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5) - ) - else: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, spk_id5) - ) - shuffle(opt) - with open("%s/filelist.txt" % model_log_dir, "w") as f: - f.write("\n".join(opt)) - yield get_info_str("write filelist done") - if gpus16: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - gpus16, - total_epoch11, - save_epoch10, - "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "", - "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "", - 1 if if_save_latest13 == i18n("是") else 0, - 1 if if_cache_gpu17 == i18n("是") else 0, - 1 if if_save_every_weights18 == i18n("是") else 0, - version19, - ) - ) - else: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - total_epoch11, - save_epoch10, - "-pg 
%s" % pretrained_G14 if pretrained_G14 != "" else "", - "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "", - 1 if if_save_latest13 == i18n("是") else 0, - 1 if if_cache_gpu17 == i18n("是") else 0, - 1 if if_save_every_weights18 == i18n("是") else 0, - version19, - ) - ) - yield get_info_str(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - p.wait() - yield get_info_str(i18n("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log")) - #######step3b:训练索引 - npys = [] - listdir_res = list(os.listdir(feature_dir)) - for name in sorted(listdir_res): - phone = np.load("%s/%s" % (feature_dir, name)) - npys.append(phone) - big_npy = np.concatenate(npys, 0) - - big_npy_idx = np.arange(big_npy.shape[0]) - np.random.shuffle(big_npy_idx) - big_npy = big_npy[big_npy_idx] - - if big_npy.shape[0] > 2e5: - # if(1): - info = "Trying doing kmeans %s shape to 10k centers." % big_npy.shape[0] - print(info) - yield get_info_str(info) - try: - big_npy = ( - MiniBatchKMeans( - n_clusters=10000, - verbose=True, - batch_size=256 * config.n_cpu, - compute_labels=False, - init="random", - ) - .fit(big_npy) - .cluster_centers_ - ) - except: - info = traceback.format_exc() - print(info) - yield get_info_str(info) - - np.save("%s/total_fea.npy" % model_log_dir, big_npy) - - # n_ivf = big_npy.shape[0] // 39 - n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39) - yield get_info_str("%s,%s" % (big_npy.shape, n_ivf)) - index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf) - yield get_info_str("training index") - index_ivf = faiss.extract_index_ivf(index) # - index_ivf.nprobe = 1 - index.train(big_npy) - faiss.write_index( - index, - "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - yield get_info_str("adding index") - batch_size_add = 8192 - for i in range(0, big_npy.shape[0], batch_size_add): - index.add(big_npy[i : i + batch_size_add]) - faiss.write_index( - index, - "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - yield get_info_str( - "成功构建索引, added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (n_ivf, index_ivf.nprobe, exp_dir1, version19) - ) - yield get_info_str(i18n("全流程结束!")) - - -# ckpt_path2.change(change_info_,[ckpt_path2],[sr__,if_f0__]) -def change_info_(ckpt_path): - if not os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log")): - return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"} - try: - with open( - ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r" - ) as f: - info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1]) - sr, f0 = info["sample_rate"], info["if_f0"] - version = "v2" if ("version" in info and info["version"] == "v2") else "v1" - return sr, str(f0), version - except: - traceback.print_exc() - return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"} - - -def export_onnx(ModelPath, ExportedPath): - cpt = torch.load(ModelPath, map_location="cpu") - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - vec_channels = 256 if cpt.get("version", "v1") == "v1" else 768 - - test_phone = torch.rand(1, 200, vec_channels) # hidden unit - test_phone_lengths = torch.tensor([200]).long() # hidden unit 长度(貌似没啥用) - test_pitch = torch.randint(size=(1, 200), low=5, high=255) # 基频(单位赫兹) - test_pitchf = torch.rand(1, 200) # nsf基频 - test_ds = torch.LongTensor([0]) # 说话人ID - test_rnd = torch.rand(1, 192, 200) # 噪声(加入随机因子) - - device = "cpu" # 导出时设备(不影响使用模型) - 
- net_g = SynthesizerTrnMsNSFsidM( - *cpt["config"], is_half=False, version=cpt.get("version", "v1") - ) # fp32导出(C++要支持fp16必须手动将内存重新排列所以暂时不用fp16) - net_g.load_state_dict(cpt["weight"], strict=False) - input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"] - output_names = [ - "audio", - ] - # net_g.construct_spkmixmap(n_speaker) 多角色混合轨道导出 - torch.onnx.export( - net_g, - ( - test_phone.to(device), - test_phone_lengths.to(device), - test_pitch.to(device), - test_pitchf.to(device), - test_ds.to(device), - test_rnd.to(device), - ), - ExportedPath, - dynamic_axes={ - "phone": [1], - "pitch": [1], - "pitchf": [1], - "rnd": [2], - }, - do_constant_folding=False, - opset_version=13, - verbose=False, - input_names=input_names, - output_names=output_names, - ) - return "Finished" - - -with gr.Blocks() as app: - gr.Markdown( - value=i18n( - "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
      如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE." - ) - ) - with gr.Tabs(): - with gr.TabItem(i18n("模型推理")): - with gr.Row(): - sid0 = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names)) - refresh_button = gr.Button(i18n("刷新音色列表和索引路径"), variant="primary") - clean_button = gr.Button(i18n("卸载音色省显存"), variant="primary") - spk_item = gr.Slider( - minimum=0, - maximum=2333, - step=1, - label=i18n("请选择说话人id"), - value=0, - visible=False, - interactive=True, - ) - clean_button.click(fn=clean, inputs=[], outputs=[sid0]) - with gr.Group(): - gr.Markdown( - value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ") - ) - with gr.Row(): - with gr.Column(): - vc_transform0 = gr.Number( - label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0 - ) - input_audio0 = gr.Textbox( - label=i18n("输入待处理音频文件路径(默认是正确格式示例)"), - value="E:\\codes\\py39\\test-20230416b\\todo-songs\\冬之花clip1.wav", - ) - f0method0 = gr.Radio( - label=i18n( - "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU" - ), - choices=["pm", "harvest", "crepe"], - value="pm", - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"), - value=3, - step=1, - interactive=True, - ) - with gr.Column(): - file_index1 = gr.Textbox( - label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"), - value="", - interactive=True, - ) - file_index2 = gr.Dropdown( - label=i18n("自动检测index路径,下拉式选择(dropdown)"), - choices=sorted(index_paths), - interactive=True, - ) - refresh_button.click( - fn=change_choices, inputs=[], outputs=[sid0, file_index2] - ) - # file_big_npy1 = gr.Textbox( - # label=i18n("特征文件路径"), - # value="E:\\codes\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy", - # interactive=True, - # ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("检索特征占比"), - value=0.75, - interactive=True, - ) - with gr.Column(): - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"), - value=0.25, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n( - "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果" - ), - value=0.33, - step=0.01, - interactive=True, - ) - f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调")) - but0 = gr.Button(i18n("转换"), variant="primary") - with gr.Row(): - vc_output1 = gr.Textbox(label=i18n("输出信息")) - vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)")) - but0.click( - vc_single, - [ - spk_item, - input_audio0, - vc_transform0, - f0_file, - f0method0, - file_index1, - file_index2, - # file_big_npy1, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - [vc_output1, vc_output2], - ) - with gr.Group(): - gr.Markdown( - value=i18n("批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. 
") - ) - with gr.Row(): - with gr.Column(): - vc_transform1 = gr.Number( - label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0 - ) - opt_input = gr.Textbox(label=i18n("指定输出文件夹"), value="opt") - f0method1 = gr.Radio( - label=i18n( - "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU" - ), - choices=["pm", "harvest", "crepe"], - value="pm", - interactive=True, - ) - filter_radius1 = gr.Slider( - minimum=0, - maximum=7, - label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"), - value=3, - step=1, - interactive=True, - ) - with gr.Column(): - file_index3 = gr.Textbox( - label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"), - value="", - interactive=True, - ) - file_index4 = gr.Dropdown( - label=i18n("自动检测index路径,下拉式选择(dropdown)"), - choices=sorted(index_paths), - interactive=True, - ) - refresh_button.click( - fn=lambda: change_choices()[1], - inputs=[], - outputs=file_index4, - ) - # file_big_npy2 = gr.Textbox( - # label=i18n("特征文件路径"), - # value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy", - # interactive=True, - # ) - index_rate2 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("检索特征占比"), - value=1, - interactive=True, - ) - with gr.Column(): - resample_sr1 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"), - value=1, - interactive=True, - ) - protect1 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n( - "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果" - ), - value=0.33, - step=0.01, - interactive=True, - ) - with gr.Column(): - dir_input = gr.Textbox( - label=i18n("输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)"), - value="E:\codes\py39\\test-20230416b\\todo-songs", - ) - inputs = gr.File( - file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹") - ) - with gr.Row(): - format1 = gr.Radio( - label=i18n("导出文件格式"), - choices=["wav", "flac", "mp3", "m4a"], - value="flac", - interactive=True, - ) - but1 = gr.Button(i18n("转换"), variant="primary") - vc_output3 = gr.Textbox(label=i18n("输出信息")) - but1.click( - vc_multi, - [ - spk_item, - dir_input, - opt_input, - inputs, - vc_transform1, - f0method1, - file_index3, - file_index4, - # file_big_npy2, - index_rate2, - filter_radius1, - resample_sr1, - rms_mix_rate1, - protect1, - format1, - ], - [vc_output3], - ) - sid0.change( - fn=get_vc, - inputs=[sid0, protect0, protect1], - outputs=[spk_item, protect0, protect1], - ) - with gr.TabItem(i18n("伴奏人声分离&去混响&去回声")): - with gr.Group(): - gr.Markdown( - value=i18n( - "人声伴奏分离批量处理, 使用UVR5模型。
      合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。
      模型分为三类:
      1、保留人声:不带和声的音频选这个,对主人声保留比HP5更好。内置HP2和HP3两个模型,HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点;
      2、仅保留主人声:带和声的音频选这个,对主人声可能有削弱。内置HP5一个模型;
      3、去混响、去延迟模型(by FoxJoy):
        (1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;
       (234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底,DeReverb额外去除混响,可去除单声道混响,但是对高频重的板式混响去不干净。
      去混响/去延迟,附:
      1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍;
      2、MDX-Net-Dereverb模型挺慢的;
      3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。" - ) - ) - with gr.Row(): - with gr.Column(): - dir_wav_input = gr.Textbox( - label=i18n("输入待处理音频文件夹路径"), - value="E:\\codes\\py39\\test-20230416b\\todo-songs\\todo-songs", - ) - wav_inputs = gr.File( - file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹") - ) - with gr.Column(): - model_choose = gr.Dropdown(label=i18n("模型"), choices=uvr5_names) - agg = gr.Slider( - minimum=0, - maximum=20, - step=1, - label="人声提取激进程度", - value=10, - interactive=True, - visible=False, # 先不开放调整 - ) - opt_vocal_root = gr.Textbox( - label=i18n("指定输出主人声文件夹"), value="opt" - ) - opt_ins_root = gr.Textbox( - label=i18n("指定输出非主人声文件夹"), value="opt" - ) - format0 = gr.Radio( - label=i18n("导出文件格式"), - choices=["wav", "flac", "mp3", "m4a"], - value="flac", - interactive=True, - ) - but2 = gr.Button(i18n("转换"), variant="primary") - vc_output4 = gr.Textbox(label=i18n("输出信息")) - but2.click( - uvr, - [ - model_choose, - dir_wav_input, - opt_vocal_root, - wav_inputs, - opt_ins_root, - agg, - format0, - ], - [vc_output4], - ) - with gr.TabItem(i18n("训练")): - gr.Markdown( - value=i18n( - "step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. " - ) - ) - with gr.Row(): - exp_dir1 = gr.Textbox(label=i18n("输入实验名"), value="mi-test") - sr2 = gr.Radio( - label=i18n("目标采样率"), - choices=["40k", "48k"], - value="40k", - interactive=True, - ) - if_f0_3 = gr.Radio( - label=i18n("模型是否带音高指导(唱歌一定要, 语音可以不要)"), - choices=[True, False], - value=True, - interactive=True, - ) - version19 = gr.Radio( - label=i18n("版本"), - choices=["v1", "v2"], - value="v1", - interactive=True, - visible=True, - ) - np7 = gr.Slider( - minimum=0, - maximum=config.n_cpu, - step=1, - label=i18n("提取音高和处理数据使用的CPU进程数"), - value=int(np.ceil(config.n_cpu / 1.5)), - interactive=True, - ) - with gr.Group(): # 暂时单人的, 后面支持最多4人的#数据处理 - gr.Markdown( - value=i18n( - "step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. 
" - ) - ) - with gr.Row(): - trainset_dir4 = gr.Textbox( - label=i18n("输入训练文件夹路径"), value="E:\\语音音频+标注\\米津玄师\\src" - ) - spk_id5 = gr.Slider( - minimum=0, - maximum=4, - step=1, - label=i18n("请指定说话人id"), - value=0, - interactive=True, - ) - but1 = gr.Button(i18n("处理数据"), variant="primary") - info1 = gr.Textbox(label=i18n("输出信息"), value="") - but1.click( - preprocess_dataset, [trainset_dir4, exp_dir1, sr2, np7], [info1] - ) - with gr.Group(): - gr.Markdown(value=i18n("step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)")) - with gr.Row(): - with gr.Column(): - gpus6 = gr.Textbox( - label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"), - value=gpus, - interactive=True, - ) - gpu_info9 = gr.Textbox(label=i18n("显卡信息"), value=gpu_info) - with gr.Column(): - f0method8 = gr.Radio( - label=i18n( - "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢" - ), - choices=["pm", "harvest", "dio"], - value="harvest", - interactive=True, - ) - but2 = gr.Button(i18n("特征提取"), variant="primary") - info2 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8) - but2.click( - extract_f0_feature, - [gpus6, np7, f0method8, if_f0_3, exp_dir1, version19], - [info2], - ) - with gr.Group(): - gr.Markdown(value=i18n("step3: 填写训练设置, 开始训练模型和索引")) - with gr.Row(): - save_epoch10 = gr.Slider( - minimum=0, - maximum=50, - step=1, - label=i18n("保存频率save_every_epoch"), - value=5, - interactive=True, - ) - total_epoch11 = gr.Slider( - minimum=0, - maximum=1000, - step=1, - label=i18n("总训练轮数total_epoch"), - value=20, - interactive=True, - ) - batch_size12 = gr.Slider( - minimum=1, - maximum=40, - step=1, - label=i18n("每张显卡的batch_size"), - value=default_batch_size, - interactive=True, - ) - if_save_latest13 = gr.Radio( - label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"), - choices=[i18n("是"), i18n("否")], - value=i18n("否"), - interactive=True, - ) - if_cache_gpu17 = gr.Radio( - label=i18n( - "是否缓存所有训练集至显存. 
10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速" - ), - choices=[i18n("是"), i18n("否")], - value=i18n("否"), - interactive=True, - ) - if_save_every_weights18 = gr.Radio( - label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"), - choices=[i18n("是"), i18n("否")], - value=i18n("否"), - interactive=True, - ) - with gr.Row(): - pretrained_G14 = gr.Textbox( - label=i18n("加载预训练底模G路径"), - value="pretrained/f0G40k.pth", - interactive=True, - ) - pretrained_D15 = gr.Textbox( - label=i18n("加载预训练底模D路径"), - value="pretrained/f0D40k.pth", - interactive=True, - ) - sr2.change( - change_sr2, - [sr2, if_f0_3, version19], - [pretrained_G14, pretrained_D15], - ) - version19.change( - change_version19, - [sr2, if_f0_3, version19], - [pretrained_G14, pretrained_D15, sr2], - ) - if_f0_3.change( - change_f0, - [if_f0_3, sr2, version19], - [f0method8, pretrained_G14, pretrained_D15], - ) - gpus16 = gr.Textbox( - label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"), - value=gpus, - interactive=True, - ) - but3 = gr.Button(i18n("训练模型"), variant="primary") - but4 = gr.Button(i18n("训练特征索引"), variant="primary") - but5 = gr.Button(i18n("一键训练"), variant="primary") - info3 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=10) - but3.click( - click_train, - [ - exp_dir1, - sr2, - if_f0_3, - spk_id5, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, - ], - info3, - ) - but4.click(train_index, [exp_dir1, version19], info3) - but5.click( - train1key, - [ - exp_dir1, - sr2, - if_f0_3, - trainset_dir4, - spk_id5, - np7, - f0method8, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, - ], - info3, - ) - - with gr.TabItem(i18n("ckpt处理")): - with gr.Group(): - gr.Markdown(value=i18n("模型融合, 可用于测试音色融合")) - with gr.Row(): - ckpt_a = gr.Textbox(label=i18n("A模型路径"), value="", interactive=True) - ckpt_b = gr.Textbox(label=i18n("B模型路径"), value="", interactive=True) - alpha_a = gr.Slider( - minimum=0, - maximum=1, - label=i18n("A模型权重"), - value=0.5, - interactive=True, - ) - with gr.Row(): - sr_ = gr.Radio( - label=i18n("目标采样率"), - choices=["40k", "48k"], - value="40k", - interactive=True, - ) - if_f0_ = gr.Radio( - label=i18n("模型是否带音高指导"), - choices=[i18n("是"), i18n("否")], - value=i18n("是"), - interactive=True, - ) - info__ = gr.Textbox( - label=i18n("要置入的模型信息"), value="", max_lines=8, interactive=True - ) - name_to_save0 = gr.Textbox( - label=i18n("保存的模型名不带后缀"), - value="", - max_lines=1, - interactive=True, - ) - version_2 = gr.Radio( - label=i18n("模型版本型号"), - choices=["v1", "v2"], - value="v1", - interactive=True, - ) - with gr.Row(): - but6 = gr.Button(i18n("融合"), variant="primary") - info4 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8) - but6.click( - merge, - [ - ckpt_a, - ckpt_b, - alpha_a, - sr_, - if_f0_, - info__, - name_to_save0, - version_2, - ], - info4, - ) # def merge(path1,path2,alpha1,sr,f0,info): - with gr.Group(): - gr.Markdown(value=i18n("修改模型信息(仅支持weights文件夹下提取的小模型文件)")) - with gr.Row(): - ckpt_path0 = gr.Textbox( - label=i18n("模型路径"), value="", interactive=True - ) - info_ = gr.Textbox( - label=i18n("要改的模型信息"), value="", max_lines=8, interactive=True - ) - name_to_save1 = gr.Textbox( - label=i18n("保存的文件名, 默认空为和源文件同名"), - value="", - max_lines=8, - interactive=True, - ) - with gr.Row(): - but7 = gr.Button(i18n("修改"), variant="primary") - info5 = gr.Textbox(label=i18n("输出信息"), value="", 
max_lines=8) - but7.click(change_info, [ckpt_path0, info_, name_to_save1], info5) - with gr.Group(): - gr.Markdown(value=i18n("查看模型信息(仅支持weights文件夹下提取的小模型文件)")) - with gr.Row(): - ckpt_path1 = gr.Textbox( - label=i18n("模型路径"), value="", interactive=True - ) - but8 = gr.Button(i18n("查看"), variant="primary") - info6 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8) - but8.click(show_info, [ckpt_path1], info6) - with gr.Group(): - gr.Markdown( - value=i18n( - "模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况" - ) - ) - with gr.Row(): - ckpt_path2 = gr.Textbox( - label=i18n("模型路径"), - value="E:\\codes\\py39\\logs\\mi-test_f0_48k\\G_23333.pth", - interactive=True, - ) - save_name = gr.Textbox( - label=i18n("保存名"), value="", interactive=True - ) - sr__ = gr.Radio( - label=i18n("目标采样率"), - choices=["32k", "40k", "48k"], - value="40k", - interactive=True, - ) - if_f0__ = gr.Radio( - label=i18n("模型是否带音高指导,1是0否"), - choices=["1", "0"], - value="1", - interactive=True, - ) - version_1 = gr.Radio( - label=i18n("模型版本型号"), - choices=["v1", "v2"], - value="v2", - interactive=True, - ) - info___ = gr.Textbox( - label=i18n("要置入的模型信息"), value="", max_lines=8, interactive=True - ) - but9 = gr.Button(i18n("提取"), variant="primary") - info7 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8) - ckpt_path2.change( - change_info_, [ckpt_path2], [sr__, if_f0__, version_1] - ) - but9.click( - extract_small_model, - [ckpt_path2, save_name, sr__, if_f0__, info___, version_1], - info7, - ) - - with gr.TabItem(i18n("Onnx导出")): - with gr.Row(): - ckpt_dir = gr.Textbox(label=i18n("RVC模型路径"), value="", interactive=True) - with gr.Row(): - onnx_dir = gr.Textbox( - label=i18n("Onnx输出路径"), value="", interactive=True - ) - with gr.Row(): - infoOnnx = gr.Label(label="info") - with gr.Row(): - butOnnx = gr.Button(i18n("导出Onnx模型"), variant="primary") - butOnnx.click(export_onnx, [ckpt_dir, onnx_dir], infoOnnx) - - tab_faq = i18n("常见问题解答") - with gr.TabItem(tab_faq): - try: - if tab_faq == "常见问题解答": - with open("docs/faq.md", "r", encoding="utf8") as f: - info = f.read() - else: - with open("docs/faq_en.md", "r", encoding="utf8") as f: - info = f.read() - gr.Markdown(value=info) - except: - gr.Markdown(traceback.format_exc()) - - # with gr.TabItem(i18n("招募音高曲线前端编辑器")): - # gr.Markdown(value=i18n("加开发群联系我xxxxx")) - # with gr.TabItem(i18n("点击查看交流、问题反馈群号")): - # gr.Markdown(value=i18n("xxxxx")) - - if config.iscolab: - app.queue(concurrency_count=511, max_size=1022).launch(share=True) - else: - app.queue(concurrency_count=511, max_size=1022).launch( - server_name="0.0.0.0", - inbrowser=not config.noautoopen, - server_port=config.listen_port, - quiet=True, - ) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/max_iou_assigner.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/max_iou_assigner.py deleted file mode 100644 index 5cf4c4b4b450f87dfb99c3d33d8ed83d3e5cfcb3..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/max_iou_assigner.py +++ /dev/null @@ -1,212 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class MaxIoUAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. 
- - Each proposals will be assigned with `-1`, or a semi-positive integer - indicating the ground truth index. - - - -1: negative sample, no assigned gt - - semi-positive integer: positive sample, index (0-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - ignore_iof_thr (float): IoF threshold for ignoring bboxes (if - `gt_bboxes_ignore` is specified). Negative values mean not - ignoring any bboxes. - ignore_wrt_candidates (bool): Whether to compute the iof between - `bboxes` and `gt_bboxes_ignore`, or the contrary. - match_low_quality (bool): Whether to allow low quality matches. This is - usually allowed for RPN and single stage detectors, but not allowed - in the second stage. Details are demonstrated in Step 4. - gpu_assign_thr (int): The upper bound of the number of GT for GPU - assign. When the number of gt is above this threshold, will assign - on CPU device. Negative values mean not assign on CPU. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - ignore_iof_thr=-1, - ignore_wrt_candidates=True, - match_low_quality=True, - gpu_assign_thr=-1, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to bboxes. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, or a semi-positive number. -1 means negative - sample, semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to the background - 2. assign proposals whose iou with all gts < neg_iou_thr to 0 - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. 
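The four assignment steps enumerated above reduce to a handful of tensor ops. A minimal, self-contained sketch — ignoring the ignore-region and gt_max_assign_all branches for brevity — with a toy overlaps matrix that triggers the step-4 overwrite this class's match_low_quality comments warn about:

```python
import torch

def max_iou_assign(overlaps, pos_thr=0.5, neg_thr=0.5, min_pos_iou=0.0):
    """overlaps: (num_gts, num_bboxes) IoU matrix; returns per-bbox gt indices."""
    num_gts, num_bboxes = overlaps.shape
    gt_inds = overlaps.new_full((num_bboxes,), -1, dtype=torch.long)  # 1. unassigned
    max_ov, argmax_ov = overlaps.max(dim=0)
    gt_inds[(max_ov >= 0) & (max_ov < neg_thr)] = 0                   # 2. background
    pos = max_ov >= pos_thr
    gt_inds[pos] = argmax_ov[pos] + 1                                 # 3. positives
    gt_max_ov, gt_argmax_ov = overlaps.max(dim=1)
    for i in range(num_gts):                                          # 4. best proposal per gt
        if gt_max_ov[i] >= min_pos_iou:
            gt_inds[gt_argmax_ov[i]] = i + 1
    return gt_inds

# Anchor B has IoU 0.9 with gt 1 and only 0.4 with gt 2, yet step 4 reassigns
# it to gt 2 because B is gt 2's best proposal -- the low-quality-match caveat.
ov = torch.tensor([[0.6, 0.9],   # gt 1 vs anchors A, B
                   [0.1, 0.4]])  # gt 2 vs anchors A, B
print(max_iou_assign(ov))  # tensor([1, 2])
```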
- - Example: - >>> self = MaxIoUAssigner(0.5, 0.5) - >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]]) - >>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]]) - >>> assign_result = self.assign(bboxes, gt_bboxes) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - gt_bboxes.shape[0] > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = bboxes.device - bboxes = bboxes.cpu() - gt_bboxes = gt_bboxes.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - if gt_labels is not None: - gt_labels = gt_labels.cpu() - - overlaps = self.iou_calculator(gt_bboxes, bboxes) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, bboxes, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result - - def assign_wrt_overlaps(self, overlaps, gt_labels=None): - """Assign w.r.t. the overlaps of bboxes with gts. - - Args: - overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes, - shape(k, n). - gt_labels (Tensor, optional): Labels of k gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = overlaps.size(0), overlaps.size(1) - - # 1. assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - # 2. assign negative: below - # the negative inds are set to be 0 - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps < self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, tuple): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0]) - & (max_overlaps < self.neg_iou_thr[1])] = 0 - - # 3. 
assign positive: above positive IoU threshold - pos_inds = max_overlaps >= self.pos_iou_thr - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - if self.match_low_quality: - # Low-quality matching will overwrite the assigned_gt_inds assigned - # in Step 3. Thus, the assigned gt might not be the best one for - # prediction. - # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2, - # bbox 1 will be assigned as the best target for bbox A in step 3. - # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's - # assigned_gt_inds will be overwritten to be bbox B. - # This might be the reason that it is not used in ROI Heads. - for i in range(num_gts): - if gt_max_overlaps[i] >= self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = overlaps[i, :] == gt_max_overlaps[i] - assigned_gt_inds[max_iou_inds] = i + 1 - else: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/logger.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/logger.py deleted file mode 100644 index 6fc6e6b438a73e857ba6f173594985807cb88b30..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/logger.py +++ /dev/null @@ -1,19 +0,0 @@ -import logging - -from mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO): - """Get root logger. - - Args: - log_file (str, optional): File path of log. Defaults to None. - log_level (int, optional): The level of logger. - Defaults to logging.INFO. - - Returns: - :obj:`logging.Logger`: The obtained logger - """ - logger = get_logger(name='mmdet', log_file=log_file, log_level=log_level) - - return logger diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/base_dense_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/base_dense_head.py deleted file mode 100644 index de11e4a2197b1dfe241ce7a66daa1907a8fc5661..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/base_dense_head.py +++ /dev/null @@ -1,59 +0,0 @@ -from abc import ABCMeta, abstractmethod - -import torch.nn as nn - - -class BaseDenseHead(nn.Module, metaclass=ABCMeta): - """Base class for DenseHeads.""" - - def __init__(self): - super(BaseDenseHead, self).__init__() - - @abstractmethod - def loss(self, **kwargs): - """Compute losses of the head.""" - pass - - @abstractmethod - def get_bboxes(self, **kwargs): - """Transform network output for a batch into bbox predictions.""" - pass - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). 
- gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg) - return losses, proposal_list diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test.sh b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test.sh deleted file mode 100644 index d9a85e7a0d3b7c96b060f473d41254b37a382fcb..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/test.py ${work_path}/test_config_h32.py \ - ${work_path}/ckpt/latest.pth \ - --launcher pytorch \ - --eval mIoU \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/SLAYEROFALL3050/AudioGenerator/README.md b/spaces/SLAYEROFALL3050/AudioGenerator/README.md deleted file mode 100644 index 27bbf7a5185ea121bbfb1e91ed2e49f15ff816cb..0000000000000000000000000000000000000000 --- a/spaces/SLAYEROFALL3050/AudioGenerator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Music Generation using ML -emoji: 🧐 -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/SE_module.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/SE_module.py deleted file mode 100644 index f370fd4e1fb777306e37f4a7c7be99bd0fbca64a..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/SE_module.py +++ /dev/null @@ -1,24 +0,0 @@ -# ----------------------------------------------------- -# Copyright (c) Shanghai Jiao Tong University. All rights reserved. 
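forward_train above fixes the contract every dense head implements: run the head on the FPN features, feed the raw outputs plus ground truth to loss(), and optionally decode proposals with get_bboxes(). A toy subclass sketching just that contract — the 1x1 conv head and the placeholder objective are illustrative, not mmdet's real heads:

```python
import torch
import torch.nn as nn

class ToyDenseHead(nn.Module):
    """Minimal stand-in for BaseDenseHead: one score map per FPN level."""

    def __init__(self, in_channels=256):
        super().__init__()
        self.cls_conv = nn.Conv2d(in_channels, 1, 1)

    def forward(self, feats):
        return ([self.cls_conv(f) for f in feats],)  # the `outs = self(x)` call above

    def loss(self, cls_scores, gt_bboxes, img_metas, gt_bboxes_ignore=None):
        # Placeholder objective; real heads match predictions to gt boxes.
        return {"loss_cls": sum(s.sigmoid().mean() for s in cls_scores)}

    def get_bboxes(self, cls_scores, img_metas, cfg=None):
        # Real heads decode (n, 5) proposals [x1, y1, x2, y2, score] per image.
        return [torch.zeros(0, 5) for _ in img_metas]

feats = [torch.rand(2, 256, s, s) for s in (32, 16, 8)]  # fake FPN pyramid
head = ToyDenseHead()
outs = head(feats)
losses = head.loss(*outs, gt_bboxes=None, img_metas=[{}, {}])
proposals = head.get_bboxes(*outs, img_metas=[{}, {}], cfg=None)
```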
-# Written by Jiefeng Li (jeff.lee.sjtu@gmail.com) -# ----------------------------------------------------- - -from torch import nn - - -class SELayer(nn.Module): - def __init__(self, channel, reduction=1): - super(SELayer, self).__init__() - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction), - nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel), - nn.Sigmoid() - ) - - def forward(self, x): - b, c, _, _ = x.size() - y = self.avg_pool(x).view(b, c) - y = self.fc(y).view(b, c, 1, 1) - return x * y diff --git a/spaces/Sourabh2/detectron2-segmentation/README.md b/spaces/Sourabh2/detectron2-segmentation/README.md deleted file mode 100644 index 8f98c487ef97d8e279478689da4379750919feda..0000000000000000000000000000000000000000 --- a/spaces/Sourabh2/detectron2-segmentation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Detectron2 Segmentation -emoji: 🐠 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SpacesExamples/secret-example/main.py b/spaces/SpacesExamples/secret-example/main.py deleted file mode 100644 index 5ef99b7d3a7905e030c415fdd73b9166ee88a753..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/secret-example/main.py +++ /dev/null @@ -1,13 +0,0 @@ -from typing import Union - -from fastapi import FastAPI -import os - -app = FastAPI() - - -@app.get("/") -def read_root(): - return {"Hello EXAMPLE": os.environ.get("EXAMPLE"), - "Hello SECRET_EXAMPLE": os.environ.get("SECRET_EXAMPLE") - } diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_lexers.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_lexers.py deleted file mode 100644 index 000b8fe6fd98c4017d5be56448cad68798b087a4..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_lexers.py +++ /dev/null @@ -1,184 +0,0 @@ -"""Test lexers module""" - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - -from unittest import TestCase -from pygments import __version__ as pygments_version -from pygments.token import Token -from pygments.lexers import BashLexer - -from .. import lexers - -pyg214 = tuple(int(x) for x in pygments_version.split(".")[:2]) >= (2, 14) - - -class TestLexers(TestCase): - """Collection of lexers tests""" - def setUp(self): - self.lexer = lexers.IPythonLexer() - self.bash_lexer = BashLexer() - - def testIPythonLexer(self): - fragment = '!echo $HOME\n' - bash_tokens = [ - (Token.Operator, '!'), - ] - bash_tokens.extend(self.bash_lexer.get_tokens(fragment[1:])) - ipylex_token = list(self.lexer.get_tokens(fragment)) - assert bash_tokens[:-1] == ipylex_token[:-1] - - fragment_2 = "!" 
+ fragment - tokens_2 = [ - (Token.Operator, '!!'), - ] + bash_tokens[1:] - assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1] - - fragment_2 = '\t %%!\n' + fragment[1:] - tokens_2 = [ - (Token.Text, '\t '), - (Token.Operator, '%%!'), - (Token.Text, '\n'), - ] + bash_tokens[1:] - assert tokens_2 == list(self.lexer.get_tokens(fragment_2)) - - fragment_2 = 'x = ' + fragment - tokens_2 = [ - (Token.Name, 'x'), - (Token.Text, ' '), - (Token.Operator, '='), - (Token.Text, ' '), - ] + bash_tokens - assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1] - - fragment_2 = 'x, = ' + fragment - tokens_2 = [ - (Token.Name, 'x'), - (Token.Punctuation, ','), - (Token.Text, ' '), - (Token.Operator, '='), - (Token.Text, ' '), - ] + bash_tokens - assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1] - - fragment_2 = 'x, = %sx ' + fragment[1:] - tokens_2 = [ - (Token.Name, 'x'), - (Token.Punctuation, ','), - (Token.Text, ' '), - (Token.Operator, '='), - (Token.Text, ' '), - (Token.Operator, '%'), - (Token.Keyword, 'sx'), - (Token.Text, ' '), - ] + bash_tokens[1:] - if tokens_2[7] == (Token.Text, " ") and pyg214: # pygments 2.14+ - tokens_2[7] = (Token.Text.Whitespace, " ") - assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1] - - fragment_2 = 'f = %R function () {}\n' - tokens_2 = [ - (Token.Name, 'f'), - (Token.Text, ' '), - (Token.Operator, '='), - (Token.Text, ' '), - (Token.Operator, '%'), - (Token.Keyword, 'R'), - (Token.Text, ' function () {}\n'), - ] - assert tokens_2 == list(self.lexer.get_tokens(fragment_2)) - - fragment_2 = '\t%%xyz\n$foo\n' - tokens_2 = [ - (Token.Text, '\t'), - (Token.Operator, '%%'), - (Token.Keyword, 'xyz'), - (Token.Text, '\n$foo\n'), - ] - assert tokens_2 == list(self.lexer.get_tokens(fragment_2)) - - fragment_2 = '%system?\n' - tokens_2 = [ - (Token.Operator, '%'), - (Token.Keyword, 'system'), - (Token.Operator, '?'), - (Token.Text, '\n'), - ] - assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1] - - fragment_2 = 'x != y\n' - tokens_2 = [ - (Token.Name, 'x'), - (Token.Text, ' '), - (Token.Operator, '!='), - (Token.Text, ' '), - (Token.Name, 'y'), - (Token.Text, '\n'), - ] - assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1] - - fragment_2 = ' ?math.sin\n' - tokens_2 = [ - (Token.Text, ' '), - (Token.Operator, '?'), - (Token.Text, 'math.sin'), - (Token.Text, '\n'), - ] - assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1] - - fragment = ' *int*?\n' - tokens = [ - (Token.Text, ' *int*'), - (Token.Operator, '?'), - (Token.Text, '\n'), - ] - assert tokens == list(self.lexer.get_tokens(fragment)) - - fragment = '%%writefile -a foo.py\nif a == b:\n pass' - tokens = [ - (Token.Operator, '%%writefile'), - (Token.Text, ' -a foo.py\n'), - (Token.Keyword, 'if'), - (Token.Text, ' '), - (Token.Name, 'a'), - (Token.Text, ' '), - (Token.Operator, '=='), - (Token.Text, ' '), - (Token.Name, 'b'), - (Token.Punctuation, ':'), - (Token.Text, '\n'), - (Token.Text, ' '), - (Token.Keyword, 'pass'), - (Token.Text, '\n'), - ] - if tokens[10] == (Token.Text, "\n") and pyg214: # pygments 2.14+ - tokens[10] = (Token.Text.Whitespace, "\n") - assert tokens[:-1] == list(self.lexer.get_tokens(fragment))[:-1] - - fragment = '%%timeit\nmath.sin(0)' - tokens = [ - (Token.Operator, '%%timeit\n'), - (Token.Name, 'math'), - (Token.Operator, '.'), - (Token.Name, 'sin'), - (Token.Punctuation, '('), - (Token.Literal.Number.Integer, '0'), - (Token.Punctuation, ')'), - (Token.Text, '\n'), - ] - - 
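Every case in this test follows one template: lex a fragment with get_tokens() and compare the resulting (token type, text) pairs against an expected list. The same check works against any stock pygments lexer, for example:

```python
from pygments.lexers import PythonLexer
from pygments.token import Token

# Lex a fragment and compare (token type, text) pairs, as the tests above do.
tokens = list(PythonLexer().get_tokens("x = 1\n"))
assert tokens[0] == (Token.Name, "x")
assert (Token.Literal.Number.Integer, "1") in tokens
```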
fragment = '%%HTML\n<div>foo</div>\n
      ' - tokens = [ - (Token.Operator, '%%HTML'), - (Token.Text, '\n'), - (Token.Punctuation, '<'), - (Token.Name.Tag, 'div'), - (Token.Punctuation, '>'), - (Token.Text, 'foo'), - (Token.Punctuation, '<'), - (Token.Punctuation, '/'), - (Token.Name.Tag, 'div'), - (Token.Punctuation, '>'), - (Token.Text, '\n'), - ] - assert tokens == list(self.lexer.get_tokens(fragment)) diff --git a/spaces/TEnngal/bingo/src/lib/bots/bing/index.ts b/spaces/TEnngal/bingo/src/lib/bots/bing/index.ts deleted file mode 100644 index c75c69f94af8c3db92d4c90d465c219a2af72a4d..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,432 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? 
cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'ActionRequest', - 'Chat', - 'Context', - 'InternalSearchQuery', - 'InternalSearchResult', - 'Disengaged', - 'InternalLoaderMessage', - 'Progress', - 'RenderCardRequest', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('你的 VPS 或代理可能被封禁,如有疑问,请前往 https://github.com/weaigc/bingo 咨询', ErrorCode.BING_IP_FORBIDDEN) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - if (/fetch failed/i.test(message || '')) { - throw new ChatError(errorMsg, ErrorCode.BING_IP_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'TryLater') { - throw new ChatError(errorMsg, ErrorCode.BING_TRY_LATER) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await 
this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) - }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/TH5314/newbing/src/components/chat-header.tsx b/spaces/TH5314/newbing/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
<div>
-      <Image alt="logo" src={LogoIcon} />
-      <div>欢迎使用新必应</div>
-      <div>由 AI 支持的网页版 Copilot</div>
-    </div>
      - ) -} diff --git a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/utils.py b/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/utils.py deleted file mode 100644 index 5f98aafadb83a9f341d6d9d3401c6c3101485b4e..0000000000000000000000000000000000000000 --- a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/utils.py +++ /dev/null @@ -1,356 +0,0 @@ -import os -import glob -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logger = logging.getLogger(__name__) - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if ( - optimizer is not None - and not skip_optimizer - and checkpoint_dict["optimizer"] is not None - ): - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - elif optimizer is None and not skip_optimizer: - # else: Disable this line if Infer and resume checkpoint,then enable the line upper - new_opt_dict = optimizer.state_dict() - new_opt_dict_params = new_opt_dict["param_groups"][0]["params"] - new_opt_dict["param_groups"] = checkpoint_dict["optimizer"]["param_groups"] - new_opt_dict["param_groups"][0]["params"] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - - saved_state_dict = checkpoint_dict["model"] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "emb_g" not in k - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, ( - saved_state_dict[k].shape, - v.shape, - ) - except: - # For upgrading from the old version - if "ja_bert_proj" in k: - v = torch.zeros_like(v) - logger.warn( - f"Seems you are using the old version of the model, the {k} is automatically set to zero for backward compatibility" - ) - else: - logger.error(f"{k} is not in the checkpoint") - - new_state_dict[k] = v - - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - - logger.info( - "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration) - ) - - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save( - { - "model": state_dict, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def summarize( - writer, - global_step, - scalars={}, - histograms={}, - images={}, - audios={}, - audio_sampling_rate=22050, -): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats="HWC") - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - 
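load_checkpoint above migrates old checkpoints by walking the live model's state_dict, copying shape-matched tensors by name, and zero-filling anything the saved model lacked (the ja_bert_proj backward-compatibility case). The same tolerant-loading pattern in isolation — the Linear model and key names are just stand-ins:

```python
import torch

def load_tolerant(model: torch.nn.Module, saved_state: dict) -> None:
    new_state = {}
    for name, param in model.state_dict().items():
        if name in saved_state and saved_state[name].shape == param.shape:
            new_state[name] = saved_state[name]        # exact match: copy over
        else:
            new_state[name] = torch.zeros_like(param)  # missing/resized: zero-fill
            print(f"{name} not found in checkpoint, zero-initialized")
    model.load_state_dict(new_state, strict=False)

net = torch.nn.Linear(4, 2)
load_tolerant(net, {"weight": torch.ones(2, 4)})  # 'bias' gets zero-filled
```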
f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow( - alignment.transpose(), aspect="auto", origin="lower", interpolation="none" - ) - fig.colorbar(im, ax=ax) - xlabel = "Decoder timestep" - if info is not None: - xlabel += "\n\n" + info - plt.xlabel(xlabel) - plt.ylabel("Encoder timestep") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding="utf-8") as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument( - "-c", - "--config", - type=str, - default="./configs/base.json", - help="JSON file for configuration", - ) - parser.add_argument("-m", "--model", type=str, required=True, help="Model name") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - with open(config_save_path, "w", encoding="utf-8") as f: - f.write(data) - else: - with open(config_save_path, "r", vencoding="utf-8") as f: - data = f.read() - config = json.loads(data) - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def clean_checkpoints(path_to_models="logs/44k/", n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - - ckpts_files = [ - f - for f in os.listdir(path_to_models) - if os.path.isfile(os.path.join(path_to_models, f)) - ] - - def name_key(_f): - return int(re.compile("._(\\d+)\\.pth").match(_f).group(1)) - - def time_key(_f): - return os.path.getmtime(os.path.join(path_to_models, _f)) - - sort_key = 
time_key if sort_by_time else name_key - - def x_sorted(_x): - return sorted( - [f for f in ckpts_files if f.startswith(_x) and not f.endswith("_0.pth")], - key=sort_key, - ) - - to_del = [ - os.path.join(path_to_models, fn) - for fn in (x_sorted("G")[:-n_ckpts_to_keep] + x_sorted("D")[:-n_ckpts_to_keep]) - ] - - def del_info(fn): - return logger.info(f".. Free up space by deleting ckpt {fn}") - - def del_routine(x): - return [os.remove(x), del_info(x)] - - [del_routine(fn) for fn in to_del] - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn( - "{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - ) - ) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8] - ) - ) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/TM9450/Income_prediction/app.py b/spaces/TM9450/Income_prediction/app.py deleted file mode 100644 index f5090dcf86d32f6ae9efc23f702c896faa946dea..0000000000000000000000000000000000000000 --- a/spaces/TM9450/Income_prediction/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import joblib -import pandas as pd -import streamlit as st - - -EDU_DICT = {'Preschool': 1, - '1st-4th': 2, - '5th-6th': 3, - '7th-8th': 4, - '9th': 5, - '10th': 6, - '11th': 7, - '12th': 8, - 'HS-grad': 9, - 'Some-college': 10, - 'Assoc-voc': 11, - 'Assoc-acdm': 12, - 'Bachelors': 13, - 'Masters': 14, - 'Prof-school': 15, - 'Doctorate': 16 - } - -model = joblib.load('model.joblib') -unique_values = joblib.load('unique_values.joblib') - -unique_class = unique_values["workclass"] -unique_education = unique_values["education"] -unique_marital_status = 
unique_values["marital.status"] -unique_relationship = unique_values["relationship"] -unique_occupation = unique_values["occupation"] -unique_sex = unique_values["sex"] -unique_race = unique_values["race"] -unique_country = unique_values["native.country"] - -def main(): - st.title("Adult Income") - - with st.form("questionaire"): - age = st.slider("Age", min_value=10, max_value=100) - workclass = st.selectbox("Workclass", options=unique_class) - education = st.selectbox("Education", options=unique_education) - Marital_Status = st.selectbox("Marital_Status", options=unique_marital_status) - occupation = st.selectbox("Occupation", options=unique_occupation) - relationship = st.selectbox("Relationship", options=unique_relationship) - race = st.selectbox("Race", options=unique_race) - sex = st.selectbox("Sex", options=unique_sex) - hours_per_week = st.slider("Hours_per_week", min_value=1, max_value=100) - native_country = st.selectbox("Native_country", options=unique_country) - - # clicked==True only when the button is clicked - clicked = st.form_submit_button("Predict income") - if clicked: - result=model.predict(pd.DataFrame({"age": [age], - "workclass": [workclass], - "education": [EDU_DICT[education]], - "marital.status": [Marital_Status], - "occupation": [occupation], - "relationship": [relationship], - "race": [race], - "sex": [sex], - "hours.per.week": [hours_per_week], - "native.country": [native_country]})) - # Show prediction - result = '>50K' if result[0] == 1 else '<=50K' - st.success("Your predicted income is "+result) - -# Run main() -#บางคนเขาไม่อยากรันเลยใส่if ไว้ -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/archive_util.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/archive_util.py deleted file mode 100644 index 7f9e1e00ccdb0e67a5601db4707a1cfa46cbc96f..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/archive_util.py +++ /dev/null @@ -1,280 +0,0 @@ -"""distutils.archive_util - -Utility functions for creating archive files (tarballs, zip files, -that sort of thing).""" - -import os -from warnings import warn -import sys - -try: - import zipfile -except ImportError: - zipfile = None - - -from .errors import DistutilsExecError -from .spawn import spawn -from .dir_util import mkpath -from ._log import log - -try: - from pwd import getpwnam -except ImportError: - getpwnam = None - -try: - from grp import getgrnam -except ImportError: - getgrnam = None - - -def _get_gid(name): - """Returns a gid, given a group name.""" - if getgrnam is None or name is None: - return None - try: - result = getgrnam(name) - except KeyError: - result = None - if result is not None: - return result[2] - return None - - -def _get_uid(name): - """Returns an uid, given a user name.""" - if getpwnam is None or name is None: - return None - try: - result = getpwnam(name) - except KeyError: - result = None - if result is not None: - return result[2] - return None - - -def make_tarball( - base_name, base_dir, compress="gzip", verbose=0, dry_run=0, owner=None, group=None -): - """Create a (possibly compressed) tar file from all the files under - 'base_dir'. - - 'compress' must be "gzip" (the default), "bzip2", "xz", "compress", or - None. 
("compress" will be deprecated in Python 3.2) - - 'owner' and 'group' can be used to define an owner and a group for the - archive that is being built. If not provided, the current owner and group - will be used. - - The output tar file will be named 'base_dir' + ".tar", possibly plus - the appropriate compression extension (".gz", ".bz2", ".xz" or ".Z"). - - Returns the output filename. - """ - tar_compression = { - 'gzip': 'gz', - 'bzip2': 'bz2', - 'xz': 'xz', - None: '', - 'compress': '', - } - compress_ext = {'gzip': '.gz', 'bzip2': '.bz2', 'xz': '.xz', 'compress': '.Z'} - - # flags for compression program, each element of list will be an argument - if compress is not None and compress not in compress_ext.keys(): - raise ValueError( - "bad value for 'compress': must be None, 'gzip', 'bzip2', " - "'xz' or 'compress'" - ) - - archive_name = base_name + '.tar' - if compress != 'compress': - archive_name += compress_ext.get(compress, '') - - mkpath(os.path.dirname(archive_name), dry_run=dry_run) - - # creating the tarball - import tarfile # late import so Python build itself doesn't break - - log.info('Creating tar archive') - - uid = _get_uid(owner) - gid = _get_gid(group) - - def _set_uid_gid(tarinfo): - if gid is not None: - tarinfo.gid = gid - tarinfo.gname = group - if uid is not None: - tarinfo.uid = uid - tarinfo.uname = owner - return tarinfo - - if not dry_run: - tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress]) - try: - tar.add(base_dir, filter=_set_uid_gid) - finally: - tar.close() - - # compression using `compress` - if compress == 'compress': - warn("'compress' is deprecated.", DeprecationWarning) - # the option varies depending on the platform - compressed_name = archive_name + compress_ext[compress] - if sys.platform == 'win32': - cmd = [compress, archive_name, compressed_name] - else: - cmd = [compress, '-f', archive_name] - spawn(cmd, dry_run=dry_run) - return compressed_name - - return archive_name - - -def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): # noqa: C901 - """Create a zip file from all the files under 'base_dir'. - - The output zip file will be named 'base_name' + ".zip". Uses either the - "zipfile" Python module (if available) or the InfoZIP "zip" utility - (if installed and found on the default search path). If neither tool is - available, raises DistutilsExecError. Returns the name of the output zip - file. - """ - zip_filename = base_name + ".zip" - mkpath(os.path.dirname(zip_filename), dry_run=dry_run) - - # If zipfile module is not available, try spawning an external - # 'zip' command. - if zipfile is None: - if verbose: - zipoptions = "-r" - else: - zipoptions = "-rq" - - try: - spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run) - except DistutilsExecError: - # XXX really should distinguish between "couldn't find - # external 'zip' command" and "zip failed". 
- raise DistutilsExecError( - ( - "unable to create zip file '%s': " - "could neither import the 'zipfile' module nor " - "find a standalone zip utility" - ) - % zip_filename - ) - - else: - log.info("creating '%s' and adding '%s' to it", zip_filename, base_dir) - - if not dry_run: - try: - zip = zipfile.ZipFile( - zip_filename, "w", compression=zipfile.ZIP_DEFLATED - ) - except RuntimeError: - zip = zipfile.ZipFile(zip_filename, "w", compression=zipfile.ZIP_STORED) - - with zip: - if base_dir != os.curdir: - path = os.path.normpath(os.path.join(base_dir, '')) - zip.write(path, path) - log.info("adding '%s'", path) - for dirpath, dirnames, filenames in os.walk(base_dir): - for name in dirnames: - path = os.path.normpath(os.path.join(dirpath, name, '')) - zip.write(path, path) - log.info("adding '%s'", path) - for name in filenames: - path = os.path.normpath(os.path.join(dirpath, name)) - if os.path.isfile(path): - zip.write(path, path) - log.info("adding '%s'", path) - - return zip_filename - - -ARCHIVE_FORMATS = { - 'gztar': (make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"), - 'bztar': (make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"), - 'xztar': (make_tarball, [('compress', 'xz')], "xz'ed tar-file"), - 'ztar': (make_tarball, [('compress', 'compress')], "compressed tar file"), - 'tar': (make_tarball, [('compress', None)], "uncompressed tar file"), - 'zip': (make_zipfile, [], "ZIP file"), -} - - -def check_archive_formats(formats): - """Returns the first format from the 'formats' list that is unknown. - - If all formats are known, returns None - """ - for format in formats: - if format not in ARCHIVE_FORMATS: - return format - return None - - -def make_archive( - base_name, - format, - root_dir=None, - base_dir=None, - verbose=0, - dry_run=0, - owner=None, - group=None, -): - """Create an archive file (e.g. zip or tar). - - 'base_name' is the name of the file to create, minus any format-specific - extension; 'format' is the archive format: one of "zip", "tar", "gztar", - "bztar", "xztar", or "ztar". - - 'root_dir' is a directory that will be the root directory of the - archive; i.e. we typically chdir into 'root_dir' before creating the - archive. 'base_dir' is the directory where we start archiving from; - i.e. 'base_dir' will be the common prefix of all files and - directories in the archive. 'root_dir' and 'base_dir' both default - to the current directory. Returns the name of the archive file. - - 'owner' and 'group' are used when creating a tar archive. By default, - uses the current owner and group. 
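- - Example (illustrative paths, not from the original docstring): - make_archive('/tmp/release', 'gztar', root_dir='build') - # -> creates and returns '/tmp/release.tar.gz'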
- """ - save_cwd = os.getcwd() - if root_dir is not None: - log.debug("changing into '%s'", root_dir) - base_name = os.path.abspath(base_name) - if not dry_run: - os.chdir(root_dir) - - if base_dir is None: - base_dir = os.curdir - - kwargs = {'dry_run': dry_run} - - try: - format_info = ARCHIVE_FORMATS[format] - except KeyError: - raise ValueError("unknown archive format '%s'" % format) - - func = format_info[0] - for arg, val in format_info[1]: - kwargs[arg] = val - - if format != 'zip': - kwargs['owner'] = owner - kwargs['group'] = group - - try: - filename = func(base_name, base_dir, **kwargs) - finally: - if root_dir is not None: - log.debug("changing back to '%s'", save_cwd) - os.chdir(save_cwd) - - return filename diff --git a/spaces/Tetel/chat/SydneyGPT/SydneyGPTUtils.py b/spaces/Tetel/chat/SydneyGPT/SydneyGPTUtils.py deleted file mode 100644 index 4c328e9390fceca307217c15aed13f1285f5eb6f..0000000000000000000000000000000000000000 --- a/spaces/Tetel/chat/SydneyGPT/SydneyGPTUtils.py +++ /dev/null @@ -1,28 +0,0 @@ -from SydneyGPT.SydneyGPT import Chatbot -try: - import EdgeGPT.EdgeGPT as EdgeGPT_module - from EdgeGPT.EdgeUtils import Query as BaseQuery -except ImportError: - import EdgeGPT as EdgeGPT_module - from EdgeUtils import Query as BaseQuery - - -create_method = EdgeGPT_module.Chatbot.create - - -async def new_create(*args, **kwargs): - monkey_create = EdgeGPT_module.Chatbot.create - try: - EdgeGPT_module.Chatbot.create = create_method - gpt_bot_create = Chatbot.create(*args, **kwargs) - return await gpt_bot_create - finally: - EdgeGPT_module.Chatbot.create = monkey_create - - -EdgeGPT_module.Chatbot.create = staticmethod(new_create) - - -class Query(BaseQuery): - pass - diff --git a/spaces/ThomasSimonini/ML-Agents-SnowballTarget/README.md b/spaces/ThomasSimonini/ML-Agents-SnowballTarget/README.md deleted file mode 100644 index 3bffc4f8f3d9dbf8ba17faac41a1927c649de599..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/ML-Agents-SnowballTarget/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ML Agents SnowballTarget -emoji: ❄️ -colorFrom: red -colorTo: white -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VickyKira/NASAGPT/client/js/sidebar-toggler.js b/spaces/VickyKira/NASAGPT/client/js/sidebar-toggler.js deleted file mode 100644 index b23f94e3bfba5bac53432e1b557765736dabbab4..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/client/js/sidebar-toggler.js +++ /dev/null @@ -1,34 +0,0 @@ -const sidebar = document.querySelector(".sidebar"); -const menuButton = document.querySelector(".menu-button"); - -function toggleSidebar(event) { - if (sidebar.classList.contains("shown")) { - hideSidebar(event.target); - } else { - showSidebar(event.target); - } - window.scrollTo(0, 0); -} - -function showSidebar(target) { - sidebar.classList.add("shown"); - target.classList.add("rotated"); - document.body.style.overflow = "hidden"; -} - -function hideSidebar(target) { - sidebar.classList.remove("shown"); - target.classList.remove("rotated"); - document.body.style.overflow = "auto"; -} - -menuButton.addEventListener("click", toggleSidebar); - -document.body.addEventListener('click', function(event) { - if (event.target.matches('.conversation-title')) { - const menuButtonStyle = window.getComputedStyle(menuButton); - if (menuButtonStyle.display !== 'none') { - hideSidebar(menuButton); - } - } -}); diff --git a/spaces/Vignesh2496/project/app.py 
b/spaces/Vignesh2496/project/app.py deleted file mode 100644 index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000 --- a/spaces/Vignesh2496/project/app.py +++ /dev/null @@ -1,164 +0,0 @@ -import os -import re -import requests -import json -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') -PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY') -PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID') - -PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID') -play_ht_api_get_audio_url = "https://play.ht/api/v2/tts" - - -template = """You are a helpful assistant to answer user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -headers = { - "accept": "text/event-stream", - "content-type": "application/json", - "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY, - "X-USER-ID": PLAY_HT_USER_ID -} - - -def get_payload(text): - return { - "text": text, - "voice": PLAY_HT_VOICE_ID, - "quality": "medium", - "output_format": "mp3", - "speed": 1, - "sample_rate": 24000, - "seed": None, - "temperature": None - } - -def get_generated_audio(text): - payload = get_payload(text) - generated_response = {} - response = None # requests.post may fail before any response object exists - try: - response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers) - response.raise_for_status() - generated_response["type"] = 'SUCCESS' - generated_response["response"] = response.text - except Exception as e: - generated_response["type"] = 'ERROR' - if response is None: - generated_response["response"] = str(e) - else: - try: - response_text = json.loads(response.text) - if response_text.get('error_message'): - generated_response["response"] = response_text['error_message'] - else: - generated_response["response"] = response.text - except Exception: - generated_response["response"] = response.text - return generated_response - -def extract_urls(text): - # Define the regex pattern for URLs - url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*' - - # Find all occurrences of URLs in the text - urls = re.findall(url_pattern, text) - - return urls - -def get_audio_reply_for_question(text): - generated_audio_event = get_generated_audio(text) - # get_generated_audio returns the events as one string; extract the audio URL from it - final_response = { - "audio_url": '', - "message": '' - } - if generated_audio_event["type"] == 'SUCCESS': - audio_urls = extract_urls(generated_audio_event["response"]) - if len(audio_urls) == 0: - final_response['message'] = "No audio file link found in generated event" - else: - final_response['audio_url'] = audio_urls[-1] - else: - final_response['message'] = generated_audio_event['response'] - return final_response - -def download_url(url): - try: - # Send a GET request to the URL to fetch the content - final_response = { - 'content':'', - 'error':'' - } - response = requests.get(url) - # Check if the request was successful (status code 200) - if response.status_code == 200: - final_response['content'] = response.content - else: - final_response['error'] = f"Failed to download the 
URL. Status code: {response.status_code}" - except Exception as e: - final_response['error'] = f"Failed to download the URL. Error: {e}" - return final_response - -def get_filename_from_url(url): - # Use os.path.basename() to extract the file name from the URL - file_name = os.path.basename(url) - return file_name - -def get_text_response(user_message): - response = llm_chain.predict(user_message = user_message) - return response - -def get_text_response_and_audio_response(user_message): - response = get_text_response(user_message) # Getting the reply from Open AI - audio_reply_for_question_response = get_audio_reply_for_question(response) - final_response = { - 'output_file_path': '', - 'message':'' - } - audio_url = audio_reply_for_question_response['audio_url'] - if audio_url: - output_file_path=get_filename_from_url(audio_url) - download_url_response = download_url(audio_url) - audio_content = download_url_response['content'] - if audio_content: - with open(output_file_path, "wb") as audio_file: - audio_file.write(audio_content) - final_response['output_file_path'] = output_file_path - else: - final_response['message'] = download_url_response['error'] - else: - final_response['message'] = audio_reply_for_question_response['message'] - return final_response - -def chat_bot_response(message, history): - text_and_audio_response = get_text_response_and_audio_response(message) - output_file_path = text_and_audio_response['output_file_path'] - if output_file_path: - return (text_and_audio_response['output_file_path'],) - else: - return text_and_audio_response['message'] - -demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"]) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Wauplin/space_to_dataset_saver/README.md b/spaces/Wauplin/space_to_dataset_saver/README.md deleted file mode 100644 index ec4034f28c28dad0cefab0dfc8eda5340aeb4f04..0000000000000000000000000000000000000000 --- a/spaces/Wauplin/space_to_dataset_saver/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Space to Dataset Saver -emoji: 🌍 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Demo to save data from a Space to a Dataset. Goal is to provide reusable snippets of code. - -- Documentation: https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads -- Space: https://huggingface.co/spaces/Wauplin/space_to_dataset_saver/ -- JSON dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-json -- Image dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image -- Image (zipped) dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image-zip \ No newline at end of file diff --git a/spaces/Wayben/ChatGPT/assets/Kelpy-Codos.js b/spaces/Wayben/ChatGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/Wayben/ChatGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. 
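-// How it works: a MutationObserver watches the page and a copy button is attached to each PRE block as it appears.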
-// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // if no code element is found, don't add the button - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // if the code element has no child nodes, don't add the button - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // use the 📎 symbol as the text of the "copy" button - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // optional: adjust the button size - button.style.background = 'none'; // optional: remove the background color - button.style.border = 'none'; // optional: remove the border - button.style.cursor = 'pointer'; // optional: show a pointer cursor - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // set the range to start before the first child node - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // restore the "copy" button text - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // insert the button before the first child element - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/WhyLIM/ChatGPT-academic/functional_crazy.py b/spaces/WhyLIM/ChatGPT-academic/functional_crazy.py deleted file mode 100644 index d5375733317a8344d7340e7c4098c60bffb538d6..0000000000000000000000000000000000000000 --- a/spaces/WhyLIM/ChatGPT-academic/functional_crazy.py +++ /dev/null @@ -1,68 +0,0 @@ - -def get_crazy_functionals(): - from crazy_functions.读文章写摘要 import 读文章写摘要 - from crazy_functions.生成函数注释 import 批量生成函数注释 - from crazy_functions.解析项目源代码 import 解析项目本身 - from crazy_functions.解析项目源代码 import 解析一个Python项目 - from crazy_functions.解析项目源代码 import 解析一个C项目的头文件 - from crazy_functions.解析项目源代码 import 解析一个C项目 - from crazy_functions.高级功能函数模板 import 高阶功能模板函数 - - return { - "[实验] 请解析并解构此项目本身": { - "Function": 解析项目本身 - }, - "[实验] 解析整个py项目(配合input输入框)": { - "Color": "stop", # button color - "Function": 解析一个Python项目 - }, - "[实验] 解析整个C++项目头文件(配合input输入框)": { - "Color": "stop", # button color - "Function": 解析一个C项目的头文件 - }, - "[实验] 解析整个C++项目(配合input输入框)": { - "Color": "stop", # button color - "Function": 解析一个C项目 - }, - "[实验] 读tex论文写摘要(配合input输入框)": { - "Color": "stop", # button color - "Function": 读文章写摘要 - }, - "[实验] 批量生成函数注释(配合input输入框)": { - "Color": "stop", # button color - "Function": 批量生成函数注释 - }, - "[实验] 实验功能函数模板": { - "Color": "stop", # button color - "Function": 高阶功能模板函数 - }, - } - -def on_file_uploaded(files, chatbot, txt): - if len(files) == 0: return chatbot, txt - import shutil, os, time, glob - from toolbox import extract_archive - try: shutil.rmtree('./private_upload/') - except Exception: pass - time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) - os.makedirs(f'private_upload/{time_tag}', exist_ok=True) - for file in files: - file_origin_name = 
os.path.basename(file.orig_name) - shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}') - extract_archive(f'private_upload/{time_tag}/{file_origin_name}', - dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract') - moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)] - txt = f'private_upload/{time_tag}' - moved_files_str = '\t\n\n'.join(moved_files) - chatbot.append(['我上传了文件,请查收', - f'[Local Message] 收到以下文件: \n\n{moved_files_str}\n\n调用路径参数已自动修正到: \n\n{txt}\n\n现在您可以直接选择任意实现性功能']) - return chatbot, txt - -def on_report_generated(files, chatbot): - from toolbox import find_recent_files - report_files = find_recent_files('gpt_log') - if len(report_files) == 0: return report_files, chatbot - # files.extend(report_files) - chatbot.append(['汇总报告如何远程获取?', '汇总报告已经添加到右侧文件上传区,请查收。']) - return report_files, chatbot - diff --git a/spaces/Wootang01/paraphraser_three/app.py b/spaces/Wootang01/paraphraser_three/app.py deleted file mode 100644 index e55d952e7a15ef08a8d1226104aa09ff865d55c3..0000000000000000000000000000000000000000 --- a/spaces/Wootang01/paraphraser_three/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import streamlit as st -import torch -import sacremoses -from transformers import pipeline -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -from transformers import FSMTForConditionalGeneration, FSMTTokenizer - -st.title("Paraphraser Three -- Back Translation") -st.write("Paraphrase means to express meaning using different words. Back Translation refers to the method by which the computer paraphrases.") -st.write("Write or paste an English language sentence below, and enter. The machine will translate your sentence into another language using one language model. The machine will then translate that sentence into English using another language model.") - -user_input = st.text_area("Input sentence.") - -def load_en2de(): - en2de = pipeline("translation_en_to_de", model="t5-base") - return en2de - -def load_de2en(): - model_name = "facebook/wmt19-de-en" - tokenizer = FSMTTokenizer.from_pretrained(model_name) - model_de_to_en = FSMTForConditionalGeneration.from_pretrained(model_name) - return tokenizer, model_de_to_en - -en2de = load_en2de() -tokenizer_de2en, de2en = load_de2en() - -en_to_de_output = en2de(user_input) -translated_text = en_to_de_output[0]['translation_text'] - -input_ids = tokenizer_de2en.encode(translated_text, return_tensors="pt") -output_ids = de2en.generate(input_ids)[0] -augmented_text = tokenizer_de2en.decode(output_ids, skip_special_tokens=True) - -st.write("Paraphrased sentence: ", augmented_text) - - diff --git a/spaces/XzJosh/Bella-Bert-VITS2/preprocess_text.py b/spaces/XzJosh/Bella-Bert-VITS2/preprocess_text.py deleted file mode 100644 index 5eb0f3b9e929fcbe91dcbeb653391227a2518a15..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bella-Bert-VITS2/preprocess_text.py +++ /dev/null @@ -1,64 +0,0 @@ -import json -from random import shuffle - -import tqdm -from text.cleaner import clean_text -from collections import defaultdict -stage = [1,2,3] - -transcription_path = 'filelists/genshin.list' -train_path = 'filelists/train.list' -val_path = 'filelists/val.list' -config_path = "configs/config.json" -val_per_spk = 4 -max_val_total = 8 - -if 1 in stage: - with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f: - for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()): - try: - utt, spk, language, text = line.strip().split('|') - norm_text, phones, tones, word2ph = 
clean_text(text, language) - f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones), - " ".join([str(i) for i in tones]), - " ".join([str(i) for i in word2ph]))) - except Exception as error : - print("err!", utt, error) - -if 2 in stage: - spk_utt_map = defaultdict(list) - spk_id_map = {} - current_sid = 0 - - with open( transcription_path+'.cleaned', encoding='utf-8') as f: - for line in f.readlines(): - utt, spk, language, text, phones, tones, word2ph = line.strip().split('|') - spk_utt_map[spk].append(line) - if spk not in spk_id_map.keys(): - spk_id_map[spk] = current_sid - current_sid += 1 - train_list = [] - val_list = [] - - for spk, utts in spk_utt_map.items(): - shuffle(utts) - val_list+=utts[:val_per_spk] - train_list+=utts[val_per_spk:] - if len(val_list) > max_val_total: - train_list+=val_list[max_val_total:] - val_list = val_list[:max_val_total] - - with open( train_path,"w", encoding='utf-8') as f: - for line in train_list: - f.write(line) - - with open(val_path, "w", encoding='utf-8') as f: - for line in val_list: - f.write(line) - -if 3 in stage: - assert 2 in stage - config = json.load(open(config_path, encoding='utf-8')) - config["data"]['spk2id'] = spk_id_map - with open(config_path, 'w', encoding='utf-8') as f: - json.dump(config, f, indent=2, ensure_ascii=False) diff --git a/spaces/XzJosh/Diana-Bert-VITS2/text/cleaner.py b/spaces/XzJosh/Diana-Bert-VITS2/text/cleaner.py deleted file mode 100644 index 64bd5f7296f66c94f3a335666c53706bb5fe5b39..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Diana-Bert-VITS2/text/cleaner.py +++ /dev/null @@ -1,27 +0,0 @@ -from text import chinese, cleaned_text_to_sequence - - -language_module_map = { - 'ZH': chinese -} - - -def clean_text(text, language): - language_module = language_module_map[language] - norm_text = language_module.text_normalize(text) - phones, tones, word2ph = language_module.g2p(norm_text) - return norm_text, phones, tones, word2ph - -def clean_text_bert(text, language): - language_module = language_module_map[language] - norm_text = language_module.text_normalize(text) - phones, tones, word2ph = language_module.g2p(norm_text) - bert = language_module.get_bert_feature(norm_text, word2ph) - return phones, tones, bert - -def text_to_sequence(text, language): - norm_text, phones, tones, word2ph = clean_text(text, language) - return cleaned_text_to_sequence(phones, tones, language) - -if __name__ == '__main__': - pass diff --git a/spaces/XzJosh/nanami-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/nanami-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md deleted file mode 100644 index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/nanami-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -language: -- zh -tags: -- bert -license: "apache-2.0" ---- - -# Please use 'Bert' related functions to load this model! - -## Chinese BERT with Whole Word Masking -For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. 
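- -## Quick usage -A minimal loading sketch; the hub repository id `hfl/chinese-roberta-wwm-ext-large` and the sample sentence below are illustrative assumptions, not part of the original card: - -```python -# Load with the Bert classes, as the note above requires (not the RoBERTa classes). -from transformers import BertTokenizer, BertModel - -tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large") -model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large") - -inputs = tokenizer("使用语言模型来预测下一个词的概率。", return_tensors="pt") -outputs = model(**inputs)  # outputs.last_hidden_state: [1, seq_len, hidden_size] -```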
- -**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** -Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu - -This repository is developed based on: https://github.com/google-research/bert - -You may also be interested in: -- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm -- Chinese MacBERT: https://github.com/ymcui/MacBERT -- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA -- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet -- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer - -More resources by HFL: https://github.com/ymcui/HFL-Anthology - -## Citation -If you find the technical report or resources useful, please cite the following technical report in your paper. -- Primary: https://arxiv.org/abs/2004.13922 -``` -@inproceedings{cui-etal-2020-revisiting, - title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", - author = "Cui, Yiming and - Che, Wanxiang and - Liu, Ting and - Qin, Bing and - Wang, Shijin and - Hu, Guoping", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", - pages = "657--668", -} -``` -- Secondary: https://arxiv.org/abs/1906.08101 -``` -@article{chinese-bert-wwm, - title={Pre-Training with Whole Word Masking for Chinese BERT}, - author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, - journal={arXiv preprint arXiv:1906.08101}, - year={2019} - } -``` \ No newline at end of file diff --git a/spaces/XzJosh/nine1-Bert-VITS2/train_ms.py b/spaces/XzJosh/nine1-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/nine1-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." 
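- # One worker process per visible GPU is spawned below; the workers rendezvous - # through the MASTER_ADDR/MASTER_PORT values set here before each one calls - # dist.init_process_group inside run().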
- - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - else: - net_dur_disc = None # avoid a NameError later when the duration discriminator is disabled - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, 
eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), 
spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - 
audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/experimental/rl/__init__.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/experimental/rl/__init__.py deleted file mode 100644 index 7b338d3173e12d478b6b6d6fd0e50650a0ab5a4c..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/experimental/rl/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .value_guided_sampling import ValueGuidedRLPipeline diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py deleted file mode 100644 index d77e71653078dfb206f267f889334d1ed7b7da8b..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py +++ /dev/null @@ -1,461 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import torch - -import PIL -from diffusers.utils import is_accelerate_available -from packaging import version -from transformers import CLIPFeatureExtractor, CLIPVisionModelWithProjection - -from ...configuration_utils import FrozenDict -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from ...utils import deprecate, logging -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class StableDiffusionImageVariationPipeline(DiffusionPipeline): - r""" - Pipeline to generate variations from an input image using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - image_encoder ([`CLIPVisionModelWithProjection`]): - Frozen CLIP image-encoder. Stable Diffusion Image Variation uses the vision portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection), - specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. 
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - image_encoder: CLIPVisionModelWithProjection, - unet: UNet2DConditionModel, - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend keeping the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64, which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. 
If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - image_encoder=image_encoder, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - if isinstance(self.unet.config.attention_head_dim, int): - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - else: - # if `attention_head_dim` is a list, take the smallest head size - slice_size = min(self.unet.config.attention_head_dim) - - self.unet.set_attention_slice(slice_size) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.image_encoder, self.vae, self.safety_checker]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. 
- """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_image(self, image, device, num_images_per_prompt, do_classifier_free_guidance): - dtype = next(self.image_encoder.parameters()).dtype - - if not isinstance(image, torch.Tensor): - image = self.feature_extractor(images=image, return_tensors="pt").pixel_values - - image = image.to(device=device, dtype=dtype) - image_embeddings = self.image_encoder(image).image_embeds - image_embeddings = image_embeddings.unsqueeze(1) - - # duplicate image embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = image_embeddings.shape - image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1) - image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - if do_classifier_free_guidance: - uncond_embeddings = torch.zeros_like(image_embeddings) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - image_embeddings = torch.cat([uncond_embeddings, image_embeddings]) - - return image_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, image, height, width, callback_steps): - if ( - not isinstance(image, torch.Tensor) - and not isinstance(image, PIL.Image.Image) - and not isinstance(image, list) - ): - raise ValueError( - f"`image` has to be of type `torch.FloatTensor` or `PIL.Image.Image` or `list` but is {type(image)}" - ) - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if latents is None: - if device.type == "mps": - # randn does not work reproducibly on mps - latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device) - else: - latents = torch.randn(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - image: Union[PIL.Image.Image, List[PIL.Image.Image], torch.FloatTensor], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - image (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`): - The image or images to guide the image generation. If you provide a tensor, it needs to comply with the - configuration of - [this](https://huggingface.co/lambdalabs/sd-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json) - `CLIPFeatureExtractor` - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. 
- num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2 of the [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. A higher guidance scale encourages generating images that are closely linked to the input `image`, - usually at the expense of lower image quality. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will be generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(image, height, width, callback_steps) - - # 2. Define call parameters - if isinstance(image, PIL.Image.Image): - batch_size = 1 - elif isinstance(image, list): - batch_size = len(image) - else: - batch_size = image.shape[0] - device = self._execution_device - # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. 
Encode input image - image_embeddings = self._encode_image(image, device, num_images_per_prompt, do_classifier_free_guidance) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - image_embeddings.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, image_embeddings.dtype) - - # 10. Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_sde_ve_flax.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_sde_ve_flax.py deleted file mode 100644 index d1f762bc90c471d6bbc7f33e5854d014b1e25667..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_sde_ve_flax.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright 2022 Google Brain and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -# DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import flax -import jax.numpy as jnp -from jax import random - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils_flax import FlaxSchedulerMixin, FlaxSchedulerOutput, broadcast_to_shape_from_left - - -@flax.struct.dataclass -class ScoreSdeVeSchedulerState: - # setable values - timesteps: Optional[jnp.ndarray] = None - discrete_sigmas: Optional[jnp.ndarray] = None - sigmas: Optional[jnp.ndarray] = None - - @classmethod - def create(cls): - return cls() - - -@dataclass -class FlaxSdeVeOutput(FlaxSchedulerOutput): - """ - Output class for the ScoreSdeVeScheduler's step function output. - - Args: - state (`ScoreSdeVeSchedulerState`): - prev_sample (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - prev_sample_mean (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images): - Mean averaged `prev_sample`. Same as `prev_sample`, only mean-averaged over previous timesteps. - """ - - state: ScoreSdeVeSchedulerState - prev_sample: jnp.ndarray - prev_sample_mean: Optional[jnp.ndarray] = None - - -class FlaxScoreSdeVeScheduler(FlaxSchedulerMixin, ConfigMixin): - """ - The variance exploding stochastic differential equation (SDE) scheduler. - - For more information, see the original paper: https://arxiv.org/abs/2011.13456 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - snr (`float`): - coefficient weighting the step from the model_output sample (from the network) to the random noise. - sigma_min (`float`): - initial noise scale for sigma sequence in sampling procedure. The minimum sigma should mirror the - distribution of the data. - sigma_max (`float`): maximum value used for the range of continuous timesteps passed into the model. - sampling_eps (`float`): the end value of sampling, where timesteps decrease progressively from 1 to - epsilon. - correct_steps (`int`): number of correction steps performed on a produced sample. - """ - - @property - def has_state(self): - return True - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 2000, - snr: float = 0.15, - sigma_min: float = 0.01, - sigma_max: float = 1348.0, - sampling_eps: float = 1e-5, - correct_steps: int = 1, - ): - pass - - def create_state(self): - state = ScoreSdeVeSchedulerState.create() - return self.set_sigmas( - state, - self.config.num_train_timesteps, - self.config.sigma_min, - self.config.sigma_max, - self.config.sampling_eps, - ) - - def set_timesteps( - self, state: ScoreSdeVeSchedulerState, num_inference_steps: int, shape: Tuple = (), sampling_eps: float = None - ) -> ScoreSdeVeSchedulerState: - """ - Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. 
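Concretely, it fills `state.timesteps` with `num_inference_steps` values spaced linearly from 1 down to `sampling_eps`.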
- - Args: - state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - sampling_eps (`float`, optional): final timestep value (overrides value given at Scheduler instantiation). - - """ - sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps - - timesteps = jnp.linspace(1, sampling_eps, num_inference_steps) - return state.replace(timesteps=timesteps) - - def set_sigmas( - self, - state: ScoreSdeVeSchedulerState, - num_inference_steps: int, - sigma_min: float = None, - sigma_max: float = None, - sampling_eps: float = None, - ) -> ScoreSdeVeSchedulerState: - """ - Sets the noise scales used for the diffusion chain. Supporting function to be run before inference. - - The sigmas control the weight of the `drift` and `diffusion` components of sample update. - - Args: - state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - sigma_min (`float`, optional): - initial noise scale value (overrides value given at Scheduler instantiation). - sigma_max (`float`, optional): final noise scale value (overrides value given at Scheduler instantiation). - sampling_eps (`float`, optional): final timestep value (overrides value given at Scheduler instantiation). - """ - sigma_min = sigma_min if sigma_min is not None else self.config.sigma_min - sigma_max = sigma_max if sigma_max is not None else self.config.sigma_max - sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps - if state.timesteps is None: - state = self.set_timesteps(state, num_inference_steps, sampling_eps) - - discrete_sigmas = jnp.exp(jnp.linspace(jnp.log(sigma_min), jnp.log(sigma_max), num_inference_steps)) - sigmas = jnp.array([sigma_min * (sigma_max / sigma_min) ** t for t in state.timesteps]) - - return state.replace(discrete_sigmas=discrete_sigmas, sigmas=sigmas) - - def get_adjacent_sigma(self, state, timesteps, t): - return jnp.where(timesteps == 0, jnp.zeros_like(t), state.discrete_sigmas[timesteps - 1]) - - def step_pred( - self, - state: ScoreSdeVeSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - key: random.KeyArray, - return_dict: bool = True, - ) -> Union[FlaxSdeVeOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - generator: random number generator. - return_dict (`bool`): option for returning tuple rather than FlaxSdeVeOutput class - - Returns: - [`FlaxSdeVeOutput`] or `tuple`: [`FlaxSdeVeOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. 
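Example (a minimal sketch, assuming a scheduler, a `state` with timesteps and sigmas already
set, a score-model output and a JAX PRNG `key` are available):

    out = scheduler.step_pred(state, model_output, timestep, sample, key)
    sample, state = out.prev_sample, out.state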
- - """ - if state.timesteps is None: - raise ValueError( - "`state.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler" - ) - - timestep = timestep * jnp.ones( - sample.shape[0], - ) - timesteps = (timestep * (len(state.timesteps) - 1)).long() - - sigma = state.discrete_sigmas[timesteps] - adjacent_sigma = self.get_adjacent_sigma(state, timesteps, timestep) - drift = jnp.zeros_like(sample) - diffusion = (sigma**2 - adjacent_sigma**2) ** 0.5 - - # equation 6 in the paper: the model_output modeled by the network is grad_x log pt(x) - # also equation 47 shows the analog from SDE models to ancestral sampling methods - diffusion = diffusion.flatten() - diffusion = broadcast_to_shape_from_left(diffusion, sample.shape) - drift = drift - diffusion**2 * model_output - - # equation 6: sample noise for the diffusion term of - key = random.split(key, num=1) - noise = random.normal(key=key, shape=sample.shape) - prev_sample_mean = sample - drift # subtract because `dt` is a small negative timestep - # TODO is the variable diffusion the correct scaling term for the noise? - prev_sample = prev_sample_mean + diffusion * noise # add impact of diffusion field g - - if not return_dict: - return (prev_sample, prev_sample_mean, state) - - return FlaxSdeVeOutput(prev_sample=prev_sample, prev_sample_mean=prev_sample_mean, state=state) - - def step_correct( - self, - state: ScoreSdeVeSchedulerState, - model_output: jnp.ndarray, - sample: jnp.ndarray, - key: random.KeyArray, - return_dict: bool = True, - ) -> Union[FlaxSdeVeOutput, Tuple]: - """ - Correct the predicted sample based on the output model_output of the network. This is often run repeatedly - after making the prediction for the previous timestep. - - Args: - state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - generator: random number generator. - return_dict (`bool`): option for returning tuple rather than FlaxSdeVeOutput class - - Returns: - [`FlaxSdeVeOutput`] or `tuple`: [`FlaxSdeVeOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - - """ - if state.timesteps is None: - raise ValueError( - "`state.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler" - ) - - # For small batch sizes, the paper "suggest replacing norm(z) with sqrt(d), where d is the dim. 
of z" - # sample noise for correction - key = random.split(key, num=1) - noise = random.normal(key=key, shape=sample.shape) - - # compute step size from the model_output, the noise, and the snr - grad_norm = jnp.linalg.norm(model_output) - noise_norm = jnp.linalg.norm(noise) - step_size = (self.config.snr * noise_norm / grad_norm) ** 2 * 2 - step_size = step_size * jnp.ones(sample.shape[0]) - - # compute corrected sample: model_output term and noise term - step_size = step_size.flatten() - step_size = broadcast_to_shape_from_left(step_size, sample.shape) - prev_sample_mean = sample + step_size * model_output - prev_sample = prev_sample_mean + ((step_size * 2) ** 0.5) * noise - - if not return_dict: - return (prev_sample, state) - - return FlaxSdeVeOutput(prev_sample=prev_sample, state=state) - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md deleted file mode 100644 index 9bab709cae689ba3b92dd52f7fbcc0c6926f4a38..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md +++ /dev/null @@ -1,68 +0,0 @@ -# Contributing to detectron2 - -## Issues -We use GitHub issues to track public bugs and questions. -Please make sure to follow one of the -[issue templates](https://github.com/facebookresearch/detectron2/issues/new/choose) -when reporting any issues. - -Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## Pull Requests -We actively welcome pull requests. - -However, if you're adding any significant features (e.g. > 50 lines), please -make sure to discuss with maintainers about your motivation and proposals in an issue -before sending a PR. This is to save your time so you don't spend time on a PR that we'll not accept. - -We do not always accept new features, and we take the following -factors into consideration: - -1. Whether the same feature can be achieved without modifying detectron2. - Detectron2 is designed so that you can implement many extensions from the outside, e.g. - those in [projects](https://github.com/facebookresearch/detectron2/tree/master/projects). - * If some part of detectron2 is not extensible enough, you can also bring up a more general issue to - improve it. Such feature request may be useful to more users. -2. Whether the feature is potentially useful to a large audience (e.g. an impactful detection paper, a popular dataset, - a significant speedup, a widely useful utility), - or only to a small portion of users (e.g., a less-known paper, an improvement not in the object - detection field, a trick that's not very popular in the community, code to handle a non-standard type of data) - * Adoption of additional models, datasets, new task are by default not added to detectron2 before they - receive significant popularity in the community. - We sometimes accept such features in `projects/`, or as a link in `projects/README.md`. -3. Whether the proposed solution has a good design / interface. This can be discussed in the issue prior to PRs, or - in the form of a draft PR. -4. Whether the proposed solution adds extra mental/practical overhead to users who don't - need such feature. -5. 
Whether the proposed solution breaks existing APIs. - -To add a feature to an existing function/class `Func`, there are always two approaches: -(1) add new arguments to `Func`; (2) write a new `Func_with_new_feature`. -To meet the above criteria, we often prefer approach (2), because: - -1. It does not involve modifying or potentially breaking existing code. -2. It does not add overhead to users who do not need the new feature. -3. Adding new arguments to a function/class is not scalable w.r.t. all the possible new research ideas in the future. - -When sending a PR, please do: - -1. If a PR contains multiple orthogonal changes, split it to several PRs. -2. If you've added code that should be tested, add tests. -3. For PRs that need experiments (e.g. adding a new model or new methods), - you don't need to update model zoo, but do provide experiment results in the description of the PR. -4. If APIs are changed, update the documentation. -5. We use the [Google style docstrings](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) in python. -6. Make sure your code lints with `./dev/linter.sh`. - - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Facebook's open source projects. - -Complete your CLA here: - -## License -By contributing to detectron2, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. diff --git a/spaces/Yntec/PrintingPress/README.md b/spaces/Yntec/PrintingPress/README.md deleted file mode 100644 index 2966bd77e82fd4b0ab2aeb34f754e5649c10314b..0000000000000000000000000000000000000000 --- a/spaces/Yntec/PrintingPress/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Printing Press 540 Models -emoji: 👩‍🎨👨‍🎨 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: Omnibus/maximum_multiplier_places ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/a-v-bely/russian-task-generator/utilities_language_general/rus_constants.py b/spaces/a-v-bely/russian-task-generator/utilities_language_general/rus_constants.py deleted file mode 100644 index c1bf05f79422cd0855767e12d5ed405e4e2b8345..0000000000000000000000000000000000000000 --- a/spaces/a-v-bely/russian-task-generator/utilities_language_general/rus_constants.py +++ /dev/null @@ -1,70 +0,0 @@ -import json -import spacy -import gensim -import pymorphy2 -import streamlit as st -from transformers import pipeline - - -@st.cache_resource -def load_morph(): - _morph = pymorphy2.MorphAnalyzer(lang='ru') - return _morph - - -@st.cache_resource -def load_w2v(model_path): - _w2v_model = gensim.models.KeyedVectors.load_word2vec_format(model_path, binary=True) - return _w2v_model - - -@st.cache_resource -def load_spacy(): - _nlp = spacy.load('ru_core_news_lg') - return _nlp - - -@st.cache_resource -def load_bert(): - return pipeline("fill-mask", model="a-v-white/ruBert-base-finetuned-russian-moshkov-child-corpus-pro") - - -nlp = load_spacy() -morph = load_morph() -w2v_model1_path = r'model1.gz' -w2v_model2_path = r'model2.gz' - -# Upload stop list -stop_list = set() -with open(r'language_data/stop_words.txt', 'r', encoding='utf-8') as read_file: - for line in read_file: - stop_list.add(line.strip()) - -# Upload minimums -a1_path, a1_target_set = r'language_data/A1_MINIMUM.txt', set() -a2_path, 
a2_target_set = r'language_data/A2_MINIMUM.txt', set() -b1_path, b1_target_set = r'language_data/B1_MINIMUM.txt', set() -b2_path, b2_target_set = r'language_data/B2_MINIMUM.txt', set() -c1_path, c1_target_set = r'language_data/C1_MINIMUM.txt', set() -c2_path, c2_target_set = r'language_data/C2_MINIMUM.txt', set() -minimums_paths = (a1_path, a2_path, b1_path, b2_path) -minimums_sets = (a1_target_set, a2_target_set, b1_target_set, b2_target_set, c1_target_set, c2_target_set) -for i in range(len(minimums_paths)): - with open(minimums_paths[i], 'r', encoding='utf-8') as read_file: - for line in read_file: - minimums_sets[i].add(line.strip()) - -a1_distractor_set = a1_target_set -a2_distractor_set = a2_target_set.union(a1_target_set) -b1_distractor_set = b1_target_set.union(a2_target_set) -b2_distractor_set = b2_target_set.union(b1_target_set) -c1_distractor_set = c1_target_set.union(b2_target_set) -c2_distractor_set = c2_target_set.union(c1_target_set) - -with open('language_data/phrases.json', 'r', encoding='utf-8') as f: - PHRASES = set(json.load(f)['PHRASES']) - -SIMILARITY_VALUES_w2v = {'A1': 1.0, 'A2': 1.0, 'B1': 1.0, 'B2': 1.0, 'C1': 1.0, 'C2': 1.0, 'Без уровня': 1.0} -SIMILARITY_VALUES_bert = {'A1': 1.0, 'A2': 1.0, 'B1': 1.0, 'B2': 1.0, 'C1': 1.0, 'C2': 1.0, 'Без уровня': 1.0} - -BAD_USER_TARGET_WORDS = [] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/base_roi_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/base_roi_head.py deleted file mode 100644 index 2d61cc08007924c61b4a53d7fbc6e6fedfd68f08..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/base_roi_head.py +++ /dev/null @@ -1,103 +0,0 @@ -from abc import ABCMeta, abstractmethod - -import torch.nn as nn - -from ..builder import build_shared_head - - -class BaseRoIHead(nn.Module, metaclass=ABCMeta): - """Base class for RoIHeads.""" - - def __init__(self, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None): - super(BaseRoIHead, self).__init__() - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if shared_head is not None: - self.shared_head = build_shared_head(shared_head) - - if bbox_head is not None: - self.init_bbox_head(bbox_roi_extractor, bbox_head) - - if mask_head is not None: - self.init_mask_head(mask_roi_extractor, mask_head) - - self.init_assigner_sampler() - - @property - def with_bbox(self): - """bool: whether the RoI head contains a `bbox_head`""" - return hasattr(self, 'bbox_head') and self.bbox_head is not None - - @property - def with_mask(self): - """bool: whether the RoI head contains a `mask_head`""" - return hasattr(self, 'mask_head') and self.mask_head is not None - - @property - def with_shared_head(self): - """bool: whether the RoI head contains a `shared_head`""" - return hasattr(self, 'shared_head') and self.shared_head is not None - - @abstractmethod - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - pass - - @abstractmethod - def init_bbox_head(self): - """Initialize ``bbox_head``""" - pass - - @abstractmethod - def init_mask_head(self): - """Initialize ``mask_head``""" - pass - - @abstractmethod - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - @abstractmethod - def forward_train(self, - x, - img_meta, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - **kwargs): - """Forward function during training.""" - - async def async_simple_test(self, x, img_meta, **kwargs): - """Asynchronized test function.""" - raise NotImplementedError - - def simple_test(self, - x, - proposal_list, - img_meta, - proposals=None, - rescale=False, - **kwargs): - """Test without augmentation.""" - - def aug_test(self, x, proposal_list, img_metas, rescale=False, **kwargs): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/iou3d.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/iou3d.py deleted file mode 100644 index 6fc71979190323f44c09f8b7e1761cf49cd2d76b..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/iou3d.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'iou3d_boxes_iou_bev_forward', 'iou3d_nms_forward', - 'iou3d_nms_normal_forward' -]) - - -def boxes_iou_bev(boxes_a, boxes_b): - """Calculate boxes IoU in the Bird's Eye View. - - Args: - boxes_a (torch.Tensor): Input boxes a with shape (M, 5). - boxes_b (torch.Tensor): Input boxes b with shape (N, 5). - - Returns: - ans_iou (torch.Tensor): IoU result with shape (M, N). - """ - ans_iou = boxes_a.new_zeros( - torch.Size((boxes_a.shape[0], boxes_b.shape[0]))) - - ext_module.iou3d_boxes_iou_bev_forward(boxes_a.contiguous(), - boxes_b.contiguous(), ans_iou) - - return ans_iou - - -def nms_bev(boxes, scores, thresh, pre_max_size=None, post_max_size=None): - """NMS function GPU implementation (for BEV boxes). The overlap of two - boxes for IoU calculation is defined as the exact overlapping area of the - two boxes. In this function, one can also set ``pre_max_size`` and - ``post_max_size``. - - Args: - boxes (torch.Tensor): Input boxes with the shape of [N, 5] - ([x1, y1, x2, y2, ry]). - scores (torch.Tensor): Scores of boxes with the shape of [N]. - thresh (float): Overlap threshold of NMS. - pre_max_size (int, optional): Max size of boxes before NMS. - Default: None. - post_max_size (int, optional): Max size of boxes after NMS. - Default: None. - - Returns: - torch.Tensor: Indexes after NMS. - """ - assert boxes.size(1) == 5, 'Input boxes shape should be [N, 5]' - order = scores.sort(0, descending=True)[1] - - if pre_max_size is not None: - order = order[:pre_max_size] - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = ext_module.iou3d_nms_forward(boxes, keep, thresh) - keep = order[keep[:num_out].cuda(boxes.device)].contiguous() - if post_max_size is not None: - keep = keep[:post_max_size] - return keep - - -def nms_normal_bev(boxes, scores, thresh): - """Normal NMS function GPU implementation (for BEV boxes). The overlap of - two boxes for IoU calculation is defined as the exact overlapping area of - the two boxes WITH their yaw angle set to 0. 
- - Args: - boxes (torch.Tensor): Input boxes with shape (N, 5). - scores (torch.Tensor): Scores of predicted boxes with shape (N). - thresh (float): Overlap threshold of NMS. - - Returns: - torch.Tensor: Remaining indices with scores in descending order. - """ - assert boxes.shape[1] == 5, 'Input boxes shape should be [N, 5]' - order = scores.sort(0, descending=True)[1] - - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = ext_module.iou3d_nms_normal_forward(boxes, keep, thresh) - return order[keep[:num_out].cuda(boxes.device)].contiguous() diff --git a/spaces/airsat/dalle-mini/README.md b/spaces/airsat/dalle-mini/README.md deleted file mode 100644 index ee4fed8ac832c90c53ffdf7ad01795a7edb01e5a..0000000000000000000000000000000000000000 --- a/spaces/airsat/dalle-mini/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: DALL·E mini -emoji: 🥑 -colorFrom: blue -colorTo: blue -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/akhaliq/deeplab2/model/builder_test.py b/spaces/akhaliq/deeplab2/model/builder_test.py deleted file mode 100644 index 6fd603127caf05c0c72bc892c8bb93a7c81393be..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/builder_test.py +++ /dev/null @@ -1,80 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for model.builder.""" - -import os -from absl.testing import parameterized - -import tensorflow as tf - -from google.protobuf import text_format -from deeplab2 import config_pb2 -from deeplab2.model import builder -from deeplab2.model.decoder import motion_deeplab_decoder -from deeplab2.model.encoder import axial_resnet_instances -from deeplab2.model.encoder import mobilenet -# resources dependency - - -_CONFIG_PATH = 'deeplab2/configs/example' - - -def _read_proto_file(filename, proto): - filename = filename # OSS: removed internal filename loading. 
- with tf.io.gfile.GFile(filename, 'r') as proto_file: - return text_format.ParseLines(proto_file, proto) - - -class BuilderTest(tf.test.TestCase, parameterized.TestCase): - - def test_resnet50_encoder_creation(self): - backbone_options = config_pb2.ModelOptions.BackboneOptions( - name='resnet50', output_stride=32) - encoder = builder.create_encoder( - backbone_options, - tf.keras.layers.experimental.SyncBatchNormalization) - self.assertIsInstance(encoder, axial_resnet_instances.ResNet50) - - @parameterized.parameters('mobilenet_v3_large', 'mobilenet_v3_small') - def test_mobilenet_encoder_creation(self, model_name): - backbone_options = config_pb2.ModelOptions.BackboneOptions( - name=model_name, use_squeeze_and_excite=True, output_stride=32) - encoder = builder.create_encoder( - backbone_options, - tf.keras.layers.experimental.SyncBatchNormalization) - self.assertIsInstance(encoder, mobilenet.MobileNet) - - def test_resnet_encoder_creation(self): - backbone_options = config_pb2.ModelOptions.BackboneOptions( - name='max_deeplab_s', output_stride=32) - encoder = builder.create_resnet_encoder( - backbone_options, - bn_layer=tf.keras.layers.experimental.SyncBatchNormalization) - self.assertIsInstance(encoder, axial_resnet_instances.MaXDeepLabS) - - def test_decoder_creation(self): - proto_filename = os.path.join( - _CONFIG_PATH, 'example_kitti-step_motion_deeplab.textproto') - model_options = _read_proto_file(proto_filename, config_pb2.ModelOptions()) - motion_decoder = builder.create_decoder( - model_options, tf.keras.layers.experimental.SyncBatchNormalization, - ignore_label=255) - self.assertIsInstance(motion_decoder, - motion_deeplab_decoder.MotionDeepLabDecoder) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/alex-mindspace/gpt-agents/swarmai/utils/__init__.py b/spaces/alex-mindspace/gpt-agents/swarmai/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py deleted file mode 100644 index b149ed79b0a1d5808a7e392876c2f5aae4b5057c..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -from .initialise import init, deinit, reinit, colorama_text -from .ansi import Fore, Back, Style, Cursor -from .ansitowin32 import AnsiToWin32 - -__version__ = '0.4.4' diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/bbcode.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/bbcode.py deleted file mode 100644 index 35a37328ec7d835ae510a7a9b0127bb9b790b3c1..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/bbcode.py +++ /dev/null @@ -1,108 +0,0 @@ -""" - pygments.formatters.bbcode - ~~~~~~~~~~~~~~~~~~~~~~~~~~ - - BBcode formatter. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
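A minimal usage sketch via the standard `highlight` helper:

    from pip._vendor.pygments import highlight
    from pip._vendor.pygments.lexers import PythonLexer
    bb = highlight('print(1)', PythonLexer(), BBCodeFormatter(codetag=True))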
-""" - - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.util import get_bool_opt - -__all__ = ['BBCodeFormatter'] - - -class BBCodeFormatter(Formatter): - """ - Format tokens with BBcodes. These formatting codes are used by many - bulletin boards, so you can highlight your sourcecode with pygments before - posting it there. - - This formatter has no support for background colors and borders, as there - are no common BBcode tags for that. - - Some board systems (e.g. phpBB) don't support colors in their [code] tag, - so you can't use the highlighting together with that tag. - Text in a [code] tag usually is shown with a monospace font (which this - formatter can do with the ``monofont`` option) and no spaces (which you - need for indentation) are removed. - - Additional options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). - - `codetag` - If set to true, put the output into ``[code]`` tags (default: - ``false``) - - `monofont` - If set to true, add a tag to show the code with a monospace font - (default: ``false``). - """ - name = 'BBCode' - aliases = ['bbcode', 'bb'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self._code = get_bool_opt(options, 'codetag', False) - self._mono = get_bool_opt(options, 'monofont', False) - - self.styles = {} - self._make_styles() - - def _make_styles(self): - for ttype, ndef in self.style: - start = end = '' - if ndef['color']: - start += '[color=#%s]' % ndef['color'] - end = '[/color]' + end - if ndef['bold']: - start += '[b]' - end = '[/b]' + end - if ndef['italic']: - start += '[i]' - end = '[/i]' + end - if ndef['underline']: - start += '[u]' - end = '[/u]' + end - # there are no common BBcodes for background-color and border - - self.styles[ttype] = start, end - - def format_unencoded(self, tokensource, outfile): - if self._code: - outfile.write('[code]') - if self._mono: - outfile.write('[font=monospace]') - - lastval = '' - lasttype = None - - for ttype, value in tokensource: - while ttype not in self.styles: - ttype = ttype.parent - if ttype == lasttype: - lastval += value - else: - if lastval: - start, end = self.styles[lasttype] - outfile.write(''.join((start, lastval, end))) - lastval = value - lasttype = ttype - - if lastval: - start, end = self.styles[lasttype] - outfile.write(''.join((start, lastval, end))) - - if self._mono: - outfile.write('[/font]') - if self._code: - outfile.write('[/code]') - if self._code or self._mono: - outfile.write('\n') diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Networks/Transformer.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Networks/Transformer.py deleted file mode 100644 index e1ce4582b9ca2d9ac5b6ab3720ab9e6e1581c719..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Networks/Transformer.py +++ /dev/null @@ -1,845 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT license. 
- -import copy -import json -import math -import re -import collections -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Variable -from torch.nn.parameter import Parameter - - -def gelu(x): - return ( - 0.5 - * x - * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))) - ) - - -def swish(x): - return x * torch.sigmoid(x) - - -class LayerNorm(nn.Module): - "Construct a layernorm module in the OpenAI style (epsilon inside the square root)." - - def __init__(self, n_state, e=1e-5): - super(LayerNorm, self).__init__() - self.g = nn.Parameter(torch.ones(n_state)) - self.b = nn.Parameter(torch.zeros(n_state)) - self.e = e - - """ - Input: - x: n_state-dim - Output: - o: n_state-dim - """ - - def forward(self, x): - u = x.mean(-1, keepdim=True) - s = (x - u).pow(2).mean(-1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.e) - return self.g * x + self.b - - -""" - Convolution - nx is the last input dim - nf is the last output dim -""" - - -class Conv1D(nn.Module): - def __init__(self, nf, nx): - super(Conv1D, self).__init__() - self.nf = nf - w = torch.empty(nx, nf) - nn.init.normal_(w, std=0.02) - self.w = Parameter(w) - self.b = Parameter(torch.zeros(nf)) - - """ - Input: - x: batch x len x nx - Output: - x: batch x len x nf - """ - - def forward(self, x): - size_out = x.size()[:-1] + (self.nf,) - x = torch.addmm(self.b, x.view(-1, x.size(-1)), self.w) - x = x.view(*size_out) - return x - - -class PositionalEmbedding(nn.Module): - def __init__(self, opt, demb): - super(PositionalEmbedding, self).__init__() - self.demb = demb - inv_freq = 1 / (10000 ** (torch.arange(0.0, demb, 2.0) / demb)) - self.pos_discount = float(opt["TRANSFORMER_POS_DISCOUNT"]) - self.register_buffer("inv_freq", inv_freq) - - """ - Input: - pos_seq: len - Output: - pos_emb: len x demb - """ - - def forward(self, pos_seq): - sinusoid_inp = torch.ger(pos_seq, self.inv_freq) - pos_emb = ( - torch.cat([sinusoid_inp.sin(), sinusoid_inp.cos()], dim=-1) - / self.pos_discount - ) - return pos_emb - - -""" - Splitter -""" - - -class Splitter(nn.Module): - def __init__(self, nx): - super(Splitter, self).__init__() - self.nx = nx - self.augmenter = Conv1D(nx * 3, nx) - - """ - Input: - x: batch x len x nx - Output: - query,key,value: batch x len x nx - """ - - def forward(self, x): - x = self.augmenter(x) - # x: batch x len x (3 x nx) - - query, key, value = x.split(self.nx, dim=2) - # query,key,value: batch x len x nx - - return query, key, value - - -""" - Multi-head Attention -""" - - -class Attention(nn.Module): - """ - nx: input dimension - """ - - def __init__(self, nx, opt): - super(Attention, self).__init__() - n_state = nx # in Attention: n_state=768 (nx=n_embd) - # [switch nx => n_state from Block to Attention to keep identical to TF implem] - n_head = int(opt["TRANSFORMER_HEAD"]) - resid_pdrop = opt["TRANSFORMER_RESIDUAL_DROPOUT"] - attn_pdrop = opt["TRANSFORMER_ATTENTION_DROPOUT"] - use_cuda = opt["cuda"] - - assert n_state % n_head == 0 - # if mask is needed, uncomment this - self.maxlen = 2048 # beyond this scale - self.mask = ( - Variable( - torch.tril(torch.ones(self.maxlen, self.maxlen)).view( - 1, 1, self.maxlen, self.maxlen - ), - requires_grad=False, - ).cuda() - if use_cuda - else Variable( - torch.tril(torch.ones(self.maxlen, self.maxlen)).view( - 1, 1, self.maxlen, self.maxlen - ), - requires_grad=False, - ) - ) - self.n_head = n_head - self.c_proj = Conv1D(n_state, nx) - self.attn_dropout = nn.Dropout(attn_pdrop) - 
self.resid_dropout = nn.Dropout(resid_pdrop) - self.use_cuda = use_cuda - - """ - Input: - q: batch x n_head x len x dim - k: batch x n_head x dim x kv_len - v: batch x n_head x kv_len x dim - x_mask: batch x kv_len # key and value's mask (if not None, used for encoder's self-attention and decoder's src-tgt attention) - one_dir_visible: only sees previous history (used for decoder's self-attention) - return_attn_weight: if true, also return the attention weights - Output: - a: batch x n_head x len x n_state x dim - attn_weight (if return_attn_weight): attn_weight: batch x n_head x len x kv_len - """ - - def _attn(self, q, k, v, x_mask, one_dir_visible, return_attn_weight): - w = torch.matmul(q, k) - # batch x n_head x len x kv_len - w = w / math.sqrt(v.size(-1)) - - mask = None - if one_dir_visible: # mask "seeing the future" - if w.size(-2) <= self.maxlen and w.size(-1) <= self.maxlen: - mask = ( - self.mask[:, :, : w.size(-2), : w.size(-1)].cuda() - if self.use_cuda - else self.mask[:, :, : w.size(-2), : w.size(-1)] - ) - else: - mask = ( - Variable( - torch.tril(torch.ones(w.size(-2), w.size(-1))).view( - 1, 1, w.size(-2), w.size(-1) - ), - requires_grad=False, - ).cuda() - if self.use_cuda - else Variable( - torch.tril(torch.ones(w.size(-2), w.size(-1))).view( - 1, 1, w.size(-2), w.size(-1) - ), - requires_grad=False, - ) - ) - - if x_mask is not None: - mask = x_mask.unsqueeze(1).unsqueeze(1).expand_as(w).float() - # batch x n_head x len x kv_len - - if mask is not None: - w = w * mask + -1e9 * (1 - mask) - - w_prob = nn.Softmax(dim=-1)(w) - w_prob = self.attn_dropout(w_prob) - if return_attn_weight: - return torch.matmul(w_prob, v), w - else: - return torch.matmul(w_prob, v) - - def merge_heads(self, x): - x = x.permute(0, 2, 1, 3).contiguous() - new_x_shape = x.size()[:-2] + (x.size(-2) * x.size(-1),) - return x.view(*new_x_shape) # in Tensorflow implem: fct merge_states - - """ - Input: - x: batch x len x dim - Output: - not k: batch x n_head x (dim/n_head) x len - k: batch x n_head x len x (dim/n_head) - """ - - def split_heads(self, x, k=False): - new_x_shape = x.size()[:-1] + (self.n_head, x.size(-1) // self.n_head) - x = x.view(*new_x_shape) # in Tensorflow implem: fct split_states - if k: - return x.permute(0, 2, 3, 1) - else: - return x.permute(0, 2, 1, 3) - - """ - Input: - query: batch x len x n_state - key, value: batch x kv_len x n_state - x_mask: batch x kv_len # key and value's mask (if not None, used for encoder's self-attention and decoder's src-tgt attention) - one_dir_visible: only sees previous history (used for decoder's self-attention) - return_attn_weight: if true, also return the attention weights - Output: - a: batch x len x n_state - attn_weight (if return_attn_weight): batch x len x kv_len - """ - - def forward( - self, query, key, value, x_mask, one_dir_visible=False, return_attn_weight=False - ): - query = self.split_heads(query) - # batch x n_head x len x (n_state/n_head) - - key = self.split_heads(key, k=True) - # batch x n_head x (n_state/n_head) x kv_len - - value = self.split_heads(value) - # batch x n_head x kv_len x (n_state/n_head) - - out = self._attn(query, key, value, x_mask, one_dir_visible, return_attn_weight) - - if return_attn_weight: - a, attn_weight = out - # a: batch x n_head x len x (n_state/n_head) - # attn_weight: batch x n_head x len x kv_len - attn_weight = attn_weight.permute(0, 2, 3, 1).contiguous() - # batch x len x kv_len x n_head - attn_weight = torch.sum(attn_weight, dim=3) - # batch x len x kv_len - else: - a = out - # batch x 
n_head x len x (n_state/n_head) - - a = self.merge_heads(a) - # batch x len x n_state - - a = self.c_proj(a) - # batch x len x n_state - - a = self.resid_dropout(a) - # batch x len x n_state - - if return_attn_weight: - return a, attn_weight - else: - return a - - -""" - Two-layer network -""" - - -class MLP(nn.Module): - """ - Input: - n_state: intermediate dim - """ - - def __init__(self, n_state, opt): # in MLP: n_state=3072 (4 * n_embd) - super(MLP, self).__init__() - nx = int(opt["transformer_embed_dim"]) - resid_pdrop = opt["TRANSFORMER_RESIDUAL_DROPOUT"] - self.c_fc = Conv1D(n_state, nx) - self.c_proj = Conv1D(nx, n_state) - self.dropout = nn.Dropout(resid_pdrop) - - """ - Input: - x: batch x len x nx - Output: batch x len x nx - """ - - def forward(self, x): - h = F.relu(self.c_fc(x)) - h2 = self.c_proj(h) - return self.dropout(h2) - - -""" - One encoder block of transformer -""" - - -class EncoderBlock(nn.Module): - def __init__(self, opt): - super(EncoderBlock, self).__init__() - nx = int(opt["transformer_embed_dim"]) - self.one_dir_visible = False - if "transformer_encoder_one_dir_visible" in opt: - self.one_dir_visible = opt["transformer_encoder_one_dir_visible"] - self.splitter = Splitter(nx) - self.attn = Attention(nx, opt) - self.ln_1 = LayerNorm(nx) - self.mlp = MLP(4 * nx, opt) - self.ln_2 = LayerNorm(nx) - - """ - Input: - x: batch x len x n_state - x_mask: batch x len (1 means there's something) - Output: - h: batch x len x n_state - """ - - def forward(self, x, x_mask): - query, key, value = self.splitter(x) - if self.one_dir_visible: - # in this case, use triangle masking, as it's one_direction - a = self.attn(query, key, value, None, one_dir_visible=True) - else: - # in this case, use x_mask for attention masking - a = self.attn(query, key, value, x_mask, one_dir_visible=False) - - n = self.ln_1(x + a) # residual - m = self.mlp(n) - h = self.ln_2(n + m) - return h - - -""" - One encoder block of transformer -""" - - -class DecoderBlock(nn.Module): - def __init__(self, opt): - super(DecoderBlock, self).__init__() - nx = int(opt["transformer_embed_dim"]) - self.decoder_splitter = Splitter(nx) - self.self_attn = Attention(nx, opt) - self.cross_attn = Attention(nx, opt) - self.ln_1 = LayerNorm(nx) - self.ln_2 = LayerNorm(nx) - self.mlp = MLP(4 * nx, opt) - self.ln_3 = LayerNorm(nx) - - """ - Input: - x_mask: batch x len, mask for encoder's input - y: batch x len x n_state (decoder part) - enc_key: batch x encoder_len x n_state - enc_value: batch x encoder_len x n_state - lang_model: whether it's for language model training (no encoder part is used) - Output: - h: batch x len x n_state - """ - - def forward(self, x_mask, y, enc_key, enc_value, lang_model=False): - query, key, value = self.decoder_splitter(y) - # batch x len x n_state - - # self-attention - a = self.self_attn(query, key, value, None, one_dir_visible=True) - # batch x len x n_state - - n = self.ln_1(y + a) # residual - - # seq2seq - if not lang_model: - # src-tgt attention - o = self.cross_attn(n, enc_key, enc_value, x_mask) - p = self.ln_2(n + o) # residual - # batch x len x n_state - else: # language model - p = n - - m = self.mlp(p) - h = self.ln_3(p + m) - return h - - -""" - Embedder -""" - - -class Embedder(nn.Module): - """ - Input: - vocab: size of vocabulary - """ - - def __init__(self, opt, embed=None): - super(Embedder, self).__init__() - n_state = int(opt["transformer_embed_dim"]) # n_state - embed_dropout_rate = opt["TRANSFORMER_EMBED_DROPOUT"] - if embed is None: - self.embed = 
nn.Embedding(opt["vocab_size"], n_state) - nn.init.normal_(self.embed.weight, std=0.02) - else: - self.embed = embed - self.drop = nn.Dropout(embed_dropout_rate) - self.pos_emb = PositionalEmbedding(opt, n_state) - self.use_cuda = opt["cuda"] - - """ - Input: - x: batch x len (word_id) - Output: - h: batch x len x n_state - """ - - def forward(self, x): - x_emb = self.embed(x) - batch_size = x.shape[0] - x_len = x.shape[1] - x_pos = self.pos_emb( - torch.arange(x_len).type( - torch.cuda.FloatTensor if self.use_cuda else torch.FloatTensor - ) - ) # len x n_state - x_pos = ( - Variable( - x_pos.unsqueeze(0).repeat(batch_size, 1, 1), requires_grad=False - ).cuda() - if self.use_cuda - else Variable( - x_pos.unsqueeze(0).repeat(batch_size, 1, 1), requires_grad=False - ) - ) - x_input = x_emb + x_pos - h = self.drop(x_input) - return h - - -""" - Transformer encoder -""" - - -class TransformerEncoder(nn.Module): - """ - Input: - embed: (if not None) pre-computed vocab embeddings - """ - - def __init__(self, opt, embed=None): - super(TransformerEncoder, self).__init__() - vocab = int(opt["vocab_size"]) - n_state = int(opt["transformer_embed_dim"]) - n_layer = int(opt["TRANSFORMER_LAYER"]) - if "vae_z_scale_factor" in opt: - self.vae_z_scale_factor = float(opt["vae_z_scale_factor"]) - - self.embedder = Embedder(opt, embed) - block = EncoderBlock(opt) - self.blocks = nn.ModuleList([copy.deepcopy(block) for _ in range(n_layer)]) - self.use_cuda = opt["cuda"] - - """ - Input: - x: batch x len (word_id) - z (optional): batch x len x n_state (for VAE) - Output: - h: batch x len x n_state (word_id) - """ - - def forward(self, x, z=None): - x_mask = ~x.eq(0) # 1 is PAD_id - x_mask = x_mask.type( - torch.cuda.FloatTensor if self.use_cuda else torch.FloatTensor - ) - - h = self.embedder(x) - if z is not None: - z *= self.vae_z_scale_factor - h += z - - for block in self.blocks: - h = block(h, x_mask) - return h - - -""" - Transformer decoder -""" - - -class TransformerDecoder(nn.Module): - """ - Input: - embed: (if not None) pre-computed vocab embeddings - """ - - def __init__(self, opt, embed=None): - super(TransformerDecoder, self).__init__() - self.opt = opt - vocab_size = int(opt["vocab_size"]) - n_state = int(opt["transformer_embed_dim"]) # n_state - n_layer = int(opt["TRANSFORMER_LAYER"]) - self.embedder = Embedder(opt, embed) - self.encoder_splitter = Splitter(n_state) - block = DecoderBlock(opt) - self.blocks = nn.ModuleList([copy.deepcopy(block) for _ in range(n_layer)]) - if embed is None: - self.linear = Conv1D(vocab_size, n_state) - else: - self.linear = nn.Linear(n_state, vocab_size, bias=False) - if ( - "FINETUNE_RETRAIN_SOFTMAX" not in opt - ): # if FINETUNE_RETRAIN_SOFTMAX, linear needs to be seperately trained - self.linear.weight = embed.weight # share weight - self.use_coda = opt["cuda"] - - """ - Input: - x: batch x encoder_len (word id) - x_out: batch x encoder_len x n_state - y: batch x len (word_id) (decoder part) - lang_model: whether it's for language model training (no encoder part is used) - Output: - prob: batch x len x vocab_size (probabilities after softmax) - """ - - def forward(self, x, x_out, y, lang_model=False): - # seq2seq - if not lang_model: - _, enc_key, enc_value = self.encoder_splitter(x_out) - # enc_key: batch x encoder_len x n_state - # enc_value: batch x encoder_len x n_state - - x_mask = ~x.eq(0) # 1 is PAD_id - x_mask = x_mask.type( - torch.cuda.FloatTensor if self.use_cuda else torch.FloatTensor - ) - else: - enc_key = None - enc_value = None - x_mask = None 
- - h = self.embedder(y) - for block in self.blocks: - h = block(x_mask, h, enc_key, enc_value, lang_model) - prob = F.softmax(self.linear(h), dim=-1) - return prob - - -class TransformerBeam: - """ - Input: - encoder: TransformerEncoder class - decoder: TransformerDecoder class - begin_id: word id of '' - vocab: list of words - """ - - def __init__(self, opt, encoder, decoder, begin_id, vocab): - self.encoder = encoder - self.decoder = decoder - self.opt = opt - self.max_sent_len = int(opt["max_sent_len"]) - self.begin_id = begin_id - self.vocab = vocab - self.beam_width = int(opt["beam_width"]) - self.use_cuda = opt["cuda"] - - # each candidate is (idx, prob, 0/1, position/wordid) - def merge_candidates(self, cand_A, cand_B): - C = [] - pA, lA, pB, lB = 0, len(cand_A), 0, len(cand_B) - lC = 0 - while (pA < lA or pB < lB) and (lC < self.beam_width): - if pA < lA and (pB >= lB or cand_A[pA][1] > cand_B[pB][1]): - C.append(cand_A[pA]) - pA += 1 - else: - C.append(cand_B[pB]) - pB += 1 - lC += 1 - return C - - """ - Input: - x = batch * encoder_len (word_ids) encoder's input - k: top-k sampling - Output: - sents: list of words, with batch items, each one with up to beam_width (sentence, log_prob), each sentence with up to max_sent_len_word words - """ - - def topk(self, x, k): - batch_size = x.shape[0] - x_len = x.shape[1] - x_out = self.encoder(x) - # x_out: batch x encoder_len x n_state - - # sent_ids is the words for each of the batch_size sentences - sent_ids = [] - for i in range(batch_size): - sent_ids.append([self.begin_id]) - - topk = 1 - MIN_GEN_LENGTH = 45 - if "MIN_GEN_LENGTH" in self.opt: - MIN_GEN_LENGTH = int(self.opt["MIN_GEN_LENGTH"]) - for l in range(self.max_sent_len): - y = ( - Variable(torch.LongTensor(sent_ids)).cuda() - if self.use_cuda - else Variable(torch.LongTensor(sent_ids)) - ) # batch_size x l - decoder_outputs = self.decoder(x, x_out, y) - probs = decoder_outputs[ - :, -1, : - ] # batch_size x vocab_size (only take the last output) - for i in range(batch_size): - topk_probs, _ = torch.topk(probs[i], k) - threshold = float(topk_probs[-1]) - probs[i][probs[i] < threshold] = 0.0 - - samples = torch.multinomial( - probs, 2 - ) # sample 2 since the first one may be - for i in range(batch_size): - if l < MIN_GEN_LENGTH and self.vocab[int(samples[i, 0])] == "": - sent_ids[i].append(int(samples[i, 1])) - else: - sent_ids[i].append(int(samples[i, 0])) - - sents = [] - for i in range(batch_size): - utt = [] - for j in range(len(sent_ids[i])): - w = self.vocab[sent_ids[i][j]] - if w == "": - continue - if w == "": - break - utt.append(w) - sents.append([(utt, 0)]) - - return sents - - """ - Input: - x = batch * encoder_len (word_ids) encoder's input - Output: - sents: list of words, with batch items, each one with up to beam_width (sentence, log_prob), each sentence with up to max_sent_len_word words - """ - - def beam_search(self, x): - batch_size = x.shape[0] - x_len = x.shape[1] - x_out = self.encoder(x) - # x_out: batch x encoder_len x n_state - - sents = [] - topk = 1 - history_nodes = [{}] - end_nodes = {} - for idx in range(batch_size): - start_node = BeamSearchNode([self.begin_id], 0, 1) - history_nodes[0][idx] = [start_node] - end_nodes[idx] = [] - - for l in range(self.max_sent_len): - last_nodes = history_nodes[-1] - if sum([len(l) for i, l in last_nodes.items()]) == 0: # no nodes left - break - ys = [] - x_outs = [] - xs = [] - for idx in range(batch_size): - ys.extend([node.word_ids for node in last_nodes[idx]]) - x_outs.extend( - [x_out[idx, :, 
:].unsqueeze(0) for node in last_nodes[idx]] - ) - xs.extend([x[idx, :].unsqueeze(0) for node in last_nodes[idx]]) - - ys = ( - Variable(torch.LongTensor(ys)).cuda() - if self.use_cuda - else Variable(torch.LongTensor(ys)) - ) # N x l - x_outs = torch.cat(x_outs, dim=0) # N x x_len x n_state - xs = torch.cat(xs, dim=0) # N x x_len - probs = self.decoder(xs, x_outs, ys) - log_probs = torch.log( - probs[:, -1, :] + 1e-15 - ) # N x vocab_size (only take the last output) - - history_nodes.append({}) - p = 0 - for idx in range(batch_size): - history_nodes[-1][idx] = [] - N = len(last_nodes[idx]) - if N == 0: - continue - log_prob = log_probs[p : p + N] - p += N - # log_prob = N x extended_vocab_size - - # generate - candidates = [] - for k in range(N): - logprobs, ids = torch.topk(log_prob[k], self.beam_width) - candidates = self.merge_candidates( - candidates, [(k, p, d) for p, d in zip(logprobs, ids)] - ) - - candidates = candidates[: self.beam_width] - extended_nodes_in_last_nodes = set() - for k in range(len(candidates)): - h, logp, next_word_id = candidates[ - k - ] # h means "the h-th node in last_nodes" - logp = float(logp) - next_word_id = int(next_word_id) - prev_node = last_nodes[idx][h] - next_wordids = prev_node.word_ids + [next_word_id] - next_word = self.vocab[next_word_id] - - next_node = BeamSearchNode( - next_wordids, prev_node.log_prob + logp, prev_node.length + 1 - ) - if next_node.duplicate == False: # no duplicate trigram generated - extended_nodes_in_last_nodes.add(h) - if next_word == "" or l == self.max_sent_len - 1: - end_nodes[idx].append((next_node.eval(), next_node)) - else: - history_nodes[-1][idx].append(next_node) - - special_words = ["", "", "", "", "", ""] - for k in range(N): - if k not in extended_nodes_in_last_nodes: - node = last_nodes[idx][k] - effective_word_count = sum( - [ - 1 - for x in node.word_ids - if self.vocab[x] not in special_words - ] - ) - if effective_word_count >= 5: - end_nodes[idx].append((node.eval(), node)) - - MIN_GEN_LENGTH = 45 - if "MIN_GEN_LENGTH" in self.opt: - MIN_GEN_LENGTH = int(self.opt["MIN_GEN_LENGTH"]) - for idx in range(batch_size): - t = len([w for w in end_nodes[idx] if w[1].length > MIN_GEN_LENGTH]) - if t > 0: - end_nodes[idx] = [ - w for w in end_nodes[idx] if w[1].length > MIN_GEN_LENGTH - ] - - end_nodes[idx].sort(key=lambda tup: tup[0], reverse=True) - candidates = [] - for score, node in end_nodes[idx][:topk]: - utt = [self.vocab[x] for x in node.word_ids] - utt = [x for x in utt if x not in ["", ""]] - candidates.append((utt, score)) - if len(candidates) == 0: - candidates.append(("", 0)) - sents.append(candidates) - - return sents - - -class BeamSearchNode(object): - def __init__(self, word_ids, log_prob, length): - self.word_ids = word_ids - self.log_prob = log_prob - self.length = length - - trigram_set = set() - self.duplicate = False - - for i in range(2, len(word_ids)): - trigram = ( - str(word_ids[i - 2]) - + " " - + str(word_ids[i - 1]) - + " " - + str(word_ids[i]) - ) - if trigram in trigram_set: - self.duplicate = True - break - trigram_set.add(trigram) - - def eval(self): - return self.log_prob / float(self.length - 1.0 + 1e-6) - - def __lt__(self, other): - return self.length < other.length diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/.github/ISSUE_TEMPLATE/bug_report.md b/spaces/amarchheda/ChordDuplicate/portaudio/.github/ISSUE_TEMPLATE/bug_report.md deleted file mode 100644 index 794c5e989b3e58595241a52197186b5482857690..0000000000000000000000000000000000000000 --- 
a/spaces/amarchheda/ChordDuplicate/portaudio/.github/ISSUE_TEMPLATE/bug_report.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -name: Bug report -about: Create a report to help us improve -title: '' -labels: '' -assignees: '' - ---- - -(Please use the mailing list for support requests and general discussion. This is only for actual bugs.) - -**Describe the bug** -A clear and concise description of what the bug is. - -**To Reproduce** -Steps to reproduce the behavior. Include code if applicable. -1. - -**Expected behavior** -A clear and concise description of what you expected to happen. - -**Actual behavior** -What actually happened. -Include a recording if helpful. -Error messages or logs longer than a page should be attached as a .txt file. - -**Desktop (please complete the following information):** - - OS: [e.g. Mac OS] - - OS Version [e.g. 22] - - PortAudio version: stable, nightly snapshot (which?), current (please give date and/or Git hash): - - If Windows or Linux, which Host API (e.g. WASAPI): - -**Additional context** -Add any other context about the problem here. diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/noise.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/noise.py deleted file mode 100644 index 768f0e9f73ea50b3262c643b712730f614488895..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/noise.py +++ /dev/null @@ -1,64 +0,0 @@ -import torch -import numpy as np -from PIL import ImageOps -import math -from .animation import sample_to_cv2 -import cv2 - -deforum_noise_gen = torch.Generator(device='cpu') - -# 2D Perlin noise in PyTorch https://gist.github.com/vadimkantorov/ac1b097753f217c5c11bc2ff396e0a57 -def rand_perlin_2d(shape, res, fade = lambda t: 6*t**5 - 15*t**4 + 10*t**3): - delta = (res[0] / shape[0], res[1] / shape[1]) - d = (shape[0] // res[0], shape[1] // res[1]) - - grid = torch.stack(torch.meshgrid(torch.arange(0, res[0], delta[0]), torch.arange(0, res[1], delta[1]), indexing='ij'), dim = -1) % 1 - angles = 2*math.pi*torch.rand(res[0]+1, res[1]+1, generator=deforum_noise_gen) - gradients = torch.stack((torch.cos(angles), torch.sin(angles)), dim = -1) - - tile_grads = lambda slice1, slice2: gradients[slice1[0]:slice1[1], slice2[0]:slice2[1]].repeat_interleave(d[0], 0).repeat_interleave(d[1], 1) - dot = lambda grad, shift: (torch.stack((grid[:shape[0],:shape[1],0] + shift[0], grid[:shape[0],:shape[1], 1] + shift[1] ), dim = -1) * grad[:shape[0], :shape[1]]).sum(dim = -1) - - n00 = dot(tile_grads([0, -1], [0, -1]), [0, 0]) - n10 = dot(tile_grads([1, None], [0, -1]), [-1, 0]) - n01 = dot(tile_grads([0, -1],[1, None]), [0, -1]) - n11 = dot(tile_grads([1, None], [1, None]), [-1,-1]) - t = fade(grid[:shape[0], :shape[1]]) - return math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]), torch.lerp(n01, n11, t[..., 0]), t[..., 1]) - -def rand_perlin_2d_octaves(shape, res, octaves=1, persistence=0.5): - noise = torch.zeros(shape) - frequency = 1 - amplitude = 1 - for _ in range(int(octaves)): - noise += amplitude * rand_perlin_2d(shape, (frequency*res[0], frequency*res[1])) - frequency *= 2 - amplitude *= persistence - return noise - -def condition_noise_mask(noise_mask, invert_mask = False): - if invert_mask: - noise_mask = ImageOps.invert(noise_mask) - noise_mask = np.array(noise_mask.convert("L")) - noise_mask = noise_mask.astype(np.float32) / 255.0 - noise_mask = np.around(noise_mask, decimals=0) - noise_mask = 
torch.from_numpy(noise_mask) - #noise_mask = torch.round(noise_mask) - return noise_mask - -def add_noise(sample, noise_amt: float, seed: int, noise_type: str, noise_args, noise_mask = None, invert_mask = False): - deforum_noise_gen.manual_seed(seed) # Reproducibility - sample2dshape = (sample.shape[0], sample.shape[1]) #sample is cv2, so height - width - noise = torch.randn((sample.shape[2], sample.shape[0], sample.shape[1]), generator=deforum_noise_gen) # White noise - if noise_type == 'perlin': - # rand_perlin_2d_octaves is between -1 and 1, so we need to shift it to be between 0 and 1 - # print(sample.shape) - noise = noise * ((rand_perlin_2d_octaves(sample2dshape, (int(noise_args[0]), int(noise_args[1])), octaves=noise_args[2], persistence=noise_args[3]) + torch.ones(sample2dshape)) / 2) - if noise_mask is not None: - noise_mask = condition_noise_mask(noise_mask, invert_mask) - noise_to_add = sample_to_cv2(noise * noise_mask) - else: - noise_to_add = sample_to_cv2(noise) - sample = cv2.addWeighted(sample, 1-noise_amt, noise_to_add, noise_amt, 0) - - return sample diff --git a/spaces/armgabrielyan/search-in-video/utils.py b/spaces/armgabrielyan/search-in-video/utils.py deleted file mode 100644 index 39b8db4f46d1df025e67eddd56da4cb789c40214..0000000000000000000000000000000000000000 --- a/spaces/armgabrielyan/search-in-video/utils.py +++ /dev/null @@ -1,35 +0,0 @@ -from transformers import ViTFeatureExtractor -import torchvision.transforms.functional as fn -import torch as th - - -def video2image(video, feature_extractor_name): - feature_extractor = ViTFeatureExtractor.from_pretrained( - feature_extractor_name - ) - - vid = th.permute(video, (3, 0, 1, 2)) - samp = th.linspace(0, vid.shape[1]-1, 49, dtype=th.long) - vid = vid[:, samp, :, :] - - im_l = list() - for i in range(vid.shape[1]): - im_l.append(vid[:, i, :, :]) - - inputs = feature_extractor(im_l, return_tensors="pt") - - inputs = inputs['pixel_values'] - - im_h = list() - for i in range(7): - im_v = th.cat((inputs[0+i*7, :, :, :], - inputs[1+i*7, :, :, :], - inputs[2+i*7, :, :, :], - inputs[3+i*7, :, :, :], - inputs[4+i*7, :, :, :], - inputs[5+i*7, :, :, :], - inputs[6+i*7, :, :, :]), 2) - im_h.append(im_v) - resize = fn.resize(th.cat(im_h, 1), size=[224]) - - return resize diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/bark/inference_funcs.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/bark/inference_funcs.py deleted file mode 100644 index f3d3fee9371fae0cd06187c967a5b0028940138e..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/bark/inference_funcs.py +++ /dev/null @@ -1,606 +0,0 @@ -import logging -import os -import re -from glob import glob -from typing import Dict, List - -import librosa -import numpy as np -import torch -import torchaudio -import tqdm -from encodec.utils import convert_audio -from scipy.special import softmax -from torch.nn import functional as F - -from TTS.tts.layers.bark.hubert.hubert_manager import HubertManager -from TTS.tts.layers.bark.hubert.kmeans_hubert import CustomHubert -from TTS.tts.layers.bark.hubert.tokenizer import HubertTokenizer -from TTS.tts.layers.bark.load_model import clear_cuda_cache, inference_mode - -logger = logging.getLogger(__name__) - - -def _tokenize(tokenizer, text): - return tokenizer.encode(text, add_special_tokens=False) - - -def _detokenize(tokenizer, enc_text): - return tokenizer.decode(enc_text) - - -def _normalize_whitespace(text): - return re.sub(r"\s+", " ", text).strip() - - 
-def get_voices(extra_voice_dirs: List[str] = []): # pylint: disable=dangerous-default-value - dirs = extra_voice_dirs - voices: Dict[str, List[str]] = {} - for d in dirs: - subs = os.listdir(d) - for sub in subs: - subj = os.path.join(d, sub) - if os.path.isdir(subj): - voices[sub] = list(glob(f"{subj}/*.npz")) - # fetch audio files if no npz files are found - if len(voices[sub]) == 0: - voices[sub] = list(glob(f"{subj}/*.wav")) + list(glob(f"{subj}/*.mp3")) - return voices - - -def load_npz(npz_file): - x_history = np.load(npz_file) - semantic = x_history["semantic_prompt"] - coarse = x_history["coarse_prompt"] - fine = x_history["fine_prompt"] - return semantic, coarse, fine - - -def load_voice(model, voice: str, extra_voice_dirs: List[str] = []): # pylint: disable=dangerous-default-value - if voice == "random": - return None, None, None - - voices = get_voices(extra_voice_dirs) - # look up the voice first, so a missing voice raises a helpful error - try: - paths = voices[voice] - except KeyError as e: - raise KeyError(f"Voice {voice} not found in {extra_voice_dirs}") from e - - # bark only uses a single sample for cloning - if len(paths) > 1: - raise ValueError(f"Voice {voice} has multiple paths: {paths}") - - if len(paths) == 1 and paths[0].endswith(".npz"): - return load_npz(paths[0]) - - audio_path = paths[0] - # replace the file extension with .npz - output_path = os.path.splitext(audio_path)[0] + ".npz" - generate_voice(audio=audio_path, model=model, output_path=output_path) - return load_voice(model, voice, extra_voice_dirs) - - -def zero_crossing_rate(audio, frame_length=1024, hop_length=512): - zero_crossings = np.sum(np.abs(np.diff(np.sign(audio))) / 2) - total_frames = 1 + int((len(audio) - frame_length) / hop_length) - return zero_crossings / total_frames - - -def compute_spectral_contrast(audio_data, sample_rate, n_bands=6, fmin=200.0): - spectral_contrast = librosa.feature.spectral_contrast(y=audio_data, sr=sample_rate, n_bands=n_bands, fmin=fmin) - return np.mean(spectral_contrast) - - -def compute_average_bass_energy(audio_data, sample_rate, max_bass_freq=250): - stft = librosa.stft(audio_data) - power_spectrogram = np.abs(stft) ** 2 - # librosa.stft returns 1 + n_fft // 2 rows, so recover n_fft from the row count - frequencies = librosa.fft_frequencies(sr=sample_rate, n_fft=(stft.shape[0] - 1) * 2) - bass_mask = frequencies <= max_bass_freq - bass_energy = power_spectrogram[np.ix_(bass_mask, np.arange(power_spectrogram.shape[1]))].mean() - return bass_energy - - -def generate_voice( - audio, - model, - output_path, -): - """Generate a new voice from a given audio sample. - - Args: - audio (np.ndarray or str): The audio (or a path to it) to use as a base for the new voice. - model (BarkModel): The BarkModel to use for generating the new voice. - output_path (str): The path to save the generated voice to. 
- """ - if isinstance(audio, str): - audio, sr = torchaudio.load(audio) - audio = convert_audio(audio, sr, model.config.sample_rate, model.encodec.channels) - audio = audio.unsqueeze(0).to(model.device) - - with torch.no_grad(): - encoded_frames = model.encodec.encode(audio) - codes = torch.cat([encoded[0] for encoded in encoded_frames], dim=-1).squeeze() # [n_q, T] - - # move codes to cpu - codes = codes.cpu().numpy() - - # generate semantic tokens - # Load the HuBERT model - hubert_manager = HubertManager() - # hubert_manager.make_sure_hubert_installed(model_path=model.config.LOCAL_MODEL_PATHS["hubert"]) - hubert_manager.make_sure_tokenizer_installed(model_path=model.config.LOCAL_MODEL_PATHS["hubert_tokenizer"]) - - hubert_model = CustomHubert(checkpoint_path=model.config.LOCAL_MODEL_PATHS["hubert"]).to(model.device) - - # Load the CustomTokenizer model - tokenizer = HubertTokenizer.load_from_checkpoint( - model.config.LOCAL_MODEL_PATHS["hubert_tokenizer"], map_location=model.device - ) - # semantic_tokens = model.text_to_semantic( - # text, max_gen_duration_s=seconds, top_k=50, top_p=0.95, temp=0.7 - # ) # not 100% - semantic_vectors = hubert_model.forward(audio[0], input_sample_hz=model.config.sample_rate) - semantic_tokens = tokenizer.get_token(semantic_vectors) - semantic_tokens = semantic_tokens.cpu().numpy() - - np.savez(output_path, fine_prompt=codes, coarse_prompt=codes[:2, :], semantic_prompt=semantic_tokens) - - -def generate_text_semantic( - text, - model, - history_prompt=None, - temp=0.7, - top_k=None, - top_p=None, - silent=False, - min_eos_p=0.2, - max_gen_duration_s=None, - allow_early_stop=True, - base=None, - use_kv_caching=True, - **kwargs, # pylint: disable=unused-argument -): - """Generate semantic tokens from text. - - Args: - text (str): The text to generate semantic tokens from. - model (BarkModel): The BarkModel to use for generating the semantic tokens. - history_prompt (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a prompt for the generation. - temp (float): The temperature to use for the generation. - top_k (int): The number of top tokens to consider for the generation. - top_p (float): The cumulative probability to consider for the generation. - silent (bool): Whether to silence the tqdm progress bar. - min_eos_p (float): The minimum probability to consider for the end of sentence token. - max_gen_duration_s (float): The maximum duration in seconds to generate for. - allow_early_stop (bool): Whether to allow the generation to stop early. - base (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a base for the generation. - use_kv_caching (bool): Whether to use key-value caching for the generation. - **kwargs: Additional keyword arguments. They are ignored. - - Returns: - np.ndarray: The generated semantic tokens. 
- """ - assert isinstance(text, str) - text = _normalize_whitespace(text) - assert len(text.strip()) > 0 - if all(v is not None for v in history_prompt) or base is not None: - if history_prompt is not None: - semantic_history = history_prompt[0] - if base is not None: - semantic_history = base[0] - assert ( - isinstance(semantic_history, np.ndarray) - and len(semantic_history.shape) == 1 - and len(semantic_history) > 0 - and semantic_history.min() >= 0 - and semantic_history.max() <= model.config.SEMANTIC_VOCAB_SIZE - 1 - ) - else: - semantic_history = None - encoded_text = np.array(_tokenize(model.tokenizer, text)) + model.config.TEXT_ENCODING_OFFSET - if len(encoded_text) > 256: - p = round((len(encoded_text) - 256) / len(encoded_text) * 100, 1) - logger.warning(f"warning, text too long, lopping of last {p}%") - encoded_text = encoded_text[:256] - encoded_text = np.pad( - encoded_text, - (0, 256 - len(encoded_text)), - constant_values=model.config.TEXT_PAD_TOKEN, - mode="constant", - ) - if semantic_history is not None: - semantic_history = semantic_history.astype(np.int64) - # lop off if history is too long, pad if needed - semantic_history = semantic_history[-256:] - semantic_history = np.pad( - semantic_history, - (0, 256 - len(semantic_history)), - constant_values=model.config.SEMANTIC_PAD_TOKEN, - mode="constant", - ) - else: - semantic_history = np.array([model.config.SEMANTIC_PAD_TOKEN] * 256) - x = torch.from_numpy( - np.hstack([encoded_text, semantic_history, np.array([model.config.SEMANTIC_INFER_TOKEN])]).astype(np.int64) - )[None] - assert x.shape[1] == 256 + 256 + 1 - with inference_mode(): - x = x.to(model.device) - n_tot_steps = 768 - # custom tqdm updates since we don't know when eos will occur - pbar = tqdm.tqdm(disable=silent, total=100) - pbar_state = 0 - tot_generated_duration_s = 0 - kv_cache = None - for n in range(n_tot_steps): - if use_kv_caching and kv_cache is not None: - x_input = x[:, [-1]] - else: - x_input = x - logits, kv_cache = model.semantic_model( - x_input, merge_context=True, use_cache=use_kv_caching, past_kv=kv_cache - ) - relevant_logits = logits[0, 0, : model.config.SEMANTIC_VOCAB_SIZE] - if allow_early_stop: - relevant_logits = torch.hstack( - (relevant_logits, logits[0, 0, [model.config.SEMANTIC_PAD_TOKEN]]) - ) # eos - if top_p is not None: - # faster to convert to numpy - logits_device = relevant_logits.device - logits_dtype = relevant_logits.type() - relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy() - sorted_indices = np.argsort(relevant_logits)[::-1] - sorted_logits = relevant_logits[sorted_indices] - cumulative_probs = np.cumsum(softmax(sorted_logits)) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy() - sorted_indices_to_remove[0] = False - relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf - relevant_logits = torch.from_numpy(relevant_logits) - relevant_logits = relevant_logits.to(logits_device).type(logits_dtype) - if top_k is not None: - v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1))) - relevant_logits[relevant_logits < v[-1]] = -float("Inf") - probs = torch.softmax(relevant_logits / temp, dim=-1) - item_next = torch.multinomial(probs, num_samples=1) - if allow_early_stop and ( - item_next == model.config.SEMANTIC_VOCAB_SIZE or (min_eos_p is not None and probs[-1] >= min_eos_p) - ): - # eos found, so break - pbar.update(100 - pbar_state) - break - x = torch.cat((x, item_next[None]), dim=1) - 
tot_generated_duration_s += 1 / model.config.SEMANTIC_RATE_HZ - if max_gen_duration_s is not None and tot_generated_duration_s > max_gen_duration_s: - pbar.update(100 - pbar_state) - break - if n == n_tot_steps - 1: - pbar.update(100 - pbar_state) - break - del logits, relevant_logits, probs, item_next - req_pbar_state = np.min([100, int(round(100 * n / n_tot_steps))]) - if req_pbar_state > pbar_state: - pbar.update(req_pbar_state - pbar_state) - pbar_state = req_pbar_state - pbar.close() - out = x.detach().cpu().numpy().squeeze()[256 + 256 + 1 :] - assert all(out >= 0) and all(out < model.config.SEMANTIC_VOCAB_SIZE) - clear_cuda_cache() - return out - - -def _flatten_codebooks(arr, offset_size): - assert len(arr.shape) == 2 - arr = arr.copy() - if offset_size is not None: - for n in range(1, arr.shape[0]): - arr[n, :] += offset_size * n - flat_arr = arr.ravel("F") - return flat_arr - - -def generate_coarse( - x_semantic, - model, - history_prompt=None, - temp=0.7, - top_k=None, - top_p=None, - silent=False, - max_coarse_history=630, # min 60 (faster), max 630 (more context) - sliding_window_len=60, - base=None, - use_kv_caching=True, -): - """Generate coarse audio codes from semantic tokens. - - Args: - x_semantic (np.ndarray): The semantic tokens to generate coarse audio codes from. - model (BarkModel): The BarkModel to use for generating the coarse audio codes. - history_prompt (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a prompt for the generation. - temp (float): The temperature to use for the generation. - top_k (int): The number of top tokens to consider for the generation. - top_p (float): The cumulative probability to consider for the generation. - silent (bool): Whether to silence the tqdm progress bar. - max_coarse_history (int): The maximum number of coarse audio codes to use as history. - sliding_window_len (int): The length of the sliding window to use for the generation. - base (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a base for the generation. - use_kv_caching (bool): Whether to use key-value caching for the generation. - - Returns: - np.ndarray: The generated coarse audio codes. 
- """ - assert ( - isinstance(x_semantic, np.ndarray) - and len(x_semantic.shape) == 1 - and len(x_semantic) > 0 - and x_semantic.min() >= 0 - and x_semantic.max() <= model.config.SEMANTIC_VOCAB_SIZE - 1 - ) - assert 60 <= max_coarse_history <= 630 - assert max_coarse_history + sliding_window_len <= 1024 - 256 - semantic_to_coarse_ratio = ( - model.config.COARSE_RATE_HZ / model.config.SEMANTIC_RATE_HZ * model.config.N_COARSE_CODEBOOKS - ) - max_semantic_history = int(np.floor(max_coarse_history / semantic_to_coarse_ratio)) - if all(v is not None for v in history_prompt) or base is not None: - if history_prompt is not None: - x_history = history_prompt - x_semantic_history = x_history[0] - x_coarse_history = x_history[1] - if base is not None: - x_semantic_history = base[0] - x_coarse_history = base[1] - assert ( - isinstance(x_semantic_history, np.ndarray) - and len(x_semantic_history.shape) == 1 - and len(x_semantic_history) > 0 - and x_semantic_history.min() >= 0 - and x_semantic_history.max() <= model.config.SEMANTIC_VOCAB_SIZE - 1 - and isinstance(x_coarse_history, np.ndarray) - and len(x_coarse_history.shape) == 2 - and x_coarse_history.shape[0] == model.config.N_COARSE_CODEBOOKS - and x_coarse_history.shape[-1] >= 0 - and x_coarse_history.min() >= 0 - and x_coarse_history.max() <= model.config.CODEBOOK_SIZE - 1 - and ( - round(x_coarse_history.shape[-1] / len(x_semantic_history), 1) - == round(semantic_to_coarse_ratio / model.config.N_COARSE_CODEBOOKS, 1) - ) - ) - x_coarse_history = ( - _flatten_codebooks(x_coarse_history, model.config.CODEBOOK_SIZE) + model.config.SEMANTIC_VOCAB_SIZE - ) - # trim histories correctly - n_semantic_hist_provided = np.min( - [ - max_semantic_history, - len(x_semantic_history) - len(x_semantic_history) % 2, - int(np.floor(len(x_coarse_history) / semantic_to_coarse_ratio)), - ] - ) - n_coarse_hist_provided = int(round(n_semantic_hist_provided * semantic_to_coarse_ratio)) - x_semantic_history = x_semantic_history[-n_semantic_hist_provided:].astype(np.int32) - x_coarse_history = x_coarse_history[-n_coarse_hist_provided:].astype(np.int32) - # TODO: bit of a hack for time alignment (sounds better) - x_coarse_history = x_coarse_history[:-2] - else: - x_semantic_history = np.array([], dtype=np.int32) - x_coarse_history = np.array([], dtype=np.int32) - # start loop - n_steps = int( - round( - np.floor(len(x_semantic) * semantic_to_coarse_ratio / model.config.N_COARSE_CODEBOOKS) - * model.config.N_COARSE_CODEBOOKS - ) - ) - assert n_steps > 0 and n_steps % model.config.N_COARSE_CODEBOOKS == 0 - x_semantic = np.hstack([x_semantic_history, x_semantic]).astype(np.int32) - x_coarse = x_coarse_history.astype(np.int32) - base_semantic_idx = len(x_semantic_history) - with inference_mode(): - x_semantic_in = torch.from_numpy(x_semantic)[None].to(model.device) - x_coarse_in = torch.from_numpy(x_coarse)[None].to(model.device) - n_window_steps = int(np.ceil(n_steps / sliding_window_len)) - n_step = 0 - for _ in tqdm.tqdm(range(n_window_steps), total=n_window_steps, disable=silent): - semantic_idx = base_semantic_idx + int(round(n_step / semantic_to_coarse_ratio)) - # pad from right side - x_in = x_semantic_in[:, np.max([0, semantic_idx - max_semantic_history]) :] - x_in = x_in[:, :256] - x_in = F.pad( - x_in, - (0, 256 - x_in.shape[-1]), - "constant", - model.config.COARSE_SEMANTIC_PAD_TOKEN, - ) - x_in = torch.hstack( - [ - x_in, - torch.tensor([model.config.COARSE_INFER_TOKEN])[None].to(model.device), - x_coarse_in[:, -max_coarse_history:], - ] - ) - kv_cache = None - 
for _ in range(sliding_window_len): - if n_step >= n_steps: - continue - is_major_step = n_step % model.config.N_COARSE_CODEBOOKS == 0 - - if use_kv_caching and kv_cache is not None: - x_input = x_in[:, [-1]] - else: - x_input = x_in - - logits, kv_cache = model.coarse_model(x_input, use_cache=use_kv_caching, past_kv=kv_cache) - logit_start_idx = ( - model.config.SEMANTIC_VOCAB_SIZE + (1 - int(is_major_step)) * model.config.CODEBOOK_SIZE - ) - logit_end_idx = model.config.SEMANTIC_VOCAB_SIZE + (2 - int(is_major_step)) * model.config.CODEBOOK_SIZE - relevant_logits = logits[0, 0, logit_start_idx:logit_end_idx] - if top_p is not None: - # faster to convert to numpy - logits_device = relevant_logits.device - logits_dtype = relevant_logits.type() - relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy() - sorted_indices = np.argsort(relevant_logits)[::-1] - sorted_logits = relevant_logits[sorted_indices] - cumulative_probs = np.cumsum(torch.nn.functional.softmax(sorted_logits)) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy() - sorted_indices_to_remove[0] = False - relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf - relevant_logits = torch.from_numpy(relevant_logits) - relevant_logits = relevant_logits.to(logits_device).type(logits_dtype) - if top_k is not None: - v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1))) - relevant_logits[relevant_logits < v[-1]] = -float("Inf") - probs = torch.nn.functional.softmax(relevant_logits / temp, dim=-1) - item_next = torch.multinomial(probs, num_samples=1) - item_next += logit_start_idx - x_coarse_in = torch.cat((x_coarse_in, item_next[None]), dim=1) - x_in = torch.cat((x_in, item_next[None]), dim=1) - del logits, relevant_logits, probs, item_next - n_step += 1 - del x_in - del x_semantic_in - gen_coarse_arr = x_coarse_in.detach().cpu().numpy().squeeze()[len(x_coarse_history) :] - del x_coarse_in - assert len(gen_coarse_arr) == n_steps - gen_coarse_audio_arr = ( - gen_coarse_arr.reshape(-1, model.config.N_COARSE_CODEBOOKS).T - model.config.SEMANTIC_VOCAB_SIZE - ) - for n in range(1, model.config.N_COARSE_CODEBOOKS): - gen_coarse_audio_arr[n, :] -= n * model.config.CODEBOOK_SIZE - clear_cuda_cache() - return gen_coarse_audio_arr - - -def generate_fine( - x_coarse_gen, - model, - history_prompt=None, - temp=0.5, - silent=True, - base=None, -): - """Generate full audio codes from coarse audio codes. - - Args: - x_coarse_gen (np.ndarray): The coarse audio codes to generate full audio codes from. - model (BarkModel): The BarkModel to use for generating the full audio codes. - history_prompt (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a prompt for the generation. - temp (float): The temperature to use for the generation. - silent (bool): Whether to silence the tqdm progress bar. - base (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a base for the generation. - - Returns: - np.ndarray: The generated full audio codes. 
- """ - assert ( - isinstance(x_coarse_gen, np.ndarray) - and len(x_coarse_gen.shape) == 2 - and 1 <= x_coarse_gen.shape[0] <= model.config.N_FINE_CODEBOOKS - 1 - and x_coarse_gen.shape[1] > 0 - and x_coarse_gen.min() >= 0 - and x_coarse_gen.max() <= model.config.CODEBOOK_SIZE - 1 - ) - if all(v is not None for v in history_prompt) or base is not None: - if history_prompt is not None: - x_fine_history = history_prompt[2] - if base is not None: - x_fine_history = base[2] - assert ( - isinstance(x_fine_history, np.ndarray) - and len(x_fine_history.shape) == 2 - and x_fine_history.shape[0] == model.config.N_FINE_CODEBOOKS - and x_fine_history.shape[1] >= 0 - and x_fine_history.min() >= 0 - and x_fine_history.max() <= model.config.CODEBOOK_SIZE - 1 - ) - else: - x_fine_history = None - n_coarse = x_coarse_gen.shape[0] - # make input arr - in_arr = np.vstack( - [ - x_coarse_gen, - np.zeros((model.config.N_FINE_CODEBOOKS - n_coarse, x_coarse_gen.shape[1])) - + model.config.CODEBOOK_SIZE, # padding - ] - ).astype(np.int32) - # prepend history if available (max 512) - if x_fine_history is not None: - x_fine_history = x_fine_history.astype(np.int32) - in_arr = np.hstack( - [ - x_fine_history[:, -512:].astype(np.int32), - in_arr, - ] - ) - n_history = x_fine_history[:, -512:].shape[1] - else: - n_history = 0 - n_remove_from_end = 0 - # need to pad if too short (since non-causal model) - if in_arr.shape[1] < 1024: - n_remove_from_end = 1024 - in_arr.shape[1] - in_arr = np.hstack( - [ - in_arr, - np.zeros((model.config.N_FINE_CODEBOOKS, n_remove_from_end), dtype=np.int32) - + model.config.CODEBOOK_SIZE, - ] - ) - # we can be lazy about fractional loop and just keep overwriting codebooks - n_loops = np.max([0, int(np.ceil((x_coarse_gen.shape[1] - (1024 - n_history)) / 512))]) + 1 - with inference_mode(): - in_arr = torch.tensor(in_arr.T).to(model.device) - for n in tqdm.tqdm(range(n_loops), disable=silent): - start_idx = np.min([n * 512, in_arr.shape[0] - 1024]) - start_fill_idx = np.min([n_history + n * 512, in_arr.shape[0] - 512]) - rel_start_fill_idx = start_fill_idx - start_idx - in_buffer = in_arr[start_idx : start_idx + 1024, :][None] - for nn in range(n_coarse, model.config.N_FINE_CODEBOOKS): - logits = model.fine_model(nn, in_buffer) - if temp is None: - relevant_logits = logits[0, rel_start_fill_idx:, : model.config.CODEBOOK_SIZE] - codebook_preds = torch.argmax(relevant_logits, -1) - else: - relevant_logits = logits[0, :, : model.config.CODEBOOK_SIZE] / temp - probs = F.softmax(relevant_logits, dim=-1) - codebook_preds = torch.hstack( - [torch.multinomial(probs[n], num_samples=1) for n in range(rel_start_fill_idx, 1024)] - ) - in_buffer[0, rel_start_fill_idx:, nn] = codebook_preds - del logits, codebook_preds - # transfer over info into model_in and convert to numpy - for nn in range(n_coarse, model.config.N_FINE_CODEBOOKS): - in_arr[start_fill_idx : start_fill_idx + (1024 - rel_start_fill_idx), nn] = in_buffer[ - 0, rel_start_fill_idx:, nn - ] - del in_buffer - gen_fine_arr = in_arr.detach().cpu().numpy().squeeze().T - del in_arr - gen_fine_arr = gen_fine_arr[:, n_history:] - if n_remove_from_end > 0: - gen_fine_arr = gen_fine_arr[:, :-n_remove_from_end] - assert gen_fine_arr.shape[-1] == x_coarse_gen.shape[-1] - clear_cuda_cache() - return gen_fine_arr - - -def codec_decode(fine_tokens, model): - """Turn quantized audio codes into audio array using encodec.""" - arr = torch.from_numpy(fine_tokens)[None] - arr = arr.to(model.device) - arr = arr.transpose(0, 1) - emb = 
model.encodec.quantizer.decode(arr) - out = model.encodec.decoder(emb) - audio_arr = out.detach().cpu().numpy().squeeze() - return audio_arr diff --git a/spaces/arxify/RVC-beta-v2-0618/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/arxify/RVC-beta-v2-0618/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index 98d4e98b353008f81bde2c37e7da818763a992c9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - Interpolate the F0 contour to fill in unvoiced frames - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # this copy may be unnecessary - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/arxify/RVC-beta-v2-0618/my_utils.py b/spaces/arxify/RVC-beta-v2-0618/my_utils.py deleted file mode 100644 index a5258394b8ae5385daa665ab6ba6380507d4798a..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/my_utils.py +++ /dev/null @@ -1,21 +0,0 @@ -import ffmpeg -import numpy as np - - -def load_audio(file, sr): - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 
- file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # guard against users copying paths with stray spaces, quotes or newlines - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - return np.frombuffer(out, np.float32).flatten() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/__init__.py deleted file mode 100644 index eec9f692ddb2117e5196f654f5ff6d5a1a44e786..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .v5 import * diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/global_cmvn.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/global_cmvn.py deleted file mode 100644 index e457ff176fee3b996da11f47e7dc61b81c445ba3..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/global_cmvn.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -from fairseq.data.audio.feature_transforms import ( - AudioFeatureTransform, - register_audio_feature_transform, -) - - -@register_audio_feature_transform("global_cmvn") -class GlobalCMVN(AudioFeatureTransform): - """Global CMVN (cepstral mean and variance normalization). The global mean - and variance need to be pre-computed and stored in NumPy format (.npz).""" - - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - return GlobalCMVN(_config.get("stats_npz_path")) - - def __init__(self, stats_npz_path): - self.stats_npz_path = stats_npz_path - stats = np.load(stats_npz_path) - self.mean, self.std = stats["mean"], stats["std"] - - def __repr__(self): - return self.__class__.__name__ + f'(stats_npz_path="{self.stats_npz_path}")' - - def __call__(self, x): - x = np.subtract(x, self.mean) - x = np.divide(x, self.std) - return x diff --git a/spaces/ashercn97/AsherTesting/extensions/ngrok/README.md b/spaces/ashercn97/AsherTesting/extensions/ngrok/README.md deleted index 0324bf9852408d9d2b86cc0165c2d548996f9c94..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/extensions/ngrok/README.md +++ /dev/null @@ -1,69 +0,0 @@ -# Adding an ingress URL through the ngrok Agent SDK for Python - -[ngrok](https://ngrok.com) is a globally distributed reverse proxy commonly used for quickly getting a public URL to a -service running inside a private network, such as on your local laptop. The ngrok agent is usually -deployed inside a private network and is used to communicate with the ngrok cloud service. - -By default, the authtoken in the NGROK_AUTHTOKEN environment variable will be used. Alternatively, one may be specified in -the `settings.json` file; see the Examples below. Retrieve your authtoken on the [Auth Token page of your ngrok dashboard](https://dashboard.ngrok.com/get-started/your-authtoken), signing up is free. - -# Documentation - -For a list of all available options, see [the configuration documentation](https://ngrok.com/docs/ngrok-agent/config/) or [the connect example](https://github.com/ngrok/ngrok-py/blob/main/examples/ngrok-connect-full.py). 
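As a rough sketch of how an ingress is opened with this SDK (this is not the extension's actual code; the `ngrok.connect` call with `authtoken_from_env` is assumed from the ngrok-py examples, and port 7860 simply mirrors the web UI port shown later in this README):

```python
# Minimal sketch, not the extension's implementation: forward a local port
# through ngrok with the ngrok-py SDK. Assumes NGROK_AUTHTOKEN is set in the
# environment and that the `ngrok.connect` entry point from the ngrok-py
# examples is available.
import ngrok

listener = ngrok.connect(7860, authtoken_from_env=True)
print(f"Ingress established at {listener.url()}")
```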
- -The ngrok Python SDK is [on github here](https://github.com/ngrok/ngrok-py). A quickstart guide and a full API reference are included in the [ngrok-py Python API documentation](https://ngrok.github.io/ngrok-py/). - -# Running - -To enable ngrok, install the requirements and then add `--extension ngrok` to the command line options, for instance: - -```bash -pip install -r extensions/ngrok/requirements.txt -python server.py --extension ngrok -``` - -In the output you should then see something like this: - -```bash -INFO:Loading the extension "ngrok"... -INFO:Session created -INFO:Created tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" with url "https://d83706cf7be7.ngrok.app" -INFO:Tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" TCP forwarding to "localhost:7860" -INFO:Ingress established at https://d83706cf7be7.ngrok.app -``` - -You can now access the webui via the url shown, in this case `https://d83706cf7be7.ngrok.app`. It is recommended to add some authentication to the ingress, see below. - -# Example Settings - -In `settings.json` add a `ngrok` key with a dictionary of options, for instance: - -To enable basic authentication: -```json -{ - "ngrok": { - "basic_auth": "user:password" - } -} -``` - -To enable OAUTH authentication: -```json -{ - "ngrok": { - "oauth_provider": "google", - "oauth_allow_domains": "asdf.com", - "oauth_allow_emails": "asdf@asdf.com" - } -} -``` - -To add an authtoken instead of using the NGROK_AUTHTOKEN environment variable: -```json -{ - "ngrok": { - "authtoken": "", - "authtoken_from_env": false - } -} -``` \ No newline at end of file diff --git a/spaces/atimughal662/InfoFusion/app.py b/spaces/atimughal662/InfoFusion/app.py deleted file mode 100644 index d4bb1f140028f8d79d99dce983e4fd15522be605..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/app.py +++ /dev/null @@ -1 +0,0 @@ -generate.py \ No newline at end of file diff --git a/spaces/avans06/whisper-webui-translate/docs/options.md b/spaces/avans06/whisper-webui-translate/docs/options.md deleted file mode 100644 index 378bdaf4087efbb1326834f8af5084282deca927..0000000000000000000000000000000000000000 --- a/spaces/avans06/whisper-webui-translate/docs/options.md +++ /dev/null @@ -1,153 +0,0 @@ -# Standard Options -To transcribe or translate an audio file, you can either copy a URL from a website (all [websites](https://github.com/yt-dlp/yt-dlp/blob/master/supportedsites.md) -supported by YT-DLP will work, including YouTube), upload an audio file (choose "All Files (*.*)" -in the file selector to select any file type, including video files), or use the microphone. - -For longer audio files (>10 minutes), it is recommended that you select Silero VAD (Voice Activity Detector) in the VAD option, especially if you are using the `large-v1` model. Note that `large-v2` is a lot more forgiving, but you may still want to use a VAD with a slightly higher "VAD - Max Merge Size (s)" (60 seconds or more). 
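To get a feel for what the Silero VAD step produces, here is a small self-contained sketch (not this project's code); it assumes the publicly documented `snakers4/silero-vad` torch.hub entry point, and `audio.wav` is a placeholder input file:

```python
# Illustrative only: list the speech sections Silero VAD detects in a file.
import torch

model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

wav = read_audio("audio.wav", sampling_rate=16000)
for ts in get_speech_timestamps(wav, model, sampling_rate=16000):
    # timestamps are returned in samples; convert to seconds for display
    print(f"speech from {ts['start'] / 16000:.2f}s to {ts['end'] / 16000:.2f}s")
```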
- -## Model -Select the model that Whisper will use to transcribe the audio: - -| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed | -|-----------|------------|--------------------|--------------------|---------------|----------------| -| tiny | 39 M | tiny.en | tiny | ~1 GB | ~32x | -| base | 74 M | base.en | base | ~1 GB | ~16x | -| small | 244 M | small.en | small | ~2 GB | ~6x | -| medium | 769 M | medium.en | medium | ~5 GB | ~2x | -| large | 1550 M | N/A | large | ~10 GB | 1x | -| large-v2 | 1550 M | N/A | large | ~10 GB | 1x | - -## Language - -Select the language, or leave it empty for Whisper to automatically detect it. - -Note that if the selected language and the language in the audio differ, Whisper may start to translate the audio to the selected -language. For instance, if the audio is in English but you select Japanese, the model may translate the audio to Japanese. - -## Inputs -The options "URL (YouTube, etc.)", "Upload Files" and "Microphone Input" allow you to send an audio input to the model. - -### Multiple Files -Note that the UI will only process either the given URL or the uploaded files (including microphone) - not both. - -But you can upload multiple files either through the "Upload files" option, or as a playlist on YouTube. Each audio file will then be processed in turn, and the resulting SRT/VTT/Transcript will be made available in the "Download" section. When more than one file is processed, the UI will also generate an "All_Output" zip file containing all the text output files. - -## Task -Select the task - either "transcribe" to transcribe the audio to text, or "translate" to translate it to English. - -## Vad -Using a VAD will improve the timing accuracy of each transcribed line, as well as prevent Whisper getting into an infinite -loop detecting the same sentence over and over again. The downside is that this may come at a cost to text accuracy, especially -with regards to unique words or names that appear in the audio. You can compensate for this by increasing the prompt window. - -Note that English is very well handled by Whisper, and it's less susceptible to issues surrounding bad timings and infinite loops. -So you may only need to use a VAD for other languages, such as Japanese, or when the audio is very long. - -* none - * Run whisper on the entire audio input -* silero-vad - * Use Silero VAD to detect sections that contain speech, and run Whisper independently on each section. Whisper is also run - on the gaps between each speech section, by either expanding the section up to the max merge size, or running Whisper independently - on the non-speech section. -* silero-vad-expand-into-gaps - * Use Silero VAD to detect sections that contain speech, and run Whisper independently on each section. Each speech section will be expanded - such that it covers any adjacent non-speech sections. For instance, if an audio file of one minute contains the speech sections - 00:00 - 00:10 (A) and 00:30 - 00:40 (B), the first section (A) will be expanded to 00:00 - 00:30, and (B) will be expanded to 00:30 - 01:00 (see the sketch after this list). -* silero-vad-skip-gaps - * As above, but sections that don't contain speech according to Silero will be skipped. This will be slightly faster, but - may cause dialogue to be skipped. -* periodic-vad - * Create sections of speech every 'VAD - Max Merge Size' seconds. This is very fast and simple, but will potentially break - a sentence or word in two. 
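To make the gap-expansion behaviour above concrete, here is a small illustrative sketch (not this project's actual implementation); the `expand_into_gaps` helper, the `(start, end)` second-pairs, and the 60-second example file are all made up for illustration:

```python
# Illustrative sketch of "silero-vad-expand-into-gaps": each detected speech
# section is stretched forward to cover the silence up to the next section,
# and the first section absorbs any leading silence.
def expand_into_gaps(sections, total_duration):
    expanded = []
    for i, (start, end) in enumerate(sections):
        new_start = 0.0 if i == 0 else expanded[-1][1]
        new_end = sections[i + 1][0] if i + 1 < len(sections) else total_duration
        expanded.append((new_start, new_end))
    return expanded

# The example from the list above: A=(0, 10), B=(30, 40) in a 60-second file
print(expand_into_gaps([(0, 10), (30, 40)], 60))  # [(0.0, 30), (30, 60)]
```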
- -## VAD - Merge Window -If set, any adjacent speech sections that are at most this number of seconds apart will be automatically merged. - -## VAD - Max Merge Size (s) -Adjacent speech sections will not be merged if the combined section would be longer than this number of seconds. - -## VAD - Padding (s) -The number of seconds (floating point) to add to the beginning and end of each speech section. Setting this to a number -larger than zero ensures that Whisper is more likely to correctly transcribe a sentence in the beginning of -a speech section. However, this also increases the probability of Whisper assigning the wrong timestamp -to each transcribed line. The default value is 1 second. - -## VAD - Prompt Window (s) -The text of a detected line will be included as a prompt to the next speech section, if the speech section starts at most this -number of seconds after the line has finished. For instance, if a line ends at 10:00, and the next speech section starts at -10:04, the line's text will be included if the prompt window is 4 seconds or more (10:04 - 10:00 = 4 seconds). - -Note that detected lines in gaps between speech sections will not be included in the prompt -(if silero-vad or silero-vad-expand-into-gaps is used). - -## Diarization - -If checked, Pyannote will be used to detect speakers in the audio, and label them as (SPEAKER 00), (SPEAKER 01), etc. - -This requires a HuggingFace API key to function, which can be supplied with the `--auth_token` command line option for the CLI, -set in the `config.json5` file for the GUI, or provided via the `HF_ACCESS_TOKEN` environment variable. - -## Diarization - Speakers - -The number of speakers to detect. If set to 0, Pyannote will attempt to detect the number of speakers automatically. - -# Command Line Options - -Both `app.py` and `cli.py` also accept command line options, such as the ability to enable parallel execution on multiple -CPU/GPU cores, the default model name/VAD and so on. Consult the README in the root folder for more information. - -# Additional Options - -In addition to the above, there's also a "Full" options interface that allows you to set all the options available in the Whisper -model. The options are as follows: - -## Initial Prompt -Optional text to provide as a prompt for the first 30-second window. Whisper will attempt to use this as a starting point for the transcription, but you can -also get creative and specify a style or format for the output of the transcription. - -For instance, if you use the prompt "hello how is it going always use lowercase no punctuation goodbye one two three start stop i you me they", Whisper will -be biased to output lowercase letters and no punctuation, and may also be biased to output the words in the prompt more often. - -## Temperature -The temperature to use when sampling. Default is 0 (zero). A higher temperature will result in more random output, while a lower temperature will be more deterministic. - -## Best Of - Non-zero temperature -The number of candidates to sample from when sampling with non-zero temperature. Default is 5. - -## Beam Size - Zero temperature -The number of beams to use in beam search when sampling with zero temperature. Default is 5. - -## Patience - Zero temperature -The patience value to use in beam search when sampling with zero temperature. As in https://arxiv.org/abs/2204.05424, the default (1.0) is equivalent to conventional beam search. - -## Length Penalty - Any temperature -The token length penalty coefficient (alpha) to use when sampling with any temperature. 
As in https://arxiv.org/abs/1609.08144, uses simple length normalization by default. - -## Suppress Tokens - Comma-separated list of token IDs -A comma-separated list of token IDs to suppress during sampling. The default value of "-1" will suppress most special characters except common punctuations. - -## Condition on previous text -If True, provide the previous output of the model as a prompt for the next window. Disabling this may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop. - -## FP16 -Whether to perform inference in fp16. True by default. - -## Temperature increment on fallback -The temperature to increase when falling back when the decoding fails to meet either of the thresholds below. Default is 0.2. - -## Compression ratio threshold -If the gzip compression ratio is higher than this value, treat the decoding as failed. Default is 2.4. - -## Logprob threshold -If the average log probability is lower than this value, treat the decoding as failed. Default is -1.0. - -## No speech threshold -If the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence. Default is 0.6. - -## Diarization - Min Speakers - -The minimum number of speakers for Pyannote to detect. - -## Diarization - Max Speakers - -The maximum number of speakers for Pyannote to detect. \ No newline at end of file diff --git "a/spaces/awacke1/CardWriterPro/pages/6_\360\237\224\254_Model_Evaluation.py" "b/spaces/awacke1/CardWriterPro/pages/6_\360\237\224\254_Model_Evaluation.py" deleted file mode 100644 index e3c4926a814f9f34980b77b5d8dc4277fd272d7e..0000000000000000000000000000000000000000 --- "a/spaces/awacke1/CardWriterPro/pages/6_\360\237\224\254_Model_Evaluation.py" +++ /dev/null @@ -1,66 +0,0 @@ -import streamlit as st -from persist import persist, load_widget_state -from pathlib import Path - -from middleMan import apply_view,writingPrompt - -global variable_output - -def main(): - cs_body() - - -def cs_body(): - - #stateVariable = 'Model_Eval' - #help_text ='Detail the Evaluation Results for this model' - #col1.header('Model Evaluation') - st.markdown('# Evaluation') - st.text_area(" This section describes the evaluation protocols and provides the results. ",help="Detail the Evaluation Results for this model") - st.markdown('## Testing Data, Factors & Metrics:') - left, right = st.columns([2,4]) - - #st.markdown('### Model Description') - - - with left: - st.write("\n") - st.write("\n") - st.markdown('#### Testing Data:') - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - #st.write("\n") - st.markdown('#### Factors:') - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.markdown('#### Metrics:') - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.markdown('#### Results:') - - with right: - #soutput_jinja = parse_into_jinja_markdown() - st.text_area("", help="Ideally this links to a Dataset Card.",key=persist("Testing_Data")) - #st.write("\n") - st.text_area("",help="What are the foreseeable characteristics that will influence how the model behaves? 
This includes domain and context, as well as population subgroups.",key=persist("Factors")) - st.text_area("", help="What metrics will be used for evaluation in light of tradeoffs between different errors?", key=persist("Metrics")) - st.text_area("", key=persist("Model_Results")) - - - - - -if __name__ == '__main__': - load_widget_state() - main() \ No newline at end of file diff --git a/spaces/awacke1/CodeParrot-Copilot-Alternative/app.py b/spaces/awacke1/CodeParrot-Copilot-Alternative/app.py deleted file mode 100644 index d662e046ef498c0c8db358bb3ef41ef8ba20394b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CodeParrot-Copilot-Alternative/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/codeparrot/codeparrot").launch() \ No newline at end of file diff --git a/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/app.py b/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/app.py deleted file mode 100644 index 5899945b09b198f95f85cbf06c9dc67124d211c7..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/app.py +++ /dev/null @@ -1,81 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd -import plotly.graph_objects as go -from datetime import datetime -from base64 import b64encode - -# Define general functions -FOOD_LIST = {4: "🍔", 6: "🍟", 8: "🌮", 10: "🍕", 12: "🍩", 20: "🥗", 50: "🍣", 100: "🍾"} - -def roll_dice(num_rolls, dice_type): - rolls = np.random.randint(1, dice_type + 1, size=num_rolls) - return rolls - -def plot_tokens(health_tokens, coin_tokens): - fig = go.Figure() - fig.add_trace(go.Sankey( - node = { - "label": ["Health", "Coins"] + [FOOD_LIST[i] for i in DICE_TYPES], - "pad": 15 - }, - link = { - "source": [0, 1] + list(range(2, len(DICE_TYPES) + 2)), - "target": [2] * len(DICE_TYPES) + [3 + i for i in range(len(DICE_TYPES))], - "value": health_tokens + coin_tokens - }, - )) - st.plotly_chart(fig) - -# Define Streamlit app -st.set_page_config(page_title="🍔🍟 Emojitrition 🌮🍕", page_icon=":game_die:") -st.title("🍔🍟 Emojitrition 🌮🍕") - -# Sidebar -username = st.sidebar.text_input("👤 Enter your username:") -num_rolls = st.sidebar.slider("🔢 Choose the number of rolls:", 1, 100, 3) - -# Main content -DICE_TYPES = [4, 6, 8, 10, 12, 20, 50, 100] -history = {"health_tokens": [0], "coin_tokens": [0]} - -for dice_type in DICE_TYPES: - rolls = roll_dice(num_rolls, dice_type) - highest_rolls = sum(roll == dice_type for roll in rolls) - coin_tokens_added = 0 - - dice_results = [f"{FOOD_LIST[dice_type]} {roll}" for roll in rolls] - st.write(f"🎰 Results for {dice_type}-sided slot machine: {' | '.join(dice_results)}") - - for roll in rolls: - if roll == dice_type: - st.write(f"🎉 Congratulations! You got the {FOOD_LIST[dice_type]} jackpot! 💰 Adding 3 coins.") - coin_tokens_added += 3 - if roll == max(rolls): - st.write(f"🎉 Congratulations! You got the {FOOD_LIST[dice_type]} maximum value! 
💖 Adding 10 health tokens.") - if dice_type == 100: - history["health_tokens"].append(history["health_tokens"][-1] + 10) - - history[f"{dice_type}-sided slot machine jackpots"] = highest_rolls - history["roll_history"] = {**history.get("roll_history", {}), dice_type: rolls} - history["coin_tokens"].append(history["coin_tokens"][-1] + coin_tokens_added) - -plot_tokens(history["health_tokens"], history["coin_tokens"]) - -df = pd.concat([pd.DataFrame(history["roll_history"]), pd.DataFrame(history["health_tokens"], columns=["Health Tokens"]), pd.DataFrame(history["coin_tokens"], columns=["Coin Tokens"])], axis=1) - -timestamp = datetime.now().strftime("%m-%d-%Y-%H-%M-%S") -filename = f"{username}_{timestamp}.csv" -df.to_csv(filename, index=False) -st.markdown(f'Download CSV File', unsafe_allow_html=True) - -st.markdown(""" - -📣 Introducing Emojitrition - the fun and easy way to track your nutrition! 🍔🍟🌮🍕🍩🥗🍣🍾 -👉 Sick of boring nutrition tracking apps? Emojitrition is here to spice things up! 🌶️ -👉 Our app uses food nutrition emojis to make tracking your meals easy and fun. 🍴 -👉 Whether you're making healthy choices with 🥗 or indulging in some 🍩, Emojitrition makes it easy to see how your meals add up. -👉 Download Emojitrition today and start making more informed choices for your health and well-being! 📲 -👉 It's time to ditch the boring old numbers and words and embrace the world of nutrition emojis! 🙌 - -""") \ No newline at end of file diff --git a/spaces/awacke1/StreamlitMultiplayerTicTacToe/README.md b/spaces/awacke1/StreamlitMultiplayerTicTacToe/README.md deleted file mode 100644 index 7c7594e084a96d64f0e528f40dad979175e34c8b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitMultiplayerTicTacToe/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: StreamlitMultiplayerTicTacToe -emoji: ⚡ -colorFrom: green -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/badayvedat/AudioSep/pipeline.py b/spaces/badayvedat/AudioSep/pipeline.py deleted file mode 100644 index ca10a2ba413c13a3fb54214d68e11bdf78dffbd2..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/pipeline.py +++ /dev/null @@ -1,69 +0,0 @@ -import yaml -from typing import Dict, List -import torch -import torch.nn as nn -import numpy as np -import librosa -from scipy.io.wavfile import write -from utils import ignore_warnings; ignore_warnings() -from utils import parse_yaml, load_ss_model -from models.clap_encoder import CLAP_Encoder - - -def build_audiosep(config_yaml, checkpoint_path, device): - configs = parse_yaml(config_yaml) - - query_encoder = CLAP_Encoder().eval() - model = load_ss_model( - configs=configs, - checkpoint_path=checkpoint_path, - query_encoder=query_encoder - ).eval().to(device) - - print(f'Load AudioSep model from [{checkpoint_path}]') - return model - - -def inference(model, audio_file, text, output_file, device='cuda'): - print(f'Separate audio from [{audio_file}] with textual query [{text}]') - mixture, fs = librosa.load(audio_file, sr=32000, mono=True) - with torch.no_grad(): - text = [text] - - conditions = model.query_encoder.get_query_embed( - modality='text', - text=text, - device=device - ) - - input_dict = { - "mixture": torch.Tensor(mixture)[None, None, :].to(device), - "condition": conditions, - } - - sep_segment = model.ss_model(input_dict)["waveform"] - - sep_segment = 
sep_segment.squeeze(0).squeeze(0).data.cpu().numpy() - - write(output_file, 32000, np.round(sep_segment * 32767).astype(np.int16)) - print(f'Write separated audio to [{output_file}]') - - -if __name__ == '__main__': - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - model = build_audiosep( - config_yaml='config/audiosep_base.yaml', - checkpoint_path='checkpoint/step=3920000.ckpt', - device=device) - - audio_file = '/mnt/bn/data-xubo/project/AudioShop/YT_audios/Y3VHpLxtd498.wav' - text = 'pigeons are cooing in the background' - output_file='separated_audio.wav' - - inference(model, audio_file, text, output_file, device) - - - - - - diff --git a/spaces/bai54188/BingAI3.0/README.md b/spaces/bai54188/BingAI3.0/README.md deleted file mode 100644 index 3247bbae01467f0bfcce05601eb1f9fac8b394a5..0000000000000000000000000000000000000000 --- a/spaces/bai54188/BingAI3.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BingAI3.0 -emoji: 📈 -colorFrom: pink -colorTo: pink -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/src/math/Box3.js b/spaces/banana-projects/web3d/node_modules/three/src/math/Box3.js deleted file mode 100644 index 304defa3f7b5cfcc1c0af639e54d259d117f60e0..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/math/Box3.js +++ /dev/null @@ -1,615 +0,0 @@ -import { Vector3 } from './Vector3.js'; - -/** - * @author bhouston / http://clara.io - * @author WestLangley / http://github.com/WestLangley - */ - -function Box3( min, max ) { - - this.min = ( min !== undefined ) ? min : new Vector3( + Infinity, + Infinity, + Infinity ); - this.max = ( max !== undefined ) ? 
max : new Vector3( - Infinity, - Infinity, - Infinity ); - -} - -Object.assign( Box3.prototype, { - - isBox3: true, - - set: function ( min, max ) { - - this.min.copy( min ); - this.max.copy( max ); - - return this; - - }, - - setFromArray: function ( array ) { - - var minX = + Infinity; - var minY = + Infinity; - var minZ = + Infinity; - - var maxX = - Infinity; - var maxY = - Infinity; - var maxZ = - Infinity; - - for ( var i = 0, l = array.length; i < l; i += 3 ) { - - var x = array[ i ]; - var y = array[ i + 1 ]; - var z = array[ i + 2 ]; - - if ( x < minX ) minX = x; - if ( y < minY ) minY = y; - if ( z < minZ ) minZ = z; - - if ( x > maxX ) maxX = x; - if ( y > maxY ) maxY = y; - if ( z > maxZ ) maxZ = z; - - } - - this.min.set( minX, minY, minZ ); - this.max.set( maxX, maxY, maxZ ); - - return this; - - }, - - setFromBufferAttribute: function ( attribute ) { - - var minX = + Infinity; - var minY = + Infinity; - var minZ = + Infinity; - - var maxX = - Infinity; - var maxY = - Infinity; - var maxZ = - Infinity; - - for ( var i = 0, l = attribute.count; i < l; i ++ ) { - - var x = attribute.getX( i ); - var y = attribute.getY( i ); - var z = attribute.getZ( i ); - - if ( x < minX ) minX = x; - if ( y < minY ) minY = y; - if ( z < minZ ) minZ = z; - - if ( x > maxX ) maxX = x; - if ( y > maxY ) maxY = y; - if ( z > maxZ ) maxZ = z; - - } - - this.min.set( minX, minY, minZ ); - this.max.set( maxX, maxY, maxZ ); - - return this; - - }, - - setFromPoints: function ( points ) { - - this.makeEmpty(); - - for ( var i = 0, il = points.length; i < il; i ++ ) { - - this.expandByPoint( points[ i ] ); - - } - - return this; - - }, - - setFromCenterAndSize: function () { - - var v1 = new Vector3(); - - return function setFromCenterAndSize( center, size ) { - - var halfSize = v1.copy( size ).multiplyScalar( 0.5 ); - - this.min.copy( center ).sub( halfSize ); - this.max.copy( center ).add( halfSize ); - - return this; - - }; - - }(), - - setFromObject: function ( object ) { - - this.makeEmpty(); - - return this.expandByObject( object ); - - }, - - clone: function () { - - return new this.constructor().copy( this ); - - }, - - copy: function ( box ) { - - this.min.copy( box.min ); - this.max.copy( box.max ); - - return this; - - }, - - makeEmpty: function () { - - this.min.x = this.min.y = this.min.z = + Infinity; - this.max.x = this.max.y = this.max.z = - Infinity; - - return this; - - }, - - isEmpty: function () { - - // this is a more robust check for empty than ( volume <= 0 ) because volume can get positive with two negative axes - - return ( this.max.x < this.min.x ) || ( this.max.y < this.min.y ) || ( this.max.z < this.min.z ); - - }, - - getCenter: function ( target ) { - - if ( target === undefined ) { - - console.warn( 'THREE.Box3: .getCenter() target is now required' ); - target = new Vector3(); - - } - - return this.isEmpty() ? target.set( 0, 0, 0 ) : target.addVectors( this.min, this.max ).multiplyScalar( 0.5 ); - - }, - - getSize: function ( target ) { - - if ( target === undefined ) { - - console.warn( 'THREE.Box3: .getSize() target is now required' ); - target = new Vector3(); - - } - - return this.isEmpty() ? 
target.set( 0, 0, 0 ) : target.subVectors( this.max, this.min ); - - }, - - expandByPoint: function ( point ) { - - this.min.min( point ); - this.max.max( point ); - - return this; - - }, - - expandByVector: function ( vector ) { - - this.min.sub( vector ); - this.max.add( vector ); - - return this; - - }, - - expandByScalar: function ( scalar ) { - - this.min.addScalar( - scalar ); - this.max.addScalar( scalar ); - - return this; - - }, - - expandByObject: function () { - - // Computes the world-axis-aligned bounding box of an object (including its children), - // accounting for both the object's, and children's, world transforms - - var scope, i, l; - - var v1 = new Vector3(); - - function traverse( node ) { - - var geometry = node.geometry; - - if ( geometry !== undefined ) { - - if ( geometry.isGeometry ) { - - var vertices = geometry.vertices; - - for ( i = 0, l = vertices.length; i < l; i ++ ) { - - v1.copy( vertices[ i ] ); - v1.applyMatrix4( node.matrixWorld ); - - scope.expandByPoint( v1 ); - - } - - } else if ( geometry.isBufferGeometry ) { - - var attribute = geometry.attributes.position; - - if ( attribute !== undefined ) { - - for ( i = 0, l = attribute.count; i < l; i ++ ) { - - v1.fromBufferAttribute( attribute, i ).applyMatrix4( node.matrixWorld ); - - scope.expandByPoint( v1 ); - - } - - } - - } - - } - - } - - return function expandByObject( object ) { - - scope = this; - - object.updateMatrixWorld( true ); - - object.traverse( traverse ); - - return this; - - }; - - }(), - - containsPoint: function ( point ) { - - return point.x < this.min.x || point.x > this.max.x || - point.y < this.min.y || point.y > this.max.y || - point.z < this.min.z || point.z > this.max.z ? false : true; - - }, - - containsBox: function ( box ) { - - return this.min.x <= box.min.x && box.max.x <= this.max.x && - this.min.y <= box.min.y && box.max.y <= this.max.y && - this.min.z <= box.min.z && box.max.z <= this.max.z; - - }, - - getParameter: function ( point, target ) { - - // This can potentially have a divide by zero if the box - // has a size dimension of 0. - - if ( target === undefined ) { - - console.warn( 'THREE.Box3: .getParameter() target is now required' ); - target = new Vector3(); - - } - - return target.set( - ( point.x - this.min.x ) / ( this.max.x - this.min.x ), - ( point.y - this.min.y ) / ( this.max.y - this.min.y ), - ( point.z - this.min.z ) / ( this.max.z - this.min.z ) - ); - - }, - - intersectsBox: function ( box ) { - - // using 6 splitting planes to rule out intersections. - return box.max.x < this.min.x || box.min.x > this.max.x || - box.max.y < this.min.y || box.min.y > this.max.y || - box.max.z < this.min.z || box.min.z > this.max.z ? false : true; - - }, - - intersectsSphere: ( function () { - - var closestPoint = new Vector3(); - - return function intersectsSphere( sphere ) { - - // Find the point on the AABB closest to the sphere center. - this.clampPoint( sphere.center, closestPoint ); - - // If that point is inside the sphere, the AABB and sphere intersect. - return closestPoint.distanceToSquared( sphere.center ) <= ( sphere.radius * sphere.radius ); - - }; - - } )(), - - intersectsPlane: function ( plane ) { - - // We compute the minimum and maximum dot product values. If those values - // are on the same side (back or front) of the plane, then there is no intersection. 
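- // For an axis-aligned box, the extreme values of dot( normal, corner ) are obtained per axis by picking the min or max corner coordinate according to the sign of that normal component; comparing both extremes against - plane.constant then tells us which side of the plane the box lies on.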
- - var min, max; - - if ( plane.normal.x > 0 ) { - - min = plane.normal.x * this.min.x; - max = plane.normal.x * this.max.x; - - } else { - - min = plane.normal.x * this.max.x; - max = plane.normal.x * this.min.x; - - } - - if ( plane.normal.y > 0 ) { - - min += plane.normal.y * this.min.y; - max += plane.normal.y * this.max.y; - - } else { - - min += plane.normal.y * this.max.y; - max += plane.normal.y * this.min.y; - - } - - if ( plane.normal.z > 0 ) { - - min += plane.normal.z * this.min.z; - max += plane.normal.z * this.max.z; - - } else { - - min += plane.normal.z * this.max.z; - max += plane.normal.z * this.min.z; - - } - - return ( min <= - plane.constant && max >= - plane.constant ); - - }, - - intersectsTriangle: ( function () { - - // triangle centered vertices - var v0 = new Vector3(); - var v1 = new Vector3(); - var v2 = new Vector3(); - - // triangle edge vectors - var f0 = new Vector3(); - var f1 = new Vector3(); - var f2 = new Vector3(); - - var testAxis = new Vector3(); - - var center = new Vector3(); - var extents = new Vector3(); - - var triangleNormal = new Vector3(); - - function satForAxes( axes ) { - - var i, j; - - for ( i = 0, j = axes.length - 3; i <= j; i += 3 ) { - - testAxis.fromArray( axes, i ); - // project the aabb onto the seperating axis - var r = extents.x * Math.abs( testAxis.x ) + extents.y * Math.abs( testAxis.y ) + extents.z * Math.abs( testAxis.z ); - // project all 3 vertices of the triangle onto the seperating axis - var p0 = v0.dot( testAxis ); - var p1 = v1.dot( testAxis ); - var p2 = v2.dot( testAxis ); - // actual test, basically see if either of the most extreme of the triangle points intersects r - if ( Math.max( - Math.max( p0, p1, p2 ), Math.min( p0, p1, p2 ) ) > r ) { - - // points of the projected triangle are outside the projected half-length of the aabb - // the axis is seperating and we can exit - return false; - - } - - } - - return true; - - } - - return function intersectsTriangle( triangle ) { - - if ( this.isEmpty() ) { - - return false; - - } - - // compute box center and extents - this.getCenter( center ); - extents.subVectors( this.max, center ); - - // translate triangle to aabb origin - v0.subVectors( triangle.a, center ); - v1.subVectors( triangle.b, center ); - v2.subVectors( triangle.c, center ); - - // compute edge vectors for triangle - f0.subVectors( v1, v0 ); - f1.subVectors( v2, v1 ); - f2.subVectors( v0, v2 ); - - // test against axes that are given by cross product combinations of the edges of the triangle and the edges of the aabb - // make an axis testing of each of the 3 sides of the aabb against each of the 3 sides of the triangle = 9 axis of separation - // axis_ij = u_i x f_j (u0, u1, u2 = face normals of aabb = x,y,z axes vectors since aabb is axis aligned) - var axes = [ - 0, - f0.z, f0.y, 0, - f1.z, f1.y, 0, - f2.z, f2.y, - f0.z, 0, - f0.x, f1.z, 0, - f1.x, f2.z, 0, - f2.x, - - f0.y, f0.x, 0, - f1.y, f1.x, 0, - f2.y, f2.x, 0 - ]; - if ( ! satForAxes( axes ) ) { - - return false; - - } - - // test 3 face normals from the aabb - axes = [ 1, 0, 0, 0, 1, 0, 0, 0, 1 ]; - if ( ! 
satForAxes( axes ) ) { - - return false; - - } - - // finally testing the face normal of the triangle - // use already existing triangle edge vectors here - triangleNormal.crossVectors( f0, f1 ); - axes = [ triangleNormal.x, triangleNormal.y, triangleNormal.z ]; - return satForAxes( axes ); - - }; - - } )(), - - clampPoint: function ( point, target ) { - - if ( target === undefined ) { - - console.warn( 'THREE.Box3: .clampPoint() target is now required' ); - target = new Vector3(); - - } - - return target.copy( point ).clamp( this.min, this.max ); - - }, - - distanceToPoint: function () { - - var v1 = new Vector3(); - - return function distanceToPoint( point ) { - - var clampedPoint = v1.copy( point ).clamp( this.min, this.max ); - return clampedPoint.sub( point ).length(); - - }; - - }(), - - getBoundingSphere: function () { - - var v1 = new Vector3(); - - return function getBoundingSphere( target ) { - - if ( target === undefined ) { - - console.error( 'THREE.Box3: .getBoundingSphere() target is now required' ); - //target = new Sphere(); // removed to avoid cyclic dependency - - } - - this.getCenter( target.center ); - - target.radius = this.getSize( v1 ).length() * 0.5; - - return target; - - }; - - }(), - - intersect: function ( box ) { - - this.min.max( box.min ); - this.max.min( box.max ); - - // ensure that if there is no overlap, the result is fully empty, not slightly empty with non-inf/+inf values that will cause subsequence intersects to erroneously return valid values. - if ( this.isEmpty() ) this.makeEmpty(); - - return this; - - }, - - union: function ( box ) { - - this.min.min( box.min ); - this.max.max( box.max ); - - return this; - - }, - - applyMatrix4: function () { - - var points = [ - new Vector3(), - new Vector3(), - new Vector3(), - new Vector3(), - new Vector3(), - new Vector3(), - new Vector3(), - new Vector3() - ]; - - return function applyMatrix4( matrix ) { - - // transform of empty box is an empty box. 
- if ( this.isEmpty() ) return this; - - // NOTE: I am using a binary pattern to specify all 2^3 combinations below - points[ 0 ].set( this.min.x, this.min.y, this.min.z ).applyMatrix4( matrix ); // 000 - points[ 1 ].set( this.min.x, this.min.y, this.max.z ).applyMatrix4( matrix ); // 001 - points[ 2 ].set( this.min.x, this.max.y, this.min.z ).applyMatrix4( matrix ); // 010 - points[ 3 ].set( this.min.x, this.max.y, this.max.z ).applyMatrix4( matrix ); // 011 - points[ 4 ].set( this.max.x, this.min.y, this.min.z ).applyMatrix4( matrix ); // 100 - points[ 5 ].set( this.max.x, this.min.y, this.max.z ).applyMatrix4( matrix ); // 101 - points[ 6 ].set( this.max.x, this.max.y, this.min.z ).applyMatrix4( matrix ); // 110 - points[ 7 ].set( this.max.x, this.max.y, this.max.z ).applyMatrix4( matrix ); // 111 - - this.setFromPoints( points ); - - return this; - - }; - - }(), - - translate: function ( offset ) { - - this.min.add( offset ); - this.max.add( offset ); - - return this; - - }, - - equals: function ( box ) { - - return box.min.equals( this.min ) && box.max.equals( this.max ); - - } - -} ); - - -export { Box3 }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/meshmatcap_frag.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/meshmatcap_frag.glsl.js deleted file mode 100644 index b1e4bbefb655f41970d58a9f00d5ce7c1550c1f2..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/meshmatcap_frag.glsl.js +++ /dev/null @@ -1,66 +0,0 @@ -export default /* glsl */` -#define MATCAP - -uniform vec3 diffuse; -uniform float opacity; -uniform sampler2D matcap; - -varying vec3 vViewPosition; - -#ifndef FLAT_SHADED - - varying vec3 vNormal; - -#endif - -#include -#include -#include -#include - -#include -#include -#include -#include -#include - -void main() { - - #include - - vec4 diffuseColor = vec4( diffuse, opacity ); - - #include - #include - #include - #include - #include - #include - - vec3 viewDir = normalize( vViewPosition ); - vec3 x = normalize( vec3( viewDir.z, 0.0, - viewDir.x ) ); - vec3 y = cross( viewDir, x ); - vec2 uv = vec2( dot( x, normal ), dot( y, normal ) ) * 0.495 + 0.5; // 0.495 to remove artifacts caused by undersized matcap disks - - #ifdef USE_MATCAP - - vec4 matcapColor = texture2D( matcap, uv ); - matcapColor = matcapTexelToLinear( matcapColor ); - - #else - - vec4 matcapColor = vec4( 1.0 ); - - #endif - - vec3 outgoingLight = diffuseColor.rgb * matcapColor.rgb; - - gl_FragColor = vec4( outgoingLight, diffuseColor.a ); - - #include - #include - #include - #include - -} -`; diff --git a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/__main__.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/__main__.py deleted file mode 100644 index c06ecc2273e7b01036114f6277c32852fcaeb377..0000000000000000000000000000000000000000 --- a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/__main__.py +++ /dev/null @@ -1,17 +0,0 @@ -import fire -from configue import load - - -class CLI: - """Regroup all the commands of the CLI - """ - - @staticmethod - def run(config_path: str) -> None: - config = load(config_path) - command = config["command"] - command.run() - - -if __name__ == "__main__": - fire.Fire(CLI) diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001316.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001316.py deleted file mode 100644 index 
0a38d76ce2ad23d2334dcc1d23d9094842aa1493..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001316.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_faces[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

      Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

      visitor badge
      " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/bigPear/digitalWDF/tests/convert_comparison.py b/spaces/bigPear/digitalWDF/tests/convert_comparison.py deleted file mode 100644 index c77e6fbb3e22828b6735590b2fc2a4faec6e9b0b..0000000000000000000000000000000000000000 --- a/spaces/bigPear/digitalWDF/tests/convert_comparison.py +++ /dev/null @@ -1,32 +0,0 @@ -# coding=utf-8 - -import json - - -if __name__ == "__main__": - - dataset = [] - - with open("comparison_data_v2.json", "r", encoding="utf-8") as f: - data = json.load(f) - - for example in data: - instruction = example["user_input"] - resp_with_score = [(float(resp["score"]), resp["response"]) for resp in example["responses_and_scores"]] - resp_with_score.sort() - - while len(resp_with_score[0][1]) == 0: - resp_with_score.pop(0) - if len(resp_with_score) == 0: - continue - - min_score, max_score = resp_with_score[0][0], resp_with_score[-1][0] - if min_score < 5.0 and max_score > 5.0: - dataset.append({ - "instruction": instruction, - "input": "", - "output": [resp_with_score[-1][1], resp_with_score[0][1]] - }) - - with open("comparison_gpt4_data_en.json", "w", encoding="utf-8", newline="\n") as f: - json.dump(dataset, f, indent=2, ensure_ascii=False) diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/ui_tempdir.py b/spaces/bigjoker/stable-diffusion-webui/modules/ui_tempdir.py deleted file mode 100644 index 126f73a21d71070887fd094beaf0fe6d7e12df9c..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/ui_tempdir.py +++ /dev/null @@ -1,82 +0,0 @@ -import os -import tempfile -from collections import namedtuple -from pathlib import Path - -import gradio as gr - -from PIL import PngImagePlugin - -from modules import shared - - -Savedfile = namedtuple("Savedfile", ["name"]) - - -def register_tmp_file(gradio, filename): - if hasattr(gradio, 'temp_file_sets'): # gradio 3.15 - gradio.temp_file_sets[0] = gradio.temp_file_sets[0] | {os.path.abspath(filename)} - - if hasattr(gradio, 'temp_dirs'): # gradio 3.9 - gradio.temp_dirs = gradio.temp_dirs | {os.path.abspath(os.path.dirname(filename))} - - -def check_tmp_file(gradio, filename): - if hasattr(gradio, 'temp_file_sets'): - return any([filename in fileset for fileset in gradio.temp_file_sets]) - - if hasattr(gradio, 'temp_dirs'): - return any(Path(temp_dir).resolve() in Path(filename).resolve().parents for temp_dir in gradio.temp_dirs) - - return False - - -def save_pil_to_file(pil_image, dir=None): - already_saved_as = getattr(pil_image, 'already_saved_as', None) - if already_saved_as and os.path.isfile(already_saved_as): - register_tmp_file(shared.demo, already_saved_as) - - file_obj = Savedfile(already_saved_as) - return file_obj - - if shared.opts.temp_dir != "": - dir = shared.opts.temp_dir - - use_metadata = False - metadata = PngImagePlugin.PngInfo() - for key, value in pil_image.info.items(): - if isinstance(key, str) and isinstance(value, str): - metadata.add_text(key, value) - use_metadata = True - - file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=".png", dir=dir) - pil_image.save(file_obj, pnginfo=(metadata if use_metadata else None)) - return file_obj - - -# override save to 
file function so that it also writes PNG info -gr.processing_utils.save_pil_to_file = save_pil_to_file - - -def on_tmpdir_changed(): - if shared.opts.temp_dir == "" or shared.demo is None: - return - - os.makedirs(shared.opts.temp_dir, exist_ok=True) - - register_tmp_file(shared.demo, os.path.join(shared.opts.temp_dir, "x")) - - -def cleanup_tmpdr(): - temp_dir = shared.opts.temp_dir - if temp_dir == "" or not os.path.isdir(temp_dir): - return - - for root, dirs, files in os.walk(temp_dir, topdown=False): - for name in files: - _, extension = os.path.splitext(name) - if extension != ".png": - continue - - filename = os.path.join(root, name) - os.remove(filename) diff --git a/spaces/bioriAsaeru/text-to-voice/Dod Sturmbot Download BEST.md b/spaces/bioriAsaeru/text-to-voice/Dod Sturmbot Download BEST.md deleted file mode 100644 index 24020fa99d34d4935117f9f4958276616be7f1c8..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Dod Sturmbot Download BEST.md +++ /dev/null @@ -1,35 +0,0 @@ - -

      2019 is nearly over, and with the help of Martee and others, plus a few donations, I have almost reached what I set out to do with the target of upgrading this website, dodbits.com and the Sturmbot package: upgrading most of the important and badly broken waypoints to keep Sturmbot alive for at least two more years.

      -

      dod sturmbot download


      Download Filehttps://urloso.com/2uyOpb



      -

      The updated main downloads will also include Rich Nagel's updated Sturmbot .dll file, SturmBot v1.7 "Stosstruppe" and "Scharfschütze" Bot Class Fix (SturmBot v1.9), which complements the updated Sturmbot package and kills off a lot of annoying bugs.

      -

      All of this is down to me neglecting this spin-off site of dodbits.com. It started in July 2019, when I decided not to close dodbits.com and this site, and instead to go after the bugs the site and the downloads in it were holding. Sturmbot.org is one of the last download sites still around, so it deserves the attention.

      -

      That effort on the "unofficial" manual is also changing the downloads: as I go, I am adding single waypoint downloads for each map, and at the end of the review I will fix the ones that don't work and make my own 2019 pack. You can see that new download section here. About the only one left to fix is dod_caen. Martee is currently making waypoints, and some of his were that good... they beat some of the official waypoints.

      -

      -

      Other items are Sturmbot downloads for older versions. There is also a guide on what version of dod to use with what version of Sturmbot. It isn't 100% finished; I am still getting files for the Linux versions.

      -

      There are many facets to the program: you can download a simple-to-use installer here on this site, ready to play with 600+ maps (600 waypoint files in the install, not maps), and also find some hard-to-get custom files for Day of Defeat as well.

      -

      2. INsane's Sturmbot Menus. I have always provided custom menus in these packages; the installer has a menu that helps you play Sturmbot. It also has controls for video, audio, netcode, netgraph, chat and more.

      -

      UPDATE June 2013: For those of us who want a quick installer package that works with SturmBOT and Steam dod 1.3 (the SteamPipe update), stop reading and go to that link. Also download a nice top-22 map pack via an installer or zip; these two packages take just minutes to set up, with no frigging around with files or reading endless hard-to-get info. There is a special menu in the package, so starting SturmBOT on the latest Steam dod 1.3 is the easy way!

      -

      game "Day of Defeat"
      url_info " "
      url_dl ""
      version "1.3"
      size "5"
      svonly "0"
      cldll "1"
      secure "1"
      hlversion "1110"
      type "multiplayer_only"
      nomodels "1"
      nohimodel "1"
      mpentity "info_player_allies"
      //gamedll "dlls\dod.dll"
      //gamedll_linux "dlls/dod_i386.so"
      gamedll "sturmbot\dlls\STURM_bot.dll"
      gamedll_linux "sturmbot/dlls/sturmbot-i486.so"

      -
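
      If you would rather script that liblist.gam edit than do it by hand, here is a minimal Python sketch of the same change. It is an illustration only, not part of the Sturmbot package: the install path below is an assumption, so point it at your own dod folder, and keep the backup it writes in case something goes wrong.

      # patch_liblist.py - hedged sketch: comments out the stock gamedll lines
      # in liblist.gam and appends the Sturmbot ones shown above.
      from pathlib import Path

      # Hypothetical install location - change this to your own Steam dod folder.
      dod = Path(r"C:\Program Files (x86)\Steam\steamapps\common\Half-Life\dod")
      liblist = dod / "liblist.gam"
      liblist.with_name("liblist.gam.bak").write_text(liblist.read_text())  # backup first

      lines = []
      for line in liblist.read_text().splitlines():
          # disable the stock game DLL entries so the bot DLL loads instead
          if line.strip().startswith(("gamedll ", "gamedll_linux ")):
              line = "//" + line
          lines.append(line)
      lines += ['gamedll "sturmbot\\dlls\\STURM_bot.dll"',
                'gamedll_linux "sturmbot/dlls/sturmbot-i486.so"']
      liblist.write_text("\n".join(lines) + "\n")

      -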

      Edit, March 2012 update: I have recently made an auto-install package. Download this for dod Steam 1.3 using SturmBot 1.7b and skip the manual install section, then go to this section and read from there to play and waypoint.

      -

      2. Don't get fooled here: the day of defeat folder may sound right, but you want the "dod" folder. Put both files of the download in the dod folder... commandmenu.txt and SturmBOT_Menu.cfg.

      -

      I have set some binds in the SturmBOT_Menu.cfg file. You might like to change them? Just open SturmBOT_Menu.cfg up like you did the config.cfg and change the bind keys to your liking. If you downloaded the installer package, here is what they are set at...

      -
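
      For anyone who has never edited binds before, they are one command per line in the .cfg file. The key choices and command names below are only an illustration (I am not quoting the real SturmBOT_Menu.cfg here); open your own file to see the actual names before changing anything.

      // hypothetical examples only - use the command names from your own SturmBOT_Menu.cfg
      bind "F5" "exec sturmbot_menu.cfg"
      bind "F6" "addbot"
      bind "F7" "kickbot"

      -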

      A word on the RCBot2 site: I noticed my Malwarebytes real-time monitoring software has it blacklisted as a bad site. No idea why, as no downloads from there are bad and the community is fine. I just made it exempt, and it looks OK to me; not sure why it's blacklisted. I think it's because it's related to the Bots United domain, which also had a problem; I certainly had no issues with them over the last 10 years either.

      -

      Since I released an installer for RCBot2 (installs for dod:s only) and added support for map downloads on 30 July 2019, there have been 3 new waypoints made that are not yet in the main installer from RCBot2 or my dod:s-only installer.

      -

      It's been a year... I had better update. dodbits.com is funded for another year (costs are about $300 per year, by the way), so dodbits.com and sturmbot.org (bot for GoldSource HL1 dod) will hang around until at least November 2018.

      -

      There are many facets to the program: you can download a simple-to-use installer on the site, ready to play with 600+ maps (600 waypoint files in the install, not maps), and also find some hard-to-get custom files for Day of Defeat as well, like my map pack for 22 of the best maps ever made.

      -

      Aghhhh, damn you SteamPipe! The articles on this site have been hit with errors because of the new SteamPipe formats. Just look for this logo for pages that have been updated... if a page does not have the logo, it could have incorrect info or files in it. I have done some downloads and a page or two already. It is going to take a while... contact me if you are confused about something.

      -

      There were a few items I was worried about, like all my downloads and places like Gamebanana... will it all still work? Mostly it does. There seem to be some custom models missing bits here and there; that may be down to some .VMT and .VTF file paths now being incorrect in the files. Basically, all you have to do with older custom stuff is this: where it used to go in the "dod" folder, it now goes in "dod\custom". Make a new folder there and call it any name you like; that folder is now the "dod" folder for custom files.

      -

      So when downloading older content, if the readme says to install the folders in "dod", make sure you visit the new "dod\custom" folder, make a new folder named for what that custom item is, say... "terminator player skin", and all the folders like 'models' and 'materials' now go in "dod\custom\terminator player skin". It really is as simple as that, as the sketch below shows.

      -
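
      To make that concrete, here is roughly what the folder tree looks like after the move ("terminator player skin" is just the example name from above):

      dod\
        custom\
          terminator player skin\
            models\player\...
            materials\models\player\...

      -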

      I have placed the St Lo map on the shelf... having a rethink. I did, however, get enough time to finish off another one from the dark depths of my hard drives: office. CSS office was always a favorite, but for dod it is a disaster, because the Axis spawn in the play area and dod:s does not have a hostage mode. I think I have a good enough version to playtest; please see here and here for a look and a download.

      -

      I would not count on any other fixes, just disruption. I cannot say this is a good move, but there is little anyone can do about it; the disruption to sites like mine and Gamebanana is that all of the downloads will be... broken.

      -

      Version 11 of my HUD will drag on a bit more yet; it is ready, at Beta 12, and here for a download (see first post). Small changes happen a lot, and I am waiting for the file system in Orange Box games to change and break installers and manual downloads. Version 11 Beta 12 also has a huge set of manual files; it is in the first post, but here it is anyway... download (warning: installing a HUD this complex is for advanced users ONLY; better to use the installer if you are not sure).

      -

      I have added some pages server operators may find useful: a page that makes a custom server.cfg file ready to download. One of the most annoying things when setting up a Source-based server is the server config: what to put in it and what range the commands have. I have pages for DoD:s, TF2 and CSS.

      -

      Just fill out the input lines for the server name and the passwords (the web page will not remember them), pick from the drop-down choices that have defaults and recommended values, and press a button at the bottom... a server.cfg is ready to download, along the lines of the example below. The page also has information beside it so you can read about each command if you want to know more.

      -
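
      To give an idea of the output, a stripped-down server.cfg of the kind those pages generate might start like this. Every value here is a placeholder, not a recommendation:

      // minimal example - set your own values
      hostname "My DoD:S Server"
      // remote console password - change this
      rcon_password "changeme"
      // leave empty for a public server
      sv_password ""
      // map time limit in minutes
      mp_timelimit 25
      // friendly fire: 0 = off, 1 = on
      mp_friendlyfire 0

      -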

      I made these because the few of them that are on other sites are not in English, or are full of commands that are not for the game, have long since been replaced, or have had their defaults changed. I have tested each command to ensure it is a real command... after looking at some other config makers and downloadable server.cfg files, I can assure you there are LOTS of authors who do not test to the level I have with my web pages. Every similar page I tried had at least 10% useless or non-working commands.

      -

      The version 11 HUD/GUI for dod:s is quite advanced now and sitting on "beta 10". Just keep an eye on the first post in my forum for the latest. It now has a complete guide on the team menu, both in the download and in the dod:s HUD section.

      -

      Update: there has been a change to the separate site forums. The website now has a new integrated forum that shares its membership with dodbits.com. I will be shutting down the old forums soon; sorry if this has upset anyone. Please feel free to join dodbits now: access to the downloads section (for downloading and uploading your files) is now available, and you can also post comments on the downloads and the articles along with forum posts, all under the same membership login. Register here.

      -

      dodbits main site: If you want to upload to the downloads section, you can now fill out a simple form. You can edit and even delete your download after it is posted. Also, members can add comments to the bottom of any article or download.

      -

      Other things are in the works: a download for the old dod Sturmbot package has been made, an installer that gets Sturmbot up and running in seconds. Just make sure you run the old Steam dod ver 1.3 game in Steam first, then run the installer and use the desktop shortcut... it works, with no fooling around with read-only files and such. It has custom crosshairs and Sturmbot menus that support waypointing, and it may even include a custom GUI for the old dod soon too.

      -

      Beta 3.0 was released in July 2002 and added the Allied Sergeant, who carried an M3 Grease Gun, as well as the para gameplay mode, which was similar to Counter-Strike in that players did not respawn until the end of the round. The Germans could now also choose between two models of the powerful and deafeningly loud FG 42 Fallschirmjäger (bipod/scope), and the Gewehr could now be selected as a class, in order to compete with the semi-automatic Garand rifle the Allies used. Valve then made Day of Defeat an official Valve mod and released v1.0 in May 2003, which featured a lot of changes. Activision distributed a retail version of the game, though it could still be downloaded for free if the player had Half-Life. Later, version 1.1 became the first Steam release. 1.0 included quite a few new features - the pace of the game was increased, which helped to attract new players. Friendly fire was made non-default; an on-screen map where one's allies and thrown grenades were displayed was added, as was a Battlefield-style flag hanging over the heads of friends and foes for identification. Pop-up help messages, spoken by a dog wearing a helmet (in the same vein as Microsoft's Office Assistant), also appeared in v1.0. Bleeding - a key feature of the betas - was removed, as testing found that new players had difficulty understanding the concept of pressing the bandage key when health could not be recovered. Night-time battlefields were removed, as they tended to be the least-played of the beta maps. Version 1.0 also included auto-reload (which defaulted to "always on"), some new maps and major modifications to some old maps (e.g. Anzio). At first, old players felt that the Garand had been made weaker, adding an Axis bias to the game. It was later learned that there were issues with hitboxes, which caused a lot of shots to register as hitting different body parts and doing less damage. British troops were also issued in 1.0, but were only featured in 3 maps and had only 5 weapon classes. The American Bazooka, German Panzerschreck and British PIAT became independent classes in v1.2, and mortar classes were proposed but never got released. Para maps were kept, but the special gameplay was removed and replaced by the traditional flag-capture or objective gameplay. Version 1.0 also introduced the bipod for the BAR, allowing it to be deployed in the same locations as the machine guns and FG42s. In September 2005, Day of Defeat: Source was released.

      -
      -
      \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/How to Use ADORAGE EDIUS PLUGINS to Create Spectacular Scenic Transitions in Your Videos.md b/spaces/bioriAsaeru/text-to-voice/How to Use ADORAGE EDIUS PLUGINS to Create Spectacular Scenic Transitions in Your Videos.md deleted file mode 100644 index 95447ce4fb1c8349b6d5dbba15aacc639191c466..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/How to Use ADORAGE EDIUS PLUGINS to Create Spectacular Scenic Transitions in Your Videos.md +++ /dev/null @@ -1,7 +0,0 @@ -
      -

      Adorage Effects Package 13 offers hundreds of modifiable effects and transitions for themes such as family, celebrations and parties. Complete in HD, with 64-bit support and plugins for the best editing solution: excellent quality of effects, extremely fast analysis, professional results.

      -

      "If you installed Premiere Pro Cs 5.5, Adorage Plugin work (if you had allready Installed Adorage plugins). Just Copy & Paste C:\Program Files\Adobe\Common\Plug-ins\CS5\MediaCore\ Copy Folder - proDAD to C:\Program Files\Adobe\Common\Plug-ins\CS5.5\MediaCore\ Paste Folder - proDAD"

      -
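
      If you prefer to script that copy rather than do it in Explorer, a throwaway Python sketch of the same move could look like this. The paths are taken from the quote above, it needs administrator rights, and it is an illustration only, not an official proDAD procedure:

      # copy_prodad.py - hedged sketch of the manual folder copy described above
      import shutil
      from pathlib import Path

      base = Path(r"C:\Program Files\Adobe\Common\Plug-ins")
      src = base / "CS5" / "MediaCore" / "proDAD"
      dst = base / "CS5.5" / "MediaCore" / "proDAD"
      shutil.copytree(src, dst, dirs_exist_ok=True)  # dirs_exist_ok needs Python 3.8+
      print(f"Copied {src} -> {dst}")

      -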

      ADORAGE EDIUS PLUGINS


      Download File ⚙⚙⚙ https://urloso.com/2uyPOw



      -

      Answer: You just installed the "basic proDAD Adorage" without any volume (you probably ran adorage-30-service32bit.exe only). Install one of the Adorage Volumes you purchased, and it will 'unlock' your version in Pinnacle Studio (the demo logo will vanish).

      -
      -
      \ No newline at end of file diff --git a/spaces/bodah/RVC-Models-bo/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/bodah/RVC-Models-bo/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/_explorers.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/_explorers.py deleted file mode 100644 index eed30d5b8a1c14676503148ddf133c79ed2e33bf..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/_explorers.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import treetable as tt - -from .._base_explorers import BaseExplorer - - -class CompressionExplorer(BaseExplorer): - eval_metrics = ["sisnr", "visqol"] - - def stages(self): - return ["train", "valid", "evaluate"] - - def get_grid_meta(self): - """Returns the list of Meta information to display for each XP/job. - """ - return [ - tt.leaf("index", align=">"), - tt.leaf("name", wrap=140), - tt.leaf("state"), - tt.leaf("sig", align=">"), - ] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table. - """ - return [ - tt.group( - "train", - [ - tt.leaf("epoch"), - tt.leaf("bandwidth", ".2f"), - tt.leaf("adv", ".4f"), - tt.leaf("d_loss", ".4f"), - ], - align=">", - ), - tt.group( - "valid", - [ - tt.leaf("bandwidth", ".2f"), - tt.leaf("adv", ".4f"), - tt.leaf("msspec", ".4f"), - tt.leaf("sisnr", ".2f"), - ], - align=">", - ), - tt.group( - "evaluate", [tt.leaf(name, ".3f") for name in self.eval_metrics], align=">" - ), - ] diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
diff --git a/spaces/candlend/vits-hoshimi/sovits/add_speaker.py b/spaces/candlend/vits-hoshimi/sovits/add_speaker.py deleted file mode 100644 index e224f07c892a5fe1837e3cbf1745e0d8992ea283..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/add_speaker.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -import argparse -from tqdm import tqdm -from random import shuffle -import json - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list") - parser.add_argument("--source_dir", type=str, default="./dataset/32k", help="path to source dir") - args = parser.parse_args() - - previous_config = json.load(open("configs/config.json", "rb")) - - train = [] - val = [] - test = [] - idx = 0 - spk_dict = previous_config["spk"] - spk_id = max([i for i in spk_dict.values()]) + 1 - for speaker in tqdm(os.listdir(args.source_dir)): - if speaker not in spk_dict.keys(): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = [os.path.join(args.source_dir, speaker, i)for i in os.listdir(os.path.join(args.source_dir, speaker))] - wavs = [i for i in wavs if i.endswith("wav")] - shuffle(wavs) - train += wavs[2:-10] - val += wavs[:2] - test += wavs[-10:] - - assert previous_config["model"]["n_speakers"] > len(spk_dict.keys()) - shuffle(train) - shuffle(val) - shuffle(test) - - print("Writing", args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.test_list) - with open(args.test_list, "w") as f: - for fname in tqdm(test): - wavpath = fname - f.write(wavpath + "\n") - - previous_config["spk"] = spk_dict - - print("Writing configs/config.json") - with open("configs/config.json", "w") as f: - json.dump(previous_config, f, indent=2) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/roi_heads/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/roi_heads/__init__.py deleted file mode 100644 index d13e9c57235b982f3e0645bc316de2b75755dfda..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/roi_heads/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head, FastRCNNConvFCHead -from .keypoint_head import ( - ROI_KEYPOINT_HEAD_REGISTRY, - build_keypoint_head, - BaseKeypointRCNNHead, - KRCNNConvDeconvUpsampleHead, -) -from .mask_head import ( - ROI_MASK_HEAD_REGISTRY, - build_mask_head, - BaseMaskRCNNHead, - MaskRCNNConvUpsampleHead, -) -from .roi_heads import ( - ROI_HEADS_REGISTRY, - ROIHeads, - Res5ROIHeads, - StandardROIHeads, - build_roi_heads, - select_foreground_proposals, -) -from .cascade_rcnn import CascadeROIHeads -from .rotated_fast_rcnn import RROIHeads -from .fast_rcnn import FastRCNNOutputLayers - -from . 
import cascade_rcnn # isort:skip - -__all__ = list(globals().keys()) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_blocks.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_blocks.py deleted file mode 100644 index 5a0488adbfcf0c7eca08616f43ebf695acad4b7e..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_blocks.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import unittest -import torch -from torch import nn - -from detectron2.layers import ASPP, DepthwiseSeparableConv2d, FrozenBatchNorm2d -from detectron2.modeling.backbone.resnet import BasicStem, ResNet - - -""" -Test for misc layers. -""" - - -class TestBlocks(unittest.TestCase): - def test_separable_conv(self): - DepthwiseSeparableConv2d(3, 10, norm1="BN", activation1=nn.PReLU()) - - def test_aspp(self): - m = ASPP(3, 10, [2, 3, 4], norm="", activation=nn.PReLU()) - self.assertIsNot(m.convs[0].activation.weight, m.convs[1].activation.weight) - self.assertIsNot(m.convs[0].activation.weight, m.project.activation.weight) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_frozen_batchnorm_fp16(self): - from torch.cuda.amp import autocast - - C = 10 - input = torch.rand(1, C, 10, 10).cuda() - m = FrozenBatchNorm2d(C).cuda() - with autocast(): - output = m(input.half()) - self.assertEqual(output.dtype, torch.float16) - - # requires_grad triggers a different codepath - input.requires_grad_() - with autocast(): - output = m(input.half()) - self.assertEqual(output.dtype, torch.float16) - - def test_resnet_unused_stages(self): - resnet = ResNet(BasicStem(), ResNet.make_default_stages(18), out_features=["res2"]) - self.assertTrue(hasattr(resnet, "res2")) - self.assertFalse(hasattr(resnet, "res3")) - self.assertFalse(hasattr(resnet, "res5")) - - resnet = ResNet(BasicStem(), ResNet.make_default_stages(18), out_features=["res2", "res5"]) - self.assertTrue(hasattr(resnet, "res2")) - self.assertTrue(hasattr(resnet, "res4")) - self.assertTrue(hasattr(resnet, "res5")) diff --git a/spaces/ccolas/TastyPiano/src/pianocktail.py b/spaces/ccolas/TastyPiano/src/pianocktail.py deleted file mode 100644 index 1d3754e0f2a712d8dba35660f2bae2cad6b6e570..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/pianocktail.py +++ /dev/null @@ -1,79 +0,0 @@ -import time -import os -import pickle -from src.music.pipeline.music_pipeline import encode_music -from src.music2cocktailrep.pipeline.music2cocktailrep import music2cocktailrep, setup_translation_models, debug_translation -from src.cocktails.pipeline.cocktailrep2recipe import cocktailrep2recipe -from src.debugger import Debugger -from datetime import datetime -from shutil import copy - -synestesia_path = '../data/synesthesia/' -debugger = Debugger() - -def pianocktail(record=False, url=None, midi=None, audio=None, processed=None, crop=40, verbose=False, debug=False, level=0): - assert url is not None or midi is not None or audio is not None or processed is not None - if verbose: print('------\nNew synesthetic exploration!') - init_time = time.time() - music_ai_rep, music_handcoded_rep, all_paths, error = encode_music(record=record, url=url, audio_path=audio, midi_path=midi, nb_aug=0, noise_injection=False, - augmentation=False, processed_path=processed, crop=crop, apply_filtering=False, verbose=verbose, - level=level+2) - if music_ai_rep is not None: - cocktail_rep, affective_cluster_id, 
affect = music2cocktailrep(music_ai_rep, music_handcoded_rep, verbose=verbose, level=level+2) - cocktail_recipes, scores = cocktailrep2recipe(cocktail_rep, target_affective_cluster=affective_cluster_id, verbose=verbose, full_verbose=verbose, level=level+2) - cocktail_recipe = cocktail_recipes[0] - recipe_score = scores[0] - if debug: - music_reconstruction = debug_translation(music_ai_rep) - debugger.extract_info(all_paths, affective_cluster_id, affect, cocktail_rep, music_reconstruction, recipe_score, verbose=verbose, level=level+2) - debug_info = debugger.debug_dict - else: - debug_info = None - if verbose: - print(cocktail_recipe.replace('Recipe', ' ' * (level + 2) + 'Generated recipe:').replace('None ()', '')) - debugger.print_debug(level=level+2) - print(f'\nEnd of synesthetic exploration ({int(time.time() - init_time)} secs).\n------') - - else: - cocktail_recipe = None - debug_info = None - return cocktail_recipe, debug_info - -def setup_and_run(url=None, midi=None, audio=None, verbose=False, debug=False, extra_code=None): - assert url is not None or midi is not None or audio is not None - now = datetime.now() - folder_name = f'{now.year}-{now.month}-{now.day}_{now.hour}:{now.minute}:{now.second}' - folder_path = synestesia_path + folder_name - if extra_code is not None: - folder_path += '_' + extra_code - if os.path.exists(folder_path): - folder_path += '_2' - folder_path += '/' - os.makedirs(folder_path, exist_ok=True) - recipe, debug = pianocktail(url=url, verbose=verbose, debug=debug) - with open(folder_path + 'debug.pk', 'wb') as f: - pickle.dump(debug, f) - with open(folder_path + 'recipe.txt', 'w') as f: - f.write(recipe) - paths = debug['all_paths'] - if paths['url'] is not None: - with open(folder_path + 'url.txt', 'w') as f: - f.write(paths['url']) - for k in ['audio_path', 'midi_path']: - origin = paths[k] - copy(origin, folder_path + origin.split('/')[-1]) - - -if __name__ == '__main__': - urls = ["https://www.youtube.com/watch?v=PLFVGwGQcB0", - "https://www.youtube.com/watch?v=VQmuAr93OlI", - "https://www.youtube.com/watch?v=Nv2GgV34qIg&list=PLO9E3V4rGLD8_iWrCioJRWZXJJE3Fzu_J&index=4", - "https://www.youtube.com/watch?v=qAEIjWYdoYc&list=PLO9E3V4rGLD8_iWrCioJRWZXJJE3Fzu_J&index=1", - "https://www.youtube.com/watch?v=M73x3O7dhmg&list=PLO9E3V4rGLD8_iWrCioJRWZXJJE3Fzu_J&index=5"] - setup_translation_models() - setup_and_run(url=urls[0], verbose=True, debug=True) - recipes = [] - for url in urls: - recipe = pianocktail(url=url, verbose=True, debug=True)[0] - recipes.append(recipe) - stop = 1 diff --git a/spaces/ceckenrode/Memory-Chat-Story-Generator-Bloom/README.md b/spaces/ceckenrode/Memory-Chat-Story-Generator-Bloom/README.md deleted file mode 100644 index 655490e5e51b1b597c7362b1ad91b4548471318f..0000000000000000000000000000000000000000 --- a/spaces/ceckenrode/Memory-Chat-Story-Generator-Bloom/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Memory Chat Story Generator Bloom -emoji: 📊 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/src/utils.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/src/utils.py deleted file mode 100644 index 815c70016c33ca9133aba60811a4948e31a2df27..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/open_flamingo/src/utils.py +++ /dev/null @@ -1,31 
+0,0 @@ -def extend_instance(obj, mixin): - """Apply mixins to a class instance after creation""" - base_cls = obj.__class__ - base_cls_name = obj.__class__.__name__ - obj.__class__ = type( - base_cls_name, (mixin, base_cls), {} - ) # mixin needs to go first for our forward() logic to work - - -def getattr_recursive(obj, att): - """ - Return nested attribute of obj - Example: getattr_recursive(obj, 'a.b.c') is equivalent to obj.a.b.c - """ - if att == "": - return obj - i = att.find(".") - if i < 0: - return getattr(obj, att) - else: - return getattr_recursive(getattr(obj, att[:i]), att[i + 1 :]) - - -def setattr_recursive(obj, att, val): - """ - Set nested attribute of obj - Example: setattr_recursive(obj, 'a.b.c', val) is equivalent to obj.a.b.c = val - """ - if "." in att: - obj = getattr_recursive(obj, ".".join(att.split(".")[:-1])) - setattr(obj, att.split(".")[-1], val) diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py deleted file mode 100644 index f246ecab0dd01bceda5c612dad9b0679a9691a6a..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py +++ /dev/null @@ -1,393 +0,0 @@ -import argparse -import logging -import os -from pathlib import Path -from typing import Any, Dict - -import pytorch_lightning as pl -from pytorch_lightning.utilities import rank_zero_info - -from transformers import ( - AdamW, - AutoConfig, - AutoModel, - AutoModelForPreTraining, - AutoModelForQuestionAnswering, - AutoModelForSeq2SeqLM, - AutoModelForSequenceClassification, - AutoModelForTokenClassification, - AutoModelWithLMHead, - AutoTokenizer, - PretrainedConfig, - PreTrainedTokenizer, -) -from transformers.optimization import ( - Adafactor, - get_cosine_schedule_with_warmup, - get_cosine_with_hard_restarts_schedule_with_warmup, - get_linear_schedule_with_warmup, - get_polynomial_decay_schedule_with_warmup, -) -from transformers.utils.versions import require_version - - -logger = logging.getLogger(__name__) - -require_version("pytorch_lightning>=1.0.4") - -MODEL_MODES = { - "base": AutoModel, - "sequence-classification": AutoModelForSequenceClassification, - "question-answering": AutoModelForQuestionAnswering, - "pretraining": AutoModelForPreTraining, - "token-classification": AutoModelForTokenClassification, - "language-modeling": AutoModelWithLMHead, - "summarization": AutoModelForSeq2SeqLM, - "translation": AutoModelForSeq2SeqLM, -} - - -# update this and the import above to support new schedulers from transformers.optimization -arg_to_scheduler = { - "linear": get_linear_schedule_with_warmup, - "cosine": get_cosine_schedule_with_warmup, - "cosine_w_restarts": get_cosine_with_hard_restarts_schedule_with_warmup, - "polynomial": get_polynomial_decay_schedule_with_warmup, - # '': get_constant_schedule, # not supported for now - # '': get_constant_schedule_with_warmup, # not supported for now -} -arg_to_scheduler_choices = sorted(arg_to_scheduler.keys()) -arg_to_scheduler_metavar = "{" + ", ".join(arg_to_scheduler_choices) + "}" - - -class BaseTransformer(pl.LightningModule): - def __init__( - self, - hparams: argparse.Namespace, - num_labels=None, - mode="base", - config=None, - tokenizer=None, - model=None, - **config_kwargs, - ): - """Initialize a model, tokenizer and config.""" - super().__init__() - # TODO: move to 
self.save_hyperparameters() - # self.save_hyperparameters() - # can also expand arguments into trainer signature for easier reading - - self.save_hyperparameters(hparams) - self.step_count = 0 - self.output_dir = Path(self.hparams.output_dir) - cache_dir = self.hparams.cache_dir if self.hparams.cache_dir else None - if config is None: - self.config = AutoConfig.from_pretrained( - self.hparams.config_name if self.hparams.config_name else self.hparams.model_name_or_path, - **({"num_labels": num_labels} if num_labels is not None else {}), - cache_dir=cache_dir, - **config_kwargs, - ) - else: - self.config: PretrainedConfig = config - - extra_model_params = ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout") - for p in extra_model_params: - if getattr(self.hparams, p, None): - assert hasattr(self.config, p), f"model config doesn't have a `{p}` attribute" - setattr(self.config, p, getattr(self.hparams, p)) - - if tokenizer is None: - self.tokenizer = AutoTokenizer.from_pretrained( - self.hparams.tokenizer_name if self.hparams.tokenizer_name else self.hparams.model_name_or_path, - cache_dir=cache_dir, - ) - else: - self.tokenizer: PreTrainedTokenizer = tokenizer - self.model_type = MODEL_MODES[mode] - if model is None: - self.model = self.model_type.from_pretrained( - self.hparams.model_name_or_path, - from_tf=bool(".ckpt" in self.hparams.model_name_or_path), - config=self.config, - cache_dir=cache_dir, - ) - else: - self.model = model - - def load_hf_checkpoint(self, *args, **kwargs): - self.model = self.model_type.from_pretrained(*args, **kwargs) - - def get_lr_scheduler(self): - get_schedule_func = arg_to_scheduler[self.hparams.lr_scheduler] - scheduler = get_schedule_func( - self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=self.total_steps() - ) - scheduler = {"scheduler": scheduler, "interval": "step", "frequency": 1} - return scheduler - - def configure_optimizers(self): - """Prepare optimizer and schedule (linear warmup and decay)""" - model = self.model - no_decay = ["bias", "LayerNorm.weight"] - optimizer_grouped_parameters = [ - { - "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], - "weight_decay": self.hparams.weight_decay, - }, - { - "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], - "weight_decay": 0.0, - }, - ] - if self.hparams.adafactor: - optimizer = Adafactor( - optimizer_grouped_parameters, lr=self.hparams.learning_rate, scale_parameter=False, relative_step=False - ) - - else: - optimizer = AdamW( - optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon - ) - self.opt = optimizer - - scheduler = self.get_lr_scheduler() - - return [optimizer], [scheduler] - - def test_step(self, batch, batch_nb): - return self.validation_step(batch, batch_nb) - - def test_epoch_end(self, outputs): - return self.validation_end(outputs) - - def total_steps(self) -> int: - """The number of total training steps that will be run. 
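(Illustrative worked example with made-up numbers: train_batch_size=32, accumulate_grad_batches=2 and gpus=4 give an effective batch size of 32 * 2 * 4 = 256, so a 25,600-sample dataset trained for max_epochs=3 yields (25600 / 256) * 3 = 300 total steps.)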
Used for lr scheduler purposes.""" - num_devices = max(1, self.hparams.gpus) # TODO: consider num_tpu_cores - effective_batch_size = self.hparams.train_batch_size * self.hparams.accumulate_grad_batches * num_devices - return (self.dataset_size / effective_batch_size) * self.hparams.max_epochs - - def setup(self, mode): - if mode == "test": - self.dataset_size = len(self.test_dataloader().dataset) - else: - self.train_loader = self.get_dataloader("train", self.hparams.train_batch_size, shuffle=True) - self.dataset_size = len(self.train_dataloader().dataset) - - def get_dataloader(self, type_path: str, batch_size: int, shuffle: bool = False): - raise NotImplementedError("You must implement this for your task") - - def train_dataloader(self): - return self.train_loader - - def val_dataloader(self): - return self.get_dataloader("dev", self.hparams.eval_batch_size, shuffle=False) - - def test_dataloader(self): - return self.get_dataloader("test", self.hparams.eval_batch_size, shuffle=False) - - def _feature_file(self, mode): - return os.path.join( - self.hparams.data_dir, - "cached_{}_{}_{}".format( - mode, - list(filter(None, self.hparams.model_name_or_path.split("/"))).pop(), - str(self.hparams.max_seq_length), - ), - ) - - @pl.utilities.rank_zero_only - def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None: - save_path = self.output_dir.joinpath("best_tfmr") - self.model.config.save_step = self.step_count - self.model.save_pretrained(save_path) - self.tokenizer.save_pretrained(save_path) - - @staticmethod - def add_model_specific_args(parser, root_dir): - parser.add_argument( - "--model_name_or_path", - default=None, - type=str, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models", - ) - parser.add_argument( - "--config_name", default="", type=str, help="Pretrained config name or path if not the same as model_name" - ) - parser.add_argument( - "--tokenizer_name", - default=None, - type=str, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--cache_dir", - default="", - type=str, - help="Where do you want to store the pre-trained models downloaded from huggingface.co", - ) - parser.add_argument( - "--encoder_layerdrop", - type=float, - help="Encoder layer dropout probability (Optional). Goes into model.config", - ) - parser.add_argument( - "--decoder_layerdrop", - type=float, - help="Decoder layer dropout probability (Optional). Goes into model.config", - ) - parser.add_argument( - "--dropout", - type=float, - help="Dropout probability (Optional). Goes into model.config", - ) - parser.add_argument( - "--attention_dropout", - type=float, - help="Attention dropout probability (Optional). 
Goes into model.config", - ) - parser.add_argument("--learning_rate", default=5e-5, type=float, help="The initial learning rate for Adam.") - parser.add_argument( - "--lr_scheduler", - default="linear", - choices=arg_to_scheduler_choices, - metavar=arg_to_scheduler_metavar, - type=str, - help="Learning rate scheduler", - ) - parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight decay if we apply some.") - parser.add_argument("--adam_epsilon", default=1e-8, type=float, help="Epsilon for Adam optimizer.") - parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup over warmup_steps.") - parser.add_argument("--num_workers", default=4, type=int, help="kwarg passed to DataLoader") - parser.add_argument("--num_train_epochs", dest="max_epochs", default=3, type=int) - parser.add_argument("--train_batch_size", default=32, type=int) - parser.add_argument("--eval_batch_size", default=32, type=int) - parser.add_argument("--adafactor", action="store_true") - - -class LoggingCallback(pl.Callback): - def on_batch_end(self, trainer, pl_module): - lr_scheduler = trainer.lr_schedulers[0]["scheduler"] - lrs = {f"lr_group_{i}": lr for i, lr in enumerate(lr_scheduler.get_lr())} - pl_module.logger.log_metrics(lrs) - - def on_validation_end(self, trainer: pl.Trainer, pl_module: pl.LightningModule): - rank_zero_info("***** Validation results *****") - metrics = trainer.callback_metrics - # Log results - for key in sorted(metrics): - if key not in ["log", "progress_bar"]: - rank_zero_info("{} = {}\n".format(key, str(metrics[key]))) - - def on_test_end(self, trainer: pl.Trainer, pl_module: pl.LightningModule): - rank_zero_info("***** Test results *****") - metrics = trainer.callback_metrics - # Log and save results to file - output_test_results_file = os.path.join(pl_module.hparams.output_dir, "test_results.txt") - with open(output_test_results_file, "w") as writer: - for key in sorted(metrics): - if key not in ["log", "progress_bar"]: - rank_zero_info("{} = {}\n".format(key, str(metrics[key]))) - writer.write("{} = {}\n".format(key, str(metrics[key]))) - - -def add_generic_args(parser, root_dir) -> None: - # To allow all pl args uncomment the following line - # parser = pl.Trainer.add_argparse_args(parser) - parser.add_argument( - "--output_dir", - default=None, - type=str, - required=True, - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--fp16", - action="store_true", - help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit", - ) - - parser.add_argument( - "--fp16_opt_level", - type=str, - default="O2", - help=( - "For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']." 
- "See details at https://nvidia.github.io/apex/amp.html" - ), - ) - parser.add_argument("--n_tpu_cores", dest="tpu_cores", type=int) - parser.add_argument("--max_grad_norm", dest="gradient_clip_val", default=1.0, type=float, help="Max gradient norm") - parser.add_argument("--do_train", action="store_true", help="Whether to run training.") - parser.add_argument("--do_predict", action="store_true", help="Whether to run predictions on the test set.") - parser.add_argument( - "--gradient_accumulation_steps", - dest="accumulate_grad_batches", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument("--seed", type=int, default=42, help="random seed for initialization") - parser.add_argument( - "--data_dir", - default=None, - type=str, - required=True, - help="The input data dir. Should contain the training files for the CoNLL-2003 NER task.", - ) - - -def generic_train( - model: BaseTransformer, - args: argparse.Namespace, - early_stopping_callback=None, - logger=True, # can pass WandbLogger() here - extra_callbacks=[], - checkpoint_callback=None, - logging_callback=None, - **extra_train_kwargs, -): - pl.seed_everything(args.seed) - - # init model - odir = Path(model.hparams.output_dir) - odir.mkdir(exist_ok=True) - - # add custom checkpoints - if checkpoint_callback is None: - checkpoint_callback = pl.callbacks.ModelCheckpoint( - filepath=args.output_dir, prefix="checkpoint", monitor="val_loss", mode="min", save_top_k=1 - ) - if early_stopping_callback: - extra_callbacks.append(early_stopping_callback) - if logging_callback is None: - logging_callback = LoggingCallback() - - train_params = {} - - # TODO: remove with PyTorch 1.6 since pl uses native amp - if args.fp16: - train_params["precision"] = 16 - train_params["amp_level"] = args.fp16_opt_level - - if args.gpus > 1: - train_params["distributed_backend"] = "ddp" - - train_params["accumulate_grad_batches"] = args.accumulate_grad_batches - train_params["accelerator"] = extra_train_kwargs.get("accelerator", None) - train_params["profiler"] = extra_train_kwargs.get("profiler", None) - - trainer = pl.Trainer.from_argparse_args( - args, - weights_summary=None, - callbacks=[logging_callback] + extra_callbacks, - logger=logger, - checkpoint_callback=checkpoint_callback, - **train_params, - ) - - if args.do_train: - trainer.fit(model) - - return trainer diff --git a/spaces/chinhon/malay_headlines_writer/app.py b/spaces/chinhon/malay_headlines_writer/app.py deleted file mode 100644 index dc4c22240d7bd505dce749b23a5d366ffc6a75c1..0000000000000000000000000000000000000000 --- a/spaces/chinhon/malay_headlines_writer/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import gradio as gr -import re - -from gradio.mix import Parallel -from transformers import ( - AutoTokenizer, - AutoModelForSeq2SeqLM, -) - -def clean_text(text): - text = text.encode("ascii", errors="ignore").decode( - "ascii" - ) # remove non-ascii, Chinese characters - text = re.sub(r"\n", " ", text) - text = re.sub(r"\n\n", " ", text) - text = re.sub(r"\t", " ", text) - text = text.strip(" ") - text = re.sub( - " +", " ", text - ).strip() # get rid of multiple spaces and replace with a single - return text - - -modchoice_1 = "chinhon/pegasus-newsroom-malay_headlines" - -def headline_writer1(text): - input_text = clean_text(text) - - tokenizer_1 = AutoTokenizer.from_pretrained(modchoice_1) - - model_1 = AutoModelForSeq2SeqLM.from_pretrained(modchoice_1) - - with tokenizer_1.as_target_tokenizer(): - batch = 
tokenizer_1( - input_text, truncation=True, padding="longest", return_tensors="pt" - ) - - translated = model_1.generate(**batch) - - summary_1 = tokenizer_1.batch_decode(translated, skip_special_tokens=True) - - return summary_1[0] - - -headline1 = gr.Interface( - fn=headline_writer1, - inputs=gr.inputs.Textbox(), - outputs=gr.outputs.Textbox(label=""), -) - - -modchoice_2 = "chinhon/pegasus-multi_news-malay_headlines_02" - -def headline_writer2(text): - input_text = clean_text(text) - - tokenizer_2 = AutoTokenizer.from_pretrained(modchoice_2) - - model_2 = AutoModelForSeq2SeqLM.from_pretrained(modchoice_2) - - with tokenizer_2.as_target_tokenizer(): - batch = tokenizer_2( - input_text, truncation=True, padding="longest", return_tensors="pt" - ) - - translated = model_2.generate(**batch) - - summary_2 = tokenizer_2.batch_decode(translated, skip_special_tokens=True) - - return summary_2[0] - - -headline2 = gr.Interface( - fn=headline_writer2, - inputs=gr.inputs.Textbox(), - outputs=gr.outputs.Textbox(label=""), -) - - -Parallel( - headline1, - headline2, - title="Malay News Headlines Generator", - inputs=gr.inputs.Textbox( - lines=20, - label="Paste the first few paragraphs of a Malay language news story here, and choose from 2 suggested headlines", - ), - theme="darkdefault", -).launch(enable_queue=True) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/channels.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/channels.py deleted file mode 100644 index 07f9f43e8e1387a374e60ae99ee9a92e1549d1e1..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/channels.py +++ /dev/null @@ -1,17317 +0,0 @@ -# The contents of this file are automatically written by -# tools/generate_schema_wrapper.py. Do not modify directly. - -import sys -from . import core -import pandas as pd -from altair.utils.schemapi import Undefined, with_property_setters -from altair.utils import parse_shorthand -from typing import overload, List - -if sys.version_info >= (3, 8): - from typing import Literal -else: - from typing_extensions import Literal - - -class FieldChannelMixin: - def to_dict(self, validate=True, ignore=(), context=None): - context = context or {} - shorthand = self._get('shorthand') - field = self._get('field') - - if shorthand is not Undefined and field is not Undefined: - raise ValueError("{} specifies both shorthand={} and field={}. " - "".format(self.__class__.__name__, shorthand, field)) - - if isinstance(shorthand, (tuple, list)): - # If given a list of shorthands, then transform it to a list of classes - kwds = self._kwds.copy() - kwds.pop('shorthand') - return [self.__class__(sh, **kwds).to_dict(validate=validate, ignore=ignore, context=context) - for sh in shorthand] - - if shorthand is Undefined: - parsed = {} - elif isinstance(shorthand, str): - parsed = parse_shorthand(shorthand, data=context.get('data', None)) - type_required = 'type' in self._kwds - type_in_shorthand = 'type' in parsed - type_defined_explicitly = self._get('type') is not Undefined - if not type_required: - # Secondary field names don't require a type argument in VegaLite 3+. - # We still parse it out of the shorthand, but drop it here. 
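# For intuition, parse_shorthand expands encoding shorthands into kwargs
# roughly as follows (hedged sketch of expected outputs, not a verbatim spec):
#   parse_shorthand("mean(price):Q") -> {"aggregate": "mean", "field": "price", "type": "quantitative"}
#   parse_shorthand("date:T")        -> {"field": "date", "type": "temporal"}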
- parsed.pop('type', None) - elif not (type_in_shorthand or type_defined_explicitly): - if isinstance(context.get('data', None), pd.DataFrame): - raise ValueError( - 'Unable to determine data type for the field "{}";' - " verify that the field name is not misspelled." - " If you are referencing a field from a transform," - " also confirm that the data type is specified correctly.".format(shorthand) - ) - else: - raise ValueError("{} encoding field is specified without a type; " - "the type cannot be automatically inferred because " - "the data is not specified as a pandas.DataFrame." - "".format(shorthand)) - else: - # Shorthand is not a string; we pass the definition to field, - # and do not do any parsing. - parsed = {'field': shorthand} - context["parsed_shorthand"] = parsed - - return super(FieldChannelMixin, self).to_dict( - validate=validate, - ignore=ignore, - context=context - ) - - -class ValueChannelMixin: - def to_dict(self, validate=True, ignore=(), context=None): - context = context or {} - condition = self._get('condition', Undefined) - copy = self # don't copy unless we need to - if condition is not Undefined: - if isinstance(condition, core.SchemaBase): - pass - elif 'field' in condition and 'type' not in condition: - kwds = parse_shorthand(condition['field'], context.get('data', None)) - copy = self.copy(deep=['condition']) - copy['condition'].update(kwds) - return super(ValueChannelMixin, copy).to_dict(validate=validate, - ignore=ignore, - context=context) - - -class DatumChannelMixin: - def to_dict(self, validate=True, ignore=(), context=None): - context = context or {} - datum = self._get('datum', Undefined) - copy = self # don't copy unless we need to - if datum is not Undefined: - if isinstance(datum, core.SchemaBase): - pass - return super(DatumChannelMixin, copy).to_dict(validate=validate, - ignore=ignore, - context=context) - - -@with_property_setters -class Angle(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber): - """Angle schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. 
- condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. 
- - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. 
- * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "angle" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Angle': - ... - - def bandPosition(self, _: float, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Angle': - ... 
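# Aside: the `...` bodies in these setter definitions exist only for static
# type checkers; the @with_property_setters decorator installs the real
# chainable methods at class-creation time. A minimal, hypothetical sketch of
# that pattern (illustrative names only, not Altair's actual internals):
#
#     def with_chainable_setters(*props):
#         def decorate(cls):
#             def make_setter(name):
#                 def setter(self, value):
#                     kwds = dict(self.kwds)
#                     kwds[name] = value
#                     return cls(**kwds)  # return an updated copy, enabling chaining
#                 return setter
#             for prop in props:
#                 setattr(cls, prop, make_setter(prop))
#             return cls
#         return decorate
#
#     @with_chainable_setters("scale", "sort", "title")
#     class Channel:
#         def __init__(self, **kwds):
#             self.kwds = kwds
#
#     Channel(field="angle").scale({"domain": [0, 360]}).title("Direction")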
- - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Angle': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Angle': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Angle': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Angle': - ... 
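# Usage sketch for this channel (assumes Altair 5 and a pandas DataFrame `df`
# with a numeric "direction" column and a nominal "station" column):
#
#     import altair as alt
#     alt.Chart(df).mark_point(shape="wedge").encode(
#         x="station:N",
#         angle=alt.Angle("direction:Q").scale(domain=[0, 360], range=[0, 360]),
#     )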
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Angle, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, condition=condition, field=field, legend=legend, - scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class AngleDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber): - """AngleDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. 
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "angle" - - def bandPosition(self, _: float, **kwds) -> 'AngleDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'AngleDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'AngleDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'AngleDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'AngleDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'AngleDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'AngleDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'AngleDatum': - ... 
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(AngleDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition, - title=title, type=type, **kwds) - - -@with_property_setters -class AngleValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber): - """AngleValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(float, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "angle" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'AngleValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'AngleValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'AngleValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(AngleValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Color(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefGradientstringnull): - """Color schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. 
- bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. 
In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "color" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Color': - ... - - def bandPosition(self, _: float, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Color': - ... 
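# Sketch of the `condition` machinery in use (hedged; assumes Altair 5's
# parameter API and a DataFrame `df` with "x", "y" and "species" columns):
#
#     import altair as alt
#     brush = alt.selection_interval()
#     alt.Chart(df).mark_point().encode(
#         x="x:Q",
#         y="y:Q",
#         color=alt.condition(brush, alt.Color("species:N"), alt.value("lightgray")),
#     ).add_params(brush)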
- - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Color': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Color': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Color': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Color': - ... 
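# A minimal usage sketch for the Color channel, assuming a hypothetical pandas
# DataFrame (the column names are invented): the chained .scale()/.legend()
# calls go through the generated property setters above.
import altair as alt
import pandas as pd

df = pd.DataFrame({"category": list("ABCABC"), "value": [1, 4, 2, 5, 3, 6]})

chart = alt.Chart(df).mark_bar().encode(
    x="category:N",
    # Shorthand "sum(value):Q" sets field, aggregate, and type at once.
    y="sum(value):Q",
    color=alt.Color("category:N").scale(scheme="tableau10").legend(title="Group"),
)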
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Color, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, condition=condition, field=field, legend=legend, - scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class ColorDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefGradientstringnull): - """ColorDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. 
-
-        **Default value:**
-
-        1) For a data ``field``, ``"nominal"`` is the default data type unless the field
-        encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
-        ``timeUnit`` that satisfies the following criteria:
-
-
-        * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
-          or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
-          ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
-          quantitative scale `__.
-        * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
-          or (2) the specified scale type is a time or utc scale
-        * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
-          order
-          `__,
-          (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
-          channel is ``order``.
-
-        2) For a constant value in data domain ( ``datum`` ):
-
-
-        * ``"quantitative"`` if the datum is a number
-        * ``"nominal"`` if the datum is a string
-        * ``"temporal"`` if the datum is `a date time object
-          `__
-
-        **Note:**
-
-
-        * Data ``type`` describes the semantics of the data rather than the primitive data
-          types (number, string, etc.). The same primitive data type can have different
-          types of measurement. For example, numeric data can represent quantitative,
-          ordinal, or nominal data.
-        * Data values for a temporal field can be either a date-time string (e.g.,
-          ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
-          timestamp number (e.g., ``1552199579097`` ).
-        * When used with `bin `__, the
-          ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
-          or `"ordinal" (for using an ordinal bin scale)
-          `__.
-        * When used with `timeUnit
-          `__, the ``type`` property
-          can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
-          (for using an ordinal scale)
-          `__.
-        * When used with `aggregate
-          `__, the ``type`` property
-          refers to the post-aggregation data type. For example, we can calculate count
-          ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
-          "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
-        * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
-          ``type`` as they must have exactly the same type as their primary channels (e.g.,
-          ``x``, ``y`` ).
-
-        **See also:** `type `__
-        documentation.
-    """
-    _class_is_valid_at_instantiation = False
-    _encoding_name = "color"
-
-    def bandPosition(self, _: float, **kwds) -> 'ColorDatum':
-        ...
-
-    @overload  # type: ignore[no-overload-impl]
-    def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ColorDatum':
-        ...
-
-    @overload  # type: ignore[no-overload-impl]
-    def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ColorDatum':
-        ...
-
-    @overload  # type: ignore[no-overload-impl]
-    def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'ColorDatum':
-        ...
-
-    @overload  # type: ignore[no-overload-impl]
-    def title(self, _: str, **kwds) -> 'ColorDatum':
-        ...
-
-    @overload  # type: ignore[no-overload-impl]
-    def title(self, _: List[str], **kwds) -> 'ColorDatum':
-        ...
-
-    @overload  # type: ignore[no-overload-impl]
-    def title(self, _: None, **kwds) -> 'ColorDatum':
-        ...
-
-    def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'ColorDatum':
-        ...
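# A minimal sketch of ColorDatum, assuming a hypothetical DataFrame: a constant
# datum (here the repeated field name) is encoded on color instead of a data
# field, so each repeated layer gets its own color and legend entry.
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": range(5), "low": [1, 2, 2, 3, 4], "high": [3, 4, 5, 5, 6]})

chart = alt.Chart(df).mark_line().encode(
    x="x:Q",
    y=alt.Y(alt.repeat("layer"), type="quantitative"),
    color=alt.ColorDatum(alt.repeat("layer")),
).repeat(layer=["low", "high"])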
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(ColorDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition, - title=title, type=type, **kwds) - - -@with_property_setters -class ColorValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefGradientstringnull): - """ColorValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(:class:`Gradient`, string, None, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "color" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ColorValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ColorValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'ColorValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(ColorValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Column(FieldChannelMixin, core.RowColumnEncodingFieldDef): - """Column schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - align : :class:`LayoutAlign` - The alignment to apply to row/column facet's subplot. The supported string values - are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. 
- * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - **Default value:** ``"all"``. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - center : boolean - Boolean flag indicating if facet's subviews should be centered relative to their - respective rows or columns. - - **Default value:** ``false`` - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - header : anyOf(:class:`Header`, None) - An object defining properties of a facet's header. - sort : anyOf(:class:`SortArray`, :class:`SortOrder`, :class:`EncodingSortField`, None) - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - spacing : float - The spacing in pixels between facet's sub-views. 
-
-        **Default value** : Depends on ``"spacing"`` property of `the view composition
-        configuration `__ (
-        ``20`` by default)
-    timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
-        Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
-        field, or `a temporal field that gets cast as ordinal
-        `__.
-
-        **Default value:** ``undefined`` (None)
-
-        **See also:** `timeUnit `__
-        documentation.
-    title : anyOf(:class:`Text`, None)
-        A title for the field. If ``null``, the title will be removed.
-
-        **Default value:** derived from the field's name and transformation function (
-        ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
-        the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
-        field is binned or has a time unit applied, the applied function is shown in
-        parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
-        Otherwise, the title is simply the field name.
-
-        **Notes** :
-
-        1) You can customize the default field title format by providing the `fieldTitle
-        `__ property in
-        the `config `__ or `fieldTitle
-        function via the compile function's options
-        `__.
-
-        2) If both field definition's ``title`` and axis, header, or legend ``title`` are
-        defined, axis/header/legend title will be used.
-    type : :class:`StandardType`
-        The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
-        ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
-        be a ``"geojson"`` type for encoding `'geoshape'
-        `__.
-
-        Vega-Lite automatically infers data types in many cases as discussed below. However,
-        type is required for a field if: (1) the field is not nominal and the field encoding
-        has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
-        type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
-        scale for a field with ``bin`` or ``timeUnit``.
-
-        **Default value:**
-
-        1) For a data ``field``, ``"nominal"`` is the default data type unless the field
-        encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
-        ``timeUnit`` that satisfies the following criteria:
-
-
-        * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
-          or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
-          ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
-          quantitative scale `__.
-        * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
-          or (2) the specified scale type is a time or utc scale
-        * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
-          order
-          `__,
-          (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
-          channel is ``order``.
-
-        2) For a constant value in data domain ( ``datum`` ):
-
-
-        * ``"quantitative"`` if the datum is a number
-        * ``"nominal"`` if the datum is a string
-        * ``"temporal"`` if the datum is `a date time object
-          `__
-
-        **Note:**
-
-
-        * Data ``type`` describes the semantics of the data rather than the primitive data
-          types (number, string, etc.). The same primitive data type can have different
-          types of measurement. For example, numeric data can represent quantitative,
-          ordinal, or nominal data.
-        * Data values for a temporal field can be either a date-time string (e.g.,
-          ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``,
``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "column" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Column': - ... - - def align(self, _: Literal["all", "each", "none"], **kwds) -> 'Column': - ... - - def bandPosition(self, _: float, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Column': - ... - - def center(self, _: bool, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def header(self, format=Undefined, formatType=Undefined, labelAlign=Undefined, labelAnchor=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOrient=Undefined, labelPadding=Undefined, labels=Undefined, orient=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOrient=Undefined, titlePadding=Undefined, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def header(self, _: None, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Column': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Column': - ... - - def spacing(self, _: float, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Column': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Column': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Column': - ... 
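# A minimal sketch of the Column facet channel, assuming hypothetical data:
# sort, spacing, and header are constructor arguments documented above.
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "site": ["A", "A", "B", "B", "C", "C"],
    "year": [2020, 2021] * 3,
    "yield": [10, 12, 9, 14, 11, 8],
})

chart = alt.Chart(df).mark_bar().encode(
    x="year:O",
    y="yield:Q",
    column=alt.Column("site:N", sort="descending", spacing=10,
                      header=alt.Header(titleOrient="bottom")),
)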
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, align=Undefined, - bandPosition=Undefined, bin=Undefined, center=Undefined, field=Undefined, - header=Undefined, sort=Undefined, spacing=Undefined, timeUnit=Undefined, - title=Undefined, type=Undefined, **kwds): - super(Column, self).__init__(shorthand=shorthand, aggregate=aggregate, align=align, - bandPosition=bandPosition, bin=bin, center=center, field=field, - header=header, sort=sort, spacing=spacing, timeUnit=timeUnit, - title=title, type=type, **kwds) - - -@with_property_setters -class Description(FieldChannelMixin, core.StringFieldDefWithCondition): - """Description schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefstringExprRef`, List(:class:`ConditionalValueDefstringExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - format : anyOf(string, :class:`Dict`) - When used with the default ``"number"`` and ``"time"`` format type, the text - formatting pattern for labels of guides (axes, legends, headers) and text marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. 
-        * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time
-          format pattern `__.
-
-        See the `format documentation `__
-        for more examples.
-
-        When used with a `custom formatType
-        `__, this
-        value will be passed as ``format`` alongside ``datum.value`` to the registered
-        function.
-
-        **Default value:** Derived from `numberFormat
-        `__ config for number
-        format and from `timeFormat
-        `__ config for time
-        format.
-    formatType : string
-        The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom
-        format type
-        `__.
-
-        **Default value:**
-
-
-        * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``.
-        * ``"number"`` for quantitative fields as well as ordinal and nominal fields without
-          ``timeUnit``.
-    timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
-        Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
-        field, or `a temporal field that gets cast as ordinal
-        `__.
-
-        **Default value:** ``undefined`` (None)
-
-        **See also:** `timeUnit `__
-        documentation.
-    title : anyOf(:class:`Text`, None)
-        A title for the field. If ``null``, the title will be removed.
-
-        **Default value:** derived from the field's name and transformation function (
-        ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
-        the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
-        field is binned or has a time unit applied, the applied function is shown in
-        parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
-        Otherwise, the title is simply the field name.
-
-        **Notes** :
-
-        1) You can customize the default field title format by providing the `fieldTitle
-        `__ property in
-        the `config `__ or `fieldTitle
-        function via the compile function's options
-        `__.
-
-        2) If both field definition's ``title`` and axis, header, or legend ``title`` are
-        defined, axis/header/legend title will be used.
-    type : :class:`StandardType`
-        The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
-        ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
-        be a ``"geojson"`` type for encoding `'geoshape'
-        `__.
-
-        Vega-Lite automatically infers data types in many cases as discussed below. However,
-        type is required for a field if: (1) the field is not nominal and the field encoding
-        has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
-        type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
-        scale for a field with ``bin`` or ``timeUnit``.
-
-        **Default value:**
-
-        1) For a data ``field``, ``"nominal"`` is the default data type unless the field
-        encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
-        ``timeUnit`` that satisfies the following criteria:
-
-
-        * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
-          or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
-          ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
-          quantitative scale `__.
-        * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
-          or (2) the specified scale type is a time or utc scale
-        * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
-          order
-          `__,
-          (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
-          channel is ``order``.
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "description" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Description': - ... - - def bandPosition(self, _: float, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringExprRef], **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Description': - ... 
- - @overload # type: ignore[no-overload-impl] - def format(self, _: str, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: dict, **kwds) -> 'Description': - ... - - def formatType(self, _: str, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Description': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Description': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Description': - ... 
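# A minimal sketch of the description channel, assuming hypothetical data: it
# attaches a per-mark text description (e.g., for accessibility tooling), with
# a d3 number-format string applied as documented above.
import altair as alt
import pandas as pd

df = pd.DataFrame({"item": ["a", "b"], "price": [3.5, 4.25]})

chart = alt.Chart(df).mark_point().encode(
    x="item:N",
    y="price:Q",
    description=alt.Description("price:Q", format="$.2f"),
)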
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, format=Undefined, formatType=Undefined, - timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Description, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, condition=condition, - field=field, format=format, formatType=formatType, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class DescriptionValue(ValueChannelMixin, core.StringValueDefWithCondition): - """DescriptionValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(string, None, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "description" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'DescriptionValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'DescriptionValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'DescriptionValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(DescriptionValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Detail(FieldChannelMixin, core.FieldDefWithoutScale): - """Detail schema wrapper - - Mapping(required=[shorthand]) - Definition object for a data field, its type and transformation of an encoding channel. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. 
-    bandPosition : float
-        Relative position on a band of a stacked, binned, time unit, or band scale. For
-        example, the marks will be positioned at the beginning of the band if set to ``0``,
-        and at the middle of the band if set to ``0.5``.
-    bin : anyOf(boolean, :class:`BinParams`, string, None)
-        A flag for binning a ``quantitative`` field, `an object defining binning parameters
-        `__, or indicating
-        that the data for ``x`` or ``y`` channel are binned before they are imported into
-        Vega-Lite ( ``"binned"`` ).
-
-
-        - If ``true``, default `binning parameters
-          `__ will be applied.
-        - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
-          already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
-          field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similarly to
-          binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
-          set the axis's `tickMinStep
-          `__ property.
-
-        **Default value:** ``false``
-
-        **See also:** `bin `__
-        documentation.
-    field : :class:`Field`
-        **Required.** A string defining the name of the field from which to pull a data
-        value or an object defining iterated values from the `repeat
-        `__ operator.
-
-        **See also:** `field `__
-        documentation.
-
-        **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
-        nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
-        field names contain dots or brackets but are not nested, you can use ``\\`` to
-        escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
-        about escaping in the `field documentation
-        `__. 2) ``field`` is not required
-        if ``aggregate`` is ``count``.
-    timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
-        Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
-        field, or `a temporal field that gets cast as ordinal
-        `__.
-
-        **Default value:** ``undefined`` (None)
-
-        **See also:** `timeUnit `__
-        documentation.
-    title : anyOf(:class:`Text`, None)
-        A title for the field. If ``null``, the title will be removed.
-
-        **Default value:** derived from the field's name and transformation function (
-        ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
-        the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
-        field is binned or has a time unit applied, the applied function is shown in
-        parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
-        Otherwise, the title is simply the field name.
-
-        **Notes** :
-
-        1) You can customize the default field title format by providing the `fieldTitle
-        `__ property in
-        the `config `__ or `fieldTitle
-        function via the compile function's options
-        `__.
-
-        2) If both field definition's ``title`` and axis, header, or legend ``title`` are
-        defined, axis/header/legend title will be used.
-    type : :class:`StandardType`
-        The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
-        ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
-        be a ``"geojson"`` type for encoding `'geoshape'
-        `__.
-
-        Vega-Lite automatically infers data types in many cases as discussed below.
However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "detail" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Detail': - ... - - def bandPosition(self, _: float, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Detail': - ... 
- - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Detail': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Detail': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Detail': - ... 
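# A minimal sketch of the detail channel, assuming hypothetical data: detail
# groups marks by a field without mapping it to any visual property, so both
# series below are drawn as separate lines in the same color.
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "t": [0, 1, 2] * 2,
    "value": [1, 3, 2, 2, 1, 4],
    "series": ["s1"] * 3 + ["s2"] * 3,
})

chart = alt.Chart(df).mark_line().encode(x="t:Q", y="value:Q", detail="series:N")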
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Detail, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, timeUnit=timeUnit, - title=title, type=type, **kwds) - - -@with_property_setters -class Facet(FieldChannelMixin, core.FacetEncodingFieldDef): - """Facet schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. The supported string values are - ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. 
- - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to ``hconcat`` (for ``concat`` ) and to using - the ``column`` channel (for ``facet`` and ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - header : anyOf(:class:`Header`, None) - An object defining properties of a facet's header. - sort : anyOf(:class:`SortArray`, :class:`SortOrder`, :class:`EncodingSortField`, None) - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For a discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. An object of - the form ``{"row": number, "column": number}`` can be used to set different spacing - values for rows and columns. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field, or `a temporal field that gets cast as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed.
- - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When used with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When used with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When used with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type.
For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "facet" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def align(self, _: Literal["all", "each", "none"], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def align(self, column=Undefined, row=Undefined, **kwds) -> 'Facet': - ... - - def bandPosition(self, _: float, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Facet': - ... - - def bounds(self, _: Literal["full", "flush"], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def center(self, _: bool, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def center(self, column=Undefined, row=Undefined, **kwds) -> 'Facet': - ... - - def columns(self, _: float, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def header(self, format=Undefined, formatType=Undefined, labelAlign=Undefined, labelAnchor=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOrient=Undefined, labelPadding=Undefined, labels=Undefined, orient=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOrient=Undefined, titlePadding=Undefined, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def header(self, _: None, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Facet': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def spacing(self, _: float, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def spacing(self, column=Undefined, row=Undefined, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Facet': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Facet': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Facet': - ... 
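The Facet channel above wraps a one-field facet definition; a short sketch of how its generated setters chain in practice (assuming Altair 5's alt.Facet alias; the data is illustrative):

import altair as alt
import pandas as pd

df = pd.DataFrame({
    "hp": [90, 130, 200, 110, 150, 95],
    "mpg": [30, 25, 15, 28, 22, 32],
    "origin": ["USA", "USA", "Europe", "Europe", "Japan", "Japan"],
})

# columns=2 wraps the facet grid; the generated .sort() and .header()
# setters shown above configure facet order and header labels.
chart = alt.Chart(df).mark_point().encode(
    x="hp:Q",
    y="mpg:Q",
    facet=alt.Facet("origin:N", columns=2)
        .sort("descending")
        .header(title=None),
)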
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, align=Undefined, - bandPosition=Undefined, bin=Undefined, bounds=Undefined, center=Undefined, - columns=Undefined, field=Undefined, header=Undefined, sort=Undefined, - spacing=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Facet, self).__init__(shorthand=shorthand, aggregate=aggregate, align=align, - bandPosition=bandPosition, bin=bin, bounds=bounds, center=center, - columns=columns, field=field, header=header, sort=sort, - spacing=spacing, timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class Fill(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefGradientstringnull): - """Fill schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similarly to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation.
- scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For a discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field, or `a temporal field that gets cast as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below.
However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When used with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When used with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When used with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "fill" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Fill': - ... - - def bandPosition(self, _: float, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Fill': - ...
- - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Fill': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Fill': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Fill': - ... 
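A sketch of the Fill channel's condition branch in use (assuming Altair 5; alt.condition and alt.selection_point are the public helpers that build the conditional definitions the overloads above accept; the data is illustrative):

import altair as alt
import pandas as pd

cars = pd.DataFrame({
    "origin": ["USA", "Europe", "Japan"],
    "hp": [120, 110, 105],
})

# Clicking a bar keeps its field-based fill; unselected bars fall back
# to a constant visual value via the condition's else-branch.
click = alt.selection_point(fields=["origin"])

chart = alt.Chart(cars).mark_bar().encode(
    x="origin:N",
    y="hp:Q",
    fill=alt.condition(click, alt.Fill("origin:N"), alt.value("lightgray")),
).add_params(click)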
- - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Fill': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Fill': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Fill, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, condition=condition, field=field, legend=legend, - scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class FillDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefGradientstringnull): - """FillDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. 
- - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When used with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When used with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When used with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "fill" - - def bandPosition(self, _: float, **kwds) -> 'FillDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'FillDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'FillDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'FillDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'FillDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'FillDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'FillDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'FillDatum': - ...
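To make the field/datum distinction concrete, a sketch of FillDatum in use (assuming Altair 5's alt.FillDatum alias; names are illustrative): the constant is a data-domain value run through the fill scale, unlike the literal visual value taken by FillValue below.

import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "group": ["a", "a", "b", "b"]})

points = alt.Chart(df).mark_point(filled=True).encode(
    x="x:Q",
    y="group:N",
    # "a" is mapped through the fill scale rather than used as a color
    # string, so it behaves like data, not like a literal visual value.
    fill=alt.FillDatum("a"),
)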
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(FillDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition, - title=title, type=type, **kwds) - - -@with_property_setters -class FillValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefGradientstringnull): - """FillValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(:class:`Gradient`, string, None, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` and ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "fill" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'FillValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'FillValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'FillValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'FillValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'FillValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'FillValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'FillValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(FillValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class FillOpacity(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber): - """FillOpacity schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similarly to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__.
In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For a discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field, or `a temporal field that gets cast as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``.
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When used with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When used with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When used with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "fillOpacity" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'FillOpacity': - ... - - def bandPosition(self, _: float, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'FillOpacity': - ...
- - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'FillOpacity': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'FillOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'FillOpacity': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'FillOpacity': - ... 
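A sketch of the FillOpacity channel with its generated .scale() and .legend() setters (assuming Altair 5; the domain/range values are illustrative):

import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [0, 1, 2, 3, 4, 5], "weight": [1, 2, 3, 4, 5, 6]})

# The chained setters tune the opacity ramp and legend title without
# constructing alt.Scale/alt.Legend objects by hand.
chart = alt.Chart(df).mark_circle(size=300).encode(
    x="x:O",
    fillOpacity=alt.FillOpacity("weight:Q")
        .scale(domain=[0, 6], range=[0.1, 1.0])
        .legend(title="Weight"),
)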
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(FillOpacity, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, condition=condition, - field=field, legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class FillOpacityDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber): - """FillOpacityDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. 
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "fillOpacity" - - def bandPosition(self, _: float, **kwds) -> 'FillOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'FillOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'FillOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'FillOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'FillOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'FillOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'FillOpacityDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'FillOpacityDatum': - ... 
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(FillOpacityDatum, self).__init__(datum=datum, bandPosition=bandPosition, - condition=condition, title=title, type=type, **kwds) - - -@with_property_setters -class FillOpacityValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber): - """FillOpacityValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(float, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "fillOpacity" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'FillOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'FillOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'FillOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'FillOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'FillOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'FillOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'FillOpacityValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(FillOpacityValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Href(FieldChannelMixin, core.StringFieldDefWithCondition): - """Href schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. 
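FillOpacityValue, just defined above, pairs a constant visual-domain value with an optional condition; in practice that condition is normally assembled with alt.condition rather than by calling the condition() overloads directly. A sketch using the Altair 5 parameter API (selection_point/add_params; Altair 4 spells these selection_single/add_selection), with hypothetical data:

import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [3, 1, 4, 2]})

# Hovered points become fully opaque; everything else stays dimmed.
hover = alt.selection_point(on="mouseover", empty=False)

chart = alt.Chart(df).mark_circle(size=200).encode(
    x="x:Q",
    y="y:Q",
    # alt.condition builds the condition/value pair that
    # FillOpacityValue wraps.
    fillOpacity=alt.condition(hover, alt.value(1.0), alt.value(0.3)),
).add_params(hover)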
- bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefstringExprRef`, List(:class:`ConditionalValueDefstringExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - format : anyOf(string, :class:`Dict`) - When used with the default ``"number"`` and ``"time"`` format type, the text - formatting pattern for labels of guides (axes, legends, headers) and text marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - When used with a `custom formatType - `__, this - value will be passed as ``format`` alongside ``datum.value`` to the registered - function. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : string - The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom - format type - `__. - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nominal fields without - ``timeUnit``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). 
If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. 
The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "href" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Href': - ... - - def bandPosition(self, _: float, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringExprRef], **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: str, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: dict, **kwds) -> 'Href': - ... - - def formatType(self, _: str, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Href': - ... 
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Href': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Href': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Href': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, format=Undefined, formatType=Undefined, - timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Href, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, condition=condition, field=field, format=format, - formatType=formatType, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class HrefValue(ValueChannelMixin, core.StringValueDefWithCondition): - """HrefValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(string, None, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "href" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'HrefValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'HrefValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'HrefValue': - ... 
- - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'HrefValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'HrefValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'HrefValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'HrefValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(HrefValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Key(FieldChannelMixin, core.FieldDefWithoutScale): - """Key schema wrapper - - Mapping(required=[shorthand]) - Definition object for a data field, its type and transformation of an encoding channel. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. 
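With Href and HrefValue now both defined, a minimal sketch of the channel in use: href attaches a hyperlink to each mark, pulled from the data when used as a field channel. The DataFrame and its star counts below are purely illustrative:

import altair as alt
import pandas as pd

df = pd.DataFrame({
    "project": ["altair", "vega-lite"],
    "stars": [8000, 4000],  # illustrative numbers
    "url": [
        "https://github.com/altair-viz/altair",
        "https://github.com/vega/vega-lite",
    ],
})

# Clicking a bar opens the URL stored in that row's 'url' column.
chart = alt.Chart(df).mark_bar().encode(
    x="stars:Q",
    y="project:N",
    href=alt.Href("url:N"),
    tooltip=["project:N", "url:N"],
)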
- - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. 
For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "key" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Key': - ... - - def bandPosition(self, _: float, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Key': - ... 
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Key': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Key': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Key': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Key, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, field=field, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class Latitude(FieldChannelMixin, core.LatLongFieldDef): - """Latitude schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. 
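Key, completed above, is a FieldDefWithoutScale: it never affects the visual encoding. Its purpose is to give each datum a stable identity so Vega-Lite can match old and new rows when the chart's dataset is replaced, for example when streaming updates into a view. A minimal sketch with a hypothetical 'id' column:

import altair as alt
import pandas as pd

df = pd.DataFrame({"id": ["a", "b", "c"], "value": [1, 2, 3]})

# 'key' draws nothing; it only tags each point with a stable identity
# used for data joins across dataset updates.
chart = alt.Chart(df).mark_point().encode(
    x="value:Q",
    key=alt.Key("id:N"),
)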
- - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : string - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). 
The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "latitude" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Latitude': - ... - - def bandPosition(self, _: float, **kwds) -> 'Latitude': - ... - - def bin(self, _: None, **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Latitude': - ... 
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Latitude': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Latitude': - ... - - def type(self, _: str, **kwds) -> 'Latitude': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Latitude, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class LatitudeDatum(DatumChannelMixin, core.DatumDef): - """LatitudeDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. 
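Latitude, completed above, is a LatLongFieldDef: bin is pinned to None and the type is effectively fixed to quantitative, which is why the generated setters omit scale() and axis(); values flow through the chart's projection instead of an x/y scale. A sketch with a few real city coordinates (the dataset itself is hypothetical):

import altair as alt
import pandas as pd

cities = pd.DataFrame({
    "city": ["Berlin", "Lagos", "Lima"],
    "lat": [52.52, 6.52, -12.05],
    "lon": [13.40, 3.38, -77.04],
})

# latitude/longitude are positioned by the projection, not by scales.
chart = alt.Chart(cities).mark_circle(size=80).encode(
    latitude=alt.Latitude("lat:Q"),
    longitude=alt.Longitude("lon:Q"),
    tooltip="city:N",
).project(type="equalEarth")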
However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "latitude" - - def bandPosition(self, _: float, **kwds) -> 'LatitudeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'LatitudeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'LatitudeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'LatitudeDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'LatitudeDatum': - ... 
- - - def __init__(self, datum, bandPosition=Undefined, title=Undefined, type=Undefined, **kwds): - super(LatitudeDatum, self).__init__(datum=datum, bandPosition=bandPosition, title=title, - type=type, **kwds) - - -@with_property_setters -class Latitude2(FieldChannelMixin, core.SecondaryFieldDef): - """Latitude2 schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
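LatitudeDatum, finished just above, encodes a constant in the data domain rather than a field; user code usually reaches it through the alt.datum(...) shorthand, though the generated class can also be instantiated directly. A sketch layering an equator reference line over a city map like the previous example (the 'cities' frame is again hypothetical):

import altair as alt
import pandas as pd

cities = pd.DataFrame({
    "city": ["Berlin", "Lima"],
    "lat": [52.52, -12.05],
    "lon": [13.40, -77.04],
})

points = alt.Chart(cities).mark_circle(size=80).encode(
    latitude="lat:Q",
    longitude="lon:Q",
)

# A dashed rule pinned at latitude 0; alt.datum(0) would build the
# same datum-based channel definition.
equator = alt.Chart(cities).mark_rule(strokeDash=[4, 4]).encode(
    latitude=alt.LatitudeDatum(0),
)

map_chart = (points + equator).project(type="equalEarth")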
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "latitude2" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Latitude2': - ... - - def bandPosition(self, _: float, **kwds) -> 'Latitude2': - ... - - def bin(self, _: None, **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Latitude2': - ... 
- - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Latitude2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Latitude2': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(Latitude2, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, - timeUnit=timeUnit, title=title, **kwds) - - -@with_property_setters -class Latitude2Datum(DatumChannelMixin, core.DatumDef): - """Latitude2Datum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "latitude2" - - def bandPosition(self, _: float, **kwds) -> 'Latitude2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Latitude2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Latitude2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Latitude2Datum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'Latitude2Datum': - ... - - - def __init__(self, datum, bandPosition=Undefined, title=Undefined, type=Undefined, **kwds): - super(Latitude2Datum, self).__init__(datum=datum, bandPosition=bandPosition, title=title, - type=type, **kwds) - - -@with_property_setters -class Latitude2Value(ValueChannelMixin, core.PositionValueDef): - """Latitude2Value schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "latitude2" - - - - def __init__(self, value, **kwds): - super(Latitude2Value, self).__init__(value=value, **kwds) - - -@with_property_setters -class Longitude(FieldChannelMixin, core.LatLongFieldDef): - """Longitude schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). 
- - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : string - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. 
However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "longitude" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Longitude': - ... - - def bandPosition(self, _: float, **kwds) -> 'Longitude': - ... - - def bin(self, _: None, **kwds) -> 'Longitude': - ... 
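- 
- # --- Editor's note: illustrative usage sketch, not part of the generated file. ---
- # Longitude/Latitude place marks geographically; note that, unlike most field
- # channels, they expose no `scale` setter, because position is determined by the
- # chart's projection. A minimal sketch, assuming Altair 5 and a hypothetical
- # `airports` DataFrame with `longitude`/`latitude` columns:
- #
- #     import altair as alt
- #     alt.Chart(airports).mark_circle().encode(
- #         longitude=alt.Longitude('longitude:Q'),
- #         latitude=alt.Latitude('latitude:Q'),
- #     ).project(type='albersUsa')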
- - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Longitude': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Longitude': - ... - - def type(self, _: str, **kwds) -> 'Longitude': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Longitude, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class LongitudeDatum(DatumChannelMixin, core.DatumDef): - """LongitudeDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. 
- datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. 
- * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "longitude" - - def bandPosition(self, _: float, **kwds) -> 'LongitudeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'LongitudeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'LongitudeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'LongitudeDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'LongitudeDatum': - ... - - - def __init__(self, datum, bandPosition=Undefined, title=Undefined, type=Undefined, **kwds): - super(LongitudeDatum, self).__init__(datum=datum, bandPosition=bandPosition, title=title, - type=type, **kwds) - - -@with_property_setters -class Longitude2(FieldChannelMixin, core.SecondaryFieldDef): - """Longitude2 schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. 
- - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "longitude2" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Longitude2': - ... - - def bandPosition(self, _: float, **kwds) -> 'Longitude2': - ... - - def bin(self, _: None, **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Longitude2': - ... 
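- 
- # --- Editor's note: illustrative note, not part of the generated file. ---
- # The same generated timeUnit setters appear on every field channel; they are
- # rarely useful on a geo channel like Longitude2, but on a temporal axis channel
- # they read as in this sketch (assumes Altair 5 and a hypothetical `stocks`
- # DataFrame):
- #
- #     import altair as alt
- #     alt.Chart(stocks).mark_line().encode(
- #         x=alt.X('date:T').timeUnit('yearmonth'),
- #         y='mean(price):Q',
- #     )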
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Longitude2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Longitude2': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(Longitude2, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, - timeUnit=timeUnit, title=title, **kwds) - - -@with_property_setters -class Longitude2Datum(DatumChannelMixin, core.DatumDef): - """Longitude2Datum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. 
- """ - _class_is_valid_at_instantiation = False - _encoding_name = "longitude2" - - def bandPosition(self, _: float, **kwds) -> 'Longitude2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Longitude2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Longitude2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Longitude2Datum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'Longitude2Datum': - ... - - - def __init__(self, datum, bandPosition=Undefined, title=Undefined, type=Undefined, **kwds): - super(Longitude2Datum, self).__init__(datum=datum, bandPosition=bandPosition, title=title, - type=type, **kwds) - - -@with_property_setters -class Longitude2Value(ValueChannelMixin, core.PositionValueDef): - """Longitude2Value schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "longitude2" - - - - def __init__(self, value, **kwds): - super(Longitude2Value, self).__init__(value=value, **kwds) - - -@with_property_setters -class Opacity(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber): - """Opacity schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. 
- field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). 
If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. 
The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "opacity" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Opacity': - ... - - def bandPosition(self, _: float, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Opacity': - ... 
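- 
- # --- Editor's note: illustrative usage sketch, not part of the generated file. ---
- # The @with_property_setters machinery turns each docstring parameter above into
- # a chainable setter returning a new channel instance. A sketch, assuming Altair 5
- # and a hypothetical `cars` DataFrame:
- #
- #     import altair as alt
- #     opacity = alt.Opacity('Acceleration:Q').scale(range=[0.1, 1.0]).legend(None)
- #     alt.Chart(cars).mark_point().encode(
- #         x='Horsepower:Q', y='Miles_per_Gallon:Q', opacity=opacity,
- #     )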
- - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Opacity': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Opacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Opacity': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Opacity': - ... 
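- 
- # --- Editor's note: illustrative usage sketch, not part of the generated file. ---
- # The `condition` parameter is usually built with alt.condition(), which pairs a
- # predicate with two branches. A sketch, assuming Altair 5 and a hypothetical
- # `cars` DataFrame with a numeric `Cylinders` column:
- #
- #     import altair as alt
- #     alt.Chart(cars).mark_point().encode(
- #         x='Horsepower:Q', y='Miles_per_Gallon:Q',
- #         opacity=alt.condition('datum.Cylinders >= 6', alt.value(1.0), alt.value(0.2)),
- #     )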
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Opacity, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, condition=condition, - field=field, legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class OpacityDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber): - """OpacityDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. 
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "opacity" - - def bandPosition(self, _: float, **kwds) -> 'OpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'OpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'OpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'OpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'OpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'OpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'OpacityDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'OpacityDatum': - ... 
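- 
- # --- Editor's note: illustrative sketch, not part of the generated file. ---
- # Datum channels encode a constant in the *data* domain (it is run through the
- # channel's scale), whereas value channels set the *visual* value directly.
- # A sketch, assuming Altair 5 (where these channel classes are re-exported at the
- # top level) and a hypothetical `df` DataFrame:
- #
- #     import altair as alt
- #     alt.Chart(df).mark_point().encode(
- #         opacity=alt.OpacityDatum(5),      # data-domain constant, scaled
- #     )
- #     alt.Chart(df).mark_point().encode(
- #         opacity=alt.OpacityValue(0.3),    # visual-domain constant, used as-is
- #     )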
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(OpacityDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition, - title=title, type=type, **kwds) - - -@with_property_setters -class OpacityValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber): - """OpacityValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(float, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "opacity" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'OpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'OpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'OpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'OpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'OpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'OpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'OpacityValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(OpacityValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Order(FieldChannelMixin, core.OrderFieldDef): - """Order schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. 
- bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - sort : :class:`SortOrder` - The sort order. One of ``"ascending"`` (default) or ``"descending"``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. 
- - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "order" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Order': - ... - - def bandPosition(self, _: float, **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Order': - ... 
- - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Order': - ... - - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Order': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Order': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Order': - ... 
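`Order` does not position marks itself; it controls the stacking order of layered segments and the connection order of points in line and area marks. A sketch of the common use, reversing the stack order of a stacked bar chart (the `vega_datasets` barley dataset is an assumption for illustration):

```python
import altair as alt
from vega_datasets import data  # assumed sample dataset

source = data.barley()

chart = alt.Chart(source).mark_bar().encode(
    x="variety:N",
    y="sum(yield):Q",
    color="site:N",
    # `order` determines how the color segments stack;
    # sort="descending" reverses the default stacking order.
    order=alt.Order("site:N", sort="descending"),
)
```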
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, - **kwds): - super(Order, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, field=field, sort=sort, timeUnit=timeUnit, title=title, - type=type, **kwds) - - -@with_property_setters -class OrderValue(ValueChannelMixin, core.OrderValueDef): - """OrderValue schema wrapper - - Mapping(required=[value]) - - Parameters - ---------- - - value : anyOf(float, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - condition : anyOf(:class:`ConditionalValueDefnumber`, List(:class:`ConditionalValueDefnumber`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "order" - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'OrderValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'OrderValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumber], **kwds) -> 'OrderValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(OrderValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Radius(FieldChannelMixin, core.PositionFieldDefBase): - """Radius schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. 
- - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. ``stack`` is only applicable - for ``x``, ``y``, ``theta``, and ``radius`` channels with continuous domains. For - example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). - * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__ and pie charts - `with percentage tooltip - `__ ). :raw-html:`
      ` - * ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar``, ``area``, or ``arc`` ; (2) the stacked measure channel (x - or y) has a linear scale; (3) At least one of non-position channels mapped to an - unaggregated field that is different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "radius" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Radius': - ... - - def bandPosition(self, _: float, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Radius': - ... 
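`Radius` is only meaningful for `arc` marks, where it encodes the outer radius alongside `theta`; the `scale` property documented above is typically given a `sqrt` type with a non-zero `rangeMin` so that small values remain visible near the center. A sketch with an inline pandas frame standing in for real data:

```python
import altair as alt
import pandas as pd

source = pd.DataFrame({"value": [12, 23, 47, 6, 52, 19]})

chart = alt.Chart(source).mark_arc(innerRadius=20, stroke="#fff").encode(
    # theta carries the angular extent of each slice...
    theta=alt.Theta("value:Q", stack=True),
    # ...and Radius maps the field to the arc's outer radius; rangeMin
    # keeps the smallest slice from collapsing into the center hole.
    radius=alt.Radius("value:Q", scale=alt.Scale(type="sqrt", zero=True, rangeMin=20)),
    color="value:N",
)
```

The secondary `Radius2` channel defined later in this module plays the analogous role for an arc's inner bound and, like the other secondary channels noted in the docstrings, carries no `type` or `scale` of its own.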
- - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: Literal["zero", "center", "normalize"], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: None, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: bool, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Radius': - ... 
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Radius': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Radius': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Radius': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, scale=Undefined, sort=Undefined, stack=Undefined, timeUnit=Undefined, - title=Undefined, type=Undefined, **kwds): - super(Radius, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, scale=scale, - sort=sort, stack=stack, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class RadiusDatum(DatumChannelMixin, core.PositionDatumDefBase): - """RadiusDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. ``stack`` is only applicable - for ``x``, ``y``, ``theta``, and ``radius`` channels with continuous domains. For - example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). 
- * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__ and pie charts - `with percentage tooltip - `__ ). :raw-html:`
      ` - * ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar``, ``area``, or ``arc`` ; (2) the stacked measure channel (x - or y) has a linear scale; (3) At least one of non-position channels mapped to an - unaggregated field that is different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. 
- * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "radius" - - def bandPosition(self, _: float, **kwds) -> 'RadiusDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'RadiusDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'RadiusDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: Literal["zero", "center", "normalize"], **kwds) -> 'RadiusDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: None, **kwds) -> 'RadiusDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: bool, **kwds) -> 'RadiusDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'RadiusDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'RadiusDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'RadiusDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'RadiusDatum': - ... - - - def __init__(self, datum, bandPosition=Undefined, scale=Undefined, stack=Undefined, title=Undefined, - type=Undefined, **kwds): - super(RadiusDatum, self).__init__(datum=datum, bandPosition=bandPosition, scale=scale, - stack=stack, title=title, type=type, **kwds) - - -@with_property_setters -class RadiusValue(ValueChannelMixin, core.PositionValueDef): - """RadiusValue schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). 
- """ - _class_is_valid_at_instantiation = False - _encoding_name = "radius" - - - - def __init__(self, value, **kwds): - super(RadiusValue, self).__init__(value=value, **kwds) - - -@with_property_setters -class Radius2(FieldChannelMixin, core.SecondaryFieldDef): - """Radius2 schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "radius2" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Radius2': - ... - - def bandPosition(self, _: float, **kwds) -> 'Radius2': - ... - - def bin(self, _: None, **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Radius2': - ... 
- - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Radius2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Radius2': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(Radius2, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, - timeUnit=timeUnit, title=title, **kwds) - - -@with_property_setters -class Radius2Datum(DatumChannelMixin, core.DatumDef): - """Radius2Datum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "radius2" - - def bandPosition(self, _: float, **kwds) -> 'Radius2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Radius2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Radius2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Radius2Datum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'Radius2Datum': - ... - - - def __init__(self, datum, bandPosition=Undefined, title=Undefined, type=Undefined, **kwds): - super(Radius2Datum, self).__init__(datum=datum, bandPosition=bandPosition, title=title, - type=type, **kwds) - - -@with_property_setters -class Radius2Value(ValueChannelMixin, core.PositionValueDef): - """Radius2Value schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "radius2" - - - - def __init__(self, value, **kwds): - super(Radius2Value, self).__init__(value=value, **kwds) - - -@with_property_setters -class Row(FieldChannelMixin, core.RowColumnEncodingFieldDef): - """Row schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). 
- - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - align : :class:`LayoutAlign` - The alignment to apply to row/column facet's subplot. The supported string values - are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - **Default value:** ``"all"``. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - center : boolean - Boolean flag indicating if facet's subviews should be centered relative to their - respective rows or columns. - - **Default value:** ``false`` - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - header : anyOf(:class:`Header`, None) - An object defining properties of a facet's header. - sort : anyOf(:class:`SortArray`, :class:`SortOrder`, :class:`EncodingSortField`, None) - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. 
In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - spacing : float - The spacing in pixels between facet's sub-views. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "row" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Row': - ... - - def align(self, _: Literal["all", "each", "none"], **kwds) -> 'Row': - ... - - def bandPosition(self, _: float, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Row': - ... - - def center(self, _: bool, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Row': - ... 
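The Row class above wraps Vega-Lite's row-facet channel. As a minimal usage sketch of the channel this wrapper generates (the DataFrame and field names here are hypothetical, not taken from this file):

import altair as alt
import pandas as pd

df = pd.DataFrame({
    "x": [1, 2, 3, 4, 5, 6],
    "y": [3, 1, 4, 1, 5, 9],
    "group": ["a", "a", "b", "b", "c", "c"],
})

# One vertically stacked subplot per distinct value of "group";
# `spacing` maps to the docstring's spacing property (pixels between sub-views).
chart = alt.Chart(df).mark_point().encode(
    x="x:Q",
    y="y:Q",
    row=alt.Row("group:N", spacing=10),
)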
- - @overload # type: ignore[no-overload-impl] - def header(self, format=Undefined, formatType=Undefined, labelAlign=Undefined, labelAnchor=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOrient=Undefined, labelPadding=Undefined, labels=Undefined, orient=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOrient=Undefined, titlePadding=Undefined, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def header(self, _: None, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Row': - ... - - def spacing(self, _: float, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Row': - ... 
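The sort and header overloads above accept an explicit value array and a Header mapping, respectively. A hedged sketch of both together (the field name is illustrative):

import altair as alt

# Facet rows in an explicit preferred order, with horizontal facet labels.
row = alt.Row(
    "group:N",
    sort=["c", "a", "b"],              # List[str] overload: preferred order
    header=alt.Header(labelAngle=0),   # Header properties from the overload above
)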
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Row': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Row': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Row': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, align=Undefined, - bandPosition=Undefined, bin=Undefined, center=Undefined, field=Undefined, - header=Undefined, sort=Undefined, spacing=Undefined, timeUnit=Undefined, - title=Undefined, type=Undefined, **kwds): - super(Row, self).__init__(shorthand=shorthand, aggregate=aggregate, align=align, - bandPosition=bandPosition, bin=bin, center=center, field=field, - header=header, sort=sort, spacing=spacing, timeUnit=timeUnit, - title=title, type=type, **kwds) - - -@with_property_setters -class Shape(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefTypeForShapestringnull): - """Shape schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. 
- condition : anyOf(:class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. 
- - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`TypeForShape` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. 
- * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "shape" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Shape': - ... - - def bandPosition(self, _: float, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Shape': - ... 
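The condition overloads above let a Shape field definition carry conditional value overrides; in practice alt.condition usually builds the conditional for you. A minimal sketch with a hypothetical boolean field named flag:

import altair as alt
import pandas as pd

df = pd.DataFrame({"x": range(6), "flag": [True, False] * 3})

# Triangles where `flag` is true, circles otherwise (test-predicate form).
chart = alt.Chart(df).mark_point().encode(
    x="x:Q",
    shape=alt.condition(alt.datum.flag, alt.value("triangle-up"), alt.value("circle")),
)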
- - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Shape': - ... 
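The legend and scale overloads above mirror the Legend and Scale schema objects; for the shape channel, the scale range is a list of symbol names. A sketch of a field-based shape encoding with an explicit symbol palette (field name and symbols illustrative):

import altair as alt

shape = alt.Shape(
    "species:N",
    scale=alt.Scale(range=["circle", "square", "cross"]),  # explicit symbol range
    legend=alt.Legend(title="Species", symbolSize=120),
)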
- - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Shape': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Shape': - ... - - def type(self, _: Literal["nominal", "ordinal", "geojson"], **kwds) -> 'Shape': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Shape, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, condition=condition, field=field, legend=legend, - scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class ShapeDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefstringnull): - """ShapeDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. 
For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. 
For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "shape" - - def bandPosition(self, _: float, **kwds) -> 'ShapeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ShapeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ShapeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'ShapeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'ShapeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'ShapeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'ShapeDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'ShapeDatum': - ... - - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(ShapeDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition, - title=title, type=type, **kwds) - - -@with_property_setters -class ShapeValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefTypeForShapestringnull): - """ShapeValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDefTypeForShape`, :class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(string, None, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "shape" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ShapeValue': - ... 
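Because @with_property_setters turns the condition overloads of ShapeValue into chainable setters, a constant shape can be given a test-predicate override. A sketch, assuming the setters behave as in released Altair builds; the datum.outlier expression is hypothetical:

import altair as alt

# Constant "circle" in the visual domain, overridden to "diamond"
# wherever the test predicate holds.
shape = alt.ShapeValue("circle").condition(test="datum.outlier", value="diamond")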
- - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ShapeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ShapeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ShapeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ShapeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ShapeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'ShapeValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(ShapeValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Size(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber): - """Size schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. 
- - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). 
- Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. 
- """ - _class_is_valid_at_instantiation = False - _encoding_name = "size" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Size': - ... - - def bandPosition(self, _: float, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'Size': - ... 
- - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Size': - ... 
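The scale overload above exposes the full Scale schema; for the size channel the range is an area range in square pixels. A hedged sketch (field name hypothetical):

import altair as alt

size = alt.Size(
    "population:Q",
    scale=alt.Scale(range=[0, 400], zero=True),  # anchor the area scale at zero
)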
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Size': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Size': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Size': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Size, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, condition=condition, field=field, legend=legend, - scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class SizeDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber): - """SizeDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. 
- """ - _class_is_valid_at_instantiation = False - _encoding_name = "size" - - def bandPosition(self, _: float, **kwds) -> 'SizeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'SizeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'SizeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'SizeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'SizeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'SizeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'SizeDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'SizeDatum': - ... - - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(SizeDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition, - title=title, type=type, **kwds) - - -@with_property_setters -class SizeValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber): - """SizeValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(float, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "size" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'SizeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'SizeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'SizeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'SizeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'SizeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'SizeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'SizeValue': - ... 
- - - def __init__(self, value, condition=Undefined, **kwds): - super(SizeValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Stroke(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefGradientstringnull): - """Stroke schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. 
- - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. 
- - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "stroke" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Stroke': - ... - - def bandPosition(self, _: float, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Stroke': - ... 
- - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Stroke': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Stroke': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Stroke': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Stroke': - ... 
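The ``Stroke`` field channel defined above behaves like ``Color`` but affects only mark outlines. A minimal sketch, assuming altair and pandas; the data and the ``series`` column are invented for illustration:

    import altair as alt
    import pandas as pd

    df = pd.DataFrame({
        "t": list(range(5)) * 2,
        "v": [1, 2, 3, 2, 1, 2, 3, 4, 3, 2],
        "series": ["a"] * 5 + ["b"] * 5,
    })

    # One stroke color per series, with an explicit legend title.
    chart = alt.Chart(df).mark_line().encode(
        x="t:Q",
        y="v:Q",
        stroke=alt.Stroke("series:N", sort="ascending",
                          legend=alt.Legend(title="Series")),
    )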
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Stroke, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, condition=condition, - field=field, legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class StrokeDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefGradientstringnull): - """StrokeDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. 
- - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "stroke" - - def bandPosition(self, _: float, **kwds) -> 'StrokeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'StrokeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'StrokeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'StrokeDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'StrokeDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'StrokeDatum': - ... 
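``StrokeDatum`` encodes a constant in the *data* domain, so the value is passed through the stroke scale and can appear in the legend; the ``StrokeValue`` wrapper that follows sets a constant in the *visual* domain instead. A minimal sketch of the distinction, assuming altair and pandas with invented data:

    import altair as alt
    import pandas as pd

    df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 6, 5]})

    line = alt.Chart(df).mark_line().encode(x="x:Q", y="y:Q")

    # Data-domain constant: "baseline" is mapped through the stroke scale
    # and legend like any other nominal data value.
    rule = alt.Chart(df).mark_rule().encode(
        y=alt.datum(5),
        stroke=alt.StrokeDatum("baseline"),
    )

    # Visual-domain constant: a literal CSS color that bypasses the scale.
    rule2 = alt.Chart(df).mark_rule().encode(
        y=alt.datum(4.5),
        stroke=alt.StrokeValue("firebrick"),
    )

    chart = line + rule + rule2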
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(StrokeDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition, - title=title, type=type, **kwds) - - -@with_property_setters -class StrokeValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefGradientstringnull): - """StrokeValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(:class:`Gradient`, string, None, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "stroke" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'StrokeValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(StrokeValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class StrokeDash(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumberArray): - """StrokeDash schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. 
- bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefnumberArrayExprRef`, List(:class:`ConditionalValueDefnumberArrayExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. 
In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeDash" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'StrokeDash': - ... - - def bandPosition(self, _: float, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberArrayExprRef], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'StrokeDash': - ... 
- - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'StrokeDash': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'StrokeDash': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'StrokeDash': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'StrokeDash': - ... 
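``strokeDash`` maps a (typically nominal) field to dash patterns, a common way to separate measured from projected series. A minimal sketch, assuming altair and pandas; the ``kind`` column and its sort order are invented:

    import altair as alt
    import pandas as pd

    df = pd.DataFrame({
        "t": list(range(4)) * 2,
        "v": [1, 2, 3, 4, 2, 3, 4, 5],
        "kind": ["actual"] * 4 + ["forecast"] * 4,
    })

    # The explicit sort pins "actual" to the first dash pattern in the scheme.
    chart = alt.Chart(df).mark_line().encode(
        x="t:Q",
        y="v:Q",
        strokeDash=alt.StrokeDash("kind:N", sort=["actual", "forecast"]),
    )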
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(StrokeDash, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, condition=condition, - field=field, legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class StrokeDashDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumberArray): - """StrokeDashDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefnumberArrayExprRef`, List(:class:`ConditionalValueDefnumberArrayExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. 
- - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeDash" - - def bandPosition(self, _: float, **kwds) -> 'StrokeDashDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeDashDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeDashDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberArrayExprRef], **kwds) -> 'StrokeDashDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'StrokeDashDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'StrokeDashDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'StrokeDashDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'StrokeDashDatum': - ... 
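For a fixed dash pattern applied to every mark, the ``StrokeDashValue`` wrapper that follows takes a dash array in the visual domain. A minimal sketch, assuming altair and pandas with invented data:

    import altair as alt
    import pandas as pd

    df = pd.DataFrame({"x": [0, 1, 2, 3], "y": [0, 2, 1, 3]})

    # [4, 2] means 4px dashes separated by 2px gaps, applied uniformly.
    chart = alt.Chart(df).mark_line().encode(
        x="x:Q",
        y="y:Q",
        strokeDash=alt.StrokeDashValue([4, 2]),
    )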
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(StrokeDashDatum, self).__init__(datum=datum, bandPosition=bandPosition, - condition=condition, title=title, type=type, **kwds) - - -@with_property_setters -class StrokeDashValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumberArray): - """StrokeDashValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberArrayExprRef`, List(:class:`ConditionalValueDefnumberArrayExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(List(float), :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeDash" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeDashValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeDashValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeDashValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeDashValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeDashValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeDashValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberArrayExprRef], **kwds) -> 'StrokeDashValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(StrokeDashValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class StrokeOpacity(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber): - """StrokeOpacity schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. 
- bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. 
In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For a discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel are not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field, or `a temporal field that gets cast as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes:** - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or the `fieldTitle - function via the compile function's options - `__. - - 2) If both the field definition's ``title`` and an axis, header, or legend ``title`` are - defined, the axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit``, or (2) you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has an ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` (except ``"argmin"`` and ``"argmax"`` ), (2) the encoding channel is - the ``latitude`` or ``longitude`` channel, or (3) the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``, - or (2) the specified scale type is a time or utc scale. - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeOpacity" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'StrokeOpacity': - ... - - def bandPosition(self, _: float, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'StrokeOpacity': - ... 
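The ``condition`` overloads above accept either a test predicate or a parameter. A hedged sketch of the predicate form, reusing the hypothetical ``df`` and imports from the earlier strokeDash sketch:

```python
# Per-datum predicate: series "a" is drawn fully opaque, everything
# else is faded. alt.condition fills the `condition` property that the
# StrokeOpacity docstring describes.
highlight = alt.Chart(df).mark_line().encode(
    x="x:Q",
    y="y:Q",
    color="series:N",
    strokeOpacity=alt.condition(
        alt.datum.series == "a",  # test predicate
        alt.value(1.0),           # opacity if the test passes
        alt.value(0.25),          # fallback opacity
    ),
)
```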
- - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'StrokeOpacity': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'StrokeOpacity': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'StrokeOpacity': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'StrokeOpacity': - ... 
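The ``empty``/``param`` variants of ``condition`` above pair naturally with a selection parameter. A sketch assuming Altair 5's selection API, again reusing the hypothetical ``df``:

```python
# Clicking a legend entry keeps that series opaque and fades the rest.
sel = alt.selection_point(fields=["series"], bind="legend")

interactive = alt.Chart(df).mark_line().encode(
    x="x:Q",
    y="y:Q",
    color="series:N",
    strokeOpacity=alt.condition(sel, alt.value(1.0), alt.value(0.15)),
).add_params(sel)
```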
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(StrokeOpacity, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, condition=condition, - field=field, legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class StrokeOpacityDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber): - """StrokeOpacityDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. 
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeOpacity" - - def bandPosition(self, _: float, **kwds) -> 'StrokeOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'StrokeOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'StrokeOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'StrokeOpacityDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'StrokeOpacityDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'StrokeOpacityDatum': - ... 
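The ``Datum`` variants encode a constant in *data* space (still routed through a scale) rather than in visual space. A sketch, with the caveat that a datum for an opacity channel is unusual, and that ``alt.YDatum``/``alt.StrokeOpacityDatum`` are simply this module's wrapper classes re-exported at the top level:

```python
# A horizontal rule at y=2; both constants live in the data domain.
threshold = alt.Chart(df).mark_rule().encode(
    y=alt.YDatum(2),                           # constant position in data space
    strokeOpacity=alt.StrokeOpacityDatum(0.8), # constant, but still scale-mapped
)
```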
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(StrokeOpacityDatum, self).__init__(datum=datum, bandPosition=bandPosition, - condition=condition, title=title, type=type, **kwds) - - -@with_property_setters -class StrokeOpacityValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber): - """StrokeOpacityValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(float, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeOpacity" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeOpacityValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'StrokeOpacityValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(StrokeOpacityValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class StrokeWidth(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber): - """StrokeWidth schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. 
- bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. If ``null``, the legend for the - encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. 
In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For a discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel are not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field, or `a temporal field that gets cast as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes:** - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or the `fieldTitle - function via the compile function's options - `__. - - 2) If both the field definition's ``title`` and an axis, header, or legend ``title`` are - defined, the axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit``, or (2) you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has an ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` (except ``"argmin"`` and ``"argmax"`` ), (2) the encoding channel is - the ``latitude`` or ``longitude`` channel, or (3) the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``, - or (2) the specified scale type is a time or utc scale. - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeWidth" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'StrokeWidth': - ... - - def bandPosition(self, _: float, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'StrokeWidth': - ... 
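A sketch of a quantitative field driving ``strokeWidth``, using the ``aggregate`` and ``scale`` parameters documented above to keep the widths in a readable range (the field names are the hypothetical ones from the earlier sketches):

```python
# Each series line is as thick as its largest y value, rescaled to 1-4 px.
weighted = alt.Chart(df).mark_line().encode(
    x="x:Q",
    y="y:Q",
    color="series:N",
    strokeWidth=alt.StrokeWidth(
        "y:Q",
        aggregate="max",
        scale=alt.Scale(range=[1, 4]),
    ),
)
```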
- - @overload # type: ignore[no-overload-impl] - def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def legend(self, _: None, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'StrokeWidth': - ... 
- - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'StrokeWidth': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'StrokeWidth': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'StrokeWidth': - ... 
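These generated overloads are what the ``@with_property_setters`` decorator turns into chainable setter methods, so constructor keywords and method chaining should be interchangeable. A hedged equivalence:

```python
# Both forms should compile to the same Vega-Lite encoding block.
chained = alt.StrokeWidth("y").aggregate("max").type("quantitative")
explicit = alt.StrokeWidth("y", aggregate="max", type="quantitative")
```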
- - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(StrokeWidth, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, condition=condition, - field=field, legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class StrokeWidthDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber): - """StrokeWidthDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. 
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeWidth" - - def bandPosition(self, _: float, **kwds) -> 'StrokeWidthDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeWidthDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeWidthDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'StrokeWidthDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'StrokeWidthDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'StrokeWidthDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'StrokeWidthDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'StrokeWidthDatum': - ... 
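Unlike a field channel's ``condition`` (which may hold values only), the ``Value`` classes below also accept a full field definition in the conditional branch, as their first ``condition`` overloads show. A sketch with an interval selection, reusing the hypothetical ``df``:

```python
# Inside the brushed interval the width encodes y; outside it falls
# back to the constant 1-pixel value.
sel = alt.selection_interval()

cond_width = alt.Chart(df).mark_line().encode(
    x="x:Q",
    y="y:Q",
    strokeWidth=alt.condition(sel, alt.StrokeWidth("y:Q"), alt.value(1)),
).add_params(sel)
```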
- - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined, - type=Undefined, **kwds): - super(StrokeWidthDatum, self).__init__(datum=datum, bandPosition=bandPosition, - condition=condition, title=title, type=type, **kwds) - - -@with_property_setters -class StrokeWidthValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber): - """StrokeWidthValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(float, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "strokeWidth" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeWidthValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeWidthValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeWidthValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'StrokeWidthValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'StrokeWidthValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'StrokeWidthValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'StrokeWidthValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(StrokeWidthValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Text(FieldChannelMixin, core.FieldOrDatumDefWithConditionStringFieldDefText): - """Text schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. 
- bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefTextExprRef`, List(:class:`ConditionalValueDefTextExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - format : anyOf(string, :class:`Dict`) - When used with the default ``"number"`` and ``"time"`` format type, the text - formatting pattern for labels of guides (axes, legends, headers) and text marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - When used with a `custom formatType - `__, this - value will be passed as ``format`` alongside ``datum.value`` to the registered - function. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : string - The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom - format type - `__. - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nominal fields without - ``timeUnit``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). 
If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. 
The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "text" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Text': - ... - - def bandPosition(self, _: float, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefTextExprRef], **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: str, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: dict, **kwds) -> 'Text': - ... - - def formatType(self, _: str, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Text': - ... 
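# A short sketch of the Text field channel documented above: with the default
# "number" formatType, `format` takes a D3 number format pattern. Column names
# are hypothetical.
import altair as alt
import pandas as pd

df = pd.DataFrame({"category": ["a", "b", "c"], "value": [0.1234, 0.5678, 0.9]})

chart = alt.Chart(df).mark_text().encode(
    x="category:N",
    y="value:Q",
    text=alt.Text("value:Q", format=".2f"),  # e.g. 0.1234 -> "0.12"
)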
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Text': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Text': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Text': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, format=Undefined, formatType=Undefined, - timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Text, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, condition=condition, field=field, format=format, - formatType=formatType, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class TextDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionStringDatumDefText): - """TextDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - condition : anyOf(:class:`ConditionalValueDefTextExprRef`, List(:class:`ConditionalValueDefTextExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - format : anyOf(string, :class:`Dict`) - When used with the default ``"number"`` and ``"time"`` format type, the text - formatting pattern for labels of guides (axes, legends, headers) and text marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - When used with a `custom formatType - `__, this - value will be passed as ``format`` alongside ``datum.value`` to the registered - function. 
- - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : string - The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom - format type - `__. - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nominal fields without - ``timeUnit``. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. 
``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "text" - - def bandPosition(self, _: float, **kwds) -> 'TextDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'TextDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'TextDatum': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefTextExprRef], **kwds) -> 'TextDatum': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: str, **kwds) -> 'TextDatum': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: dict, **kwds) -> 'TextDatum': - ... - - def formatType(self, _: str, **kwds) -> 'TextDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'TextDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'TextDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'TextDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'TextDatum': - ... - - - def __init__(self, datum, bandPosition=Undefined, condition=Undefined, format=Undefined, - formatType=Undefined, title=Undefined, type=Undefined, **kwds): - super(TextDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition, - format=format, formatType=formatType, title=title, type=type, - **kwds) - - -@with_property_setters -class TextValue(ValueChannelMixin, core.ValueDefWithConditionStringFieldDefText): - """TextValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalStringFieldDef`, :class:`ConditionalValueDefTextExprRef`, List(:class:`ConditionalValueDefTextExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(:class:`Text`, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "text" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, format=Undefined, formatType=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'TextValue': - ... 
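# TextDatum (completed above) encodes a constant data-domain value instead of a
# field. A hedged sketch that layers a fixed label over every point; the data and
# the label string are hypothetical.
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2], "y": [3, 4]})

points = alt.Chart(df).mark_point().encode(x="x:Q", y="y:Q")
labels = alt.Chart(df).mark_text(dy=-10).encode(
    x="x:Q",
    y="y:Q",
    text=alt.TextDatum("sample"),  # same constant text for every mark
)
chart = points + labels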
- - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, format=Undefined, formatType=Undefined, param=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'TextValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'TextValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'TextValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefTextExprRef], **kwds) -> 'TextValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(TextValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Theta(FieldChannelMixin, core.PositionFieldDefBase): - """Theta schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. 
- - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. ``stack`` is only applicable - for ``x``, ``y``, ``theta``, and ``radius`` channels with continuous domains. For - example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). - * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__ and pie charts - `with percentage tooltip - `__ ). :raw-html:`<br/>` - * ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar``, ``area``, or ``arc`` ; (2) the stacked measure channel (x - or y) has a linear scale; (3) At least one of non-position channels mapped to an - unaggregated field that is different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``.
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "theta" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Theta': - ... - - def bandPosition(self, _: float, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Theta': - ... 
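# The stack documentation above is what makes pie charts work: on an arc mark,
# Theta with stack=True wraps the stacked values around the full circle. A minimal
# sketch with hypothetical data:
import altair as alt
import pandas as pd

df = pd.DataFrame({"category": ["a", "b", "c", "d"], "value": [4, 6, 10, 3]})

pie = alt.Chart(df).mark_arc().encode(
    theta=alt.Theta("value:Q", stack=True),  # stacked angles sum to a full circle
    color="category:N",
)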
- - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: Literal["zero", "center", "normalize"], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: None, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: bool, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Theta': - ... 
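# The timeUnit overloads above accept the literals Vega-Lite defines. A sketch
# using "yearmonth" (shown on the x channel for clarity; columns hypothetical):
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2015-01-15", "2015-02-20", "2015-02-25"]),
    "value": [3, 5, 2],
})

chart = alt.Chart(df).mark_bar().encode(
    x=alt.X("date:T", timeUnit="yearmonth"),  # group timestamps by year + month
    y="sum(value):Q",
)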
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Theta': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Theta': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Theta': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, scale=Undefined, sort=Undefined, stack=Undefined, timeUnit=Undefined, - title=Undefined, type=Undefined, **kwds): - super(Theta, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, field=field, scale=scale, sort=sort, stack=stack, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class ThetaDatum(DatumChannelMixin, core.PositionDatumDefBase): - """ThetaDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. ``stack`` is only applicable - for ``x``, ``y``, ``theta``, and ``radius`` channels with continuous domains. For - example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). 
- * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__ and pie charts - `with percentage tooltip - `__ ). :raw-html:`<br/>` - * ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar``, ``area``, or ``arc`` ; (2) the stacked measure channel (x - or y) has a linear scale; (3) At least one of non-position channels mapped to an - unaggregated field that is different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "theta" - - def bandPosition(self, _: float, **kwds) -> 'ThetaDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'ThetaDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'ThetaDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: Literal["zero", "center", "normalize"], **kwds) -> 'ThetaDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: None, **kwds) -> 'ThetaDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: bool, **kwds) -> 'ThetaDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'ThetaDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'ThetaDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'ThetaDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'ThetaDatum': - ... - - - def __init__(self, datum, bandPosition=Undefined, scale=Undefined, stack=Undefined, title=Undefined, - type=Undefined, **kwds): - super(ThetaDatum, self).__init__(datum=datum, bandPosition=bandPosition, scale=scale, - stack=stack, title=title, type=type, **kwds) - - -@with_property_setters -class ThetaValue(ValueChannelMixin, core.PositionValueDef): - """ThetaValue schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). 
- """ - _class_is_valid_at_instantiation = False - _encoding_name = "theta" - - - - def __init__(self, value, **kwds): - super(ThetaValue, self).__init__(value=value, **kwds) - - -@with_property_setters -class Theta2(FieldChannelMixin, core.SecondaryFieldDef): - """Theta2 schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
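# The bin docs above describe pre-binned input: flag the channel with
# bin="binned" and map bin starts/ends to the primary/secondary channels. A
# sketch with hypothetical pre-aggregated columns:
import altair as alt
import pandas as pd

df = pd.DataFrame({"bin_start": [0, 10, 20], "bin_end": [10, 20, 30], "count": [5, 9, 2]})

hist = alt.Chart(df).mark_bar().encode(
    x=alt.X("bin_start:Q", bin="binned"),  # data arrive already binned
    x2="bin_end",                          # secondary channel: no type, shares x's scale
    y="count:Q",
)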
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "theta2" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Theta2': - ... - - def bandPosition(self, _: float, **kwds) -> 'Theta2': - ... - - def bin(self, _: None, **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Theta2': - ... 
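# Theta2 is the secondary angle channel. A hedged sketch drawing explicit arc
# spans; it assumes that with the scale disabled the raw values are read as
# radians (column names hypothetical):
import math

import altair as alt
import pandas as pd

df = pd.DataFrame({
    "start": [0.0, math.pi / 2],
    "end": [math.pi / 2, math.pi],
    "slice": ["a", "b"],
})

arcs = alt.Chart(df).mark_arc(innerRadius=40).encode(
    theta=alt.Theta("start:Q", scale=None, stack=None),  # raw angle, no stacking
    theta2="end",  # secondary channel shares theta's (disabled) scale
    color="slice:N",
)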
- - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Theta2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Theta2': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(Theta2, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, timeUnit=timeUnit, - title=title, **kwds) - - -@with_property_setters -class Theta2Datum(DatumChannelMixin, core.DatumDef): - """Theta2Datum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "theta2" - - def bandPosition(self, _: float, **kwds) -> 'Theta2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Theta2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Theta2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Theta2Datum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'Theta2Datum': - ... - - - def __init__(self, datum, bandPosition=Undefined, title=Undefined, type=Undefined, **kwds): - super(Theta2Datum, self).__init__(datum=datum, bandPosition=bandPosition, title=title, - type=type, **kwds) - - -@with_property_setters -class Theta2Value(ValueChannelMixin, core.PositionValueDef): - """Theta2Value schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "theta2" - - - - def __init__(self, value, **kwds): - super(Theta2Value, self).__init__(value=value, **kwds) - - -@with_property_setters -class Tooltip(FieldChannelMixin, core.StringFieldDefWithCondition): - """Tooltip schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). 
- - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefstringExprRef`, List(:class:`ConditionalValueDefstringExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - format : anyOf(string, :class:`Dict`) - When used with the default ``"number"`` and ``"time"`` format type, the text - formatting pattern for labels of guides (axes, legends, headers) and text marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - When used with a `custom formatType - `__, this - value will be passed as ``format`` alongside ``datum.value`` to the registered - function. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : string - The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom - format type - `__. - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nominal fields without - ``timeUnit``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. 
- - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. 
- * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "tooltip" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Tooltip': - ... - - def bandPosition(self, _: float, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringExprRef], **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: str, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: dict, **kwds) -> 'Tooltip': - ... - - def formatType(self, _: str, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Tooltip': - ... 
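-
-    # Usage sketch (illustrative, not part of the generated schema code): the
-    # ``Tooltip`` channel is normally reached through ``alt.Tooltip`` with a
-    # field shorthand plus the ``format``/``title``/``timeUnit`` setters
-    # documented above. Assumes a pandas DataFrame ``df`` with ``date`` and
-    # ``price`` columns:
-    #
-    #     import altair as alt
-    #
-    #     alt.Chart(df).mark_line().encode(
-    #         x="date:T",
-    #         y="price:Q",
-    #         tooltip=[
-    #             alt.Tooltip("date:T", timeUnit="yearmonth", title="Month"),
-    #             alt.Tooltip("price:Q", format="$.2f"),
-    #         ],
-    #     )
-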
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Tooltip': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Tooltip': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Tooltip': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, format=Undefined, formatType=Undefined, - timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Tooltip, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, condition=condition, - field=field, format=format, formatType=formatType, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class TooltipValue(ValueChannelMixin, core.StringValueDefWithCondition): - """TooltipValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(string, None, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). 
- """ - _class_is_valid_at_instantiation = False - _encoding_name = "tooltip" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'TooltipValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'TooltipValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'TooltipValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'TooltipValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'TooltipValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'TooltipValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'TooltipValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(TooltipValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class Url(FieldChannelMixin, core.StringFieldDefWithCondition): - """Url schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDefstringExprRef`, List(:class:`ConditionalValueDefstringExprRef`)) - One or more value definition(s) with `a parameter or a test predicate - `__. 
- - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - format : anyOf(string, :class:`Dict`) - When used with the default ``"number"`` and ``"time"`` format type, the text - formatting pattern for labels of guides (axes, legends, headers) and text marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - When used with a `custom formatType - `__, this - value will be passed as ``format`` alongside ``datum.value`` to the registered - function. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : string - The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom - format type - `__. - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nominal fields without - ``timeUnit``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. 
However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "url" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Url': - ... - - def bandPosition(self, _: float, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Url': - ... 
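-
-    # Usage sketch (illustrative, not generated): ``Url`` is typically paired
-    # with the ``image`` mark, mapping a field of image URLs to each point.
-    # Assumes ``df`` has ``x``, ``y`` and ``img`` (URL string) columns:
-    #
-    #     import altair as alt
-    #
-    #     alt.Chart(df).mark_image(width=40, height=40).encode(
-    #         x="x:Q",
-    #         y="y:Q",
-    #         url="img:N",
-    #     )
-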
- - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringExprRef], **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: str, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def format(self, _: dict, **kwds) -> 'Url': - ... - - def formatType(self, _: str, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Url': - ... 
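-
-    # Usage sketch for the ``condition`` setter above (field and file names
-    # are hypothetical): a test predicate can switch the encoded URL per
-    # datum, mirroring the ``anyOf(ConditionalValueDefstringExprRef, ...)``
-    # shape in the schema:
-    #
-    #     import altair as alt
-    #
-    #     alt.Chart(df).mark_image(width=40, height=40).encode(
-    #         x="x:Q",
-    #         y="y:Q",
-    #         url=alt.condition(
-    #             "datum.flagged", alt.value("flag.png"), alt.value("dot.png")
-    #         ),
-    #     )
-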
- - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Url': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Url': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Url': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - condition=Undefined, field=Undefined, format=Undefined, formatType=Undefined, - timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Url, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, condition=condition, field=field, format=format, - formatType=formatType, timeUnit=timeUnit, title=title, type=type, - **kwds) - - -@with_property_setters -class UrlValue(ValueChannelMixin, core.StringValueDefWithCondition): - """UrlValue schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`)) - A field definition or one or more value definition(s) with a parameter predicate. - value : anyOf(string, None, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "url" - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'UrlValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'UrlValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'UrlValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'UrlValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, test=Undefined, value=Undefined, **kwds) -> 'UrlValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'UrlValue': - ... - - @overload # type: ignore[no-overload-impl] - def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'UrlValue': - ... - - - def __init__(self, value, condition=Undefined, **kwds): - super(UrlValue, self).__init__(value=value, condition=condition, **kwds) - - -@with_property_setters -class X(FieldChannelMixin, core.PositionFieldDef): - """X schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). 
- - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - axis : anyOf(:class:`Axis`, None) - An object defining properties of axis's gridlines, ticks and labels. If ``null``, - the axis for the encoding channel will be removed. - - **Default value:** If undefined, default `axis properties - `__ are applied. - - **See also:** `axis `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - impute : anyOf(:class:`ImputeParams`, None) - An object defining the properties of the Impute Operation to be applied. The field - value of the other positional channel is taken as ``key`` of the ``Impute`` - Operation. The field of the ``color`` channel if specified is used as ``groupby`` of - the ``Impute`` Operation. - - **See also:** `impute `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). 
This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. ``stack`` is only applicable - for ``x``, ``y``, ``theta``, and ``radius`` channels with continuous domains. For - example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). - * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__ and pie charts - `with percentage tooltip - `__ ). :raw-html:`
      ` - * ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar``, ``area``, or ``arc`` ; (2) the stacked measure channel (x - or y) has a linear scale; (3) At least one of non-position channels mapped to an - unaggregated field that is different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "x" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'X': - ... 
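-
-    # Usage sketch for the ``aggregate`` overloads above: the shorthand
-    # string and the explicit keyword form produce the same encoding
-    # (``df`` assumed to have ``category`` and ``price`` columns):
-    #
-    #     import altair as alt
-    #
-    #     alt.Chart(df).mark_bar().encode(x="mean(price):Q", y="category:N")
-    #     # ...is equivalent to:
-    #     alt.Chart(df).mark_bar().encode(
-    #         x=alt.X("price:Q", aggregate="mean"),
-    #         y="category:N",
-    #     )
-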
- - @overload # type: ignore[no-overload-impl] - def axis(self, aria=Undefined, bandPosition=Undefined, description=Undefined, domain=Undefined, domainCap=Undefined, domainColor=Undefined, domainDash=Undefined, domainDashOffset=Undefined, domainOpacity=Undefined, domainWidth=Undefined, format=Undefined, formatType=Undefined, grid=Undefined, gridCap=Undefined, gridColor=Undefined, gridDash=Undefined, gridDashOffset=Undefined, gridOpacity=Undefined, gridWidth=Undefined, labelAlign=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelBound=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFlush=Undefined, labelFlushOffset=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, labels=Undefined, maxExtent=Undefined, minExtent=Undefined, offset=Undefined, orient=Undefined, position=Undefined, style=Undefined, tickBand=Undefined, tickCap=Undefined, tickColor=Undefined, tickCount=Undefined, tickDash=Undefined, tickDashOffset=Undefined, tickExtra=Undefined, tickMinStep=Undefined, tickOffset=Undefined, tickOpacity=Undefined, tickRound=Undefined, tickSize=Undefined, tickWidth=Undefined, ticks=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titlePadding=Undefined, titleX=Undefined, titleY=Undefined, translate=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def axis(self, _: None, **kwds) -> 'X': - ... - - def bandPosition(self, _: float, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def impute(self, frame=Undefined, keyvals=Undefined, method=Undefined, value=Undefined, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def impute(self, _: None, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'X': - ... 
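-
-    # Usage sketch for the ``axis``/``scale`` setters above (columns assumed):
-    # passing ``None`` removes the axis or disables the scale, while an
-    # object configures it:
-    #
-    #     import altair as alt
-    #
-    #     alt.Chart(df).mark_point().encode(
-    #         x=alt.X(
-    #             "price:Q",
-    #             scale=alt.Scale(type="log", domain=[1, 1000]),
-    #             axis=alt.Axis(title="Price (log scale)", grid=False),
-    #         ),
-    #         y="rating:Q",
-    #     )
-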
- - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: Literal["zero", "center", "normalize"], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: None, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: bool, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'X': - ... 
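-
-    # Usage sketch for the ``sort`` overloads above: ``"-y"`` orders the x
-    # categories by descending y value, one of the encoding-channel string
-    # forms listed in the ``Literal`` signatures (``df`` columns assumed):
-    #
-    #     import altair as alt
-    #
-    #     alt.Chart(df).mark_bar().encode(
-    #         x=alt.X("category:N", sort="-y"),
-    #         y="mean(price):Q",
-    #     )
-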
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'X': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'X': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'X': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, axis=Undefined, bandPosition=Undefined, - bin=Undefined, field=Undefined, impute=Undefined, scale=Undefined, sort=Undefined, - stack=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(X, self).__init__(shorthand=shorthand, aggregate=aggregate, axis=axis, - bandPosition=bandPosition, bin=bin, field=field, impute=impute, - scale=scale, sort=sort, stack=stack, timeUnit=timeUnit, title=title, - type=type, **kwds) - - -@with_property_setters -class XDatum(DatumChannelMixin, core.PositionDatumDef): - """XDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - axis : anyOf(:class:`Axis`, None) - An object defining properties of axis's gridlines, ticks and labels. If ``null``, - the axis for the encoding channel will be removed. - - **Default value:** If undefined, default `axis properties - `__ are applied. - - **See also:** `axis `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - impute : anyOf(:class:`ImputeParams`, None) - An object defining the properties of the Impute Operation to be applied. The field - value of the other positional channel is taken as ``key`` of the ``Impute`` - Operation. The field of the ``color`` channel if specified is used as ``groupby`` of - the ``Impute`` Operation. - - **See also:** `impute `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. 
- stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. ``stack`` is only applicable - for ``x``, ``y``, ``theta``, and ``radius`` channels with continuous domains. For - example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). - * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__ and pie charts - `with percentage tooltip - `__ ). :raw-html:`
      ` - * ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar``, ``area``, or ``arc`` ; (2) the stacked measure channel (x - or y) has a linear scale; (3) At least one of non-position channels mapped to an - unaggregated field that is different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. 
- * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "x" - - @overload # type: ignore[no-overload-impl] - def axis(self, aria=Undefined, bandPosition=Undefined, description=Undefined, domain=Undefined, domainCap=Undefined, domainColor=Undefined, domainDash=Undefined, domainDashOffset=Undefined, domainOpacity=Undefined, domainWidth=Undefined, format=Undefined, formatType=Undefined, grid=Undefined, gridCap=Undefined, gridColor=Undefined, gridDash=Undefined, gridDashOffset=Undefined, gridOpacity=Undefined, gridWidth=Undefined, labelAlign=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelBound=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFlush=Undefined, labelFlushOffset=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, labels=Undefined, maxExtent=Undefined, minExtent=Undefined, offset=Undefined, orient=Undefined, position=Undefined, style=Undefined, tickBand=Undefined, tickCap=Undefined, tickColor=Undefined, tickCount=Undefined, tickDash=Undefined, tickDashOffset=Undefined, tickExtra=Undefined, tickMinStep=Undefined, tickOffset=Undefined, tickOpacity=Undefined, tickRound=Undefined, tickSize=Undefined, tickWidth=Undefined, ticks=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titlePadding=Undefined, titleX=Undefined, titleY=Undefined, translate=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def axis(self, _: None, **kwds) -> 'XDatum': - ... - - def bandPosition(self, _: float, **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def impute(self, frame=Undefined, keyvals=Undefined, method=Undefined, value=Undefined, **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def impute(self, _: None, **kwds) -> 'XDatum': - ... 
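-
-    # Usage sketch (illustrative): an ``XDatum`` is usually created through
-    # ``alt.datum``, which encodes a constant in the data domain, e.g. a
-    # vertical rule at x = 4.5 layered over a scatter plot (``df`` assumed):
-    #
-    #     import altair as alt
-    #
-    #     points = alt.Chart(df).mark_point().encode(x="x:Q", y="y:Q")
-    #     rule = alt.Chart(df).mark_rule(color="red").encode(x=alt.datum(4.5))
-    #     points + rule
-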
- - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: Literal["zero", "center", "normalize"], **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: None, **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: bool, **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'XDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'XDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'XDatum': - ... - - - def __init__(self, datum, axis=Undefined, bandPosition=Undefined, impute=Undefined, scale=Undefined, - stack=Undefined, title=Undefined, type=Undefined, **kwds): - super(XDatum, self).__init__(datum=datum, axis=axis, bandPosition=bandPosition, impute=impute, - scale=scale, stack=stack, title=title, type=type, **kwds) - - -@with_property_setters -class XValue(ValueChannelMixin, core.PositionValueDef): - """XValue schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "x" - - - - def __init__(self, value, **kwds): - super(XValue, self).__init__(value=value, **kwds) - - -@with_property_setters -class X2(FieldChannelMixin, core.SecondaryFieldDef): - """X2 schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). 
- - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "x2" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'X2': - ... - - def bandPosition(self, _: float, **kwds) -> 'X2': - ... - - def bin(self, _: None, **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'X2': - ... 
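Since `X2` shares its scale with `x` and carries no independent type, a ranged mark is the canonical use. A hedged sketch, with invented column names (`task`, `start`, `end`) and assuming Altair 5:

```python
# Illustrative only: X2 as the end of a ranged bar (Gantt-style).
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "task": ["A", "B", "C"],      # invented example columns
    "start": [1, 3, 2],
    "end": [4, 7, 5],
})

gantt = alt.Chart(df).mark_bar().encode(
    x=alt.X("start:Q"),
    x2=alt.X2("end"),   # no type annotation: X2 inherits x's type and scale
    y=alt.Y("task:N"),
)
```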
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'X2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'X2': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(X2, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, field=field, timeUnit=timeUnit, title=title, **kwds) - - -@with_property_setters -class X2Datum(DatumChannelMixin, core.DatumDef): - """X2Datum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). 
If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. 
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "x2" - - def bandPosition(self, _: float, **kwds) -> 'X2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'X2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'X2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'X2Datum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'X2Datum': - ... - - - def __init__(self, datum, bandPosition=Undefined, title=Undefined, type=Undefined, **kwds): - super(X2Datum, self).__init__(datum=datum, bandPosition=bandPosition, title=title, type=type, - **kwds) - - -@with_property_setters -class X2Value(ValueChannelMixin, core.PositionValueDef): - """X2Value schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "x2" - - - - def __init__(self, value, **kwds): - super(X2Value, self).__init__(value=value, **kwds) - - -@with_property_setters -class XError(FieldChannelMixin, core.SecondaryFieldDef): - """XError schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. 
- - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "xError" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'XError': - ... - - def bandPosition(self, _: float, **kwds) -> 'XError': - ... - - def bin(self, _: None, **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'XError': - ... 
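As a concrete (hypothetical) illustration of the `xError` channel documented above: precomputed symmetric error bars around a measured value. Column names are made up, and Altair 5 is assumed.

```python
# Sketch: symmetric error bars from a precomputed half-width column.
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "b", "c"],
    "measure": [10.0, 12.5, 9.0],
    "sem": [0.8, 1.1, 0.6],       # invented standard-error column
})

error_bars = alt.Chart(df).mark_errorbar().encode(
    x=alt.X("measure:Q"),
    xError=alt.XError("sem"),     # extent relative to x, on x's scale
    y=alt.Y("group:N"),
)
```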
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'XError': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'XError': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(XError, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, timeUnit=timeUnit, - title=title, **kwds) - - -@with_property_setters -class XErrorValue(ValueChannelMixin, core.ValueDefnumber): - """XErrorValue schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : float - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "xError" - - - - def __init__(self, value, **kwds): - super(XErrorValue, self).__init__(value=value, **kwds) - - -@with_property_setters -class XError2(FieldChannelMixin, core.SecondaryFieldDef): - """XError2 schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). 
- - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "xError2" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'XError2': - ... 
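When the error is asymmetric, `xError2` supplies the second offset alongside `xError`: `x + xError` gives one end of the bar and `x + xError2` the other. A hedged sketch with made-up columns (the negative lower offsets are an assumption about the data, not part of the source):

```python
# Sketch: asymmetric error bars; x + xError is one end, x + xError2 the other.
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "b"],
    "center": [10.0, 12.0],
    "upper_off": [1.5, 0.9],      # invented positive offsets
    "lower_off": [-0.7, -1.2],    # invented negative offsets
})

chart = alt.Chart(df).mark_errorbar().encode(
    x=alt.X("center:Q"),
    xError=alt.XError("upper_off"),
    xError2=alt.XError2("lower_off"),
    y=alt.Y("group:N"),
)
```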
- - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'XError2': - ... - - def bandPosition(self, _: float, **kwds) -> 'XError2': - ... - - def bin(self, _: None, **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'XError2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'XError2': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(XError2, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, - timeUnit=timeUnit, title=title, **kwds) - - -@with_property_setters -class XError2Value(ValueChannelMixin, core.ValueDefnumber): - """XError2Value schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. 
- - Parameters - ---------- - - value : float - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "xError2" - - - - def __init__(self, value, **kwds): - super(XError2Value, self).__init__(value=value, **kwds) - - -@with_property_setters -class XOffset(FieldChannelMixin, core.ScaleFieldDef): - """XOffset schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. 
- * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. 
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "xOffset" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'XOffset': - ... - - def bandPosition(self, _: float, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'XOffset': - ... 
- - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'XOffset': - ... 
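These generated `@overload` stubs back Altair 5's method-chaining property setters (installed by `@with_property_setters`), and any channel class in this file can be configured the same way. A hedged example of the `timeUnit` and `aggregate` setters, shown on `X`/`Y` rather than `XOffset` purely for illustration:

```python
# Sketch of the chained property-setter API produced by @with_property_setters.
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "t": pd.date_range("2015-01-01", periods=12, freq="MS"),
    "v": range(12),
})

chart = alt.Chart(df).mark_line().encode(
    x=alt.X("t:T").timeUnit("month"),   # Literal overload: a time-unit string
    y=alt.Y("v:Q").aggregate("mean"),   # setters return a modified copy
)
```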
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'XOffset': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'XOffset': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'XOffset': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, - type=Undefined, **kwds): - super(XOffset, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, scale=scale, - sort=sort, timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class XOffsetDatum(DatumChannelMixin, core.ScaleDatumDef): - """XOffsetDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. 
- """ - _class_is_valid_at_instantiation = False - _encoding_name = "xOffset" - - def bandPosition(self, _: float, **kwds) -> 'XOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'XOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'XOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'XOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'XOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'XOffsetDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'XOffsetDatum': - ... - - - def __init__(self, datum, bandPosition=Undefined, scale=Undefined, title=Undefined, type=Undefined, - **kwds): - super(XOffsetDatum, self).__init__(datum=datum, bandPosition=bandPosition, scale=scale, - title=title, type=type, **kwds) - - -@with_property_setters -class XOffsetValue(ValueChannelMixin, core.ValueDefnumber): - """XOffsetValue schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : float - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "xOffset" - - - - def __init__(self, value, **kwds): - super(XOffsetValue, self).__init__(value=value, **kwds) - - -@with_property_setters -class Y(FieldChannelMixin, core.PositionFieldDef): - """Y schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - axis : anyOf(:class:`Axis`, None) - An object defining properties of axis's gridlines, ticks and labels. If ``null``, - the axis for the encoding channel will be removed. - - **Default value:** If undefined, default `axis properties - `__ are applied. - - **See also:** `axis `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, string, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. 
- - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - impute : anyOf(:class:`ImputeParams`, None) - An object defining the properties of the Impute Operation to be applied. The field - value of the other positional channel is taken as ``key`` of the ``Impute`` - Operation. The field of the ``color`` channel if specified is used as ``groupby`` of - the ``Impute`` Operation. - - **See also:** `impute `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. - * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. 
- stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. ``stack`` is only applicable - for ``x``, ``y``, ``theta``, and ``radius`` channels with continuous domains. For - example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). - * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__ and pie charts - `with percentage tooltip - `__ ). :raw-html:`
<br/>
      ` - * ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar``, ``area``, or ``arc`` ; (2) the stacked measure channel (x - or y) has a linear scale; (3) At least one of non-position channels mapped to an - unaggregated field that is different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. 
- - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "y" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Y': - ... 
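The ``Y`` class above is the generated wrapper behind Altair's public ``alt.Y`` channel. As a minimal sketch of typical use (assuming Altair 5 with pandas; the DataFrame and column names are hypothetical), the shorthand string can carry field, aggregate, and type in one expression, or they can be spelled out as keywords:

import altair as alt
import pandas as pd

# Hypothetical data: two categories with repeated measurements.
df = pd.DataFrame({"category": ["A", "A", "B", "B"], "value": [1, 2, 3, 4]})

# "mean(value):Q" would be shorthand for the keyword form used below:
# field="value", aggregate="mean", type="quantitative".
chart = alt.Chart(df).mark_bar().encode(
    x=alt.X("category:N"),
    y=alt.Y("value", aggregate="mean", type="quantitative"),
)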
- - @overload # type: ignore[no-overload-impl] - def axis(self, aria=Undefined, bandPosition=Undefined, description=Undefined, domain=Undefined, domainCap=Undefined, domainColor=Undefined, domainDash=Undefined, domainDashOffset=Undefined, domainOpacity=Undefined, domainWidth=Undefined, format=Undefined, formatType=Undefined, grid=Undefined, gridCap=Undefined, gridColor=Undefined, gridDash=Undefined, gridDashOffset=Undefined, gridOpacity=Undefined, gridWidth=Undefined, labelAlign=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelBound=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFlush=Undefined, labelFlushOffset=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, labels=Undefined, maxExtent=Undefined, minExtent=Undefined, offset=Undefined, orient=Undefined, position=Undefined, style=Undefined, tickBand=Undefined, tickCap=Undefined, tickColor=Undefined, tickCount=Undefined, tickDash=Undefined, tickDashOffset=Undefined, tickExtra=Undefined, tickMinStep=Undefined, tickOffset=Undefined, tickOpacity=Undefined, tickRound=Undefined, tickSize=Undefined, tickWidth=Undefined, ticks=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titlePadding=Undefined, titleX=Undefined, titleY=Undefined, translate=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def axis(self, _: None, **kwds) -> 'Y': - ... - - def bandPosition(self, _: float, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: str, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def impute(self, frame=Undefined, keyvals=Undefined, method=Undefined, value=Undefined, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def impute(self, _: None, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Y': - ... 
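Because the class is decorated with ``@with_property_setters``, every overload above also works as a chainable setter on an existing channel. A small sketch (same assumptions, hypothetical data) using ``scale`` and ``axis`` that way; passing ``None`` to ``.axis()`` would remove the axis entirely:

import altair as alt
import pandas as pd

df = pd.DataFrame({"category": ["A", "B"], "value": [10, 1000]})

# Each setter returns the channel with that property applied, so the
# calls chain; keyword arguments mirror the overload signatures above.
y = alt.Y("value:Q").scale(type="log").axis(title="Value (log scale)")
chart = alt.Chart(df).mark_bar().encode(x="category:N", y=y)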
- - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: Literal["zero", "center", "normalize"], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: None, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: bool, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Y': - ... 
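The ``sort`` and ``stack`` overloads above accept several shapes: order literals, channel names, field definitions, or ``None``. A sketch of two common cases (hypothetical data): sorting a nominal y axis by the x values, and normalizing a stacked bar:

import altair as alt
import pandas as pd

df = pd.DataFrame({
    "category": ["A", "A", "B", "B"],
    "group": ["g1", "g2", "g1", "g2"],
    "value": [1, 2, 3, 4],
})

# sort="-x" orders the categories by the summed x values, descending.
bars = alt.Chart(df).mark_bar().encode(
    y=alt.Y("category:N", sort="-x"),
    x=alt.X("sum(value):Q"),
)

# stack="normalize" rescales each stack to [0, 1] for percent-of-total bars.
normalized = alt.Chart(df).mark_bar().encode(
    x=alt.X("category:N"),
    y=alt.Y("sum(value):Q", stack="normalize"),
    color="group:N",
)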
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Y': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Y': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Y': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, axis=Undefined, bandPosition=Undefined, - bin=Undefined, field=Undefined, impute=Undefined, scale=Undefined, sort=Undefined, - stack=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds): - super(Y, self).__init__(shorthand=shorthand, aggregate=aggregate, axis=axis, - bandPosition=bandPosition, bin=bin, field=field, impute=impute, - scale=scale, sort=sort, stack=stack, timeUnit=timeUnit, title=title, - type=type, **kwds) - - -@with_property_setters -class YDatum(DatumChannelMixin, core.PositionDatumDef): - """YDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - axis : anyOf(:class:`Axis`, None) - An object defining properties of axis's gridlines, ticks and labels. If ``null``, - the axis for the encoding channel will be removed. - - **Default value:** If undefined, default `axis properties - `__ are applied. - - **See also:** `axis `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - impute : anyOf(:class:`ImputeParams`, None) - An object defining the properties of the Impute Operation to be applied. The field - value of the other positional channel is taken as ``key`` of the ``Impute`` - Operation. The field of the ``color`` channel if specified is used as ``groupby`` of - the ``Impute`` Operation. - - **See also:** `impute `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. 
- stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. ``stack`` is only applicable - for ``x``, ``y``, ``theta``, and ``radius`` channels with continuous domains. For - example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). - * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__ and pie charts - `with percentage tooltip - `__ ). :raw-html:`
<br/>
      ` - * ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar``, ``area``, or ``arc`` ; (2) the stacked measure channel (x - or y) has a linear scale; (3) At least one of non-position channels mapped to an - unaggregated field that is different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. 
- * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "y" - - @overload # type: ignore[no-overload-impl] - def axis(self, aria=Undefined, bandPosition=Undefined, description=Undefined, domain=Undefined, domainCap=Undefined, domainColor=Undefined, domainDash=Undefined, domainDashOffset=Undefined, domainOpacity=Undefined, domainWidth=Undefined, format=Undefined, formatType=Undefined, grid=Undefined, gridCap=Undefined, gridColor=Undefined, gridDash=Undefined, gridDashOffset=Undefined, gridOpacity=Undefined, gridWidth=Undefined, labelAlign=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelBound=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFlush=Undefined, labelFlushOffset=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, labels=Undefined, maxExtent=Undefined, minExtent=Undefined, offset=Undefined, orient=Undefined, position=Undefined, style=Undefined, tickBand=Undefined, tickCap=Undefined, tickColor=Undefined, tickCount=Undefined, tickDash=Undefined, tickDashOffset=Undefined, tickExtra=Undefined, tickMinStep=Undefined, tickOffset=Undefined, tickOpacity=Undefined, tickRound=Undefined, tickSize=Undefined, tickWidth=Undefined, ticks=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titlePadding=Undefined, titleX=Undefined, titleY=Undefined, translate=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def axis(self, _: None, **kwds) -> 'YDatum': - ... - - def bandPosition(self, _: float, **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def impute(self, frame=Undefined, keyvals=Undefined, method=Undefined, value=Undefined, **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def impute(self, _: None, **kwds) -> 'YDatum': - ... 
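``YDatum`` encodes a constant in the data domain rather than a field, which is what ``alt.datum`` resolves to on the y channel. A sketch (hypothetical data) of the usual application, a reference rule layered over a scatter plot:

import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [3, 7, 5, 9]})

points = alt.Chart(df).mark_point().encode(x="x:Q", y="y:Q")

# The rule sits at y == 6 in data coordinates, so it stays correct if
# the y scale's domain changes.
rule = alt.Chart(df).mark_rule(color="red").encode(y=alt.datum(6))

chart = points + rule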
- - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: Literal["zero", "center", "normalize"], **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: None, **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def stack(self, _: bool, **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'YDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'YDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'YDatum': - ... - - - def __init__(self, datum, axis=Undefined, bandPosition=Undefined, impute=Undefined, scale=Undefined, - stack=Undefined, title=Undefined, type=Undefined, **kwds): - super(YDatum, self).__init__(datum=datum, axis=axis, bandPosition=bandPosition, impute=impute, - scale=scale, stack=stack, title=title, type=type, **kwds) - - -@with_property_setters -class YValue(ValueChannelMixin, core.PositionValueDef): - """YValue schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "y" - - - - def __init__(self, value, **kwds): - super(YValue, self).__init__(value=value, **kwds) - - -@with_property_setters -class Y2(FieldChannelMixin, core.SecondaryFieldDef): - """Y2 schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). 
- - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "y2" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'Y2': - ... - - def bandPosition(self, _: float, **kwds) -> 'Y2': - ... - - def bin(self, _: None, **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Y2': - ... 
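``Y2`` is a secondary channel: it carries no scale or axis of its own and pairs with ``y`` to give ranged marks their second extent, which is why its docstring above lists only ``bin : None``. A sketch (hypothetical data) of a ranged bar spanning a low/high interval per category:

import altair as alt
import pandas as pd

df = pd.DataFrame({
    "category": ["A", "B", "C"],
    "low": [1, 3, 2],
    "high": [4, 6, 5],
})

# y marks the start of each bar and y2 the end; y2 reuses y's scale.
chart = alt.Chart(df).mark_bar().encode(
    x=alt.X("category:N"),
    y=alt.Y("low:Q"),
    y2=alt.Y2("high"),
)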
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Y2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Y2': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(Y2, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition, - bin=bin, field=field, timeUnit=timeUnit, title=title, **kwds) - - -@with_property_setters -class Y2Datum(DatumChannelMixin, core.DatumDef): - """Y2Datum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). 
If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. 
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "y2" - - def bandPosition(self, _: float, **kwds) -> 'Y2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'Y2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'Y2Datum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'Y2Datum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'Y2Datum': - ... - - - def __init__(self, datum, bandPosition=Undefined, title=Undefined, type=Undefined, **kwds): - super(Y2Datum, self).__init__(datum=datum, bandPosition=bandPosition, title=title, type=type, - **kwds) - - -@with_property_setters -class Y2Value(ValueChannelMixin, core.PositionValueDef): - """Y2Value schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : anyOf(float, string, string, :class:`ExprRef`) - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "y2" - - - - def __init__(self, value, **kwds): - super(Y2Value, self).__init__(value=value, **kwds) - - -@with_property_setters -class YError(FieldChannelMixin, core.SecondaryFieldDef): - """YError schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. 
- - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "yError" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'YError': - ... - - def bandPosition(self, _: float, **kwds) -> 'YError': - ... - - def bin(self, _: None, **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'YError': - ... 
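``YError`` supplies a precomputed error extent around ``y`` for error-bar and error-band marks. A sketch (hypothetical data with a symmetric ``error`` column, assuming the usual Vega-Lite semantics that a lone ``yError`` is applied in both directions):

import altair as alt
import pandas as pd

df = pd.DataFrame({
    "category": ["A", "B", "C"],
    "mean": [4.0, 6.5, 5.2],
    "error": [0.5, 1.0, 0.7],
})

# With only yError given, each bar spans y - error .. y + error.
error_bars = alt.Chart(df).mark_errorbar().encode(
    x=alt.X("category:N"),
    y=alt.Y("mean:Q"),
    yError=alt.YError("error"),
)
points = alt.Chart(df).mark_point().encode(x="category:N", y="mean:Q")
chart = error_bars + points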
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'YError': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'YError': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(YError, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, timeUnit=timeUnit, - title=title, **kwds) - - -@with_property_setters -class YErrorValue(ValueChannelMixin, core.ValueDefnumber): - """YErrorValue schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : float - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "yError" - - - - def __init__(self, value, **kwds): - super(YErrorValue, self).__init__(value=value, **kwds) - - -@with_property_setters -class YError2(FieldChannelMixin, core.SecondaryFieldDef): - """YError2 schema wrapper - - Mapping(required=[shorthand]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). 
- - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "yError2" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'YError2': - ... 
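``YError2`` is the companion channel for asymmetric errors: ``yError`` gives the upper offset from ``y`` and ``yError2`` the lower one, so lower offsets are usually negative. A sketch continuing the error-bar example (hypothetical columns):

import altair as alt
import pandas as pd

df = pd.DataFrame({
    "category": ["A", "B"],
    "mean": [4.0, 6.5],
    "err_up": [0.5, 1.2],
    "err_down": [-0.3, -0.8],  # offsets below the center are negative
})

# Both channels are offsets from y, so each bar spans
# y + err_down .. y + err_up.
chart = alt.Chart(df).mark_errorbar().encode(
    x=alt.X("category:N"),
    y=alt.Y("mean:Q"),
    yError=alt.YError("err_up"),
    yError2=alt.YError2("err_down"),
)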
- - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'YError2': - ... - - def bandPosition(self, _: float, **kwds) -> 'YError2': - ... - - def bin(self, _: None, **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'YError2': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'YError2': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(YError2, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, - timeUnit=timeUnit, title=title, **kwds) - - -@with_property_setters -class YError2Value(ValueChannelMixin, core.ValueDefnumber): - """YError2Value schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. 
- - Parameters - ---------- - - value : float - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "yError2" - - - - def __init__(self, value, **kwds): - super(YError2Value, self).__init__(value=value, **kwds) - - -@with_property_setters -class YOffset(FieldChannelMixin, core.ScaleFieldDef): - """YOffset schema wrapper - - Mapping(required=[shorthand]) - - Parameters - ---------- - - shorthand : string - shorthand for field, aggregate, and type - aggregate : :class:`Aggregate` - Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``, - ``"min"``, ``"max"``, ``"count"`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating - that the data for ``x`` or ``y`` channel are binned before they are imported into - Vega-Lite ( ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access - nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If - field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details - about escaping in the `field documentation - `__. 2) ``field`` is not required - if ``aggregate`` is ``count``. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - JavaScript. 
- * `A string indicating an encoding channel name to sort by - `__ (e.g., - ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g., - ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a - sort-by-encoding definition - `__. For - example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order": - "descending"}``. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects - `__. In addition, for time - units ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` and sorting by another channel is not supported for ``row`` and - ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`) - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`StandardType` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. 
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _class_is_valid_at_instantiation = False - _encoding_name = "yOffset" - - @overload # type: ignore[no-overload-impl] - def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmax=Undefined, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def aggregate(self, argmin=Undefined, **kwds) -> 'YOffset': - ... - - def bandPosition(self, _: float, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: bool, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def bin(self, _: None, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, _: str, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def field(self, repeat=Undefined, **kwds) -> 'YOffset': - ... 
- - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[float], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[str], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[bool], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: List[core.DateTime], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def sort(self, _: None, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'YOffset': - ... 
- - @overload # type: ignore[no-overload-impl] - def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'YOffset': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'YOffset': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'YOffset': - ... - - - def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, - field=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, - type=Undefined, **kwds): - super(YOffset, self).__init__(shorthand=shorthand, aggregate=aggregate, - bandPosition=bandPosition, bin=bin, field=field, scale=scale, - sort=sort, timeUnit=timeUnit, title=title, type=type, **kwds) - - -@with_property_setters -class YOffsetDatum(DatumChannelMixin, core.ScaleDatumDef): - """YOffsetDatum schema wrapper - - Mapping(required=[]) - - Parameters - ---------- - - bandPosition : float - Relative position on a band of a stacked, binned, time unit, or band scale. For - example, the marks will be positioned at the beginning of the band if set to ``0``, - and at the middle of the band if set to ``0.5``. - datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`) - A constant value in data domain. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - title : anyOf(:class:`Text`, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : :class:`Type` - The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or - ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also - be a ``"geojson"`` type for encoding `'geoshape' - `__. - - Vega-Lite automatically infers data types in many cases as discussed below. However, - type is required for a field if: (1) the field is not nominal and the field encoding - has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale - type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal - scale for a field with ``bin`` or ``timeUnit``. - - **Default value:** - - 1) For a data ``field``, ``"nominal"`` is the default data type unless the field - encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or - ``timeUnit`` that satisfies the following criteria: - - - * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin`` - or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is - ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a - quantitative scale `__. - * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit`` - or (2) the specified scale type is a time or utc scale - * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort - order - `__, - (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding - channel is ``order``. - - 2) For a constant value in data domain ( ``datum`` ): - - - * ``"quantitative"`` if the datum is a number - * ``"nominal"`` if the datum is a string - * ``"temporal"`` if the datum is `a date time object - `__ - - **Note:** - - - * Data ``type`` describes the semantics of the data rather than the primitive data - types (number, string, etc.). The same primitive data type can have different - types of measurement. For example, numeric data can represent quantitative, - ordinal, or nominal data. - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal" - (for using an ordinal scale) - `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they must have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. 
- """ - _class_is_valid_at_instantiation = False - _encoding_name = "yOffset" - - def bandPosition(self, _: float, **kwds) -> 'YOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'YOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def scale(self, _: None, **kwds) -> 'YOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: str, **kwds) -> 'YOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: List[str], **kwds) -> 'YOffsetDatum': - ... - - @overload # type: ignore[no-overload-impl] - def title(self, _: None, **kwds) -> 'YOffsetDatum': - ... - - def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'YOffsetDatum': - ... - - - def __init__(self, datum, bandPosition=Undefined, scale=Undefined, title=Undefined, type=Undefined, - **kwds): - super(YOffsetDatum, self).__init__(datum=datum, bandPosition=bandPosition, scale=scale, - title=title, type=type, **kwds) - - -@with_property_setters -class YOffsetValue(ValueChannelMixin, core.ValueDefnumber): - """YOffsetValue schema wrapper - - Mapping(required=[value]) - Definition object for a constant value (primitive value or gradient definition) of an - encoding channel. - - Parameters - ---------- - - value : float - A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient - definition `__ for color, - values between ``0`` to ``1`` for opacity). - """ - _class_is_valid_at_instantiation = False - _encoding_name = "yOffset" - - - - def __init__(self, value, **kwds): - super(YOffsetValue, self).__init__(value=value, **kwds) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/helpers.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/helpers.py deleted file mode 100644 index f1a67e8d63cdd6a0ba63ee4dec20c6758b2eb507..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/helpers.py +++ /dev/null @@ -1,97 +0,0 @@ -# encoding: utf-8 - -from __future__ import absolute_import, division, print_function - -from struct import Struct - -from .exceptions import UnexpectedEndOfFileError - - -BIG_ENDIAN = '>' -LITTLE_ENDIAN = '<' - - -class StreamReader(object): - """ - Wraps a file-like object to provide access to structured data from a - binary file. Byte-order is configurable. *base_offset* is added to any - base value provided to calculate actual location for reads. - """ - def __init__(self, stream, byte_order, base_offset=0): - super(StreamReader, self).__init__() - self._stream = stream - self._byte_order = ( - LITTLE_ENDIAN if byte_order == LITTLE_ENDIAN else BIG_ENDIAN - ) - self._base_offset = base_offset - - def read(self, count): - """ - Allow pass-through read() call - """ - return self._stream.read(count) - - def read_byte(self, base, offset=0): - """ - Return the int value of the byte at the file position defined by - self._base_offset + *base* + *offset*. 
If *base* is None, the byte is
-        read from the current position in the stream.
-        """
-        fmt = 'B'
-        return self._read_int(fmt, base, offset)
-
-    def read_long(self, base, offset=0):
-        """
-        Return the int value of the four bytes at the file position defined by
-        self._base_offset + *base* + *offset*. If *base* is None, the long is
-        read from the current position in the stream. The endian setting of
-        this instance is used to interpret the byte layout of the long.
-        """
-        fmt = '<L' if self._byte_order is LITTLE_ENDIAN else '>L'
-        return self._read_int(fmt, base, offset)
-
-    def read_short(self, base, offset=0):
-        """
-        Return the int value of the two bytes at the file position determined
-        by *base* and *offset*, similarly to ``read_long()`` above.
-        """
-        fmt = b'<H' if self._byte_order is LITTLE_ENDIAN else b'>H'
-        return self._read_int(fmt, base, offset)
-
-    def read_str(self, char_count, base, offset=0):
-        """
-        Return a string containing the *char_count* bytes at the file
-        position determined by self._base_offset + *base* + *offset*.
-        """
-        def str_struct(char_count):
-            format_ = '%ds' % char_count
-            return Struct(format_)
-        struct = str_struct(char_count)
-        chars = self._unpack_item(struct, base, offset)
-        unicode_str = chars.decode('UTF-8')
-        return unicode_str
-
-    def seek(self, base, offset=0):
-        location = self._base_offset + base + offset
-        self._stream.seek(location)
-
-    def tell(self):
-        """
-        Allow pass-through tell() call
-        """
-        return self._stream.tell()
-
-    def _read_bytes(self, byte_count, base, offset):
-        self.seek(base, offset)
-        bytes_ = self._stream.read(byte_count)
-        if len(bytes_) < byte_count:
-            raise UnexpectedEndOfFileError
-        return bytes_
-
-    def _read_int(self, fmt, base, offset):
-        struct = Struct(fmt)
-        return self._unpack_item(struct, base, offset)
-
-    def _unpack_item(self, struct, base, offset):
-        bytes_ = self._read_bytes(struct.size, base, offset)
-        return struct.unpack(bytes_)[0]
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/oxml.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/oxml.py
deleted file mode 100644
index 494b31dca8631ca1f929cb328196a9a29bc28021..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/oxml.py
+++ /dev/null
@@ -1,292 +0,0 @@
-# encoding: utf-8
-
-"""
-Temporary stand-in for main oxml module that came across with the
-PackageReader transplant. Probably much will get replaced with objects from
-the pptx.oxml.core and then this module will either get deleted or only hold
-the package related custom element classes.
-
-"""
-
-from __future__ import absolute_import, print_function, unicode_literals
-
-from lxml import etree
-
-from .constants import NAMESPACE as NS, RELATIONSHIP_TARGET_MODE as RTM
-
-
-# configure XML parser
-element_class_lookup = etree.ElementNamespaceClassLookup()
-oxml_parser = etree.XMLParser(remove_blank_text=True, resolve_entities=False)
-oxml_parser.set_element_class_lookup(element_class_lookup)
-
-nsmap = {
-    'ct': NS.OPC_CONTENT_TYPES,
-    'pr': NS.OPC_RELATIONSHIPS,
-    'r':  NS.OFC_RELATIONSHIPS,
-}
-
-
-# ===========================================================================
-# functions
-# ===========================================================================
-
-def parse_xml(text):
-    """
-    ``etree.fromstring()`` replacement that uses oxml parser
-    """
-    return etree.fromstring(text, oxml_parser)
-
-
-def qn(tag):
-    """
-    Stands for "qualified name", a utility function to turn a namespace
-    prefixed tag name into a Clark-notation qualified tag name for lxml. For
-    example, ``qn('p:cSld')`` returns ``'{http://schemas.../main}cSld'``.
-    """
-    prefix, tagroot = tag.split(':')
-    uri = nsmap[prefix]
-    return '{%s}%s' % (uri, tagroot)
-
-
-def serialize_part_xml(part_elm):
-    """
-    Serialize *part_elm* etree element to XML suitable for storage as an XML
-    part. That is to say, no insignificant whitespace added for readability,
-    and an appropriate XML declaration added with UTF-8 encoding specified.
-    """
-    return etree.tostring(part_elm, encoding='UTF-8', standalone=True)
-
-
-def serialize_for_reading(element):
-    """
-    Serialize *element* to human-readable XML suitable for tests. No XML
-    declaration.
-    """
-    return etree.tostring(element, encoding='unicode', pretty_print=True)
-
-
-# ===========================================================================
-# Custom element classes
-# ===========================================================================
-
-class BaseOxmlElement(etree.ElementBase):
-    """
-    Base class for all custom element classes, to add standardized behavior
-    to all classes in one place.
-    """
-    @property
-    def xml(self):
-        """
-        Return XML string for this element, suitable for testing purposes.
-        Pretty printed for readability and without an XML declaration at the
-        top.
-        """
-        return serialize_for_reading(self)
-
-
-class CT_Default(BaseOxmlElement):
-    """
-    ``<Default>`` element, specifying the default content type to be applied
-    to a part with the specified extension.
-    """
-    @property
-    def content_type(self):
-        """
-        String held in the ``ContentType`` attribute of this ``<Default>``
-        element.
-        """
-        return self.get('ContentType')
-
-    @property
-    def extension(self):
-        """
-        String held in the ``Extension`` attribute of this ``<Default>``
-        element.
-        """
-        return self.get('Extension')
-
-    @staticmethod
-    def new(ext, content_type):
-        """
-        Return a new ``<Default>`` element with attributes set to parameter
-        values.
-        """
-        xml = '<ct:Default xmlns:ct="%s"/>' % nsmap['ct']
-        default = parse_xml(xml)
-        default.set('Extension', ext)
-        default.set('ContentType', content_type)
-        return default
-
-
-class CT_Override(BaseOxmlElement):
-    """
-    ``<Override>`` element, specifying the content type to be applied for a
-    part with the specified partname.
-    """
-    @property
-    def content_type(self):
-        """
-        String held in the ``ContentType`` attribute of this ``<Override>``
-        element.
-        """
-        return self.get('ContentType')
-
-    @staticmethod
-    def new(partname, content_type):
-        """
-        Return a new ``<Override>`` element with attributes set to parameter
-        values.
-        """
-        xml = '<ct:Override xmlns:ct="%s"/>' % nsmap['ct']
-        override = parse_xml(xml)
-        override.set('PartName', partname)
-        override.set('ContentType', content_type)
-        return override
-
-    @property
-    def partname(self):
-        """
-        String held in the ``PartName`` attribute of this ``<Override>``
-        element.
-        """
-        return self.get('PartName')
-
-
-class CT_Relationship(BaseOxmlElement):
-    """
-    ``<Relationship>`` element, representing a single relationship from a
-    source to a target part.
-    """
-    @staticmethod
-    def new(rId, reltype, target, target_mode=RTM.INTERNAL):
-        """
-        Return a new ``<Relationship>`` element.
-        """
-        xml = '<pr:Relationship xmlns:pr="%s"/>' % nsmap['pr']
-        relationship = parse_xml(xml)
-        relationship.set('Id', rId)
-        relationship.set('Type', reltype)
-        relationship.set('Target', target)
-        if target_mode == RTM.EXTERNAL:
-            relationship.set('TargetMode', RTM.EXTERNAL)
-        return relationship
-
-    @property
-    def rId(self):
-        """
-        String held in the ``Id`` attribute of this ``<Relationship>``
-        element.
-        """
-        return self.get('Id')
-
-    @property
-    def reltype(self):
-        """
-        String held in the ``Type`` attribute of this ``<Relationship>``
-        element.
-        """
-        return self.get('Type')
-
-    @property
-    def target_ref(self):
-        """
-        String held in the ``Target`` attribute of this ``<Relationship>``
-        element.
-        """
-        return self.get('Target')
-
-    @property
-    def target_mode(self):
-        """
-        String held in the ``TargetMode`` attribute of this
-        ``<Relationship>`` element, either ``Internal`` or ``External``.
-        Defaults to ``Internal``.
-        """
-        return self.get('TargetMode', RTM.INTERNAL)
-
-
-class CT_Relationships(BaseOxmlElement):
-    """
-    ``<Relationships>`` element, the root element in a .rels file.
-    """
-    def add_rel(self, rId, reltype, target, is_external=False):
-        """
-        Add a child ``<Relationship>`` element with attributes set according
-        to parameter values.
-        """
-        target_mode = RTM.EXTERNAL if is_external else RTM.INTERNAL
-        relationship = CT_Relationship.new(rId, reltype, target, target_mode)
-        self.append(relationship)
-
-    @staticmethod
-    def new():
-        """
-        Return a new ``<Relationships>`` element.
-        """
-        xml = '<pr:Relationships xmlns:pr="%s"/>' % nsmap['pr']
-        relationships = parse_xml(xml)
-        return relationships
-
-    @property
-    def Relationship_lst(self):
-        """
-        Return a list containing all the ``<Relationship>`` child elements.
-        """
-        return self.findall(qn('pr:Relationship'))
-
-    @property
-    def xml(self):
-        """
-        Return XML string for this element, suitable for saving in a .rels
-        stream, not pretty printed and with an XML declaration at the top.
-        """
-        return serialize_part_xml(self)
-
-
-class CT_Types(BaseOxmlElement):
-    """
-    ``<Types>`` element, the container element for Default and Override
-    elements in [Content_Types].xml.
-    """
-    def add_default(self, ext, content_type):
-        """
-        Add a child ``<Default>`` element with attributes set to parameter
-        values.
-        """
-        default = CT_Default.new(ext, content_type)
-        self.append(default)
-
-    def add_override(self, partname, content_type):
-        """
-        Add a child ``<Override>`` element with attributes set to parameter
-        values.
-        """
-        override = CT_Override.new(partname, content_type)
-        self.append(override)
-
-    @property
-    def defaults(self):
-        return self.findall(qn('ct:Default'))
-
-    @staticmethod
-    def new():
-        """
-        Return a new ``<Types>`` element.
-        """
-        xml = '<ct:Types xmlns:ct="%s"/>' % nsmap['ct']
-        types = parse_xml(xml)
-        return types
-
-    @property
-    def overrides(self):
-        return self.findall(qn('ct:Override'))
-
-
-ct_namespace = element_class_lookup.get_namespace(nsmap['ct'])
-ct_namespace['Default'] = CT_Default
-ct_namespace['Override'] = CT_Override
-ct_namespace['Types'] = CT_Types
-
-pr_namespace = element_class_lookup.get_namespace(nsmap['pr'])
-pr_namespace['Relationship'] = CT_Relationship
-pr_namespace['Relationships'] = CT_Relationships
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/annotated_image.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/annotated_image.py
deleted file mode 100644
index cdc6902437817494a5cf43b0ade8315ae00f842d..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/annotated_image.py
+++ /dev/null
@@ -1,243 +0,0 @@
-"""gr.AnnotatedImage() component."""
-
-from __future__ import annotations
-
-from typing import Literal
-
-import numpy as np
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import JSONSerializable
-from PIL import Image as _Image  # using _ to minimize namespace pollution
-
-from gradio import utils
-from gradio.components.base import IOComponent, _Keywords
-from gradio.deprecation import warn_style_method_deprecation
-from gradio.events import (
-    EventListenerMethod,
-    Selectable,
-)
-
-set_documentation_group("component")
-
-_Image.init()  # fixes https://github.com/gradio-app/gradio/issues/2843
-
-
-@document()
-class AnnotatedImage(Selectable, IOComponent, JSONSerializable):
-    """
-    Displays a base image and colored subsections on top of that image. Subsections can take the form of rectangles (e.g. object detection) or masks (e.g. image segmentation).
-    Preprocessing: this component does *not* accept input.
-    Postprocessing: expects a {Tuple[numpy.ndarray | PIL.Image | str, List[Tuple[numpy.ndarray | Tuple[int, int, int, int], str]]]} consisting of a base image and a list of subsections, that are either (x1, y1, x2, y2) tuples identifying object boundaries, or 0-1 confidence masks of the same shape as the image. A label is provided for each subsection.
-
-    Demos: image_segmentation
-    """
-
-    def __init__(
-        self,
-        value: tuple[
-            np.ndarray | _Image.Image | str,
-            list[tuple[np.ndarray | tuple[int, int, int, int], str]],
-        ]
-        | None = None,
-        *,
-        show_legend: bool = True,
-        height: int | None = None,
-        width: int | None = None,
-        color_map: dict[str, str] | None = None,
-        label: str | None = None,
-        every: float | None = None,
-        show_label: bool = True,
-        container: bool = True,
-        scale: int | None = None,
-        min_width: int = 160,
-        visible: bool = True,
-        elem_id: str | None = None,
-        elem_classes: list[str] | str | None = None,
-        **kwargs,
-    ):
-        """
-        Parameters:
-            value: Tuple of base image and list of (subsection, label) pairs.
-            show_legend: If True, will show a legend of the subsections.
-            height: Height of the displayed image.
-            width: Width of the displayed image.
-            color_map: A dictionary mapping labels to colors. The colors must be specified as hex codes.
-            label: component name in interface.
-            every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
-            show_label: if True, will display label.
-            container: If True, will place the component in a container - providing some extra padding around the border.
-            scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
-            min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
-            visible: If False, component will be hidden.
-            elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
-            elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
-        """
-        self.show_legend = show_legend
-        self.height = height
-        self.width = width
-        self.color_map = color_map
-        self.select: EventListenerMethod
-        """
-        Event listener for when the user selects Image subsection.
-        Uses event data gradio.SelectData to carry `value` referring to selected subsection label, and `index` to refer to subsection index.
-        See EventData documentation on how to use this event data.
-        """
-        IOComponent.__init__(
-            self,
-            label=label,
-            every=every,
-            show_label=show_label,
-            container=container,
-            scale=scale,
-            min_width=min_width,
-            visible=visible,
-            elem_id=elem_id,
-            elem_classes=elem_classes,
-            value=value,
-            **kwargs,
-        )
-
-    def get_config(self):
-        return {
-            "show_legend": self.show_legend,
-            "value": self.value,
-            "height": self.height,
-            "width": self.width,
-            "color_map": self.color_map,
-            "selectable": self.selectable,
-            **IOComponent.get_config(self),
-        }
-
-    @staticmethod
-    def update(
-        value: tuple[
-            np.ndarray | _Image.Image | str,
-            list[tuple[np.ndarray | tuple[int, int, int, int], str]],
-        ]
-        | Literal[_Keywords.NO_VALUE] = _Keywords.NO_VALUE,
-        show_legend: bool | None = None,
-        height: int | None = None,
-        width: int | None = None,
-        color_map: dict[str, str] | None = None,
-        label: str | None = None,
-        show_label: bool | None = None,
-        container: bool | None = None,
-        scale: int | None = None,
-        min_width: int | None = None,
-        visible: bool | None = None,
-    ):
-        updated_config = {
-            "show_legend": show_legend,
-            "height": height,
-            "width": width,
-            "color_map": color_map,
-            "label": label,
-            "show_label": show_label,
-            "container": container,
-            "scale": scale,
-            "min_width": min_width,
-            "visible": visible,
-            "value": value,
-            "__type__": "update",
-        }
-        return updated_config
-
-    def postprocess(
-        self,
-        y: tuple[
-            np.ndarray | _Image.Image | str,
-            list[tuple[np.ndarray | tuple[int, int, int, int], str]],
-        ],
-    ) -> tuple[dict, list[tuple[dict, str]]] | None:
-        """
-        Parameters:
-            y: Tuple of base image and list of subsections, with each subsection a two-part tuple where the first element is a 4 element bounding box or a 0-1 confidence mask, and the second element is the label.
-        Returns:
-            Tuple of base image file and list of subsections, with each subsection a two-part tuple where the first element is the image path of the mask, and the second element is the label.
- """ - if y is None: - return None - base_img = y[0] - if isinstance(base_img, str): - base_img_path = base_img - base_img = np.array(_Image.open(base_img)) - elif isinstance(base_img, np.ndarray): - base_file = self.img_array_to_temp_file(base_img, dir=self.DEFAULT_TEMP_DIR) - base_img_path = str(utils.abspath(base_file)) - elif isinstance(base_img, _Image.Image): - base_file = self.pil_to_temp_file(base_img, dir=self.DEFAULT_TEMP_DIR) - base_img_path = str(utils.abspath(base_file)) - base_img = np.array(base_img) - else: - raise ValueError( - "AnnotatedImage only accepts filepaths, PIL images or numpy arrays for the base image." - ) - self.temp_files.add(base_img_path) - - sections = [] - color_map = self.color_map or {} - - def hex_to_rgb(value): - value = value.lstrip("#") - lv = len(value) - return [int(value[i : i + lv // 3], 16) for i in range(0, lv, lv // 3)] - - for mask, label in y[1]: - mask_array = np.zeros((base_img.shape[0], base_img.shape[1])) - if isinstance(mask, np.ndarray): - mask_array = mask - else: - x1, y1, x2, y2 = mask - border_width = 3 - mask_array[y1:y2, x1:x2] = 0.5 - mask_array[y1:y2, x1 : x1 + border_width] = 1 - mask_array[y1:y2, x2 - border_width : x2] = 1 - mask_array[y1 : y1 + border_width, x1:x2] = 1 - mask_array[y2 - border_width : y2, x1:x2] = 1 - - if label in color_map: - rgb_color = hex_to_rgb(color_map[label]) - else: - rgb_color = [255, 0, 0] - colored_mask = np.zeros((base_img.shape[0], base_img.shape[1], 4)) - solid_mask = np.copy(mask_array) - solid_mask[solid_mask > 0] = 1 - - colored_mask[:, :, 0] = rgb_color[0] * solid_mask - colored_mask[:, :, 1] = rgb_color[1] * solid_mask - colored_mask[:, :, 2] = rgb_color[2] * solid_mask - colored_mask[:, :, 3] = mask_array * 255 - - colored_mask_img = _Image.fromarray((colored_mask).astype(np.uint8)) - - mask_file = self.pil_to_temp_file( - colored_mask_img, dir=self.DEFAULT_TEMP_DIR - ) - mask_file_path = str(utils.abspath(mask_file)) - self.temp_files.add(mask_file_path) - - sections.append( - ({"name": mask_file_path, "data": None, "is_file": True}, label) - ) - - return {"name": base_img_path, "data": None, "is_file": True}, sections - - def style( - self, - *, - height: int | None = None, - width: int | None = None, - color_map: dict[str, str] | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if height is not None: - self.height = height - if width is not None: - self.width = width - if color_map is not None: - self.color_map = color_map - return self diff --git a/spaces/cifkao/context-probing/highlighted_text/build/index.html b/spaces/cifkao/context-probing/highlighted_text/build/index.html deleted file mode 100644 index 329ab1c6d0583cf1554980fb0985db84a9e8661b..0000000000000000000000000000000000000000 --- a/spaces/cifkao/context-probing/highlighted_text/build/index.html +++ /dev/null @@ -1 +0,0 @@ -Highlighted text component
      \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Anjuman Pakistani Movie 2013 12 A Telefilm Based on the 1970 Film by Waheed Murad and Rani.md b/spaces/cihyFjudo/fairness-paper-search/Anjuman Pakistani Movie 2013 12 A Telefilm Based on the 1970 Film by Waheed Murad and Rani.md deleted file mode 100644 index 860b81181bcbe01bb22afe099c16d03c801ee9e8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Anjuman Pakistani Movie 2013 12 A Telefilm Based on the 1970 Film by Waheed Murad and Rani.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Anjuman Pakistani Movie 2013 12


      Download Filehttps://tinurli.com/2uwiOu



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/cihyFjudo/fairness-paper-search/Pokemon X Emulator Online Play the Latest Pokemon Game on Your PC.md b/spaces/cihyFjudo/fairness-paper-search/Pokemon X Emulator Online Play the Latest Pokemon Game on Your PC.md deleted file mode 100644 index 1092f53f67bae363fdb786bf351b416f63f6e3a3..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Pokemon X Emulator Online Play the Latest Pokemon Game on Your PC.md +++ /dev/null @@ -1,13 +0,0 @@ - -

      Play pokemon games online in high quality in your browser! No download required! With our emulator online you will find a lot of pokemon games like: Pokemon Ash Gray, Pokemon Fire Red Version, Pokemon X and Y and Pokemon Emerald Version. Click on game icon and start game! Feel free to comment best of pokemon games collection.

      -

      Pokemon X Emulator Online


      DOWNLOADhttps://tinurli.com/2uwjVM



      -

      Play Pokemon X and Y game online in your browser free of charge on Arcade Spot. Pokemon X and Y is a high quality game that works in all major modern web browsers. This online game is part of the Adventure, RPG, GBA, and Pokemon gaming categories. Pokemon X and Y has 71 likes from 96 user ratings. If you enjoy this game then also play games Pokemon Fire Red Version and Pokemon Emerald Version. Arcade Spot brings you the best games without downloading and a fun gaming experience on your computers, mobile phones, and tablets. New arcade games and the most popular free online games are added every day to the site.

      -

      If you want to play online retro video games totally unblocked of consoles like Super Nintendo (SNES), Neo-Geo, Sega Genesis, Game Boy Advance or NES on PC and Mac computers, and play them like a boss, with controller, this is your place.

      -

      Play Pokemon Games on Emulator Online. All the best Pokemon games online for different retro emulators including GBA, Game Boy, SNES, Nintendo and Sega. There are many online Pokemon games in the collection. All of the games that you see here are without download, pick any and start playing right away. If you enjoy the game, be sure to vote for it and leave a comment. Pokemon games that started it all back in the day are now playable within your browser! Start by playing some popular Pokemon online games like Pokemon X and Y, Pokemon Fire Red Version, Pokemon Emerald Version and Pokemon Ash Gray.

      -

      -

      And one last thing, you might have been able to deduce from my questions that I'm a budding EV trainer. Can you point me in the direction of a good online community? Kay, thanks! --SillyWynaut96 (talk) 17:52, 28 March 2014 (UTC)

      -

      Pokemon Omega Ruby will be my first game, and the Saving article is a little unclear. Let's say the only pokemon I had was my starter and I made it to Petalburg Woods. Then I saved my game. The next day I caught 2 more pokemon and I wanted to save. Correct me if I am wrong, but since the game only allows one save file, then if I delete my first save file, what will happen? Will I lose my starter? Or did I misread and you only have to delete a save file if you want to start over? Please help! I think we should add this info to make the article more clear. Sorry if I am bothering you. Leafeon6954 (talk) 17:04, 15 June 2015 (UTC)

      -

      The biggest thing i remember from this one, was due to the timing and the joys of always online with the ps3, when i started playing the game, it showed up as my Currently Playing, causing a torrent of amusing messages from my friends list, mostly asking how i got it, except one friend who was super excited for it, who sent a screed of abuse because he wanted to play it first. Sucks to be him.

      -

      I've been reading 20 year old Penny Arcade comics every day for pure unfiltered 2002 nostalgia. One thing I learned -- and don't think I realized at the time -- Metroid Fusion leaked as a ROM like a month ahead and I guess GBA emulators existed at the time? That's kind of crazy to have such a huge release so readily available like that.

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Where to Find Yaaron Ki Baaraat Movie Free Download in Hindi HD The Ultimate Resource for Hindi Film Lovers.md b/spaces/cihyFjudo/fairness-paper-search/Where to Find Yaaron Ki Baaraat Movie Free Download in Hindi HD The Ultimate Resource for Hindi Film Lovers.md deleted file mode 100644 index 67bd76957ad706883e88d0b48b8f3e468d2fb7b6..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Where to Find Yaaron Ki Baaraat Movie Free Download in Hindi HD The Ultimate Resource for Hindi Film Lovers.md +++ /dev/null @@ -1,6 +0,0 @@ -
      -

      Tags: Yaaron Ki Baraat 23 oct show download, Yaaron Ki Baraat 23 october 2016 full show free download in hdtv hdrip 480p, Yaaron Ki Baraat sunday show download 200mb 300mb, Sunidhi Chauhan in Yaaron Ki Baraat 23rd october 2016 full show watch online 480pJoin our New Telegram Channel!
      How to Download from SSR Movies? Facebook Prev Article Next Article Add Comment Cancel Reply

      -

      Yaaron Ki Baaraat movie free download in hindi HD


      Download File » https://tinurli.com/2uwhHZ



      -

      Tags: Yaaron Ki Baraat 29 oct show download, Yaaron Ki Baraat 29 october 2016 full show free download in hdtv hdrip 480p, Yaaron Ki Baraat saturday show download 200mb 300mb, jackie shroff in Yaaron Ki Baraat 29th october 2016 full show watch online 480pJoin our New Telegram Channel!
      How to Download from SSR Movies? Facebook Prev Article Next Article Add Comment Cancel Reply

      aaccfb2cb3
      -
      -
\ No newline at end of file
diff --git a/spaces/cleanmaster/so-vits-svc-akagi/modules.py b/spaces/cleanmaster/so-vits-svc-akagi/modules.py
deleted file mode 100644
index 52ee14e41a5b6d67d875d1b694aecd2a51244897..0000000000000000000000000000000000000000
--- a/spaces/cleanmaster/so-vits-svc-akagi/modules.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
-    def __init__(self, channels, eps=1e-5):
-        super().__init__()
-        self.channels = channels
-        self.eps = eps
-
-        self.gamma = nn.Parameter(torch.ones(channels))
-        self.beta = nn.Parameter(torch.zeros(channels))
-
-    def forward(self, x):
-        x = x.transpose(1, -1)
-        x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
-        return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
-    def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
-        super().__init__()
-        self.in_channels = in_channels
-        self.hidden_channels = hidden_channels
-        self.out_channels = out_channels
-        self.kernel_size = kernel_size
-        self.n_layers = n_layers
-        self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
-        self.conv_layers = nn.ModuleList()
-        self.norm_layers = nn.ModuleList()
-        self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-        self.norm_layers.append(LayerNorm(hidden_channels))
-        self.relu_drop = nn.Sequential(
-            nn.ReLU(),
-            nn.Dropout(p_dropout))
-        for _ in range(n_layers-1):
-            self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-            self.norm_layers.append(LayerNorm(hidden_channels))
-        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-        self.proj.weight.data.zero_()
-        self.proj.bias.data.zero_()
-
-    def forward(self, x, x_mask):
-        x_org = x
-        for i in range(self.n_layers):
-            x = self.conv_layers[i](x * x_mask)
-            x = self.norm_layers[i](x)
-            x = self.relu_drop(x)
-        x = x_org + self.proj(x)
-        return x * x_mask
-
-
-class DDSConv(nn.Module):
-    """
-    Dilated and Depth-Separable Convolution
-    """
-    def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
-        super().__init__()
-        self.channels = channels
-        self.kernel_size = kernel_size
-        self.n_layers = n_layers
-        self.p_dropout = p_dropout
-
-        self.drop = nn.Dropout(p_dropout)
-        self.convs_sep = nn.ModuleList()
-        self.convs_1x1 = nn.ModuleList()
-        self.norms_1 = nn.ModuleList()
-        self.norms_2 = nn.ModuleList()
-        for i in range(n_layers):
-            dilation = kernel_size ** i
-            padding = (kernel_size * dilation - dilation) // 2
-            self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
-                groups=channels, dilation=dilation, padding=padding
-            ))
-            self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-            self.norms_1.append(LayerNorm(channels))
-            self.norms_2.append(LayerNorm(channels))
-
-    def forward(self, x, x_mask, g=None):
-        if g is not None:
-            x = x + g
-        for i in range(self.n_layers):
-            y = self.convs_sep[i](x * x_mask)
-            y = self.norms_1[i](y)
-            y = F.gelu(y)
-            y = self.convs_1x1[i](y)
-            y = self.norms_2[i](y)
-            y = F.gelu(y)
-            y = self.drop(y)
-            x = x + y
-        return x * x_mask
-
-
-class WN(torch.nn.Module):
-    def __init__(self,
hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not 
None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x diff --git 
a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/cairoPen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/cairoPen.py deleted file mode 100644 index 9cd5da9128fc0054cf748de703540afa7685b7b2..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/cairoPen.py +++ /dev/null @@ -1,26 +0,0 @@ -"""Pen to draw to a Cairo graphics library context.""" - -from fontTools.pens.basePen import BasePen - - -__all__ = ["CairoPen"] - - -class CairoPen(BasePen): - """Pen to draw to a Cairo graphics library context.""" - - def __init__(self, glyphSet, context): - BasePen.__init__(self, glyphSet) - self.context = context - - def _moveTo(self, p): - self.context.move_to(*p) - - def _lineTo(self, p): - self.context.line_to(*p) - - def _curveToOne(self, p1, p2, p3): - self.context.curve_to(*p1, *p2, *p3) - - def _closePath(self): - self.context.close_path() diff --git a/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg.h b/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg.h deleted file mode 100644 index 95591f4bbadccd4e06a3661ef58334fcba45b013..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg.h +++ /dev/null @@ -1,995 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef FFTOOLS_FFMPEG_H -#define FFTOOLS_FFMPEG_H - -#include "config.h" - -#include -#include -#include -#include - -#include "cmdutils.h" -#include "sync_queue.h" - -#include "libavformat/avformat.h" -#include "libavformat/avio.h" - -#include "libavcodec/avcodec.h" -#include "libavcodec/bsf.h" - -#include "libavfilter/avfilter.h" - -#include "libavutil/avutil.h" -#include "libavutil/dict.h" -#include "libavutil/eval.h" -#include "libavutil/fifo.h" -#include "libavutil/hwcontext.h" -#include "libavutil/pixfmt.h" -#include "libavutil/rational.h" -#include "libavutil/thread.h" -#include "libavutil/threadmessage.h" - -#include "libswresample/swresample.h" - -// deprecated features -#define FFMPEG_OPT_PSNR 1 -#define FFMPEG_OPT_MAP_CHANNEL 1 -#define FFMPEG_OPT_MAP_SYNC 1 -#define FFMPEG_ROTATION_METADATA 1 -#define FFMPEG_OPT_QPHIST 1 - -enum VideoSyncMethod { - VSYNC_AUTO = -1, - VSYNC_PASSTHROUGH, - VSYNC_CFR, - VSYNC_VFR, - VSYNC_VSCFR, - VSYNC_DROP, -}; - -#define MAX_STREAMS 1024 /* arbitrary sanity check value */ - -enum HWAccelID { - HWACCEL_NONE = 0, - HWACCEL_AUTO, - HWACCEL_GENERIC, -}; - -typedef struct HWDevice { - const char *name; - enum AVHWDeviceType type; - AVBufferRef *device_ref; -} HWDevice; - -/* select an input stream for an output stream */ -typedef struct StreamMap { - int disabled; /* 1 is this mapping is disabled by a negative map */ - int file_index; - int stream_index; - char *linklabel; /* name of an output link, for mapping lavfi outputs */ -} StreamMap; - -#if FFMPEG_OPT_MAP_CHANNEL -typedef struct { - int file_idx, stream_idx, channel_idx; // input - int ofile_idx, ostream_idx; // output -} AudioChannelMap; -#endif - -typedef struct OptionsContext { - OptionGroup *g; - - /* input/output options */ - int64_t start_time; - int64_t start_time_eof; - int seek_timestamp; - const char *format; - - SpecifierOpt *codec_names; - int nb_codec_names; - SpecifierOpt *audio_ch_layouts; - int nb_audio_ch_layouts; - SpecifierOpt *audio_channels; - int nb_audio_channels; - SpecifierOpt *audio_sample_rate; - int nb_audio_sample_rate; - SpecifierOpt *frame_rates; - int nb_frame_rates; - SpecifierOpt *max_frame_rates; - int nb_max_frame_rates; - SpecifierOpt *frame_sizes; - int nb_frame_sizes; - SpecifierOpt *frame_pix_fmts; - int nb_frame_pix_fmts; - - /* input options */ - int64_t input_ts_offset; - int loop; - int rate_emu; - float readrate; - int accurate_seek; - int thread_queue_size; - int input_sync_ref; - int find_stream_info; - - SpecifierOpt *ts_scale; - int nb_ts_scale; - SpecifierOpt *dump_attachment; - int nb_dump_attachment; - SpecifierOpt *hwaccels; - int nb_hwaccels; - SpecifierOpt *hwaccel_devices; - int nb_hwaccel_devices; - SpecifierOpt *hwaccel_output_formats; - int nb_hwaccel_output_formats; - SpecifierOpt *autorotate; - int nb_autorotate; - - /* output options */ - StreamMap *stream_maps; - int nb_stream_maps; -#if FFMPEG_OPT_MAP_CHANNEL - AudioChannelMap *audio_channel_maps; /* one info entry per -map_channel */ - int nb_audio_channel_maps; /* number of (valid) -map_channel settings */ -#endif - const char **attachments; - int nb_attachments; - - int chapters_input_file; - - int64_t recording_time; - int64_t stop_time; - int64_t limit_filesize; - float mux_preload; - float mux_max_delay; - float shortest_buf_duration; - int 
shortest; - int bitexact; - - int video_disable; - int audio_disable; - int subtitle_disable; - int data_disable; - - /* indexed by output file stream index */ - int *streamid_map; - int nb_streamid_map; - - SpecifierOpt *metadata; - int nb_metadata; - SpecifierOpt *max_frames; - int nb_max_frames; - SpecifierOpt *bitstream_filters; - int nb_bitstream_filters; - SpecifierOpt *codec_tags; - int nb_codec_tags; - SpecifierOpt *sample_fmts; - int nb_sample_fmts; - SpecifierOpt *qscale; - int nb_qscale; - SpecifierOpt *forced_key_frames; - int nb_forced_key_frames; - SpecifierOpt *fps_mode; - int nb_fps_mode; - SpecifierOpt *force_fps; - int nb_force_fps; - SpecifierOpt *frame_aspect_ratios; - int nb_frame_aspect_ratios; - SpecifierOpt *display_rotations; - int nb_display_rotations; - SpecifierOpt *display_hflips; - int nb_display_hflips; - SpecifierOpt *display_vflips; - int nb_display_vflips; - SpecifierOpt *rc_overrides; - int nb_rc_overrides; - SpecifierOpt *intra_matrices; - int nb_intra_matrices; - SpecifierOpt *inter_matrices; - int nb_inter_matrices; - SpecifierOpt *chroma_intra_matrices; - int nb_chroma_intra_matrices; - SpecifierOpt *top_field_first; - int nb_top_field_first; - SpecifierOpt *metadata_map; - int nb_metadata_map; - SpecifierOpt *presets; - int nb_presets; - SpecifierOpt *copy_initial_nonkeyframes; - int nb_copy_initial_nonkeyframes; - SpecifierOpt *copy_prior_start; - int nb_copy_prior_start; - SpecifierOpt *filters; - int nb_filters; - SpecifierOpt *filter_scripts; - int nb_filter_scripts; - SpecifierOpt *reinit_filters; - int nb_reinit_filters; - SpecifierOpt *fix_sub_duration; - int nb_fix_sub_duration; - SpecifierOpt *fix_sub_duration_heartbeat; - int nb_fix_sub_duration_heartbeat; - SpecifierOpt *canvas_sizes; - int nb_canvas_sizes; - SpecifierOpt *pass; - int nb_pass; - SpecifierOpt *passlogfiles; - int nb_passlogfiles; - SpecifierOpt *max_muxing_queue_size; - int nb_max_muxing_queue_size; - SpecifierOpt *muxing_queue_data_threshold; - int nb_muxing_queue_data_threshold; - SpecifierOpt *guess_layout_max; - int nb_guess_layout_max; - SpecifierOpt *apad; - int nb_apad; - SpecifierOpt *discard; - int nb_discard; - SpecifierOpt *disposition; - int nb_disposition; - SpecifierOpt *program; - int nb_program; - SpecifierOpt *time_bases; - int nb_time_bases; - SpecifierOpt *enc_time_bases; - int nb_enc_time_bases; - SpecifierOpt *autoscale; - int nb_autoscale; - SpecifierOpt *bits_per_raw_sample; - int nb_bits_per_raw_sample; - SpecifierOpt *enc_stats_pre; - int nb_enc_stats_pre; - SpecifierOpt *enc_stats_post; - int nb_enc_stats_post; - SpecifierOpt *mux_stats; - int nb_mux_stats; - SpecifierOpt *enc_stats_pre_fmt; - int nb_enc_stats_pre_fmt; - SpecifierOpt *enc_stats_post_fmt; - int nb_enc_stats_post_fmt; - SpecifierOpt *mux_stats_fmt; - int nb_mux_stats_fmt; -} OptionsContext; - -typedef struct InputFilter { - AVFilterContext *filter; - struct InputStream *ist; - struct FilterGraph *graph; - uint8_t *name; - enum AVMediaType type; // AVMEDIA_TYPE_SUBTITLE for sub2video - - AVFifo *frame_queue; - - // parameters configured for this input - int format; - - int width, height; - AVRational sample_aspect_ratio; - - int sample_rate; - AVChannelLayout ch_layout; - - AVBufferRef *hw_frames_ctx; - int32_t *displaymatrix; - - int eof; -} InputFilter; - -typedef struct OutputFilter { - AVFilterContext *filter; - struct OutputStream *ost; - struct FilterGraph *graph; - uint8_t *name; - - /* temporary storage until stream maps are processed */ - AVFilterInOut *out_tmp; - enum 
AVMediaType type; - - /* desired output stream properties */ - int width, height; - AVRational frame_rate; - int format; - int sample_rate; - AVChannelLayout ch_layout; - - // those are only set if no format is specified and the encoder gives us multiple options - // They point directly to the relevant lists of the encoder. - const int *formats; - const AVChannelLayout *ch_layouts; - const int *sample_rates; - - /* pts of the last frame received from this filter, in AV_TIME_BASE_Q */ - int64_t last_pts; -} OutputFilter; - -typedef struct FilterGraph { - int index; - const char *graph_desc; - - AVFilterGraph *graph; - // true when the filtergraph contains only meta filters - // that do not modify the frame data - int is_meta; - - InputFilter **inputs; - int nb_inputs; - OutputFilter **outputs; - int nb_outputs; -} FilterGraph; - -typedef struct InputStream { - const AVClass *class; - - int file_index; - AVStream *st; - int discard; /* true if stream data should be discarded */ - int user_set_discard; - int decoding_needed; /* non zero if the packets must be decoded in 'raw_fifo', see DECODING_FOR_* */ -#define DECODING_FOR_OST 1 -#define DECODING_FOR_FILTER 2 - // should attach FrameData as opaque_ref after decoding - int want_frame_data; - - /** - * Codec parameters - to be used by the decoding/streamcopy code. - * st->codecpar should not be accessed, because it may be modified - * concurrently by the demuxing thread. - */ - AVCodecParameters *par; - AVCodecContext *dec_ctx; - const AVCodec *dec; - AVFrame *decoded_frame; - AVPacket *pkt; - - AVRational framerate_guessed; - - int64_t prev_pkt_pts; - int64_t start; /* time when read started */ - /* predicted dts of the next packet read for this stream or (when there are - * several frames in a packet) of the next frame in current packet (in AV_TIME_BASE units) */ - int64_t next_dts; - int64_t first_dts; ///< dts of the first packet read for this stream (in AV_TIME_BASE units) - int64_t dts; ///< dts of the last packet read for this stream (in AV_TIME_BASE units) - - /* predicted pts of the next decoded frame, in AV_TIME_BASE */ - int64_t next_pts; - int64_t pts; ///< current pts of the decoded frame (in AV_TIME_BASE units) - - // pts/estimated duration of the last decoded video frame - // in decoder timebase - int64_t last_frame_pts; - int64_t last_frame_duration_est; - - int wrap_correction_done; - - // the value of AVCodecParserContext.repeat_pict from the AVStream parser - // for the last packet returned from ifile_get_packet() - // -1 if unknown - // FIXME: this is a hack, the avstream parser should not be used - int last_pkt_repeat_pict; - - int64_t filter_in_rescale_delta_last; - - // when forcing constant input framerate through -r, - // this contains the pts that will be given to the next decoded frame - int64_t cfr_next_pts; - - int64_t nb_samples; /* number of samples in the last decoded audio frame before looping */ - - int saw_first_ts; - AVDictionary *decoder_opts; - AVRational framerate; /* framerate forced with -r */ - int top_field_first; - - int autorotate; - - int fix_sub_duration; - struct { /* previous decoded subtitle and related variables */ - int got_output; - int ret; - AVSubtitle subtitle; - } prev_sub; - - struct sub2video { - int64_t last_pts; - int64_t end_pts; - AVFifo *sub_queue; ///< queue of AVSubtitle* before filter init - AVFrame *frame; - int w, h; - unsigned int initialize; ///< marks if sub2video_update should force an initialization - } sub2video; - - /* decoded data from this stream goes into all those 
filters - * currently video and audio only */ - InputFilter **filters; - int nb_filters; - - /* - * Output targets that do not go through lavfi, i.e. subtitles or - * streamcopy. Those two cases are distinguished by the OutputStream - * having an encoder or not. - */ - struct OutputStream **outputs; - int nb_outputs; - - int reinit_filters; - - /* hwaccel options */ - enum HWAccelID hwaccel_id; - enum AVHWDeviceType hwaccel_device_type; - char *hwaccel_device; - enum AVPixelFormat hwaccel_output_format; - - int (*hwaccel_retrieve_data)(AVCodecContext *s, AVFrame *frame); - enum AVPixelFormat hwaccel_pix_fmt; - - /* stats */ - // combined size of all the packets read - uint64_t data_size; - /* number of packets successfully read for this stream */ - uint64_t nb_packets; - // number of frames/samples retrieved from the decoder - uint64_t frames_decoded; - uint64_t samples_decoded; - - int got_output; -} InputStream; - -typedef struct LastFrameDuration { - int stream_idx; - int64_t duration; -} LastFrameDuration; - -typedef struct InputFile { - const AVClass *class; - - int index; - - AVFormatContext *ctx; - int eof_reached; /* true if eof reached */ - int eagain; /* true if last read attempt returned EAGAIN */ - int64_t input_ts_offset; - int input_sync_ref; - /** - * Effective format start time based on enabled streams. - */ - int64_t start_time_effective; - int64_t ts_offset; - /** - * Extra timestamp offset added by discontinuity handling. - */ - int64_t ts_offset_discont; - int64_t last_ts; - int64_t start_time; /* user-specified start time in AV_TIME_BASE or AV_NOPTS_VALUE */ - int64_t recording_time; - - /* streams that ffmpeg is aware of; - * there may be extra streams in ctx that are not mapped to an InputStream - * if new streams appear dynamically during demuxing */ - InputStream **streams; - int nb_streams; - - int rate_emu; - float readrate; - int accurate_seek; - - /* when looping the input file, this queue is used by decoders to report - * the last frame duration back to the demuxer thread */ - AVThreadMessageQueue *audio_duration_queue; - int audio_duration_queue_size; -} InputFile; - -enum forced_keyframes_const { - FKF_N, - FKF_N_FORCED, - FKF_PREV_FORCED_N, - FKF_PREV_FORCED_T, - FKF_T, - FKF_NB -}; - -#define ABORT_ON_FLAG_EMPTY_OUTPUT (1 << 0) -#define ABORT_ON_FLAG_EMPTY_OUTPUT_STREAM (1 << 1) - -enum EncStatsType { - ENC_STATS_LITERAL = 0, - ENC_STATS_FILE_IDX, - ENC_STATS_STREAM_IDX, - ENC_STATS_FRAME_NUM, - ENC_STATS_FRAME_NUM_IN, - ENC_STATS_TIMEBASE, - ENC_STATS_TIMEBASE_IN, - ENC_STATS_PTS, - ENC_STATS_PTS_TIME, - ENC_STATS_PTS_IN, - ENC_STATS_PTS_TIME_IN, - ENC_STATS_DTS, - ENC_STATS_DTS_TIME, - ENC_STATS_SAMPLE_NUM, - ENC_STATS_NB_SAMPLES, - ENC_STATS_PKT_SIZE, - ENC_STATS_BITRATE, - ENC_STATS_AVG_BITRATE, -}; - -typedef struct EncStatsComponent { - enum EncStatsType type; - - uint8_t *str; - size_t str_len; -} EncStatsComponent; - -typedef struct EncStats { - EncStatsComponent *components; - int nb_components; - - AVIOContext *io; -} EncStats; - -extern const char *const forced_keyframes_const_names[]; - -typedef enum { - ENCODER_FINISHED = 1, - MUXER_FINISHED = 2, -} OSTFinished ; - -enum { - KF_FORCE_SOURCE = 1, - KF_FORCE_SOURCE_NO_DROP = 2, -}; - -typedef struct KeyframeForceCtx { - int type; - - int64_t ref_pts; - - // timestamps of the forced keyframes, in AV_TIME_BASE_Q - int64_t *pts; - int nb_pts; - int index; - - AVExpr *pexpr; - double expr_const_values[FKF_NB]; - - int dropped_keyframe; -} KeyframeForceCtx; - -typedef struct Encoder Encoder; - 
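/* Editorial sketch (added; not part of the deleted header): KeyframeForceCtx
 * above pairs a parsed AVExpr with the FKF_* constant slots exposed through
 * forced_keyframes_const_names, which is the mechanism behind
 * `-force_key_frames expr:...`. The standalone snippet below illustrates that
 * mechanism using only libavutil's public expression API; the constant names
 * mirror the FKF_* order, and the expression and sample values are invented
 * for illustration, not taken from this header. */
#include <stdio.h>
#include <libavutil/eval.h>

int main(void)
{
    /* Same order as FKF_N .. FKF_T above. */
    static const char *const names[] = {
        "n", "n_forced", "prev_forced_n", "prev_forced_t", "t", NULL
    };
    /* Hypothetical state: frame 42 at t=5.6s, last keyframe forced at t=0.5s. */
    double vals[] = { 42, 1, 12, 0.5, 5.6 };
    AVExpr *e = NULL;

    /* Force a keyframe whenever 5 seconds have passed since the last one. */
    if (av_expr_parse(&e, "gte(t, prev_forced_t + 5)", names,
                      NULL, NULL, NULL, NULL, 0, NULL) < 0)
        return 1;
    printf("force keyframe now: %s\n",
           av_expr_eval(e, vals, NULL) > 0 ? "yes" : "no");
    av_expr_free(e);
    return 0;
}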
-typedef struct OutputStream { - const AVClass *class; - - enum AVMediaType type; - - int file_index; /* file index */ - int index; /* stream index in the output file */ - - /** - * Codec parameters for packets submitted to the muxer (i.e. before - * bitstream filtering, if any). - */ - AVCodecParameters *par_in; - - /* input stream that is the source for this output stream; - * may be NULL for streams with no well-defined source, e.g. - * attachments or outputs from complex filtergraphs */ - InputStream *ist; - - AVStream *st; /* stream in the output file */ - /* dts of the last packet sent to the muxing queue, in AV_TIME_BASE_Q */ - int64_t last_mux_dts; - - // the timebase of the packets sent to the muxer - AVRational mux_timebase; - AVRational enc_timebase; - - Encoder *enc; - AVCodecContext *enc_ctx; - AVFrame *filtered_frame; - AVPacket *pkt; - int64_t last_dropped; - - /* video only */ - AVRational frame_rate; - AVRational max_frame_rate; - enum VideoSyncMethod vsync_method; - int is_cfr; - int force_fps; - int top_field_first; -#if FFMPEG_ROTATION_METADATA - int rotate_overridden; -#endif - int autoscale; - int bitexact; - int bits_per_raw_sample; -#if FFMPEG_ROTATION_METADATA - double rotate_override_value; -#endif - - AVRational frame_aspect_ratio; - - KeyframeForceCtx kf; - - /* audio only */ -#if FFMPEG_OPT_MAP_CHANNEL - int *audio_channels_map; /* list of the channels id to pick from the source stream */ - int audio_channels_mapped; /* number of channels in audio_channels_map */ -#endif - - char *logfile_prefix; - FILE *logfile; - - OutputFilter *filter; - char *avfilter; - - AVDictionary *encoder_opts; - AVDictionary *sws_dict; - AVDictionary *swr_opts; - char *apad; - OSTFinished finished; /* no more packets should be written for this stream */ - int unavailable; /* true if the steram is unavailable (possibly temporarily) */ - - // init_output_stream() has been called for this stream - // The encoder and the bitstream filters have been initialized and the stream - // parameters are set in the AVStream. - int initialized; - - int inputs_done; - - const char *attachment_filename; - - int keep_pix_fmt; - - /* stats */ - // number of packets send to the muxer - atomic_uint_least64_t packets_written; - // number of frames/samples sent to the encoder - uint64_t frames_encoded; - uint64_t samples_encoded; - // number of packets received from the encoder - uint64_t packets_encoded; - - /* packet quality factor */ - int quality; - - /* packet picture type */ - int pict_type; - - /* frame encode sum of squared error values */ - int64_t error[4]; - - int sq_idx_encode; - int sq_idx_mux; - - EncStats enc_stats_pre; - EncStats enc_stats_post; - - /* - * bool on whether this stream should be utilized for splitting - * subtitles utilizing fix_sub_duration at random access points. 
- */ - unsigned int fix_sub_duration_heartbeat; -} OutputStream; - -typedef struct OutputFile { - const AVClass *class; - - int index; - - const AVOutputFormat *format; - const char *url; - - OutputStream **streams; - int nb_streams; - - SyncQueue *sq_encode; - - int64_t recording_time; ///< desired length of the resulting file in microseconds == AV_TIME_BASE units - int64_t start_time; ///< start time in microseconds == AV_TIME_BASE units - - int shortest; - int bitexact; -} OutputFile; - -// optionally attached as opaque_ref to decoded AVFrames -typedef struct FrameData { - uint64_t idx; - int64_t pts; - AVRational tb; -} FrameData; - -extern InputFile **input_files; -extern int nb_input_files; - -extern OutputFile **output_files; -extern int nb_output_files; - -extern FilterGraph **filtergraphs; -extern int nb_filtergraphs; - -extern char *vstats_filename; -extern char *sdp_filename; - -extern float audio_drift_threshold; -extern float dts_delta_threshold; -extern float dts_error_threshold; - -extern enum VideoSyncMethod video_sync_method; -extern float frame_drop_threshold; -extern int do_benchmark; -extern int do_benchmark_all; -extern int do_hex_dump; -extern int do_pkt_dump; -extern int copy_ts; -extern int start_at_zero; -extern int copy_tb; -extern int debug_ts; -extern int exit_on_error; -extern int abort_on_flags; -extern int print_stats; -extern int64_t stats_period; -extern int stdin_interaction; -extern AVIOContext *progress_avio; -extern float max_error_rate; - -extern char *filter_nbthreads; -extern int filter_complex_nbthreads; -extern int vstats_version; -extern int auto_conversion_filters; - -extern const AVIOInterruptCB int_cb; - -extern const OptionDef options[]; -extern HWDevice *filter_hw_device; - -extern unsigned nb_output_dumped; - -extern int ignore_unknown_streams; -extern int copy_unknown_streams; - -extern int recast_media; - -extern FILE *vstats_file; - -extern int64_t nb_frames_dup; -extern int64_t nb_frames_drop; - -#if FFMPEG_OPT_PSNR -extern int do_psnr; -#endif - -void term_init(void); -void term_exit(void); - -void show_usage(void); - -void remove_avoptions(AVDictionary **a, AVDictionary *b); -void assert_avoptions(AVDictionary *m); - -void assert_file_overwrite(const char *filename); -char *file_read(const char *filename); -AVDictionary *strip_specifiers(const AVDictionary *dict); -const AVCodec *find_codec_or_die(void *logctx, const char *name, - enum AVMediaType type, int encoder); -int parse_and_set_vsync(const char *arg, int *vsync_var, int file_idx, int st_idx, int is_global); - -int configure_filtergraph(FilterGraph *fg); -void check_filter_outputs(void); -int filtergraph_is_simple(FilterGraph *fg); -int init_simple_filtergraph(InputStream *ist, OutputStream *ost); -int init_complex_filtergraph(FilterGraph *fg); - -void sub2video_update(InputStream *ist, int64_t heartbeat_pts, AVSubtitle *sub); - -int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference); -int ifilter_send_eof(InputFilter *ifilter, int64_t pts); - -int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame *frame); -int ifilter_parameters_from_codecpar(InputFilter *ifilter, AVCodecParameters *par); -int ifilter_has_all_input_formats(FilterGraph *fg); - -/** - * Create a new filtergraph in the global filtergraph list. - * - * @param graph_desc Graph description; an av_malloc()ed string, filtergraph - * takes ownership of it. 
- */ -FilterGraph *fg_create(char *graph_desc); - -void fg_free(FilterGraph **pfg); - -/** - * Perform a step of transcoding for the specified filter graph. - * - * @param[in] graph filter graph to consider - * @param[out] best_ist input stream where a frame would allow to continue - * @return 0 for success, <0 for error - */ -int fg_transcode_step(FilterGraph *graph, InputStream **best_ist); - -/** - * Get and encode new output from any of the filtergraphs, without causing - * activity. - * - * @return 0 for success, <0 for severe errors - */ -int reap_filters(int flush); - -int ffmpeg_parse_options(int argc, char **argv); - -void enc_stats_write(OutputStream *ost, EncStats *es, - const AVFrame *frame, const AVPacket *pkt, - uint64_t frame_num); - -HWDevice *hw_device_get_by_name(const char *name); -int hw_device_init_from_string(const char *arg, HWDevice **dev); -void hw_device_free_all(void); - -int hw_device_setup_for_decode(InputStream *ist); -int hw_device_setup_for_encode(OutputStream *ost); -/** - * Get a hardware device to be used with this filtergraph. - * The returned reference is owned by the callee, the caller - * must ref it explicitly for long-term use. - */ -AVBufferRef *hw_device_for_filter(void); - -int hwaccel_decode_init(AVCodecContext *avctx); - -int dec_open(InputStream *ist); - -int enc_alloc(Encoder **penc, const AVCodec *codec); -void enc_free(Encoder **penc); - -int enc_open(OutputStream *ost, AVFrame *frame); -void enc_subtitle(OutputFile *of, OutputStream *ost, AVSubtitle *sub); -void enc_frame(OutputStream *ost, AVFrame *frame); -void enc_flush(void); - -/* - * Initialize muxing state for the given stream, should be called - * after the codec/streamcopy setup has been done. - * - * Open the muxer once all the streams have been initialized. - */ -int of_stream_init(OutputFile *of, OutputStream *ost); -int of_write_trailer(OutputFile *of); -int of_open(const OptionsContext *o, const char *filename); -void of_close(OutputFile **pof); - -void of_enc_stats_close(void); - -/* - * Send a single packet to the output, applying any bitstream filters - * associated with the output stream. This may result in any number - * of packets actually being written, depending on what bitstream - * filters are applied. The supplied packet is consumed and will be - * blank (as if newly-allocated) when this function returns. - * - * If eof is set, instead indicate EOF to all bitstream filters and - * therefore flush any delayed packets to the output. A blank packet - * must be supplied in this case. - */ -void of_output_packet(OutputFile *of, AVPacket *pkt, OutputStream *ost, int eof); - -/** - * @param dts predicted packet dts in AV_TIME_BASE_Q - */ -void of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts); - -int64_t of_filesize(OutputFile *of); - -int ifile_open(const OptionsContext *o, const char *filename); -void ifile_close(InputFile **f); - -/** - * Get next input packet from the demuxer. 
- * - * @param pkt the packet is written here when this function returns 0 - * @return - * - 0 when a packet has been read successfully - * - 1 when stream end was reached, but the stream is looped; - * caller should flush decoders and read from this demuxer again - * - a negative error code on failure - */ -int ifile_get_packet(InputFile *f, AVPacket **pkt); - -void ist_output_add(InputStream *ist, OutputStream *ost); -void ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple); - -/* iterate over all input streams in all input files; - * pass NULL to start iteration */ -InputStream *ist_iter(InputStream *prev); - -/* iterate over all output streams in all output files; - * pass NULL to start iteration */ -OutputStream *ost_iter(OutputStream *prev); - -static inline double psnr(double d) -{ - return -10.0 * log10(d); -} - -void close_output_stream(OutputStream *ost); -int trigger_fix_sub_duration_heartbeat(OutputStream *ost, const AVPacket *pkt); -void update_benchmark(const char *fmt, ...); - -/** - * Merge two return codes - return one of the error codes if at least one of - * them was negative, 0 otherwise. - * Currently just picks the first one, eventually we might want to do something - * more sophisticated, like sorting them by priority. - */ -static inline int err_merge(int err0, int err1) -{ - return (err0 < 0) ? err0 : FFMIN(err1, 0); -} - -#define SPECIFIER_OPT_FMT_str "%s" -#define SPECIFIER_OPT_FMT_i "%i" -#define SPECIFIER_OPT_FMT_i64 "%"PRId64 -#define SPECIFIER_OPT_FMT_ui64 "%"PRIu64 -#define SPECIFIER_OPT_FMT_f "%f" -#define SPECIFIER_OPT_FMT_dbl "%lf" - -#define WARN_MULTIPLE_OPT_USAGE(name, type, so, st)\ -{\ - char namestr[128] = "";\ - const char *spec = so->specifier && so->specifier[0] ? so->specifier : "";\ - for (int _i = 0; opt_name_##name[_i]; _i++)\ - av_strlcatf(namestr, sizeof(namestr), "-%s%s", opt_name_##name[_i], opt_name_##name[_i+1] ? (opt_name_##name[_i+2] ? ", " : " or ") : "");\ - av_log(NULL, AV_LOG_WARNING, "Multiple %s options specified for stream %d, only the last option '-%s%s%s "SPECIFIER_OPT_FMT_##type"' will be used.\n",\ - namestr, st->index, opt_name_##name[0], spec[0] ? 
":" : "", spec, so->u.type);\ -} - -#define MATCH_PER_STREAM_OPT(name, type, outvar, fmtctx, st)\ -{\ - int _ret, _matches = 0;\ - SpecifierOpt *so;\ - for (int _i = 0; _i < o->nb_ ## name; _i++) {\ - char *spec = o->name[_i].specifier;\ - if ((_ret = check_stream_specifier(fmtctx, st, spec)) > 0) {\ - outvar = o->name[_i].u.type;\ - so = &o->name[_i];\ - _matches++;\ - } else if (_ret < 0)\ - exit_program(1);\ - }\ - if (_matches > 1)\ - WARN_MULTIPLE_OPT_USAGE(name, type, so, st);\ -} - -#define MATCH_PER_TYPE_OPT(name, type, outvar, fmtctx, mediatype)\ -{\ - int i;\ - for (i = 0; i < o->nb_ ## name; i++) {\ - char *spec = o->name[i].specifier;\ - if (!strcmp(spec, mediatype))\ - outvar = o->name[i].u.type;\ - }\ -} - -extern const char * const opt_name_codec_names[]; -extern const char * const opt_name_codec_tags[]; -extern const char * const opt_name_frame_rates[]; -extern const char * const opt_name_top_field_first[]; - -#endif /* FFTOOLS_FFMPEG_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1dec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1dec.c deleted file mode 100644 index 807852e317e8546c3f98ee7dbf8901af784fca7e..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1dec.c +++ /dev/null @@ -1,1471 +0,0 @@ -/* - * AV1 video decoder - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config_components.h" - -#include "libavutil/hdr_dynamic_metadata.h" -#include "libavutil/film_grain_params.h" -#include "libavutil/mastering_display_metadata.h" -#include "libavutil/pixdesc.h" -#include "libavutil/opt.h" -#include "avcodec.h" -#include "av1dec.h" -#include "atsc_a53.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "decode.h" -#include "hwconfig.h" -#include "profiles.h" -#include "thread.h" - -/**< same with Div_Lut defined in spec 7.11.3.7 */ -static const uint16_t div_lut[AV1_DIV_LUT_NUM] = { - 16384, 16320, 16257, 16194, 16132, 16070, 16009, 15948, 15888, 15828, 15768, - 15709, 15650, 15592, 15534, 15477, 15420, 15364, 15308, 15252, 15197, 15142, - 15087, 15033, 14980, 14926, 14873, 14821, 14769, 14717, 14665, 14614, 14564, - 14513, 14463, 14413, 14364, 14315, 14266, 14218, 14170, 14122, 14075, 14028, - 13981, 13935, 13888, 13843, 13797, 13752, 13707, 13662, 13618, 13574, 13530, - 13487, 13443, 13400, 13358, 13315, 13273, 13231, 13190, 13148, 13107, 13066, - 13026, 12985, 12945, 12906, 12866, 12827, 12788, 12749, 12710, 12672, 12633, - 12596, 12558, 12520, 12483, 12446, 12409, 12373, 12336, 12300, 12264, 12228, - 12193, 12157, 12122, 12087, 12053, 12018, 11984, 11950, 11916, 11882, 11848, - 11815, 11782, 11749, 11716, 11683, 11651, 11619, 11586, 11555, 11523, 11491, - 11460, 11429, 11398, 11367, 11336, 11305, 11275, 11245, 11215, 11185, 11155, - 11125, 11096, 11067, 11038, 11009, 10980, 10951, 10923, 10894, 10866, 10838, - 10810, 10782, 10755, 10727, 10700, 10673, 10645, 10618, 10592, 10565, 10538, - 10512, 10486, 10460, 10434, 10408, 10382, 10356, 10331, 10305, 10280, 10255, - 10230, 10205, 10180, 10156, 10131, 10107, 10082, 10058, 10034, 10010, 9986, - 9963, 9939, 9916, 9892, 9869, 9846, 9823, 9800, 9777, 9754, 9732, - 9709, 9687, 9664, 9642, 9620, 9598, 9576, 9554, 9533, 9511, 9489, - 9468, 9447, 9425, 9404, 9383, 9362, 9341, 9321, 9300, 9279, 9259, - 9239, 9218, 9198, 9178, 9158, 9138, 9118, 9098, 9079, 9059, 9039, - 9020, 9001, 8981, 8962, 8943, 8924, 8905, 8886, 8867, 8849, 8830, - 8812, 8793, 8775, 8756, 8738, 8720, 8702, 8684, 8666, 8648, 8630, - 8613, 8595, 8577, 8560, 8542, 8525, 8508, 8490, 8473, 8456, 8439, - 8422, 8405, 8389, 8372, 8355, 8339, 8322, 8306, 8289, 8273, 8257, - 8240, 8224, 8208, 8192 -}; - -static uint32_t inverse_recenter(int r, uint32_t v) -{ - if (v > 2 * r) - return v; - else if (v & 1) - return r - ((v + 1) >> 1); - else - return r + (v >> 1); -} - -static uint32_t decode_unsigned_subexp_with_ref(uint32_t sub_exp, - int mx, int r) -{ - if ((r << 1) <= mx) { - return inverse_recenter(r, sub_exp); - } else { - return mx - 1 - inverse_recenter(mx - 1 - r, sub_exp); - } -} - -static int32_t decode_signed_subexp_with_ref(uint32_t sub_exp, int low, - int high, int r) -{ - int32_t x = decode_unsigned_subexp_with_ref(sub_exp, high - low, r - low); - return x + low; -} - -static void read_global_param(AV1DecContext *s, int type, int ref, int idx) -{ - uint8_t primary_frame, prev_frame; - uint32_t abs_bits, prec_bits, round, prec_diff, sub, mx; - int32_t r, prev_gm_param; - - primary_frame = s->raw_frame_header->primary_ref_frame; - prev_frame = s->raw_frame_header->ref_frame_idx[primary_frame]; - abs_bits = AV1_GM_ABS_ALPHA_BITS; - prec_bits = AV1_GM_ALPHA_PREC_BITS; - - /* 
setup_past_independence() sets PrevGmParams to default values. We can - * simply point to the current's frame gm_params as they will be initialized - * with defaults at this point. - */ - if (s->raw_frame_header->primary_ref_frame == AV1_PRIMARY_REF_NONE) - prev_gm_param = s->cur_frame.gm_params[ref][idx]; - else - prev_gm_param = s->ref[prev_frame].gm_params[ref][idx]; - - if (idx < 2) { - if (type == AV1_WARP_MODEL_TRANSLATION) { - abs_bits = AV1_GM_ABS_TRANS_ONLY_BITS - - !s->raw_frame_header->allow_high_precision_mv; - prec_bits = AV1_GM_TRANS_ONLY_PREC_BITS - - !s->raw_frame_header->allow_high_precision_mv; - } else { - abs_bits = AV1_GM_ABS_TRANS_BITS; - prec_bits = AV1_GM_TRANS_PREC_BITS; - } - } - round = (idx % 3) == 2 ? (1 << AV1_WARPEDMODEL_PREC_BITS) : 0; - prec_diff = AV1_WARPEDMODEL_PREC_BITS - prec_bits; - sub = (idx % 3) == 2 ? (1 << prec_bits) : 0; - mx = 1 << abs_bits; - r = (prev_gm_param >> prec_diff) - sub; - - s->cur_frame.gm_params[ref][idx] = - (decode_signed_subexp_with_ref(s->raw_frame_header->gm_params[ref][idx], - -mx, mx + 1, r) << prec_diff) + round; -} - -static uint64_t round_two(uint64_t x, uint16_t n) -{ - if (n == 0) - return x; - return ((x + ((uint64_t)1 << (n - 1))) >> n); -} - -static int64_t round_two_signed(int64_t x, uint16_t n) -{ - return ((x<0) ? -((int64_t)round_two(-x, n)) : (int64_t)round_two(x, n)); -} - -/** - * Resolve divisor process. - * see spec 7.11.3.7 - */ -static int16_t resolve_divisor(uint32_t d, uint16_t *shift) -{ - int32_t e, f; - - *shift = av_log2(d); - e = d - (1 << (*shift)); - if (*shift > AV1_DIV_LUT_BITS) - f = round_two(e, *shift - AV1_DIV_LUT_BITS); - else - f = e << (AV1_DIV_LUT_BITS - (*shift)); - - *shift += AV1_DIV_LUT_PREC_BITS; - - return div_lut[f]; -} - -/** - * check if global motion params is valid. - * see spec 7.11.3.6 - */ -static uint8_t get_shear_params_valid(AV1DecContext *s, int idx) -{ - int16_t alpha, beta, gamma, delta, divf, divs; - int64_t v, w; - int32_t *param = &s->cur_frame.gm_params[idx][0]; - if (param[2] < 0) - return 0; - - alpha = av_clip_int16(param[2] - (1 << AV1_WARPEDMODEL_PREC_BITS)); - beta = av_clip_int16(param[3]); - divf = resolve_divisor(abs(param[2]), &divs); - v = (int64_t)param[4] * (1 << AV1_WARPEDMODEL_PREC_BITS); - w = (int64_t)param[3] * param[4]; - gamma = av_clip_int16((int)round_two_signed((v * divf), divs)); - delta = av_clip_int16(param[5] - (int)round_two_signed((w * divf), divs) - (1 << AV1_WARPEDMODEL_PREC_BITS)); - - alpha = round_two_signed(alpha, AV1_WARP_PARAM_REDUCE_BITS) << AV1_WARP_PARAM_REDUCE_BITS; - beta = round_two_signed(beta, AV1_WARP_PARAM_REDUCE_BITS) << AV1_WARP_PARAM_REDUCE_BITS; - gamma = round_two_signed(gamma, AV1_WARP_PARAM_REDUCE_BITS) << AV1_WARP_PARAM_REDUCE_BITS; - delta = round_two_signed(delta, AV1_WARP_PARAM_REDUCE_BITS) << AV1_WARP_PARAM_REDUCE_BITS; - - if ((4 * abs(alpha) + 7 * abs(beta)) >= (1 << AV1_WARPEDMODEL_PREC_BITS) || - (4 * abs(gamma) + 4 * abs(delta)) >= (1 << AV1_WARPEDMODEL_PREC_BITS)) - return 0; - - return 1; -} - -/** -* update gm type/params, since cbs already implemented part of this function, -* so we don't need to full implement spec. -*/ -static void global_motion_params(AV1DecContext *s) -{ - const AV1RawFrameHeader *header = s->raw_frame_header; - int type, ref; - - for (ref = AV1_REF_FRAME_LAST; ref <= AV1_REF_FRAME_ALTREF; ref++) { - s->cur_frame.gm_type[ref] = AV1_WARP_MODEL_IDENTITY; - for (int i = 0; i < 6; i++) - s->cur_frame.gm_params[ref][i] = (i % 3 == 2) ? 
- 1 << AV1_WARPEDMODEL_PREC_BITS : 0; - } - if (header->frame_type == AV1_FRAME_KEY || - header->frame_type == AV1_FRAME_INTRA_ONLY) - return; - - for (ref = AV1_REF_FRAME_LAST; ref <= AV1_REF_FRAME_ALTREF; ref++) { - if (header->is_global[ref]) { - if (header->is_rot_zoom[ref]) { - type = AV1_WARP_MODEL_ROTZOOM; - } else { - type = header->is_translation[ref] ? AV1_WARP_MODEL_TRANSLATION - : AV1_WARP_MODEL_AFFINE; - } - } else { - type = AV1_WARP_MODEL_IDENTITY; - } - s->cur_frame.gm_type[ref] = type; - - if (type >= AV1_WARP_MODEL_ROTZOOM) { - read_global_param(s, type, ref, 2); - read_global_param(s, type, ref, 3); - if (type == AV1_WARP_MODEL_AFFINE) { - read_global_param(s, type, ref, 4); - read_global_param(s, type, ref, 5); - } else { - s->cur_frame.gm_params[ref][4] = -s->cur_frame.gm_params[ref][3]; - s->cur_frame.gm_params[ref][5] = s->cur_frame.gm_params[ref][2]; - } - } - if (type >= AV1_WARP_MODEL_TRANSLATION) { - read_global_param(s, type, ref, 0); - read_global_param(s, type, ref, 1); - } - if (type <= AV1_WARP_MODEL_AFFINE) { - s->cur_frame.gm_invalid[ref] = !get_shear_params_valid(s, ref); - } - } -} - -static int get_relative_dist(const AV1RawSequenceHeader *seq, - unsigned int a, unsigned int b) -{ - unsigned int diff = a - b; - unsigned int m = 1 << seq->order_hint_bits_minus_1; - return (diff & (m - 1)) - (diff & m); -} - -static void skip_mode_params(AV1DecContext *s) -{ - const AV1RawFrameHeader *header = s->raw_frame_header; - const AV1RawSequenceHeader *seq = s->raw_seq; - - int forward_idx, backward_idx; - int forward_hint, backward_hint; - int second_forward_idx, second_forward_hint; - int ref_hint, dist, i; - - if (!header->skip_mode_present) - return; - - forward_idx = -1; - backward_idx = -1; - for (i = 0; i < AV1_REFS_PER_FRAME; i++) { - ref_hint = s->ref[header->ref_frame_idx[i]].raw_frame_header->order_hint; - dist = get_relative_dist(seq, ref_hint, header->order_hint); - if (dist < 0) { - if (forward_idx < 0 || - get_relative_dist(seq, ref_hint, forward_hint) > 0) { - forward_idx = i; - forward_hint = ref_hint; - } - } else if (dist > 0) { - if (backward_idx < 0 || - get_relative_dist(seq, ref_hint, backward_hint) < 0) { - backward_idx = i; - backward_hint = ref_hint; - } - } - } - - if (forward_idx < 0) { - return; - } else if (backward_idx >= 0) { - s->cur_frame.skip_mode_frame_idx[0] = - AV1_REF_FRAME_LAST + FFMIN(forward_idx, backward_idx); - s->cur_frame.skip_mode_frame_idx[1] = - AV1_REF_FRAME_LAST + FFMAX(forward_idx, backward_idx); - return; - } - - second_forward_idx = -1; - for (i = 0; i < AV1_REFS_PER_FRAME; i++) { - ref_hint = s->ref[header->ref_frame_idx[i]].raw_frame_header->order_hint; - if (get_relative_dist(seq, ref_hint, forward_hint) < 0) { - if (second_forward_idx < 0 || - get_relative_dist(seq, ref_hint, second_forward_hint) > 0) { - second_forward_idx = i; - second_forward_hint = ref_hint; - } - } - } - - if (second_forward_idx < 0) - return; - - s->cur_frame.skip_mode_frame_idx[0] = - AV1_REF_FRAME_LAST + FFMIN(forward_idx, second_forward_idx); - s->cur_frame.skip_mode_frame_idx[1] = - AV1_REF_FRAME_LAST + FFMAX(forward_idx, second_forward_idx); -} - -static void coded_lossless_param(AV1DecContext *s) -{ - const AV1RawFrameHeader *header = s->raw_frame_header; - int i; - - if (header->delta_q_y_dc || header->delta_q_u_ac || - header->delta_q_u_dc || header->delta_q_v_ac || - header->delta_q_v_dc) { - s->cur_frame.coded_lossless = 0; - return; - } - - s->cur_frame.coded_lossless = 1; - for (i = 0; i < AV1_MAX_SEGMENTS; i++) { - int 
qindex; - if (header->feature_enabled[i][AV1_SEG_LVL_ALT_Q]) { - qindex = (header->base_q_idx + - header->feature_value[i][AV1_SEG_LVL_ALT_Q]); - } else { - qindex = header->base_q_idx; - } - qindex = av_clip_uintp2(qindex, 8); - - if (qindex) { - s->cur_frame.coded_lossless = 0; - return; - } - } -} - -static void load_grain_params(AV1DecContext *s) -{ - const AV1RawFrameHeader *header = s->raw_frame_header; - const AV1RawFilmGrainParams *film_grain = &header->film_grain, *src; - AV1RawFilmGrainParams *dst = &s->cur_frame.film_grain; - - if (!film_grain->apply_grain) - return; - - if (film_grain->update_grain) { - memcpy(dst, film_grain, sizeof(*dst)); - return; - } - - src = &s->ref[film_grain->film_grain_params_ref_idx].film_grain; - - memcpy(dst, src, sizeof(*dst)); - dst->grain_seed = film_grain->grain_seed; -} - -static int init_tile_data(AV1DecContext *s) - -{ - int cur_tile_num = - s->raw_frame_header->tile_cols * s->raw_frame_header->tile_rows; - if (s->tile_num < cur_tile_num) { - int ret = av_reallocp_array(&s->tile_group_info, cur_tile_num, - sizeof(TileGroupInfo)); - if (ret < 0) { - s->tile_num = 0; - return ret; - } - } - s->tile_num = cur_tile_num; - - return 0; -} - -static int get_tiles_info(AVCodecContext *avctx, const AV1RawTileGroup *tile_group) -{ - AV1DecContext *s = avctx->priv_data; - GetByteContext gb; - uint16_t tile_num, tile_row, tile_col; - uint32_t size = 0, size_bytes = 0; - - bytestream2_init(&gb, tile_group->tile_data.data, - tile_group->tile_data.data_size); - s->tg_start = tile_group->tg_start; - s->tg_end = tile_group->tg_end; - - for (tile_num = tile_group->tg_start; tile_num <= tile_group->tg_end; tile_num++) { - tile_row = tile_num / s->raw_frame_header->tile_cols; - tile_col = tile_num % s->raw_frame_header->tile_cols; - - if (tile_num == tile_group->tg_end) { - s->tile_group_info[tile_num].tile_size = bytestream2_get_bytes_left(&gb); - s->tile_group_info[tile_num].tile_offset = bytestream2_tell(&gb); - s->tile_group_info[tile_num].tile_row = tile_row; - s->tile_group_info[tile_num].tile_column = tile_col; - return 0; - } - size_bytes = s->raw_frame_header->tile_size_bytes_minus1 + 1; - if (bytestream2_get_bytes_left(&gb) < size_bytes) - return AVERROR_INVALIDDATA; - size = 0; - for (int i = 0; i < size_bytes; i++) - size |= bytestream2_get_byteu(&gb) << 8 * i; - if (bytestream2_get_bytes_left(&gb) <= size) - return AVERROR_INVALIDDATA; - size++; - - s->tile_group_info[tile_num].tile_size = size; - s->tile_group_info[tile_num].tile_offset = bytestream2_tell(&gb); - s->tile_group_info[tile_num].tile_row = tile_row; - s->tile_group_info[tile_num].tile_column = tile_col; - - bytestream2_skipu(&gb, size); - } - - return 0; - -} - -static int get_pixel_format(AVCodecContext *avctx) -{ - AV1DecContext *s = avctx->priv_data; - const AV1RawSequenceHeader *seq = s->raw_seq; - uint8_t bit_depth; - int ret; - enum AVPixelFormat pix_fmt = AV_PIX_FMT_NONE; -#define HWACCEL_MAX (CONFIG_AV1_DXVA2_HWACCEL + \ - CONFIG_AV1_D3D11VA_HWACCEL * 2 + \ - CONFIG_AV1_NVDEC_HWACCEL + \ - CONFIG_AV1_VAAPI_HWACCEL + \ - CONFIG_AV1_VDPAU_HWACCEL) - enum AVPixelFormat pix_fmts[HWACCEL_MAX + 2], *fmtp = pix_fmts; - - if (seq->seq_profile == 2 && seq->color_config.high_bitdepth) - bit_depth = seq->color_config.twelve_bit ? 12 : 10; - else if (seq->seq_profile <= 2) - bit_depth = seq->color_config.high_bitdepth ? 
10 : 8; - else { - av_log(avctx, AV_LOG_ERROR, - "Unknown AV1 profile %d.\n", seq->seq_profile); - return -1; - } - - if (!seq->color_config.mono_chrome) { - // 4:4:4 x:0 y:0, 4:2:2 x:1 y:0, 4:2:0 x:1 y:1 - if (seq->color_config.subsampling_x == 0 && - seq->color_config.subsampling_y == 0) { - if (bit_depth == 8) - pix_fmt = AV_PIX_FMT_YUV444P; - else if (bit_depth == 10) - pix_fmt = AV_PIX_FMT_YUV444P10; - else if (bit_depth == 12) - pix_fmt = AV_PIX_FMT_YUV444P12; - else - av_log(avctx, AV_LOG_WARNING, "Unknown AV1 pixel format.\n"); - } else if (seq->color_config.subsampling_x == 1 && - seq->color_config.subsampling_y == 0) { - if (bit_depth == 8) - pix_fmt = AV_PIX_FMT_YUV422P; - else if (bit_depth == 10) - pix_fmt = AV_PIX_FMT_YUV422P10; - else if (bit_depth == 12) - pix_fmt = AV_PIX_FMT_YUV422P12; - else - av_log(avctx, AV_LOG_WARNING, "Unknown AV1 pixel format.\n"); - } else if (seq->color_config.subsampling_x == 1 && - seq->color_config.subsampling_y == 1) { - if (bit_depth == 8) - pix_fmt = AV_PIX_FMT_YUV420P; - else if (bit_depth == 10) - pix_fmt = AV_PIX_FMT_YUV420P10; - else if (bit_depth == 12) - pix_fmt = AV_PIX_FMT_YUV420P12; - else - av_log(avctx, AV_LOG_WARNING, "Unknown AV1 pixel format.\n"); - } - } else { - if (bit_depth == 8) - pix_fmt = AV_PIX_FMT_GRAY8; - else if (bit_depth == 10) - pix_fmt = AV_PIX_FMT_GRAY10; - else if (bit_depth == 12) - pix_fmt = AV_PIX_FMT_GRAY12; - else - av_log(avctx, AV_LOG_WARNING, "Unknown AV1 pixel format.\n"); - } - - av_log(avctx, AV_LOG_DEBUG, "AV1 decode get format: %s.\n", - av_get_pix_fmt_name(pix_fmt)); - - if (pix_fmt == AV_PIX_FMT_NONE) - return -1; - - switch (pix_fmt) { - case AV_PIX_FMT_YUV420P: -#if CONFIG_AV1_DXVA2_HWACCEL - *fmtp++ = AV_PIX_FMT_DXVA2_VLD; -#endif -#if CONFIG_AV1_D3D11VA_HWACCEL - *fmtp++ = AV_PIX_FMT_D3D11VA_VLD; - *fmtp++ = AV_PIX_FMT_D3D11; -#endif -#if CONFIG_AV1_NVDEC_HWACCEL - *fmtp++ = AV_PIX_FMT_CUDA; -#endif -#if CONFIG_AV1_VAAPI_HWACCEL - *fmtp++ = AV_PIX_FMT_VAAPI; -#endif -#if CONFIG_AV1_VDPAU_HWACCEL - *fmtp++ = AV_PIX_FMT_VDPAU; -#endif - break; - case AV_PIX_FMT_YUV420P10: -#if CONFIG_AV1_DXVA2_HWACCEL - *fmtp++ = AV_PIX_FMT_DXVA2_VLD; -#endif -#if CONFIG_AV1_D3D11VA_HWACCEL - *fmtp++ = AV_PIX_FMT_D3D11VA_VLD; - *fmtp++ = AV_PIX_FMT_D3D11; -#endif -#if CONFIG_AV1_NVDEC_HWACCEL - *fmtp++ = AV_PIX_FMT_CUDA; -#endif -#if CONFIG_AV1_VAAPI_HWACCEL - *fmtp++ = AV_PIX_FMT_VAAPI; -#endif -#if CONFIG_AV1_VDPAU_HWACCEL - *fmtp++ = AV_PIX_FMT_VDPAU; -#endif - break; - case AV_PIX_FMT_GRAY8: -#if CONFIG_AV1_NVDEC_HWACCEL - *fmtp++ = AV_PIX_FMT_CUDA; -#endif - break; - case AV_PIX_FMT_GRAY10: -#if CONFIG_AV1_NVDEC_HWACCEL - *fmtp++ = AV_PIX_FMT_CUDA; -#endif - break; - } - - *fmtp++ = pix_fmt; - *fmtp = AV_PIX_FMT_NONE; - - ret = ff_thread_get_format(avctx, pix_fmts); - if (ret < 0) - return ret; - - /** - * check if the HW accel is inited correctly. If not, return un-implemented. - * Since now the av1 decoder doesn't support native decode, if it will be - * implemented in the future, need remove this check. 
- */ - if (!avctx->hwaccel) { - av_log(avctx, AV_LOG_ERROR, "Your platform doesn't support" - " hardware accelerated AV1 decoding.\n"); - return AVERROR(ENOSYS); - } - - s->pix_fmt = pix_fmt; - avctx->pix_fmt = ret; - - return 0; -} - -static void av1_frame_unref(AVCodecContext *avctx, AV1Frame *f) -{ - ff_thread_release_buffer(avctx, f->f); - av_buffer_unref(&f->hwaccel_priv_buf); - f->hwaccel_picture_private = NULL; - av_buffer_unref(&f->header_ref); - f->raw_frame_header = NULL; - f->spatial_id = f->temporal_id = 0; - memset(f->skip_mode_frame_idx, 0, - 2 * sizeof(uint8_t)); - memset(&f->film_grain, 0, sizeof(f->film_grain)); - f->coded_lossless = 0; -} - -static int av1_frame_ref(AVCodecContext *avctx, AV1Frame *dst, const AV1Frame *src) -{ - int ret; - - ret = av_buffer_replace(&dst->header_ref, src->header_ref); - if (ret < 0) - return ret; - - dst->raw_frame_header = src->raw_frame_header; - - if (!src->f->buf[0]) - return 0; - - ret = av_frame_ref(dst->f, src->f); - if (ret < 0) - goto fail; - - if (src->hwaccel_picture_private) { - dst->hwaccel_priv_buf = av_buffer_ref(src->hwaccel_priv_buf); - if (!dst->hwaccel_priv_buf) - goto fail; - dst->hwaccel_picture_private = dst->hwaccel_priv_buf->data; - } - - dst->spatial_id = src->spatial_id; - dst->temporal_id = src->temporal_id; - memcpy(dst->gm_invalid, - src->gm_invalid, - AV1_NUM_REF_FRAMES * sizeof(uint8_t)); - memcpy(dst->gm_type, - src->gm_type, - AV1_NUM_REF_FRAMES * sizeof(uint8_t)); - memcpy(dst->gm_params, - src->gm_params, - AV1_NUM_REF_FRAMES * 6 * sizeof(int32_t)); - memcpy(dst->skip_mode_frame_idx, - src->skip_mode_frame_idx, - 2 * sizeof(uint8_t)); - memcpy(&dst->film_grain, - &src->film_grain, - sizeof(dst->film_grain)); - dst->coded_lossless = src->coded_lossless; - - return 0; - -fail: - av1_frame_unref(avctx, dst); - return AVERROR(ENOMEM); -} - -static av_cold int av1_decode_free(AVCodecContext *avctx) -{ - AV1DecContext *s = avctx->priv_data; - AV1RawMetadataITUTT35 itut_t35; - - for (int i = 0; i < FF_ARRAY_ELEMS(s->ref); i++) { - av1_frame_unref(avctx, &s->ref[i]); - av_frame_free(&s->ref[i].f); - } - av1_frame_unref(avctx, &s->cur_frame); - av_frame_free(&s->cur_frame.f); - - av_buffer_unref(&s->seq_ref); - av_buffer_unref(&s->header_ref); - av_buffer_unref(&s->cll_ref); - av_buffer_unref(&s->mdcv_ref); - av_freep(&s->tile_group_info); - - while (s->itut_t35_fifo && av_fifo_read(s->itut_t35_fifo, &itut_t35, 1) >= 0) - av_buffer_unref(&itut_t35.payload_ref); - av_fifo_freep2(&s->itut_t35_fifo); - - ff_cbs_fragment_free(&s->current_obu); - ff_cbs_close(&s->cbc); - - return 0; -} - -static int set_context_with_sequence(AVCodecContext *avctx, - const AV1RawSequenceHeader *seq) -{ - int width = seq->max_frame_width_minus_1 + 1; - int height = seq->max_frame_height_minus_1 + 1; - - avctx->profile = seq->seq_profile; - avctx->level = seq->seq_level_idx[0]; - - avctx->color_range = - seq->color_config.color_range ? 
AVCOL_RANGE_JPEG : AVCOL_RANGE_MPEG; - avctx->color_primaries = seq->color_config.color_primaries; - avctx->colorspace = seq->color_config.color_primaries; - avctx->color_trc = seq->color_config.transfer_characteristics; - - switch (seq->color_config.chroma_sample_position) { - case AV1_CSP_VERTICAL: - avctx->chroma_sample_location = AVCHROMA_LOC_LEFT; - break; - case AV1_CSP_COLOCATED: - avctx->chroma_sample_location = AVCHROMA_LOC_TOPLEFT; - break; - } - - if (seq->film_grain_params_present) - avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; - else - avctx->properties &= ~FF_CODEC_PROPERTY_FILM_GRAIN; - - if (avctx->width != width || avctx->height != height) { - int ret = ff_set_dimensions(avctx, width, height); - if (ret < 0) - return ret; - } - avctx->sample_aspect_ratio = (AVRational) { 1, 1 }; - - if (seq->timing_info.num_units_in_display_tick && - seq->timing_info.time_scale) { - av_reduce(&avctx->framerate.den, &avctx->framerate.num, - seq->timing_info.num_units_in_display_tick, - seq->timing_info.time_scale, - INT_MAX); - if (seq->timing_info.equal_picture_interval) - avctx->ticks_per_frame = seq->timing_info.num_ticks_per_picture_minus_1 + 1; - } - - return 0; -} - -static int update_context_with_frame_header(AVCodecContext *avctx, - const AV1RawFrameHeader *header) -{ - AVRational aspect_ratio; - int width = header->frame_width_minus_1 + 1; - int height = header->frame_height_minus_1 + 1; - int r_width = header->render_width_minus_1 + 1; - int r_height = header->render_height_minus_1 + 1; - int ret; - - if (avctx->width != width || avctx->height != height) { - ret = ff_set_dimensions(avctx, width, height); - if (ret < 0) - return ret; - } - - av_reduce(&aspect_ratio.num, &aspect_ratio.den, - (int64_t)height * r_width, - (int64_t)width * r_height, - INT_MAX); - - if (av_cmp_q(avctx->sample_aspect_ratio, aspect_ratio)) { - ret = ff_set_sar(avctx, aspect_ratio); - if (ret < 0) - return ret; - } - - return 0; -} - -static const CodedBitstreamUnitType decompose_unit_types[] = { - AV1_OBU_FRAME, - AV1_OBU_FRAME_HEADER, - AV1_OBU_METADATA, - AV1_OBU_REDUNDANT_FRAME_HEADER, - AV1_OBU_SEQUENCE_HEADER, - AV1_OBU_TEMPORAL_DELIMITER, - AV1_OBU_TILE_GROUP, -}; - -static av_cold int av1_decode_init(AVCodecContext *avctx) -{ - AV1DecContext *s = avctx->priv_data; - AV1RawSequenceHeader *seq; - int ret; - - s->avctx = avctx; - s->pix_fmt = AV_PIX_FMT_NONE; - - for (int i = 0; i < FF_ARRAY_ELEMS(s->ref); i++) { - s->ref[i].f = av_frame_alloc(); - if (!s->ref[i].f) { - av_log(avctx, AV_LOG_ERROR, - "Failed to allocate reference frame buffer %d.\n", i); - return AVERROR(ENOMEM); - } - } - - s->cur_frame.f = av_frame_alloc(); - if (!s->cur_frame.f) { - av_log(avctx, AV_LOG_ERROR, - "Failed to allocate current frame buffer.\n"); - return AVERROR(ENOMEM); - } - - ret = ff_cbs_init(&s->cbc, AV_CODEC_ID_AV1, avctx); - if (ret < 0) - return ret; - - s->cbc->decompose_unit_types = decompose_unit_types; - s->cbc->nb_decompose_unit_types = FF_ARRAY_ELEMS(decompose_unit_types); - - s->itut_t35_fifo = av_fifo_alloc2(1, sizeof(AV1RawMetadataITUTT35), - AV_FIFO_FLAG_AUTO_GROW); - if (!s->itut_t35_fifo) - return AVERROR(ENOMEM); - - av_opt_set_int(s->cbc->priv_data, "operating_point", s->operating_point, 0); - - if (avctx->extradata && avctx->extradata_size) { - ret = ff_cbs_read_extradata_from_codec(s->cbc, - &s->current_obu, - avctx); - if (ret < 0) { - av_log(avctx, AV_LOG_WARNING, "Failed to read extradata.\n"); - return ret; - } - - seq = ((CodedBitstreamAV1Context 
*)(s->cbc->priv_data))->sequence_header; - if (!seq) { - av_log(avctx, AV_LOG_WARNING, "No sequence header available.\n"); - goto end; - } - - ret = set_context_with_sequence(avctx, seq); - if (ret < 0) { - av_log(avctx, AV_LOG_WARNING, "Failed to set decoder context.\n"); - goto end; - } - - end: - ff_cbs_fragment_reset(&s->current_obu); - } - - return ret; -} - -static int av1_frame_alloc(AVCodecContext *avctx, AV1Frame *f) -{ - AV1DecContext *s = avctx->priv_data; - AV1RawFrameHeader *header= s->raw_frame_header; - AVFrame *frame; - int ret; - - ret = update_context_with_frame_header(avctx, header); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to update context with frame header\n"); - return ret; - } - - if ((ret = ff_thread_get_buffer(avctx, f->f, AV_GET_BUFFER_FLAG_REF)) < 0) - goto fail; - - frame = f->f; - frame->key_frame = header->frame_type == AV1_FRAME_KEY; - - switch (header->frame_type) { - case AV1_FRAME_KEY: - case AV1_FRAME_INTRA_ONLY: - frame->pict_type = AV_PICTURE_TYPE_I; - break; - case AV1_FRAME_INTER: - frame->pict_type = AV_PICTURE_TYPE_P; - break; - case AV1_FRAME_SWITCH: - frame->pict_type = AV_PICTURE_TYPE_SP; - break; - } - - if (avctx->hwaccel) { - const AVHWAccel *hwaccel = avctx->hwaccel; - if (hwaccel->frame_priv_data_size) { - f->hwaccel_priv_buf = - av_buffer_allocz(hwaccel->frame_priv_data_size); - if (!f->hwaccel_priv_buf) { - ret = AVERROR(ENOMEM); - goto fail; - } - f->hwaccel_picture_private = f->hwaccel_priv_buf->data; - } - } - return 0; - -fail: - av1_frame_unref(avctx, f); - return ret; -} - -static int export_itut_t35(AVCodecContext *avctx, AVFrame *frame, - const AV1RawMetadataITUTT35 *itut_t35) -{ - GetByteContext gb; - int ret, provider_code; - - bytestream2_init(&gb, itut_t35->payload, itut_t35->payload_size); - - provider_code = bytestream2_get_be16(&gb); - switch (provider_code) { - case 0x31: { // atsc_provider_code - uint32_t user_identifier = bytestream2_get_be32(&gb); - switch (user_identifier) { - case MKBETAG('G', 'A', '9', '4'): { // closed captions - AVBufferRef *buf = NULL; - - ret = ff_parse_a53_cc(&buf, gb.buffer, bytestream2_get_bytes_left(&gb)); - if (ret < 0) - return ret; - if (!ret) - break; - - if (!av_frame_new_side_data_from_buf(frame, AV_FRAME_DATA_A53_CC, buf)) - av_buffer_unref(&buf); - - avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; - break; - } - default: // ignore unsupported identifiers - break; - } - break; - } - case 0x3C: { // smpte_provider_code - AVDynamicHDRPlus *hdrplus; - int provider_oriented_code = bytestream2_get_be16(&gb); - int application_identifier = bytestream2_get_byte(&gb); - - if (itut_t35->itu_t_t35_country_code != 0xB5 || - provider_oriented_code != 1 || application_identifier != 4) - break; - - hdrplus = av_dynamic_hdr_plus_create_side_data(frame); - if (!hdrplus) - return AVERROR(ENOMEM); - - ret = av_dynamic_hdr_plus_from_t35(hdrplus, gb.buffer, - bytestream2_get_bytes_left(&gb)); - if (ret < 0) - return ret; - break; - } - default: // ignore unsupported provider codes - break; - } - - return 0; -} - -static int export_metadata(AVCodecContext *avctx, AVFrame *frame) -{ - AV1DecContext *s = avctx->priv_data; - AV1RawMetadataITUTT35 itut_t35; - int ret = 0; - - if (s->mdcv) { - AVMasteringDisplayMetadata *mastering = av_mastering_display_metadata_create_side_data(frame); - if (!mastering) - return AVERROR(ENOMEM); - - for (int i = 0; i < 3; i++) { - mastering->display_primaries[i][0] = av_make_q(s->mdcv->primary_chromaticity_x[i], 1 << 16); - 
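/* Editorial note (added): the AV1 metadata_hdr_mdcv() payload stores these
 * chromaticities as 0.16 fixed point, luminance_max as 24.8 and luminance_min
 * as 18.14 fixed point, which is why the av_make_q() conversions in this
 * function use 1 << 16, 1 << 8 and 1 << 14 as denominators. */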
mastering->display_primaries[i][1] = av_make_q(s->mdcv->primary_chromaticity_y[i], 1 << 16); - } - mastering->white_point[0] = av_make_q(s->mdcv->white_point_chromaticity_x, 1 << 16); - mastering->white_point[1] = av_make_q(s->mdcv->white_point_chromaticity_y, 1 << 16); - - mastering->max_luminance = av_make_q(s->mdcv->luminance_max, 1 << 8); - mastering->min_luminance = av_make_q(s->mdcv->luminance_min, 1 << 14); - - mastering->has_primaries = 1; - mastering->has_luminance = 1; - } - - if (s->cll) { - AVContentLightMetadata *light = av_content_light_metadata_create_side_data(frame); - if (!light) - return AVERROR(ENOMEM); - - light->MaxCLL = s->cll->max_cll; - light->MaxFALL = s->cll->max_fall; - } - - while (av_fifo_read(s->itut_t35_fifo, &itut_t35, 1) >= 0) { - if (ret >= 0) - ret = export_itut_t35(avctx, frame, &itut_t35); - av_buffer_unref(&itut_t35.payload_ref); - } - - return ret; -} - -static int export_film_grain(AVCodecContext *avctx, AVFrame *frame) -{ - AV1DecContext *s = avctx->priv_data; - const AV1RawFilmGrainParams *film_grain = &s->cur_frame.film_grain; - AVFilmGrainParams *fgp; - AVFilmGrainAOMParams *aom; - - if (!film_grain->apply_grain) - return 0; - - fgp = av_film_grain_params_create_side_data(frame); - if (!fgp) - return AVERROR(ENOMEM); - - fgp->type = AV_FILM_GRAIN_PARAMS_AV1; - fgp->seed = film_grain->grain_seed; - - aom = &fgp->codec.aom; - aom->chroma_scaling_from_luma = film_grain->chroma_scaling_from_luma; - aom->scaling_shift = film_grain->grain_scaling_minus_8 + 8; - aom->ar_coeff_lag = film_grain->ar_coeff_lag; - aom->ar_coeff_shift = film_grain->ar_coeff_shift_minus_6 + 6; - aom->grain_scale_shift = film_grain->grain_scale_shift; - aom->overlap_flag = film_grain->overlap_flag; - aom->limit_output_range = film_grain->clip_to_restricted_range; - - aom->num_y_points = film_grain->num_y_points; - for (int i = 0; i < film_grain->num_y_points; i++) { - aom->y_points[i][0] = film_grain->point_y_value[i]; - aom->y_points[i][1] = film_grain->point_y_scaling[i]; - } - aom->num_uv_points[0] = film_grain->num_cb_points; - for (int i = 0; i < film_grain->num_cb_points; i++) { - aom->uv_points[0][i][0] = film_grain->point_cb_value[i]; - aom->uv_points[0][i][1] = film_grain->point_cb_scaling[i]; - } - aom->num_uv_points[1] = film_grain->num_cr_points; - for (int i = 0; i < film_grain->num_cr_points; i++) { - aom->uv_points[1][i][0] = film_grain->point_cr_value[i]; - aom->uv_points[1][i][1] = film_grain->point_cr_scaling[i]; - } - - for (int i = 0; i < 24; i++) { - aom->ar_coeffs_y[i] = film_grain->ar_coeffs_y_plus_128[i] - 128; - } - for (int i = 0; i < 25; i++) { - aom->ar_coeffs_uv[0][i] = film_grain->ar_coeffs_cb_plus_128[i] - 128; - aom->ar_coeffs_uv[1][i] = film_grain->ar_coeffs_cr_plus_128[i] - 128; - } - - aom->uv_mult[0] = film_grain->cb_mult; - aom->uv_mult[1] = film_grain->cr_mult; - aom->uv_mult_luma[0] = film_grain->cb_luma_mult; - aom->uv_mult_luma[1] = film_grain->cr_luma_mult; - aom->uv_offset[0] = film_grain->cb_offset; - aom->uv_offset[1] = film_grain->cr_offset; - - return 0; -} - -static int set_output_frame(AVCodecContext *avctx, AVFrame *frame, - const AVPacket *pkt, int *got_frame) -{ - AV1DecContext *s = avctx->priv_data; - const AVFrame *srcframe = s->cur_frame.f; - int ret; - - // TODO: all layers - if (s->operating_point_idc && - av_log2(s->operating_point_idc >> 8) > s->cur_frame.spatial_id) - return 0; - - ret = av_frame_ref(frame, srcframe); - if (ret < 0) - return ret; - - ret = export_metadata(avctx, frame); - if (ret < 0) { - 
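/* Editorial note (added): export_metadata() failed, so drop the reference
 * taken by av_frame_ref() above before propagating the error. */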
av_frame_unref(frame); - return ret; - } - - if (avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) { - ret = export_film_grain(avctx, frame); - if (ret < 0) { - av_frame_unref(frame); - return ret; - } - } - - frame->pts = pkt->pts; - frame->pkt_dts = pkt->dts; -#if FF_API_FRAME_PKT -FF_DISABLE_DEPRECATION_WARNINGS - frame->pkt_size = pkt->size; -FF_ENABLE_DEPRECATION_WARNINGS -#endif - - *got_frame = 1; - - return 0; -} - -static int update_reference_list(AVCodecContext *avctx) -{ - AV1DecContext *s = avctx->priv_data; - const AV1RawFrameHeader *header = s->raw_frame_header; - int ret; - - for (int i = 0; i < AV1_NUM_REF_FRAMES; i++) { - if (header->refresh_frame_flags & (1 << i)) { - av1_frame_unref(avctx, &s->ref[i]); - if ((ret = av1_frame_ref(avctx, &s->ref[i], &s->cur_frame)) < 0) { - av_log(avctx, AV_LOG_ERROR, - "Failed to update frame %d in reference list\n", i); - return ret; - } - } - } - return 0; -} - -static int get_current_frame(AVCodecContext *avctx) -{ - AV1DecContext *s = avctx->priv_data; - int ret; - - av1_frame_unref(avctx, &s->cur_frame); - - s->cur_frame.header_ref = av_buffer_ref(s->header_ref); - if (!s->cur_frame.header_ref) - return AVERROR(ENOMEM); - - s->cur_frame.raw_frame_header = s->raw_frame_header; - - ret = init_tile_data(s); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to init tile data.\n"); - return ret; - } - - if ((avctx->skip_frame >= AVDISCARD_NONINTRA && - (s->raw_frame_header->frame_type != AV1_FRAME_KEY && - s->raw_frame_header->frame_type != AV1_FRAME_INTRA_ONLY)) || - (avctx->skip_frame >= AVDISCARD_NONKEY && - s->raw_frame_header->frame_type != AV1_FRAME_KEY) || - avctx->skip_frame >= AVDISCARD_ALL) - return 0; - - ret = av1_frame_alloc(avctx, &s->cur_frame); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, - "Failed to allocate space for current frame.\n"); - return ret; - } - - global_motion_params(s); - skip_mode_params(s); - coded_lossless_param(s); - load_grain_params(s); - - return ret; -} - -static int av1_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame, AVPacket *pkt) -{ - AV1DecContext *s = avctx->priv_data; - AV1RawTileGroup *raw_tile_group = NULL; - int ret; - - ret = ff_cbs_read_packet(s->cbc, &s->current_obu, pkt); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to read packet.\n"); - goto end; - } - av_log(avctx, AV_LOG_DEBUG, "Total obu for this frame:%d.\n", - s->current_obu.nb_units); - - for (int i = 0; i < s->current_obu.nb_units; i++) { - CodedBitstreamUnit *unit = &s->current_obu.units[i]; - AV1RawOBU *obu = unit->content; - const AV1RawOBUHeader *header; - - if (!obu) - continue; - - header = &obu->header; - av_log(avctx, AV_LOG_DEBUG, "Obu idx:%d, obu type:%d.\n", i, unit->type); - - switch (unit->type) { - case AV1_OBU_SEQUENCE_HEADER: - av_buffer_unref(&s->seq_ref); - s->seq_ref = av_buffer_ref(unit->content_ref); - if (!s->seq_ref) { - ret = AVERROR(ENOMEM); - goto end; - } - - s->raw_seq = &obu->obu.sequence_header; - - ret = set_context_with_sequence(avctx, s->raw_seq); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to set context.\n"); - s->raw_seq = NULL; - goto end; - } - - s->operating_point_idc = s->raw_seq->operating_point_idc[s->operating_point]; - - if (s->pix_fmt == AV_PIX_FMT_NONE) { - ret = get_pixel_format(avctx); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, - "Failed to get pixel format.\n"); - s->raw_seq = NULL; - goto end; - } - } - - if (avctx->hwaccel && avctx->hwaccel->decode_params) { - ret = avctx->hwaccel->decode_params(avctx, unit->type, 
unit->data, - unit->data_size); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "HW accel decode params fail.\n"); - s->raw_seq = NULL; - goto end; - } - } - break; - case AV1_OBU_REDUNDANT_FRAME_HEADER: - if (s->raw_frame_header) - break; - // fall-through - case AV1_OBU_FRAME: - case AV1_OBU_FRAME_HEADER: - if (!s->raw_seq) { - av_log(avctx, AV_LOG_ERROR, "Missing Sequence Header.\n"); - ret = AVERROR_INVALIDDATA; - goto end; - } - - av_buffer_unref(&s->header_ref); - s->header_ref = av_buffer_ref(unit->content_ref); - if (!s->header_ref) { - ret = AVERROR(ENOMEM); - goto end; - } - - if (unit->type == AV1_OBU_FRAME) - s->raw_frame_header = &obu->obu.frame.header; - else - s->raw_frame_header = &obu->obu.frame_header; - - if (s->raw_frame_header->show_existing_frame) { - av1_frame_unref(avctx, &s->cur_frame); - - ret = av1_frame_ref(avctx, &s->cur_frame, - &s->ref[s->raw_frame_header->frame_to_show_map_idx]); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to get reference frame.\n"); - goto end; - } - - ret = update_reference_list(avctx); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to update reference list.\n"); - goto end; - } - - if (s->cur_frame.f->buf[0]) { - ret = set_output_frame(avctx, frame, pkt, got_frame); - if (ret < 0) - av_log(avctx, AV_LOG_ERROR, "Set output frame error.\n"); - } - - s->raw_frame_header = NULL; - - goto end; - } - - ret = get_current_frame(avctx); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Get current frame error\n"); - goto end; - } - - s->cur_frame.spatial_id = header->spatial_id; - s->cur_frame.temporal_id = header->temporal_id; - - if (avctx->hwaccel && s->cur_frame.f->buf[0]) { - ret = avctx->hwaccel->start_frame(avctx, unit->data, - unit->data_size); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "HW accel start frame fail.\n"); - goto end; - } - } - if (unit->type != AV1_OBU_FRAME) - break; - // fall-through - case AV1_OBU_TILE_GROUP: - if (!s->raw_frame_header) { - av_log(avctx, AV_LOG_ERROR, "Missing Frame Header.\n"); - ret = AVERROR_INVALIDDATA; - goto end; - } - - if (unit->type == AV1_OBU_FRAME) - raw_tile_group = &obu->obu.frame.tile_group; - else - raw_tile_group = &obu->obu.tile_group; - - ret = get_tiles_info(avctx, raw_tile_group); - if (ret < 0) - goto end; - - if (avctx->hwaccel && s->cur_frame.f->buf[0]) { - ret = avctx->hwaccel->decode_slice(avctx, - raw_tile_group->tile_data.data, - raw_tile_group->tile_data.data_size); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, - "HW accel decode slice fail.\n"); - goto end; - } - } - break; - case AV1_OBU_TILE_LIST: - case AV1_OBU_TEMPORAL_DELIMITER: - case AV1_OBU_PADDING: - break; - case AV1_OBU_METADATA: - switch (obu->obu.metadata.metadata_type) { - case AV1_METADATA_TYPE_HDR_CLL: - av_buffer_unref(&s->cll_ref); - s->cll_ref = av_buffer_ref(unit->content_ref); - if (!s->cll_ref) { - s->cll = NULL; - ret = AVERROR(ENOMEM); - goto end; - } - s->cll = &obu->obu.metadata.metadata.hdr_cll; - break; - case AV1_METADATA_TYPE_HDR_MDCV: - av_buffer_unref(&s->mdcv_ref); - s->mdcv_ref = av_buffer_ref(unit->content_ref); - if (!s->mdcv_ref) { - s->mdcv = NULL; - ret = AVERROR(ENOMEM); - goto end; - } - s->mdcv = &obu->obu.metadata.metadata.hdr_mdcv; - break; - case AV1_METADATA_TYPE_ITUT_T35: { - AV1RawMetadataITUTT35 itut_t35; - memcpy(&itut_t35, &obu->obu.metadata.metadata.itut_t35, sizeof(itut_t35)); - itut_t35.payload_ref = av_buffer_ref(obu->obu.metadata.metadata.itut_t35.payload_ref); - if (!itut_t35.payload_ref) { - ret = AVERROR(ENOMEM); - goto end; - } - ret = 
av_fifo_write(s->itut_t35_fifo, &itut_t35, 1); - if (ret < 0) { - av_buffer_unref(&itut_t35.payload_ref); - goto end; - } - break; - } - default: - break; - } - break; - default: - av_log(avctx, AV_LOG_DEBUG, - "Unknown obu type: %d (%"SIZE_SPECIFIER" bits).\n", - unit->type, unit->data_size); - } - - if (raw_tile_group && (s->tile_num == raw_tile_group->tg_end + 1)) { - if (avctx->hwaccel && s->cur_frame.f->buf[0]) { - ret = avctx->hwaccel->end_frame(avctx); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "HW accel end frame fail.\n"); - goto end; - } - } - - ret = update_reference_list(avctx); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to update reference list.\n"); - goto end; - } - - if (s->raw_frame_header->show_frame && s->cur_frame.f->buf[0]) { - ret = set_output_frame(avctx, frame, pkt, got_frame); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Set output frame error\n"); - goto end; - } - } - raw_tile_group = NULL; - s->raw_frame_header = NULL; - } - } - -end: - ff_cbs_fragment_reset(&s->current_obu); - if (ret < 0) - s->raw_frame_header = NULL; - return ret; -} - -static void av1_decode_flush(AVCodecContext *avctx) -{ - AV1DecContext *s = avctx->priv_data; - AV1RawMetadataITUTT35 itut_t35; - - for (int i = 0; i < FF_ARRAY_ELEMS(s->ref); i++) - av1_frame_unref(avctx, &s->ref[i]); - - av1_frame_unref(avctx, &s->cur_frame); - s->operating_point_idc = 0; - s->raw_frame_header = NULL; - s->raw_seq = NULL; - s->cll = NULL; - s->mdcv = NULL; - while (av_fifo_read(s->itut_t35_fifo, &itut_t35, 1) >= 0) - av_buffer_unref(&itut_t35.payload_ref); - - ff_cbs_flush(s->cbc); -} - -#define OFFSET(x) offsetof(AV1DecContext, x) -#define VD AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM -static const AVOption av1_options[] = { - { "operating_point", "Select an operating point of the scalable bitstream", - OFFSET(operating_point), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, AV1_MAX_OPERATING_POINTS - 1, VD }, - { NULL } -}; - -static const AVClass av1_class = { - .class_name = "AV1 decoder", - .item_name = av_default_item_name, - .option = av1_options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_av1_decoder = { - .p.name = "av1", - CODEC_LONG_NAME("Alliance for Open Media AV1"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_AV1, - .priv_data_size = sizeof(AV1DecContext), - .init = av1_decode_init, - .close = av1_decode_free, - FF_CODEC_DECODE_CB(av1_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_AVOID_PROBING, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP | - FF_CODEC_CAP_SETS_PKT_DTS, - .flush = av1_decode_flush, - .p.profiles = NULL_IF_CONFIG_SMALL(ff_av1_profiles), - .p.priv_class = &av1_class, - .bsfs = "av1_frame_split", - .hw_configs = (const AVCodecHWConfigInternal *const []) { -#if CONFIG_AV1_DXVA2_HWACCEL - HWACCEL_DXVA2(av1), -#endif -#if CONFIG_AV1_D3D11VA_HWACCEL - HWACCEL_D3D11VA(av1), -#endif -#if CONFIG_AV1_D3D11VA2_HWACCEL - HWACCEL_D3D11VA2(av1), -#endif -#if CONFIG_AV1_NVDEC_HWACCEL - HWACCEL_NVDEC(av1), -#endif -#if CONFIG_AV1_VAAPI_HWACCEL - HWACCEL_VAAPI(av1), -#endif -#if CONFIG_AV1_VDPAU_HWACCEL - HWACCEL_VDPAU(av1), -#endif - - NULL - }, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bintext.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bintext.c deleted file mode 100644 index ce814f76939cf3a1672f2b6f87f2939bd9bf2f64..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bintext.c +++ /dev/null @@ -1,254 +0,0 @@ -/* - * Binary text 
decoder - * eXtended BINary text (XBIN) decoder - * iCEDraw File decoder - * Copyright (c) 2010 Peter Ross (pross@xvid.org) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Binary text decoder - * eXtended BINary text (XBIN) decoder - * iCEDraw File decoder - */ - -#include "config_components.h" - -#include "libavutil/intreadwrite.h" -#include "libavutil/xga_font_data.h" -#include "avcodec.h" -#include "cga_data.h" -#include "bintext.h" -#include "codec_internal.h" -#include "decode.h" - -#define FONT_WIDTH 8 - -typedef struct XbinContext { - AVFrame *frame; - int palette[16]; - int flags; - int font_height; - const uint8_t *font; - int x, y; -} XbinContext; - -static av_cold int decode_init(AVCodecContext *avctx) -{ - XbinContext *s = avctx->priv_data; - uint8_t *p; - int i; - - avctx->pix_fmt = AV_PIX_FMT_PAL8; - p = avctx->extradata; - if (p) { - s->font_height = p[0]; - s->flags = p[1]; - p += 2; - if(avctx->extradata_size < 2 + (!!(s->flags & BINTEXT_PALETTE))*3*16 - + (!!(s->flags & BINTEXT_FONT))*s->font_height*256) { - av_log(avctx, AV_LOG_ERROR, "not enough extradata\n"); - return AVERROR_INVALIDDATA; - } - if (!s->font_height) { - av_log(avctx, AV_LOG_ERROR, "invalid font height\n"); - return AVERROR_INVALIDDATA; - } - } else { - s->font_height = 8; - s->flags = 0; - } - - if ((s->flags & BINTEXT_PALETTE)) { - for (i = 0; i < 16; i++) { - s->palette[i] = 0xFF000000 | (AV_RB24(p) << 2) | ((AV_RB24(p) >> 4) & 0x30303); - p += 3; - } - } else { - for (i = 0; i < 16; i++) - s->palette[i] = 0xFF000000 | ff_cga_palette[i]; - } - - if ((s->flags & BINTEXT_FONT)) { - s->font = p; - } else { - switch(s->font_height) { - default: - av_log(avctx, AV_LOG_WARNING, "font height %i not supported\n", s->font_height); - s->font_height = 8; - case 8: - s->font = avpriv_cga_font; - break; - case 16: - s->font = avpriv_vga16_font; - break; - } - } - if (avctx->width < FONT_WIDTH || avctx->height < s->font_height) { - av_log(avctx, AV_LOG_ERROR, "Resolution too small for font.\n"); - return AVERROR_INVALIDDATA; - } - - return 0; -} - -#define DEFAULT_BG_COLOR 0 -av_unused static void hscroll(AVCodecContext *avctx) -{ - XbinContext *s = avctx->priv_data; - if (s->y < avctx->height - s->font_height) { - s->y += s->font_height; - } else { - memmove(s->frame->data[0], s->frame->data[0] + s->font_height*s->frame->linesize[0], - (avctx->height - s->font_height)*s->frame->linesize[0]); - memset(s->frame->data[0] + (avctx->height - s->font_height)*s->frame->linesize[0], - DEFAULT_BG_COLOR, s->font_height * s->frame->linesize[0]); - } -} - -/** - * Draw character to screen - */ -static void draw_char(AVCodecContext *avctx, int c, int a) -{ - XbinContext *s = avctx->priv_data; - if (s->y > avctx->height - s->font_height) - return; - ff_draw_pc_font(s->frame->data[0] + s->y 
* s->frame->linesize[0] + s->x, - s->frame->linesize[0], s->font, s->font_height, c, - a & 0x0F, a >> 4); - s->x += FONT_WIDTH; - if (s->x > avctx->width - FONT_WIDTH) { - s->x = 0; - s->y += s->font_height; - } -} - -static int decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame, AVPacket *avpkt) -{ - XbinContext *s = avctx->priv_data; - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - const uint8_t *buf_end = buf+buf_size; - int ret; - - if ((avctx->width / FONT_WIDTH) * (avctx->height / s->font_height) / 256 > buf_size) - return AVERROR_INVALIDDATA; - - s->frame = frame; - s->x = s->y = 0; - if ((ret = ff_get_buffer(avctx, s->frame, 0)) < 0) - return ret; - s->frame->pict_type = AV_PICTURE_TYPE_I; - s->frame->palette_has_changed = 1; - memcpy(s->frame->data[1], s->palette, 16 * 4); - - if (avctx->codec_id == AV_CODEC_ID_XBIN) { - while (buf + 2 < buf_end) { - int i,c,a; - int type = *buf >> 6; - int count = (*buf & 0x3F) + 1; - buf++; - switch (type) { - case 0: //no compression - for (i = 0; i < count && buf + 1 < buf_end; i++) { - draw_char(avctx, buf[0], buf[1]); - buf += 2; - } - break; - case 1: //character compression - c = *buf++; - for (i = 0; i < count && buf < buf_end; i++) - draw_char(avctx, c, *buf++); - break; - case 2: //attribute compression - a = *buf++; - for (i = 0; i < count && buf < buf_end; i++) - draw_char(avctx, *buf++, a); - break; - case 3: //character/attribute compression - c = *buf++; - a = *buf++; - for (i = 0; i < count && buf < buf_end; i++) - draw_char(avctx, c, a); - break; - } - } - } else if (avctx->codec_id == AV_CODEC_ID_IDF) { - while (buf + 2 < buf_end) { - if (AV_RL16(buf) == 1) { - int i; - if (buf + 6 > buf_end) - break; - for (i = 0; i < buf[2]; i++) - draw_char(avctx, buf[4], buf[5]); - buf += 6; - } else { - draw_char(avctx, buf[0], buf[1]); - buf += 2; - } - } - } else { - while (buf + 1 < buf_end) { - draw_char(avctx, buf[0], buf[1]); - buf += 2; - } - } - - *got_frame = 1; - return buf_size; -} - -#if CONFIG_BINTEXT_DECODER -const FFCodec ff_bintext_decoder = { - .p.name = "bintext", - CODEC_LONG_NAME("Binary text"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_BINTEXT, - .priv_data_size = sizeof(XbinContext), - .init = decode_init, - FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; -#endif -#if CONFIG_XBIN_DECODER -const FFCodec ff_xbin_decoder = { - .p.name = "xbin", - CODEC_LONG_NAME("eXtended BINary text"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_XBIN, - .priv_data_size = sizeof(XbinContext), - .init = decode_init, - FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; -#endif -#if CONFIG_IDF_DECODER -const FFCodec ff_idf_decoder = { - .p.name = "idf", - CODEC_LONG_NAME("iCEDraw text"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_IDF, - .priv_data_size = sizeof(XbinContext), - .init = decode_init, - FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec2utils.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec2utils.h deleted file mode 100644 index 6812ae895ca25459608f6c45938fd84ec1d74681..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec2utils.h +++ /dev/null @@ -1,63 +0,0 @@ -/* - * codec2 utility functions - * Copyright (c) 2017 Tomas Härdin - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_CODEC2UTILS_H -#define AVCODEC_CODEC2UTILS_H - -#include - -//Highest mode we're willing to use. -//Don't want to let users accidentally produce files that can't be decoded in the future. -//CODEC2_MODE_WB (9) is experimental/unstable as of 2017-11-23. -#define CODEC2_MODE_MAX 8 //CODEC2_MODE_700C - -//Used by both codec2raw demuxer and libcodec2 encoder. -//The integers match the values in codec2.h, so "3200" -> CODEC2_MODE_3000 = 0 and so on. -//It is possible that we're linked to a version of libcodec2 that lacks some of these modes. -//For example Debian stretch ships with libcodec2.so.0.4 which lacks CODEC2_MODE_700C. -#define CODEC2_AVOPTIONS(desc, classname, min_val, default_val, option_flags) \ - { "mode", desc, offsetof(classname, mode), AV_OPT_TYPE_INT, {.i64 = default_val}, min_val, CODEC2_MODE_MAX, .flags=option_flags, .unit="codec2_mode"},\ - { "3200", "3200", 0, AV_OPT_TYPE_CONST, {.i64 = 0}, .flags=option_flags, .unit="codec2_mode"},\ - { "2400", "2400", 0, AV_OPT_TYPE_CONST, {.i64 = 1}, .flags=option_flags, .unit="codec2_mode"},\ - { "1600", "1600", 0, AV_OPT_TYPE_CONST, {.i64 = 2}, .flags=option_flags, .unit="codec2_mode"},\ - { "1400", "1400", 0, AV_OPT_TYPE_CONST, {.i64 = 3}, .flags=option_flags, .unit="codec2_mode"},\ - { "1300", "1300", 0, AV_OPT_TYPE_CONST, {.i64 = 4}, .flags=option_flags, .unit="codec2_mode"},\ - { "1200", "1200", 0, AV_OPT_TYPE_CONST, {.i64 = 5}, .flags=option_flags, .unit="codec2_mode"},\ - { "700", "700", 0, AV_OPT_TYPE_CONST, {.i64 = 6}, .flags=option_flags, .unit="codec2_mode"},\ - { "700B", "700B", 0, AV_OPT_TYPE_CONST, {.i64 = 7}, .flags=option_flags, .unit="codec2_mode"},\ - { "700C", "700C", 0, AV_OPT_TYPE_CONST, {.i64 = 8}, .flags=option_flags, .unit="codec2_mode"} - -#define CODEC2_EXTRADATA_SIZE 4 - -//Used in codec2raw demuxer and libcodec2 encoder -static inline void codec2_make_extradata(uint8_t *ptr, int mode) { - //version 0.8 as of 2017-12-23 (r3386) - ptr[0] = 0; //major - ptr[1] = 8; //minor - ptr[2] = mode; //mode - ptr[3] = 0; //flags -} - -static inline uint8_t codec2_mode_from_extradata(uint8_t *ptr) { - return ptr[2]; -} - -#endif /* AVCODEC_CODEC2UTILS_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_picture.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_picture.c deleted file mode 100644 index 2661ff4698e123e4a482cfb41123a41eea135e51..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_picture.c +++ /dev/null @@ -1,264 +0,0 @@ -/* - * H.26L/H.264/AVC/JVT/14496-10/... decoder - * Copyright (c) 2003 Michael Niedermayer - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * H.264 / AVC / MPEG-4 part10 codec. - * @author Michael Niedermayer - */ - -#include "libavutil/avassert.h" -#include "error_resilience.h" -#include "avcodec.h" -#include "h264dec.h" -#include "mpegutils.h" -#include "thread.h" -#include "threadframe.h" - -void ff_h264_unref_picture(H264Context *h, H264Picture *pic) -{ - int off = offsetof(H264Picture, f_grain) + sizeof(pic->f_grain); - int i; - - if (!pic->f || !pic->f->buf[0]) - return; - - ff_thread_release_ext_buffer(h->avctx, &pic->tf); - ff_thread_release_buffer(h->avctx, pic->f_grain); - av_buffer_unref(&pic->hwaccel_priv_buf); - - av_buffer_unref(&pic->qscale_table_buf); - av_buffer_unref(&pic->mb_type_buf); - av_buffer_unref(&pic->pps_buf); - for (i = 0; i < 2; i++) { - av_buffer_unref(&pic->motion_val_buf[i]); - av_buffer_unref(&pic->ref_index_buf[i]); - } - - memset((uint8_t*)pic + off, 0, sizeof(*pic) - off); -} - -static void h264_copy_picture_params(H264Picture *dst, const H264Picture *src) -{ - dst->qscale_table = src->qscale_table; - dst->mb_type = src->mb_type; - dst->pps = src->pps; - - for (int i = 0; i < 2; i++) { - dst->motion_val[i] = src->motion_val[i]; - dst->ref_index[i] = src->ref_index[i]; - } - - for (int i = 0; i < 2; i++) - dst->field_poc[i] = src->field_poc[i]; - - memcpy(dst->ref_poc, src->ref_poc, sizeof(src->ref_poc)); - memcpy(dst->ref_count, src->ref_count, sizeof(src->ref_count)); - - dst->poc = src->poc; - dst->frame_num = src->frame_num; - dst->mmco_reset = src->mmco_reset; - dst->long_ref = src->long_ref; - dst->mbaff = src->mbaff; - dst->field_picture = src->field_picture; - dst->reference = src->reference; - dst->recovered = src->recovered; - dst->invalid_gap = src->invalid_gap; - dst->sei_recovery_frame_cnt = src->sei_recovery_frame_cnt; - dst->mb_width = src->mb_width; - dst->mb_height = src->mb_height; - dst->mb_stride = src->mb_stride; - dst->needs_fg = src->needs_fg; -} - -int ff_h264_ref_picture(H264Context *h, H264Picture *dst, H264Picture *src) -{ - int ret, i; - - av_assert0(!dst->f->buf[0]); - av_assert0(src->f->buf[0]); - av_assert0(src->tf.f == src->f); - - dst->tf.f = dst->f; - ret = ff_thread_ref_frame(&dst->tf, &src->tf); - if (ret < 0) - goto fail; - - if (src->needs_fg) { - ret = av_frame_ref(dst->f_grain, src->f_grain); - if (ret < 0) - goto fail; - } - - dst->qscale_table_buf = av_buffer_ref(src->qscale_table_buf); - dst->mb_type_buf = av_buffer_ref(src->mb_type_buf); - dst->pps_buf = av_buffer_ref(src->pps_buf); - if (!dst->qscale_table_buf || !dst->mb_type_buf || !dst->pps_buf) { - ret = AVERROR(ENOMEM); - goto fail; - } - - for (i = 0; i < 2; i++) { - dst->motion_val_buf[i] = av_buffer_ref(src->motion_val_buf[i]); - dst->ref_index_buf[i] = av_buffer_ref(src->ref_index_buf[i]); - if (!dst->motion_val_buf[i] || 
!dst->ref_index_buf[i]) { - ret = AVERROR(ENOMEM); - goto fail; - } - } - - if (src->hwaccel_picture_private) { - dst->hwaccel_priv_buf = av_buffer_ref(src->hwaccel_priv_buf); - if (!dst->hwaccel_priv_buf) { - ret = AVERROR(ENOMEM); - goto fail; - } - dst->hwaccel_picture_private = dst->hwaccel_priv_buf->data; - } - - h264_copy_picture_params(dst, src); - - return 0; -fail: - ff_h264_unref_picture(h, dst); - return ret; -} - -int ff_h264_replace_picture(H264Context *h, H264Picture *dst, const H264Picture *src) -{ - int ret, i; - - if (!src->f || !src->f->buf[0]) { - ff_h264_unref_picture(h, dst); - return 0; - } - - av_assert0(src->tf.f == src->f); - - dst->tf.f = dst->f; - ff_thread_release_ext_buffer(h->avctx, &dst->tf); - ret = ff_thread_ref_frame(&dst->tf, &src->tf); - if (ret < 0) - goto fail; - - if (src->needs_fg) { - ff_thread_release_buffer(h->avctx, dst->f_grain); - ret = av_frame_ref(dst->f_grain, src->f_grain); - if (ret < 0) - goto fail; - } - - ret = av_buffer_replace(&dst->qscale_table_buf, src->qscale_table_buf); - ret |= av_buffer_replace(&dst->mb_type_buf, src->mb_type_buf); - ret |= av_buffer_replace(&dst->pps_buf, src->pps_buf); - if (ret < 0) - goto fail; - - for (i = 0; i < 2; i++) { - ret = av_buffer_replace(&dst->motion_val_buf[i], src->motion_val_buf[i]); - ret |= av_buffer_replace(&dst->ref_index_buf[i], src->ref_index_buf[i]); - if (ret < 0) - goto fail; - } - - ret = av_buffer_replace(&dst->hwaccel_priv_buf, src->hwaccel_priv_buf); - if (ret < 0) - goto fail; - - dst->hwaccel_picture_private = src->hwaccel_picture_private; - - h264_copy_picture_params(dst, src); - - return 0; -fail: - ff_h264_unref_picture(h, dst); - return ret; -} - -void ff_h264_set_erpic(ERPicture *dst, H264Picture *src) -{ -#if CONFIG_ERROR_RESILIENCE - int i; - - memset(dst, 0, sizeof(*dst)); - - if (!src) - return; - - dst->f = src->f; - dst->tf = &src->tf; - - for (i = 0; i < 2; i++) { - dst->motion_val[i] = src->motion_val[i]; - dst->ref_index[i] = src->ref_index[i]; - } - - dst->mb_type = src->mb_type; - dst->field_picture = src->field_picture; -#endif /* CONFIG_ERROR_RESILIENCE */ -} - -int ff_h264_field_end(H264Context *h, H264SliceContext *sl, int in_setup) -{ - AVCodecContext *const avctx = h->avctx; - H264Picture *cur = h->cur_pic_ptr; - int err = 0; - h->mb_y = 0; - - if (in_setup || !(avctx->active_thread_type & FF_THREAD_FRAME)) { - if (!h->droppable) { - err = ff_h264_execute_ref_pic_marking(h); - h->poc.prev_poc_msb = h->poc.poc_msb; - h->poc.prev_poc_lsb = h->poc.poc_lsb; - } - h->poc.prev_frame_num_offset = h->poc.frame_num_offset; - h->poc.prev_frame_num = h->poc.frame_num; - } - - if (avctx->hwaccel) { - err = avctx->hwaccel->end_frame(avctx); - if (err < 0) - av_log(avctx, AV_LOG_ERROR, - "hardware accelerator failed to decode picture\n"); - } else if (!in_setup && cur->needs_fg && (!FIELD_PICTURE(h) || !h->first_field)) { - AVFrameSideData *sd = av_frame_get_side_data(cur->f, AV_FRAME_DATA_FILM_GRAIN_PARAMS); - - err = AVERROR_INVALIDDATA; - if (sd) // a decoding error may have happened before the side data could be allocated - err = ff_h274_apply_film_grain(cur->f_grain, cur->f, &h->h274db, - (AVFilmGrainParams *) sd->data); - if (err < 0) { - av_log(h->avctx, AV_LOG_WARNING, "Failed synthesizing film " - "grain, ignoring: %s\n", av_err2str(err)); - cur->needs_fg = 0; - err = 0; - } - } - - if (!in_setup && !h->droppable) - ff_thread_report_progress(&cur->tf, INT_MAX, - h->picture_structure == PICT_BOTTOM_FIELD); - emms_c(); - - h->current_slice = 0; - - return err; -} 
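The reference-management pattern that closes ff_h264_replace_picture() above is worth noting: the return values of several av_buffer_replace() calls are OR-combined, and every failure funnels into a single cleanup path. Here is a minimal C sketch of that pattern, assuming only libavutil's public AVBufferRef API; the DstCtx struct and replace_bufs() function are hypothetical and not part of h264_picture.c:

#include <libavutil/buffer.h>

typedef struct DstCtx {
    AVBufferRef *qscale_table_buf;
    AVBufferRef *mb_type_buf;
} DstCtx;

static int replace_bufs(DstCtx *dst, const DstCtx *src)
{
    /* av_buffer_replace() makes *dst reference the same data as src and
     * returns 0 or a negative AVERROR code; OR-ing the results keeps the
     * combined value negative if any call failed. */
    int ret = av_buffer_replace(&dst->qscale_table_buf, src->qscale_table_buf);
    ret    |= av_buffer_replace(&dst->mb_type_buf,      src->mb_type_buf);
    if (ret < 0)
        goto fail;
    return 0;
fail:
    /* one cleanup path releases everything, mirroring ff_h264_unref_picture() */
    av_buffer_unref(&dst->qscale_table_buf);
    av_buffer_unref(&dst->mb_type_buf);
    return ret;
}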
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Cubase AI 5 Recording Editing and Mixing with Mac OS X.md b/spaces/congsaPfin/Manga-OCR/logs/Cubase AI 5 Recording Editing and Mixing with Mac OS X.md deleted file mode 100644 index 4a3f8976a927b9d53c1c3c262ef425b2a2f42022..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Cubase AI 5 Recording Editing and Mixing with Mac OS X.md +++ /dev/null @@ -1,109 +0,0 @@ -
      -

      How to Download and Install Cubase AI 5 for Mac

      -

      If you are looking for a powerful, yet easy-to-use software for recording, editing and mixing music on your Mac, you might want to check out Cubase AI 5. This is a special, compact version of Cubase Pro that uses the same core technologies and offers many of the same features. It is exclusive to customers of selected Steinberg and Yamaha hardware products, such as audio interfaces, MIDI keyboards, digital pianos and synthesizers.

      -

      -

In this article, we will show you how to download and install Cubase AI 5 for Mac, explain the features and benefits of using it, and cover how to update and troubleshoot Cubase AI 5 in case you encounter any issues. By the end of this article, you will be ready to start making music with Cubase AI 5 on your Mac.

      -

      Step 1: Check if you have a compatible Steinberg/Yamaha product that includes a free version of Cubase AI 5

      -

First, make sure you have a compatible Steinberg/Yamaha product that includes a free version of Cubase AI 5; you can find a list of eligible products on the Steinberg website. If you have one of these products, you should have received a Download Access Code with it. This is a unique code that allows you to download Cubase AI 5 for free from the Steinberg website. Without a compatible product and a Download Access Code, you will not be able to download or use Cubase AI 5.

      -

      Step 2: Get the Steinberg Download Assistant and create a MySteinberg account

      -

The next step is to get the Steinberg Download Assistant (SDA), a small application that you can download from the Steinberg website. The SDA lets you download and install Cubase AI 5 and other Steinberg software products, and you can also use it to update your software to the latest version.

      -

      Before you can use the SDA, you need to create a MySteinberg account, which is a free online service that lets you manage your Steinberg products and licenses. You can create a MySteinberg account here. You will need to provide a valid email address and a password. Once you have created your account, you will receive a confirmation email with a link to activate your account.

      -

      -

      Step 3: Enter your Download Access Code and download Cubase AI 5

      -

      After you have installed the SDA and activated your MySteinberg account, you can launch the SDA and log in with your email and password. You will see a list of available software products that you can download. To download Cubase AI 5, you need to enter your Download Access Code that came with your Steinberg/Yamaha product. You can find the Download Access Code on a sheet of paper inside the product box or in an email from the seller.

      -

      Once you have entered the Download Access Code, the SDA will generate an Activation Code for Cubase AI 5 and start downloading the software installer. The Activation Code is a unique code that allows you to activate Cubase AI 5 on your computer. You will need this code later, so make sure to write it down or copy it to a safe place.

      -

      The download process may take some time, depending on your internet connection speed and the size of the software installer. You can monitor the progress of the download in the SDA. When the download is complete, you can run the installer and follow the instructions on the screen to install Cubase AI 5 on your Mac.

      -

      What are the Features and Benefits of Cubase AI 5 for Mac

      -

      Cubase AI 5 is more than just a basic software for recording, editing and mixing music. It is a powerful, yet easy-to-use tool that offers many of the same features as Cubase Pro, the flagship version of Cubase. With Cubase AI 5, you can produce tracks from start to finish, using impressive composition tools, superb audio and MIDI editing tools, outstanding virtual instruments and amps, and a complete suite of audio effects. Let's take a closer look at some of the features and benefits of Cubase AI 5 for Mac.

      -

      Produce tracks from start to finish with HALion Sonic SE 3 and Groove Agent SE 5

      -

      Cubase AI 5 comes with two amazing virtual instruments that provide you with over 185 sounds and loops for various musical genres. HALion Sonic SE 3 is a versatile sound production tool that combines a massive sample library with a powerful synthesizer engine and effects. You can use it to create realistic acoustic instruments, expressive synth sounds, cinematic soundscapes, and more. Groove Agent SE 5 is a renowned drum production tool that lets you create amazing beats, rhythms, and grooves. You can use it to play drum kits, percussion instruments, electronic drums, loops, and patterns.

      -

      Use impressive composition tools like Chord Pads, Chord Track and Score Editor

      -

      Cubase AI 5 offers outstanding composition tools for your music creation, no matter how much musical knowledge you have. Whether you need inspiration or if you get stuck when writing a chord progression, the fast and easy-to-use Chord Pads and the Chord Track (with its included Chord Assistant) mean there are no restrictions in bringing your musical idea to life. You can use Chord Pads to trigger chords with your mouse or MIDI keyboard, or drag them to the project window to create chord events. You can use Chord Track to analyze and harmonize your project with chords, or generate chord progressions based on scales and musical styles. And if you’re working with notation – the basic Score Editor will bring your notes seamlessly to the page.

      -

      Edit MIDI and audio with Key Editor, Drum Editor, Sample Editor and AudioWarp

      -

      Opening up virtually limitless possibilities for songwriters and composers, Cubase AI 5 includes the Key Editor and Drum Editor, where melodies, beats, arrangements and performances take shape. Working with both MIDI and audio is always inspiring in Cubase AI 5. The Key Editor allows you to edit MIDI notes in a piano roll style interface, where you can change pitch, velocity, length, quantize settings, controller data, and more. The Drum Editor allows you to edit drum parts in a grid style interface, where you can assign sounds to different lanes, adjust velocity levels, create drum maps, and more.

      Record instruments with AmpSimulator and other VST effects

      -

      If you want to record guitars, basses, keyboards, or other instruments with Cubase AI 5, you can use the built-in AmpSimulator plug-in to add realistic amp and speaker simulations to your tracks. AmpSimulator lets you choose from different amp models, cabinets, microphones, and effects to create your own custom sound. You can also adjust the tone, gain, presence, and other parameters to fine-tune your sound.

      -

      Of course, Cubase AI 5 also offers a wide range of other VST effects that you can use to enhance your recordings. You can apply EQ, compression, reverb, delay, modulation, distortion, and more to your tracks using the insert and send effects slots. You can also use the Channel Strip 2 plug-in to access a comprehensive set of studio-grade processors in one convenient interface.

      -

      How to Update and Troubleshoot Cubase AI 5 for Mac

      -

Cubase AI 5 is reliable, stable software that runs smoothly on most Mac systems. Even so, it is always a good idea to keep your software up to date and fix any potential issues that may arise. In this section, we will show you how to update and troubleshoot Cubase AI 5 for Mac.

      -

      Download and install the latest Cubase AI 5 update from Steinberg website

      -

      One of the easiest ways to ensure that your Cubase AI 5 is running at its best is to download and install the latest update from the Steinberg website. The updates usually include bug fixes, performance improvements, compatibility enhancements, and sometimes new features. You can find the latest update for Cubase AI 5 here. To install the update, you need to have the original Cubase AI 5 installation DVD or the full installer downloaded from the Steinberg website. You may also need to enter your Activation Code again after installing the update.

      -

      Install the CoreAudio2ASIO Patch for Mac OS X 10.7 (Lion) if needed

      -

      If you are using Cubase AI 5 on Mac OS X 10.7 (Lion), you may need to install a special patch that fixes an issue with the CoreAudio2ASIO driver. This driver allows you to use the built-in audio hardware of your Mac with Cubase AI 5. However, due to some changes in Mac OS X 10.7, the driver may cause audio dropouts or glitches when recording or playing back audio. To solve this problem, you can download and install the CoreAudio2ASIO Patch for Mac OS X 10.7 (Lion) from the Steinberg website. This patch will replace the original CoreAudio2ASIO driver with a new version that works better with Mac OS X 10.7.

      -

      Check the Cubase AI 5 documentation and support pages for more information

      -

      If you have any questions or issues with Cubase AI 5 that are not covered by this article, you can always check the Cubase AI 5 documentation and support pages for more information. The documentation includes a detailed operation manual that explains all the features and functions of Cubase AI 5 in depth. You can access the documentation here. The support pages include FAQs, troubleshooting tips, downloads, forums, and contact information for Steinberg technical support. You can access the support pages here.

      -

      Conclusion

      -

      In this article, we have shown you how to download and install Cubase AI 5 for Mac, as well as what are the features and benefits of using it. We have also covered how to update and troubleshoot Cubase AI 5 for Mac in case you encounter any issues. By following these steps, you should be able to start making music with Cubase AI 5 on your Mac without any problems.

      -

      Cubase AI 5 is a great software for beginners and intermediate users who want to record, edit and mix music on their Mac. It offers many of the same features as Cubase Pro, but in a more compact and affordable package. It also comes with impressive virtual instruments and effects that provide you with over 185 sounds and loops for various musical genres.

      -

      If you want to learn more about Cubase AI 5 and how to use it effectively, you can check out some of the video tutorials that are available on the Steinberg website. These tutorials will help you to get familiar with the interface, functions, and workflows of Cubase AI 5.

      -

      We hope you enjoyed this article and found it helpful. If you have any feedback or suggestions, please let us know in the comments below. Thank you for reading and happy music making with Cubase AI 5!

      -

      FAQs

      -

      Q1: What are the system requirements for Cubase AI 5 for Mac?

      -

      A1: The minimum system requirements for Cubase AI 5 for Mac are as follows:

      -
• Mac OS X 10.5.8 or 10.6 (32-bit or 64-bit)
• Intel Core processor (Intel Core Duo recommended)
• 1024 MB RAM (2048 MB recommended)
• 4 GB of free hard disk space
• Display resolution of 1280 x 800 pixels
• CoreAudio compatible audio hardware
• DVD-ROM drive
• Internet connection for license activation

      Q2: How can I activate Cubase AI 5 after downloading it?

      -

      A2: To activate Cubase AI 5 after downloading it, you need to use the eLicenser Control Center (eLCC), which is a software that manages your Steinberg licenses. You can download the eLCC from the Steinberg website. To activate Cubase AI 5, you need to enter the Activation Code that you received from the SDA in the eLCC. The eLCC will then download a license file to your computer and store it on your hard disk or on a USB-eLicenser dongle if you have one. You can then launch Cubase AI 5 and start using it.

      -

      Q3: What are the differences between Cubase AI 5 and Cubase Pro?

      -

      A3: Cubase AI 5 and Cubase Pro are both versions of Cubase, but they have some differences in terms of features and functionality. Cubase AI 5 is a compact and affordable version of Cubase Pro that offers many of the same core technologies and features, but with some limitations. For example, Cubase AI 5 has a maximum of 48 audio tracks, 64 MIDI tracks, 16 instrument tracks, and 16 group channels, while Cubase Pro has unlimited tracks and channels. Cubase AI 5 also has fewer VST instruments, effects, and plug-ins than Cubase Pro. You can find a detailed comparison of the features of Cubase AI 5 and Cubase Pro here.

      -

      Q4: How can I upgrade from Cubase AI 5 to a higher version of Cubase?

      -

      A4: If you want to upgrade from Cubase AI 5 to a higher version of Cubase, such as Cubase Elements, Cubase Artist, or Cubase Pro, you can do so by purchasing an upgrade license from the Steinberg website. You will need to enter your current Activation Code for Cubase AI 5 and pay the difference in price between the versions. You will then receive a new Activation Code for the higher version of Cubase, which you can enter in the eLCC to activate it.

      -

      Q5: Where can I find more tutorials and tips on using Cubase AI 5?

      -

      A5: If you want to learn more about how to use Cubase AI 5 effectively, you can check out some of the video tutorials that are available on the Steinberg website. These tutorials cover topics such as recording, editing, mixing, mastering, MIDI, audio, VST instruments, effects, and more. You can also visit the Steinberg YouTube channel, where you can find more videos on Cubase and other Steinberg products.

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Ultimate Blackjack Experience with MOD APK 2023.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Ultimate Blackjack Experience with MOD APK 2023.md deleted file mode 100644 index 7175f1b0c08448a1d86dd32f2d8122a4f953289c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Ultimate Blackjack Experience with MOD APK 2023.md +++ /dev/null @@ -1,146 +0,0 @@ - -

      Black Jack Mod Apk: How to Play Blackjack Like a Pro on Your Phone

      -

      Blackjack is one of the most popular casino games in the world. It's a game of skill and strategy, where you have to beat the dealer's hand without going over 21. But what if you want to play blackjack anytime, anywhere, without risking real money? That's where black jack mod apk comes in.

      -

      -

      Black jack mod apk is a modified version of the blackjack game that you can download and install on your Android device. It gives you unlimited chips, unlocks all features, and lets you play with different rules and variations. You can also customize the game to suit your preferences and style.

      -

      In this article, we'll show you how to download and install black jack mod apk, how to play the game, and some tips and tricks to improve your skills. We'll also answer some frequently asked questions about black jack mod apk. Let's get started!

      -

      How to Download and Install Black Jack Mod Apk

      -

      Downloading and installing black jack mod apk is easy and fast. Just follow these steps:

      -
1. Go to https://moddroid.com/games/card/blackjack/ and click on the "Download" button.
2. Wait for the file to download on your device. It should be around 50 MB in size.
3. Open the file and tap on "Install". You may need to enable "Unknown Sources" in your settings if you haven't done so before.
4. Wait for the installation to finish and launch the game.

      Congratulations! You have successfully installed black jack mod apk on your device. Now you can enjoy playing blackjack with unlimited chips and features.

      -

      -

      How to Play Black Jack Mod Apk

      -

      Playing black jack mod apk is similar to playing regular blackjack, but with some differences. Here are the basic rules and features of the game:

      -
        -
      • You start with 1000 chips and can bet any amount you want.
      • -
      • You can choose from different game modes: Classic, European, Vegas Strip, Blackjack Switch, Perfect Pairs, etc.
      • -
      • You can also choose from different table settings: number of decks, dealer's action on soft 17, double down options, etc.
      • -
      • You can split any pair of cards and double down on any two cards.
      • -
      • You can surrender your hand at any time and get half of your bet back.
      • -
      • You can use hints and tips to help you make the best decisions.
      • -
      • You can check your statistics and achievements to track your progress.
      • -
      -

      The goal of the game is to beat the dealer's hand without going over 21. You can do this by following some basic strategies:

      -
        -
      • Learn the value of the cards. Cards 2-10 have their face value; face cards are worth 10; aces are worth 1 or 11.
      • -
      • Learn when to hit or stand. Generally, you should hit if your hand is below 17 and stand if it's above 17.
      • -
      • Learn when to split or double down. Generally, you should split aces and eights, and double down on 10 or 11.
      • -
      • Learn when to surrender. Generally, you should surrender if you have a hard 16 against a dealer's 9, 10, or ace.
      • -
      -

      Tips and Tricks for Playing Black Jack Mod Apk

      -

      If you want to improve your skills and win more games, here are some tips and tricks for playing black jack mod apk:

      -
• Practice a lot. The more you play, the more you'll learn how to play by instinct.
• Use a strategy chart. A strategy chart tells you the best move for every possible situation. You can find one online or in the game's menu.
• Manage your bankroll. Don't bet more than you can afford to lose. Set a limit for yourself and stick to it.
• Vary your bets. Don't always bet the same amount. Sometimes bet more, sometimes bet less, depending on the situation.
• Don't chase your losses. If you lose a lot of chips, don't try to win them back by betting more. Take a break and start fresh.
• Have fun. Don't take the game too seriously. Enjoy the thrill of playing blackjack and have a good time.

      A Comparison Table of Different Blackjack Modes

      -

To help you choose the best game mode, here is a comparison table of some of the most popular blackjack modes in black jack mod apk:

| Game Mode | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Classic | The standard version of blackjack, with no special rules or features. | Simple and easy to play. | Lacks variety and excitement. |
| European | The dealer only gets one card face up at the start, and draws the second card after the player's turn. | More challenging and realistic. | No peeking or insurance options. |
| Vegas Strip | The dealer stands on soft 17, and the player can double down after splitting. | More favorable for the player. | None. |
| Blackjack Switch | The player gets two hands and can switch the top cards between them. | More strategic and fun. | The dealer pushes on 22, and blackjack pays 1:1 instead of 3:2. |
| Perfect Pairs | The player can place a side bet on whether their first two cards will be a pair. | Potential for big payouts. | Higher house edge and lower odds. |
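To make the Blackjack Switch payout difference concrete: on a 10-chip bet, a natural blackjack paid at the usual 3:2 returns 15 chips in winnings, while the 1:1 payout in Switch returns only 10, so the rule costs you 5 chips per natural at that stake.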

      Conclusion

      -

      Black jack mod apk is a great way to play blackjack on your phone. It offers unlimited chips, different game modes, customizable settings, and helpful features. You can download and install it easily and enjoy playing anytime, anywhere. You can also improve your skills and strategies by following some tips and tricks. Black jack mod apk is a fun and exciting game that will keep you entertained for hours.

      -

      Frequently Asked Questions

      -

      Q: Is black jack mod apk safe to use?

      -

      A: Yes, black jack mod apk is safe to use. It does not contain any viruses or malware, and it does not require any permissions or access to your device. However, you should always download it from a trusted source, such as https://moddroid.com/games/card/blackjack/.

      -

      Q: Is black jack mod apk legal to use?

      -

      A: Yes, black jack mod apk is legal to use. It does not involve any real money gambling or betting, and it does not violate any laws or regulations. However, you should always check the rules and policies of your country or region before using it.

      -

      Q: How can I update black jack mod apk?

      -

      A: To update black jack mod apk, you can either visit https://moddroid.com/games/card/blackjack/ and download the latest version, or you can enable the "Auto Update" option in the game's settings. This way, you will always have the most recent features and improvements.

      -

      Q: How can I contact the developers of black jack mod apk?

      -

      A: To contact the developers of black jack mod apk, you can either visit their website at https://www.moddroid.com/, or you can email them at support@moddroid.com. They will be happy to answer your questions and feedback.

      -

      Q: How can I share black jack mod apk with my friends?

      -

      A: To share black jack mod apk with your friends, you can either send them the link to https://moddroid.com/games/card/blackjack/, or you can use the "Share" button in the game's menu. This way, you can invite your friends to join you in playing blackjack online.

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/GoreBox Mod APK Download - Experience the Most Violent and Brutal Sandbox Game on Android.md b/spaces/congsaPfin/Manga-OCR/logs/GoreBox Mod APK Download - Experience the Most Violent and Brutal Sandbox Game on Android.md deleted file mode 100644 index dce41ee05b0d11146429db1b12b7c2b7f80939a0..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/GoreBox Mod APK Download - Experience the Most Violent and Brutal Sandbox Game on Android.md +++ /dev/null @@ -1,104 +0,0 @@ -
      -

      Download GoreBox Mod APK: A Physics-Based Sandbox Game of Extreme Violence

      -

      If you are looking for a game that lets you unleash your inner demon and create your own sandbox of mayhem, then you should try GoreBox. GoreBox is a physics-based sandbox game of extreme violence that offers a vast arsenal of brutal weapons, explosive devices, interactive ragdolls, fearsome enemies, advanced turrets, vehicles, and a cutting-edge blood and dismemberment system. In this article, we will tell you what is GoreBox, why you should download GoreBox Mod APK, and how to download and install it on your Android device.

      -

      What is GoreBox?

      -

GoreBox is a game developed by F²Games, a developer known for creating games with realistic physics and gore. It is the latest title in the GoreBox franchise, which includes GoreBox Classic, GoreBox 2, and GoreBox 3. The game is available for Android devices on the Google Play Store, and for PC on Steam. It has received positive reviews from players who enjoy its chaotic and destructive gameplay.

      -

      -

      Features of GoreBox

      -

      GoreBox has many features that make it a unique and fun game to play. Here are some of them:

      -

      - Brutal weapons and explosive devices

      -

      GoreBox offers a wide range of weapons and explosives that you can use to kill, maim, or torture your enemies or ragdolls. You can choose from guns, knives, axes, hammers, chainsaws, grenades, rockets, mines, bombs, nukes, and more. You can also use the Reality Crusher, your primary weapon for building, destroying, and manipulating the environment.

      -

      download gorebox definitive edition mod apk
      -download gorebox mod apk unlimited money
      -download gorebox mod apk latest version
      -download gorebox mod apk android 1
      -download gorebox mod apk happymod
      -download gorebox mod apk no ads
      -download gorebox mod apk revdl
      -download gorebox mod apk free shopping
      -download gorebox mod apk offline
      -download gorebox mod apk for pc
      -download gorebox mod apk 13.0.6
      -download gorebox mod apk 12.3.7
      -download gorebox mod apk 11.0.0
      -download gorebox mod apk 10.0.0
      -download gorebox mod apk 9.0.0
      -download gorebox mod apk 8.0.0
      -download gorebox mod apk 7.0.0
      -download gorebox mod apk 6.0.0
      -download gorebox mod apk 5.0.0
      -download gorebox mod apk 4.0.0
      -download gorebox mod apk 3.0.0
      -download gorebox mod apk 2.0.0
      -download gorebox mod apk 1.0.0
      -download gorebox mod apk all unlocked
      -download gorebox mod apk all weapons
      -download gorebox mod apk all characters
      -download gorebox mod apk all maps
      -download gorebox mod apk all modes
      -download gorebox mod apk all skins
      -download gorebox mod apk all vehicles
      -download gorebox mod apk anti ban
      -download gorebox mod apk unlimited ammo
      -download gorebox mod apk unlimited health
      -download gorebox mod apk unlimited coins
      -download gorebox mod apk unlimited gems
      -download gorebox mod apk unlimited grenades
      -download gorebox mod apk unlimited rockets
      -download gorebox mod apk unlimited bullets
      -download gorebox mod apk unlimited bombs
      -download gorebox mod apk unlimited lives
      -download gorebox mod apk mega menu
      -download gorebox mod apk god mode
      -download gorebox mod apk one hit kill
      -download gorebox mod apk no root
      -download gorebox mod apk no verification
      -download gorebox mod apk no survey
      -download gorebox mod apk direct link
      -download gorebox mod apk mediafire link
      -download gorebox mod apk google drive link

      -

      - Interactive ragdolls and dismemberment system

      -

      GoreBox features realistic ragdoll physics and a cutting-edge blood and dismemberment system that allows you to see the effects of your actions in detail. You can cut off limbs, decapitate heads, rip out organs, or blow up bodies with ease. You can also interact with the ragdolls by grabbing, throwing, dragging, or posing them as you wish.

      -

      - Customizable maps and environment manipulation

      -

      GoreBox lets you create and customize your own maps using various props, objects, textures, and colors. You can also use the Reality Crusher to change the gravity, time scale, lighting, weather, sound effects, and more. You can save your maps and share them with other players online.

      -

      - Fearsome enemies and Timsky's virus

      -

GoreBox has a story mode that follows Phil Timsky, a man who is immune to a virus that induces uncontrollable rage and reduces IQ in those infected. The virus was created by Timsky himself as a weapon of mass destruction. You have to face off against enemies ranging from mindless drones to cunning predators, all infected by the virus. You can also play in sandbox mode, where you can spawn any enemy or ragdoll you want.

      -

      - Vehicles and turrets

      -

      GoreBox also has vehicles and turrets that you can use to enhance your gameplay. You can drive cars, trucks, tanks, helicopters, planes, boats, bikes, or even UFOs. You can also use turrets that can shoot bullets, lasers, rockets, or flames at your enemies or ragdolls.

      -

      Why download GoreBox Mod APK?

      -

While GoreBox is a free game on Google Play Store, it has some limitations and drawbacks that can affect your gaming experience. For example, you have to watch ads to get more money and resources, unlock weapons and items by completing tasks or paying real money, and root your device to access some features. That is why you should download GoreBox Mod APK, a modified version of the game that removes these restrictions.

      -

      Benefits of GoreBox Mod APK

      -

GoreBox Mod APK is a hacked version of the game that lets you play without any restrictions or interruptions. Here are some of its benefits:

      -

      - Unlimited money and resources

      -

With GoreBox Mod APK, you don't have to worry about running out of money or resources to buy weapons, items, vehicles, or turrets. The mod menu lets you set whatever amounts of money and resources you want. You can also use the Reality Crusher to spawn any object or prop you want for free.

      -

      - Unlocked all weapons and items

      -

      With GoreBox Mod APK, you don't have to unlock weapons and items by completing tasks or paying real money. You can access all the weapons and items in the game from the start, including the premium ones that are only available for VIP members. You can also use the mod menu to enable or disable any weapon or item you want.

      -

      - No ads and no root required

      -

      With GoreBox Mod APK, you don't have to watch annoying ads to get more money and resources, or to access some features. You can play the game without any ads or interruptions. You also don't have to root your device to install or use the mod APK. You can enjoy the game on any Android device without risking your device's security or warranty.

      -

      How to download and install GoreBox Mod APK?

      -

If you want to download and install GoreBox Mod APK on your Android device, just follow these simple steps:

      -

      Steps to download and install GoreBox Mod APK

      -

      - Allow unknown apps on Android

      -

      Before you can install GoreBox Mod APK, you have to allow your device to install apps from unknown sources. To do this, go to your device's settings, then security, then enable the option "Unknown sources". This will allow you to install apps that are not from Google Play Store.
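A side note for developers: on older Android versions (roughly 7.x and earlier), "Unknown sources" was a single global setting that could also be toggled from a computer over adb; on Android 8 and later the permission is granted per app, so the Settings route above is the reliable one. A hedged sketch, assuming adb is installed and the device is connected with USB debugging enabled:

```python
import subprocess

# Works only on older Android versions, where "unknown sources" was one
# global secure setting; Android 8+ grants the permission per app.
subprocess.run([
    "adb", "shell", "settings", "put", "secure",
    "install_non_market_apps", "1",
])
```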

      -

      - Download the APK file from a reputable source

      -

Next, you have to download the APK file of GoreBox Mod APK from a reputable source. Many websites offer mod APKs for various games, but not all of them are safe or reliable; some distribute malware or viruses that can harm your device or steal your data. To avoid this, we recommend you download GoreBox Mod APK from our website, which is a trusted and verified source of mod APKs.
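Wherever you download from, one extra safeguard is to verify the file's checksum against one the site publishes. Below is a minimal Python sketch of the idea; the file name and the expected hash are hypothetical placeholders, not values published for this game.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Read the file in 1 MB chunks so a large APK does not fill memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: substitute your actual file name and the
# checksum published by the download site, if it provides one.
expected = "put-the-published-sha256-hash-here"
actual = sha256_of("gorebox-mod.apk")
print("Checksum OK" if actual == expected else "Checksum mismatch - do not install")
```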

      -

      - Open the APK file and install it

      -

      Finally, you have to open the APK file that you downloaded and install it on your device. To do this, locate the file in your device's storage, then tap on it. You will see a pop-up window asking you to confirm the installation. Tap on "Install" and wait for the installation process to finish. Once it is done, you will see a notification that says "App installed". You can then open the app and enjoy playing GoreBox Mod APK.
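As an aside for readers who sideload from a computer rather than tapping through a file manager, the same installation can be done from the command line. This is only a sketch: it assumes adb is installed and USB debugging is enabled, and the file path is a placeholder.

```python
import subprocess

# Hypothetical path to the APK you saved on your computer.
apk_path = "gorebox-mod.apk"

# "adb install -r" installs the package, replacing any existing version.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```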

      -

      Conclusion

      -

GoreBox is a physics-based sandbox game of extreme violence that lets you create your own sandbox of mayhem using various weapons, explosives, ragdolls, enemies, vehicles, and turrets. The game has many features that make it a unique and fun game to play, but it also has some limitations and drawbacks that can affect your gaming experience. That is why we recommend you download GoreBox Mod APK, a modified version of the game that gives you unlimited money and resources, unlocks all weapons and items, removes ads, and requires no root. You can download GoreBox Mod APK from our website, which is a trusted and verified source of mod APKs. We hope this article was helpful for you. If you have any questions or feedback, please let us know in the comments section below.

      -

      FAQs

      -

      Here are some frequently asked questions about GoreBox Mod APK:

      -
        -
      • Is GoreBox Mod APK safe?
      • -

        Yes, GoreBox Mod APK is safe to download and use. The mod APK is scanned and tested for any malware or viruses before we upload it on our website. You can download and install it without risking your device's security or warranty.

        -
      • Is GoreBox Mod APK compatible with my device?
      • -

GoreBox Mod APK is compatible with any Android device that runs on Android 4.4 or higher. You can check your device's Android version by going to your device's settings, then about phone, then software information (or over adb, as shown in the sketch after this list). You can also use the mod APK on any emulator that supports Android apps.

        -
      • Can I play GoreBox Mod APK online with other players?
      • -

        GoreBox Mod APK does not support online multiplayer mode, as it is a modded version of the game that may not be compatible with the official servers. However, you can still play the game offline in sandbox mode or story mode, where you can create and customize your own maps and scenarios.

        -
      • Can I update GoreBox Mod APK to the latest version?
      • -

        GoreBox Mod APK is updated regularly to match the latest version of the original game. However, you cannot update the mod APK from Google Play Store, as it is not available there. You have to download and install the latest version of the mod APK from our website whenever there is a new update available.

        -
      • Can I request a new feature or report a bug for GoreBox Mod APK?
      • -

        Yes, you can request a new feature or report a bug for GoreBox Mod APK by contacting us through our website or email. We appreciate your feedback and suggestions, and we will try our best to improve the mod APK according to your needs and preferences.

        -
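For the adb check mentioned above: reading the device's Android version from a computer is a one-liner, assuming adb is installed and the device is connected with USB debugging enabled.

```python
import subprocess

# Queries the Android version string from the connected device over adb.
version = subprocess.run(
    ["adb", "shell", "getprop", "ro.build.version.release"],
    capture_output=True, text=True,
).stdout.strip()
print(f"Android version: {version}")
```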

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ultraman Fighting Evolution 3 APK Download and Enjoy the Best Fighting Game.md b/spaces/congsaPfin/Manga-OCR/logs/Ultraman Fighting Evolution 3 APK Download and Enjoy the Best Fighting Game.md deleted file mode 100644 index 119e678a8b94c345e835bff4a88b4eda2560a5c0..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Ultraman Fighting Evolution 3 APK Download and Enjoy the Best Fighting Game.md +++ /dev/null @@ -1,102 +0,0 @@ - -

      Download Ultraman Fighting Evolution 3 APKPure

      -

      Are you a fan of the Ultraman series and want to play a fighting video game based on it? If so, you might be interested in downloading Ultraman Fighting Evolution 3 APKPure, a free and safe way to enjoy the game on your Android device. In this article, we will tell you what Ultraman Fighting Evolution 3 is, why you should download it from APKPure, how to download and install it, and some tips and tricks for playing it.

      -

      download ultraman fighting evolution 3 apkpure


      DOWNLOAD ○○○ https://urlca.com/2uO7Mt



      -

      What is Ultraman Fighting Evolution 3?

      -

      A fighting video game based on the Ultraman series

      -

      Ultraman Fighting Evolution 3 is a fighting video game developed and published by Banpresto in 2004 for the PlayStation 2. It is based on the popular Japanese superhero franchise Ultraman, which features giant alien warriors who protect Earth from various monsters and aliens. The game allows you to play as one of the Ultra Warriors, such as Ultraman, Ultraseven, Ultraman Tiga, Ultraman Dyna, Ultraman Gaia, and many more. You can also play as some of their enemies, such as Zetton, King Joe, Gomora, Evil Tiga, and others.

      -

      The third entry in the Ultraman Fighting Evolution series

      -

Ultraman Fighting Evolution 3 is the third entry in the Ultraman Fighting Evolution series, which started in 1998 with Ultraman Fighting Evolution for the PlayStation. The series is known for its faithful adaptation of the Ultraman stories and characters, as well as its realistic graphics and sound effects. The game's narrator is Yuji Machi, who also voiced Ultraman Tiga.

      -

      Features 40 playable characters and various game modes

      -

Ultraman Fighting Evolution 3 has the largest roster in the entire Ultraman Fighting Evolution series, with a total of 40 playable characters. Each character has their own unique moves and abilities, such as flying, shooting beams, throwing objects, transforming, and more. You can also customize your character's color scheme and voice.

      -

      The game also offers various game modes to choose from, such as:

      -
        -
• Ultra Mode: This is the main story mode. The player battles as the Ultra Warriors through stories identical to TV show episodes and movies. The game features a ranking system: depending on how well the player completes a stage, they are awarded a rank, from D (the worst) to S (the best).
      • -
      • Versus Mode: This is a two-player mode where you can battle against your friend or the computer. You can select any character and stage you want.
      • -
      • Survival Mode: This is a single-player mode where you have to defeat as many enemies as possible without losing. The enemies will get stronger as you progress.
      • -
      • Practice Mode: This is a mode where you can practice your moves and combos without any time limit or damage.
      • -
      • Gallery Mode: This is a mode where you can view images and videos of the characters and stages.
      • -
      -

      Why download Ultraman Fighting Evolution 3 APKPure?

      -

      Enjoy the game on your Android device without a PS2 emulator

      -

One of the reasons why you should download Ultraman Fighting Evolution 3 APKPure is that you can play the game on your Android device without needing a PS2 emulator. Emulators are programs that let you run games from other platforms on your device, but they often have drawbacks, such as compatibility issues, performance problems, and legal risks. By downloading the game from APKPure, you can avoid these hassles and enjoy the game natively on your device.

      -

      download ultraman fighting evolution 3 mod apk
      -ultraman fighting evolution 3 apk android
      -how to download ultraman fighting evolution 3 on android
      -ultraman fighting evolution 3 apk free download
      -ultraman fighting evolution 3 apkpure latest version
      -download game ultraman fighting evolution 3 for android
      -ultraman fighting evolution 3 apk offline
      -ultraman fighting evolution 3 apk obb
      -download ultraman fighting evolution 3 ppsspp android
      -ultraman fighting evolution 3 apk data
      -ultraman fighting evolution 3 apkpure full version
      -download ultraman fighting evolution 3 iso android
      -ultraman fighting evolution 3 apk no verification
      -ultraman fighting evolution 3 apk unlimited money
      -download ultraman fighting evolution 3 ps2 android
      -ultraman fighting evolution 3 apk online
      -ultraman fighting evolution 3 apkpure update
      -download ultraman fighting evolution 3 highly compressed android
      -ultraman fighting evolution 3 apk english version
      -ultraman fighting evolution 3 apk cheats
      -download ultraman fighting evolution 3 for android free
      -ultraman fighting evolution 3 apk rexdl
      -download tricks ultraman fighting evolution 3 apkpure[^1^]
      -ultraman fighting evolution 3 apk revdl
      -download ultraman fighting evolution 3 emulator android
      -ultraman fighting evolution 3 apk pure
      -download guide for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk hack
      -download tips for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk mirror
      -download walkthrough for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk mod menu
      -download cheat for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk original
      -download strategy for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk unlocked
      -download codes for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk vip
      -download hints for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk xapk
      -download secrets for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk youtube
      -download review for ultraman fighting evolution 3 apkpure
      -ultraman fighting evolution 3 apk zip file
      -download tutorial for ultraman fighting evolution 3 apkpure

      -

      Get the latest version of the game with bug fixes and improvements

      -

      Another reason why you should download Ultraman Fighting Evolution 3 APKPure is that you can get the latest version of the game with bug fixes and improvements. APKPure is a website that provides free and safe APK files for Android users. APK files are the installation files for Android apps. APKPure updates its APK files regularly to ensure that they are working properly and have the latest features. By downloading the game from APKPure, you can ensure that you have the best gaming experience possible.

      -

      Access the game for free and without any ads or malware

      -

A final reason why you should download Ultraman Fighting Evolution 3 APKPure is that you can access the game for free and without any ads or malware. APKPure is a trusted and reputable website that does not host malicious or illegal content, so you can download the game without worrying about viruses, spyware, or adware. You can also play without annoying ads or pop-ups, and enjoy the game without spending any money or compromising your device's security.

      -

      How to download Ultraman Fighting Evolution 3 APKPure?

      -

      Step 1: Visit the APKPure website and search for the game

      -

      The first step to download Ultraman Fighting Evolution 3 APKPure is to visit the APKPure website and search for the game. You can use any web browser on your device to access the website. Once you are on the website, type "Ultraman Fighting Evolution 3" in the search bar and press enter. You will see a list of results related to the game. Click on the one that says "Ultraman Fighting Evolution 3" and has a picture of Ultraman Tiga on it.

      -

      Step 2: Download the APK file and install it on your device

      -

      The second step to download Ultraman Fighting Evolution 3 APKPure is to download the APK file and install it on your device. Once you are on the game's page, scroll down until you see a green button that says "Download APK". Tap on it and wait for the download to finish. After the download is complete, locate the file in your device's storage and tap on it to install it. You may need to enable unknown sources in your device's settings to allow the installation of apps from outside sources.
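Since the download is fairly large (about 1 GB once installed, per the FAQ below), it is worth confirming free space before you start. A minimal Python check, with the threshold taken from that requirement:

```python
import shutil

# Check free space at the download location; "." means the current
# directory here, so adjust the path for wherever you save the file.
free_bytes = shutil.disk_usage(".").free
required_bytes = 1 * 1024 ** 3  # roughly 1 GB, per the storage requirement

if free_bytes >= required_bytes:
    print(f"OK: {free_bytes / 1024 ** 3:.1f} GB free")
else:
    print("Not enough space - free up some storage first")
```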

      -

      Step 3: Launch the game and start playing

      -

      The third step to download Ultraman Fighting Evolution 3 APKPure is to launch the game and start playing. After the installation is done, you will see an icon of the game on your device's home screen or app drawer. Tap on it to open it and enjoy playing as your favorite Ultra Warrior or monster. You can choose from various game modes, characters, and stages, as well as customize your settings and preferences.

      -

      Tips and tricks for playing Ultraman Fighting Evolution 3

      -

      Learn the controls and special moves of each character

      -

      One of the tips for playing Ultraman Fighting Evolution 3 is to learn the controls and special moves of each character. The game has a simple control scheme that uses four buttons: square, triangle, circle, and cross. Each button corresponds to a different type of attack: punch, kick, guard, and special. You can also use directional inputs to modify your attacks or perform combos. Each character has their own special moves that can be activated by pressing a combination of buttons or filling up a gauge. You can check each character's moves list in the pause menu or in practice mode.

      -

      Complete the Ultra Mode to unlock new characters and stages

      -

      Another tip for playing Ultraman Fighting Evolution 3 is to complete the Ultra Mode to unlock new characters and stages. Ultra Mode is the main story mode of the game, where you can play as the Ultra Warriors in scenarios based on the TV show episodes and movies. By completing each stage, you can unlock new characters and stages that you can use in other game modes. Some of the characters and stages that you can unlock are Ultraman Cosmos, Ultraman Justice, Ultraman Nexus, Ultraman Noa, Zearth, Dark Zagi, and more. You can also get bonus rewards by achieving an S rank in each stage, such as extra costumes, voices, and images.

      -

      Achieve an S rank in each stage to get bonus rewards

      -

      A final tip for playing Ultraman Fighting Evolution 3 is to achieve an S rank in each stage to get bonus rewards. The game has a ranking system that evaluates your performance in each stage based on various factors, such as time, damage, combos, and special moves. The highest rank you can get is S, which means that you have completed the stage flawlessly. By getting an S rank in each stage, you can get bonus rewards such as extra costumes, voices, and images for your characters. You can also unlock a secret ending if you get an S rank in all stages of Ultra Mode.

      -

      Conclusion

      -

      Ultraman Fighting Evolution 3 is a fighting video game based on the Ultraman series that features 40 playable characters and various game modes. You can download the game from APKPure, a website that provides free and safe APK files for Android users. By downloading the game from APKPure, you can enjoy the game on your Android device without a PS2 emulator, get the latest version of the game with bug fixes and improvements, and access the game for free and without any ads or malware. You can also follow some tips and tricks for playing the game, such as learning the controls and special moves of each character, completing the Ultra Mode to unlock new characters and stages, and achieving an S rank in each stage to get bonus rewards. We hope that this article has helped you learn more about Ultraman Fighting Evolution 3 APKPure and how to download and play it. If you have any questions or feedback, please feel free to leave a comment below.

      -

      FAQs

      -
        -
      • Q: Is Ultraman Fighting Evolution 3 APKPure safe to download?
      • -
      • A: Yes, Ultraman Fighting Evolution 3 APKPure is safe to download from APKPure website. APKPure is a trusted and reputable website that does not host any malicious or illegal content. You can download the game from APKPure without worrying about any viruses, spyware, or adware.
      • -
      • Q: How much storage space does Ultraman Fighting Evolution 3 APKPure require?
      • -
      • A: Ultraman Fighting Evolution 3 APKPure requires about 1 GB of storage space on your device. You may need to free up some space before downloading and installing the game.
      • -
      • Q: Can I play Ultraman Fighting Evolution 3 APKPure offline?
      • -
      • A: Yes, you can play Ultraman Fighting Evolution 3 APKPure offline once you have downloaded and installed it on your device. You do not need an internet connection to play the game.
      • -
      • Q: Can I play Ultraman Fighting Evolution 3 APKPure with a controller?
      • -
      • A: Yes, you can play Ultraman Fighting Evolution 3 APKPure with a controller if your device supports it. You can connect your controller via Bluetooth or USB and configure the settings in the game.
      • -
      • Q: Where can I find more information about Ultraman Fighting Evolution 3 APKPure?
      • -
      • A: You can find more information about Ultraman Fighting Evolution 3 APKPure on the APKPure website or on the official website of Banpresto. You can also watch gameplay videos or read reviews of the game online.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Abar Byomkesh The Second Installment of the Byomkesh Bakshi Series by Anjan Dutt.md b/spaces/contluForse/HuggingGPT/assets/Abar Byomkesh The Second Installment of the Byomkesh Bakshi Series by Anjan Dutt.md deleted file mode 100644 index 22d5f4286da85b6f4071d3aa88cab10a24ae1ca8..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Abar Byomkesh The Second Installment of the Byomkesh Bakshi Series by Anjan Dutt.md +++ /dev/null @@ -1,5 +0,0 @@ - -

Why does Bengali filmmaker after Bengali filmmaker flock to this one character? Why make bad movie after bad movie with this one overused character? Why this desperation to make moviegoers ask abar Byomkesh? (Byomkesh again?)

      -

      abar byomkesh bengali full movie 15


      DOWNLOAD 🗹 https://ssurll.com/2uzy0H



      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Download Livros Epidemiologia E Sade Rouquayrol Conhea os Avanos do Sistema nico de Sade no Brasil.md b/spaces/contluForse/HuggingGPT/assets/Download Livros Epidemiologia E Sade Rouquayrol Conhea os Avanos do Sistema nico de Sade no Brasil.md deleted file mode 100644 index bca672345edfe9bf1cebc0c60ea7f976957b00bd..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Livros Epidemiologia E Sade Rouquayrol Conhea os Avanos do Sistema nico de Sade no Brasil.md +++ /dev/null @@ -1,6 +0,0 @@ -

Download Livros Epidemiologia E Saúde Rouquayrol


      Download File » https://ssurll.com/2uzwjc



- -
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/Epson Adjustment Program T60 T50 Zip File.rar Everything You Need to Know.md b/spaces/contluForse/HuggingGPT/assets/Epson Adjustment Program T60 T50 Zip File.rar Everything You Need to Know.md deleted file mode 100644 index 96779a6c06f301ec6e04c347262fb3b052691b3a..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Epson Adjustment Program T60 T50 Zip File.rar Everything You Need to Know.md +++ /dev/null @@ -1,5 +0,0 @@ - -

You may also notice the LED lights blinking, and then you can no longer print. Is the printer actually broken? Not necessarily: resetting it is the answer. You need a software resetter (adjustment program) to reset your Epson T60.

      -

      Epson Adjustment Program T60 T50 Zip File.rar


Download Zip » https://ssurll.com/2uzwkk



      -
      -
      \ No newline at end of file diff --git a/spaces/cozyanduofen/bingo/src/components/learn-more.tsx b/spaces/cozyanduofen/bingo/src/components/learn-more.tsx deleted file mode 100644 index a64459ee7900a612292e117a6bda96ee9260990f..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/components/learn-more.tsx +++ /dev/null @@ -1,39 +0,0 @@ -import React from 'react' -import { SourceAttribution } from '@/lib/bots/bing/types' - -export interface LearnMoreProps { - sourceAttributions?: SourceAttribution[] -} - -export function LearnMore({ sourceAttributions }: LearnMoreProps) { - if (!sourceAttributions?.length) { - return null - } - - return ( -
      -
Learn more:
      -
      -
      - {sourceAttributions.map((attribution, index) => { - const { providerDisplayName, seeMoreUrl } = attribution - const { host } = new URL(seeMoreUrl) - return ( - - {index + 1}. {host} - - ) - })} -
      -
      -
      - ) -} diff --git a/spaces/danielpedriniportfolio/AutoDA/README.md b/spaces/danielpedriniportfolio/AutoDA/README.md deleted file mode 100644 index a5786c4188fd270da98c914763766d89c44619bf..0000000000000000000000000000000000000000 --- a/spaces/danielpedriniportfolio/AutoDA/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AutoDA -emoji: 📉 -colorFrom: yellow -colorTo: blue -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/datajuicer/overview_scan/README.md b/spaces/datajuicer/overview_scan/README.md deleted file mode 100644 index 73fe431bc2c36fe129313c68762706a697cc5bd2..0000000000000000000000000000000000000000 --- a/spaces/datajuicer/overview_scan/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Overview Scan -emoji: 🏃 -colorFrom: red -colorTo: pink -sdk: docker -app_port: 8501 -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/datasciencemmw/ContextXLA-demo/README.md b/spaces/datasciencemmw/ContextXLA-demo/README.md deleted file mode 100644 index 8c74da59480beffe3ce47b87744328c47f83029b..0000000000000000000000000000000000000000 --- a/spaces/datasciencemmw/ContextXLA-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ContextXLA Stable Demo -emoji: 📖 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: true -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/davila7/semantic-search/embeddings.py b/spaces/davila7/semantic-search/embeddings.py deleted file mode 100644 index d7596d473dd2539e182058296e1f8844c0a37a22..0000000000000000000000000000000000000000 --- a/spaces/davila7/semantic-search/embeddings.py +++ /dev/null @@ -1,115 +0,0 @@ -"""Wrapper around OpenAI embedding models.""" -from typing import Any, Dict, List, Optional - -from pydantic import BaseModel, Extra, root_validator - -from langchain.embeddings.base import Embeddings -from langchain.utils import get_from_dict_or_env - -from tenacity import ( - retry, - retry_if_exception_type, - stop_after_attempt, - wait_exponential, -) -from openai.error import Timeout, APIError, APIConnectionError, RateLimitError - - -class OpenAIEmbeddings(BaseModel, Embeddings): - """Wrapper around OpenAI embedding models. - To use, you should have the ``openai`` python package installed, and the - environment variable ``OPENAI_API_KEY`` set with your API key or pass it - as a named parameter to the constructor. - Example: - .. code-block:: python - from langchain.embeddings import OpenAIEmbeddings - openai = OpenAIEmbeddings(openai_api_key="my-api-key") - """ - - client: Any #: :meta private: - document_model_name: str = "text-embedding-ada-002" - query_model_name: str = "text-embedding-ada-002" - openai_api_key: Optional[str] = None - - class Config: - """Configuration for this pydantic object.""" - - extra = Extra.forbid - - # TODO: deprecate this - @root_validator(pre=True, allow_reuse=True) - def get_model_names(cls, values: Dict) -> Dict: - """Get model names from just old model name.""" - if "model_name" in values: - if "document_model_name" in values: - raise ValueError( - "Both `model_name` and `document_model_name` were provided, " - "but only one should be." 
- ) - if "query_model_name" in values: - raise ValueError( - "Both `model_name` and `query_model_name` were provided, " - "but only one should be." - ) - model_name = values.pop("model_name") - values["document_model_name"] = f"text-search-{model_name}-doc-001" - values["query_model_name"] = f"text-search-{model_name}-query-001" - return values - - @root_validator(allow_reuse=True) - def validate_environment(cls, values: Dict) -> Dict: - """Validate that api key and python package exists in environment.""" - openai_api_key = get_from_dict_or_env( - values, "openai_api_key", "OPENAI_API_KEY" - ) - try: - import openai - - openai.api_key = openai_api_key - values["client"] = openai.Embedding - except ImportError: - raise ValueError( - "Could not import openai python package. " - "Please it install it with `pip install openai`." - ) - return values - - @retry( - reraise=True, - stop=stop_after_attempt(100), - wait=wait_exponential(multiplier=1, min=10, max=60), - retry=( - retry_if_exception_type(Timeout) - | retry_if_exception_type(APIError) - | retry_if_exception_type(APIConnectionError) - | retry_if_exception_type(RateLimitError) - ), - ) - def _embedding_func(self, text: str, *, engine: str) -> List[float]: - """Call out to OpenAI's embedding endpoint with exponential backoff.""" - # replace newlines, which can negatively affect performance. - text = text.replace("\n", " ") - return self.client.create(input=[text], engine=engine)["data"][0]["embedding"] - - def embed_documents(self, texts: List[str]) -> List[List[float]]: - """Call out to OpenAI's embedding endpoint for embedding search docs. - Args: - texts: The list of texts to embed. - Returns: - List of embeddings, one for each text. - """ - responses = [ - self._embedding_func(text, engine=self.document_model_name) - for text in texts - ] - return responses - - def embed_query(self, text: str) -> List[float]: - """Call out to OpenAI's embedding endpoint for embedding query text. - Args: - text: The text to embed. - Returns: - Embeddings for the text. - """ - embedding = self._embedding_func(text, engine=self.query_model_name) - return embedding \ No newline at end of file diff --git a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/openai.py b/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/openai.py deleted file mode 100644 index 3f4eb8b55fe960e1792b3da804b60b3d8f70fe26..0000000000000000000000000000000000000000 --- a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/openai.py +++ /dev/null @@ -1,156 +0,0 @@ -""" OpenAI pretrained model functions - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. 
-""" - -import os -import warnings -from typing import Union, List - -import torch - -from .model import build_model_from_openai_state_dict -from .pretrained import ( - get_pretrained_url, - list_pretrained_tag_models, - download_pretrained, -) - -__all__ = ["list_openai_models", "load_openai_model"] - - -def list_openai_models() -> List[str]: - """Returns the names of available CLIP models""" - return list_pretrained_tag_models("openai") - - -def load_openai_model( - name: str, - model_cfg, - device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", - jit=True, - cache_dir=os.path.expanduser("~/.cache/clip"), - enable_fusion: bool = False, - fusion_type: str = "None", -): - """Load a CLIP model, preserve its text pretrained part, and set in the CLAP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - device : Union[str, torch.device] - The device to put the loaded model - jit : bool - Whether to load the optimized JIT model (default) or more hackable non-JIT model. - - Returns - ------- - model : torch.nn.Module - The CLAP model - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if get_pretrained_url(name, "openai"): - model_path = download_pretrained( - get_pretrained_url(name, "openai"), root=cache_dir - ) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError( - f"Model {name} not found; available models = {list_openai_models()}" - ) - - try: - # loading JIT archive - model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn( - f"File {model_path} is not a JIT archive. 
Loading as a state dict instead" - ) - jit = False - state_dict = torch.load(model_path, map_location="cpu") - - if not jit: - try: - model = build_model_from_openai_state_dict( - state_dict or model.state_dict(), model_cfg, enable_fusion, fusion_type - ).to(device) - except KeyError: - sd = {k[7:]: v for k, v in state_dict["state_dict"].items()} - model = build_model_from_openai_state_dict( - sd, model_cfg, enable_fusion, fusion_type - ).to(device) - - if str(device) == "cpu": - model.float() - return model - - # patch the device names - device_holder = torch.jit.trace( - lambda: torch.ones([]).to(torch.device(device)), example_inputs=[] - ) - device_node = [ - n - for n in device_holder.graph.findAllNodes("prim::Constant") - if "Device" in repr(n) - ][-1] - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith( - "cuda" - ): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_audio) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if str(device) == "cpu": - float_holder = torch.jit.trace( - lambda: torch.ones([]).float(), example_inputs=[] - ) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [ - 1, - 2, - ]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_audio) - patch_float(model.encode_text) - model.float() - - model.audio_branch.audio_length = model.audio_cfg.audio_length - return model diff --git a/spaces/dawood/gradio_videogallery/src/backend/gradio_videogallery/templates/component/index.js b/spaces/dawood/gradio_videogallery/src/backend/gradio_videogallery/templates/component/index.js deleted file mode 100644 index 7009ea46404d275d148ede1eb68b9e037b1ebf79..0000000000000000000000000000000000000000 --- a/spaces/dawood/gradio_videogallery/src/backend/gradio_videogallery/templates/component/index.js +++ /dev/null @@ -1,7301 +0,0 @@ -const { - SvelteComponent: Pr, - assign: Ir, - create_slot: kr, - detach: Lr, - element: Nr, - get_all_dirty_from_scope: Or, - get_slot_changes: Mr, - get_spread_update: Rr, - init: Dr, - insert: Ur, - safe_not_equal: Gr, - set_dynamic_element_data: Tn, - set_style: X, - toggle_class: de, - transition_in: Oi, - transition_out: Mi, - update_slot_base: Fr -} = window.__gradio__svelte__internal; -function xr(e) { - let t, n, i; - const r = ( - /*#slots*/ - e[17].default - ), l = kr( - r, - e, - /*$$scope*/ - e[16], - null - ); - let o = [ - { "data-testid": ( - /*test_id*/ - e[7] - ) }, - { id: ( - /*elem_id*/ - e[2] - ) }, - { - class: n = "block " + /*elem_classes*/ - e[3].join(" ") + " svelte-1t38q2d" - } - ], a = {}; - for (let s = 0; s < o.length; s += 1) - a = Ir(a, o[s]); - return { - c() { - t = Nr( - /*tag*/ - e[14] - ), l && l.c(), Tn( - /*tag*/ - e[14] - )(t, a), de( 
- t, - "hidden", - /*visible*/ - e[10] === !1 - ), de( - t, - "padded", - /*padding*/ - e[6] - ), de( - t, - "border_focus", - /*border_mode*/ - e[5] === "focus" - ), de(t, "hide-container", !/*explicit_call*/ - e[8] && !/*container*/ - e[9]), X(t, "height", typeof /*height*/ - e[0] == "number" ? ( - /*height*/ - e[0] + "px" - ) : void 0), X(t, "width", typeof /*width*/ - e[1] == "number" ? `calc(min(${/*width*/ - e[1]}px, 100%))` : void 0), X( - t, - "border-style", - /*variant*/ - e[4] - ), X( - t, - "overflow", - /*allow_overflow*/ - e[11] ? "visible" : "hidden" - ), X( - t, - "flex-grow", - /*scale*/ - e[12] - ), X(t, "min-width", `calc(min(${/*min_width*/ - e[13]}px, 100%))`), X(t, "border-width", "var(--block-border-width)"); - }, - m(s, u) { - Ur(s, t, u), l && l.m(t, null), i = !0; - }, - p(s, u) { - l && l.p && (!i || u & /*$$scope*/ - 65536) && Fr( - l, - r, - s, - /*$$scope*/ - s[16], - i ? Mr( - r, - /*$$scope*/ - s[16], - u, - null - ) : Or( - /*$$scope*/ - s[16] - ), - null - ), Tn( - /*tag*/ - s[14] - )(t, a = Rr(o, [ - (!i || u & /*test_id*/ - 128) && { "data-testid": ( - /*test_id*/ - s[7] - ) }, - (!i || u & /*elem_id*/ - 4) && { id: ( - /*elem_id*/ - s[2] - ) }, - (!i || u & /*elem_classes*/ - 8 && n !== (n = "block " + /*elem_classes*/ - s[3].join(" ") + " svelte-1t38q2d")) && { class: n } - ])), de( - t, - "hidden", - /*visible*/ - s[10] === !1 - ), de( - t, - "padded", - /*padding*/ - s[6] - ), de( - t, - "border_focus", - /*border_mode*/ - s[5] === "focus" - ), de(t, "hide-container", !/*explicit_call*/ - s[8] && !/*container*/ - s[9]), u & /*height*/ - 1 && X(t, "height", typeof /*height*/ - s[0] == "number" ? ( - /*height*/ - s[0] + "px" - ) : void 0), u & /*width*/ - 2 && X(t, "width", typeof /*width*/ - s[1] == "number" ? `calc(min(${/*width*/ - s[1]}px, 100%))` : void 0), u & /*variant*/ - 16 && X( - t, - "border-style", - /*variant*/ - s[4] - ), u & /*allow_overflow*/ - 2048 && X( - t, - "overflow", - /*allow_overflow*/ - s[11] ? "visible" : "hidden" - ), u & /*scale*/ - 4096 && X( - t, - "flex-grow", - /*scale*/ - s[12] - ), u & /*min_width*/ - 8192 && X(t, "min-width", `calc(min(${/*min_width*/ - s[13]}px, 100%))`); - }, - i(s) { - i || (Oi(l, s), i = !0); - }, - o(s) { - Mi(l, s), i = !1; - }, - d(s) { - s && Lr(t), l && l.d(s); - } - }; -} -function jr(e) { - let t, n = ( - /*tag*/ - e[14] && xr(e) - ); - return { - c() { - n && n.c(); - }, - m(i, r) { - n && n.m(i, r), t = !0; - }, - p(i, [r]) { - /*tag*/ - i[14] && n.p(i, r); - }, - i(i) { - t || (Oi(n, i), t = !0); - }, - o(i) { - Mi(n, i), t = !1; - }, - d(i) { - n && n.d(i); - } - }; -} -function Vr(e, t, n) { - let { $$slots: i = {}, $$scope: r } = t, { height: l = void 0 } = t, { width: o = void 0 } = t, { elem_id: a = "" } = t, { elem_classes: s = [] } = t, { variant: u = "solid" } = t, { border_mode: f = "base" } = t, { padding: c = !0 } = t, { type: h = "normal" } = t, { test_id: _ = void 0 } = t, { explicit_call: b = !1 } = t, { container: T = !0 } = t, { visible: y = !0 } = t, { allow_overflow: C = !0 } = t, { scale: E = null } = t, { min_width: m = 0 } = t, g = h === "fieldset" ? 
"fieldset" : "div"; - return e.$$set = (p) => { - "height" in p && n(0, l = p.height), "width" in p && n(1, o = p.width), "elem_id" in p && n(2, a = p.elem_id), "elem_classes" in p && n(3, s = p.elem_classes), "variant" in p && n(4, u = p.variant), "border_mode" in p && n(5, f = p.border_mode), "padding" in p && n(6, c = p.padding), "type" in p && n(15, h = p.type), "test_id" in p && n(7, _ = p.test_id), "explicit_call" in p && n(8, b = p.explicit_call), "container" in p && n(9, T = p.container), "visible" in p && n(10, y = p.visible), "allow_overflow" in p && n(11, C = p.allow_overflow), "scale" in p && n(12, E = p.scale), "min_width" in p && n(13, m = p.min_width), "$$scope" in p && n(16, r = p.$$scope); - }, [ - l, - o, - a, - s, - u, - f, - c, - _, - b, - T, - y, - C, - E, - m, - g, - h, - r, - i - ]; -} -class zr extends Pr { - constructor(t) { - super(), Dr(this, t, Vr, jr, Gr, { - height: 0, - width: 1, - elem_id: 2, - elem_classes: 3, - variant: 4, - border_mode: 5, - padding: 6, - type: 15, - test_id: 7, - explicit_call: 8, - container: 9, - visible: 10, - allow_overflow: 11, - scale: 12, - min_width: 13 - }); - } -} -const { - SvelteComponent: qr, - append: It, - attr: at, - create_component: Xr, - destroy_component: Zr, - detach: Wr, - element: An, - init: Qr, - insert: Jr, - mount_component: Yr, - safe_not_equal: Kr, - set_data: $r, - space: el, - text: tl, - toggle_class: be, - transition_in: nl, - transition_out: il -} = window.__gradio__svelte__internal; -function rl(e) { - let t, n, i, r, l, o; - return i = new /*Icon*/ - e[1]({}), { - c() { - t = An("label"), n = An("span"), Xr(i.$$.fragment), r = el(), l = tl( - /*label*/ - e[0] - ), at(n, "class", "svelte-9gxdi0"), at(t, "for", ""), at(t, "data-testid", "block-label"), at(t, "class", "svelte-9gxdi0"), be(t, "hide", !/*show_label*/ - e[2]), be(t, "sr-only", !/*show_label*/ - e[2]), be( - t, - "float", - /*float*/ - e[4] - ), be( - t, - "hide-label", - /*disable*/ - e[3] - ); - }, - m(a, s) { - Jr(a, t, s), It(t, n), Yr(i, n, null), It(t, r), It(t, l), o = !0; - }, - p(a, [s]) { - (!o || s & /*label*/ - 1) && $r( - l, - /*label*/ - a[0] - ), (!o || s & /*show_label*/ - 4) && be(t, "hide", !/*show_label*/ - a[2]), (!o || s & /*show_label*/ - 4) && be(t, "sr-only", !/*show_label*/ - a[2]), (!o || s & /*float*/ - 16) && be( - t, - "float", - /*float*/ - a[4] - ), (!o || s & /*disable*/ - 8) && be( - t, - "hide-label", - /*disable*/ - a[3] - ); - }, - i(a) { - o || (nl(i.$$.fragment, a), o = !0); - }, - o(a) { - il(i.$$.fragment, a), o = !1; - }, - d(a) { - a && Wr(t), Zr(i); - } - }; -} -function ll(e, t, n) { - let { label: i = null } = t, { Icon: r } = t, { show_label: l = !0 } = t, { disable: o = !1 } = t, { float: a = !0 } = t; - return e.$$set = (s) => { - "label" in s && n(0, i = s.label), "Icon" in s && n(1, r = s.Icon), "show_label" in s && n(2, l = s.show_label), "disable" in s && n(3, o = s.disable), "float" in s && n(4, a = s.float); - }, [i, r, l, o, a]; -} -class ol extends qr { - constructor(t) { - super(), Qr(this, t, ll, rl, Kr, { - label: 0, - Icon: 1, - show_label: 2, - disable: 3, - float: 4 - }); - } -} -const { - SvelteComponent: sl, - append: Qt, - attr: Se, - bubble: al, - create_component: ul, - destroy_component: fl, - detach: Ri, - element: Jt, - init: cl, - insert: Di, - listen: hl, - mount_component: _l, - safe_not_equal: ml, - set_data: dl, - space: bl, - text: gl, - toggle_class: ge, - transition_in: pl, - transition_out: vl -} = window.__gradio__svelte__internal; -function Hn(e) { - let t, n; - 
return { - c() { - t = Jt("span"), n = gl( - /*label*/ - e[1] - ), Se(t, "class", "svelte-xtz2g8"); - }, - m(i, r) { - Di(i, t, r), Qt(t, n); - }, - p(i, r) { - r & /*label*/ - 2 && dl( - n, - /*label*/ - i[1] - ); - }, - d(i) { - i && Ri(t); - } - }; -} -function wl(e) { - let t, n, i, r, l, o, a, s = ( - /*show_label*/ - e[2] && Hn(e) - ); - return r = new /*Icon*/ - e[0]({}), { - c() { - t = Jt("button"), s && s.c(), n = bl(), i = Jt("div"), ul(r.$$.fragment), Se(i, "class", "svelte-xtz2g8"), ge( - i, - "small", - /*size*/ - e[4] === "small" - ), ge( - i, - "large", - /*size*/ - e[4] === "large" - ), Se( - t, - "aria-label", - /*label*/ - e[1] - ), Se( - t, - "title", - /*label*/ - e[1] - ), Se(t, "class", "svelte-xtz2g8"), ge( - t, - "pending", - /*pending*/ - e[3] - ), ge( - t, - "padded", - /*padded*/ - e[5] - ); - }, - m(u, f) { - Di(u, t, f), s && s.m(t, null), Qt(t, n), Qt(t, i), _l(r, i, null), l = !0, o || (a = hl( - t, - "click", - /*click_handler*/ - e[6] - ), o = !0); - }, - p(u, [f]) { - /*show_label*/ - u[2] ? s ? s.p(u, f) : (s = Hn(u), s.c(), s.m(t, n)) : s && (s.d(1), s = null), (!l || f & /*size*/ - 16) && ge( - i, - "small", - /*size*/ - u[4] === "small" - ), (!l || f & /*size*/ - 16) && ge( - i, - "large", - /*size*/ - u[4] === "large" - ), (!l || f & /*label*/ - 2) && Se( - t, - "aria-label", - /*label*/ - u[1] - ), (!l || f & /*label*/ - 2) && Se( - t, - "title", - /*label*/ - u[1] - ), (!l || f & /*pending*/ - 8) && ge( - t, - "pending", - /*pending*/ - u[3] - ), (!l || f & /*padded*/ - 32) && ge( - t, - "padded", - /*padded*/ - u[5] - ); - }, - i(u) { - l || (pl(r.$$.fragment, u), l = !0); - }, - o(u) { - vl(r.$$.fragment, u), l = !1; - }, - d(u) { - u && Ri(t), s && s.d(), fl(r), o = !1, a(); - } - }; -} -function yl(e, t, n) { - let { Icon: i } = t, { label: r = "" } = t, { show_label: l = !1 } = t, { pending: o = !1 } = t, { size: a = "small" } = t, { padded: s = !0 } = t; - function u(f) { - al.call(this, e, f); - } - return e.$$set = (f) => { - "Icon" in f && n(0, i = f.Icon), "label" in f && n(1, r = f.label), "show_label" in f && n(2, l = f.show_label), "pending" in f && n(3, o = f.pending), "size" in f && n(4, a = f.size), "padded" in f && n(5, s = f.padded); - }, [i, r, l, o, a, s, u]; -} -class et extends sl { - constructor(t) { - super(), cl(this, t, yl, wl, ml, { - Icon: 0, - label: 1, - show_label: 2, - pending: 3, - size: 4, - padded: 5 - }); - } -} -const { - SvelteComponent: El, - append: Sl, - attr: kt, - binding_callbacks: Tl, - create_slot: Al, - detach: Hl, - element: Bn, - get_all_dirty_from_scope: Bl, - get_slot_changes: Cl, - init: Pl, - insert: Il, - safe_not_equal: kl, - toggle_class: pe, - transition_in: Ll, - transition_out: Nl, - update_slot_base: Ol -} = window.__gradio__svelte__internal; -function Ml(e) { - let t, n, i; - const r = ( - /*#slots*/ - e[5].default - ), l = Al( - r, - e, - /*$$scope*/ - e[4], - null - ); - return { - c() { - t = Bn("div"), n = Bn("div"), l && l.c(), kt(n, "class", "icon svelte-3w3rth"), kt(t, "class", "empty svelte-3w3rth"), kt(t, "aria-label", "Empty value"), pe( - t, - "small", - /*size*/ - e[0] === "small" - ), pe( - t, - "large", - /*size*/ - e[0] === "large" - ), pe( - t, - "unpadded_box", - /*unpadded_box*/ - e[1] - ), pe( - t, - "small_parent", - /*parent_height*/ - e[3] - ); - }, - m(o, a) { - Il(o, t, a), Sl(t, n), l && l.m(n, null), e[6](t), i = !0; - }, - p(o, [a]) { - l && l.p && (!i || a & /*$$scope*/ - 16) && Ol( - l, - r, - o, - /*$$scope*/ - o[4], - i ? 
Cl( - r, - /*$$scope*/ - o[4], - a, - null - ) : Bl( - /*$$scope*/ - o[4] - ), - null - ), (!i || a & /*size*/ - 1) && pe( - t, - "small", - /*size*/ - o[0] === "small" - ), (!i || a & /*size*/ - 1) && pe( - t, - "large", - /*size*/ - o[0] === "large" - ), (!i || a & /*unpadded_box*/ - 2) && pe( - t, - "unpadded_box", - /*unpadded_box*/ - o[1] - ), (!i || a & /*parent_height*/ - 8) && pe( - t, - "small_parent", - /*parent_height*/ - o[3] - ); - }, - i(o) { - i || (Ll(l, o), i = !0); - }, - o(o) { - Nl(l, o), i = !1; - }, - d(o) { - o && Hl(t), l && l.d(o), e[6](null); - } - }; -} -function Rl(e) { - let t, n = e[0], i = 1; - for (; i < e.length; ) { - const r = e[i], l = e[i + 1]; - if (i += 2, (r === "optionalAccess" || r === "optionalCall") && n == null) - return; - r === "access" || r === "optionalAccess" ? (t = n, n = l(n)) : (r === "call" || r === "optionalCall") && (n = l((...o) => n.call(t, ...o)), t = void 0); - } - return n; -} -function Dl(e, t, n) { - let i, { $$slots: r = {}, $$scope: l } = t, { size: o = "small" } = t, { unpadded_box: a = !1 } = t, s; - function u(c) { - if (!c) - return !1; - const { height: h } = c.getBoundingClientRect(), { height: _ } = Rl([ - c, - "access", - (b) => b.parentElement, - "optionalAccess", - (b) => b.getBoundingClientRect, - "call", - (b) => b() - ]) || { height: h }; - return h > _ + 2; - } - function f(c) { - Tl[c ? "unshift" : "push"](() => { - s = c, n(2, s); - }); - } - return e.$$set = (c) => { - "size" in c && n(0, o = c.size), "unpadded_box" in c && n(1, a = c.unpadded_box), "$$scope" in c && n(4, l = c.$$scope); - }, e.$$.update = () => { - e.$$.dirty & /*el*/ - 4 && n(3, i = u(s)); - }, [o, a, s, i, l, r, f]; -} -class Ul extends El { - constructor(t) { - super(), Pl(this, t, Dl, Ml, kl, { size: 0, unpadded_box: 1 }); - } -} -const { - SvelteComponent: Gl, - append: Lt, - attr: te, - detach: Fl, - init: xl, - insert: jl, - noop: Nt, - safe_not_equal: Vl, - set_style: oe, - svg_element: ut -} = window.__gradio__svelte__internal; -function zl(e) { - let t, n, i, r; - return { - c() { - t = ut("svg"), n = ut("g"), i = ut("path"), r = ut("path"), te(i, "d", "M18,6L6.087,17.913"), oe(i, "fill", "none"), oe(i, "fill-rule", "nonzero"), oe(i, "stroke-width", "2px"), te(n, "transform", "matrix(1.14096,-0.140958,-0.140958,1.14096,-0.0559523,0.0559523)"), te(r, "d", "M4.364,4.364L19.636,19.636"), oe(r, "fill", "none"), oe(r, "fill-rule", "nonzero"), oe(r, "stroke-width", "2px"), te(t, "width", "100%"), te(t, "height", "100%"), te(t, "viewBox", "0 0 24 24"), te(t, "version", "1.1"), te(t, "xmlns", "http://www.w3.org/2000/svg"), te(t, "xmlns:xlink", "http://www.w3.org/1999/xlink"), te(t, "xml:space", "preserve"), te(t, "stroke", "currentColor"), oe(t, "fill-rule", "evenodd"), oe(t, "clip-rule", "evenodd"), oe(t, "stroke-linecap", "round"), oe(t, "stroke-linejoin", "round"); - }, - m(l, o) { - jl(l, t, o), Lt(t, n), Lt(n, i), Lt(t, r); - }, - p: Nt, - i: Nt, - o: Nt, - d(l) { - l && Fl(t); - } - }; -} -class ql extends Gl { - constructor(t) { - super(), xl(this, t, null, zl, Vl, {}); - } -} -const { - SvelteComponent: Xl, - append: Zl, - attr: Xe, - detach: Wl, - init: Ql, - insert: Jl, - noop: Ot, - safe_not_equal: Yl, - svg_element: Cn -} = window.__gradio__svelte__internal; -function Kl(e) { - let t, n; - return { - c() { - t = Cn("svg"), n = Cn("path"), Xe(n, "d", 
"M23,20a5,5,0,0,0-3.89,1.89L11.8,17.32a4.46,4.46,0,0,0,0-2.64l7.31-4.57A5,5,0,1,0,18,7a4.79,4.79,0,0,0,.2,1.32l-7.31,4.57a5,5,0,1,0,0,6.22l7.31,4.57A4.79,4.79,0,0,0,18,25a5,5,0,1,0,5-5ZM23,4a3,3,0,1,1-3,3A3,3,0,0,1,23,4ZM7,19a3,3,0,1,1,3-3A3,3,0,0,1,7,19Zm16,9a3,3,0,1,1,3-3A3,3,0,0,1,23,28Z"), Xe(n, "fill", "currentColor"), Xe(t, "id", "icon"), Xe(t, "xmlns", "http://www.w3.org/2000/svg"), Xe(t, "viewBox", "0 0 32 32"); - }, - m(i, r) { - Jl(i, t, r), Zl(t, n); - }, - p: Ot, - i: Ot, - o: Ot, - d(i) { - i && Wl(t); - } - }; -} -class $l extends Xl { - constructor(t) { - super(), Ql(this, t, null, Kl, Yl, {}); - } -} -const { - SvelteComponent: eo, - append: to, - attr: Ce, - detach: no, - init: io, - insert: ro, - noop: Mt, - safe_not_equal: lo, - svg_element: Pn -} = window.__gradio__svelte__internal; -function oo(e) { - let t, n; - return { - c() { - t = Pn("svg"), n = Pn("path"), Ce(n, "fill", "currentColor"), Ce(n, "d", "M26 24v4H6v-4H4v4a2 2 0 0 0 2 2h20a2 2 0 0 0 2-2v-4zm0-10l-1.41-1.41L17 20.17V2h-2v18.17l-7.59-7.58L6 14l10 10l10-10z"), Ce(t, "xmlns", "http://www.w3.org/2000/svg"), Ce(t, "width", "100%"), Ce(t, "height", "100%"), Ce(t, "viewBox", "0 0 32 32"); - }, - m(i, r) { - ro(i, t, r), to(t, n); - }, - p: Mt, - i: Mt, - o: Mt, - d(i) { - i && no(t); - } - }; -} -class so extends eo { - constructor(t) { - super(), io(this, t, null, oo, lo, {}); - } -} -const { - SvelteComponent: ao, - append: uo, - attr: ne, - detach: fo, - init: co, - insert: ho, - noop: Rt, - safe_not_equal: _o, - svg_element: In -} = window.__gradio__svelte__internal; -function mo(e) { - let t, n; - return { - c() { - t = In("svg"), n = In("path"), ne(n, "d", "M17 3a2.828 2.828 0 1 1 4 4L7.5 20.5 2 22l1.5-5.5L17 3z"), ne(t, "xmlns", "http://www.w3.org/2000/svg"), ne(t, "width", "100%"), ne(t, "height", "100%"), ne(t, "viewBox", "0 0 24 24"), ne(t, "fill", "none"), ne(t, "stroke", "currentColor"), ne(t, "stroke-width", "1.5"), ne(t, "stroke-linecap", "round"), ne(t, "stroke-linejoin", "round"), ne(t, "class", "feather feather-edit-2"); - }, - m(i, r) { - ho(i, t, r), uo(t, n); - }, - p: Rt, - i: Rt, - o: Rt, - d(i) { - i && fo(t); - } - }; -} -class bo extends ao { - constructor(t) { - super(), co(this, t, null, mo, _o, {}); - } -} -const { - SvelteComponent: go, - append: Dt, - attr: R, - detach: po, - init: vo, - insert: wo, - noop: Ut, - safe_not_equal: yo, - svg_element: ft -} = window.__gradio__svelte__internal; -function Eo(e) { - let t, n, i, r; - return { - c() { - t = ft("svg"), n = ft("rect"), i = ft("circle"), r = ft("polyline"), R(n, "x", "3"), R(n, "y", "3"), R(n, "width", "18"), R(n, "height", "18"), R(n, "rx", "2"), R(n, "ry", "2"), R(i, "cx", "8.5"), R(i, "cy", "8.5"), R(i, "r", "1.5"), R(r, "points", "21 15 16 10 5 21"), R(t, "xmlns", "http://www.w3.org/2000/svg"), R(t, "width", "100%"), R(t, "height", "100%"), R(t, "viewBox", "0 0 24 24"), R(t, "fill", "none"), R(t, "stroke", "currentColor"), R(t, "stroke-width", "1.5"), R(t, "stroke-linecap", "round"), R(t, "stroke-linejoin", "round"), R(t, "class", "feather feather-image"); - }, - m(l, o) { - wo(l, t, o), Dt(t, n), Dt(t, i), Dt(t, r); - }, - p: Ut, - i: Ut, - o: Ut, - d(l) { - l && po(t); - } - }; -} -class Ui extends go { - constructor(t) { - super(), vo(this, t, null, Eo, yo, {}); - } -} -const { - SvelteComponent: So, - append: kn, - attr: Q, - detach: To, - init: Ao, - insert: Ho, - noop: Gt, - safe_not_equal: Bo, - svg_element: Ft -} = window.__gradio__svelte__internal; -function Co(e) { - let t, n, i; - return { - c() { - t = 
Ft("svg"), n = Ft("polyline"), i = Ft("path"), Q(n, "points", "1 4 1 10 7 10"), Q(i, "d", "M3.51 15a9 9 0 1 0 2.13-9.36L1 10"), Q(t, "xmlns", "http://www.w3.org/2000/svg"), Q(t, "width", "100%"), Q(t, "height", "100%"), Q(t, "viewBox", "0 0 24 24"), Q(t, "fill", "none"), Q(t, "stroke", "currentColor"), Q(t, "stroke-width", "2"), Q(t, "stroke-linecap", "round"), Q(t, "stroke-linejoin", "round"), Q(t, "class", "feather feather-rotate-ccw"); - }, - m(r, l) { - Ho(r, t, l), kn(t, n), kn(t, i); - }, - p: Gt, - i: Gt, - o: Gt, - d(r) { - r && To(t); - } - }; -} -class Po extends So { - constructor(t) { - super(), Ao(this, t, null, Co, Bo, {}); - } -} -const Io = [ - { color: "red", primary: 600, secondary: 100 }, - { color: "green", primary: 600, secondary: 100 }, - { color: "blue", primary: 600, secondary: 100 }, - { color: "yellow", primary: 500, secondary: 100 }, - { color: "purple", primary: 600, secondary: 100 }, - { color: "teal", primary: 600, secondary: 100 }, - { color: "orange", primary: 600, secondary: 100 }, - { color: "cyan", primary: 600, secondary: 100 }, - { color: "lime", primary: 500, secondary: 100 }, - { color: "pink", primary: 600, secondary: 100 } -], Ln = { - inherit: "inherit", - current: "currentColor", - transparent: "transparent", - black: "#000", - white: "#fff", - slate: { - 50: "#f8fafc", - 100: "#f1f5f9", - 200: "#e2e8f0", - 300: "#cbd5e1", - 400: "#94a3b8", - 500: "#64748b", - 600: "#475569", - 700: "#334155", - 800: "#1e293b", - 900: "#0f172a", - 950: "#020617" - }, - gray: { - 50: "#f9fafb", - 100: "#f3f4f6", - 200: "#e5e7eb", - 300: "#d1d5db", - 400: "#9ca3af", - 500: "#6b7280", - 600: "#4b5563", - 700: "#374151", - 800: "#1f2937", - 900: "#111827", - 950: "#030712" - }, - zinc: { - 50: "#fafafa", - 100: "#f4f4f5", - 200: "#e4e4e7", - 300: "#d4d4d8", - 400: "#a1a1aa", - 500: "#71717a", - 600: "#52525b", - 700: "#3f3f46", - 800: "#27272a", - 900: "#18181b", - 950: "#09090b" - }, - neutral: { - 50: "#fafafa", - 100: "#f5f5f5", - 200: "#e5e5e5", - 300: "#d4d4d4", - 400: "#a3a3a3", - 500: "#737373", - 600: "#525252", - 700: "#404040", - 800: "#262626", - 900: "#171717", - 950: "#0a0a0a" - }, - stone: { - 50: "#fafaf9", - 100: "#f5f5f4", - 200: "#e7e5e4", - 300: "#d6d3d1", - 400: "#a8a29e", - 500: "#78716c", - 600: "#57534e", - 700: "#44403c", - 800: "#292524", - 900: "#1c1917", - 950: "#0c0a09" - }, - red: { - 50: "#fef2f2", - 100: "#fee2e2", - 200: "#fecaca", - 300: "#fca5a5", - 400: "#f87171", - 500: "#ef4444", - 600: "#dc2626", - 700: "#b91c1c", - 800: "#991b1b", - 900: "#7f1d1d", - 950: "#450a0a" - }, - orange: { - 50: "#fff7ed", - 100: "#ffedd5", - 200: "#fed7aa", - 300: "#fdba74", - 400: "#fb923c", - 500: "#f97316", - 600: "#ea580c", - 700: "#c2410c", - 800: "#9a3412", - 900: "#7c2d12", - 950: "#431407" - }, - amber: { - 50: "#fffbeb", - 100: "#fef3c7", - 200: "#fde68a", - 300: "#fcd34d", - 400: "#fbbf24", - 500: "#f59e0b", - 600: "#d97706", - 700: "#b45309", - 800: "#92400e", - 900: "#78350f", - 950: "#451a03" - }, - yellow: { - 50: "#fefce8", - 100: "#fef9c3", - 200: "#fef08a", - 300: "#fde047", - 400: "#facc15", - 500: "#eab308", - 600: "#ca8a04", - 700: "#a16207", - 800: "#854d0e", - 900: "#713f12", - 950: "#422006" - }, - lime: { - 50: "#f7fee7", - 100: "#ecfccb", - 200: "#d9f99d", - 300: "#bef264", - 400: "#a3e635", - 500: "#84cc16", - 600: "#65a30d", - 700: "#4d7c0f", - 800: "#3f6212", - 900: "#365314", - 950: "#1a2e05" - }, - green: { - 50: "#f0fdf4", - 100: "#dcfce7", - 200: "#bbf7d0", - 300: "#86efac", - 400: "#4ade80", - 500: "#22c55e", - 600: 
"#16a34a", - 700: "#15803d", - 800: "#166534", - 900: "#14532d", - 950: "#052e16" - }, - emerald: { - 50: "#ecfdf5", - 100: "#d1fae5", - 200: "#a7f3d0", - 300: "#6ee7b7", - 400: "#34d399", - 500: "#10b981", - 600: "#059669", - 700: "#047857", - 800: "#065f46", - 900: "#064e3b", - 950: "#022c22" - }, - teal: { - 50: "#f0fdfa", - 100: "#ccfbf1", - 200: "#99f6e4", - 300: "#5eead4", - 400: "#2dd4bf", - 500: "#14b8a6", - 600: "#0d9488", - 700: "#0f766e", - 800: "#115e59", - 900: "#134e4a", - 950: "#042f2e" - }, - cyan: { - 50: "#ecfeff", - 100: "#cffafe", - 200: "#a5f3fc", - 300: "#67e8f9", - 400: "#22d3ee", - 500: "#06b6d4", - 600: "#0891b2", - 700: "#0e7490", - 800: "#155e75", - 900: "#164e63", - 950: "#083344" - }, - sky: { - 50: "#f0f9ff", - 100: "#e0f2fe", - 200: "#bae6fd", - 300: "#7dd3fc", - 400: "#38bdf8", - 500: "#0ea5e9", - 600: "#0284c7", - 700: "#0369a1", - 800: "#075985", - 900: "#0c4a6e", - 950: "#082f49" - }, - blue: { - 50: "#eff6ff", - 100: "#dbeafe", - 200: "#bfdbfe", - 300: "#93c5fd", - 400: "#60a5fa", - 500: "#3b82f6", - 600: "#2563eb", - 700: "#1d4ed8", - 800: "#1e40af", - 900: "#1e3a8a", - 950: "#172554" - }, - indigo: { - 50: "#eef2ff", - 100: "#e0e7ff", - 200: "#c7d2fe", - 300: "#a5b4fc", - 400: "#818cf8", - 500: "#6366f1", - 600: "#4f46e5", - 700: "#4338ca", - 800: "#3730a3", - 900: "#312e81", - 950: "#1e1b4b" - }, - violet: { - 50: "#f5f3ff", - 100: "#ede9fe", - 200: "#ddd6fe", - 300: "#c4b5fd", - 400: "#a78bfa", - 500: "#8b5cf6", - 600: "#7c3aed", - 700: "#6d28d9", - 800: "#5b21b6", - 900: "#4c1d95", - 950: "#2e1065" - }, - purple: { - 50: "#faf5ff", - 100: "#f3e8ff", - 200: "#e9d5ff", - 300: "#d8b4fe", - 400: "#c084fc", - 500: "#a855f7", - 600: "#9333ea", - 700: "#7e22ce", - 800: "#6b21a8", - 900: "#581c87", - 950: "#3b0764" - }, - fuchsia: { - 50: "#fdf4ff", - 100: "#fae8ff", - 200: "#f5d0fe", - 300: "#f0abfc", - 400: "#e879f9", - 500: "#d946ef", - 600: "#c026d3", - 700: "#a21caf", - 800: "#86198f", - 900: "#701a75", - 950: "#4a044e" - }, - pink: { - 50: "#fdf2f8", - 100: "#fce7f3", - 200: "#fbcfe8", - 300: "#f9a8d4", - 400: "#f472b6", - 500: "#ec4899", - 600: "#db2777", - 700: "#be185d", - 800: "#9d174d", - 900: "#831843", - 950: "#500724" - }, - rose: { - 50: "#fff1f2", - 100: "#ffe4e6", - 200: "#fecdd3", - 300: "#fda4af", - 400: "#fb7185", - 500: "#f43f5e", - 600: "#e11d48", - 700: "#be123c", - 800: "#9f1239", - 900: "#881337", - 950: "#4c0519" - } -}; -Io.reduce( - (e, { color: t, primary: n, secondary: i }) => ({ - ...e, - [t]: { - primary: Ln[t][n], - secondary: Ln[t][i] - } - }), - {} -); -function ko(e) { - let t, n = e[0], i = 1; - for (; i < e.length; ) { - const r = e[i], l = e[i + 1]; - if (i += 2, (r === "optionalAccess" || r === "optionalCall") && n == null) - return; - r === "access" || r === "optionalAccess" ? 
(t = n, n = l(n)) : (r === "call" || r === "optionalCall") && (n = l((...o) => n.call(t, ...o)), t = void 0); - } - return n; -} -class mt extends Error { - constructor(t) { - super(t), this.name = "ShareError"; - } -} -async function Lo(e, t) { - if (window.__gradio_space__ == null) - throw new mt("Must be on Spaces to share."); - let n, i, r; - if (t === "url") { - const s = await fetch(e); - n = await s.blob(), i = s.headers.get("content-type") || "", r = s.headers.get("content-disposition") || ""; - } else - n = No(e), i = e.split(";")[0].split(":")[1], r = "file" + i.split("/")[1]; - const l = new File([n], r, { type: i }), o = await fetch("https://huggingface.co/uploads", { - method: "POST", - body: l, - headers: { - "Content-Type": l.type, - "X-Requested-With": "XMLHttpRequest" - } - }); - if (!o.ok) { - if (ko([o, "access", (s) => s.headers, "access", (s) => s.get, "call", (s) => s("content-type"), "optionalAccess", (s) => s.includes, "call", (s) => s("application/json")])) { - const s = await o.json(); - throw new mt(`Upload failed: ${s.error}`); - } - throw new mt("Upload failed."); - } - return await o.text(); -} -function No(e) { - for (var t = e.split(","), n = t[0].match(/:(.*?);/)[1], i = atob(t[1]), r = i.length, l = new Uint8Array(r); r--; ) - l[r] = i.charCodeAt(r); - return new Blob([l], { type: n }); -} -const { - SvelteComponent: Oo, - create_component: Mo, - destroy_component: Ro, - init: Do, - mount_component: Uo, - safe_not_equal: Go, - transition_in: Fo, - transition_out: xo -} = window.__gradio__svelte__internal, { createEventDispatcher: jo } = window.__gradio__svelte__internal; -function Vo(e) { - let t, n; - return t = new et({ - props: { - Icon: $l, - label: ( - /*i18n*/ - e[2]("common.share") - ), - pending: ( - /*pending*/ - e[3] - ) - } - }), t.$on( - "click", - /*click_handler*/ - e[5] - ), { - c() { - Mo(t.$$.fragment); - }, - m(i, r) { - Uo(t, i, r), n = !0; - }, - p(i, [r]) { - const l = {}; - r & /*i18n*/ - 4 && (l.label = /*i18n*/ - i[2]("common.share")), r & /*pending*/ - 8 && (l.pending = /*pending*/ - i[3]), t.$set(l); - }, - i(i) { - n || (Fo(t.$$.fragment, i), n = !0); - }, - o(i) { - xo(t.$$.fragment, i), n = !1; - }, - d(i) { - Ro(t, i); - } - }; -} -function zo(e, t, n) { - const i = jo(); - let { formatter: r } = t, { value: l } = t, { i18n: o } = t, a = !1; - const s = async () => { - try { - n(3, a = !0); - const u = await r(l); - i("share", { description: u }); - } catch (u) { - console.error(u); - let f = u instanceof mt ? u.message : "Share failed."; - i("error", f); - } finally { - n(3, a = !1); - } - }; - return e.$$set = (u) => { - "formatter" in u && n(0, r = u.formatter), "value" in u && n(1, l = u.value), "i18n" in u && n(2, o = u.i18n); - }, [r, l, o, a, i, s]; -} -class qo extends Oo { - constructor(t) { - super(), Do(this, t, zo, Vo, Go, { formatter: 0, value: 1, i18n: 2 }); - } -} -new Intl.Collator(0, { numeric: 1 }).compare; -function Gi(e, t, n) { - if (e == null) - return null; - if (Array.isArray(e)) { - const i = []; - for (const r of e) - r == null ? i.push(null) : i.push(Gi(r, t, n)); - return i; - } - return e.is_stream ? n == null ? new xt({ - ...e, - url: t + "/stream/" + e.path - }) : new xt({ - ...e, - url: "/proxy=" + n + "stream/" + e.path - }) : new xt({ - ...e, - url: Zo(e.path, t, n) - }); -} -function Xo(e) { - try { - const t = new URL(e); - return t.protocol === "http:" || t.protocol === "https:"; - } catch { - return !1; - } -} -function Zo(e, t, n) { - return e == null ? n ? 
`/proxy=${n}file=` : `${t}/file=` : Xo(e) ? e : n ? `/proxy=${n}file=${e}` : `${t}/file=${e}`; -} -class xt { - constructor({ - path: t, - url: n, - orig_name: i, - size: r, - blob: l, - is_stream: o, - mime_type: a, - alt_text: s - }) { - this.path = t, this.url = n, this.orig_name = i, this.size = r, this.blob = n ? void 0 : l, this.is_stream = o, this.mime_type = a, this.alt_text = s; - } -} -function He() { -} -function Wo(e) { - return e(); -} -function Qo(e) { - e.forEach(Wo); -} -function Jo(e) { - return typeof e == "function"; -} -function Yo(e, t) { - return e != e ? t == t : e !== t || e && typeof e == "object" || typeof e == "function"; -} -function Ko(e, ...t) { - if (e == null) { - for (const i of t) - i(void 0); - return He; - } - const n = e.subscribe(...t); - return n.unsubscribe ? () => n.unsubscribe() : n; -} -const Fi = typeof window < "u"; -let Nn = Fi ? () => window.performance.now() : () => Date.now(), xi = Fi ? (e) => requestAnimationFrame(e) : He; -const ke = /* @__PURE__ */ new Set(); -function ji(e) { - ke.forEach((t) => { - t.c(e) || (ke.delete(t), t.f()); - }), ke.size !== 0 && xi(ji); -} -function $o(e) { - let t; - return ke.size === 0 && xi(ji), { - promise: new Promise((n) => { - ke.add(t = { c: e, f: n }); - }), - abort() { - ke.delete(t); - } - }; -} -const Pe = []; -function es(e, t) { - return { - subscribe: tt(e, t).subscribe - }; -} -function tt(e, t = He) { - let n; - const i = /* @__PURE__ */ new Set(); - function r(a) { - if (Yo(e, a) && (e = a, n)) { - const s = !Pe.length; - for (const u of i) - u[1](), Pe.push(u, e); - if (s) { - for (let u = 0; u < Pe.length; u += 2) - Pe[u][0](Pe[u + 1]); - Pe.length = 0; - } - } - } - function l(a) { - r(a(e)); - } - function o(a, s = He) { - const u = [a, s]; - return i.add(u), i.size === 1 && (n = t(r, l) || He), a(e), () => { - i.delete(u), i.size === 0 && n && (n(), n = null); - }; - } - return { set: r, update: l, subscribe: o }; -} -function Ue(e, t, n) { - const i = !Array.isArray(e), r = i ? [e] : e; - if (!r.every(Boolean)) - throw new Error("derived() expects stores as input, got a falsy value"); - const l = t.length < 2; - return es(n, (o, a) => { - let s = !1; - const u = []; - let f = 0, c = He; - const h = () => { - if (f) - return; - c(); - const b = t(i ? u[0] : u, o, a); - l ? o(b) : c = Jo(b) ? b : He; - }, _ = r.map( - (b, T) => Ko( - b, - (y) => { - u[T] = y, f &= ~(1 << T), s && h(); - }, - () => { - f |= 1 << T; - } - ) - ); - return s = !0, h(), function() { - Qo(_), c(), s = !1; - }; - }); -} -function On(e) { - return Object.prototype.toString.call(e) === "[object Date]"; -} -function Yt(e, t, n, i) { - if (typeof n == "number" || On(n)) { - const r = i - n, l = (n - t) / (e.dt || 1 / 60), o = e.opts.stiffness * r, a = e.opts.damping * l, s = (o - a) * e.inv_mass, u = (l + s) * e.dt; - return Math.abs(u) < e.opts.precision && Math.abs(r) < e.opts.precision ? i : (e.settled = !1, On(n) ? new Date(n.getTime() + u) : n + u); - } else { - if (Array.isArray(n)) - return n.map( - (r, l) => Yt(e, t[l], n[l], i[l]) - ); - if (typeof n == "object") { - const r = {}; - for (const l in n) - r[l] = Yt(e, t[l], n[l], i[l]); - return r; - } else - throw new Error(`Cannot spring ${typeof n} values`); - } -} -function Mn(e, t = {}) { - const n = tt(e), { stiffness: i = 0.15, damping: r = 0.8, precision: l = 0.01 } = t; - let o, a, s, u = e, f = e, c = 1, h = 0, _ = !1; - function b(y, C = {}) { - f = y; - const E = s = {}; - return e == null || C.hard || T.stiffness >= 1 && T.damping >= 1 ? 
(_ = !0, o = Nn(), u = y, n.set(e = f), Promise.resolve()) : (C.soft && (h = 1 / ((C.soft === !0 ? 0.5 : +C.soft) * 60), c = 0), a || (o = Nn(), _ = !1, a = $o((m) => { - if (_) - return _ = !1, a = null, !1; - c = Math.min(c + h, 1); - const g = { - inv_mass: c, - opts: T, - settled: !0, - dt: (m - o) * 60 / 1e3 - }, p = Yt(g, u, e, f); - return o = m, u = e, n.set(e = p), g.settled && (a = null), !g.settled; - })), new Promise((m) => { - a.promise.then(() => { - E === s && m(); - }); - })); - } - const T = { - set: b, - update: (y, C) => b(y(f, e), C), - subscribe: n.subscribe, - stiffness: i, - damping: r, - precision: l - }; - return T; -} -function ts(e) { - return e && e.__esModule && Object.prototype.hasOwnProperty.call(e, "default") ? e.default : e; -} -var ns = function(t) { - return is(t) && !rs(t); -}; -function is(e) { - return !!e && typeof e == "object"; -} -function rs(e) { - var t = Object.prototype.toString.call(e); - return t === "[object RegExp]" || t === "[object Date]" || ss(e); -} -var ls = typeof Symbol == "function" && Symbol.for, os = ls ? Symbol.for("react.element") : 60103; -function ss(e) { - return e.$$typeof === os; -} -function as(e) { - return Array.isArray(e) ? [] : {}; -} -function Je(e, t) { - return t.clone !== !1 && t.isMergeableObject(e) ? Le(as(e), e, t) : e; -} -function us(e, t, n) { - return e.concat(t).map(function(i) { - return Je(i, n); - }); -} -function fs(e, t) { - if (!t.customMerge) - return Le; - var n = t.customMerge(e); - return typeof n == "function" ? n : Le; -} -function cs(e) { - return Object.getOwnPropertySymbols ? Object.getOwnPropertySymbols(e).filter(function(t) { - return Object.propertyIsEnumerable.call(e, t); - }) : []; -} -function Rn(e) { - return Object.keys(e).concat(cs(e)); -} -function Vi(e, t) { - try { - return t in e; - } catch { - return !1; - } -} -function hs(e, t) { - return Vi(e, t) && !(Object.hasOwnProperty.call(e, t) && Object.propertyIsEnumerable.call(e, t)); -} -function _s(e, t, n) { - var i = {}; - return n.isMergeableObject(e) && Rn(e).forEach(function(r) { - i[r] = Je(e[r], n); - }), Rn(t).forEach(function(r) { - hs(e, r) || (Vi(e, r) && n.isMergeableObject(t[r]) ? i[r] = fs(r, n)(e[r], t[r], n) : i[r] = Je(t[r], n)); - }), i; -} -function Le(e, t, n) { - n = n || {}, n.arrayMerge = n.arrayMerge || us, n.isMergeableObject = n.isMergeableObject || ns, n.cloneUnlessOtherwiseSpecified = Je; - var i = Array.isArray(t), r = Array.isArray(e), l = i === r; - return l ? i ? n.arrayMerge(e, t, n) : _s(e, t, n) : Je(t, n); -} -Le.all = function(t, n) { - if (!Array.isArray(t)) - throw new Error("first argument should be an array"); - return t.reduce(function(i, r) { - return Le(i, r, n); - }, {}); -}; -var ms = Le, ds = ms; -const bs = /* @__PURE__ */ ts(ds); -var Kt = function(e, t) { - return Kt = Object.setPrototypeOf || { __proto__: [] } instanceof Array && function(n, i) { - n.__proto__ = i; - } || function(n, i) { - for (var r in i) - Object.prototype.hasOwnProperty.call(i, r) && (n[r] = i[r]); - }, Kt(e, t); -}; -function yt(e, t) { - if (typeof t != "function" && t !== null) - throw new TypeError("Class extends value " + String(t) + " is not a constructor or null"); - Kt(e, t); - function n() { - this.constructor = e; - } - e.prototype = t === null ? 
Object.create(t) : (n.prototype = t.prototype, new n()); -} -var k = function() { - return k = Object.assign || function(t) { - for (var n, i = 1, r = arguments.length; i < r; i++) { - n = arguments[i]; - for (var l in n) - Object.prototype.hasOwnProperty.call(n, l) && (t[l] = n[l]); - } - return t; - }, k.apply(this, arguments); -}; -function jt(e, t, n) { - if (n || arguments.length === 2) - for (var i = 0, r = t.length, l; i < r; i++) - (l || !(i in t)) && (l || (l = Array.prototype.slice.call(t, 0, i)), l[i] = t[i]); - return e.concat(l || Array.prototype.slice.call(t)); -} -var B; -(function(e) { - e[e.EXPECT_ARGUMENT_CLOSING_BRACE = 1] = "EXPECT_ARGUMENT_CLOSING_BRACE", e[e.EMPTY_ARGUMENT = 2] = "EMPTY_ARGUMENT", e[e.MALFORMED_ARGUMENT = 3] = "MALFORMED_ARGUMENT", e[e.EXPECT_ARGUMENT_TYPE = 4] = "EXPECT_ARGUMENT_TYPE", e[e.INVALID_ARGUMENT_TYPE = 5] = "INVALID_ARGUMENT_TYPE", e[e.EXPECT_ARGUMENT_STYLE = 6] = "EXPECT_ARGUMENT_STYLE", e[e.INVALID_NUMBER_SKELETON = 7] = "INVALID_NUMBER_SKELETON", e[e.INVALID_DATE_TIME_SKELETON = 8] = "INVALID_DATE_TIME_SKELETON", e[e.EXPECT_NUMBER_SKELETON = 9] = "EXPECT_NUMBER_SKELETON", e[e.EXPECT_DATE_TIME_SKELETON = 10] = "EXPECT_DATE_TIME_SKELETON", e[e.UNCLOSED_QUOTE_IN_ARGUMENT_STYLE = 11] = "UNCLOSED_QUOTE_IN_ARGUMENT_STYLE", e[e.EXPECT_SELECT_ARGUMENT_OPTIONS = 12] = "EXPECT_SELECT_ARGUMENT_OPTIONS", e[e.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE = 13] = "EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE", e[e.INVALID_PLURAL_ARGUMENT_OFFSET_VALUE = 14] = "INVALID_PLURAL_ARGUMENT_OFFSET_VALUE", e[e.EXPECT_SELECT_ARGUMENT_SELECTOR = 15] = "EXPECT_SELECT_ARGUMENT_SELECTOR", e[e.EXPECT_PLURAL_ARGUMENT_SELECTOR = 16] = "EXPECT_PLURAL_ARGUMENT_SELECTOR", e[e.EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT = 17] = "EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT", e[e.EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT = 18] = "EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT", e[e.INVALID_PLURAL_ARGUMENT_SELECTOR = 19] = "INVALID_PLURAL_ARGUMENT_SELECTOR", e[e.DUPLICATE_PLURAL_ARGUMENT_SELECTOR = 20] = "DUPLICATE_PLURAL_ARGUMENT_SELECTOR", e[e.DUPLICATE_SELECT_ARGUMENT_SELECTOR = 21] = "DUPLICATE_SELECT_ARGUMENT_SELECTOR", e[e.MISSING_OTHER_CLAUSE = 22] = "MISSING_OTHER_CLAUSE", e[e.INVALID_TAG = 23] = "INVALID_TAG", e[e.INVALID_TAG_NAME = 25] = "INVALID_TAG_NAME", e[e.UNMATCHED_CLOSING_TAG = 26] = "UNMATCHED_CLOSING_TAG", e[e.UNCLOSED_TAG = 27] = "UNCLOSED_TAG"; -})(B || (B = {})); -var O; -(function(e) { - e[e.literal = 0] = "literal", e[e.argument = 1] = "argument", e[e.number = 2] = "number", e[e.date = 3] = "date", e[e.time = 4] = "time", e[e.select = 5] = "select", e[e.plural = 6] = "plural", e[e.pound = 7] = "pound", e[e.tag = 8] = "tag"; -})(O || (O = {})); -var Ne; -(function(e) { - e[e.number = 0] = "number", e[e.dateTime = 1] = "dateTime"; -})(Ne || (Ne = {})); -function Dn(e) { - return e.type === O.literal; -} -function gs(e) { - return e.type === O.argument; -} -function zi(e) { - return e.type === O.number; -} -function qi(e) { - return e.type === O.date; -} -function Xi(e) { - return e.type === O.time; -} -function Zi(e) { - return e.type === O.select; -} -function Wi(e) { - return e.type === O.plural; -} -function ps(e) { - return e.type === O.pound; -} -function Qi(e) { - return e.type === O.tag; -} -function Ji(e) { - return !!(e && typeof e == "object" && e.type === Ne.number); -} -function $t(e) { - return !!(e && typeof e == "object" && e.type === Ne.dateTime); -} -var Yi = /[ \xA0\u1680\u2000-\u200A\u202F\u205F\u3000]/, vs = 
/(?:[Eec]{1,6}|G{1,5}|[Qq]{1,5}|(?:[yYur]+|U{1,5})|[ML]{1,5}|d{1,2}|D{1,3}|F{1}|[abB]{1,5}|[hkHK]{1,2}|w{1,2}|W{1}|m{1,2}|s{1,2}|[zZOvVxX]{1,4})(?=([^']*'[^']*')*[^']*$)/g; -function ws(e) { - var t = {}; - return e.replace(vs, function(n) { - var i = n.length; - switch (n[0]) { - case "G": - t.era = i === 4 ? "long" : i === 5 ? "narrow" : "short"; - break; - case "y": - t.year = i === 2 ? "2-digit" : "numeric"; - break; - case "Y": - case "u": - case "U": - case "r": - throw new RangeError("`Y/u/U/r` (year) patterns are not supported, use `y` instead"); - case "q": - case "Q": - throw new RangeError("`q/Q` (quarter) patterns are not supported"); - case "M": - case "L": - t.month = ["numeric", "2-digit", "short", "long", "narrow"][i - 1]; - break; - case "w": - case "W": - throw new RangeError("`w/W` (week) patterns are not supported"); - case "d": - t.day = ["numeric", "2-digit"][i - 1]; - break; - case "D": - case "F": - case "g": - throw new RangeError("`D/F/g` (day) patterns are not supported, use `d` instead"); - case "E": - t.weekday = i === 4 ? "short" : i === 5 ? "narrow" : "short"; - break; - case "e": - if (i < 4) - throw new RangeError("`e..eee` (weekday) patterns are not supported"); - t.weekday = ["short", "long", "narrow", "short"][i - 4]; - break; - case "c": - if (i < 4) - throw new RangeError("`c..ccc` (weekday) patterns are not supported"); - t.weekday = ["short", "long", "narrow", "short"][i - 4]; - break; - case "a": - t.hour12 = !0; - break; - case "b": - case "B": - throw new RangeError("`b/B` (period) patterns are not supported, use `a` instead"); - case "h": - t.hourCycle = "h12", t.hour = ["numeric", "2-digit"][i - 1]; - break; - case "H": - t.hourCycle = "h23", t.hour = ["numeric", "2-digit"][i - 1]; - break; - case "K": - t.hourCycle = "h11", t.hour = ["numeric", "2-digit"][i - 1]; - break; - case "k": - t.hourCycle = "h24", t.hour = ["numeric", "2-digit"][i - 1]; - break; - case "j": - case "J": - case "C": - throw new RangeError("`j/J/C` (hour) patterns are not supported, use `h/H/K/k` instead"); - case "m": - t.minute = ["numeric", "2-digit"][i - 1]; - break; - case "s": - t.second = ["numeric", "2-digit"][i - 1]; - break; - case "S": - case "A": - throw new RangeError("`S/A` (second) patterns are not supported, use `s` instead"); - case "z": - t.timeZoneName = i < 4 ? "short" : "long"; - break; - case "Z": - case "O": - case "v": - case "V": - case "X": - case "x": - throw new RangeError("`Z/O/v/V/X/x` (timeZone) patterns are not supported, use `z` instead"); - } - return ""; - }), t; -} -var ys = /[\t-\r \x85\u200E\u200F\u2028\u2029]/i; -function Es(e) { - if (e.length === 0) - throw new Error("Number skeleton cannot be empty"); - for (var t = e.split(ys).filter(function(h) { - return h.length > 0; - }), n = [], i = 0, r = t; i < r.length; i++) { - var l = r[i], o = l.split("/"); - if (o.length === 0) - throw new Error("Invalid number skeleton"); - for (var a = o[0], s = o.slice(1), u = 0, f = s; u < f.length; u++) { - var c = f[u]; - if (c.length === 0) - throw new Error("Invalid number skeleton"); - } - n.push({ stem: a, options: s }); - } - return n; -} -function Ss(e) { - return e.replace(/^(.*?)-/, ""); -} -var Un = /^\.(?:(0+)(\*)?|(#+)|(0+)(#+))$/g, Ki = /^(@+)?(\+|#+)?[rs]?$/g, Ts = /(\*)(0+)|(#+)(0+)|(0+)/g, $i = /^(0+)$/; -function Gn(e) { - var t = {}; - return e[e.length - 1] === "r" ? 
t.roundingPriority = "morePrecision" : e[e.length - 1] === "s" && (t.roundingPriority = "lessPrecision"), e.replace(Ki, function(n, i, r) { - return typeof r != "string" ? (t.minimumSignificantDigits = i.length, t.maximumSignificantDigits = i.length) : r === "+" ? t.minimumSignificantDigits = i.length : i[0] === "#" ? t.maximumSignificantDigits = i.length : (t.minimumSignificantDigits = i.length, t.maximumSignificantDigits = i.length + (typeof r == "string" ? r.length : 0)), ""; - }), t; -} -function er(e) { - switch (e) { - case "sign-auto": - return { - signDisplay: "auto" - }; - case "sign-accounting": - case "()": - return { - currencySign: "accounting" - }; - case "sign-always": - case "+!": - return { - signDisplay: "always" - }; - case "sign-accounting-always": - case "()!": - return { - signDisplay: "always", - currencySign: "accounting" - }; - case "sign-except-zero": - case "+?": - return { - signDisplay: "exceptZero" - }; - case "sign-accounting-except-zero": - case "()?": - return { - signDisplay: "exceptZero", - currencySign: "accounting" - }; - case "sign-never": - case "+_": - return { - signDisplay: "never" - }; - } -} -function As(e) { - var t; - if (e[0] === "E" && e[1] === "E" ? (t = { - notation: "engineering" - }, e = e.slice(2)) : e[0] === "E" && (t = { - notation: "scientific" - }, e = e.slice(1)), t) { - var n = e.slice(0, 2); - if (n === "+!" ? (t.signDisplay = "always", e = e.slice(2)) : n === "+?" && (t.signDisplay = "exceptZero", e = e.slice(2)), !$i.test(e)) - throw new Error("Malformed concise eng/scientific notation"); - t.minimumIntegerDigits = e.length; - } - return t; -} -function Fn(e) { - var t = {}, n = er(e); - return n || t; -} -function Hs(e) { - for (var t = {}, n = 0, i = e; n < i.length; n++) { - var r = i[n]; - switch (r.stem) { - case "percent": - case "%": - t.style = "percent"; - continue; - case "%x100": - t.style = "percent", t.scale = 100; - continue; - case "currency": - t.style = "currency", t.currency = r.options[0]; - continue; - case "group-off": - case ",_": - t.useGrouping = !1; - continue; - case "precision-integer": - case ".": - t.maximumFractionDigits = 0; - continue; - case "measure-unit": - case "unit": - t.style = "unit", t.unit = Ss(r.options[0]); - continue; - case "compact-short": - case "K": - t.notation = "compact", t.compactDisplay = "short"; - continue; - case "compact-long": - case "KK": - t.notation = "compact", t.compactDisplay = "long"; - continue; - case "scientific": - t = k(k(k({}, t), { notation: "scientific" }), r.options.reduce(function(s, u) { - return k(k({}, s), Fn(u)); - }, {})); - continue; - case "engineering": - t = k(k(k({}, t), { notation: "engineering" }), r.options.reduce(function(s, u) { - return k(k({}, s), Fn(u)); - }, {})); - continue; - case "notation-simple": - t.notation = "standard"; - continue; - case "unit-width-narrow": - t.currencyDisplay = "narrowSymbol", t.unitDisplay = "narrow"; - continue; - case "unit-width-short": - t.currencyDisplay = "code", t.unitDisplay = "short"; - continue; - case "unit-width-full-name": - t.currencyDisplay = "name", t.unitDisplay = "long"; - continue; - case "unit-width-iso-code": - t.currencyDisplay = "symbol"; - continue; - case "scale": - t.scale = parseFloat(r.options[0]); - continue; - case "integer-width": - if (r.options.length > 1) - throw new RangeError("integer-width stems only accept a single optional option"); - r.options[0].replace(Ts, function(s, u, f, c, h, _) { - if (u) - t.minimumIntegerDigits = f.length; - else { - if (c && h) - throw new 
Error("We currently do not support maximum integer digits"); - if (_) - throw new Error("We currently do not support exact integer digits"); - } - return ""; - }); - continue; - } - if ($i.test(r.stem)) { - t.minimumIntegerDigits = r.stem.length; - continue; - } - if (Un.test(r.stem)) { - if (r.options.length > 1) - throw new RangeError("Fraction-precision stems only accept a single optional option"); - r.stem.replace(Un, function(s, u, f, c, h, _) { - return f === "*" ? t.minimumFractionDigits = u.length : c && c[0] === "#" ? t.maximumFractionDigits = c.length : h && _ ? (t.minimumFractionDigits = h.length, t.maximumFractionDigits = h.length + _.length) : (t.minimumFractionDigits = u.length, t.maximumFractionDigits = u.length), ""; - }); - var l = r.options[0]; - l === "w" ? t = k(k({}, t), { trailingZeroDisplay: "stripIfInteger" }) : l && (t = k(k({}, t), Gn(l))); - continue; - } - if (Ki.test(r.stem)) { - t = k(k({}, t), Gn(r.stem)); - continue; - } - var o = er(r.stem); - o && (t = k(k({}, t), o)); - var a = As(r.stem); - a && (t = k(k({}, t), a)); - } - return t; -} -var ct = { - AX: [ - "H" - ], - BQ: [ - "H" - ], - CP: [ - "H" - ], - CZ: [ - "H" - ], - DK: [ - "H" - ], - FI: [ - "H" - ], - ID: [ - "H" - ], - IS: [ - "H" - ], - ML: [ - "H" - ], - NE: [ - "H" - ], - RU: [ - "H" - ], - SE: [ - "H" - ], - SJ: [ - "H" - ], - SK: [ - "H" - ], - AS: [ - "h", - "H" - ], - BT: [ - "h", - "H" - ], - DJ: [ - "h", - "H" - ], - ER: [ - "h", - "H" - ], - GH: [ - "h", - "H" - ], - IN: [ - "h", - "H" - ], - LS: [ - "h", - "H" - ], - PG: [ - "h", - "H" - ], - PW: [ - "h", - "H" - ], - SO: [ - "h", - "H" - ], - TO: [ - "h", - "H" - ], - VU: [ - "h", - "H" - ], - WS: [ - "h", - "H" - ], - "001": [ - "H", - "h" - ], - AL: [ - "h", - "H", - "hB" - ], - TD: [ - "h", - "H", - "hB" - ], - "ca-ES": [ - "H", - "h", - "hB" - ], - CF: [ - "H", - "h", - "hB" - ], - CM: [ - "H", - "h", - "hB" - ], - "fr-CA": [ - "H", - "h", - "hB" - ], - "gl-ES": [ - "H", - "h", - "hB" - ], - "it-CH": [ - "H", - "h", - "hB" - ], - "it-IT": [ - "H", - "h", - "hB" - ], - LU: [ - "H", - "h", - "hB" - ], - NP: [ - "H", - "h", - "hB" - ], - PF: [ - "H", - "h", - "hB" - ], - SC: [ - "H", - "h", - "hB" - ], - SM: [ - "H", - "h", - "hB" - ], - SN: [ - "H", - "h", - "hB" - ], - TF: [ - "H", - "h", - "hB" - ], - VA: [ - "H", - "h", - "hB" - ], - CY: [ - "h", - "H", - "hb", - "hB" - ], - GR: [ - "h", - "H", - "hb", - "hB" - ], - CO: [ - "h", - "H", - "hB", - "hb" - ], - DO: [ - "h", - "H", - "hB", - "hb" - ], - KP: [ - "h", - "H", - "hB", - "hb" - ], - KR: [ - "h", - "H", - "hB", - "hb" - ], - NA: [ - "h", - "H", - "hB", - "hb" - ], - PA: [ - "h", - "H", - "hB", - "hb" - ], - PR: [ - "h", - "H", - "hB", - "hb" - ], - VE: [ - "h", - "H", - "hB", - "hb" - ], - AC: [ - "H", - "h", - "hb", - "hB" - ], - AI: [ - "H", - "h", - "hb", - "hB" - ], - BW: [ - "H", - "h", - "hb", - "hB" - ], - BZ: [ - "H", - "h", - "hb", - "hB" - ], - CC: [ - "H", - "h", - "hb", - "hB" - ], - CK: [ - "H", - "h", - "hb", - "hB" - ], - CX: [ - "H", - "h", - "hb", - "hB" - ], - DG: [ - "H", - "h", - "hb", - "hB" - ], - FK: [ - "H", - "h", - "hb", - "hB" - ], - GB: [ - "H", - "h", - "hb", - "hB" - ], - GG: [ - "H", - "h", - "hb", - "hB" - ], - GI: [ - "H", - "h", - "hb", - "hB" - ], - IE: [ - "H", - "h", - "hb", - "hB" - ], - IM: [ - "H", - "h", - "hb", - "hB" - ], - IO: [ - "H", - "h", - "hb", - "hB" - ], - JE: [ - "H", - "h", - "hb", - "hB" - ], - LT: [ - "H", - "h", - "hb", - "hB" - ], - MK: [ - "H", - "h", - "hb", - "hB" - ], - MN: [ - "H", - "h", - "hb", - "hB" - ], 
- MS: [ - "H", - "h", - "hb", - "hB" - ], - NF: [ - "H", - "h", - "hb", - "hB" - ], - NG: [ - "H", - "h", - "hb", - "hB" - ], - NR: [ - "H", - "h", - "hb", - "hB" - ], - NU: [ - "H", - "h", - "hb", - "hB" - ], - PN: [ - "H", - "h", - "hb", - "hB" - ], - SH: [ - "H", - "h", - "hb", - "hB" - ], - SX: [ - "H", - "h", - "hb", - "hB" - ], - TA: [ - "H", - "h", - "hb", - "hB" - ], - ZA: [ - "H", - "h", - "hb", - "hB" - ], - "af-ZA": [ - "H", - "h", - "hB", - "hb" - ], - AR: [ - "H", - "h", - "hB", - "hb" - ], - CL: [ - "H", - "h", - "hB", - "hb" - ], - CR: [ - "H", - "h", - "hB", - "hb" - ], - CU: [ - "H", - "h", - "hB", - "hb" - ], - EA: [ - "H", - "h", - "hB", - "hb" - ], - "es-BO": [ - "H", - "h", - "hB", - "hb" - ], - "es-BR": [ - "H", - "h", - "hB", - "hb" - ], - "es-EC": [ - "H", - "h", - "hB", - "hb" - ], - "es-ES": [ - "H", - "h", - "hB", - "hb" - ], - "es-GQ": [ - "H", - "h", - "hB", - "hb" - ], - "es-PE": [ - "H", - "h", - "hB", - "hb" - ], - GT: [ - "H", - "h", - "hB", - "hb" - ], - HN: [ - "H", - "h", - "hB", - "hb" - ], - IC: [ - "H", - "h", - "hB", - "hb" - ], - KG: [ - "H", - "h", - "hB", - "hb" - ], - KM: [ - "H", - "h", - "hB", - "hb" - ], - LK: [ - "H", - "h", - "hB", - "hb" - ], - MA: [ - "H", - "h", - "hB", - "hb" - ], - MX: [ - "H", - "h", - "hB", - "hb" - ], - NI: [ - "H", - "h", - "hB", - "hb" - ], - PY: [ - "H", - "h", - "hB", - "hb" - ], - SV: [ - "H", - "h", - "hB", - "hb" - ], - UY: [ - "H", - "h", - "hB", - "hb" - ], - JP: [ - "H", - "h", - "K" - ], - AD: [ - "H", - "hB" - ], - AM: [ - "H", - "hB" - ], - AO: [ - "H", - "hB" - ], - AT: [ - "H", - "hB" - ], - AW: [ - "H", - "hB" - ], - BE: [ - "H", - "hB" - ], - BF: [ - "H", - "hB" - ], - BJ: [ - "H", - "hB" - ], - BL: [ - "H", - "hB" - ], - BR: [ - "H", - "hB" - ], - CG: [ - "H", - "hB" - ], - CI: [ - "H", - "hB" - ], - CV: [ - "H", - "hB" - ], - DE: [ - "H", - "hB" - ], - EE: [ - "H", - "hB" - ], - FR: [ - "H", - "hB" - ], - GA: [ - "H", - "hB" - ], - GF: [ - "H", - "hB" - ], - GN: [ - "H", - "hB" - ], - GP: [ - "H", - "hB" - ], - GW: [ - "H", - "hB" - ], - HR: [ - "H", - "hB" - ], - IL: [ - "H", - "hB" - ], - IT: [ - "H", - "hB" - ], - KZ: [ - "H", - "hB" - ], - MC: [ - "H", - "hB" - ], - MD: [ - "H", - "hB" - ], - MF: [ - "H", - "hB" - ], - MQ: [ - "H", - "hB" - ], - MZ: [ - "H", - "hB" - ], - NC: [ - "H", - "hB" - ], - NL: [ - "H", - "hB" - ], - PM: [ - "H", - "hB" - ], - PT: [ - "H", - "hB" - ], - RE: [ - "H", - "hB" - ], - RO: [ - "H", - "hB" - ], - SI: [ - "H", - "hB" - ], - SR: [ - "H", - "hB" - ], - ST: [ - "H", - "hB" - ], - TG: [ - "H", - "hB" - ], - TR: [ - "H", - "hB" - ], - WF: [ - "H", - "hB" - ], - YT: [ - "H", - "hB" - ], - BD: [ - "h", - "hB", - "H" - ], - PK: [ - "h", - "hB", - "H" - ], - AZ: [ - "H", - "hB", - "h" - ], - BA: [ - "H", - "hB", - "h" - ], - BG: [ - "H", - "hB", - "h" - ], - CH: [ - "H", - "hB", - "h" - ], - GE: [ - "H", - "hB", - "h" - ], - LI: [ - "H", - "hB", - "h" - ], - ME: [ - "H", - "hB", - "h" - ], - RS: [ - "H", - "hB", - "h" - ], - UA: [ - "H", - "hB", - "h" - ], - UZ: [ - "H", - "hB", - "h" - ], - XK: [ - "H", - "hB", - "h" - ], - AG: [ - "h", - "hb", - "H", - "hB" - ], - AU: [ - "h", - "hb", - "H", - "hB" - ], - BB: [ - "h", - "hb", - "H", - "hB" - ], - BM: [ - "h", - "hb", - "H", - "hB" - ], - BS: [ - "h", - "hb", - "H", - "hB" - ], - CA: [ - "h", - "hb", - "H", - "hB" - ], - DM: [ - "h", - "hb", - "H", - "hB" - ], - "en-001": [ - "h", - "hb", - "H", - "hB" - ], - FJ: [ - "h", - "hb", - "H", - "hB" - ], - FM: [ - "h", - "hb", - "H", - "hB" - ], - GD: [ - "h", - "hb", - "H", 
- "hB" - ], - GM: [ - "h", - "hb", - "H", - "hB" - ], - GU: [ - "h", - "hb", - "H", - "hB" - ], - GY: [ - "h", - "hb", - "H", - "hB" - ], - JM: [ - "h", - "hb", - "H", - "hB" - ], - KI: [ - "h", - "hb", - "H", - "hB" - ], - KN: [ - "h", - "hb", - "H", - "hB" - ], - KY: [ - "h", - "hb", - "H", - "hB" - ], - LC: [ - "h", - "hb", - "H", - "hB" - ], - LR: [ - "h", - "hb", - "H", - "hB" - ], - MH: [ - "h", - "hb", - "H", - "hB" - ], - MP: [ - "h", - "hb", - "H", - "hB" - ], - MW: [ - "h", - "hb", - "H", - "hB" - ], - NZ: [ - "h", - "hb", - "H", - "hB" - ], - SB: [ - "h", - "hb", - "H", - "hB" - ], - SG: [ - "h", - "hb", - "H", - "hB" - ], - SL: [ - "h", - "hb", - "H", - "hB" - ], - SS: [ - "h", - "hb", - "H", - "hB" - ], - SZ: [ - "h", - "hb", - "H", - "hB" - ], - TC: [ - "h", - "hb", - "H", - "hB" - ], - TT: [ - "h", - "hb", - "H", - "hB" - ], - UM: [ - "h", - "hb", - "H", - "hB" - ], - US: [ - "h", - "hb", - "H", - "hB" - ], - VC: [ - "h", - "hb", - "H", - "hB" - ], - VG: [ - "h", - "hb", - "H", - "hB" - ], - VI: [ - "h", - "hb", - "H", - "hB" - ], - ZM: [ - "h", - "hb", - "H", - "hB" - ], - BO: [ - "H", - "hB", - "h", - "hb" - ], - EC: [ - "H", - "hB", - "h", - "hb" - ], - ES: [ - "H", - "hB", - "h", - "hb" - ], - GQ: [ - "H", - "hB", - "h", - "hb" - ], - PE: [ - "H", - "hB", - "h", - "hb" - ], - AE: [ - "h", - "hB", - "hb", - "H" - ], - "ar-001": [ - "h", - "hB", - "hb", - "H" - ], - BH: [ - "h", - "hB", - "hb", - "H" - ], - DZ: [ - "h", - "hB", - "hb", - "H" - ], - EG: [ - "h", - "hB", - "hb", - "H" - ], - EH: [ - "h", - "hB", - "hb", - "H" - ], - HK: [ - "h", - "hB", - "hb", - "H" - ], - IQ: [ - "h", - "hB", - "hb", - "H" - ], - JO: [ - "h", - "hB", - "hb", - "H" - ], - KW: [ - "h", - "hB", - "hb", - "H" - ], - LB: [ - "h", - "hB", - "hb", - "H" - ], - LY: [ - "h", - "hB", - "hb", - "H" - ], - MO: [ - "h", - "hB", - "hb", - "H" - ], - MR: [ - "h", - "hB", - "hb", - "H" - ], - OM: [ - "h", - "hB", - "hb", - "H" - ], - PH: [ - "h", - "hB", - "hb", - "H" - ], - PS: [ - "h", - "hB", - "hb", - "H" - ], - QA: [ - "h", - "hB", - "hb", - "H" - ], - SA: [ - "h", - "hB", - "hb", - "H" - ], - SD: [ - "h", - "hB", - "hb", - "H" - ], - SY: [ - "h", - "hB", - "hb", - "H" - ], - TN: [ - "h", - "hB", - "hb", - "H" - ], - YE: [ - "h", - "hB", - "hb", - "H" - ], - AF: [ - "H", - "hb", - "hB", - "h" - ], - LA: [ - "H", - "hb", - "hB", - "h" - ], - CN: [ - "H", - "hB", - "hb", - "h" - ], - LV: [ - "H", - "hB", - "hb", - "h" - ], - TL: [ - "H", - "hB", - "hb", - "h" - ], - "zu-ZA": [ - "H", - "hB", - "hb", - "h" - ], - CD: [ - "hB", - "H" - ], - IR: [ - "hB", - "H" - ], - "hi-IN": [ - "hB", - "h", - "H" - ], - "kn-IN": [ - "hB", - "h", - "H" - ], - "ml-IN": [ - "hB", - "h", - "H" - ], - "te-IN": [ - "hB", - "h", - "H" - ], - KH: [ - "hB", - "h", - "H", - "hb" - ], - "ta-IN": [ - "hB", - "h", - "hb", - "H" - ], - BN: [ - "hb", - "hB", - "h", - "H" - ], - MY: [ - "hb", - "hB", - "h", - "H" - ], - ET: [ - "hB", - "hb", - "h", - "H" - ], - "gu-IN": [ - "hB", - "hb", - "h", - "H" - ], - "mr-IN": [ - "hB", - "hb", - "h", - "H" - ], - "pa-IN": [ - "hB", - "hb", - "h", - "H" - ], - TW: [ - "hB", - "hb", - "h", - "H" - ], - KE: [ - "hB", - "hb", - "H", - "h" - ], - MM: [ - "hB", - "hb", - "H", - "h" - ], - TZ: [ - "hB", - "hb", - "H", - "h" - ], - UG: [ - "hB", - "hb", - "H", - "h" - ] -}; -function Bs(e, t) { - for (var n = "", i = 0; i < e.length; i++) { - var r = e.charAt(i); - if (r === "j") { - for (var l = 0; i + 1 < e.length && e.charAt(i + 1) === r; ) - l++, i++; - var o = 1 + (l & 1), a = l < 2 ? 
1 : 3 + (l >> 1), s = "a", u = Cs(t); - for ((u == "H" || u == "k") && (a = 0); a-- > 0; ) - n += s; - for (; o-- > 0; ) - n = u + n; - } else - r === "J" ? n += "H" : n += r; - } - return n; -} -function Cs(e) { - var t = e.hourCycle; - if (t === void 0 && // @ts-ignore hourCycle(s) is not identified yet - e.hourCycles && // @ts-ignore - e.hourCycles.length && (t = e.hourCycles[0]), t) - switch (t) { - case "h24": - return "k"; - case "h23": - return "H"; - case "h12": - return "h"; - case "h11": - return "K"; - default: - throw new Error("Invalid hourCycle"); - } - var n = e.language, i; - n !== "root" && (i = e.maximize().region); - var r = ct[i || ""] || ct[n || ""] || ct["".concat(n, "-001")] || ct["001"]; - return r[0]; -} -var Vt, Ps = new RegExp("^".concat(Yi.source, "*")), Is = new RegExp("".concat(Yi.source, "*$")); -function P(e, t) { - return { start: e, end: t }; -} -var ks = !!String.prototype.startsWith, Ls = !!String.fromCodePoint, Ns = !!Object.fromEntries, Os = !!String.prototype.codePointAt, Ms = !!String.prototype.trimStart, Rs = !!String.prototype.trimEnd, Ds = !!Number.isSafeInteger, Us = Ds ? Number.isSafeInteger : function(e) { - return typeof e == "number" && isFinite(e) && Math.floor(e) === e && Math.abs(e) <= 9007199254740991; -}, en = !0; -try { - var Gs = nr("([^\\p{White_Space}\\p{Pattern_Syntax}]*)", "yu"); - en = ((Vt = Gs.exec("a")) === null || Vt === void 0 ? void 0 : Vt[0]) === "a"; -} catch { - en = !1; -} -var xn = ks ? ( - // Native - function(t, n, i) { - return t.startsWith(n, i); - } -) : ( - // For IE11 - function(t, n, i) { - return t.slice(i, i + n.length) === n; - } -), tn = Ls ? String.fromCodePoint : ( - // IE11 - function() { - for (var t = [], n = 0; n < arguments.length; n++) - t[n] = arguments[n]; - for (var i = "", r = t.length, l = 0, o; r > l; ) { - if (o = t[l++], o > 1114111) - throw RangeError(o + " is not a valid code point"); - i += o < 65536 ? String.fromCharCode(o) : String.fromCharCode(((o -= 65536) >> 10) + 55296, o % 1024 + 56320); - } - return i; - } -), jn = ( - // native - Ns ? Object.fromEntries : ( - // Ponyfill - function(t) { - for (var n = {}, i = 0, r = t; i < r.length; i++) { - var l = r[i], o = l[0], a = l[1]; - n[o] = a; - } - return n; - } - ) -), tr = Os ? ( - // Native - function(t, n) { - return t.codePointAt(n); - } -) : ( - // IE 11 - function(t, n) { - var i = t.length; - if (!(n < 0 || n >= i)) { - var r = t.charCodeAt(n), l; - return r < 55296 || r > 56319 || n + 1 === i || (l = t.charCodeAt(n + 1)) < 56320 || l > 57343 ? r : (r - 55296 << 10) + (l - 56320) + 65536; - } - } -), Fs = Ms ? ( - // Native - function(t) { - return t.trimStart(); - } -) : ( - // Ponyfill - function(t) { - return t.replace(Ps, ""); - } -), xs = Rs ? ( - // Native - function(t) { - return t.trimEnd(); - } -) : ( - // Ponyfill - function(t) { - return t.replace(Is, ""); - } -); -function nr(e, t) { - return new RegExp(e, t); -} -var nn; -if (en) { - var Vn = nr("([^\\p{White_Space}\\p{Pattern_Syntax}]*)", "yu"); - nn = function(t, n) { - var i; - Vn.lastIndex = n; - var r = Vn.exec(t); - return (i = r[1]) !== null && i !== void 0 ? i : ""; - }; -} else - nn = function(t, n) { - for (var i = []; ; ) { - var r = tr(t, n); - if (r === void 0 || ir(r) || qs(r)) - break; - i.push(r), n += r >= 65536 ? 
2 : 1; - } - return tn.apply(void 0, i); - }; -var js = ( - /** @class */ - function() { - function e(t, n) { - n === void 0 && (n = {}), this.message = t, this.position = { offset: 0, line: 1, column: 1 }, this.ignoreTag = !!n.ignoreTag, this.locale = n.locale, this.requiresOtherClause = !!n.requiresOtherClause, this.shouldParseSkeletons = !!n.shouldParseSkeletons; - } - return e.prototype.parse = function() { - if (this.offset() !== 0) - throw Error("parser can only be used once"); - return this.parseMessage(0, "", !1); - }, e.prototype.parseMessage = function(t, n, i) { - for (var r = []; !this.isEOF(); ) { - var l = this.char(); - if (l === 123) { - var o = this.parseArgument(t, i); - if (o.err) - return o; - r.push(o.val); - } else { - if (l === 125 && t > 0) - break; - if (l === 35 && (n === "plural" || n === "selectordinal")) { - var a = this.clonePosition(); - this.bump(), r.push({ - type: O.pound, - location: P(a, this.clonePosition()) - }); - } else if (l === 60 && !this.ignoreTag && this.peek() === 47) { - if (i) - break; - return this.error(B.UNMATCHED_CLOSING_TAG, P(this.clonePosition(), this.clonePosition())); - } else if (l === 60 && !this.ignoreTag && rn(this.peek() || 0)) { - var o = this.parseTag(t, n); - if (o.err) - return o; - r.push(o.val); - } else { - var o = this.parseLiteral(t, n); - if (o.err) - return o; - r.push(o.val); - } - } - } - return { val: r, err: null }; - }, e.prototype.parseTag = function(t, n) { - var i = this.clonePosition(); - this.bump(); - var r = this.parseTagName(); - if (this.bumpSpace(), this.bumpIf("/>")) - return { - val: { - type: O.literal, - value: "<".concat(r, "/>"), - location: P(i, this.clonePosition()) - }, - err: null - }; - if (this.bumpIf(">")) { - var l = this.parseMessage(t + 1, n, !0); - if (l.err) - return l; - var o = l.val, a = this.clonePosition(); - if (this.bumpIf("</")) { - if (this.isEOF() || !rn(this.char())) - return this.error(B.INVALID_TAG, P(a, this.clonePosition())); - var s = this.clonePosition(), u = this.parseTagName(); - return r !== u ? this.error(B.UNMATCHED_CLOSING_TAG, P(s, this.clonePosition())) : (this.bumpSpace(), this.bumpIf(">") ? { - val: { - type: O.tag, - value: r, - children: o, - location: P(i, this.clonePosition()) - }, - err: null - } : this.error(B.INVALID_TAG, P(a, this.clonePosition()))); - } else - return this.error(B.UNCLOSED_TAG, P(i, this.clonePosition())); - } else - return this.error(B.INVALID_TAG, P(i, this.clonePosition())); - }, e.prototype.parseTagName = function() { - var t = this.offset(); - for (this.bump(); !this.isEOF() && zs(this.char()); ) - this.bump(); - return this.message.slice(t, this.offset()); - }, e.prototype.parseLiteral = function(t, n) { - for (var i = this.clonePosition(), r = ""; ; ) { - var l = this.tryParseQuote(n); - if (l) { - r += l; - continue; - } - var o = this.tryParseUnquoted(t, n); - if (o) { - r += o; - continue; - } - var a = this.tryParseLeftAngleBracket(); - if (a) { - r += a; - continue; - } - break; - } - var s = P(i, this.clonePosition()); - return { - val: { type: O.literal, value: r, location: s }, - err: null - }; - }, e.prototype.tryParseLeftAngleBracket = function() { - return !this.isEOF() && this.char() === 60 && (this.ignoreTag || // If at the opening tag or closing tag position, bail. - !Vs(this.peek() || 0)) ?
(this.bump(), "<") : null; - }, e.prototype.tryParseQuote = function(t) { - if (this.isEOF() || this.char() !== 39) - return null; - switch (this.peek()) { - case 39: - return this.bump(), this.bump(), "'"; - case 123: - case 60: - case 62: - case 125: - break; - case 35: - if (t === "plural" || t === "selectordinal") - break; - return null; - default: - return null; - } - this.bump(); - var n = [this.char()]; - for (this.bump(); !this.isEOF(); ) { - var i = this.char(); - if (i === 39) - if (this.peek() === 39) - n.push(39), this.bump(); - else { - this.bump(); - break; - } - else - n.push(i); - this.bump(); - } - return tn.apply(void 0, n); - }, e.prototype.tryParseUnquoted = function(t, n) { - if (this.isEOF()) - return null; - var i = this.char(); - return i === 60 || i === 123 || i === 35 && (n === "plural" || n === "selectordinal") || i === 125 && t > 0 ? null : (this.bump(), tn(i)); - }, e.prototype.parseArgument = function(t, n) { - var i = this.clonePosition(); - if (this.bump(), this.bumpSpace(), this.isEOF()) - return this.error(B.EXPECT_ARGUMENT_CLOSING_BRACE, P(i, this.clonePosition())); - if (this.char() === 125) - return this.bump(), this.error(B.EMPTY_ARGUMENT, P(i, this.clonePosition())); - var r = this.parseIdentifierIfPossible().value; - if (!r) - return this.error(B.MALFORMED_ARGUMENT, P(i, this.clonePosition())); - if (this.bumpSpace(), this.isEOF()) - return this.error(B.EXPECT_ARGUMENT_CLOSING_BRACE, P(i, this.clonePosition())); - switch (this.char()) { - case 125: - return this.bump(), { - val: { - type: O.argument, - // value does not include the opening and closing braces. - value: r, - location: P(i, this.clonePosition()) - }, - err: null - }; - case 44: - return this.bump(), this.bumpSpace(), this.isEOF() ? this.error(B.EXPECT_ARGUMENT_CLOSING_BRACE, P(i, this.clonePosition())) : this.parseArgumentOptions(t, n, r, i); - default: - return this.error(B.MALFORMED_ARGUMENT, P(i, this.clonePosition())); - } - }, e.prototype.parseIdentifierIfPossible = function() { - var t = this.clonePosition(), n = this.offset(), i = nn(this.message, n), r = n + i.length; - this.bumpTo(r); - var l = this.clonePosition(), o = P(t, l); - return { value: i, location: o }; - }, e.prototype.parseArgumentOptions = function(t, n, i, r) { - var l, o = this.clonePosition(), a = this.parseIdentifierIfPossible().value, s = this.clonePosition(); - switch (a) { - case "": - return this.error(B.EXPECT_ARGUMENT_TYPE, P(o, s)); - case "number": - case "date": - case "time": { - this.bumpSpace(); - var u = null; - if (this.bumpIf(",")) { - this.bumpSpace(); - var f = this.clonePosition(), c = this.parseSimpleArgStyleIfPossible(); - if (c.err) - return c; - var h = xs(c.val); - if (h.length === 0) - return this.error(B.EXPECT_ARGUMENT_STYLE, P(this.clonePosition(), this.clonePosition())); - var _ = P(f, this.clonePosition()); - u = { style: h, styleLocation: _ }; - } - var b = this.tryParseArgumentClose(r); - if (b.err) - return b; - var T = P(r, this.clonePosition()); - if (u && xn(u == null ? void 0 : u.style, "::", 0)) { - var y = Fs(u.style.slice(2)); - if (a === "number") { - var c = this.parseNumberSkeletonFromString(y, u.styleLocation); - return c.err ? 
c : { - val: { type: O.number, value: i, location: T, style: c.val }, - err: null - }; - } else { - if (y.length === 0) - return this.error(B.EXPECT_DATE_TIME_SKELETON, T); - var C = y; - this.locale && (C = Bs(y, this.locale)); - var h = { - type: Ne.dateTime, - pattern: C, - location: u.styleLocation, - parsedOptions: this.shouldParseSkeletons ? ws(C) : {} - }, E = a === "date" ? O.date : O.time; - return { - val: { type: E, value: i, location: T, style: h }, - err: null - }; - } - } - return { - val: { - type: a === "number" ? O.number : a === "date" ? O.date : O.time, - value: i, - location: T, - style: (l = u == null ? void 0 : u.style) !== null && l !== void 0 ? l : null - }, - err: null - }; - } - case "plural": - case "selectordinal": - case "select": { - var m = this.clonePosition(); - if (this.bumpSpace(), !this.bumpIf(",")) - return this.error(B.EXPECT_SELECT_ARGUMENT_OPTIONS, P(m, k({}, m))); - this.bumpSpace(); - var g = this.parseIdentifierIfPossible(), p = 0; - if (a !== "select" && g.value === "offset") { - if (!this.bumpIf(":")) - return this.error(B.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE, P(this.clonePosition(), this.clonePosition())); - this.bumpSpace(); - var c = this.tryParseDecimalInteger(B.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE, B.INVALID_PLURAL_ARGUMENT_OFFSET_VALUE); - if (c.err) - return c; - this.bumpSpace(), g = this.parseIdentifierIfPossible(), p = c.val; - } - var N = this.tryParsePluralOrSelectOptions(t, a, n, g); - if (N.err) - return N; - var b = this.tryParseArgumentClose(r); - if (b.err) - return b; - var G = P(r, this.clonePosition()); - return a === "select" ? { - val: { - type: O.select, - value: i, - options: jn(N.val), - location: G - }, - err: null - } : { - val: { - type: O.plural, - value: i, - options: jn(N.val), - offset: p, - pluralType: a === "plural" ? "cardinal" : "ordinal", - location: G - }, - err: null - }; - } - default: - return this.error(B.INVALID_ARGUMENT_TYPE, P(o, s)); - } - }, e.prototype.tryParseArgumentClose = function(t) { - return this.isEOF() || this.char() !== 125 ? this.error(B.EXPECT_ARGUMENT_CLOSING_BRACE, P(t, this.clonePosition())) : (this.bump(), { val: !0, err: null }); - }, e.prototype.parseSimpleArgStyleIfPossible = function() { - for (var t = 0, n = this.clonePosition(); !this.isEOF(); ) { - var i = this.char(); - switch (i) { - case 39: { - this.bump(); - var r = this.clonePosition(); - if (!this.bumpUntil("'")) - return this.error(B.UNCLOSED_QUOTE_IN_ARGUMENT_STYLE, P(r, this.clonePosition())); - this.bump(); - break; - } - case 123: { - t += 1, this.bump(); - break; - } - case 125: { - if (t > 0) - t -= 1; - else - return { - val: this.message.slice(n.offset, this.offset()), - err: null - }; - break; - } - default: - this.bump(); - break; - } - } - return { - val: this.message.slice(n.offset, this.offset()), - err: null - }; - }, e.prototype.parseNumberSkeletonFromString = function(t, n) { - var i = []; - try { - i = Es(t); - } catch { - return this.error(B.INVALID_NUMBER_SKELETON, n); - } - return { - val: { - type: Ne.number, - tokens: i, - location: n, - parsedOptions: this.shouldParseSkeletons ? 
Hs(i) : {} - }, - err: null - }; - }, e.prototype.tryParsePluralOrSelectOptions = function(t, n, i, r) { - for (var l, o = !1, a = [], s = /* @__PURE__ */ new Set(), u = r.value, f = r.location; ; ) { - if (u.length === 0) { - var c = this.clonePosition(); - if (n !== "select" && this.bumpIf("=")) { - var h = this.tryParseDecimalInteger(B.EXPECT_PLURAL_ARGUMENT_SELECTOR, B.INVALID_PLURAL_ARGUMENT_SELECTOR); - if (h.err) - return h; - f = P(c, this.clonePosition()), u = this.message.slice(c.offset, this.offset()); - } else - break; - } - if (s.has(u)) - return this.error(n === "select" ? B.DUPLICATE_SELECT_ARGUMENT_SELECTOR : B.DUPLICATE_PLURAL_ARGUMENT_SELECTOR, f); - u === "other" && (o = !0), this.bumpSpace(); - var _ = this.clonePosition(); - if (!this.bumpIf("{")) - return this.error(n === "select" ? B.EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT : B.EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT, P(this.clonePosition(), this.clonePosition())); - var b = this.parseMessage(t + 1, n, i); - if (b.err) - return b; - var T = this.tryParseArgumentClose(_); - if (T.err) - return T; - a.push([ - u, - { - value: b.val, - location: P(_, this.clonePosition()) - } - ]), s.add(u), this.bumpSpace(), l = this.parseIdentifierIfPossible(), u = l.value, f = l.location; - } - return a.length === 0 ? this.error(n === "select" ? B.EXPECT_SELECT_ARGUMENT_SELECTOR : B.EXPECT_PLURAL_ARGUMENT_SELECTOR, P(this.clonePosition(), this.clonePosition())) : this.requiresOtherClause && !o ? this.error(B.MISSING_OTHER_CLAUSE, P(this.clonePosition(), this.clonePosition())) : { val: a, err: null }; - }, e.prototype.tryParseDecimalInteger = function(t, n) { - var i = 1, r = this.clonePosition(); - this.bumpIf("+") || this.bumpIf("-") && (i = -1); - for (var l = !1, o = 0; !this.isEOF(); ) { - var a = this.char(); - if (a >= 48 && a <= 57) - l = !0, o = o * 10 + (a - 48), this.bump(); - else - break; - } - var s = P(r, this.clonePosition()); - return l ? (o *= i, Us(o) ? { val: o, err: null } : this.error(n, s)) : this.error(t, s); - }, e.prototype.offset = function() { - return this.position.offset; - }, e.prototype.isEOF = function() { - return this.offset() === this.message.length; - }, e.prototype.clonePosition = function() { - return { - offset: this.position.offset, - line: this.position.line, - column: this.position.column - }; - }, e.prototype.char = function() { - var t = this.position.offset; - if (t >= this.message.length) - throw Error("out of bound"); - var n = tr(this.message, t); - if (n === void 0) - throw Error("Offset ".concat(t, " is at invalid UTF-16 code unit boundary")); - return n; - }, e.prototype.error = function(t, n) { - return { - val: null, - err: { - kind: t, - message: this.message, - location: n - } - }; - }, e.prototype.bump = function() { - if (!this.isEOF()) { - var t = this.char(); - t === 10 ? (this.position.line += 1, this.position.column = 1, this.position.offset += 1) : (this.position.column += 1, this.position.offset += t < 65536 ? 1 : 2); - } - }, e.prototype.bumpIf = function(t) { - if (xn(this.message, t, this.offset())) { - for (var n = 0; n < t.length; n++) - this.bump(); - return !0; - } - return !1; - }, e.prototype.bumpUntil = function(t) { - var n = this.offset(), i = this.message.indexOf(t, n); - return i >= 0 ? 
(this.bumpTo(i), !0) : (this.bumpTo(this.message.length), !1); - }, e.prototype.bumpTo = function(t) { - if (this.offset() > t) - throw Error("targetOffset ".concat(t, " must be greater than or equal to the current offset ").concat(this.offset())); - for (t = Math.min(t, this.message.length); ; ) { - var n = this.offset(); - if (n === t) - break; - if (n > t) - throw Error("targetOffset ".concat(t, " is at invalid UTF-16 code unit boundary")); - if (this.bump(), this.isEOF()) - break; - } - }, e.prototype.bumpSpace = function() { - for (; !this.isEOF() && ir(this.char()); ) - this.bump(); - }, e.prototype.peek = function() { - if (this.isEOF()) - return null; - var t = this.char(), n = this.offset(), i = this.message.charCodeAt(n + (t >= 65536 ? 2 : 1)); - return i ?? null; - }, e; - }() -); -function rn(e) { - return e >= 97 && e <= 122 || e >= 65 && e <= 90; -} -function Vs(e) { - return rn(e) || e === 47; -} -function zs(e) { - return e === 45 || e === 46 || e >= 48 && e <= 57 || e === 95 || e >= 97 && e <= 122 || e >= 65 && e <= 90 || e == 183 || e >= 192 && e <= 214 || e >= 216 && e <= 246 || e >= 248 && e <= 893 || e >= 895 && e <= 8191 || e >= 8204 && e <= 8205 || e >= 8255 && e <= 8256 || e >= 8304 && e <= 8591 || e >= 11264 && e <= 12271 || e >= 12289 && e <= 55295 || e >= 63744 && e <= 64975 || e >= 65008 && e <= 65533 || e >= 65536 && e <= 983039; -} -function ir(e) { - return e >= 9 && e <= 13 || e === 32 || e === 133 || e >= 8206 && e <= 8207 || e === 8232 || e === 8233; -} -function qs(e) { - return e >= 33 && e <= 35 || e === 36 || e >= 37 && e <= 39 || e === 40 || e === 41 || e === 42 || e === 43 || e === 44 || e === 45 || e >= 46 && e <= 47 || e >= 58 && e <= 59 || e >= 60 && e <= 62 || e >= 63 && e <= 64 || e === 91 || e === 92 || e === 93 || e === 94 || e === 96 || e === 123 || e === 124 || e === 125 || e === 126 || e === 161 || e >= 162 && e <= 165 || e === 166 || e === 167 || e === 169 || e === 171 || e === 172 || e === 174 || e === 176 || e === 177 || e === 182 || e === 187 || e === 191 || e === 215 || e === 247 || e >= 8208 && e <= 8213 || e >= 8214 && e <= 8215 || e === 8216 || e === 8217 || e === 8218 || e >= 8219 && e <= 8220 || e === 8221 || e === 8222 || e === 8223 || e >= 8224 && e <= 8231 || e >= 8240 && e <= 8248 || e === 8249 || e === 8250 || e >= 8251 && e <= 8254 || e >= 8257 && e <= 8259 || e === 8260 || e === 8261 || e === 8262 || e >= 8263 && e <= 8273 || e === 8274 || e === 8275 || e >= 8277 && e <= 8286 || e >= 8592 && e <= 8596 || e >= 8597 && e <= 8601 || e >= 8602 && e <= 8603 || e >= 8604 && e <= 8607 || e === 8608 || e >= 8609 && e <= 8610 || e === 8611 || e >= 8612 && e <= 8613 || e === 8614 || e >= 8615 && e <= 8621 || e === 8622 || e >= 8623 && e <= 8653 || e >= 8654 && e <= 8655 || e >= 8656 && e <= 8657 || e === 8658 || e === 8659 || e === 8660 || e >= 8661 && e <= 8691 || e >= 8692 && e <= 8959 || e >= 8960 && e <= 8967 || e === 8968 || e === 8969 || e === 8970 || e === 8971 || e >= 8972 && e <= 8991 || e >= 8992 && e <= 8993 || e >= 8994 && e <= 9e3 || e === 9001 || e === 9002 || e >= 9003 && e <= 9083 || e === 9084 || e >= 9085 && e <= 9114 || e >= 9115 && e <= 9139 || e >= 9140 && e <= 9179 || e >= 9180 && e <= 9185 || e >= 9186 && e <= 9254 || e >= 9255 && e <= 9279 || e >= 9280 && e <= 9290 || e >= 9291 && e <= 9311 || e >= 9472 && e <= 9654 || e === 9655 || e >= 9656 && e <= 9664 || e === 9665 || e >= 9666 && e <= 9719 || e >= 9720 && e <= 9727 || e >= 9728 && e <= 9838 || e === 9839 || e >= 9840 && e <= 10087 || e === 10088 || e === 
10089 || e === 10090 || e === 10091 || e === 10092 || e === 10093 || e === 10094 || e === 10095 || e === 10096 || e === 10097 || e === 10098 || e === 10099 || e === 10100 || e === 10101 || e >= 10132 && e <= 10175 || e >= 10176 && e <= 10180 || e === 10181 || e === 10182 || e >= 10183 && e <= 10213 || e === 10214 || e === 10215 || e === 10216 || e === 10217 || e === 10218 || e === 10219 || e === 10220 || e === 10221 || e === 10222 || e === 10223 || e >= 10224 && e <= 10239 || e >= 10240 && e <= 10495 || e >= 10496 && e <= 10626 || e === 10627 || e === 10628 || e === 10629 || e === 10630 || e === 10631 || e === 10632 || e === 10633 || e === 10634 || e === 10635 || e === 10636 || e === 10637 || e === 10638 || e === 10639 || e === 10640 || e === 10641 || e === 10642 || e === 10643 || e === 10644 || e === 10645 || e === 10646 || e === 10647 || e === 10648 || e >= 10649 && e <= 10711 || e === 10712 || e === 10713 || e === 10714 || e === 10715 || e >= 10716 && e <= 10747 || e === 10748 || e === 10749 || e >= 10750 && e <= 11007 || e >= 11008 && e <= 11055 || e >= 11056 && e <= 11076 || e >= 11077 && e <= 11078 || e >= 11079 && e <= 11084 || e >= 11085 && e <= 11123 || e >= 11124 && e <= 11125 || e >= 11126 && e <= 11157 || e === 11158 || e >= 11159 && e <= 11263 || e >= 11776 && e <= 11777 || e === 11778 || e === 11779 || e === 11780 || e === 11781 || e >= 11782 && e <= 11784 || e === 11785 || e === 11786 || e === 11787 || e === 11788 || e === 11789 || e >= 11790 && e <= 11798 || e === 11799 || e >= 11800 && e <= 11801 || e === 11802 || e === 11803 || e === 11804 || e === 11805 || e >= 11806 && e <= 11807 || e === 11808 || e === 11809 || e === 11810 || e === 11811 || e === 11812 || e === 11813 || e === 11814 || e === 11815 || e === 11816 || e === 11817 || e >= 11818 && e <= 11822 || e === 11823 || e >= 11824 && e <= 11833 || e >= 11834 && e <= 11835 || e >= 11836 && e <= 11839 || e === 11840 || e === 11841 || e === 11842 || e >= 11843 && e <= 11855 || e >= 11856 && e <= 11857 || e === 11858 || e >= 11859 && e <= 11903 || e >= 12289 && e <= 12291 || e === 12296 || e === 12297 || e === 12298 || e === 12299 || e === 12300 || e === 12301 || e === 12302 || e === 12303 || e === 12304 || e === 12305 || e >= 12306 && e <= 12307 || e === 12308 || e === 12309 || e === 12310 || e === 12311 || e === 12312 || e === 12313 || e === 12314 || e === 12315 || e === 12316 || e === 12317 || e >= 12318 && e <= 12319 || e === 12320 || e === 12336 || e === 64830 || e === 64831 || e >= 65093 && e <= 65094; -} -function ln(e) { - e.forEach(function(t) { - if (delete t.location, Zi(t) || Wi(t)) - for (var n in t.options) - delete t.options[n].location, ln(t.options[n].value); - else - zi(t) && Ji(t.style) || (qi(t) || Xi(t)) && $t(t.style) ? delete t.style.location : Qi(t) && ln(t.children); - }); -} -function Xs(e, t) { - t === void 0 && (t = {}), t = k({ shouldParseSkeletons: !0, requiresOtherClause: !0 }, t); - var n = new js(e, t).parse(); - if (n.err) { - var i = SyntaxError(B[n.err.kind]); - throw i.location = n.err.location, i.originalMessage = n.err.message, i; - } - return t != null && t.captureLocation || ln(n.val), n.val; -} -function zt(e, t) { - var n = t && t.cache ? t.cache : Ks, i = t && t.serializer ? t.serializer : Ys, r = t && t.strategy ? t.strategy : Ws; - return r(e, { - cache: n, - serializer: i - }); -} -function Zs(e) { - return e == null || typeof e == "number" || typeof e == "boolean"; -} -function rr(e, t, n, i) { - var r = Zs(i) ? 
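- // (editor's note, annotation not in the original bundle) zt and the helpers around it reproduce a fast-memoize-style cache: rr handles single-argument ("monadic") calls, lr serializes whole argument lists ("variadic"), with pluggable cache and serializer strategies.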
i : n(i), l = t.get(r); - return typeof l > "u" && (l = e.call(this, i), t.set(r, l)), l; -} -function lr(e, t, n) { - var i = Array.prototype.slice.call(arguments, 3), r = n(i), l = t.get(r); - return typeof l > "u" && (l = e.apply(this, i), t.set(r, l)), l; -} -function dn(e, t, n, i, r) { - return n.bind(t, e, i, r); -} -function Ws(e, t) { - var n = e.length === 1 ? rr : lr; - return dn(e, this, n, t.cache.create(), t.serializer); -} -function Qs(e, t) { - return dn(e, this, lr, t.cache.create(), t.serializer); -} -function Js(e, t) { - return dn(e, this, rr, t.cache.create(), t.serializer); -} -var Ys = function() { - return JSON.stringify(arguments); -}; -function bn() { - this.cache = /* @__PURE__ */ Object.create(null); -} -bn.prototype.get = function(e) { - return this.cache[e]; -}; -bn.prototype.set = function(e, t) { - this.cache[e] = t; -}; -var Ks = { - create: function() { - return new bn(); - } -}, qt = { - variadic: Qs, - monadic: Js -}, Oe; -(function(e) { - e.MISSING_VALUE = "MISSING_VALUE", e.INVALID_VALUE = "INVALID_VALUE", e.MISSING_INTL_API = "MISSING_INTL_API"; -})(Oe || (Oe = {})); -var Et = ( - /** @class */ - function(e) { - yt(t, e); - function t(n, i, r) { - var l = e.call(this, n) || this; - return l.code = i, l.originalMessage = r, l; - } - return t.prototype.toString = function() { - return "[formatjs Error: ".concat(this.code, "] ").concat(this.message); - }, t; - }(Error) -), zn = ( - /** @class */ - function(e) { - yt(t, e); - function t(n, i, r, l) { - return e.call(this, 'Invalid values for "'.concat(n, '": "').concat(i, '". Options are "').concat(Object.keys(r).join('", "'), '"'), Oe.INVALID_VALUE, l) || this; - } - return t; - }(Et) -), $s = ( - /** @class */ - function(e) { - yt(t, e); - function t(n, i, r) { - return e.call(this, 'Value for "'.concat(n, '" must be of type ').concat(i), Oe.INVALID_VALUE, r) || this; - } - return t; - }(Et) -), ea = ( - /** @class */ - function(e) { - yt(t, e); - function t(n, i) { - return e.call(this, 'The intl string context variable "'.concat(n, '" was not provided to the string "').concat(i, '"'), Oe.MISSING_VALUE, i) || this; - } - return t; - }(Et) -), j; -(function(e) { - e[e.literal = 0] = "literal", e[e.object = 1] = "object"; -})(j || (j = {})); -function ta(e) { - return e.length < 2 ? e : e.reduce(function(t, n) { - var i = t[t.length - 1]; - return !i || i.type !== j.literal || n.type !== j.literal ? t.push(n) : i.value += n.value, t; - }, []); -} -function na(e) { - return typeof e == "function"; -} -function dt(e, t, n, i, r, l, o) { - if (e.length === 1 && Dn(e[0])) - return [ - { - type: j.literal, - value: e[0].value - } - ]; - for (var a = [], s = 0, u = e; s < u.length; s++) { - var f = u[s]; - if (Dn(f)) { - a.push({ - type: j.literal, - value: f.value - }); - continue; - } - if (ps(f)) { - typeof l == "number" && a.push({ - type: j.literal, - value: n.getNumberFormat(t).format(l) - }); - continue; - } - var c = f.value; - if (!(r && c in r)) - throw new ea(c, o); - var h = r[c]; - if (gs(f)) { - (!h || typeof h == "string" || typeof h == "number") && (h = typeof h == "string" || typeof h == "number" ? String(h) : ""), a.push({ - type: typeof h == "string" ? j.literal : j.object, - value: h - }); - continue; - } - if (qi(f)) { - var _ = typeof f.style == "string" ? i.date[f.style] : $t(f.style) ? f.style.parsedOptions : void 0; - a.push({ - type: j.literal, - value: n.getDateTimeFormat(t, _).format(h) - }); - continue; - } - if (Xi(f)) { - var _ = typeof f.style == "string" ? 
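- // (editor's note) dt is intl-messageformat's formatToParts core: it walks the parsed AST and resolves each node kind (literal, argument, number/date/time with optional parsed skeleton styles, rich-text tag, select, plural) against the caller's values and the shared formatter cache.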
i.time[f.style] : $t(f.style) ? f.style.parsedOptions : i.time.medium; - a.push({ - type: j.literal, - value: n.getDateTimeFormat(t, _).format(h) - }); - continue; - } - if (zi(f)) { - var _ = typeof f.style == "string" ? i.number[f.style] : Ji(f.style) ? f.style.parsedOptions : void 0; - _ && _.scale && (h = h * (_.scale || 1)), a.push({ - type: j.literal, - value: n.getNumberFormat(t, _).format(h) - }); - continue; - } - if (Qi(f)) { - var b = f.children, T = f.value, y = r[T]; - if (!na(y)) - throw new $s(T, "function", o); - var C = dt(b, t, n, i, r, l), E = y(C.map(function(p) { - return p.value; - })); - Array.isArray(E) || (E = [E]), a.push.apply(a, E.map(function(p) { - return { - type: typeof p == "string" ? j.literal : j.object, - value: p - }; - })); - } - if (Zi(f)) { - var m = f.options[h] || f.options.other; - if (!m) - throw new zn(f.value, h, Object.keys(f.options), o); - a.push.apply(a, dt(m.value, t, n, i, r)); - continue; - } - if (Wi(f)) { - var m = f.options["=".concat(h)]; - if (!m) { - if (!Intl.PluralRules) - throw new Et(`Intl.PluralRules is not available in this environment. -Try polyfilling it using "@formatjs/intl-pluralrules" -`, Oe.MISSING_INTL_API, o); - var g = n.getPluralRules(t, { type: f.pluralType }).select(h - (f.offset || 0)); - m = f.options[g] || f.options.other; - } - if (!m) - throw new zn(f.value, h, Object.keys(f.options), o); - a.push.apply(a, dt(m.value, t, n, i, r, h - (f.offset || 0))); - continue; - } - } - return ta(a); -} -function ia(e, t) { - return t ? k(k(k({}, e || {}), t || {}), Object.keys(e).reduce(function(n, i) { - return n[i] = k(k({}, e[i]), t[i] || {}), n; - }, {})) : e; -} -function ra(e, t) { - return t ? Object.keys(e).reduce(function(n, i) { - return n[i] = ia(e[i], t[i]), n; - }, k({}, e)) : e; -} -function Xt(e) { - return { - create: function() { - return { - get: function(t) { - return e[t]; - }, - set: function(t, n) { - e[t] = n; - } - }; - } - }; -} -function la(e) { - return e === void 0 && (e = { - number: {}, - dateTime: {}, - pluralRules: {} - }), { - getNumberFormat: zt(function() { - for (var t, n = [], i = 0; i < arguments.length; i++) - n[i] = arguments[i]; - return new ((t = Intl.NumberFormat).bind.apply(t, jt([void 0], n, !1)))(); - }, { - cache: Xt(e.number), - strategy: qt.variadic - }), - getDateTimeFormat: zt(function() { - for (var t, n = [], i = 0; i < arguments.length; i++) - n[i] = arguments[i]; - return new ((t = Intl.DateTimeFormat).bind.apply(t, jt([void 0], n, !1)))(); - }, { - cache: Xt(e.dateTime), - strategy: qt.variadic - }), - getPluralRules: zt(function() { - for (var t, n = [], i = 0; i < arguments.length; i++) - n[i] = arguments[i]; - return new ((t = Intl.PluralRules).bind.apply(t, jt([void 0], n, !1)))(); - }, { - cache: Xt(e.pluralRules), - strategy: qt.variadic - }) - }; -} -var oa = ( - /** @class */ - function() { - function e(t, n, i, r) { - var l = this; - if (n === void 0 && (n = e.defaultLocale), this.formatterCache = { - number: {}, - dateTime: {}, - pluralRules: {} - }, this.format = function(o) { - var a = l.formatToParts(o); - if (a.length === 1) - return a[0].value; - var s = a.reduce(function(u, f) { - return !u.length || f.type !== j.literal || typeof u[u.length - 1] != "string" ? u.push(f.value) : u[u.length - 1] += f.value, u; - }, []); - return s.length <= 1 ? 
s[0] || "" : s; - }, this.formatToParts = function(o) { - return dt(l.ast, l.locales, l.formatters, l.formats, o, void 0, l.message); - }, this.resolvedOptions = function() { - return { - locale: l.resolvedLocale.toString() - }; - }, this.getAst = function() { - return l.ast; - }, this.locales = n, this.resolvedLocale = e.resolveLocale(n), typeof t == "string") { - if (this.message = t, !e.__parse) - throw new TypeError("IntlMessageFormat.__parse must be set to process `message` of type `string`"); - this.ast = e.__parse(t, { - ignoreTag: r == null ? void 0 : r.ignoreTag, - locale: this.resolvedLocale - }); - } else - this.ast = t; - if (!Array.isArray(this.ast)) - throw new TypeError("A message must be provided as a String or AST."); - this.formats = ra(e.formats, i), this.formatters = r && r.formatters || la(this.formatterCache); - } - return Object.defineProperty(e, "defaultLocale", { - get: function() { - return e.memoizedDefaultLocale || (e.memoizedDefaultLocale = new Intl.NumberFormat().resolvedOptions().locale), e.memoizedDefaultLocale; - }, - enumerable: !1, - configurable: !0 - }), e.memoizedDefaultLocale = null, e.resolveLocale = function(t) { - var n = Intl.NumberFormat.supportedLocalesOf(t); - return n.length > 0 ? new Intl.Locale(n[0]) : new Intl.Locale(typeof t == "string" ? t : t[0]); - }, e.__parse = Xs, e.formats = { - number: { - integer: { - maximumFractionDigits: 0 - }, - currency: { - style: "currency" - }, - percent: { - style: "percent" - } - }, - date: { - short: { - month: "numeric", - day: "numeric", - year: "2-digit" - }, - medium: { - month: "short", - day: "numeric", - year: "numeric" - }, - long: { - month: "long", - day: "numeric", - year: "numeric" - }, - full: { - weekday: "long", - month: "long", - day: "numeric", - year: "numeric" - } - }, - time: { - short: { - hour: "numeric", - minute: "numeric" - }, - medium: { - hour: "numeric", - minute: "numeric", - second: "numeric" - }, - long: { - hour: "numeric", - minute: "numeric", - second: "numeric", - timeZoneName: "short" - }, - full: { - hour: "numeric", - minute: "numeric", - second: "numeric", - timeZoneName: "short" - } - } - }, e; - }() -); -function sa(e, t) { - if (t == null) - return; - if (t in e) - return e[t]; - const n = t.split("."); - let i = e; - for (let r = 0; r < n.length; r++) - if (typeof i == "object") { - if (r > 0) { - const l = n.slice(r, n.length).join("."); - if (l in i) { - i = i[l]; - break; - } - } - i = i[n[r]]; - } else - i = void 0; - return i; -} -const we = {}, aa = (e, t, n) => n && (t in we || (we[t] = {}), e in we[t] || (we[t][e] = n), n), or = (e, t) => { - if (t == null) - return; - if (t in we && e in we[t]) - return we[t][e]; - const n = St(t); - for (let i = 0; i < n.length; i++) { - const r = n[i], l = fa(r, e); - if (l) - return aa(e, t, l); - } -}; -let gn; -const nt = tt({}); -function ua(e) { - return gn[e] || null; -} -function sr(e) { - return e in gn; -} -function fa(e, t) { - if (!sr(e)) - return null; - const n = ua(e); - return sa(n, t); -} -function ca(e) { - if (e == null) - return; - const t = St(e); - for (let n = 0; n < t.length; n++) { - const i = t[n]; - if (sr(i)) - return i; - } -} -function ha(e, ...t) { - delete we[e], nt.update((n) => (n[e] = bs.all([n[e] || {}, ...t]), n)); -} -Ue( - [nt], - ([e]) => Object.keys(e) -); -nt.subscribe((e) => gn = e); -const bt = {}; -function _a(e, t) { - bt[e].delete(t), bt[e].size === 0 && delete bt[e]; -} -function ar(e) { - return bt[e]; -} -function ma(e) { - return St(e).map((t) => { - const n = ar(t); 
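- // (editor's note) bt maps a locale to the Set of message loaders registered for it; ma and on inspect those queues and ur ("flush") awaits them, which is how svelte-i18n lazy-loads dictionaries before a locale becomes active.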
- return [t, n ? [...n] : []]; - }).filter(([, t]) => t.length > 0); -} -function on(e) { - return e == null ? !1 : St(e).some( - (t) => { - var n; - return (n = ar(t)) == null ? void 0 : n.size; - } - ); -} -function da(e, t) { - return Promise.all( - t.map((i) => (_a(e, i), i().then((r) => r.default || r))) - ).then((i) => ha(e, ...i)); -} -const Ze = {}; -function ur(e) { - if (!on(e)) - return e in Ze ? Ze[e] : Promise.resolve(); - const t = ma(e); - return Ze[e] = Promise.all( - t.map( - ([n, i]) => da(n, i) - ) - ).then(() => { - if (on(e)) - return ur(e); - delete Ze[e]; - }), Ze[e]; -} -const ba = { - number: { - scientific: { notation: "scientific" }, - engineering: { notation: "engineering" }, - compactLong: { notation: "compact", compactDisplay: "long" }, - compactShort: { notation: "compact", compactDisplay: "short" } - }, - date: { - short: { month: "numeric", day: "numeric", year: "2-digit" }, - medium: { month: "short", day: "numeric", year: "numeric" }, - long: { month: "long", day: "numeric", year: "numeric" }, - full: { weekday: "long", month: "long", day: "numeric", year: "numeric" } - }, - time: { - short: { hour: "numeric", minute: "numeric" }, - medium: { hour: "numeric", minute: "numeric", second: "numeric" }, - long: { - hour: "numeric", - minute: "numeric", - second: "numeric", - timeZoneName: "short" - }, - full: { - hour: "numeric", - minute: "numeric", - second: "numeric", - timeZoneName: "short" - } - } -}, ga = { - fallbackLocale: null, - loadingDelay: 200, - formats: ba, - warnOnMissingMessages: !0, - handleMissingMessage: void 0, - ignoreTag: !0 -}, pa = ga; -function Me() { - return pa; -} -const Zt = tt(!1); -var va = Object.defineProperty, wa = Object.defineProperties, ya = Object.getOwnPropertyDescriptors, qn = Object.getOwnPropertySymbols, Ea = Object.prototype.hasOwnProperty, Sa = Object.prototype.propertyIsEnumerable, Xn = (e, t, n) => t in e ? va(e, t, { enumerable: !0, configurable: !0, writable: !0, value: n }) : e[t] = n, Ta = (e, t) => { - for (var n in t || (t = {})) - Ea.call(t, n) && Xn(e, n, t[n]); - if (qn) - for (var n of qn(t)) - Sa.call(t, n) && Xn(e, n, t[n]); - return e; -}, Aa = (e, t) => wa(e, ya(t)); -let sn; -const gt = tt(null); -function Zn(e) { - return e.split("-").map((t, n, i) => i.slice(0, n + 1).join("-")).reverse(); -} -function St(e, t = Me().fallbackLocale) { - const n = Zn(e); - return t ? [.../* @__PURE__ */ new Set([...n, ...Zn(t)])] : n; -} -function Be() { - return sn ?? void 0; -} -gt.subscribe((e) => { - sn = e ?? void 0, typeof window < "u" && e != null && document.documentElement.setAttribute("lang", e); -}); -const Ha = (e) => { - if (e && ca(e) && on(e)) { - const { loadingDelay: t } = Me(); - let n; - return typeof window < "u" && Be() != null && t ? n = window.setTimeout( - () => Zt.set(!0), - t - ) : Zt.set(!0), ur(e).then(() => { - gt.set(e); - }).finally(() => { - clearTimeout(n), Zt.set(!1); - }); - } - return gt.set(e); -}, it = Aa(Ta({}, gt), { - set: Ha -}), Tt = (e) => { - const t = /* @__PURE__ */ Object.create(null); - return (i) => { - const r = JSON.stringify(i); - return r in t ? t[r] : t[r] = e(i); - }; -}; -var Ba = Object.defineProperty, pt = Object.getOwnPropertySymbols, fr = Object.prototype.hasOwnProperty, cr = Object.prototype.propertyIsEnumerable, Wn = (e, t, n) => t in e ? 
Ba(e, t, { enumerable: !0, configurable: !0, writable: !0, value: n }) : e[t] = n, pn = (e, t) => { - for (var n in t || (t = {})) - fr.call(t, n) && Wn(e, n, t[n]); - if (pt) - for (var n of pt(t)) - cr.call(t, n) && Wn(e, n, t[n]); - return e; -}, Ge = (e, t) => { - var n = {}; - for (var i in e) - fr.call(e, i) && t.indexOf(i) < 0 && (n[i] = e[i]); - if (e != null && pt) - for (var i of pt(e)) - t.indexOf(i) < 0 && cr.call(e, i) && (n[i] = e[i]); - return n; -}; -const Ye = (e, t) => { - const { formats: n } = Me(); - if (e in n && t in n[e]) - return n[e][t]; - throw new Error(`[svelte-i18n] Unknown "${t}" ${e} format.`); -}, Ca = Tt( - (e) => { - var t = e, { locale: n, format: i } = t, r = Ge(t, ["locale", "format"]); - if (n == null) - throw new Error('[svelte-i18n] A "locale" must be set to format numbers'); - return i && (r = Ye("number", i)), new Intl.NumberFormat(n, r); - } -), Pa = Tt( - (e) => { - var t = e, { locale: n, format: i } = t, r = Ge(t, ["locale", "format"]); - if (n == null) - throw new Error('[svelte-i18n] A "locale" must be set to format dates'); - return i ? r = Ye("date", i) : Object.keys(r).length === 0 && (r = Ye("date", "short")), new Intl.DateTimeFormat(n, r); - } -), Ia = Tt( - (e) => { - var t = e, { locale: n, format: i } = t, r = Ge(t, ["locale", "format"]); - if (n == null) - throw new Error( - '[svelte-i18n] A "locale" must be set to format time values' - ); - return i ? r = Ye("time", i) : Object.keys(r).length === 0 && (r = Ye("time", "short")), new Intl.DateTimeFormat(n, r); - } -), ka = (e = {}) => { - var t = e, { - locale: n = Be() - } = t, i = Ge(t, [ - "locale" - ]); - return Ca(pn({ locale: n }, i)); -}, La = (e = {}) => { - var t = e, { - locale: n = Be() - } = t, i = Ge(t, [ - "locale" - ]); - return Pa(pn({ locale: n }, i)); -}, Na = (e = {}) => { - var t = e, { - locale: n = Be() - } = t, i = Ge(t, [ - "locale" - ]); - return Ia(pn({ locale: n }, i)); -}, Oa = Tt( - // eslint-disable-next-line @typescript-eslint/no-non-null-assertion - (e, t = Be()) => new oa(e, t, Me().formats, { - ignoreTag: Me().ignoreTag - }) -), Ma = (e, t = {}) => { - var n, i, r, l; - let o = t; - typeof e == "object" && (o = e, e = o.id); - const { - values: a, - locale: s = Be(), - default: u - } = o; - if (s == null) - throw new Error( - "[svelte-i18n] Cannot format a message without first setting the initial locale." - ); - let f = or(e, s); - if (!f) - f = (l = (r = (i = (n = Me()).handleMissingMessage) == null ? void 0 : i.call(n, { locale: s, id: e, defaultValue: u })) != null ? r : u) != null ? l : e; - else if (typeof f != "string") - return console.warn( - `[svelte-i18n] Message with id "${e}" must be of type "string", found: "${typeof f}". 
Getting its value through the "$format" method is deprecated; use the "json" method instead.` - ), f; - if (!a) - return f; - let c = f; - try { - c = Oa(f, s).format(a); - } catch (h) { - h instanceof Error && console.warn( - `[svelte-i18n] Message "${e}" has syntax error:`, - h.message - ); - } - return c; -}, Ra = (e, t) => Na(t).format(e), Da = (e, t) => La(t).format(e), Ua = (e, t) => ka(t).format(e), Ga = (e, t = Be()) => or(e, t); -Ue([it, nt], () => Ma); -Ue([it], () => Ra); -Ue([it], () => Da); -Ue([it], () => Ua); -Ue([it, nt], () => Ga); -const { - SvelteComponent: Fa, - append: Qn, - attr: xa, - check_outros: Jn, - create_component: vn, - destroy_component: wn, - detach: ja, - element: Va, - group_outros: Yn, - init: za, - insert: qa, - mount_component: yn, - safe_not_equal: Xa, - set_style: Kn, - space: $n, - toggle_class: ei, - transition_in: _e, - transition_out: Te -} = window.__gradio__svelte__internal, { createEventDispatcher: Za } = window.__gradio__svelte__internal; -function ti(e) { - let t, n; - return t = new et({ - props: { - Icon: bo, - label: ( - /*i18n*/ - e[3]("common.edit") - ) - } - }), t.$on( - "click", - /*click_handler*/ - e[5] - ), { - c() { - vn(t.$$.fragment); - }, - m(i, r) { - yn(t, i, r), n = !0; - }, - p(i, r) { - const l = {}; - r & /*i18n*/ - 8 && (l.label = /*i18n*/ - i[3]("common.edit")), t.$set(l); - }, - i(i) { - n || (_e(t.$$.fragment, i), n = !0); - }, - o(i) { - Te(t.$$.fragment, i), n = !1; - }, - d(i) { - wn(t, i); - } - }; -} -function ni(e) { - let t, n; - return t = new et({ - props: { - Icon: Po, - label: ( - /*i18n*/ - e[3]("common.undo") - ) - } - }), t.$on( - "click", - /*click_handler_1*/ - e[6] - ), { - c() { - vn(t.$$.fragment); - }, - m(i, r) { - yn(t, i, r), n = !0; - }, - p(i, r) { - const l = {}; - r & /*i18n*/ - 8 && (l.label = /*i18n*/ - i[3]("common.undo")), t.$set(l); - }, - i(i) { - n || (_e(t.$$.fragment, i), n = !0); - }, - o(i) { - Te(t.$$.fragment, i), n = !1; - }, - d(i) { - wn(t, i); - } - }; -} -function Wa(e) { - let t, n, i, r, l, o = ( - /*editable*/ - e[0] && ti(e) - ), a = ( - /*undoable*/ - e[1] && ni(e) - ); - return r = new et({ - props: { - Icon: ql, - label: ( - /*i18n*/ - e[3]("common.clear") - ) - } - }), r.$on( - "click", - /*click_handler_2*/ - e[7] - ), { - c() { - t = Va("div"), o && o.c(), n = $n(), a && a.c(), i = $n(), vn(r.$$.fragment), xa(t, "class", "svelte-1wj0ocy"), ei(t, "not-absolute", !/*absolute*/ - e[2]), Kn( - t, - "position", - /*absolute*/ - e[2] ? "absolute" : "static" - ); - }, - m(s, u) { - qa(s, t, u), o && o.m(t, null), Qn(t, n), a && a.m(t, null), Qn(t, i), yn(r, t, null), l = !0; - }, - p(s, [u]) { - /*editable*/ - s[0] ? o ? (o.p(s, u), u & /*editable*/ - 1 && _e(o, 1)) : (o = ti(s), o.c(), _e(o, 1), o.m(t, n)) : o && (Yn(), Te(o, 1, 1, () => { - o = null; - }), Jn()), /*undoable*/ - s[1] ? a ? (a.p(s, u), u & /*undoable*/ - 2 && _e(a, 1)) : (a = ni(s), a.c(), _e(a, 1), a.m(t, i)) : a && (Yn(), Te(a, 1, 1, () => { - a = null; - }), Jn()); - const f = {}; - u & /*i18n*/ - 8 && (f.label = /*i18n*/ - s[3]("common.clear")), r.$set(f), (!l || u & /*absolute*/ - 4) && ei(t, "not-absolute", !/*absolute*/ - s[2]), u & /*absolute*/ - 4 && Kn( - t, - "position", - /*absolute*/ - s[2] ?
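- // (editor's note) Ja, defined below, is the compiled icon-button toolbar overlaid on Gradio media components (optional edit and undo buttons plus clear); the interrupted ternary picks the "absolute" or "static" CSS position value from the absolute prop.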
"absolute" : "static" - ); - }, - i(s) { - l || (_e(o), _e(a), _e(r.$$.fragment, s), l = !0); - }, - o(s) { - Te(o), Te(a), Te(r.$$.fragment, s), l = !1; - }, - d(s) { - s && ja(t), o && o.d(), a && a.d(), wn(r); - } - }; -} -function Qa(e, t, n) { - let { editable: i = !1 } = t, { undoable: r = !1 } = t, { absolute: l = !0 } = t, { i18n: o } = t; - const a = Za(), s = () => a("edit"), u = () => a("undo"), f = (c) => { - a("clear"), c.stopPropagation(); - }; - return e.$$set = (c) => { - "editable" in c && n(0, i = c.editable), "undoable" in c && n(1, r = c.undoable), "absolute" in c && n(2, l = c.absolute), "i18n" in c && n(3, o = c.i18n); - }, [ - i, - r, - l, - o, - a, - s, - u, - f - ]; -} -class Ja extends Fa { - constructor(t) { - super(), za(this, t, Qa, Wa, Xa, { - editable: 0, - undoable: 1, - absolute: 2, - i18n: 3 - }); - } -} -var ii = Object.prototype.hasOwnProperty; -function ri(e, t, n) { - for (n of e.keys()) - if (Qe(n, t)) - return n; -} -function Qe(e, t) { - var n, i, r; - if (e === t) - return !0; - if (e && t && (n = e.constructor) === t.constructor) { - if (n === Date) - return e.getTime() === t.getTime(); - if (n === RegExp) - return e.toString() === t.toString(); - if (n === Array) { - if ((i = e.length) === t.length) - for (; i-- && Qe(e[i], t[i]); ) - ; - return i === -1; - } - if (n === Set) { - if (e.size !== t.size) - return !1; - for (i of e) - if (r = i, r && typeof r == "object" && (r = ri(t, r), !r) || !t.has(r)) - return !1; - return !0; - } - if (n === Map) { - if (e.size !== t.size) - return !1; - for (i of e) - if (r = i[0], r && typeof r == "object" && (r = ri(t, r), !r) || !Qe(i[1], t.get(r))) - return !1; - return !0; - } - if (n === ArrayBuffer) - e = new Uint8Array(e), t = new Uint8Array(t); - else if (n === DataView) { - if ((i = e.byteLength) === t.byteLength) - for (; i-- && e.getInt8(i) === t.getInt8(i); ) - ; - return i === -1; - } - if (ArrayBuffer.isView(e)) { - if ((i = e.byteLength) === t.byteLength) - for (; i-- && e[i] === t[i]; ) - ; - return i === -1; - } - if (!n || typeof e == "object") { - i = 0; - for (n in e) - if (ii.call(e, n) && ++i && !ii.call(t, n) || !(n in t) || !Qe(e[n], t[n])) - return !1; - return Object.keys(t).length === i; - } - } - return e !== e && t !== t; -} -async function Ya(e) { - return e ? `
<div style="display: flex; flex-wrap: wrap; gap: 16px">${(await Promise.all( - e.map(async ([n, i]) => n === null || !n.url ? "" : await Lo(n.url, "url")) - )).map((n) => `<img src="${n}" style="height: 400px" />`).join("")}</div>
      ` : ""; -} -const { - SvelteComponent: Ka, - add_iframe_resize_listener: $a, - add_render_callback: hr, - append: F, - attr: w, - binding_callbacks: li, - bubble: ve, - check_outros: Ke, - create_component: Fe, - destroy_component: xe, - destroy_each: _r, - detach: V, - element: U, - empty: eu, - ensure_array_like: vt, - globals: tu, - group_outros: $e, - init: nu, - insert: z, - listen: le, - mount_component: je, - run_all: mr, - safe_not_equal: iu, - set_data: dr, - set_style: se, - space: ae, - src_url_equal: he, - text: br, - toggle_class: ue, - transition_in: D, - transition_out: x -} = window.__gradio__svelte__internal, { window: gr } = tu, { createEventDispatcher: ru } = window.__gradio__svelte__internal, { tick: lu } = window.__gradio__svelte__internal; -function oi(e, t, n) { - const i = e.slice(); - return i[45] = t[n], i[47] = n, i; -} -function si(e, t, n) { - const i = e.slice(); - return i[48] = t[n], i[49] = t, i[47] = n, i; -} -function ai(e) { - let t, n; - return t = new ol({ - props: { - show_label: ( - /*show_label*/ - e[1] - ), - Icon: Ui, - label: ( - /*label*/ - e[2] || "Gallery" - ) - } - }), { - c() { - Fe(t.$$.fragment); - }, - m(i, r) { - je(t, i, r), n = !0; - }, - p(i, r) { - const l = {}; - r[0] & /*show_label*/ - 2 && (l.show_label = /*show_label*/ - i[1]), r[0] & /*label*/ - 4 && (l.label = /*label*/ - i[2] || "Gallery"), t.$set(l); - }, - i(i) { - n || (D(t.$$.fragment, i), n = !0); - }, - o(i) { - x(t.$$.fragment, i), n = !1; - }, - d(i) { - xe(t, i); - } - }; -} -function ou(e) { - let t, n, i, r, l, o, a = ( - /*selected_index*/ - e[0] !== null && /*allow_preview*/ - e[7] && ui(e) - ), s = ( - /*show_share_button*/ - e[9] && _i(e) - ), u = vt( - /*_value*/ - e[12] - ), f = []; - for (let c = 0; c < u.length; c += 1) - f[c] = di(oi(e, u, c)); - return { - c() { - a && a.c(), t = ae(), n = U("div"), i = U("div"), s && s.c(), r = ae(); - for (let c = 0; c < f.length; c += 1) - f[c].c(); - w(i, "class", "grid-container svelte-1wl86it"), se( - i, - "--grid-cols", - /*columns*/ - e[4] - ), se( - i, - "--grid-rows", - /*rows*/ - e[5] - ), se( - i, - "--object-fit", - /*object_fit*/ - e[8] - ), se( - i, - "height", - /*height*/ - e[6] - ), ue( - i, - "pt-6", - /*show_label*/ - e[1] - ), w(n, "class", "grid-wrap svelte-1wl86it"), hr(() => ( - /*div1_elementresize_handler*/ - e[40].call(n) - )), ue(n, "fixed-height", !/*height*/ - e[6] || /*height*/ - e[6] == "auto"); - }, - m(c, h) { - a && a.m(c, h), z(c, t, h), z(c, n, h), F(n, i), s && s.m(i, null), F(i, r); - for (let _ = 0; _ < f.length; _ += 1) - f[_] && f[_].m(i, null); - l = $a( - n, - /*div1_elementresize_handler*/ - e[40].bind(n) - ), o = !0; - }, - p(c, h) { - if (/*selected_index*/ - c[0] !== null && /*allow_preview*/ - c[7] ? a ? (a.p(c, h), h[0] & /*selected_index, allow_preview*/ - 129 && D(a, 1)) : (a = ui(c), a.c(), D(a, 1), a.m(t.parentNode, t)) : a && ($e(), x(a, 1, 1, () => { - a = null; - }), Ke()), /*show_share_button*/ - c[9] ? s ? (s.p(c, h), h[0] & /*show_share_button*/ - 512 && D(s, 1)) : (s = _i(c), s.c(), D(s, 1), s.m(i, r)) : s && ($e(), x(s, 1, 1, () => { - s = null; - }), Ke()), h[0] & /*_value, selected_index*/ - 4097) { - u = vt( - /*_value*/ - c[12] - ); - let _; - for (_ = 0; _ < u.length; _ += 1) { - const b = oi(c, u, _); - f[_] ? 
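- // (editor's note) ou renders the gallery grid view: the loop below diffs the compiled thumbnail blocks in f against _value, patching blocks that already exist and creating or destroying the rest (Svelte's generated each-block update).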
f[_].p(b, h) : (f[_] = di(b), f[_].c(), f[_].m(i, null)); - } - for (; _ < f.length; _ += 1) - f[_].d(1); - f.length = u.length; - } - (!o || h[0] & /*columns*/ - 16) && se( - i, - "--grid-cols", - /*columns*/ - c[4] - ), (!o || h[0] & /*rows*/ - 32) && se( - i, - "--grid-rows", - /*rows*/ - c[5] - ), (!o || h[0] & /*object_fit*/ - 256) && se( - i, - "--object-fit", - /*object_fit*/ - c[8] - ), (!o || h[0] & /*height*/ - 64) && se( - i, - "height", - /*height*/ - c[6] - ), (!o || h[0] & /*show_label*/ - 2) && ue( - i, - "pt-6", - /*show_label*/ - c[1] - ), (!o || h[0] & /*height*/ - 64) && ue(n, "fixed-height", !/*height*/ - c[6] || /*height*/ - c[6] == "auto"); - }, - i(c) { - o || (D(a), D(s), o = !0); - }, - o(c) { - x(a), x(s), o = !1; - }, - d(c) { - c && (V(t), V(n)), a && a.d(c), s && s.d(), _r(f, c), l(); - } - }; -} -function su(e) { - let t, n; - return t = new Ul({ - props: { - unpadded_box: !0, - size: "large", - $$slots: { default: [hu] }, - $$scope: { ctx: e } - } - }), { - c() { - Fe(t.$$.fragment); - }, - m(i, r) { - je(t, i, r), n = !0; - }, - p(i, r) { - const l = {}; - r[1] & /*$$scope*/ - 524288 && (l.$$scope = { dirty: r, ctx: i }), t.$set(l); - }, - i(i) { - n || (D(t.$$.fragment, i), n = !0); - }, - o(i) { - x(t.$$.fragment, i), n = !1; - }, - d(i) { - xe(t, i); - } - }; -} -function ui(e) { - var m; - let t, n, i, r, l, o, a, s, u, f, c, h = ( - /*show_download_button*/ - e[10] && fi(e) - ); - r = new Ja({ - props: { i18n: ( - /*i18n*/ - e[11] - ), absolute: !1 } - }), r.$on( - "clear", - /*clear_handler*/ - e[32] - ); - function _(g, p) { - return ( - /*_value*/ - g[12][ - /*selected_index*/ - g[0] - ].image.mime_type === "video/mp4" ? uu : au - ); - } - let b = _(e), T = b(e), y = ( - /*_value*/ - ((m = e[12][ - /*selected_index*/ - e[0] - ]) == null ? void 0 : m.caption) && ci(e) - ), C = vt( - /*_value*/ - e[12] - ), E = []; - for (let g = 0; g < C.length; g += 1) - E[g] = hi(si(e, C, g)); - return { - c() { - t = U("button"), n = U("div"), h && h.c(), i = ae(), Fe(r.$$.fragment), l = ae(), T.c(), o = ae(), y && y.c(), a = ae(), s = U("div"); - for (let g = 0; g < E.length; g += 1) - E[g].c(); - w(n, "class", "icon-buttons svelte-1wl86it"), w(s, "class", "thumbnails scroll-hide svelte-1wl86it"), w(s, "data-testid", "container_el"), w(t, "class", "preview svelte-1wl86it"); - }, - m(g, p) { - z(g, t, p), F(t, n), h && h.m(n, null), F(n, i), je(r, n, null), F(t, l), T.m(t, null), F(t, o), y && y.m(t, null), F(t, a), F(t, s); - for (let N = 0; N < E.length; N += 1) - E[N] && E[N].m(s, null); - e[36](s), u = !0, f || (c = le( - t, - "keydown", - /*on_keydown*/ - e[18] - ), f = !0); - }, - p(g, p) { - var G; - /*show_download_button*/ - g[10] ? h ? (h.p(g, p), p[0] & /*show_download_button*/ - 1024 && D(h, 1)) : (h = fi(g), h.c(), D(h, 1), h.m(n, i)) : h && ($e(), x(h, 1, 1, () => { - h = null; - }), Ke()); - const N = {}; - if (p[0] & /*i18n*/ - 2048 && (N.i18n = /*i18n*/ - g[11]), r.$set(N), b === (b = _(g)) && T ? T.p(g, p) : (T.d(1), T = b(g), T && (T.c(), T.m(t, o))), /*_value*/ - (G = g[12][ - /*selected_index*/ - g[0] - ]) != null && G.caption ? y ? y.p(g, p) : (y = ci(g), y.c(), y.m(t, a)) : y && (y.d(1), y = null), p[0] & /*_value, el, selected_index*/ - 12289) { - C = vt( - /*_value*/ - g[12] - ); - let L; - for (L = 0; L < C.length; L += 1) { - const Z = si(g, C, L); - E[L] ? 
E[L].p(Z, p) : (E[L] = hi(Z), E[L].c(), E[L].m(s, null)); - } - for (; L < E.length; L += 1) - E[L].d(1); - E.length = C.length; - } - }, - i(g) { - u || (D(h), D(r.$$.fragment, g), u = !0); - }, - o(g) { - x(h), x(r.$$.fragment, g), u = !1; - }, - d(g) { - g && V(t), h && h.d(), xe(r), T.d(), y && y.d(), _r(E, g), e[36](null), f = !1, c(); - } - }; -} -function fi(e) { - let t, n, i, r; - return n = new et({ - props: { - Icon: so, - label: ( - /*i18n*/ - e[11]("common.download") - ) - } - }), { - c() { - t = U("a"), Fe(n.$$.fragment), w(t, "href", i = an( - /*value*/ - e[3][ - /*selected_index*/ - e[0] - ] - )), w(t, "target", window.__is_colab__ ? "_blank" : null), w(t, "download", "image"), w(t, "class", "svelte-1wl86it"); - }, - m(l, o) { - z(l, t, o), je(n, t, null), r = !0; - }, - p(l, o) { - const a = {}; - o[0] & /*i18n*/ - 2048 && (a.label = /*i18n*/ - l[11]("common.download")), n.$set(a), (!r || o[0] & /*value, selected_index*/ - 9 && i !== (i = an( - /*value*/ - l[3][ - /*selected_index*/ - l[0] - ] - ))) && w(t, "href", i); - }, - i(l) { - r || (D(n.$$.fragment, l), r = !0); - }, - o(l) { - x(n.$$.fragment, l), r = !1; - }, - d(l) { - l && V(t), xe(n); - } - }; -} -function au(e) { - let t, n, i, r, l, o, a; - return { - c() { - t = U("button"), n = U("img"), w(n, "data-testid", "detailed-image"), he(n.src, i = /*_value*/ - e[12][ - /*selected_index*/ - e[0] - ].image.url) || w(n, "src", i), w(n, "alt", r = /*_value*/ - e[12][ - /*selected_index*/ - e[0] - ].caption || ""), w(n, "title", l = /*_value*/ - e[12][ - /*selected_index*/ - e[0] - ].caption || null), w(n, "loading", "lazy"), w(n, "class", "svelte-1wl86it"), ue(n, "with-caption", !!/*_value*/ - e[12][ - /*selected_index*/ - e[0] - ].caption), w(t, "class", "image-button svelte-1wl86it"), se(t, "height", "calc(100% - " + /*_value*/ - (e[12][ - /*selected_index*/ - e[0] - ].caption ? "80px" : "60px") + ")"), w(t, "aria-label", "detailed view of selected image"); - }, - m(s, u) { - z(s, t, u), F(t, n), o || (a = le( - t, - "click", - /*click_handler*/ - e[33] - ), o = !0); - }, - p(s, u) { - u[0] & /*_value, selected_index*/ - 4097 && !he(n.src, i = /*_value*/ - s[12][ - /*selected_index*/ - s[0] - ].image.url) && w(n, "src", i), u[0] & /*_value, selected_index*/ - 4097 && r !== (r = /*_value*/ - s[12][ - /*selected_index*/ - s[0] - ].caption || "") && w(n, "alt", r), u[0] & /*_value, selected_index*/ - 4097 && l !== (l = /*_value*/ - s[12][ - /*selected_index*/ - s[0] - ].caption || null) && w(n, "title", l), u[0] & /*_value, selected_index*/ - 4097 && ue(n, "with-caption", !!/*_value*/ - s[12][ - /*selected_index*/ - s[0] - ].caption), u[0] & /*_value, selected_index*/ - 4097 && se(t, "height", "calc(100% - " + /*_value*/ - (s[12][ - /*selected_index*/ - s[0] - ].caption ? 
"80px" : "60px") + ")"); - }, - d(s) { - s && V(t), o = !1, a(); - } - }; -} -function uu(e) { - let t, n, i, r, l, o; - return { - c() { - t = U("video"), n = U("track"), w(n, "kind", "captions"), w(t, "class", "detailed-video svelte-1wl86it"), w(t, "data-testid", "detailed-video"), t.controls = !0, he(t.src, i = /*_value*/ - e[12][ - /*selected_index*/ - e[0] - ].image.path) || w(t, "src", i), w(t, "title", r = /*_value*/ - e[12][ - /*selected_index*/ - e[0] - ].image.alt_text), w(t, "preload", "auto"); - }, - m(a, s) { - z(a, t, s), F(t, n), l || (o = [ - le( - t, - "play", - /*play_handler*/ - e[28] - ), - le( - t, - "pause", - /*pause_handler*/ - e[29] - ), - le( - t, - "ended", - /*ended_handler*/ - e[30] - ) - ], l = !0); - }, - p(a, s) { - s[0] & /*_value, selected_index*/ - 4097 && !he(t.src, i = /*_value*/ - a[12][ - /*selected_index*/ - a[0] - ].image.path) && w(t, "src", i), s[0] & /*_value, selected_index*/ - 4097 && r !== (r = /*_value*/ - a[12][ - /*selected_index*/ - a[0] - ].image.alt_text) && w(t, "title", r); - }, - d(a) { - a && V(t), l = !1, mr(o); - } - }; -} -function ci(e) { - let t, n = ( - /*_value*/ - e[12][ - /*selected_index*/ - e[0] - ].caption + "" - ), i; - return { - c() { - t = U("caption"), i = br(n), w(t, "class", "caption svelte-1wl86it"); - }, - m(r, l) { - z(r, t, l), F(t, i); - }, - p(r, l) { - l[0] & /*_value, selected_index*/ - 4097 && n !== (n = /*_value*/ - r[12][ - /*selected_index*/ - r[0] - ].caption + "") && dr(i, n); - }, - d(r) { - r && V(t); - } - }; -} -function hi(e) { - let t, n, i, r, l, o, a = ( - /*i*/ - e[47] - ), s, u; - const f = () => ( - /*button_binding*/ - e[34](t, a) - ), c = () => ( - /*button_binding*/ - e[34](null, a) - ); - function h() { - return ( - /*click_handler_1*/ - e[35]( - /*i*/ - e[47] - ) - ); - } - return { - c() { - t = U("button"), n = U("img"), l = ae(), he(n.src, i = /*media*/ - e[48].image.url) || w(n, "src", i), w(n, "title", r = /*media*/ - e[48].caption || null), w(n, "data-testid", "thumbnail " + /*i*/ - (e[47] + 1)), w(n, "alt", ""), w(n, "loading", "lazy"), w(n, "class", "svelte-1wl86it"), w(t, "class", "thumbnail-item thumbnail-small svelte-1wl86it"), w(t, "aria-label", o = "Thumbnail " + /*i*/ - (e[47] + 1) + " of " + /*_value*/ - e[12].length), ue( - t, - "selected", - /*selected_index*/ - e[0] === /*i*/ - e[47] - ); - }, - m(_, b) { - z(_, t, b), F(t, n), F(t, l), f(), s || (u = le(t, "click", h), s = !0); - }, - p(_, b) { - e = _, b[0] & /*_value*/ - 4096 && !he(n.src, i = /*media*/ - e[48].image.url) && w(n, "src", i), b[0] & /*_value*/ - 4096 && r !== (r = /*media*/ - e[48].caption || null) && w(n, "title", r), b[0] & /*_value*/ - 4096 && o !== (o = "Thumbnail " + /*i*/ - (e[47] + 1) + " of " + /*_value*/ - e[12].length) && w(t, "aria-label", o), a !== /*i*/ - e[47] && (c(), a = /*i*/ - e[47], f()), b[0] & /*selected_index*/ - 1 && ue( - t, - "selected", - /*selected_index*/ - e[0] === /*i*/ - e[47] - ); - }, - d(_) { - _ && V(t), c(), s = !1, u(); - } - }; -} -function _i(e) { - let t, n, i; - return n = new qo({ - props: { - i18n: ( - /*i18n*/ - e[11] - ), - value: ( - /*_value*/ - e[12] - ), - formatter: Ya - } - }), n.$on( - "share", - /*share_handler*/ - e[37] - ), n.$on( - "error", - /*error_handler*/ - e[38] - ), { - c() { - t = U("div"), Fe(n.$$.fragment), w(t, "class", "icon-button svelte-1wl86it"); - }, - m(r, l) { - z(r, t, l), je(n, t, null), i = !0; - }, - p(r, l) { - const o = {}; - l[0] & /*i18n*/ - 2048 && (o.i18n = /*i18n*/ - r[11]), l[0] & /*_value*/ - 4096 && (o.value = 
/*_value*/ - r[12]), n.$set(o); - }, - i(r) { - i || (D(n.$$.fragment, r), i = !0); - }, - o(r) { - x(n.$$.fragment, r), i = !1; - }, - d(r) { - r && V(t), xe(n); - } - }; -} -function fu(e) { - let t, n, i; - return { - c() { - t = U("img"), w(t, "alt", n = /*entry*/ - e[45].caption || ""), he(t.src, i = typeof /*entry*/ - e[45].image == "string" ? ( - /*entry*/ - e[45].image - ) : ( - /*entry*/ - e[45].image.url - )) || w(t, "src", i), w(t, "loading", "lazy"), w(t, "class", "svelte-1wl86it"); - }, - m(r, l) { - z(r, t, l); - }, - p(r, l) { - l[0] & /*_value*/ - 4096 && n !== (n = /*entry*/ - r[45].caption || "") && w(t, "alt", n), l[0] & /*_value*/ - 4096 && !he(t.src, i = typeof /*entry*/ - r[45].image == "string" ? ( - /*entry*/ - r[45].image - ) : ( - /*entry*/ - r[45].image.url - )) && w(t, "src", i); - }, - d(r) { - r && V(t); - } - }; -} -function cu(e) { - let t, n, i, r, l, o; - return { - c() { - t = U("video"), n = U("track"), w(n, "kind", "captions"), w(t, "class", "detailed-video svelte-1wl86it"), w(t, "data-testid", "detailed-video"), t.controls = !0, he(t.src, i = /*entry*/ - e[45].image.path) || w(t, "src", i), w(t, "title", r = /*entry*/ - e[45].image.alt_text), w(t, "preload", "auto"); - }, - m(a, s) { - z(a, t, s), F(t, n), l || (o = [ - le( - t, - "play", - /*play_handler_1*/ - e[25] - ), - le( - t, - "pause", - /*pause_handler_1*/ - e[26] - ), - le( - t, - "ended", - /*ended_handler_1*/ - e[27] - ) - ], l = !0); - }, - p(a, s) { - s[0] & /*_value*/ - 4096 && !he(t.src, i = /*entry*/ - a[45].image.path) && w(t, "src", i), s[0] & /*_value*/ - 4096 && r !== (r = /*entry*/ - a[45].image.alt_text) && w(t, "title", r); - }, - d(a) { - a && V(t), l = !1, mr(o); - } - }; -} -function mi(e) { - let t, n = ( - /*entry*/ - e[45].caption + "" - ), i; - return { - c() { - t = U("div"), i = br(n), w(t, "class", "caption-label svelte-1wl86it"); - }, - m(r, l) { - z(r, t, l), F(t, i); - }, - p(r, l) { - l[0] & /*_value*/ - 4096 && n !== (n = /*entry*/ - r[45].caption + "") && dr(i, n); - }, - d(r) { - r && V(t); - } - }; -} -function di(e) { - let t, n, i, r, l, o; - function a(h, _) { - return ( - /*entry*/ - h[45].image.mime_type === "video/mp4" ? cu : fu - ); - } - let s = a(e), u = s(e), f = ( - /*entry*/ - e[45].caption && mi(e) - ); - function c() { - return ( - /*click_handler_2*/ - e[39]( - /*i*/ - e[47] - ) - ); - } - return { - c() { - t = U("button"), u.c(), n = ae(), f && f.c(), i = ae(), w(t, "class", "thumbnail-item thumbnail-lg svelte-1wl86it"), w(t, "aria-label", r = "Thumbnail " + /*i*/ - (e[47] + 1) + " of " + /*_value*/ - e[12].length), ue( - t, - "selected", - /*selected_index*/ - e[0] === /*i*/ - e[47] - ); - }, - m(h, _) { - z(h, t, _), u.m(t, null), F(t, n), f && f.m(t, null), F(t, i), l || (o = le(t, "click", c), l = !0); - }, - p(h, _) { - e = h, s === (s = a(e)) && u ? u.p(e, _) : (u.d(1), u = s(e), u && (u.c(), u.m(t, n))), /*entry*/ - e[45].caption ? f ? 
f.p(e, _) : (f = mi(e), f.c(), f.m(t, i)) : f && (f.d(1), f = null), _[0] & /*_value*/ - 4096 && r !== (r = "Thumbnail " + /*i*/ - (e[47] + 1) + " of " + /*_value*/ - e[12].length) && w(t, "aria-label", r), _[0] & /*selected_index*/ - 1 && ue( - t, - "selected", - /*selected_index*/ - e[0] === /*i*/ - e[47] - ); - }, - d(h) { - h && V(t), u.d(), f && f.d(), l = !1, o(); - } - }; -} -function hu(e) { - let t, n; - return t = new Ui({}), { - c() { - Fe(t.$$.fragment); - }, - m(i, r) { - je(t, i, r), n = !0; - }, - i(i) { - n || (D(t.$$.fragment, i), n = !0); - }, - o(i) { - x(t.$$.fragment, i), n = !1; - }, - d(i) { - xe(t, i); - } - }; -} -function _u(e) { - let t, n, i, r, l, o, a; - hr( - /*onwindowresize*/ - e[31] - ); - let s = ( - /*show_label*/ - e[1] && ai(e) - ); - const u = [su, ou], f = []; - function c(h, _) { - return ( - /*value*/ - h[3] === null || /*_value*/ - h[12] === null || /*_value*/ - h[12].length === 0 ? 0 : 1 - ); - } - return n = c(e), i = f[n] = u[n](e), { - c() { - s && s.c(), t = ae(), i.c(), r = eu(); - }, - m(h, _) { - s && s.m(h, _), z(h, t, _), f[n].m(h, _), z(h, r, _), l = !0, o || (a = le( - gr, - "resize", - /*onwindowresize*/ - e[31] - ), o = !0); - }, - p(h, _) { - /*show_label*/ - h[1] ? s ? (s.p(h, _), _[0] & /*show_label*/ - 2 && D(s, 1)) : (s = ai(h), s.c(), D(s, 1), s.m(t.parentNode, t)) : s && ($e(), x(s, 1, 1, () => { - s = null; - }), Ke()); - let b = n; - n = c(h), n === b ? f[n].p(h, _) : ($e(), x(f[b], 1, 1, () => { - f[b] = null; - }), Ke(), i = f[n], i ? i.p(h, _) : (i = f[n] = u[n](h), i.c()), D(i, 1), i.m(r.parentNode, r)); - }, - i(h) { - l || (D(s), D(i), l = !0); - }, - o(h) { - x(s), x(i), l = !1; - }, - d(h) { - h && (V(t), V(r)), s && s.d(h), f[n].d(h), o = !1, a(); - } - }; -} -function We(e, t) { - return e ?? t(); -} -function Ee(e) { - let t, n = e[0], i = 1; - for (; i < e.length; ) { - const r = e[i], l = e[i + 1]; - if (i += 2, (r === "optionalAccess" || r === "optionalCall") && n == null) - return; - r === "access" || r === "optionalAccess" ? (t = n, n = l(n)) : (r === "call" || r === "optionalCall") && (n = l((...o) => n.call(t, ...o)), t = void 0); - } - return n; -} -function mu(e) { - return typeof e == "object" && e !== null && "data" in e; -} -function an(e) { - return mu(e) ? e.path : typeof e == "string" ? e : Array.isArray(e) ? an(e[0]) : ""; -} -function du(e, t, n) { - let i, r, { show_label: l = !0 } = t, { label: o } = t, { root: a = "" } = t, { proxy_url: s = null } = t, { value: u = null } = t, { columns: f = [2] } = t, { rows: c = void 0 } = t, { height: h = "auto" } = t, { preview: _ } = t, { allow_preview: b = !0 } = t, { object_fit: T = "cover" } = t, { show_share_button: y = !1 } = t, { show_download_button: C = !1 } = t, { i18n: E } = t, { selected_index: m = null } = t; - const g = ru(); - let p = !0, N = null, G = u; - m === null && _ && Ee([u, "optionalAccess", (d) => d.length]) && (m = 0); - let L = m; - function Z(d) { - const ze = d.target, Ct = d.clientX, Pt = ze.offsetWidth / 2; - Ct < Pt ? 
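- // (editor's note) preview navigation: a click on the left half of the image (clientX less than half the element width) selects the previous index i, the right half the next index r; K below maps Escape/ArrowLeft/ArrowRight to the same indices.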
n(0, m = i) : n(0, m = r); - } - function K(d) { - switch (d.code) { - case "Escape": - d.preventDefault(), n(0, m = null); - break; - case "ArrowLeft": - d.preventDefault(), n(0, m = i); - break; - case "ArrowRight": - d.preventDefault(), n(0, m = r); - break; - } - } - let W = [], q; - async function $(d) { - if (typeof d != "number" || (await lu(), W[d] === void 0)) - return; - Ee([ - W, - "access", - (qe) => qe[d], - "optionalAccess", - (qe) => qe.focus, - "call", - (qe) => qe() - ]); - const { left: ze, width: Ct } = q.getBoundingClientRect(), { left: En, width: Pt } = W[d].getBoundingClientRect(), Sn = En - ze + Pt / 2 - Ct / 2 + q.scrollLeft; - q && typeof q.scrollTo == "function" && q.scrollTo({ - left: Sn < 0 ? 0 : Sn, - behavior: "smooth" - }); - } - let ee = 0, v = 0; - function rt(d) { - ve.call(this, e, d); - } - function At(d) { - ve.call(this, e, d); - } - function lt(d) { - ve.call(this, e, d); - } - function ot(d) { - ve.call(this, e, d); - } - function st(d) { - ve.call(this, e, d); - } - function Ht(d) { - ve.call(this, e, d); - } - function Bt() { - n(16, v = gr.innerHeight); - } - const S = () => n(0, m = null), yr = (d) => Z(d); - function Er(d, ze) { - li[d ? "unshift" : "push"](() => { - W[ze] = d, n(13, W); - }); - } - const Sr = (d) => n(0, m = d); - function Tr(d) { - li[d ? "unshift" : "push"](() => { - q = d, n(14, q); - }); - } - function Ar(d) { - ve.call(this, e, d); - } - function Hr(d) { - ve.call(this, e, d); - } - const Br = (d) => n(0, m = d); - function Cr() { - ee = this.clientHeight, n(15, ee); - } - return e.$$set = (d) => { - "show_label" in d && n(1, l = d.show_label), "label" in d && n(2, o = d.label), "root" in d && n(19, a = d.root), "proxy_url" in d && n(20, s = d.proxy_url), "value" in d && n(3, u = d.value), "columns" in d && n(4, f = d.columns), "rows" in d && n(5, c = d.rows), "height" in d && n(6, h = d.height), "preview" in d && n(21, _ = d.preview), "allow_preview" in d && n(7, b = d.allow_preview), "object_fit" in d && n(8, T = d.object_fit), "show_share_button" in d && n(9, y = d.show_share_button), "show_download_button" in d && n(10, C = d.show_download_button), "i18n" in d && n(11, E = d.i18n), "selected_index" in d && n(0, m = d.selected_index); - }, e.$$.update = () => { - e.$$.dirty[0] & /*value, was_reset*/ - 4194312 && n(22, p = u == null || u.length == 0 ? !0 : p), e.$$.dirty[0] & /*value, root, proxy_url*/ - 1572872 && n(12, N = u === null ? null : u.map((d) => ({ - image: Gi(d.image, a, s), - caption: d.caption - }))), e.$$.dirty[0] & /*prevValue, value, was_reset, preview, selected_index*/ - 14680073 && (Qe(G, u) || (p ? (n(0, m = _ && Ee([u, "optionalAccess", (d) => d.length]) ? 0 : null), n(22, p = !1)) : n( - 0, - m = m !== null && u !== null && m < u.length ? 
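- // (editor's note) reactive update: when value changes and is not deep-equal to prevValue, selected_index survives only while it still indexes into the new list (this ternary), a "change" event is dispatched, and prevValue is cached for the next comparison.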
m : null - ), g("change"), n(23, G = u))), e.$$.dirty[0] & /*selected_index, _value*/ - 4097 && (i = (We(m, () => 0) + We(Ee([N, "optionalAccess", (d) => d.length]), () => 0) - 1) % We(Ee([N, "optionalAccess", (d) => d.length]), () => 0)), e.$$.dirty[0] & /*selected_index, _value*/ - 4097 && (r = (We(m, () => 0) + 1) % We(Ee([N, "optionalAccess", (d) => d.length]), () => 0)), e.$$.dirty[0] & /*selected_index, old_selected_index, _value*/ - 16781313 && m !== L && (n(24, L = m), m !== null && g("select", { - index: m, - value: Ee([N, "optionalAccess", (d) => d[m]]) - })), e.$$.dirty[0] & /*allow_preview, selected_index*/ - 129 && b && $(m); - }, [ - m, - l, - o, - u, - f, - c, - h, - b, - T, - y, - C, - E, - N, - W, - q, - ee, - v, - Z, - K, - a, - s, - _, - p, - G, - L, - rt, - At, - lt, - ot, - st, - Ht, - Bt, - S, - yr, - Er, - Sr, - Tr, - Ar, - Hr, - Br, - Cr - ]; -} -class bu extends Ka { - constructor(t) { - super(), nu( - this, - t, - du, - _u, - iu, - { - show_label: 1, - label: 2, - root: 19, - proxy_url: 20, - value: 3, - columns: 4, - rows: 5, - height: 6, - preview: 21, - allow_preview: 7, - object_fit: 8, - show_share_button: 9, - show_download_button: 10, - i18n: 11, - selected_index: 0 - }, - null, - [-1, -1] - ); - } -} -function Ie(e) { - let t = ["", "k", "M", "G", "T", "P", "E", "Z"], n = 0; - for (; e > 1e3 && n < t.length - 1; ) - e /= 1e3, n++; - let i = t[n]; - return (Number.isInteger(e) ? e : e.toFixed(1)) + i; -} -const { - SvelteComponent: gu, - append: ie, - attr: I, - component_subscribe: bi, - detach: pu, - element: vu, - init: wu, - insert: yu, - noop: gi, - safe_not_equal: Eu, - set_style: ht, - svg_element: re, - toggle_class: pi -} = window.__gradio__svelte__internal, { onMount: Su } = window.__gradio__svelte__internal; -function Tu(e) { - let t, n, i, r, l, o, a, s, u, f, c, h; - return { - c() { - t = vu("div"), n = re("svg"), i = re("g"), r = re("path"), l = re("path"), o = re("path"), a = re("path"), s = re("g"), u = re("path"), f = re("path"), c = re("path"), h = re("path"), I(r, "d", "M255.926 0.754768L509.702 139.936V221.027L255.926 81.8465V0.754768Z"), I(r, "fill", "#FF7C00"), I(r, "fill-opacity", "0.4"), I(r, "class", "svelte-43sxxs"), I(l, "d", "M509.69 139.936L254.981 279.641V361.255L509.69 221.55V139.936Z"), I(l, "fill", "#FF7C00"), I(l, "class", "svelte-43sxxs"), I(o, "d", "M0.250138 139.937L254.981 279.641V361.255L0.250138 221.55V139.937Z"), I(o, "fill", "#FF7C00"), I(o, "fill-opacity", "0.4"), I(o, "class", "svelte-43sxxs"), I(a, "d", "M255.923 0.232622L0.236328 139.936V221.55L255.923 81.8469V0.232622Z"), I(a, "fill", "#FF7C00"), I(a, "class", "svelte-43sxxs"), ht(i, "transform", "translate(" + /*$top*/ - e[1][0] + "px, " + /*$top*/ - e[1][1] + "px)"), I(u, "d", "M255.926 141.5L509.702 280.681V361.773L255.926 222.592V141.5Z"), I(u, "fill", "#FF7C00"), I(u, "fill-opacity", "0.4"), I(u, "class", "svelte-43sxxs"), I(f, "d", "M509.69 280.679L254.981 420.384V501.998L509.69 362.293V280.679Z"), I(f, "fill", "#FF7C00"), I(f, "class", "svelte-43sxxs"), I(c, "d", "M0.250138 280.681L254.981 420.386V502L0.250138 362.295V280.681Z"), I(c, "fill", "#FF7C00"), I(c, "fill-opacity", "0.4"), I(c, "class", "svelte-43sxxs"), I(h, "d", "M255.923 140.977L0.236328 280.68V362.294L255.923 222.591V140.977Z"), I(h, "fill", "#FF7C00"), I(h, "class", "svelte-43sxxs"), ht(s, "transform", "translate(" + /*$bottom*/ - e[2][0] + "px, " + /*$bottom*/ - e[2][1] + "px)"), I(n, "viewBox", "-1200 -1200 3000 3000"), I(n, "fill", "none"), I(n, "xmlns", "http://www.w3.org/2000/svg"), 
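- // (editor's note) Tu/Hu draw Gradio's animated loader: two groups of orange SVG paths whose translate() transforms follow the $top/$bottom motion stores, which Au cycles in an async loop until the component is destroyed.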
I(n, "class", "svelte-43sxxs"), I(t, "class", "svelte-43sxxs"), pi( - t, - "margin", - /*margin*/ - e[0] - ); - }, - m(_, b) { - yu(_, t, b), ie(t, n), ie(n, i), ie(i, r), ie(i, l), ie(i, o), ie(i, a), ie(n, s), ie(s, u), ie(s, f), ie(s, c), ie(s, h); - }, - p(_, [b]) { - b & /*$top*/ - 2 && ht(i, "transform", "translate(" + /*$top*/ - _[1][0] + "px, " + /*$top*/ - _[1][1] + "px)"), b & /*$bottom*/ - 4 && ht(s, "transform", "translate(" + /*$bottom*/ - _[2][0] + "px, " + /*$bottom*/ - _[2][1] + "px)"), b & /*margin*/ - 1 && pi( - t, - "margin", - /*margin*/ - _[0] - ); - }, - i: gi, - o: gi, - d(_) { - _ && pu(t); - } - }; -} -function Au(e, t, n) { - let i, r, { margin: l = !0 } = t; - const o = Mn([0, 0]); - bi(e, o, (h) => n(1, i = h)); - const a = Mn([0, 0]); - bi(e, a, (h) => n(2, r = h)); - let s; - async function u() { - await Promise.all([o.set([125, 140]), a.set([-125, -140])]), await Promise.all([o.set([-125, 140]), a.set([125, -140])]), await Promise.all([o.set([-125, 0]), a.set([125, -0])]), await Promise.all([o.set([125, 0]), a.set([-125, 0])]); - } - async function f() { - await u(), s || f(); - } - async function c() { - await Promise.all([o.set([125, 0]), a.set([-125, 0])]), f(); - } - return Su(() => (c(), () => s = !0)), e.$$set = (h) => { - "margin" in h && n(0, l = h.margin); - }, [l, i, r, o, a]; -} -class Hu extends gu { - constructor(t) { - super(), wu(this, t, Au, Tu, Eu, { margin: 0 }); - } -} -const { - SvelteComponent: Bu, - append: Ae, - attr: fe, - binding_callbacks: vi, - check_outros: pr, - create_component: Cu, - create_slot: Pu, - destroy_component: Iu, - destroy_each: vr, - detach: A, - element: me, - empty: Ve, - ensure_array_like: wt, - get_all_dirty_from_scope: ku, - get_slot_changes: Lu, - group_outros: wr, - init: Nu, - insert: H, - mount_component: Ou, - noop: un, - safe_not_equal: Mu, - set_data: Y, - set_style: ye, - space: ce, - text: M, - toggle_class: J, - transition_in: Re, - transition_out: De, - update_slot_base: Ru -} = window.__gradio__svelte__internal, { tick: Du } = window.__gradio__svelte__internal, { onDestroy: Uu } = window.__gradio__svelte__internal, Gu = (e) => ({}), wi = (e) => ({}); -function yi(e, t, n) { - const i = e.slice(); - return i[38] = t[n], i[40] = n, i; -} -function Ei(e, t, n) { - const i = e.slice(); - return i[38] = t[n], i; -} -function Fu(e) { - let t, n = ( - /*i18n*/ - e[1]("common.error") + "" - ), i, r, l; - const o = ( - /*#slots*/ - e[29].error - ), a = Pu( - o, - e, - /*$$scope*/ - e[28], - wi - ); - return { - c() { - t = me("span"), i = M(n), r = ce(), a && a.c(), fe(t, "class", "error svelte-14miwb5"); - }, - m(s, u) { - H(s, t, u), Ae(t, i), H(s, r, u), a && a.m(s, u), l = !0; - }, - p(s, u) { - (!l || u[0] & /*i18n*/ - 2) && n !== (n = /*i18n*/ - s[1]("common.error") + "") && Y(i, n), a && a.p && (!l || u[0] & /*$$scope*/ - 268435456) && Ru( - a, - o, - s, - /*$$scope*/ - s[28], - l ? 
Lu( - o, - /*$$scope*/ - s[28], - u, - Gu - ) : ku( - /*$$scope*/ - s[28] - ), - wi - ); - }, - i(s) { - l || (Re(a, s), l = !0); - }, - o(s) { - De(a, s), l = !1; - }, - d(s) { - s && (A(t), A(r)), a && a.d(s); - } - }; -} -function xu(e) { - let t, n, i, r, l, o, a, s, u, f = ( - /*variant*/ - e[8] === "default" && /*show_eta_bar*/ - e[18] && /*show_progress*/ - e[6] === "full" && Si(e) - ); - function c(m, g) { - if ( - /*progress*/ - m[7] - ) - return zu; - if ( - /*queue_position*/ - m[2] !== null && /*queue_size*/ - m[3] !== void 0 && /*queue_position*/ - m[2] >= 0 - ) - return Vu; - if ( - /*queue_position*/ - m[2] === 0 - ) - return ju; - } - let h = c(e), _ = h && h(e), b = ( - /*timer*/ - e[5] && Hi(e) - ); - const T = [Wu, Zu], y = []; - function C(m, g) { - return ( - /*last_progress_level*/ - m[15] != null ? 0 : ( - /*show_progress*/ - m[6] === "full" ? 1 : -1 - ) - ); - } - ~(l = C(e)) && (o = y[l] = T[l](e)); - let E = !/*timer*/ - e[5] && Ni(e); - return { - c() { - f && f.c(), t = ce(), n = me("div"), _ && _.c(), i = ce(), b && b.c(), r = ce(), o && o.c(), a = ce(), E && E.c(), s = Ve(), fe(n, "class", "progress-text svelte-14miwb5"), J( - n, - "meta-text-center", - /*variant*/ - e[8] === "center" - ), J( - n, - "meta-text", - /*variant*/ - e[8] === "default" - ); - }, - m(m, g) { - f && f.m(m, g), H(m, t, g), H(m, n, g), _ && _.m(n, null), Ae(n, i), b && b.m(n, null), H(m, r, g), ~l && y[l].m(m, g), H(m, a, g), E && E.m(m, g), H(m, s, g), u = !0; - }, - p(m, g) { - /*variant*/ - m[8] === "default" && /*show_eta_bar*/ - m[18] && /*show_progress*/ - m[6] === "full" ? f ? f.p(m, g) : (f = Si(m), f.c(), f.m(t.parentNode, t)) : f && (f.d(1), f = null), h === (h = c(m)) && _ ? _.p(m, g) : (_ && _.d(1), _ = h && h(m), _ && (_.c(), _.m(n, i))), /*timer*/ - m[5] ? b ? b.p(m, g) : (b = Hi(m), b.c(), b.m(n, null)) : b && (b.d(1), b = null), (!u || g[0] & /*variant*/ - 256) && J( - n, - "meta-text-center", - /*variant*/ - m[8] === "center" - ), (!u || g[0] & /*variant*/ - 256) && J( - n, - "meta-text", - /*variant*/ - m[8] === "default" - ); - let p = l; - l = C(m), l === p ? ~l && y[l].p(m, g) : (o && (wr(), De(y[p], 1, 1, () => { - y[p] = null; - }), pr()), ~l ? (o = y[l], o ? o.p(m, g) : (o = y[l] = T[l](m), o.c()), Re(o, 1), o.m(a.parentNode, a)) : o = null), /*timer*/ - m[5] ? E && (E.d(1), E = null) : E ? 
E.p(m, g) : (E = Ni(m), E.c(), E.m(s.parentNode, s)); - }, - i(m) { - u || (Re(o), u = !0); - }, - o(m) { - De(o), u = !1; - }, - d(m) { - m && (A(t), A(n), A(r), A(a), A(s)), f && f.d(m), _ && _.d(), b && b.d(), ~l && y[l].d(m), E && E.d(m); - } - }; -} -function Si(e) { - let t, n = `translateX(${/*eta_level*/ - (e[17] || 0) * 100 - 100}%)`; - return { - c() { - t = me("div"), fe(t, "class", "eta-bar svelte-14miwb5"), ye(t, "transform", n); - }, - m(i, r) { - H(i, t, r); - }, - p(i, r) { - r[0] & /*eta_level*/ - 131072 && n !== (n = `translateX(${/*eta_level*/ - (i[17] || 0) * 100 - 100}%)`) && ye(t, "transform", n); - }, - d(i) { - i && A(t); - } - }; -} -function ju(e) { - let t; - return { - c() { - t = M("processing |"); - }, - m(n, i) { - H(n, t, i); - }, - p: un, - d(n) { - n && A(t); - } - }; -} -function Vu(e) { - let t, n = ( - /*queue_position*/ - e[2] + 1 + "" - ), i, r, l, o; - return { - c() { - t = M("queue: "), i = M(n), r = M("/"), l = M( - /*queue_size*/ - e[3] - ), o = M(" |"); - }, - m(a, s) { - H(a, t, s), H(a, i, s), H(a, r, s), H(a, l, s), H(a, o, s); - }, - p(a, s) { - s[0] & /*queue_position*/ - 4 && n !== (n = /*queue_position*/ - a[2] + 1 + "") && Y(i, n), s[0] & /*queue_size*/ - 8 && Y( - l, - /*queue_size*/ - a[3] - ); - }, - d(a) { - a && (A(t), A(i), A(r), A(l), A(o)); - } - }; -} -function zu(e) { - let t, n = wt( - /*progress*/ - e[7] - ), i = []; - for (let r = 0; r < n.length; r += 1) - i[r] = Ai(Ei(e, n, r)); - return { - c() { - for (let r = 0; r < i.length; r += 1) - i[r].c(); - t = Ve(); - }, - m(r, l) { - for (let o = 0; o < i.length; o += 1) - i[o] && i[o].m(r, l); - H(r, t, l); - }, - p(r, l) { - if (l[0] & /*progress*/ - 128) { - n = wt( - /*progress*/ - r[7] - ); - let o; - for (o = 0; o < n.length; o += 1) { - const a = Ei(r, n, o); - i[o] ? i[o].p(a, l) : (i[o] = Ai(a), i[o].c(), i[o].m(t.parentNode, t)); - } - for (; o < i.length; o += 1) - i[o].d(1); - i.length = n.length; - } - }, - d(r) { - r && A(t), vr(i, r); - } - }; -} -function Ti(e) { - let t, n = ( - /*p*/ - e[38].unit + "" - ), i, r, l = " ", o; - function a(f, c) { - return ( - /*p*/ - f[38].length != null ? Xu : qu - ); - } - let s = a(e), u = s(e); - return { - c() { - u.c(), t = ce(), i = M(n), r = M(" | "), o = M(l); - }, - m(f, c) { - u.m(f, c), H(f, t, c), H(f, i, c), H(f, r, c), H(f, o, c); - }, - p(f, c) { - s === (s = a(f)) && u ? 
u.p(f, c) : (u.d(1), u = s(f), u && (u.c(), u.m(t.parentNode, t))), c[0] & /*progress*/ - 128 && n !== (n = /*p*/ - f[38].unit + "") && Y(i, n); - }, - d(f) { - f && (A(t), A(i), A(r), A(o)), u.d(f); - } - }; -} -function qu(e) { - let t = Ie( - /*p*/ - e[38].index || 0 - ) + "", n; - return { - c() { - n = M(t); - }, - m(i, r) { - H(i, n, r); - }, - p(i, r) { - r[0] & /*progress*/ - 128 && t !== (t = Ie( - /*p*/ - i[38].index || 0 - ) + "") && Y(n, t); - }, - d(i) { - i && A(n); - } - }; -} -function Xu(e) { - let t = Ie( - /*p*/ - e[38].index || 0 - ) + "", n, i, r = Ie( - /*p*/ - e[38].length - ) + "", l; - return { - c() { - n = M(t), i = M("/"), l = M(r); - }, - m(o, a) { - H(o, n, a), H(o, i, a), H(o, l, a); - }, - p(o, a) { - a[0] & /*progress*/ - 128 && t !== (t = Ie( - /*p*/ - o[38].index || 0 - ) + "") && Y(n, t), a[0] & /*progress*/ - 128 && r !== (r = Ie( - /*p*/ - o[38].length - ) + "") && Y(l, r); - }, - d(o) { - o && (A(n), A(i), A(l)); - } - }; -} -function Ai(e) { - let t, n = ( - /*p*/ - e[38].index != null && Ti(e) - ); - return { - c() { - n && n.c(), t = Ve(); - }, - m(i, r) { - n && n.m(i, r), H(i, t, r); - }, - p(i, r) { - /*p*/ - i[38].index != null ? n ? n.p(i, r) : (n = Ti(i), n.c(), n.m(t.parentNode, t)) : n && (n.d(1), n = null); - }, - d(i) { - i && A(t), n && n.d(i); - } - }; -} -function Hi(e) { - let t, n = ( - /*eta*/ - e[0] ? `/${/*formatted_eta*/ - e[19]}` : "" - ), i, r; - return { - c() { - t = M( - /*formatted_timer*/ - e[20] - ), i = M(n), r = M("s"); - }, - m(l, o) { - H(l, t, o), H(l, i, o), H(l, r, o); - }, - p(l, o) { - o[0] & /*formatted_timer*/ - 1048576 && Y( - t, - /*formatted_timer*/ - l[20] - ), o[0] & /*eta, formatted_eta*/ - 524289 && n !== (n = /*eta*/ - l[0] ? `/${/*formatted_eta*/ - l[19]}` : "") && Y(i, n); - }, - d(l) { - l && (A(t), A(i), A(r)); - } - }; -} -function Zu(e) { - let t, n; - return t = new Hu({ - props: { margin: ( - /*variant*/ - e[8] === "default" - ) } - }), { - c() { - Cu(t.$$.fragment); - }, - m(i, r) { - Ou(t, i, r), n = !0; - }, - p(i, r) { - const l = {}; - r[0] & /*variant*/ - 256 && (l.margin = /*variant*/ - i[8] === "default"), t.$set(l); - }, - i(i) { - n || (Re(t.$$.fragment, i), n = !0); - }, - o(i) { - De(t.$$.fragment, i), n = !1; - }, - d(i) { - Iu(t, i); - } - }; -} -function Wu(e) { - let t, n, i, r, l, o = `${/*last_progress_level*/ - e[15] * 100}%`, a = ( - /*progress*/ - e[7] != null && Bi(e) - ); - return { - c() { - t = me("div"), n = me("div"), a && a.c(), i = ce(), r = me("div"), l = me("div"), fe(n, "class", "progress-level-inner svelte-14miwb5"), fe(l, "class", "progress-bar svelte-14miwb5"), ye(l, "width", o), fe(r, "class", "progress-bar-wrap svelte-14miwb5"), fe(t, "class", "progress-level svelte-14miwb5"); - }, - m(s, u) { - H(s, t, u), Ae(t, n), a && a.m(n, null), Ae(t, i), Ae(t, r), Ae(r, l), e[30](l); - }, - p(s, u) { - /*progress*/ - s[7] != null ? a ? 
a.p(s, u) : (a = Bi(s), a.c(), a.m(n, null)) : a && (a.d(1), a = null), u[0] & /*last_progress_level*/ - 32768 && o !== (o = `${/*last_progress_level*/ - s[15] * 100}%`) && ye(l, "width", o); - }, - i: un, - o: un, - d(s) { - s && A(t), a && a.d(), e[30](null); - } - }; -} -function Bi(e) { - let t, n = wt( - /*progress*/ - e[7] - ), i = []; - for (let r = 0; r < n.length; r += 1) - i[r] = Li(yi(e, n, r)); - return { - c() { - for (let r = 0; r < i.length; r += 1) - i[r].c(); - t = Ve(); - }, - m(r, l) { - for (let o = 0; o < i.length; o += 1) - i[o] && i[o].m(r, l); - H(r, t, l); - }, - p(r, l) { - if (l[0] & /*progress_level, progress*/ - 16512) { - n = wt( - /*progress*/ - r[7] - ); - let o; - for (o = 0; o < n.length; o += 1) { - const a = yi(r, n, o); - i[o] ? i[o].p(a, l) : (i[o] = Li(a), i[o].c(), i[o].m(t.parentNode, t)); - } - for (; o < i.length; o += 1) - i[o].d(1); - i.length = n.length; - } - }, - d(r) { - r && A(t), vr(i, r); - } - }; -} -function Ci(e) { - let t, n, i, r, l = ( - /*i*/ - e[40] !== 0 && Qu() - ), o = ( - /*p*/ - e[38].desc != null && Pi(e) - ), a = ( - /*p*/ - e[38].desc != null && /*progress_level*/ - e[14] && /*progress_level*/ - e[14][ - /*i*/ - e[40] - ] != null && Ii() - ), s = ( - /*progress_level*/ - e[14] != null && ki(e) - ); - return { - c() { - l && l.c(), t = ce(), o && o.c(), n = ce(), a && a.c(), i = ce(), s && s.c(), r = Ve(); - }, - m(u, f) { - l && l.m(u, f), H(u, t, f), o && o.m(u, f), H(u, n, f), a && a.m(u, f), H(u, i, f), s && s.m(u, f), H(u, r, f); - }, - p(u, f) { - /*p*/ - u[38].desc != null ? o ? o.p(u, f) : (o = Pi(u), o.c(), o.m(n.parentNode, n)) : o && (o.d(1), o = null), /*p*/ - u[38].desc != null && /*progress_level*/ - u[14] && /*progress_level*/ - u[14][ - /*i*/ - u[40] - ] != null ? a || (a = Ii(), a.c(), a.m(i.parentNode, i)) : a && (a.d(1), a = null), /*progress_level*/ - u[14] != null ? s ? s.p(u, f) : (s = ki(u), s.c(), s.m(r.parentNode, r)) : s && (s.d(1), s = null); - }, - d(u) { - u && (A(t), A(n), A(i), A(r)), l && l.d(u), o && o.d(u), a && a.d(u), s && s.d(u); - } - }; -} -function Qu(e) { - let t; - return { - c() { - t = M(" /"); - }, - m(n, i) { - H(n, t, i); - }, - d(n) { - n && A(t); - } - }; -} -function Pi(e) { - let t = ( - /*p*/ - e[38].desc + "" - ), n; - return { - c() { - n = M(t); - }, - m(i, r) { - H(i, n, r); - }, - p(i, r) { - r[0] & /*progress*/ - 128 && t !== (t = /*p*/ - i[38].desc + "") && Y(n, t); - }, - d(i) { - i && A(n); - } - }; -} -function Ii(e) { - let t; - return { - c() { - t = M("-"); - }, - m(n, i) { - H(n, t, i); - }, - d(n) { - n && A(t); - } - }; -} -function ki(e) { - let t = (100 * /*progress_level*/ - (e[14][ - /*i*/ - e[40] - ] || 0)).toFixed(1) + "", n, i; - return { - c() { - n = M(t), i = M("%"); - }, - m(r, l) { - H(r, n, l), H(r, i, l); - }, - p(r, l) { - l[0] & /*progress_level*/ - 16384 && t !== (t = (100 * /*progress_level*/ - (r[14][ - /*i*/ - r[40] - ] || 0)).toFixed(1) + "") && Y(n, t); - }, - d(r) { - r && (A(n), A(i)); - } - }; -} -function Li(e) { - let t, n = ( - /*p*/ - (e[38].desc != null || /*progress_level*/ - e[14] && /*progress_level*/ - e[14][ - /*i*/ - e[40] - ] != null) && Ci(e) - ); - return { - c() { - n && n.c(), t = Ve(); - }, - m(i, r) { - n && n.m(i, r), H(i, t, r); - }, - p(i, r) { - /*p*/ - i[38].desc != null || /*progress_level*/ - i[14] && /*progress_level*/ - i[14][ - /*i*/ - i[40] - ] != null ? n ? 
n.p(i, r) : (n = Ci(i), n.c(), n.m(t.parentNode, t)) : n && (n.d(1), n = null); - }, - d(i) { - i && A(t), n && n.d(i); - } - }; -} -function Ni(e) { - let t, n; - return { - c() { - t = me("p"), n = M( - /*loading_text*/ - e[9] - ), fe(t, "class", "loading svelte-14miwb5"); - }, - m(i, r) { - H(i, t, r), Ae(t, n); - }, - p(i, r) { - r[0] & /*loading_text*/ - 512 && Y( - n, - /*loading_text*/ - i[9] - ); - }, - d(i) { - i && A(t); - } - }; -} -function Ju(e) { - let t, n, i, r, l; - const o = [xu, Fu], a = []; - function s(u, f) { - return ( - /*status*/ - u[4] === "pending" ? 0 : ( - /*status*/ - u[4] === "error" ? 1 : -1 - ) - ); - } - return ~(n = s(e)) && (i = a[n] = o[n](e)), { - c() { - t = me("div"), i && i.c(), fe(t, "class", r = "wrap " + /*variant*/ - e[8] + " " + /*show_progress*/ - e[6] + " svelte-14miwb5"), J(t, "hide", !/*status*/ - e[4] || /*status*/ - e[4] === "complete" || /*show_progress*/ - e[6] === "hidden"), J( - t, - "translucent", - /*variant*/ - e[8] === "center" && /*status*/ - (e[4] === "pending" || /*status*/ - e[4] === "error") || /*translucent*/ - e[11] || /*show_progress*/ - e[6] === "minimal" - ), J( - t, - "generating", - /*status*/ - e[4] === "generating" - ), J( - t, - "border", - /*border*/ - e[12] - ), ye( - t, - "position", - /*absolute*/ - e[10] ? "absolute" : "static" - ), ye( - t, - "padding", - /*absolute*/ - e[10] ? "0" : "var(--size-8) 0" - ); - }, - m(u, f) { - H(u, t, f), ~n && a[n].m(t, null), e[31](t), l = !0; - }, - p(u, f) { - let c = n; - n = s(u), n === c ? ~n && a[n].p(u, f) : (i && (wr(), De(a[c], 1, 1, () => { - a[c] = null; - }), pr()), ~n ? (i = a[n], i ? i.p(u, f) : (i = a[n] = o[n](u), i.c()), Re(i, 1), i.m(t, null)) : i = null), (!l || f[0] & /*variant, show_progress*/ - 320 && r !== (r = "wrap " + /*variant*/ - u[8] + " " + /*show_progress*/ - u[6] + " svelte-14miwb5")) && fe(t, "class", r), (!l || f[0] & /*variant, show_progress, status, show_progress*/ - 336) && J(t, "hide", !/*status*/ - u[4] || /*status*/ - u[4] === "complete" || /*show_progress*/ - u[6] === "hidden"), (!l || f[0] & /*variant, show_progress, variant, status, translucent, show_progress*/ - 2384) && J( - t, - "translucent", - /*variant*/ - u[8] === "center" && /*status*/ - (u[4] === "pending" || /*status*/ - u[4] === "error") || /*translucent*/ - u[11] || /*show_progress*/ - u[6] === "minimal" - ), (!l || f[0] & /*variant, show_progress, status*/ - 336) && J( - t, - "generating", - /*status*/ - u[4] === "generating" - ), (!l || f[0] & /*variant, show_progress, border*/ - 4416) && J( - t, - "border", - /*border*/ - u[12] - ), f[0] & /*absolute*/ - 1024 && ye( - t, - "position", - /*absolute*/ - u[10] ? "absolute" : "static" - ), f[0] & /*absolute*/ - 1024 && ye( - t, - "padding", - /*absolute*/ - u[10] ? 
"0" : "var(--size-8) 0" - ); - }, - i(u) { - l || (Re(i), l = !0); - }, - o(u) { - De(i), l = !1; - }, - d(u) { - u && A(t), ~n && a[n].d(), e[31](null); - } - }; -} -let _t = [], Wt = !1; -async function Yu(e, t = !0) { - if (!(window.__gradio_mode__ === "website" || window.__gradio_mode__ !== "app" && t !== !0)) { - if (_t.push(e), !Wt) - Wt = !0; - else - return; - await Du(), requestAnimationFrame(() => { - let n = [0, 0]; - for (let i = 0; i < _t.length; i++) { - const l = _t[i].getBoundingClientRect(); - (i === 0 || l.top + window.scrollY <= n[0]) && (n[0] = l.top + window.scrollY, n[1] = i); - } - window.scrollTo({ top: n[0] - 20, behavior: "smooth" }), Wt = !1, _t = []; - }); - } -} -function Ku(e, t, n) { - let i, { $$slots: r = {}, $$scope: l } = t, { i18n: o } = t, { eta: a = null } = t, { queue: s = !1 } = t, { queue_position: u } = t, { queue_size: f } = t, { status: c } = t, { scroll_to_output: h = !1 } = t, { timer: _ = !0 } = t, { show_progress: b = "full" } = t, { message: T = null } = t, { progress: y = null } = t, { variant: C = "default" } = t, { loading_text: E = "Loading..." } = t, { absolute: m = !0 } = t, { translucent: g = !1 } = t, { border: p = !1 } = t, { autoscroll: N } = t, G, L = !1, Z = 0, K = 0, W = null, q = 0, $ = null, ee, v = null, rt = !0; - const At = () => { - n(25, Z = performance.now()), n(26, K = 0), L = !0, lt(); - }; - function lt() { - requestAnimationFrame(() => { - n(26, K = (performance.now() - Z) / 1e3), L && lt(); - }); - } - function ot() { - n(26, K = 0), L && (L = !1); - } - Uu(() => { - L && ot(); - }); - let st = null; - function Ht(S) { - vi[S ? "unshift" : "push"](() => { - v = S, n(16, v), n(7, y), n(14, $), n(15, ee); - }); - } - function Bt(S) { - vi[S ? "unshift" : "push"](() => { - G = S, n(13, G); - }); - } - return e.$$set = (S) => { - "i18n" in S && n(1, o = S.i18n), "eta" in S && n(0, a = S.eta), "queue" in S && n(21, s = S.queue), "queue_position" in S && n(2, u = S.queue_position), "queue_size" in S && n(3, f = S.queue_size), "status" in S && n(4, c = S.status), "scroll_to_output" in S && n(22, h = S.scroll_to_output), "timer" in S && n(5, _ = S.timer), "show_progress" in S && n(6, b = S.show_progress), "message" in S && n(23, T = S.message), "progress" in S && n(7, y = S.progress), "variant" in S && n(8, C = S.variant), "loading_text" in S && n(9, E = S.loading_text), "absolute" in S && n(10, m = S.absolute), "translucent" in S && n(11, g = S.translucent), "border" in S && n(12, p = S.border), "autoscroll" in S && n(24, N = S.autoscroll), "$$scope" in S && n(28, l = S.$$scope); - }, e.$$.update = () => { - e.$$.dirty[0] & /*eta, old_eta, queue, timer_start*/ - 169869313 && (a === null ? n(0, a = W) : s && n(0, a = (performance.now() - Z) / 1e3 + a), a != null && (n(19, st = a.toFixed(1)), n(27, W = a))), e.$$.dirty[0] & /*eta, timer_diff*/ - 67108865 && n(17, q = a === null || a <= 0 || !K ? null : Math.min(K / a, 1)), e.$$.dirty[0] & /*progress*/ - 128 && y != null && n(18, rt = !1), e.$$.dirty[0] & /*progress, progress_level, progress_bar, last_progress_level*/ - 114816 && (y != null ? n(14, $ = y.map((S) => { - if (S.index != null && S.length != null) - return S.index / S.length; - if (S.progress != null) - return S.progress; - })) : n(14, $ = null), $ ? (n(15, ee = $[$.length - 1]), v && (ee === 0 ? n(16, v.style.transition = "0", v) : n(16, v.style.transition = "150ms", v))) : n(15, ee = void 0)), e.$$.dirty[0] & /*status*/ - 16 && (c === "pending" ? 
At() : ot()), e.$$.dirty[0] & /*el, scroll_to_output, status, autoscroll*/ - 20979728 && G && h && (c === "pending" || c === "complete") && Yu(G, N), e.$$.dirty[0] & /*status, message*/ - 8388624, e.$$.dirty[0] & /*timer_diff*/ - 67108864 && n(20, i = K.toFixed(1)); - }, [ - a, - o, - u, - f, - c, - _, - b, - y, - C, - E, - m, - g, - p, - G, - $, - ee, - v, - q, - rt, - st, - i, - s, - h, - T, - N, - Z, - K, - W, - l, - r, - Ht, - Bt - ]; -} -class $u extends Bu { - constructor(t) { - super(), Nu( - this, - t, - Ku, - Ju, - Mu, - { - i18n: 1, - eta: 0, - queue: 21, - queue_position: 2, - queue_size: 3, - status: 4, - scroll_to_output: 22, - timer: 5, - show_progress: 6, - message: 23, - progress: 7, - variant: 8, - loading_text: 9, - absolute: 10, - translucent: 11, - border: 12, - autoscroll: 24 - }, - null, - [-1, -1] - ); - } -} -const { - SvelteComponent: ef, - add_flush_callback: tf, - assign: nf, - bind: rf, - binding_callbacks: lf, - create_component: fn, - destroy_component: cn, - detach: of, - get_spread_object: sf, - get_spread_update: af, - init: uf, - insert: ff, - mount_component: hn, - safe_not_equal: cf, - space: hf, - transition_in: _n, - transition_out: mn -} = window.__gradio__svelte__internal, { createEventDispatcher: _f } = window.__gradio__svelte__internal; -function mf(e) { - let t, n, i, r, l; - const o = [ - { - autoscroll: ( - /*gradio*/ - e[21].autoscroll - ) - }, - { i18n: ( - /*gradio*/ - e[21].i18n - ) }, - /*loading_status*/ - e[1] - ]; - let a = {}; - for (let f = 0; f < o.length; f += 1) - a = nf(a, o[f]); - t = new $u({ props: a }); - function s(f) { - e[22](f); - } - let u = { - label: ( - /*label*/ - e[3] - ), - value: ( - /*value*/ - e[9] - ), - show_label: ( - /*show_label*/ - e[2] - ), - root: ( - /*root*/ - e[4] - ), - proxy_url: ( - /*proxy_url*/ - e[5] - ), - columns: ( - /*columns*/ - e[13] - ), - rows: ( - /*rows*/ - e[14] - ), - height: ( - /*height*/ - e[15] - ), - preview: ( - /*preview*/ - e[16] - ), - object_fit: ( - /*object_fit*/ - e[18] - ), - allow_preview: ( - /*allow_preview*/ - e[17] - ), - show_share_button: ( - /*show_share_button*/ - e[19] - ), - show_download_button: ( - /*show_download_button*/ - e[20] - ), - i18n: ( - /*gradio*/ - e[21].i18n - ) - }; - return ( - /*selected_index*/ - e[0] !== void 0 && (u.selected_index = /*selected_index*/ - e[0]), i = new bu({ props: u }), lf.push(() => rf(i, "selected_index", s)), i.$on( - "change", - /*change_handler*/ - e[23] - ), i.$on( - "select", - /*select_handler*/ - e[24] - ), i.$on( - "share", - /*share_handler*/ - e[25] - ), i.$on( - "error", - /*error_handler*/ - e[26] - ), { - c() { - fn(t.$$.fragment), n = hf(), fn(i.$$.fragment); - }, - m(f, c) { - hn(t, f, c), ff(f, n, c), hn(i, f, c), l = !0; - }, - p(f, c) { - const h = c & /*gradio, loading_status*/ - 2097154 ? 
af(o, [ - c & /*gradio*/ - 2097152 && { - autoscroll: ( - /*gradio*/ - f[21].autoscroll - ) - }, - c & /*gradio*/ - 2097152 && { i18n: ( - /*gradio*/ - f[21].i18n - ) }, - c & /*loading_status*/ - 2 && sf( - /*loading_status*/ - f[1] - ) - ]) : {}; - t.$set(h); - const _ = {}; - c & /*label*/ - 8 && (_.label = /*label*/ - f[3]), c & /*value*/ - 512 && (_.value = /*value*/ - f[9]), c & /*show_label*/ - 4 && (_.show_label = /*show_label*/ - f[2]), c & /*root*/ - 16 && (_.root = /*root*/ - f[4]), c & /*proxy_url*/ - 32 && (_.proxy_url = /*proxy_url*/ - f[5]), c & /*columns*/ - 8192 && (_.columns = /*columns*/ - f[13]), c & /*rows*/ - 16384 && (_.rows = /*rows*/ - f[14]), c & /*height*/ - 32768 && (_.height = /*height*/ - f[15]), c & /*preview*/ - 65536 && (_.preview = /*preview*/ - f[16]), c & /*object_fit*/ - 262144 && (_.object_fit = /*object_fit*/ - f[18]), c & /*allow_preview*/ - 131072 && (_.allow_preview = /*allow_preview*/ - f[17]), c & /*show_share_button*/ - 524288 && (_.show_share_button = /*show_share_button*/ - f[19]), c & /*show_download_button*/ - 1048576 && (_.show_download_button = /*show_download_button*/ - f[20]), c & /*gradio*/ - 2097152 && (_.i18n = /*gradio*/ - f[21].i18n), !r && c & /*selected_index*/ - 1 && (r = !0, _.selected_index = /*selected_index*/ - f[0], tf(() => r = !1)), i.$set(_); - }, - i(f) { - l || (_n(t.$$.fragment, f), _n(i.$$.fragment, f), l = !0); - }, - o(f) { - mn(t.$$.fragment, f), mn(i.$$.fragment, f), l = !1; - }, - d(f) { - f && of(n), cn(t, f), cn(i, f); - } - } - ); -} -function df(e) { - let t, n; - return t = new zr({ - props: { - visible: ( - /*visible*/ - e[8] - ), - variant: "solid", - padding: !1, - elem_id: ( - /*elem_id*/ - e[6] - ), - elem_classes: ( - /*elem_classes*/ - e[7] - ), - container: ( - /*container*/ - e[10] - ), - scale: ( - /*scale*/ - e[11] - ), - min_width: ( - /*min_width*/ - e[12] - ), - allow_overflow: !1, - height: typeof /*height*/ - e[15] == "number" ? ( - /*height*/ - e[15] - ) : void 0, - $$slots: { default: [mf] }, - $$scope: { ctx: e } - } - }), { - c() { - fn(t.$$.fragment); - }, - m(i, r) { - hn(t, i, r), n = !0; - }, - p(i, [r]) { - const l = {}; - r & /*visible*/ - 256 && (l.visible = /*visible*/ - i[8]), r & /*elem_id*/ - 64 && (l.elem_id = /*elem_id*/ - i[6]), r & /*elem_classes*/ - 128 && (l.elem_classes = /*elem_classes*/ - i[7]), r & /*container*/ - 1024 && (l.container = /*container*/ - i[10]), r & /*scale*/ - 2048 && (l.scale = /*scale*/ - i[11]), r & /*min_width*/ - 4096 && (l.min_width = /*min_width*/ - i[12]), r & /*height*/ - 32768 && (l.height = typeof /*height*/ - i[15] == "number" ? 
( - /*height*/ - i[15] - ) : void 0), r & /*$$scope, label, value, show_label, root, proxy_url, columns, rows, height, preview, object_fit, allow_preview, show_share_button, show_download_button, gradio, selected_index, loading_status*/ - 272622143 && (l.$$scope = { dirty: r, ctx: i }), t.$set(l); - }, - i(i) { - n || (_n(t.$$.fragment, i), n = !0); - }, - o(i) { - mn(t.$$.fragment, i), n = !1; - }, - d(i) { - cn(t, i); - } - }; -} -function bf(e, t, n) { - let { loading_status: i } = t, { show_label: r } = t, { label: l } = t, { root: o } = t, { proxy_url: a } = t, { elem_id: s = "" } = t, { elem_classes: u = [] } = t, { visible: f = !0 } = t, { value: c = null } = t, { container: h = !0 } = t, { scale: _ = null } = t, { min_width: b = void 0 } = t, { columns: T = [2] } = t, { rows: y = void 0 } = t, { height: C = "auto" } = t, { preview: E } = t, { allow_preview: m = !0 } = t, { selected_index: g = null } = t, { object_fit: p = "cover" } = t, { show_share_button: N = !1 } = t, { show_download_button: G = !1 } = t, { gradio: L } = t; - const Z = _f(); - function K(v) { - g = v, n(0, g); - } - const W = () => L.dispatch("change", c), q = (v) => L.dispatch("select", v.detail), $ = (v) => L.dispatch("share", v.detail), ee = (v) => L.dispatch("error", v.detail); - return e.$$set = (v) => { - "loading_status" in v && n(1, i = v.loading_status), "show_label" in v && n(2, r = v.show_label), "label" in v && n(3, l = v.label), "root" in v && n(4, o = v.root), "proxy_url" in v && n(5, a = v.proxy_url), "elem_id" in v && n(6, s = v.elem_id), "elem_classes" in v && n(7, u = v.elem_classes), "visible" in v && n(8, f = v.visible), "value" in v && n(9, c = v.value), "container" in v && n(10, h = v.container), "scale" in v && n(11, _ = v.scale), "min_width" in v && n(12, b = v.min_width), "columns" in v && n(13, T = v.columns), "rows" in v && n(14, y = v.rows), "height" in v && n(15, C = v.height), "preview" in v && n(16, E = v.preview), "allow_preview" in v && n(17, m = v.allow_preview), "selected_index" in v && n(0, g = v.selected_index), "object_fit" in v && n(18, p = v.object_fit), "show_share_button" in v && n(19, N = v.show_share_button), "show_download_button" in v && n(20, G = v.show_download_button), "gradio" in v && n(21, L = v.gradio); - }, e.$$.update = () => { - e.$$.dirty & /*selected_index*/ - 1 && Z("prop_change", { selected_index: g }); - }, [ - g, - i, - r, - l, - o, - a, - s, - u, - f, - c, - h, - _, - b, - T, - y, - C, - E, - m, - p, - N, - G, - L, - K, - W, - q, - $, - ee - ]; -} -class pf extends ef { - constructor(t) { - super(), uf(this, t, bf, df, cf, { - loading_status: 1, - show_label: 2, - label: 3, - root: 4, - proxy_url: 5, - elem_id: 6, - elem_classes: 7, - visible: 8, - value: 9, - container: 10, - scale: 11, - min_width: 12, - columns: 13, - rows: 14, - height: 15, - preview: 16, - allow_preview: 17, - selected_index: 0, - object_fit: 18, - show_share_button: 19, - show_download_button: 20, - gradio: 21 - }); - } -} -export { - bu as BaseGallery, - pf as default -}; diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/utils/execeval.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/utils/execeval.py deleted file mode 100644 index 514f874ce30b622089302924bafb1cfae0a4efd7..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/utils/execeval.py +++ /dev/null @@ -1,61 +0,0 @@ -import ast -import sys - - -if 
sys.version_info > (3, 8): - Module = ast.Module -else: - # Mock the Python >= 3.8 API - def Module(nodelist, type_ignores): - return ast.Module(nodelist) - - -class _CatchDisplay: - """Class to temporarily catch sys.displayhook""" - - def __init__(self): - self.output = None - - def __enter__(self): - self.old_hook = sys.displayhook - sys.displayhook = self - return self - - def __exit__(self, type, value, traceback): - sys.displayhook = self.old_hook - # Returning False will cause exceptions to propagate - return False - - def __call__(self, output): - self.output = output - - -def eval_block(code, namespace=None, filename="<string>"): - """ - Execute a multi-line block of code in the given namespace - - If the final statement in the code is an expression, return - the result of the expression. - """ - tree = ast.parse(code, filename="<string>", mode="exec") - if namespace is None: - namespace = {} - catch_display = _CatchDisplay() - - if isinstance(tree.body[-1], ast.Expr): - to_exec, to_eval = tree.body[:-1], tree.body[-1:] - else: - to_exec, to_eval = tree.body, [] - - for node in to_exec: - compiled = compile(Module([node], []), filename=filename, mode="exec") - exec(compiled, namespace) - - with catch_display: - for node in to_eval: - compiled = compile( - ast.Interactive([node]), filename=filename, mode="single" - ) - exec(compiled, namespace) - - return catch_display.output
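A minimal usage sketch of eval_block as defined above (the code strings are illustrative): it executes every statement except a trailing expression, then evaluates that trailing expression through sys.displayhook so its value can be captured.

# the trailing expression is evaluated and its value captured via sys.displayhook
print(eval_block("x = 2\ny = 3\nx * y"))  # 6
# a block ending in a statement displays nothing, so the result is None
print(eval_block("x = 2\ny = 3"))  # None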
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/_core/_resources.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/_core/_resources.py deleted file mode 100644 index b9a5344aef2962670f9b305a02cd0b11f2087d2f..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/_core/_resources.py +++ /dev/null @@ -1,18 +0,0 @@ -from __future__ import annotations - -from ..abc import AsyncResource -from ._tasks import CancelScope - - -async def aclose_forcefully(resource: AsyncResource) -> None: - """ - Close an asynchronous resource in a cancelled scope. - - Doing this closes the resource without waiting on anything. - - :param resource: the resource to close - - """ - with CancelScope() as scope: - scope.cancel() - await resource.aclose() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/_core/_testing.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/_core/_testing.py deleted file mode 100644 index c8191b3866f7104d2d02d32da9826c68ca17ac95..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/_core/_testing.py +++ /dev/null @@ -1,82 +0,0 @@ -from __future__ import annotations - -from typing import Any, Awaitable, Generator - -from ._compat import DeprecatedAwaitableList, _warn_deprecation -from ._eventloop import get_asynclib - - -class TaskInfo: - """ - Represents an asynchronous task. - - :ivar int id: the unique identifier of the task - :ivar parent_id: the identifier of the parent task, if any - :vartype parent_id: Optional[int] - :ivar str name: the description of the task (if any) - :ivar ~collections.abc.Coroutine coro: the coroutine object of the task - """ - - __slots__ = "_name", "id", "parent_id", "name", "coro" - - def __init__( - self, - id: int, - parent_id: int | None, - name: str | None, - coro: Generator[Any, Any, Any] | Awaitable[Any], - ): - func = get_current_task - self._name = f"{func.__module__}.{func.__qualname__}" - self.id: int = id - self.parent_id: int | None = parent_id - self.name: str | None = name - self.coro: Generator[Any, Any, Any] | Awaitable[Any] = coro - - def __eq__(self, other: object) -> bool: - if isinstance(other, TaskInfo): - return self.id == other.id - - return NotImplemented - - def __hash__(self) -> int: - return hash(self.id) - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(id={self.id!r}, name={self.name!r})" - - def __await__(self) -> Generator[None, None, TaskInfo]: - _warn_deprecation(self) - if False: - yield - - return self - - def _unwrap(self) -> TaskInfo: - return self - - -def get_current_task() -> TaskInfo: - """ - Return the current task. - - :return: a representation of the current task - - """ - return get_asynclib().get_current_task() - - -def get_running_tasks() -> DeprecatedAwaitableList[TaskInfo]: - """ - Return a list of running tasks in the current event loop. - - :return: a list of task info objects - - """ - tasks = get_asynclib().get_running_tasks() - return DeprecatedAwaitableList(tasks, func=get_running_tasks) - - -async def wait_all_tasks_blocked() -> None: - """Wait until all other tasks are waiting for something.""" - await get_asynclib().wait_all_tasks_blocked()
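A minimal sketch of how these helpers behave at runtime (get_current_task, get_running_tasks and wait_all_tasks_blocked are the functions defined above, also re-exported as anyio's public API; the task-group dance and prints are illustrative):

import anyio

async def main() -> None:
    print(get_current_task().name)       # name of the task running main()
    async with anyio.create_task_group() as tg:
        tg.start_soon(anyio.sleep, 1)
        await wait_all_tasks_blocked()   # resumes once the sleeper is parked
        print(len(get_running_tasks()))  # the sleeping task shows up here
        tg.cancel_scope.cancel()

anyio.run(main)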
diff --git a/spaces/declare-lab/tango/audioldm/ldm.py b/spaces/declare-lab/tango/audioldm/ldm.py deleted file mode 100644 index e0179fd5a506052ac9db22bd37f3db6b910aded5..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/audioldm/ldm.py +++ /dev/null @@ -1,818 +0,0 @@ -import os - -import torch -import numpy as np -from tqdm import tqdm -from einops import rearrange -from audioldm.utils import default, instantiate_from_config, save_wave -from audioldm.latent_diffusion.ddpm import DDPM -from audioldm.variational_autoencoder.distributions import DiagonalGaussianDistribution -from audioldm.latent_diffusion.util import noise_like -from audioldm.latent_diffusion.ddim import DDIMSampler - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class LatentDiffusion(DDPM): - """main class""" - - def __init__( - self, - device="cuda", - first_stage_config=None, - cond_stage_config=None, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - base_learning_rate=None, - *args, - **kwargs, - ): - self.device = device - self.learning_rate = base_learning_rate - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs["timesteps"] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = "concat" if concat_mode else "crossattn" - if cond_stage_config == "__is_unconditional__": - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - self.cond_stage_key_orig = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer("scale_factor", torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - - def make_cond_schedule( - self, - ): - self.cond_ids = torch.full( - size=(self.num_timesteps,), - fill_value=self.num_timesteps - 1, - dtype=torch.long, - ) - ids = torch.round( - torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond) - ).long() - self.cond_ids[: self.num_timesteps_cond] = ids - - def register_schedule( - self, - given_betas=None, - beta_schedule="linear", - timesteps=1000, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - ): - super().register_schedule( - given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s - ) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != "__is_first_stage__" - assert config != "__is_unconditional__" - model = instantiate_from_config(config) - self.cond_stage_model = model - self.cond_stage_model = self.cond_stage_model.to(self.device) - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError( - f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented" - ) - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, "encode") and callable( - self.cond_stage_model.encode - ): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - # Text input is list - if type(c) == list and len(c) == 1: - c = self.cond_stage_model([c[0], c[0]]) - c = c[0:1] - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - @torch.no_grad() - def get_input( - self, - batch, - k,
return_first_stage_encode=True, - return_first_stage_outputs=False, - force_c_encode=False, - cond_key=None, - return_original_cond=False, - bs=None, - ): - x = super().get_input(batch, k) - - if bs is not None: - x = x[:bs] - - x = x.to(self.device) - - if return_first_stage_encode: - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - else: - z = None - - if self.model.conditioning_key is not None: - if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ["caption", "coordinates_bbox"]: - xc = batch[cond_key] - elif cond_key == "class_label": - xc = batch - else: - # [bs, 1, 527] - xc = super().get_input(batch, cond_key) - if type(xc) == torch.Tensor: - xc = xc.to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - - if bs is not None: - c = c[:bs] - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {"pos_x": pos_x, "pos_y": pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, "b h w c -> b c h w").contiguous() - - z = 1.0 / self.scale_factor * z - return self.first_stage_model.decode(z) - - def mel_spectrogram_to_waveform(self, mel): - # Mel: [bs, 1, t-steps, fbins] - if len(mel.size()) == 4: - mel = mel.squeeze(1) - mel = mel.permute(0, 2, 1) - waveform = self.first_stage_model.vocoder(mel) - waveform = waveform.cpu().detach().numpy() - return waveform - - @torch.no_grad() - def encode_first_stage(self, x): - return self.first_stage_model.encode(x) - - def apply_model(self, x_noisy, t, cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is expected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - if self.model.conditioning_key == "concat": - key = "c_concat" - elif self.model.conditioning_key == "crossattn": - key = "c_crossattn" - else: - key = "c_film" - - cond = {key: cond} - - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def p_mean_variance( - self, - x, - c, - t, - clip_denoised: bool, - return_codebook_ids=False, - quantize_denoised=False, - return_x0=False, - score_corrector=None, - corrector_kwargs=None, - ): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score( - self, model_out, x, t, c, **corrector_kwargs - ) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1.0, 1.0) - if quantize_denoised: - x_recon, _, [_, _, indices] =
self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior( - x_start=x_recon, x_t=x, t=t - ) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample( - self, - x, - c, - t, - clip_denoised=False, - repeat_noise=False, - return_codebook_ids=False, - quantize_denoised=False, - return_x0=False, - temperature=1.0, - noise_dropout=0.0, - score_corrector=None, - corrector_kwargs=None, - ): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance( - x=x, - c=c, - t=t, - clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - ) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.0: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = ( - (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))).contiguous() - ) - - if return_codebook_ids: - return model_mean + nonzero_mask * ( - 0.5 * model_log_variance - ).exp() * noise, logits.argmax(dim=1) - if return_x0: - return ( - model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, - x0, - ) - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising( - self, - cond, - shape, - verbose=True, - callback=None, - quantize_denoised=False, - img_callback=None, - mask=None, - x0=None, - temperature=1.0, - noise_dropout=0.0, - score_corrector=None, - corrector_kwargs=None, - batch_size=None, - x_T=None, - start_T=None, - log_every_t=None, - ): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = { - key: cond[key][:batch_size] - if not isinstance(cond[key], list) - else list(map(lambda x: x[:batch_size], cond[key])) - for key in cond - } - else: - cond = ( - [c[:batch_size] for c in cond] - if isinstance(cond, list) - else cond[:batch_size] - ) - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = ( - tqdm( - reversed(range(0, timesteps)), - desc="Progressive Generation", - total=timesteps, - ) - if verbose - else reversed(range(0, timesteps)) - ) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != "hybrid" - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample( - img, - cond, - ts, - clip_denoised=self.clip_denoised, - 
quantize_denoised=quantize_denoised, - return_x0=True, - temperature=temperature[i], - noise_dropout=noise_dropout, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - ) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1.0 - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: - callback(i) - if img_callback: - img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop( - self, - cond, - shape, - return_intermediates=False, - x_T=None, - verbose=True, - callback=None, - timesteps=None, - quantize_denoised=False, - mask=None, - x0=None, - img_callback=None, - start_T=None, - log_every_t=None, - ): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = ( - tqdm(reversed(range(0, timesteps)), desc="Sampling t", total=timesteps) - if verbose - else reversed(range(0, timesteps)) - ) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != "hybrid" - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample( - img, - cond, - ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, - ) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1.0 - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: - callback(i) - if img_callback: - img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample( - self, - cond, - batch_size=16, - return_intermediates=False, - x_T=None, - verbose=True, - timesteps=None, - quantize_denoised=False, - mask=None, - x0=None, - shape=None, - **kwargs, - ): - if shape is None: - shape = (batch_size, self.channels, self.latent_t_size, self.latent_f_size) - if cond is not None: - if isinstance(cond, dict): - cond = { - key: cond[key][:batch_size] - if not isinstance(cond[key], list) - else list(map(lambda x: x[:batch_size], cond[key])) - for key in cond - } - else: - cond = ( - [c[:batch_size] for c in cond] - if isinstance(cond, list) - else cond[:batch_size] - ) - return self.p_sample_loop( - cond, - shape, - return_intermediates=return_intermediates, - x_T=x_T, - verbose=verbose, - timesteps=timesteps, - quantize_denoised=quantize_denoised, - mask=mask, - x0=x0, - **kwargs, - ) - - @torch.no_grad() - def sample_log( - self, - cond, - batch_size, - ddim, - ddim_steps, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - use_plms=False, - mask=None, - **kwargs, - ): - - if mask is not None: - shape = (self.channels, mask.size()[-2], mask.size()[-1]) - else: - shape = (self.channels, self.latent_t_size, self.latent_f_size) - - intermediate = None - if ddim and not use_plms: - # print("Use ddim sampler") - - ddim_sampler = DDIMSampler(self) - samples, intermediates = ddim_sampler.sample( - ddim_steps, - batch_size, - shape, - cond, - verbose=False, 
- unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - mask=mask, - **kwargs, - ) - - else: - # print("Use DDPM sampler") - samples, intermediates = self.sample( - cond=cond, - batch_size=batch_size, - return_intermediates=True, - unconditional_guidance_scale=unconditional_guidance_scale, - mask=mask, - unconditional_conditioning=unconditional_conditioning, - **kwargs, - ) - - return samples, intermediate - - @torch.no_grad() - def generate_sample( - self, - batchs, - ddim_steps=200, - ddim_eta=1.0, - x_T=None, - n_candidate_gen_per_text=1, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - name="waveform", - use_plms=False, - save=False, - **kwargs, - ): - # Generate n_candidate_gen_per_text times and select the best - # Batch: audio, text, fnames - assert x_T is None - try: - batchs = iter(batchs) - except TypeError: - raise ValueError("The first input argument should be an iterable object") - - if use_plms: - assert ddim_steps is not None - use_ddim = ddim_steps is not None - # waveform_save_path = os.path.join(self.get_log_dir(), name) - # os.makedirs(waveform_save_path, exist_ok=True) - # print("Waveform save path: ", waveform_save_path) - - with self.ema_scope("Generate"): - for batch in batchs: - z, c = self.get_input( - batch, - self.first_stage_key, - cond_key=self.cond_stage_key, - return_first_stage_outputs=False, - force_c_encode=True, - return_original_cond=False, - bs=None, - ) - text = super().get_input(batch, "text") - - # Generate multiple samples - batch_size = z.shape[0] * n_candidate_gen_per_text - c = torch.cat([c] * n_candidate_gen_per_text, dim=0) - text = text * n_candidate_gen_per_text - - if unconditional_guidance_scale != 1.0: - unconditional_conditioning = ( - self.cond_stage_model.get_unconditional_condition(batch_size) - ) - - samples, _ = self.sample_log( - cond=c, - batch_size=batch_size, - x_T=x_T, - ddim=use_ddim, - ddim_steps=ddim_steps, - eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - use_plms=use_plms, - ) - - if(torch.max(torch.abs(samples)) > 1e2): - samples = torch.clip(samples, min=-10, max=10) - - mel = self.decode_first_stage(samples) - - waveform = self.mel_spectrogram_to_waveform(mel) - - if waveform.shape[0] > 1: - similarity = self.cond_stage_model.cos_similarity( - torch.FloatTensor(waveform).squeeze(1), text - ) - - best_index = [] - for i in range(z.shape[0]): - candidates = similarity[i :: z.shape[0]] - max_index = torch.argmax(candidates).item() - best_index.append(i + max_index * z.shape[0]) - - waveform = waveform[best_index] - # print("Similarity between generated audio and text", similarity) - # print("Choose the following indexes:", best_index) - - return waveform - - @torch.no_grad() - def generate_sample_masked( - self, - batchs, - ddim_steps=200, - ddim_eta=1.0, - x_T=None, - n_candidate_gen_per_text=1, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - name="waveform", - use_plms=False, - time_mask_ratio_start_and_end=(0.25, 0.75), - freq_mask_ratio_start_and_end=(0.75, 1.0), - save=False, - **kwargs, - ): - # Generate n_candidate_gen_per_text times and select the best - # Batch: audio, text, fnames - assert x_T is None - try: - batchs = iter(batchs) - except TypeError: - raise ValueError("The first input argument should be an iterable object") - - if use_plms: - assert ddim_steps is not None - use_ddim = ddim_steps is not None - # 
waveform_save_path = os.path.join(self.get_log_dir(), name) - # os.makedirs(waveform_save_path, exist_ok=True) - # print("Waveform save path: ", waveform_save_path) - - with self.ema_scope("Generate"): - for batch in batchs: - z, c = self.get_input( - batch, - self.first_stage_key, - cond_key=self.cond_stage_key, - return_first_stage_outputs=False, - force_c_encode=True, - return_original_cond=False, - bs=None, - ) - text = super().get_input(batch, "text") - - # Generate multiple samples - batch_size = z.shape[0] * n_candidate_gen_per_text - - _, h, w = z.shape[0], z.shape[2], z.shape[3] - - mask = torch.ones(batch_size, h, w).to(self.device) - - mask[:, int(h * time_mask_ratio_start_and_end[0]) : int(h * time_mask_ratio_start_and_end[1]), :] = 0 - mask[:, :, int(w * freq_mask_ratio_start_and_end[0]) : int(w * freq_mask_ratio_start_and_end[1])] = 0 - mask = mask[:, None, ...] - - c = torch.cat([c] * n_candidate_gen_per_text, dim=0) - text = text * n_candidate_gen_per_text - - if unconditional_guidance_scale != 1.0: - unconditional_conditioning = ( - self.cond_stage_model.get_unconditional_condition(batch_size) - ) - - samples, _ = self.sample_log( - cond=c, - batch_size=batch_size, - x_T=x_T, - ddim=use_ddim, - ddim_steps=ddim_steps, - eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - use_plms=use_plms, mask=mask, x0=torch.cat([z] * n_candidate_gen_per_text) - ) - - mel = self.decode_first_stage(samples) - - waveform = self.mel_spectrogram_to_waveform(mel) - - if waveform.shape[0] > 1: - similarity = self.cond_stage_model.cos_similarity( - torch.FloatTensor(waveform).squeeze(1), text - ) - - best_index = [] - for i in range(z.shape[0]): - candidates = similarity[i :: z.shape[0]] - max_index = torch.argmax(candidates).item() - best_index.append(i + max_index * z.shape[0]) - - waveform = waveform[best_index] - # print("Similarity between generated audio and text", similarity) - # print("Choose the following indexes:", best_index) - - return waveform \ No newline at end of file diff --git a/spaces/declare-lab/tango/setup.py b/spaces/declare-lab/tango/setup.py deleted file mode 100644 index bc9737bdcc3f4a80b96e5e212744e55fe2dcdd02..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/setup.py +++ /dev/null @@ -1,8 +0,0 @@ -import os -from setuptools import setup -requirement_path = "requirements.txt" -install_requires = [] -if os.path.isfile(requirement_path): - with open(requirement_path) as f: - install_requires = f.read().splitlines() -setup(name="mypackage", install_requires=install_requires, [...]) \ No newline at end of file
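The tango setup.py above feeds raw splitlines() output straight into install_requires, which keeps blank lines and comment lines from requirements.txt. A slightly more defensive sketch of the same pattern (an illustration only; "mypackage" is the file's own placeholder name, and the trailing "[...]" in the original stands for elided arguments):

import os
from setuptools import setup

def read_requirements(path="requirements.txt"):
    # keep only non-empty, non-comment lines
    if not os.path.isfile(path):
        return []
    with open(path) as f:
        return [ln.strip() for ln in f if ln.strip() and not ln.lstrip().startswith("#")]

setup(name="mypackage", install_requires=read_requirements())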
diff --git a/spaces/devthedeveloper/Bark-with-Voice-Cloning/setup.py b/spaces/devthedeveloper/Bark-with-Voice-Cloning/setup.py deleted file mode 100644 index 606849326a4002007fd42060b51e69a19c18675c..0000000000000000000000000000000000000000 --- a/spaces/devthedeveloper/Bark-with-Voice-Cloning/setup.py +++ /dev/null @@ -1,3 +0,0 @@ -from setuptools import setup - -setup() diff --git a/spaces/dhkim2810/MobileSAM/utils/tools_gradio.py b/spaces/dhkim2810/MobileSAM/utils/tools_gradio.py deleted file mode 100644 index 19b50fc7d4f1da25cbb1681ab9b993a1411a452e..0000000000000000000000000000000000000000 --- a/spaces/dhkim2810/MobileSAM/utils/tools_gradio.py +++ /dev/null @@ -1,192 +0,0 @@ -import cv2 -import matplotlib.pyplot as plt -import numpy as np -import torch -from PIL import Image - - -def fast_process( - annotations, - image, - device, - scale, - better_quality=False, - mask_random_color=True, - bbox=None, - use_retina=True, - withContours=True, -): - if isinstance(annotations[0], dict): - annotations = [annotation["segmentation"] for annotation in annotations] - - original_h = image.height - original_w = image.width - if better_quality: - if isinstance(annotations[0], torch.Tensor): - annotations = np.array(annotations.cpu()) - for i, mask in enumerate(annotations): - mask = cv2.morphologyEx( - mask.astype(np.uint8), cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8) - ) - annotations[i] = cv2.morphologyEx( - mask.astype(np.uint8), cv2.MORPH_OPEN, np.ones((8, 8), np.uint8) - ) - if device == "cpu": - annotations = np.array(annotations) - inner_mask = fast_show_mask( - annotations, - plt.gca(), - random_color=mask_random_color, - bbox=bbox, - retinamask=use_retina, - target_height=original_h, - target_width=original_w, - ) - else: - if isinstance(annotations[0], np.ndarray): - annotations = np.array(annotations) - annotations = torch.from_numpy(annotations) - inner_mask = fast_show_mask_gpu( - annotations, - plt.gca(), - random_color=mask_random_color, - bbox=bbox, - retinamask=use_retina, - target_height=original_h, - target_width=original_w, - ) - if isinstance(annotations, torch.Tensor): - annotations = annotations.cpu().numpy() - - if withContours: - contour_all = [] - temp = np.zeros((original_h, original_w, 1)) - for i, mask in enumerate(annotations): - if isinstance(mask, dict): - mask = mask["segmentation"] - annotation = mask.astype(np.uint8) - if not use_retina: - annotation = cv2.resize( - annotation, - (original_w, original_h), - interpolation=cv2.INTER_NEAREST, - ) - contours, _ = cv2.findContours( - annotation, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE - ) - for contour in contours: - contour_all.append(contour) - cv2.drawContours(temp, contour_all, -1, (255, 255, 255), 2 // scale) - color = np.array([0 / 255, 0 / 255, 255 / 255, 0.9]) - contour_mask = temp / 255 * color.reshape(1, 1, -1) - - image = image.convert("RGBA") - overlay_inner = Image.fromarray((inner_mask * 255).astype(np.uint8), "RGBA") - image.paste(overlay_inner, (0, 0), overlay_inner) - - if withContours: - overlay_contour = Image.fromarray((contour_mask * 255).astype(np.uint8), "RGBA") - image.paste(overlay_contour, (0, 0), overlay_contour) - - return image - - -# CPU post process -def fast_show_mask( - annotation, - ax, - random_color=False, - bbox=None, - retinamask=True, - target_height=960, - target_width=960, -): - mask_sum = annotation.shape[0] - height = annotation.shape[1] - weight = annotation.shape[2] - # sort the annotations by area (ascending) - areas = np.sum(annotation, axis=(1, 2)) - sorted_indices = np.argsort(areas)[::1] - annotation = annotation[sorted_indices] - - index = (annotation != 0).argmax(axis=0) - if random_color: - color = np.random.random((mask_sum, 1, 1, 3)) - else: - color = np.ones((mask_sum, 1, 1, 3)) * np.array( - [30 / 255, 144 / 255, 255 / 255] - ) - transparency = np.ones((mask_sum, 1, 1, 1)) * 0.6 - visual = np.concatenate([color, transparency], axis=-1) - mask_image = np.expand_dims(annotation, -1) * visual - - mask = np.zeros((height, weight, 4)) - - h_indices, w_indices = np.meshgrid( - np.arange(height), np.arange(weight), indexing="ij" - ) - indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None)) - - mask[h_indices, w_indices, :] = mask_image[indices] - if bbox is not None: - x1, y1, x2, y2 = bbox - ax.add_patch( - plt.Rectangle( - (x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="b", linewidth=1 - ) - ) - - if not retinamask: - mask = cv2.resize( - mask, (target_width, target_height), interpolation=cv2.INTER_NEAREST - ) - - return mask
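A minimal sketch of exercising the CPU renderer above on a dummy boolean mask (the 4x4 array and the figure axis are illustrative):

import numpy as np
import matplotlib.pyplot as plt

annotation = np.zeros((1, 4, 4), dtype=bool)  # one mask: a 2x2 square
annotation[0, 1:3, 1:3] = True
rgba = fast_show_mask(annotation, plt.gca(), random_color=True)
print(rgba.shape)  # (4, 4, 4): an RGBA overlay for the caller to alpha-blend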
- - -def fast_show_mask_gpu( - annotation, - ax, - random_color=False, - bbox=None, - retinamask=True, - target_height=960, - target_width=960, -): - device = annotation.device - mask_sum = annotation.shape[0] - height = annotation.shape[1] - weight = annotation.shape[2] - areas = torch.sum(annotation, dim=(1, 2)) - sorted_indices = torch.argsort(areas, descending=False) - annotation = annotation[sorted_indices] - # find the index of the first non-zero mask at each pixel - index = (annotation != 0).to(torch.long).argmax(dim=0) - if random_color: - color = torch.rand((mask_sum, 1, 1, 3)).to(device) - else: - color = torch.ones((mask_sum, 1, 1, 3)).to(device) * torch.tensor( - [30 / 255, 144 / 255, 255 / 255] - ).to(device) - transparency = torch.ones((mask_sum, 1, 1, 1)).to(device) * 0.6 - visual = torch.cat([color, transparency], dim=-1) - mask_image = torch.unsqueeze(annotation, -1) * visual - # gather by index: for each pixel, index picks which mask in the batch to show, flattening mask_image into a single frame - mask = torch.zeros((height, weight, 4)).to(device) - h_indices, w_indices = torch.meshgrid(torch.arange(height), torch.arange(weight)) - indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None)) - # update the shown values with vectorized indexing - mask[h_indices, w_indices, :] = mask_image[indices] - mask_cpu = mask.cpu().numpy() - if bbox is not None: - x1, y1, x2, y2 = bbox - ax.add_patch( - plt.Rectangle( - (x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="b", linewidth=1 - ) - ) - if not retinamask: - mask_cpu = cv2.resize( - mask_cpu, (target_width, target_height), interpolation=cv2.INTER_NEAREST - ) - return mask_cpu
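A minimal end-to-end sketch of fast_process as defined above (the tiny black image and square mask are illustrative; an integer scale=1 keeps the contour thickness at 2 px):

import numpy as np
from PIL import Image

img = Image.fromarray(np.zeros((32, 32, 3), dtype=np.uint8))
ann = np.zeros((32, 32), dtype=bool)
ann[8:24, 8:24] = True
out = fast_process([{"segmentation": ann}], img, device="cpu", scale=1, withContours=True)
out.save("overlay.png")  # the input image with mask and contour composited on top

diff --git a/spaces/diacanFperku/AutoGPT/APDB MT6575 S01 ALPS.ICS.MP.md b/spaces/diacanFperku/AutoGPT/APDB MT6575 S01 ALPS.ICS.MP.md deleted file mode 100644 index b9c577861c58420f30ff7ea171b0ac66dd1a41f7..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/APDB MT6575 S01 ALPS.ICS.MP.md +++ /dev/null @@ -1,30 +0,0 @@ -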
      -

      How to Download and Install APDB MT6575 S01 ALPS.ICS.MP Firmware on Your Android Device

      -

      If you are looking for a way to update your Android device with the latest firmware, you may have come across the term APDB MT6575 S01 ALPS.ICS.MP. This is a specific firmware file that is compatible with certain MediaTek devices running on Android 4.0 Ice Cream Sandwich. In this article, we will show you how to download and install this firmware on your device using a simple and safe method.

      -

      APDB MT6575 S01 ALPS.ICS.MP


Download Zip https://gohhs.com/2uFTdp



      -

      What is APDB MT6575 S01 ALPS.ICS.MP Firmware?

      -

      APDB MT6575 S01 ALPS.ICS.MP is a firmware file that contains the Android operating system and other software components for MediaTek devices. APDB stands for Android Product Database, which is a database of firmware files for different Android devices. MT6575 is the model number of the MediaTek chipset that powers the device. S01 is the software version number. ALPS is the codename of the firmware project. ICS is the abbreviation of Ice Cream Sandwich, which is the Android version. MP stands for Multi-Port, which means that the firmware supports multiple SIM cards.

      -

      This firmware file can be used to update your device to the latest Android version, fix software issues, unbrick your device, or restore it to factory settings. However, you should only use this firmware if it matches your device model and specifications. Otherwise, you may end up with a bricked or damaged device.

      -

      How to Download APDB MT6575 S01 ALPS.ICS.MP Firmware?

      -

      There are many sources online where you can download APDB MT6575 S01 ALPS.ICS.MP firmware for free. However, not all of them are reliable or safe. Some may contain malware, viruses, or corrupted files that can harm your device or compromise your privacy. Therefore, you should always download firmware files from trusted and reputable websites.

      -

      -

      One of the best websites to download APDB MT6575 S01 ALPS.ICS.MP firmware is FirmwareFile.com. This website provides original and official firmware files for various Android devices from different brands and manufacturers. You can easily find the firmware file for your device by searching for its model number or name.

      -

To download APDB MT6575 S01 ALPS.ICS.MP firmware from FirmwareFile.com, follow these steps (a scripted alternative is sketched after the list):

      -
        -
1. Go to FirmwareFile.com and type "APDB MT6575 S01 ALPS.ICS.MP" in the search box.
2. Select your device model from the list of results and click on it.
3. You will see a page with detailed information about the firmware file, such as its size, date, version, and features.
4. Scroll down to the bottom of the page and click on the "Download" button.
5. You will be redirected to another page where you need to complete a captcha verification to prove that you are not a robot.
6. After completing the captcha verification, click on the "Download Now" button.
7. The firmware file will start downloading to your computer or mobile device.
      -
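If you prefer to script the transfer on a computer, the sketch below streams a file to disk with the requests library. Note that the URL is a hypothetical placeholder: the site gates its downloads behind a captcha, so in practice you would paste in the final direct link you obtain after completing the steps above.

```python
# Minimal streaming download sketch using the requests library.
# FIRMWARE_URL is a placeholder; replace it with the direct link the site
# gives you after the captcha step, since there is no public direct URL.
import requests

FIRMWARE_URL = "https://example.com/APDB_MT6575_S01_ALPS.ICS.MP.zip"  # placeholder
OUT_FILE = "APDB_MT6575_S01_ALPS.ICS.MP.zip"

with requests.get(FIRMWARE_URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(OUT_FILE, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)
print(f"Saved {OUT_FILE}")
```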

      How to Install APDB MT6575 S01 ALPS.ICS.MP Firmware on Your Device?

      -

      After downloading APDB MT6575 S01 ALPS.ICS.MP firmware from FirmwareFile.com, you need to install it on your device using a flash tool. A flash tool is a software application that allows you to flash or install firmware files on your Android device via USB connection.

      -

      There are many flash tools available online for different Android devices and chipsets. However, one of the most popular and widely used flash tools for MediaTek devices is SP Flash Tool. SP Flash Tool is a free and easy-to-use tool that supports all MediaTek devices running on Android.

      -

      To install APDB MT6575 S01 ALPS.ICS.MP firmware on your device using SP Flash Tool, follow these steps:

      -
        -
1. Download SP Flash Tool from SPFlashTool.com and extract it on your computer.
2. Run the flash_tool.exe file.

        -
        -
        \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Adobe Audition CC 2018 13.0.4.4 (x86x64) Crack .rar Extra Quality.md b/spaces/diacanFperku/AutoGPT/Adobe Audition CC 2018 13.0.4.4 (x86x64) Crack .rar Extra Quality.md deleted file mode 100644 index 2b58855a582d3938ec9432dedd58f2483d79c31a..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Adobe Audition CC 2018 13.0.4.4 (x86x64) Crack .rar Extra Quality.md +++ /dev/null @@ -1,10 +0,0 @@ -

        Adobe Audition CC 2018 13.0.4.4 (x86x64) Crack .rar


        Download File ►►► https://gohhs.com/2uFUBG



        -
-- audition cc 2018 13.0.4.4 (x86) .rar - YUAN Rui - May 21, 2019 | 500.00 MB - 23,418 views
        -
        -
        -

        diff --git a/spaces/diacanFperku/AutoGPT/K93n Na1 Kansai Chiharurar.md b/spaces/diacanFperku/AutoGPT/K93n Na1 Kansai Chiharurar.md deleted file mode 100644 index 442f65144c8147739c8bc0357ec6a73db0f18a48..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/K93n Na1 Kansai Chiharurar.md +++ /dev/null @@ -1,6 +0,0 @@ -

        K93n Na1 Kansai Chiharurar


        Downloadhttps://gohhs.com/2uFTOh



        -
        -
        -
        -

        diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/mel_processing.py b/spaces/digitalxingtong/Miiu-Bert-Vits2/mel_processing.py deleted file mode 100644 index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Miiu-Bert-Vits2/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, 
device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/dma123/gpt-js/js/main.js b/spaces/dma123/gpt-js/js/main.js deleted file mode 100644 index 3308a4dcff6ce2f0593cc0339633e2efdcca3476..0000000000000000000000000000000000000000 --- a/spaces/dma123/gpt-js/js/main.js +++ /dev/null @@ -1,46 +0,0 @@ -(function (globals) { - "use strict"; - - - document.addEventListener("DOMContentLoaded", () => { - const chatlog = new Chatlog(); - const ui = { - chatlogEl: new Chatbox(chatlog, document.getElementById("chat")), - messageEl: document.getElementById("message-inp"), - submitBtn: document.getElementById("submit-btn"), - newChatBtn: document.getElementById("new_chat-btn"), - saveChatBtn: document.getElementById("save_chat-btn"), - loadChatBtn: document.getElementById("load_chat-btn"), - settingsBtn: document.getElementById('settings-btn'), - settingsEl: document.getElementById('settings'), - temperatureEl: document.getElementById("temperature"), - temperatureValueEl: document.getElementById('temperature-value'), - topPEl: document.getElementById("top_p"), - topPValueEl: document.getElementById('top_p-value'), - loginBtn: document.getElementById('login-btn'), - logoutBtn: document.getElementById('logout-btn') - }; - - // Set up event listeners and initialize chat - setUpEventListeners(chatlog, ui); - - // Get API key - getApiKey(); - - // Load old chat - try { - const data = JSON.parse(localStorage.chatlog); - chatlog.load(data.rootAlternatives); - ui.chatlogEl.update(); - } catch (error) { - console.error(error); - } - - if (chatlog.rootAlternatives == null) { - // Start new chat, if no old chat could be loaded - newChatBtn.click(); - } - }); - - -}(this)); \ No newline at end of file diff --git a/spaces/docs-demos/prophetnet-large-uncased/README.md b/spaces/docs-demos/prophetnet-large-uncased/README.md deleted file mode 100644 index 523ca4f1e1e7fdafde9ba243e703dbf01b23c004..0000000000000000000000000000000000000000 --- a/spaces/docs-demos/prophetnet-large-uncased/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: ProphetNet -emoji: 🔥 -colorFrom: gray -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/doevent/swin2sr/main_test_swin2sr.py b/spaces/doevent/swin2sr/main_test_swin2sr.py deleted file mode 100644 index 522f6f885b67309d7521dd969ee3971a61d01500..0000000000000000000000000000000000000000 --- a/spaces/doevent/swin2sr/main_test_swin2sr.py +++ /dev/null @@ -1,302 +0,0 @@ -import argparse -import cv2 -import glob -import numpy as np -from collections import OrderedDict -import os -import torch -import requests - -from models.network_swin2sr import Swin2SR as net -from utils import util_calculate_psnr_ssim as util - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument('--task', type=str, default='color_dn', help='classical_sr, lightweight_sr, real_sr, ' - 'gray_dn, color_dn, jpeg_car, color_jpeg_car') - parser.add_argument('--scale', type=int, default=1, help='scale factor: 1, 2, 3, 4, 8') # 1 for dn and jpeg car - parser.add_argument('--noise', type=int, default=15, help='noise level: 15, 25, 50') - parser.add_argument('--jpeg', type=int, default=40, help='scale factor: 10, 20, 30, 40') - parser.add_argument('--training_patch_size', type=int, default=128, help='patch size used in training Swin2SR. ' - 'Just used to differentiate two different settings in Table 2 of the paper. ' - 'Images are NOT tested patch by patch.') - parser.add_argument('--large_model', action='store_true', help='use large model, only provided for real image sr') - parser.add_argument('--model_path', type=str, - default='model_zoo/swin2sr/Swin2SR_ClassicalSR_X2_64.pth') - parser.add_argument('--folder_lq', type=str, default=None, help='input low-quality test image folder') - parser.add_argument('--folder_gt', type=str, default=None, help='input ground-truth test image folder') - parser.add_argument('--tile', type=int, default=None, help='Tile size, None for no tile during testing (testing as a whole)') - parser.add_argument('--tile_overlap', type=int, default=32, help='Overlapping of different tiles') - parser.add_argument('--save_img_only', default=False, action='store_true', help='save image and do not evaluate') - args = parser.parse_args() - - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - # set up model - if os.path.exists(args.model_path): - print(f'loading model from {args.model_path}') - else: - os.makedirs(os.path.dirname(args.model_path), exist_ok=True) - url = 'https://github.com/mv-lab/swin2sr/releases/download/v0.0.1/{}'.format(os.path.basename(args.model_path)) - r = requests.get(url, allow_redirects=True) - print(f'downloading model {args.model_path}') - open(args.model_path, 'wb').write(r.content) - - model = define_model(args) - model.eval() - model = model.to(device) - - # setup folder and path - folder, save_dir, border, window_size = setup(args) - os.makedirs(save_dir, exist_ok=True) - test_results = OrderedDict() - test_results['psnr'] = [] - test_results['ssim'] = [] - test_results['psnr_y'] = [] - test_results['ssim_y'] = [] - test_results['psnrb'] = [] - test_results['psnrb_y'] = [] - psnr, ssim, psnr_y, ssim_y, psnrb, psnrb_y = 0, 0, 0, 0, 0, 0 - - for idx, path in enumerate(sorted(glob.glob(os.path.join(folder, '*')))): - # read image - imgname, img_lq, img_gt = get_image_pair(args, path) # image to HWC-BGR, float32 - img_lq = np.transpose(img_lq if img_lq.shape[2] == 1 else img_lq[:, :, [2, 1, 0]], (2, 0, 1)) # HCW-BGR to CHW-RGB - img_lq = torch.from_numpy(img_lq).float().unsqueeze(0).to(device) # CHW-RGB to NCHW-RGB - - # inference - with torch.no_grad(): - # pad input image to be a multiple of window_size - _, _, 
h_old, w_old = img_lq.size() - h_pad = (h_old // window_size + 1) * window_size - h_old - w_pad = (w_old // window_size + 1) * window_size - w_old - img_lq = torch.cat([img_lq, torch.flip(img_lq, [2])], 2)[:, :, :h_old + h_pad, :] - img_lq = torch.cat([img_lq, torch.flip(img_lq, [3])], 3)[:, :, :, :w_old + w_pad] - output = test(img_lq, model, args, window_size) - - if args.task == 'compressed_sr': - output = output[0][..., :h_old * args.scale, :w_old * args.scale] - else: - output = output[..., :h_old * args.scale, :w_old * args.scale] - - # save image - output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy() - if output.ndim == 3: - output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0)) # CHW-RGB to HCW-BGR - output = (output * 255.0).round().astype(np.uint8) # float32 to uint8 - cv2.imwrite(f'{save_dir}/{imgname}_Swin2SR.png', output) - - - # evaluate psnr/ssim/psnr_b - if img_gt is not None: - img_gt = (img_gt * 255.0).round().astype(np.uint8) # float32 to uint8 - img_gt = img_gt[:h_old * args.scale, :w_old * args.scale, ...] # crop gt - img_gt = np.squeeze(img_gt) - - psnr = util.calculate_psnr(output, img_gt, crop_border=border) - ssim = util.calculate_ssim(output, img_gt, crop_border=border) - test_results['psnr'].append(psnr) - test_results['ssim'].append(ssim) - if img_gt.ndim == 3: # RGB image - psnr_y = util.calculate_psnr(output, img_gt, crop_border=border, test_y_channel=True) - ssim_y = util.calculate_ssim(output, img_gt, crop_border=border, test_y_channel=True) - test_results['psnr_y'].append(psnr_y) - test_results['ssim_y'].append(ssim_y) - if args.task in ['jpeg_car', 'color_jpeg_car']: - psnrb = util.calculate_psnrb(output, img_gt, crop_border=border, test_y_channel=False) - test_results['psnrb'].append(psnrb) - if args.task in ['color_jpeg_car']: - psnrb_y = util.calculate_psnrb(output, img_gt, crop_border=border, test_y_channel=True) - test_results['psnrb_y'].append(psnrb_y) - print('Testing {:d} {:20s} - PSNR: {:.2f} dB; SSIM: {:.4f}; PSNRB: {:.2f} dB;' - 'PSNR_Y: {:.2f} dB; SSIM_Y: {:.4f}; PSNRB_Y: {:.2f} dB.'. 
- format(idx, imgname, psnr, ssim, psnrb, psnr_y, ssim_y, psnrb_y)) - else: - print('Testing {:d} {:20s}'.format(idx, imgname)) - - # summarize psnr/ssim - if img_gt is not None: - ave_psnr = sum(test_results['psnr']) / len(test_results['psnr']) - ave_ssim = sum(test_results['ssim']) / len(test_results['ssim']) - print('\n{} \n-- Average PSNR/SSIM(RGB): {:.2f} dB; {:.4f}'.format(save_dir, ave_psnr, ave_ssim)) - if img_gt.ndim == 3: - ave_psnr_y = sum(test_results['psnr_y']) / len(test_results['psnr_y']) - ave_ssim_y = sum(test_results['ssim_y']) / len(test_results['ssim_y']) - print('-- Average PSNR_Y/SSIM_Y: {:.2f} dB; {:.4f}'.format(ave_psnr_y, ave_ssim_y)) - if args.task in ['jpeg_car', 'color_jpeg_car']: - ave_psnrb = sum(test_results['psnrb']) / len(test_results['psnrb']) - print('-- Average PSNRB: {:.2f} dB'.format(ave_psnrb)) - if args.task in ['color_jpeg_car']: - ave_psnrb_y = sum(test_results['psnrb_y']) / len(test_results['psnrb_y']) - print('-- Average PSNRB_Y: {:.2f} dB'.format(ave_psnrb_y)) - - -def define_model(args): - # 001 classical image sr - if args.task == 'classical_sr': - model = net(upscale=args.scale, in_chans=3, img_size=args.training_patch_size, window_size=8, - img_range=1., depths=[6, 6, 6, 6, 6, 6], embed_dim=180, num_heads=[6, 6, 6, 6, 6, 6], - mlp_ratio=2, upsampler='pixelshuffle', resi_connection='1conv') - param_key_g = 'params' - - # 002 lightweight image sr - # use 'pixelshuffledirect' to save parameters - elif args.task in ['lightweight_sr']: - model = net(upscale=args.scale, in_chans=3, img_size=64, window_size=8, - img_range=1., depths=[6, 6, 6, 6], embed_dim=60, num_heads=[6, 6, 6, 6], - mlp_ratio=2, upsampler='pixelshuffledirect', resi_connection='1conv') - param_key_g = 'params' - - elif args.task == 'compressed_sr': - model = net(upscale=args.scale, in_chans=3, img_size=args.training_patch_size, window_size=8, - img_range=1., depths=[6, 6, 6, 6, 6, 6], embed_dim=180, num_heads=[6, 6, 6, 6, 6, 6], - mlp_ratio=2, upsampler='pixelshuffle_aux', resi_connection='1conv') - param_key_g = 'params' - - # 003 real-world image sr - elif args.task == 'real_sr': - if not args.large_model: - # use 'nearest+conv' to avoid block artifacts - model = net(upscale=args.scale, in_chans=3, img_size=64, window_size=8, - img_range=1., depths=[6, 6, 6, 6, 6, 6], embed_dim=180, num_heads=[6, 6, 6, 6, 6, 6], - mlp_ratio=2, upsampler='nearest+conv', resi_connection='1conv') - else: - # larger model size; use '3conv' to save parameters and memory; use ema for GAN training - model = net(upscale=args.scale, in_chans=3, img_size=64, window_size=8, - img_range=1., depths=[6, 6, 6, 6, 6, 6, 6, 6, 6], embed_dim=240, - num_heads=[8, 8, 8, 8, 8, 8, 8, 8, 8], - mlp_ratio=2, upsampler='nearest+conv', resi_connection='3conv') - param_key_g = 'params_ema' - - # 006 grayscale JPEG compression artifact reduction - # use window_size=7 because JPEG encoding uses 8x8; use img_range=255 because it's sligtly better than 1 - elif args.task == 'jpeg_car': - model = net(upscale=1, in_chans=1, img_size=126, window_size=7, - img_range=255., depths=[6, 6, 6, 6, 6, 6], embed_dim=180, num_heads=[6, 6, 6, 6, 6, 6], - mlp_ratio=2, upsampler='', resi_connection='1conv') - param_key_g = 'params' - - # 006 color JPEG compression artifact reduction - # use window_size=7 because JPEG encoding uses 8x8; use img_range=255 because it's sligtly better than 1 - elif args.task == 'color_jpeg_car': - model = net(upscale=1, in_chans=3, img_size=126, window_size=7, - img_range=255., depths=[6, 6, 6, 6, 6, 6], 
embed_dim=180, num_heads=[6, 6, 6, 6, 6, 6], - mlp_ratio=2, upsampler='', resi_connection='1conv') - param_key_g = 'params' - - pretrained_model = torch.load(args.model_path) - model.load_state_dict(pretrained_model[param_key_g] if param_key_g in pretrained_model.keys() else pretrained_model, strict=True) - - return model - - -def setup(args): - # 001 classical image sr/ 002 lightweight image sr - if args.task in ['classical_sr', 'lightweight_sr', 'compressed_sr']: - save_dir = f'results/swin2sr_{args.task}_x{args.scale}' - if args.save_img_only: - folder = args.folder_lq - else: - folder = args.folder_gt - border = args.scale - window_size = 8 - - # 003 real-world image sr - elif args.task in ['real_sr']: - save_dir = f'results/swin2sr_{args.task}_x{args.scale}' - if args.large_model: - save_dir += '_large' - folder = args.folder_lq - border = 0 - window_size = 8 - - # 006 JPEG compression artifact reduction - elif args.task in ['jpeg_car', 'color_jpeg_car']: - save_dir = f'results/swin2sr_{args.task}_jpeg{args.jpeg}' - folder = args.folder_gt - border = 0 - window_size = 7 - - return folder, save_dir, border, window_size - - -def get_image_pair(args, path): - (imgname, imgext) = os.path.splitext(os.path.basename(path)) - - # 001 classical image sr/ 002 lightweight image sr (load lq-gt image pairs) - if args.task in ['classical_sr', 'lightweight_sr']: - if args.save_img_only: - img_gt = None - img_lq = cv2.imread(path, cv2.IMREAD_COLOR).astype(np.float32) / 255. - else: - img_gt = cv2.imread(path, cv2.IMREAD_COLOR).astype(np.float32) / 255. - img_lq = cv2.imread(f'{args.folder_lq}/{imgname}x{args.scale}{imgext}', cv2.IMREAD_COLOR).astype( - np.float32) / 255. - - elif args.task in ['compressed_sr']: - if args.save_img_only: - img_gt = None - img_lq = cv2.imread(path, cv2.IMREAD_COLOR).astype(np.float32) / 255. - else: - img_gt = cv2.imread(path, cv2.IMREAD_COLOR).astype(np.float32) / 255. - img_lq = cv2.imread(f'{args.folder_lq}/{imgname}.jpg', cv2.IMREAD_COLOR).astype( - np.float32) / 255. - - # 003 real-world image sr (load lq image only) - elif args.task in ['real_sr', 'lightweight_sr_infer']: - img_gt = None - img_lq = cv2.imread(path, cv2.IMREAD_COLOR).astype(np.float32) / 255. - - # 006 grayscale JPEG compression artifact reduction (load gt image and generate lq image on-the-fly) - elif args.task in ['jpeg_car']: - img_gt = cv2.imread(path, cv2.IMREAD_UNCHANGED) - if img_gt.ndim != 2: - img_gt = util.bgr2ycbcr(img_gt, y_only=True) - result, encimg = cv2.imencode('.jpg', img_gt, [int(cv2.IMWRITE_JPEG_QUALITY), args.jpeg]) - img_lq = cv2.imdecode(encimg, 0) - img_gt = np.expand_dims(img_gt, axis=2).astype(np.float32) / 255. - img_lq = np.expand_dims(img_lq, axis=2).astype(np.float32) / 255. - - # 006 JPEG compression artifact reduction (load gt image and generate lq image on-the-fly) - elif args.task in ['color_jpeg_car']: - img_gt = cv2.imread(path) - result, encimg = cv2.imencode('.jpg', img_gt, [int(cv2.IMWRITE_JPEG_QUALITY), args.jpeg]) - img_lq = cv2.imdecode(encimg, 1) - img_gt = img_gt.astype(np.float32)/ 255. - img_lq = img_lq.astype(np.float32)/ 255. 
- - return imgname, img_lq, img_gt - - -def test(img_lq, model, args, window_size): - if args.tile is None: - # test the image as a whole - output = model(img_lq) - else: - # test the image tile by tile - b, c, h, w = img_lq.size() - tile = min(args.tile, h, w) - assert tile % window_size == 0, "tile size should be a multiple of window_size" - tile_overlap = args.tile_overlap - sf = args.scale - - stride = tile - tile_overlap - h_idx_list = list(range(0, h-tile, stride)) + [h-tile] - w_idx_list = list(range(0, w-tile, stride)) + [w-tile] - E = torch.zeros(b, c, h*sf, w*sf).type_as(img_lq) - W = torch.zeros_like(E) - - for h_idx in h_idx_list: - for w_idx in w_idx_list: - in_patch = img_lq[..., h_idx:h_idx+tile, w_idx:w_idx+tile] - out_patch = model(in_patch) - out_patch_mask = torch.ones_like(out_patch) - - E[..., h_idx*sf:(h_idx+tile)*sf, w_idx*sf:(w_idx+tile)*sf].add_(out_patch) - W[..., h_idx*sf:(h_idx+tile)*sf, w_idx*sf:(w_idx+tile)*sf].add_(out_patch_mask) - output = E.div_(W) - - return output - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/editing-images/ledtisplusplus/constants.py b/spaces/editing-images/ledtisplusplus/constants.py deleted file mode 100644 index e41dcc873e7d9b5aece4a68b63a5d90a9c79201f..0000000000000000000000000000000000000000 --- a/spaces/editing-images/ledtisplusplus/constants.py +++ /dev/null @@ -1,26 +0,0 @@ -############### -# conststants # -############### -DEFAULT_TARGET_GUIDANCE_SCALE = 15 -DEFAULT_SOURCE_GUIDANCE_SCALE = 3.5 -DEFAULT_DIFFUSION_STEPS = 50 -DEFAULT_SKIP_STEPS = 25 -DEFAULT_SEED = 0 - - -DEFAULT_SEGA_CONCEPT_GUIDANCE_SCALE = 7 -DEFAULT_WARMUP_STEPS = 2 -DEFAULT_THRESHOLD = 0.95 -DEFAULT_NEGATIVE_GUIDANCE = False - -STYLE_SEGA_CONCEPT_GUIDANCE_SCALE = 7 -STYLE_WARMUP_STEPS = 2 -STYLE_THRESHOLD = 0.5 - -FACE_SEGA_CONCEPT_GUIDANCE_SCALE = 5 -FACE_WARMUP_STEPS = 2 -FACE_THRESHOLD = 0.95 - -OBJECT_SEGA_CONCEPT_GUIDANCE_SCALE = 12 -OBJECT_WARMUP_STEPS = 5 -OBJECT_THRESHOLD = 0.9 \ No newline at end of file diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/__init__.py b/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v2/build_tokenizer_chinese.py b/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v2/build_tokenizer_chinese.py deleted file mode 100644 index 4bca6dcea043b3ebb26442249ea854de5a73ddb5..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v2/build_tokenizer_chinese.py +++ /dev/null @@ -1,52 +0,0 @@ -""" -merge 是干嘛的? 
- -## 结果 - -共merge 4357 个 token -""" - -import json -from tokenizers import Tokenizer - - -oov_tokens = [line.strip().split("\t")[0] for line in open("../gpt_neox_chinese_v1/oov.txt", "r", encoding="utf-8")] - - -def load_base_tokenizer(): - old_vocab_path = "../gpt_neox_chinese_v1/20B_tokenizer_chinese.json" - data = json.load(open(old_vocab_path, "r", encoding="utf-8")) - tokenizer = Tokenizer.from_file(old_vocab_path) - print("vocab_size with added_tokens:", ) - return data, tokenizer - -data, base_tokenizer = load_base_tokenizer() -vocab = data["model"]["vocab"] -merges = data["model"]["merges"] -vocab_size = base_tokenizer.get_vocab_size(with_added_tokens=True) - - -""" -方式一:原有的added_tokens保持id不变。方式二:原有的added_tokens进行id移位。 -以下采用方式一。 -""" -new_added_tokens = set() -for word in oov_tokens: - if len(word) > 1 or word in new_added_tokens: - continue - encoding = base_tokenizer.encode(word) - # if len(encoding.ids) > 1: - if len(encoding.ids) == 2: # 3个的,怎么处理? - tokens = [base_tokenizer.id_to_token(token_id) for token_id in encoding.ids] - print("merging", word, json.dumps(tokens)) - vocab["".join(tokens)] = vocab_size - vocab_size += 1 - merges.append(" ".join(tokens)) - new_added_tokens.add(word) - - -print("共merge %d 个 token" % (len(new_added_tokens))) - -f_out = open("20B_tokenizer_chinese.v2.json", "w", encoding="utf-8") - -json.dump(data, f_out, indent=2) \ No newline at end of file diff --git a/spaces/f2api/gpt-academic/multi_language.py b/spaces/f2api/gpt-academic/multi_language.py deleted file mode 100644 index 6c7259836e69d7bc5724a301883a9dbf1526589a..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/multi_language.py +++ /dev/null @@ -1,510 +0,0 @@ -""" - Translate this project to other languages (experimental, please open an issue if there is any bug) - - - Usage: - 1. modify LANG - LANG = "English" - - 2. modify TransPrompt - TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #." - - 3. Run `python multi_language.py`. - Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes. - - 4. Find the translated program in `multi-language\English\*` - - P.S. - - - The translation mapping will be stored in `docs/translation_xxxx.json`, you can revised mistaken translation there. - - - If you would like to share your `docs/translation_xxxx.json`, (so that everyone can use the cached & revised translation mapping), please open a Pull Request - - - If there is any translation error in `docs/translation_xxxx.json`, please open a Pull Request - - - Welcome any Pull Request, regardless of language -""" - -import os -import json -import functools -import re -import pickle -import time - -CACHE_FOLDER = "gpt_log" -blacklist = ['multi-language', 'gpt_log', '.git', 'private_upload', 'multi_language.py'] - -# LANG = "TraditionalChinese" -# TransPrompt = f"Replace each json value `#` with translated results in Traditional Chinese, e.g., \"原始文本\":\"翻譯後文字\". Keep Json format. Do not answer #." - -# LANG = "Japanese" -# TransPrompt = f"Replace each json value `#` with translated results in Japanese, e.g., \"原始文本\":\"テキストの翻訳\". Keep Json format. Do not answer #." - -LANG = "English" -TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #." 
- - -if not os.path.exists(CACHE_FOLDER): - os.makedirs(CACHE_FOLDER) - - -def lru_file_cache(maxsize=128, ttl=None, filename=None): - """ - Decorator that caches a function's return value after being called with given arguments. - It uses a Least Recently Used (LRU) cache strategy to limit the size of the cache. - maxsize: Maximum size of the cache. Defaults to 128. - ttl: Time-to-Live of the cache. If a value hasn't been accessed for `ttl` seconds, it will be evicted from the cache. - filename: Name of the file to store the cache in. If not supplied, the function name + ".cache" will be used. - """ - cache_path = os.path.join(CACHE_FOLDER, f"{filename}.cache") if filename is not None else None - - def decorator_function(func): - cache = {} - _cache_info = { - "hits": 0, - "misses": 0, - "maxsize": maxsize, - "currsize": 0, - "ttl": ttl, - "filename": cache_path, - } - - @functools.wraps(func) - def wrapper_function(*args, **kwargs): - key = str((args, frozenset(kwargs))) - if key in cache: - if _cache_info["ttl"] is None or (cache[key][1] + _cache_info["ttl"]) >= time.time(): - _cache_info["hits"] += 1 - print(f'Warning, reading cache, last read {(time.time()-cache[key][1])//60} minutes ago'); time.sleep(2) - cache[key][1] = time.time() - return cache[key][0] - else: - del cache[key] - - result = func(*args, **kwargs) - cache[key] = [result, time.time()] - _cache_info["misses"] += 1 - _cache_info["currsize"] += 1 - - if _cache_info["currsize"] > _cache_info["maxsize"]: - oldest_key = None - for k in cache: - if oldest_key is None: - oldest_key = k - elif cache[k][1] < cache[oldest_key][1]: - oldest_key = k - del cache[oldest_key] - _cache_info["currsize"] -= 1 - - if cache_path is not None: - with open(cache_path, "wb") as f: - pickle.dump(cache, f) - - return result - - def cache_info(): - return _cache_info - - wrapper_function.cache_info = cache_info - - if cache_path is not None and os.path.exists(cache_path): - with open(cache_path, "rb") as f: - cache = pickle.load(f) - _cache_info["currsize"] = len(cache) - - return wrapper_function - - return decorator_function - -def contains_chinese(string): - """ - Returns True if the given string contains Chinese characters, False otherwise. - """ - chinese_regex = re.compile(u'[\u4e00-\u9fff]+') - return chinese_regex.search(string) is not None - -def split_list(lst, n_each_req): - """ - Split a list into smaller lists, each with a maximum number of elements. 
- :param lst: the list to split - :param n_each_req: the maximum number of elements in each sub-list - :return: a list of sub-lists - """ - result = [] - for i in range(0, len(lst), n_each_req): - result.append(lst[i:i + n_each_req]) - return result - -def map_to_json(map, language): - dict_ = read_map_from_json(language) - dict_.update(map) - with open(f'docs/translate_{language.lower()}.json', 'w', encoding='utf8') as f: - json.dump(dict_, f, indent=4, ensure_ascii=False) - -def read_map_from_json(language): - if os.path.exists(f'docs/translate_{language.lower()}.json'): - with open(f'docs/translate_{language.lower()}.json', 'r', encoding='utf8') as f: - res = json.load(f) - res = {k:v for k, v in res.items() if v is not None and contains_chinese(k)} - return res - return {} - -def advanced_split(splitted_string, spliter, include_spliter=False): - splitted_string_tmp = [] - for string_ in splitted_string: - if spliter in string_: - splitted = string_.split(spliter) - for i, s in enumerate(splitted): - if include_spliter: - if i != len(splitted)-1: - splitted[i] += spliter - splitted[i] = splitted[i].strip() - for i in reversed(range(len(splitted))): - if not contains_chinese(splitted[i]): - splitted.pop(i) - splitted_string_tmp.extend(splitted) - else: - splitted_string_tmp.append(string_) - splitted_string = splitted_string_tmp - return splitted_string_tmp - -cached_translation = {} -cached_translation = read_map_from_json(language=LANG) - -def trans(word_to_translate, language, special=False): - if len(word_to_translate) == 0: return {} - from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from toolbox import get_conf, ChatBotWithCookies - proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY') - llm_kwargs = { - 'api_key': API_KEY, - 'llm_model': LLM_MODEL, - 'top_p':1.0, - 'max_length': None, - 'temperature':0.4, - } - import random - N_EACH_REQ = random.randint(16, 32) - word_to_translate_split = split_list(word_to_translate, N_EACH_REQ) - inputs_array = [str(s) for s in word_to_translate_split] - inputs_show_user_array = inputs_array - history_array = [[] for _ in inputs_array] - if special: # to English using CamelCase Naming Convention - sys_prompt_array = [f"Translate following names to English with CamelCase naming convention. Keep original format" for _ in inputs_array] - else: - sys_prompt_array = [f"Translate following sentences to {LANG}. E.g., You should translate sentences to the following format ['translation of sentence 1', 'translation of sentence 2']. Do NOT answer with Chinese!" 
for _ in inputs_array] - chatbot = ChatBotWithCookies(llm_kwargs) - gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, - inputs_show_user_array, - llm_kwargs, - chatbot, - history_array, - sys_prompt_array, - ) - while True: - try: - gpt_say = next(gpt_say_generator) - print(gpt_say[1][0][1]) - except StopIteration as e: - result = e.value - break - translated_result = {} - for i, r in enumerate(result): - if i%2 == 1: - try: - res_before_trans = eval(result[i-1]) - res_after_trans = eval(result[i]) - if len(res_before_trans) != len(res_after_trans): - raise RuntimeError - for a,b in zip(res_before_trans, res_after_trans): - translated_result[a] = b - except: - # try: - # res_before_trans = word_to_translate_split[(i-1)//2] - # res_after_trans = [s for s in result[i].split("', '")] - # for a,b in zip(res_before_trans, res_after_trans): - # translated_result[a] = b - # except: - print('GPT answers with unexpected format, some words may not be translated, but you can try again later to increase translation coverage.') - res_before_trans = eval(result[i-1]) - for a in res_before_trans: - translated_result[a] = None - return translated_result - - -def trans_json(word_to_translate, language, special=False): - if len(word_to_translate) == 0: return {} - from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from toolbox import get_conf, ChatBotWithCookies - proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY') - llm_kwargs = { - 'api_key': API_KEY, - 'llm_model': LLM_MODEL, - 'top_p':1.0, - 'max_length': None, - 'temperature':0.1, - } - import random - N_EACH_REQ = random.randint(16, 32) - random.shuffle(word_to_translate) - word_to_translate_split = split_list(word_to_translate, N_EACH_REQ) - inputs_array = [{k:"#" for k in s} for s in word_to_translate_split] - inputs_array = [ json.dumps(i, ensure_ascii=False) for i in inputs_array] - - inputs_show_user_array = inputs_array - history_array = [[] for _ in inputs_array] - sys_prompt_array = [TransPrompt for _ in inputs_array] - chatbot = ChatBotWithCookies(llm_kwargs) - gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, - inputs_show_user_array, - llm_kwargs, - chatbot, - history_array, - sys_prompt_array, - ) - while True: - try: - gpt_say = next(gpt_say_generator) - print(gpt_say[1][0][1]) - except StopIteration as e: - result = e.value - break - translated_result = {} - for i, r in enumerate(result): - if i%2 == 1: - try: - translated_result.update(json.loads(result[i])) - except: - print(result[i]) - print(result) - return translated_result - - -def step_1_core_key_translate(): - def extract_chinese_characters(file_path): - syntax = [] - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - import ast - root = ast.parse(content) - for node in ast.walk(root): - if isinstance(node, ast.Name): - if contains_chinese(node.id): syntax.append(node.id) - if isinstance(node, ast.Import): - for n in node.names: - if contains_chinese(n.name): syntax.append(n.name) - elif isinstance(node, ast.ImportFrom): - for n in node.names: - if contains_chinese(n.name): syntax.append(n.name) - for k in node.module.split('.'): - if contains_chinese(k): syntax.append(k) - return syntax - - def 
extract_chinese_characters_from_directory(directory_path): - chinese_characters = [] - for root, dirs, files in os.walk(directory_path): - if any([b in root for b in blacklist]): - continue - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - chinese_characters.extend(extract_chinese_characters(file_path)) - return chinese_characters - - directory_path = './' - chinese_core_names = extract_chinese_characters_from_directory(directory_path) - chinese_core_keys = [name for name in chinese_core_names] - chinese_core_keys_norepeat = [] - for d in chinese_core_keys: - if d not in chinese_core_keys_norepeat: chinese_core_keys_norepeat.append(d) - need_translate = [] - cached_translation = read_map_from_json(language=LANG) - cached_translation_keys = list(cached_translation.keys()) - for d in chinese_core_keys_norepeat: - if d not in cached_translation_keys: - need_translate.append(d) - - need_translate_mapping = trans(need_translate, language=LANG, special=True) - map_to_json(need_translate_mapping, language=LANG) - cached_translation = read_map_from_json(language=LANG) - cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0]))) - - chinese_core_keys_norepeat_mapping = {} - for k in chinese_core_keys_norepeat: - chinese_core_keys_norepeat_mapping.update({k:cached_translation[k]}) - chinese_core_keys_norepeat_mapping = dict(sorted(chinese_core_keys_norepeat_mapping.items(), key=lambda x: -len(x[0]))) - - # =============================================== - # copy - # =============================================== - def copy_source_code(): - - from toolbox import get_conf - import shutil - import os - try: shutil.rmtree(f'./multi-language/{LANG}/') - except: pass - os.makedirs(f'./multi-language', exist_ok=True) - backup_dir = f'./multi-language/{LANG}/' - shutil.copytree('./', backup_dir, ignore=lambda x, y: blacklist) - copy_source_code() - - # =============================================== - # primary key replace - # =============================================== - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - # read again - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - - for k, v in chinese_core_keys_norepeat_mapping.items(): - content = content.replace(k, v) - - with open(file_path, 'w', encoding='utf-8') as f: - f.write(content) - - -def step_2_core_key_translate(): - - # ================================================================================================= - # step2 - # ================================================================================================= - - def load_string(strings, string_input): - string_ = string_input.strip().strip(',').strip().strip('.').strip() - if string_.startswith('[Local Message]'): - string_ = string_.replace('[Local Message]', '') - string_ = string_.strip().strip(',').strip().strip('.').strip() - splitted_string = [string_] - # -------------------------------------- - splitted_string = advanced_split(splitted_string, spliter=",", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="。", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=")", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="(", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="(", 
include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=")", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="<", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=">", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="[", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="]", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="【", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="】", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="?", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=":", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=":", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=",", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="#", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="\n", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=";", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="`", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=" ", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="- ", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="---", include_spliter=False) - - # -------------------------------------- - for j, s in enumerate(splitted_string): # .com - if '.com' in s: continue - if '\'' in s: continue - if '\"' in s: continue - strings.append([s,0]) - - - def get_strings(node): - strings = [] - # recursively traverse the AST - for child in ast.iter_child_nodes(node): - node = child - if isinstance(child, ast.Str): - if contains_chinese(child.s): - load_string(strings=strings, string_input=child.s) - elif isinstance(child, ast.AST): - strings.extend(get_strings(child)) - return strings - - string_literals = [] - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - # comments - comments_arr = [] - for code_sp in content.splitlines(): - comments = re.findall(r'#.*$', code_sp) - for comment in comments: - load_string(strings=comments_arr, string_input=comment) - string_literals.extend(comments_arr) - - # strings - import ast - tree = ast.parse(content) - res = get_strings(tree, ) - string_literals.extend(res) - - [print(s) for s in string_literals] - chinese_literal_names = [] - chinese_literal_names_norepeat = [] - for string, offset in string_literals: - chinese_literal_names.append(string) - chinese_literal_names_norepeat = [] - for d in chinese_literal_names: - if d not in chinese_literal_names_norepeat: chinese_literal_names_norepeat.append(d) - need_translate = [] - cached_translation = read_map_from_json(language=LANG) - cached_translation_keys = list(cached_translation.keys()) - for d in chinese_literal_names_norepeat: - if d not in cached_translation_keys: - need_translate.append(d) - - - up = trans_json(need_translate, language=LANG, special=False) - map_to_json(up, language=LANG) - cached_translation = read_map_from_json(language=LANG) - 
cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0]))) - - # =============================================== - # literal key replace - # =============================================== - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - # read again - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - - for k, v in cached_translation.items(): - if v is None: continue - if '"' in v: - v = v.replace('"', "`") - if '\'' in v: - v = v.replace('\'', "`") - content = content.replace(k, v) - - with open(file_path, 'w', encoding='utf-8') as f: - f.write(content) - - if file.strip('.py') in cached_translation: - file_new = cached_translation[file.strip('.py')] + '.py' - file_path_new = os.path.join(root, file_new) - with open(file_path_new, 'w', encoding='utf-8') as f: - f.write(content) - os.remove(file_path) - -step_1_core_key_translate() -step_2_core_key_translate() diff --git a/spaces/facebook/ov-seg/README.md b/spaces/facebook/ov-seg/README.md deleted file mode 100644 index c341e4e58e754ef8ea265740f5899ed4fbde5fc8..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Ov Seg -emoji: 📊 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.44.1 -python_version: 3.8 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Casmate Pro 6.52 Full Version.md b/spaces/falterWliame/Face_Mask_Detection/Casmate Pro 6.52 Full Version.md deleted file mode 100644 index 1c85df8b9ff1923013b1733ae439678579834b3d..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Casmate Pro 6.52 Full Version.md +++ /dev/null @@ -1,10 +0,0 @@ -

        casmate pro 6.52 full version


        Download Filehttps://urlca.com/2uDdR0



        -
->>>not complete, and there will be a newer version; if you need more, please email me: sosayyoyo@hotmail.com >>> >>>Native.Instruments.Reaktor.v5.0.0.7_build_155.rar >>> -I don't know exactly what's wrong with this tool, but I had a similar one. -When you make a video, the image shakes badly during rendering. -This program has a trick that uses a "variable" (as I understand it, a variable that stores the video settings: you decide how you want the video to look and enter that into the variable, and the video comes out that way). -With that set, my video did not shake at all when rendering in this program.
        -
        -
        -

        diff --git a/spaces/falterWliame/Face_Mask_Detection/Denon Udcm M10 Service Manual.md b/spaces/falterWliame/Face_Mask_Detection/Denon Udcm M10 Service Manual.md deleted file mode 100644 index cecbd58e462d70a3aebde78c1b7e78a276651e4e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Denon Udcm M10 Service Manual.md +++ /dev/null @@ -1,10 +0,0 @@ -

        Denon Udcm M10 Service Manual


        Download ✪✪✪ https://urlca.com/2uDcgj



        -
-View and download the Denon UD-M10 user manual online. Description. -The Denon UD-M10 is a compact and powerful device that plays MP3, WMA, and WAV formats with high sound quality. -I have the same player and used it for four months; I don't know about you, but everything worked fine for me. The player was under warranty and came with a manual. -To download the manual for free, without registration and without SMS, just click on the link below.
        -
        -
        -

        diff --git a/spaces/falterWliame/Face_Mask_Detection/Download Video The The Social Network Full Movie Mp4.md b/spaces/falterWliame/Face_Mask_Detection/Download Video The The Social Network Full Movie Mp4.md deleted file mode 100644 index d323c425a4b7d0a4f3a796f53f911c89a47ce1e7..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Download Video The The Social Network Full Movie Mp4.md +++ /dev/null @@ -1,16 +0,0 @@ -

        download video the The Social Network full movie mp4


        Download Zip --->>> https://urlca.com/2uDdUf



- -Sign up to watch this movie and you will receive email notifications when it becomes available in your country. 20 Apr: Facebook launches a major overhaul of the social networking site, designed to differentiate it from rival MySpace. Jun 28, 2010: Top Ten Lists, Social Network Series; the list contains every series of films that won Best Picture and Best Picture (Musical or Comedy). Social network user - Wikipedia. - -Social Network, Wikipedia. The success of Zuckerberg's idea stemmed from the social networking aspect of it: the reach of social networking sites and the "friendship" factor, which made it attractive to students to make themselves more visible, along with a lack of competition and the need for personal page space. Meet Mark Zuckerberg, Facebook's CEO and founder. "Facebook made it possible for us to build a real community." A Harvard University student's social networking site revolutionized Facebook and other online social networks, transforming our way of life. Our Stories. - -More from Business Insider: 10 reasons to cancel Facebook. Facebook is the largest social media site on the Internet. The company began in the year and launched in the year. The site has over 1.2 billion daily active users and reports 1.3 billion monthly active users. In August, the company released its. Personal Internet Protocol (IP) address version 4 (IPv4) address space is a finite resource in the post-digital age, in which information can be stored on nearly any form of technology, but not every device has an IP address of its own. - -If you wish to make Facebook work for your business, you need to start a Page for your business or start with a Business account. Listed below are the top 10 social networking sites on the web right now. Click through to find out how they compare to the original. Eight years ago, the website was a simple way for Harvard students to publish notes, photos and videos. - -Today, the site has 1.2 billion active users and 1.3 billion monthly active users. Social Network. The Site. 3 Social Network Sites, In This Order: first came Friendster, followed by MySpace; from there came Facebook, and then came others. A social network is a communication and social-networking service which allows members to describe themselves and establish a network of friends, family, colleagues, or other contacts. - -It is essentially a website or application that allows users to do so. The most successful social network on the web was built in 2004.
        -
        -
        -

        diff --git a/spaces/fatiXbelha/sd/CoD Mobile Season 8 Blackout map APK and OBB download guide for Android users.md b/spaces/fatiXbelha/sd/CoD Mobile Season 8 Blackout map APK and OBB download guide for Android users.md deleted file mode 100644 index 8f1a0e15eae1b60e8c11bb1b92cfc804027920e2..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/CoD Mobile Season 8 Blackout map APK and OBB download guide for Android users.md +++ /dev/null @@ -1,243 +0,0 @@ -
        -

        Call of Duty Mobile Season 8: How to Download and Play the Latest Update

        -

        Call of Duty Mobile is one of the most popular and successful mobile games in the world, with millions of players enjoying its thrilling multiplayer modes, immersive battle royale experience, and iconic maps and weapons from the Call of Duty franchise. The game is constantly updated with new content, features, and events, keeping the fans hooked and engaged.

        -

        The latest update for Call of Duty Mobile is Season 8, which is themed around espionage and action. The season is called Train to Nowhere, and it brings a lot of new and exciting additions to the game, such as a new map, a new perk, a new battle royale class, two new weapons, a new battle pass, and more.

        -

        call of duty mobile season 8 apk download


        Download File https://urllie.com/2uNwXE



        -

        If you are eager to try out the new season and join the spy games, you will need to download and install the update on your Android device. There are two ways to do this: either through the Google Play Store or manually using APK and OBB files. In this article, we will show you how to do both methods, as well as give you an overview of what's new in Call of Duty Mobile Season 8.

        -

        How to download Call of Duty Mobile Season 8 from Google Play Store?

        -

        The easiest and safest way to download Call of Duty Mobile Season 8 is through the Google Play Store. This method ensures that you get the latest version of the game without any compatibility or security issues. Here are the steps to follow:

        -
          -
1. Open the Google Play Store app on your Android device.
2. Search for Call of Duty Mobile.
3. If you have already installed the game, tap on Update. If not, tap on Install.
4. Wait for the download and installation process to complete.
5. Launch the game and enjoy Call of Duty Mobile Season 8.
        -

        Note that you may need at least 3 GB of free storage space on your device to download and install the update. You may also need a stable internet connection to avoid any errors or interruptions.
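If you want to verify that free-space requirement programmatically, the check takes a few lines with Python's standard library (for example in a Python environment on the device such as Termux, or on a PC before transferring files). The 3 GB figure comes from the note above; the /sdcard path is an assumption that matches a typical Android shared-storage layout.

```python
# Check that a storage location has at least 3 GB free before updating.
# "/sdcard" is an assumed Android shared-storage path; on a PC you could
# pass "." or your download folder instead.
import shutil

REQUIRED_BYTES = 3 * 1024**3  # 3 GB, per the note above
usage = shutil.disk_usage("/sdcard")

if usage.free >= REQUIRED_BYTES:
    print(f"OK: {usage.free / 1024**3:.1f} GB free")
else:
    print(f"Not enough space: {usage.free / 1024**3:.1f} GB free, 3 GB needed")
```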

        -

        How to download Call of Duty Mobile Season 8 APK and OBB files manually?

        -

        If you are unable to download Call of Duty Mobile Season 8 from the Google Play Store for some reason, you can also try downloading it manually using APK and OBB files. This method requires you to download two separate files from third-party sources and install them on your device. Here are the steps to follow:

        -

        -
1. Download the APK and OBB files for Call of Duty Mobile Season 8 from these [links]. Make sure you have enough storage space on your device before downloading.
2. Locate the downloaded files on your device and move them to a folder named CODM on your internal or external storage.
3. Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
4. Install the APK file by tapping on it and following the instructions on the screen.
5. Do not launch the game yet. Instead, copy the OBB file to the folder Android > OBB > com.activision.callofduty.shooter on your internal or external storage. If the folder does not exist, create it manually.
6. Launch the game and enjoy Call of Duty Mobile Season 8.

        Note that this method is not recommended by the developers and may pose some risks to your device and data. You may also face some errors or glitches while playing the game. Therefore, use this method at your own discretion and responsibility.
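For readers comfortable with a command line, the same manual installation can be scripted from a computer over adb. The sketch below is illustrative rather than official: it assumes adb is installed and USB debugging is enabled on the device, and the local file names are hypothetical placeholders; only the OBB folder name comes from step 5 above.

```python
import subprocess
from pathlib import Path

# Hypothetical local file names; use whatever the download actually produced.
APK = Path("codm-season8.apk")
OBB = Path("main.com.activision.callofduty.shooter.obb")
# OBB destination folder, as named in step 5 of the list above.
OBB_DIR = "/sdcard/Android/obb/com.activision.callofduty.shooter"

def adb(*args: str) -> None:
    # Thin wrapper around the adb CLI; raises CalledProcessError on failure.
    subprocess.run(["adb", *args], check=True)

adb("install", "-r", str(APK))                  # -r replaces an existing install
adb("shell", "mkdir", "-p", OBB_DIR)            # create the OBB folder if missing
adb("push", str(OBB), f"{OBB_DIR}/{OBB.name}")  # copy the OBB into place
print("Done. Launch the game from the device.")
```

Installing over adb sidesteps the Unknown Sources toggle needed for tap-to-install, but the same caution applies: only sideload files you trust.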

Call of Duty Mobile Season 8: What's New in the Train to Nowhere Update?

        Now that you have downloaded and installed Call of Duty Mobile Season 8, you may be wondering what's new in the game and what you can expect from the latest update. Well, there are a lot of new and exciting things to explore and enjoy in the Train to Nowhere update, such as:

New map: Express

        Express is a classic map from Call of Duty: Black Ops II that has been remastered and added to Call of Duty Mobile Season 8. It is a medium-sized map that features a high-speed train station with two tracks, a terminal, and a control room. The map offers a lot of verticality and close-quarters combat opportunities, as well as some long-range sniping spots. Express is available for several multiplayer modes, such as Team Deathmatch, Domination, Hardpoint, Search and Destroy, and more.

New perk: Spycraft

        Spycraft is a new perk that is exclusive to Call of Duty Mobile Season 8. It is a green perk that grants you immunity to enemy tracker, counter UAV, trip mines, and EMP drones. It also allows you to hack enemy equipment and scorestreaks by holding the reload button near them. Spycraft is a very useful perk for stealthy and aggressive players who want to avoid detection and sabotage their enemies.

New battle royale class: Igniter

        Igniter is a new battle royale class that is introduced in Call of Duty Mobile Season 8. It is a class that specializes in fire damage and crowd control. It has two abilities: Flame Thrower and Fire Storm. Flame Thrower allows you to shoot a stream of fire that deals continuous damage and slows down enemies. Fire Storm allows you to launch a missile that creates a large area of fire on impact, dealing damage and reducing enemy healing effects. Igniter is a great class for players who want to dominate close-range fights and prevent enemies from escaping or healing.

New weapons: ZRG 20mm and MX9

ZRG 20mm and MX9 are two new weapons added in Call of Duty Mobile Season 8, and both are available for free through the seasonal challenges. The ZRG 20mm is a bolt-action sniper rifle with the highest damage and range of any sniper rifle in the game. It can kill enemies with one shot to any part of the body except the legs; the trade-off is a very slow fire rate, reload speed, and ADS time. The MX9 is a submachine gun with a high fire rate, accuracy, and mobility. It can shred enemies at close to medium range with its fast bullets and low recoil, but it has low damage and a small magazine capacity.

New battle pass: Spy vs Spy

        Spy vs Spy is the name of the new battle pass that comes with Call of Duty Mobile Season 8. It features two factions: The Company and The Syndicate, each with their own spy-themed characters, weapons, skins, emotes, and more. The battle pass has 50 tiers of rewards that can be unlocked by playing the game and completing missions. Some of the highlights of the battle pass are:

| Faction | Reward | Tier |
| --- | --- | --- |
| The Company | Zero - Dark Ops (character) | 1 |
| The Company | M4 - Dark Ops (weapon) | 10 |
| The Company | Razorback - Dark Ops (weapon) | 30 |
| The Company | Ghost - Stealth (character) | 50 |
| The Syndicate | Scylla - Future Vice (character) | 1 |
| The Syndicate | QQ9 - Future Vice (weapon) | 15 |
| The Syndicate | DR-H - Future Vice (weapon) | 40 |
| The Syndicate | Richtofen - The Doc (character) | 50 |

        In addition to the free rewards, you can also purchase the premium battle pass for 220 COD Points, which gives you access to more exclusive rewards, such as:

| Faction | Reward | Tier |
| --- | --- | --- |
| The Company | Mace - Back for More (character) | 1 |
| The Company | ICR-1 - Blood Money (weapon) | 25 |
| The Company | HBRa3 - Blood Money (weapon) | 50 |
| The Syndicate | Rosa - Double Agent (character) | 1 |
| The Syndicate | DL Q33 - Lethal Hazard (weapon) | 35 |
| The Syndicate | AK-47 - Lethal Hazard (weapon) | 50 |

        The premium battle pass also gives you a 25% XP boost, 12 instant tier skips, and access to the Elite Tasks, which are more challenging missions that reward you with more COD Points and crates.

Call of Duty Mobile Season 8: Tips and Tricks to Master the Game

        Now that you know what's new in Call of Duty Mobile Season 8, you may want to learn some tips and tricks to master the game and get the most out of the update. Here are some of the best tips and tricks that we have gathered for you:

How to use the Spycraft perk effectively?

        The Spycraft perk is a very powerful perk that can give you an edge over your enemies in multiplayer modes. It can help you avoid being tracked, countered, or hacked by enemy equipment and scorestreaks. Here are some ways to use the Spycraft perk effectively:

• Use it with a suppressed weapon to stay off the radar and sneak behind enemy lines.
• Use it with a launcher or a grenade to destroy enemy equipment and scorestreaks easily.
• Use it with a fast-firing weapon to hack enemy equipment and scorestreaks quickly.
• Use it with a stealthy or aggressive playstyle to surprise your enemies and disrupt their plans.

        How to use the Igniter class in battle royale?


        The Igniter class is a new battle royale class that can deal a lot of fire damage and crowd control to your enemies. It can help you dominate close-range fights and prevent enemies from escaping or healing. Here are some ways to use the Igniter class in battle royale:

• Use it with a shotgun or an SMG to deal massive damage and slow down enemies with your Flame Thrower.
• Use it with a sniper rifle or a DMR to create a large area of fire with your Fire Storm and snipe enemies from afar.
• Use it with a vehicle or a helicopter to launch your Fire Storm from above and create chaos on the ground.
• Use it with a defensive or offensive playstyle to control the zone and force enemies out of cover.

        How to snipe with the ZRG 20mm?


        The ZRG 20mm is a new sniper rifle that has the highest damage and range among all sniper rifles in the game. It can kill enemies with one shot to any part of the body, except for the legs. However, it also has a very slow fire rate, reload speed, and ADS time. Here are some ways to snipe with the ZRG 20mm:

• Aim for the upper body or the head to ensure a one-shot kill.
• Use attachments that increase your ADS speed, reload speed, and stability.
• Use perks that increase your mobility, accuracy, and stealth.
• Use cover, distance, and elevation to your advantage.
• Avoid close-range fights and switch to your secondary weapon if needed.

        How to customize the MX9?


The MX9 is a new submachine gun with a high fire rate, accuracy, and mobility. It can shred enemies at close to medium range with its fast bullets and low recoil, but its low damage and small magazine capacity need to be worked around. Here are some ways to customize the MX9:

• Use attachments that increase your damage, range, and magazine capacity.
• Use perks that increase your fire rate, reload speed, and movement speed.
• Use skins that suit your personal preference and style.
• Play modes that favor close- to medium-range combat, such as Team Deathmatch, Hardpoint, and Kill Confirmed.
• Use tactics that involve flanking, rushing, and strafing your enemies.

        How to complete the Operation: Spy Hunt event?


        Operation: Spy Hunt is a new seasonal event that is exclusive to Call of Duty Mobile Season 8. It is a spy-themed event that challenges you to complete various missions and tasks related to the new season. By completing the event, you can earn various rewards, such as COD Points, crates, charms, stickers, and more. Here are some ways to complete the Operation: Spy Hunt event:

• Play the new map Express and use the new perk Spycraft to complete missions related to them.
• Play the battle royale mode and use the new class Igniter to complete missions related to it.
• Play the multiplayer modes and use the new weapons ZRG 20mm and MX9 to complete missions related to them.
• Play the featured modes Spy vs Spy and Capture the Flag to complete missions related to them.
• Collect intel from crates, enemies, and scorestreaks to unlock more missions and rewards.

        Call of Duty Mobile Season 8: Frequently Asked Questions


        Here are some of the most frequently asked questions about Call of Duty Mobile Season 8 and their answers:

1. When will Call of Duty Mobile Season 8 end?

   Call of Duty Mobile Season 8 will end on August 25, 2023. After that, a new season will begin with new content, features, and events.

2. How can I get free COD Points in Call of Duty Mobile Season 8?

   You can get free COD Points by completing the Elite Tasks in the premium battle pass, collecting intel in the Operation: Spy Hunt event, or participating in other events and promotions that may offer COD Points as rewards.

3. How can I play Call of Duty Mobile Season 8 on PC?

   You can play Call of Duty Mobile Season 8 on PC by using an Android emulator, such as BlueStacks, NoxPlayer, or Gameloop. These emulators allow you to run Android apps and games on your PC with keyboard and mouse support. However, you may face some performance or compatibility issues while playing on PC.

4. How can I fix Call of Duty Mobile Season 8 installation errors?

   If you encounter any installation errors while downloading or installing Call of Duty Mobile Season 8, try the following solutions (a small adb sketch of the cache-and-data reset follows this FAQ):

   • Check your internet connection and make sure it is stable and fast.
   • Check your device storage and make sure you have enough space for the update.
   • Clear your device cache and data and restart your device.
   • Delete any previous versions of the Call of Duty Mobile APK and OBB files from your device.
   • Download the update from a reliable source, or use a VPN if needed.

5. Where can I watch the Call of Duty Mobile Season 8 trailer?

   You can watch the Call of Duty Mobile Season 8 trailer on the official Call of Duty Mobile YouTube channel or by tapping on this [link].
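To illustrate the cache-and-data reset mentioned in answer 4, here is a minimal sketch that performs it over adb from a computer. This is an assumption-laden convenience, not an official fix: it requires USB debugging, and note that pm clear wipes the app's saved local data along with its cache, so make sure your progress is linked to an online account first.

```python
import subprocess

# Package name taken from the OBB folder used earlier in this guide.
PKG = "com.activision.callofduty.shooter"

def adb(*args: str) -> None:
    # Thin wrapper around the adb CLI; raises CalledProcessError on failure.
    subprocess.run(["adb", *args], check=True)

# WARNING: 'pm clear' removes the app's local data as well as its cache.
adb("shell", "pm", "clear", PKG)
adb("reboot")  # restart the device, then retry the download or installation
```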

That covers everything you need to know about downloading and installing Call of Duty Mobile Season 8. We hope this guide was informative and helpful. Have fun in the spy games, and happy gaming!
        \ No newline at end of file diff --git a/spaces/fattest/stabilityai-stable-diffusion-2-1/app.py b/spaces/fattest/stabilityai-stable-diffusion-2-1/app.py deleted file mode 100644 index 0160420876923d89f2ab5fccb9f4d13725e29972..0000000000000000000000000000000000000000 --- a/spaces/fattest/stabilityai-stable-diffusion-2-1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch() \ No newline at end of file diff --git a/spaces/fbrynpk/image-caption-generator/model.py b/spaces/fbrynpk/image-caption-generator/model.py deleted file mode 100644 index 6590ebe42cb094705d05fc09ecb8eb4cde925b7c..0000000000000000000000000000000000000000 --- a/spaces/fbrynpk/image-caption-generator/model.py +++ /dev/null @@ -1,322 +0,0 @@ -import pickle -import tensorflow as tf -import pandas as pd -import numpy as np - -MAX_LENGTH = 40 -BATCH_SIZE = 32 -BUFFER_SIZE = 1000 -EMBEDDING_DIM = 512 -UNITS = 512 - - -#LOAD VOCAB FOLDER -vocab = pickle.load(open('vocabulary/vocab_coco.file', 'rb')) - -tokenizer = tf.keras.layers.TextVectorization( - standardize = None, - output_sequence_length = MAX_LENGTH, - vocabulary = vocab -) - -idx2word = tf.keras.layers.StringLookup( - mask_token = "", - vocabulary = tokenizer.get_vocabulary(), - invert = True -) - -# CREATING MODEL BASED ON KERAS -def CNN_Encoder(): - inception_v3 = tf.keras.applications.InceptionV3( - include_top=False, - weights='imagenet' - ) - - output = inception_v3.output - output = tf.keras.layers.Reshape( - (-1, output.shape[-1]))(output) - - cnn_model = tf.keras.models.Model(inception_v3.input, output) - return cnn_model - - -class TransformerEncoderLayer(tf.keras.layers.Layer): - - def __init__(self, embed_dim, num_heads): - super().__init__() - self.layer_norm_1 = tf.keras.layers.LayerNormalization() - self.layer_norm_2 = tf.keras.layers.LayerNormalization() - self.attention = tf.keras.layers.MultiHeadAttention( - num_heads=num_heads, key_dim=embed_dim) - self.dense = tf.keras.layers.Dense(embed_dim, activation="relu") - - - def call(self, x, training): - x = self.layer_norm_1(x) - x = self.dense(x) - - attn_output = self.attention( - query=x, - value=x, - key=x, - attention_mask=None, - training=training - ) - - x = self.layer_norm_2(x + attn_output) - return x - - -class Embeddings(tf.keras.layers.Layer): - - def __init__(self, vocab_size, embed_dim, max_len): - super().__init__() - self.token_embeddings = tf.keras.layers.Embedding( - vocab_size, embed_dim) - self.position_embeddings = tf.keras.layers.Embedding( - max_len, embed_dim, input_shape=(None, max_len)) - - - def call(self, input_ids): - length = tf.shape(input_ids)[-1] - position_ids = tf.range(start=0, limit=length, delta=1) - position_ids = tf.expand_dims(position_ids, axis=0) - - token_embeddings = self.token_embeddings(input_ids) - position_embeddings = self.position_embeddings(position_ids) - - return token_embeddings + position_embeddings - - -class TransformerDecoderLayer(tf.keras.layers.Layer): - - def __init__(self, embed_dim, units, num_heads): - super().__init__() - self.embedding = Embeddings( - tokenizer.vocabulary_size(), embed_dim, MAX_LENGTH) - - self.attention_1 = tf.keras.layers.MultiHeadAttention( - num_heads=num_heads, key_dim=embed_dim, dropout=0.1 - ) - self.attention_2 = tf.keras.layers.MultiHeadAttention( - num_heads=num_heads, key_dim=embed_dim, dropout=0.1 - ) - - self.layernorm_1 = tf.keras.layers.LayerNormalization() - self.layernorm_2 = tf.keras.layers.LayerNormalization() - 
self.layernorm_3 = tf.keras.layers.LayerNormalization() - - self.ffn_layer_1 = tf.keras.layers.Dense(units, activation="relu") - self.ffn_layer_2 = tf.keras.layers.Dense(embed_dim) - - self.out = tf.keras.layers.Dense(tokenizer.vocabulary_size(), activation="softmax") - - self.dropout_1 = tf.keras.layers.Dropout(0.3) - self.dropout_2 = tf.keras.layers.Dropout(0.5) - - - def call(self, input_ids, encoder_output, training, mask=None): - embeddings = self.embedding(input_ids) - - combined_mask = None - padding_mask = None - - if mask is not None: - causal_mask = self.get_causal_attention_mask(embeddings) - padding_mask = tf.cast(mask[:, :, tf.newaxis], dtype=tf.int32) - combined_mask = tf.cast(mask[:, tf.newaxis, :], dtype=tf.int32) - combined_mask = tf.minimum(combined_mask, causal_mask) - - attn_output_1 = self.attention_1( - query=embeddings, - value=embeddings, - key=embeddings, - attention_mask=combined_mask, - training=training - ) - - out_1 = self.layernorm_1(embeddings + attn_output_1) - - attn_output_2 = self.attention_2( - query=out_1, - value=encoder_output, - key=encoder_output, - attention_mask=padding_mask, - training=training - ) - - out_2 = self.layernorm_2(out_1 + attn_output_2) - - ffn_out = self.ffn_layer_1(out_2) - ffn_out = self.dropout_1(ffn_out, training=training) - ffn_out = self.ffn_layer_2(ffn_out) - - ffn_out = self.layernorm_3(ffn_out + out_2) - ffn_out = self.dropout_2(ffn_out, training=training) - preds = self.out(ffn_out) - return preds - - - def get_causal_attention_mask(self, inputs): - input_shape = tf.shape(inputs) - batch_size, sequence_length = input_shape[0], input_shape[1] - i = tf.range(sequence_length)[:, tf.newaxis] - j = tf.range(sequence_length) - mask = tf.cast(i >= j, dtype="int32") - mask = tf.reshape(mask, (1, input_shape[1], input_shape[1])) - mult = tf.concat( - [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)], - axis=0 - ) - return tf.tile(mask, mult) - - -class ImageCaptioningModel(tf.keras.Model): - - def __init__(self, cnn_model, encoder, decoder, image_aug=None): - super().__init__() - self.cnn_model = cnn_model - self.encoder = encoder - self.decoder = decoder - self.image_aug = image_aug - self.loss_tracker = tf.keras.metrics.Mean(name="loss") - self.acc_tracker = tf.keras.metrics.Mean(name="accuracy") - - - def calculate_loss(self, y_true, y_pred, mask): - loss = self.loss(y_true, y_pred) - mask = tf.cast(mask, dtype=loss.dtype) - loss *= mask - return tf.reduce_sum(loss) / tf.reduce_sum(mask) - - - def calculate_accuracy(self, y_true, y_pred, mask): - accuracy = tf.equal(y_true, tf.argmax(y_pred, axis=2)) - accuracy = tf.math.logical_and(mask, accuracy) - accuracy = tf.cast(accuracy, dtype=tf.float32) - mask = tf.cast(mask, dtype=tf.float32) - return tf.reduce_sum(accuracy) / tf.reduce_sum(mask) - - - def compute_loss_and_acc(self, img_embed, captions, training=True): - encoder_output = self.encoder(img_embed, training=True) - y_input = captions[:, :-1] - y_true = captions[:, 1:] - mask = (y_true != 0) - y_pred = self.decoder( - y_input, encoder_output, training=True, mask=mask - ) - loss = self.calculate_loss(y_true, y_pred, mask) - acc = self.calculate_accuracy(y_true, y_pred, mask) - return loss, acc - - - def train_step(self, batch): - imgs, captions = batch - - if self.image_aug: - imgs = self.image_aug(imgs) - - img_embed = self.cnn_model(imgs) - - with tf.GradientTape() as tape: - loss, acc = self.compute_loss_and_acc( - img_embed, captions - ) - - train_vars = ( - self.encoder.trainable_variables + 
self.decoder.trainable_variables - ) - grads = tape.gradient(loss, train_vars) - self.optimizer.apply_gradients(zip(grads, train_vars)) - self.loss_tracker.update_state(loss) - self.acc_tracker.update_state(acc) - - return {"loss": self.loss_tracker.result(), "acc": self.acc_tracker.result()} - - - def test_step(self, batch): - imgs, captions = batch - - img_embed = self.cnn_model(imgs) - - loss, acc = self.compute_loss_and_acc( - img_embed, captions, training=False - ) - - self.loss_tracker.update_state(loss) - self.acc_tracker.update_state(acc) - - return {"loss": self.loss_tracker.result(), "acc": self.acc_tracker.result()} - - @property - def metrics(self): - return [self.loss_tracker, self.acc_tracker] - -def load_image_from_path(img_path): - img = tf.io.read_file(img_path) - img = tf.io.decode_jpeg(img, channels=3) - img = tf.keras.layers.Resizing(299, 299)(img) - img = tf.keras.applications.inception_v3.preprocess_input(img) - return img - - -def generate_caption(img, caption_model, add_noise=False): - if isinstance(img, str): - img = load_image_from_path(img) - - if add_noise == True: - noise = tf.random.normal(img.shape)*0.1 - img = (img + noise) - img = (img - tf.reduce_min(img))/(tf.reduce_max(img) - tf.reduce_min(img)) - - img = tf.expand_dims(img, axis=0) - img_embed = caption_model.cnn_model(img) - img_encoded = caption_model.encoder(img_embed, training=False) - - y_inp = '[start]' - for i in range(MAX_LENGTH-1): - tokenized = tokenizer([y_inp])[:, :-1] - mask = tf.cast(tokenized != 0, tf.int32) - pred = caption_model.decoder( - tokenized, img_encoded, training=False, mask=mask) - - pred_idx = np.argmax(pred[0, i, :]) - pred_word = idx2word(pred_idx).numpy().decode('utf-8') - if pred_word == '[end]': - break - - y_inp += ' ' + pred_word - - y_inp = y_inp.replace('[start] ', '') - return y_inp - - -def get_caption_model(): - encoder = TransformerEncoderLayer(EMBEDDING_DIM, 1) - decoder = TransformerDecoderLayer(EMBEDDING_DIM, UNITS, 8) - - cnn_model = CNN_Encoder() - - caption_model = ImageCaptioningModel( - cnn_model=cnn_model, encoder=encoder, decoder=decoder, image_aug=None, - ) - - def call_fn(batch, training): - return batch - - caption_model.call = call_fn - sample_x, sample_y = tf.random.normal((1, 299, 299, 3)), tf.zeros((1, 40)) - - caption_model((sample_x, sample_y)) - - sample_img_embed = caption_model.cnn_model(sample_x) - sample_enc_out = caption_model.encoder(sample_img_embed, training=False) - caption_model.decoder(sample_y, sample_enc_out, training=False) - - try: - caption_model.load_weights('models/trained_coco_weights.h5') - except FileNotFoundError: - caption_model.load_weights('image-caption-generator/models/trained_coco_weights.h5') - - return caption_model \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download 3D Driving Class MOD APK v29.3 and Enjoy Unlimited Money and Realistic Driving Simulation.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download 3D Driving Class MOD APK v29.3 and Enjoy Unlimited Money and Realistic Driving Simulation.md deleted file mode 100644 index 5fdd8fca7a33bb42ddae50baabefa6af3d7327e5..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download 3D Driving Class MOD APK v29.3 and Enjoy Unlimited Money and Realistic Driving Simulation.md +++ /dev/null @@ -1,102 +0,0 @@ - -

        3D Driving Class Mod APK New Version: A Review


        If you are looking for a fun and realistic driving game that can teach you how to drive or improve your driving skills, then you should check out 3D Driving Class. This is a simulation app that lets you practice driving in various scenarios, tests, and hazards. You can also choose from different maps, vehicles, and modes to suit your preferences. Whether you want to learn the basics in the car park, take on the road test, or explore the city, this game has something for you.

Download: https://gohhs.com/2uPrQz

        But what if you want to enjoy the game without any limitations or interruptions? What if you want to access all the cars, have unlimited money, and get rid of ads? Well, there is a way to do that. You can download and install the 3D Driving Class Mod APK New Version, which is a modified version of the original game that gives you all these benefits and more. In this article, we will review this mod version and show you how to get it on your device. Read on to find out more.

Features of 3D Driving Class Mod APK

Realistic Driving Simulation

        One of the best features of 3D Driving Class is its realistic driving simulation. The game uses high-quality graphics, sound effects, and physics to create an immersive and authentic driving experience. You will feel like you are actually behind the wheel of a car as you navigate through traffic, obey traffic rules, and encounter various hazards. The game also follows the South Korean driver's license test standards, so you can learn how to drive according to real-life requirements.

Diverse Maps and Vehicles

        Another great feature of 3D Driving Class is its diversity of maps and vehicles. The game offers a variety of locations to choose from, such as cities, mountains, highways, and more. Each map has its own challenges and features that will test your driving skills. You can also select from different vehicles, such as sedans, sports cars, buses, trucks, and more. Each vehicle has its own characteristics and performance that will affect your driving. You can also customize your vehicle with different colors, wheels, spoilers, and more.

User-Friendly Interface and Controls

        A third feature of 3D Driving Class is its user-friendly interface and controls. The game has a simple and intuitive interface that lets you easily navigate through menus, options, and settings. You can also adjust the graphics quality, sound volume, language, and other preferences according to your liking. The game also has smooth and realistic controls that let you drive with ease. You can use different options such as accelerator pedal, brake pedal, steering wheel simulation, tilt sensor, or touch screen. You can also switch between different camera angles to see the road from different perspectives.

Benefits of 3D Driving Class Mod APK

All Cars Unlocked

        One of the benefits of 3D Driving Class Mod APK is that it unlocks all the cars in the game. Normally, you would have to pay real money or wait for a long time to unlock some of the cars in the game. But with the mod version, you can access all the cars right away without any hassle. You can enjoy driving any car you want and experience their different features and performance.

Unlimited Money

        Another benefit of 3D Driving Class Mod APK is that it gives you unlimited money in the game. Normally, you would have to earn money by completing driving tests, missions, or challenges in the game. But with the mod version, you can have unlimited money to spend on buying and upgrading cars. You can also use the money to customize your cars with different accessories and decorations.

No Ads

        A third benefit of 3D Driving Class Mod APK is that it removes ads from the game. Normally, you would have to watch ads every time you start or finish a driving session, which can be annoying and distracting. But with the mod version, you can play the game without any ads interrupting your gameplay. You can enjoy the game without any distractions or delays.

How to Download and Install 3D Driving Class Mod APK

Requirements and Compatibility

Before you download and install 3D Driving Class Mod APK, you need to make sure that your device meets a few requirements. Here are the things to check (a quick adb-based check is sketched after the list):

• Your device should have Android 4.1 or a higher operating system.
• Your device should have at least 1 GB of RAM and 200 MB of free storage space.
• Your device should allow installation of apps from unknown sources. You can enable this option by going to Settings > Security > Unknown Sources.
• Your device should have a stable internet connection to download and install the mod version.
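If your phone is connected to a computer with USB debugging enabled, the first two requirements can be checked over adb. This is a quick sketch under that assumption; the property and file names are standard Android ones, but the exact output format varies between devices.

```python
import subprocess

def adb_shell(*cmd: str) -> str:
    # Run a shell command on the connected device and return its output.
    result = subprocess.run(["adb", "shell", *cmd],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

print("Android version:", adb_shell("getprop", "ro.build.version.release"))
# The first line of /proc/meminfo reports total RAM in kB.
print(adb_shell("cat", "/proc/meminfo").splitlines()[0])
# Free space on the data partition, where apps are installed.
print(adb_shell("df", "/data"))
```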

        Steps to Download and Install


Once you have checked the requirements and compatibility, you can follow these steps to download and install 3D Driving Class Mod APK on your device (a small sketch for verifying the downloaded file follows the list):

1. Click on this link to download the mod version file on your device.
2. Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
3. Follow the instructions on the screen to complete the installation process.
4. Once the installation is done, launch the game from your app drawer or home screen.
5. Enjoy driving with all the benefits of the mod version.
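Because the APK comes from outside the Play Store, it is worth checking the file's integrity before tapping Install. The sketch below computes a SHA-256 hash of the download on your computer; the file name is a hypothetical placeholder, and the check is only useful if the site you downloaded from publishes a checksum to compare against.

```python
import hashlib
from pathlib import Path

def sha256sum(path: Path) -> str:
    # Stream the file in 1 MiB chunks so large APKs are not loaded into memory at once.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

apk = Path("3d-driving-class-mod.apk")  # hypothetical file name
print(apk.name, sha256sum(apk))  # compare against the publisher's checksum, if any
```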

        Tips and Tricks to Play 3D Driving Class Mod APK


        To make the most out of your driving experience with 3D Driving Class Mod APK, here are some tips and tricks that you can use:

• Try different cars and maps to find your favorite ones and learn their features and challenges.
• Practice driving in different modes, such as car park, road test, city, highway, mountain, etc.
• Follow the traffic rules and signs to avoid penalties and accidents.
• Use the indicators, headlights, horn, wipers, and other functions of your car as needed.
• Adjust your camera angle and control options to suit your preference and comfort.

        Conclusion


        3D Driving Class is a fun and realistic driving simulation game that can teach you how to drive or improve your driving skills. You can practice driving in various scenarios, tests, and hazards. You can also choose from different maps, vehicles, and modes to suit your preferences. However, if you want to enjoy the game without any limitations or interruptions, you should download and install 3D Driving Class Mod APK New Version. This is a modified version of the original game that gives you all cars unlocked, unlimited money, no ads, and more benefits. In this article, we reviewed this mod version and showed you how to get it on your device. We hope that this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy driving!

FAQs

        Here are some of the frequently asked questions and answers about 3D Driving Class Mod APK:

1. Is 3D Driving Class Mod APK safe to download and install?

   Yes, 3D Driving Class Mod APK is safe to download and install. The mod version is tested and verified by our team and does not contain any viruses, malware, or spyware. That said, you should always download the mod version from a trusted source and use it at your own risk.

2. What is the difference between 3D Driving Class and 3D Driving School?

   3D Driving Class and 3D Driving School are two different games with similar features and gameplay. However, 3D Driving Class is based on the South Korean driver's license test standards, while 3D Driving School is based on the European standards. Therefore, the maps, vehicles, rules, and tests vary depending on the game you choose.

3. How can I update 3D Driving Class Mod APK?

   To update 3D Driving Class Mod APK, follow the same steps as for downloading and installing it: check whether a new version is available from the source link, then download and install it on your device. You may also need to uninstall the previous version before installing the new one.

4. Can I play 3D Driving Class Mod APK online or offline?

   You can play 3D Driving Class Mod APK both online and offline. However, some features and functions may require an internet connection to work properly. For example, you may need an internet connection to download and install the mod version, access some maps and vehicles, or save your progress.

5. Can I play 3D Driving Class Mod APK with friends or other players?

   Unfortunately, 3D Driving Class Mod APK does not support multiplayer mode; you can only play solo on your own device. However, you can still share your driving experience and achievements with friends or other players through social media and other platforms.

          \ No newline at end of file diff --git a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py b/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py deleted file mode 100644 index 489d501bef364020212306d81e9b85c8daa27491..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py +++ /dev/null @@ -1,413 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from: -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/functions/ms_deform_attn_func.py -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py -# https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/multi_scale_deform_attn.py -# ------------------------------------------------------------------------------------------------ - -import math -import warnings -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.init import constant_, xavier_uniform_ - -try: - from groundingdino import _C -except: - warnings.warn("Failed to load custom C++ ops. 
Running on CPU mode Only!") - - -# helpers -def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - -class MultiScaleDeformableAttnFunction(Function): - @staticmethod - def forward( - ctx, - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step, - ): - ctx.im2col_step = im2col_step - output = _C.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ctx.im2col_step, - ) - ctx.save_for_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - ( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) = ctx.saved_tensors - grad_value, grad_sampling_loc, grad_attn_weight = _C.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output, - ctx.im2col_step, - ) - - return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch( - value: torch.Tensor, - value_spatial_shapes: torch.Tensor, - sampling_locations: torch.Tensor, - attention_weights: torch.Tensor, -) -> torch.Tensor: - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = ( - value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_) - ) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False - ) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points - ) - output = ( - (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights) - .sum(-1) - .view(bs, num_heads * embed_dims, num_queries) - ) - return output.transpose(1, 2).contiguous() - - -class MultiScaleDeformableAttention(nn.Module): - """Multi-Scale Deformable Attention Module used in Deformable-DETR - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dim (int): The embedding dimension of Attention. Default: 256. - num_heads (int): The number of attention heads. Default: 8. - num_levels (int): The number of feature map used in Attention. Default: 4. 
- num_points (int): The number of sampling points for each query - in each head. Default: 4. - img2col_steps (int): The step used in image_to_column. Defualt: 64. - dropout (float): Dropout layer used in output. Default: 0.1. - batch_first (bool): if ``True``, then the input and output tensor will be - provided as `(bs, n, embed_dim)`. Default: False. `(n, bs, embed_dim)` - """ - - def __init__( - self, - embed_dim: int = 256, - num_heads: int = 8, - num_levels: int = 4, - num_points: int = 4, - img2col_step: int = 64, - batch_first: bool = False, - ): - super().__init__() - if embed_dim % num_heads != 0: - raise ValueError( - "embed_dim must be divisible by num_heads, but got {} and {}".format( - embed_dim, num_heads - ) - ) - head_dim = embed_dim // num_heads - - self.batch_first = batch_first - - if not _is_power_of_2(head_dim): - warnings.warn( - """ - You'd better set d_model in MSDeformAttn to make sure that - each dim of the attention head a power of 2, which is more efficient. - """ - ) - - self.im2col_step = img2col_step - self.embed_dim = embed_dim - self.num_heads = num_heads - self.num_levels = num_levels - self.num_points = num_points - self.sampling_offsets = nn.Linear(embed_dim, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dim, num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dim, embed_dim) - self.output_proj = nn.Linear(embed_dim, embed_dim) - - self.init_weights() - - def _reset_parameters(self): - return self.init_weights() - - def init_weights(self): - """ - Default initialization for Parameters of Module. - """ - constant_(self.sampling_offsets.weight.data, 0.0) - thetas = torch.arange(self.num_heads, dtype=torch.float32) * ( - 2.0 * math.pi / self.num_heads - ) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = ( - (grid_init / grid_init.abs().max(-1, keepdim=True)[0]) - .view(self.num_heads, 1, 1, 2) - .repeat(1, self.num_levels, self.num_points, 1) - ) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.0) - constant_(self.attention_weights.bias.data, 0.0) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.0) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.0) - - def freeze_sampling_offsets(self): - print("Freeze sampling offsets") - self.sampling_offsets.weight.requires_grad = False - self.sampling_offsets.bias.requires_grad = False - - def freeze_attention_weights(self): - print("Freeze attention weights") - self.attention_weights.weight.requires_grad = False - self.attention_weights.bias.requires_grad = False - - def forward( - self, - query: torch.Tensor, - key: Optional[torch.Tensor] = None, - value: Optional[torch.Tensor] = None, - query_pos: Optional[torch.Tensor] = None, - key_padding_mask: Optional[torch.Tensor] = None, - reference_points: Optional[torch.Tensor] = None, - spatial_shapes: Optional[torch.Tensor] = None, - level_start_index: Optional[torch.Tensor] = None, - **kwargs - ) -> torch.Tensor: - - """Forward Function of MultiScaleDeformableAttention - - Args: - query (torch.Tensor): Query embeddings with shape - `(num_query, bs, embed_dim)` - key (torch.Tensor): Key embeddings with shape - `(num_key, bs, embed_dim)` - value (torch.Tensor): Value embeddings with shape - `(num_key, bs, embed_dim)` - query_pos (torch.Tensor): The position 
embedding for `query`. Default: None. - key_padding_mask (torch.Tensor): ByteTensor for `query`, with shape `(bs, num_key)`, - indicating which elements within `key` to be ignored in attention. - reference_points (torch.Tensor): The normalized reference points - with shape `(bs, num_query, num_levels, 2)`, - all elements is range in [0, 1], top-left (0, 0), - bottom-right (1, 1), including padding are. - or `(N, Length_{query}, num_levels, 4)`, add additional - two dimensions `(h, w)` to form reference boxes. - spatial_shapes (torch.Tensor): Spatial shape of features in different levels. - With shape `(num_levels, 2)`, last dimension represents `(h, w)`. - level_start_index (torch.Tensor): The start index of each level. A tensor with - shape `(num_levels, )` which can be represented as - `[0, h_0 * w_0, h_0 * w_0 + h_1 * w_1, ...]`. - - Returns: - torch.Tensor: forward results with shape `(num_query, bs, embed_dim)` - """ - - if value is None: - value = query - - if query_pos is not None: - query = query + query_pos - - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], float(0)) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2 - ) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points - ) - attention_weights = attention_weights.softmax(-1) - attention_weights = attention_weights.view( - bs, - num_query, - self.num_heads, - self.num_levels, - self.num_points, - ) - - # bs, num_query, num_heads, num_levels, num_points, 2 - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = ( - reference_points[:, :, None, :, None, :] - + sampling_offsets / offset_normalizer[None, None, None, :, None, :] - ) - elif reference_points.shape[-1] == 4: - sampling_locations = ( - reference_points[:, :, None, :, None, :2] - + sampling_offsets - / self.num_points - * reference_points[:, :, None, :, None, 2:] - * 0.5 - ) - else: - raise ValueError( - "Last dim of reference_points must be 2 or 4, but get {} instead.".format( - reference_points.shape[-1] - ) - ) - - if torch.cuda.is_available() and value.is_cuda: - halffloat = False - if value.dtype == torch.float16: - halffloat = True - value = value.float() - sampling_locations = sampling_locations.float() - attention_weights = attention_weights.float() - - output = MultiScaleDeformableAttnFunction.apply( - value, - spatial_shapes, - level_start_index, - sampling_locations, - attention_weights, - self.im2col_step, - ) - - if halffloat: - output = output.half() - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights - ) - - output = self.output_proj(output) - - if not self.batch_first: - output = output.permute(1, 0, 2) - - return output - - -def create_dummy_class(klass, dependency, message=""): - """ - When a dependency of a class is not available, create a dummy class which throws ImportError - when used. - - Args: - klass (str): name of the class. 
- dependency (str): name of the dependency. - message: extra message to print - Returns: - class: a class object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, klass) - if message: - err = err + " " + message - - class _DummyMetaClass(type): - # throw error on class attribute access - def __getattr__(_, __): # noqa: B902 - raise ImportError(err) - - class _Dummy(object, metaclass=_DummyMetaClass): - # throw error on constructor - def __init__(self, *args, **kwargs): - raise ImportError(err) - - return _Dummy - - -def create_dummy_func(func, dependency, message=""): - """ - When a dependency of a function is not available, create a dummy function which throws - ImportError when used. - - Args: - func (str): name of the function. - dependency (str or list[str]): name(s) of the dependency. - message: extra message to print - Returns: - function: a function object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, func) - if message: - err = err + " " + message - - if isinstance(dependency, (list, tuple)): - dependency = ",".join(dependency) - - def _dummy(*args, **kwargs): - raise ImportError(err) - - return _dummy diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/index.js deleted file mode 100644 index 1bf2fcfdeffc984e5ad792eec08744c29d4a4590..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/index.js +++ /dev/null @@ -1,538 +0,0 @@ -/*! - * depd - * Copyright(c) 2014-2018 Douglas Christopher Wilson - * MIT Licensed - */ - -/** - * Module dependencies. - */ - -var relative = require('path').relative - -/** - * Module exports. - */ - -module.exports = depd - -/** - * Get the path to base files on. - */ - -var basePath = process.cwd() - -/** - * Determine if namespace is contained in the string. - */ - -function containsNamespace (str, namespace) { - var vals = str.split(/[ ,]+/) - var ns = String(namespace).toLowerCase() - - for (var i = 0; i < vals.length; i++) { - var val = vals[i] - - // namespace contained - if (val && (val === '*' || val.toLowerCase() === ns)) { - return true - } - } - - return false -} - -/** - * Convert a data descriptor to accessor descriptor. - */ - -function convertDataDescriptorToAccessor (obj, prop, message) { - var descriptor = Object.getOwnPropertyDescriptor(obj, prop) - var value = descriptor.value - - descriptor.get = function getter () { return value } - - if (descriptor.writable) { - descriptor.set = function setter (val) { return (value = val) } - } - - delete descriptor.value - delete descriptor.writable - - Object.defineProperty(obj, prop, descriptor) - - return descriptor -} - -/** - * Create arguments string to keep arity. - */ - -function createArgumentsString (arity) { - var str = '' - - for (var i = 0; i < arity; i++) { - str += ', arg' + i - } - - return str.substr(2) -} - -/** - * Create stack string from stack. - */ - -function createStackString (stack) { - var str = this.name + ': ' + this.namespace - - if (this.message) { - str += ' deprecated ' + this.message - } - - for (var i = 0; i < stack.length; i++) { - str += '\n at ' + stack[i].toString() - } - - return str -} - -/** - * Create deprecate for namespace in caller. 
- */ - -function depd (namespace) { - if (!namespace) { - throw new TypeError('argument namespace is required') - } - - var stack = getStack() - var site = callSiteLocation(stack[1]) - var file = site[0] - - function deprecate (message) { - // call to self as log - log.call(deprecate, message) - } - - deprecate._file = file - deprecate._ignored = isignored(namespace) - deprecate._namespace = namespace - deprecate._traced = istraced(namespace) - deprecate._warned = Object.create(null) - - deprecate.function = wrapfunction - deprecate.property = wrapproperty - - return deprecate -} - -/** - * Determine if event emitter has listeners of a given type. - * - * The way to do this check is done three different ways in Node.js >= 0.8 - * so this consolidates them into a minimal set using instance methods. - * - * @param {EventEmitter} emitter - * @param {string} type - * @returns {boolean} - * @private - */ - -function eehaslisteners (emitter, type) { - var count = typeof emitter.listenerCount !== 'function' - ? emitter.listeners(type).length - : emitter.listenerCount(type) - - return count > 0 -} - -/** - * Determine if namespace is ignored. - */ - -function isignored (namespace) { - if (process.noDeprecation) { - // --no-deprecation support - return true - } - - var str = process.env.NO_DEPRECATION || '' - - // namespace ignored - return containsNamespace(str, namespace) -} - -/** - * Determine if namespace is traced. - */ - -function istraced (namespace) { - if (process.traceDeprecation) { - // --trace-deprecation support - return true - } - - var str = process.env.TRACE_DEPRECATION || '' - - // namespace traced - return containsNamespace(str, namespace) -} - -/** - * Display deprecation message. - */ - -function log (message, site) { - var haslisteners = eehaslisteners(process, 'deprecation') - - // abort early if no destination - if (!haslisteners && this._ignored) { - return - } - - var caller - var callFile - var callSite - var depSite - var i = 0 - var seen = false - var stack = getStack() - var file = this._file - - if (site) { - // provided site - depSite = site - callSite = callSiteLocation(stack[1]) - callSite.name = depSite.name - file = callSite[0] - } else { - // get call site - i = 2 - depSite = callSiteLocation(stack[i]) - callSite = depSite - } - - // get caller of deprecated thing in relation to file - for (; i < stack.length; i++) { - caller = callSiteLocation(stack[i]) - callFile = caller[0] - - if (callFile === file) { - seen = true - } else if (callFile === this._file) { - file = this._file - } else if (seen) { - break - } - } - - var key = caller - ? depSite.join(':') + '__' + caller.join(':') - : undefined - - if (key !== undefined && key in this._warned) { - // already warned - return - } - - this._warned[key] = true - - // generate automatic message from call site - var msg = message - if (!msg) { - msg = callSite === depSite || !callSite.name - ? defaultMessage(depSite) - : defaultMessage(callSite) - } - - // emit deprecation if listeners exist - if (haslisteners) { - var err = DeprecationError(this._namespace, msg, stack.slice(i)) - process.emit('deprecation', err) - return - } - - // format and write message - var format = process.stderr.isTTY - ? formatColor - : formatPlain - var output = format.call(this, msg, caller, stack.slice(i)) - process.stderr.write(output + '\n', 'utf8') -} - -/** - * Get call site location as array. 
- */ - -function callSiteLocation (callSite) { - var file = callSite.getFileName() || '' - var line = callSite.getLineNumber() - var colm = callSite.getColumnNumber() - - if (callSite.isEval()) { - file = callSite.getEvalOrigin() + ', ' + file - } - - var site = [file, line, colm] - - site.callSite = callSite - site.name = callSite.getFunctionName() - - return site -} - -/** - * Generate a default message from the site. - */ - -function defaultMessage (site) { - var callSite = site.callSite - var funcName = site.name - - // make useful anonymous name - if (!funcName) { - funcName = '' - } - - var context = callSite.getThis() - var typeName = context && callSite.getTypeName() - - // ignore useless type name - if (typeName === 'Object') { - typeName = undefined - } - - // make useful type name - if (typeName === 'Function') { - typeName = context.name || typeName - } - - return typeName && callSite.getMethodName() - ? typeName + '.' + funcName - : funcName -} - -/** - * Format deprecation message without color. - */ - -function formatPlain (msg, caller, stack) { - var timestamp = new Date().toUTCString() - - var formatted = timestamp + - ' ' + this._namespace + - ' deprecated ' + msg - - // add stack trace - if (this._traced) { - for (var i = 0; i < stack.length; i++) { - formatted += '\n at ' + stack[i].toString() - } - - return formatted - } - - if (caller) { - formatted += ' at ' + formatLocation(caller) - } - - return formatted -} - -/** - * Format deprecation message with color. - */ - -function formatColor (msg, caller, stack) { - var formatted = '\x1b[36;1m' + this._namespace + '\x1b[22;39m' + // bold cyan - ' \x1b[33;1mdeprecated\x1b[22;39m' + // bold yellow - ' \x1b[0m' + msg + '\x1b[39m' // reset - - // add stack trace - if (this._traced) { - for (var i = 0; i < stack.length; i++) { - formatted += '\n \x1b[36mat ' + stack[i].toString() + '\x1b[39m' // cyan - } - - return formatted - } - - if (caller) { - formatted += ' \x1b[36m' + formatLocation(caller) + '\x1b[39m' // cyan - } - - return formatted -} - -/** - * Format call site location. - */ - -function formatLocation (callSite) { - return relative(basePath, callSite[0]) + - ':' + callSite[1] + - ':' + callSite[2] -} - -/** - * Get the stack as array of call sites. - */ - -function getStack () { - var limit = Error.stackTraceLimit - var obj = {} - var prep = Error.prepareStackTrace - - Error.prepareStackTrace = prepareObjectStackTrace - Error.stackTraceLimit = Math.max(10, limit) - - // capture the stack - Error.captureStackTrace(obj) - - // slice this function off the top - var stack = obj.stack.slice(1) - - Error.prepareStackTrace = prep - Error.stackTraceLimit = limit - - return stack -} - -/** - * Capture call site stack from v8. - */ - -function prepareObjectStackTrace (obj, stack) { - return stack -} - -/** - * Return a wrapped function in a deprecation message. - */ - -function wrapfunction (fn, message) { - if (typeof fn !== 'function') { - throw new TypeError('argument fn must be a function') - } - - var args = createArgumentsString(fn.length) - var stack = getStack() - var site = callSiteLocation(stack[1]) - - site.name = fn.name - - // eslint-disable-next-line no-new-func - var deprecatedfn = new Function('fn', 'log', 'deprecate', 'message', 'site', - '"use strict"\n' + - 'return function (' + args + ') {' + - 'log.call(deprecate, message, site)\n' + - 'return fn.apply(this, arguments)\n' + - '}')(fn, log, this, message, site) - - return deprecatedfn -} - -/** - * Wrap property in a deprecation message. 
- */ - -function wrapproperty (obj, prop, message) { - if (!obj || (typeof obj !== 'object' && typeof obj !== 'function')) { - throw new TypeError('argument obj must be object') - } - - var descriptor = Object.getOwnPropertyDescriptor(obj, prop) - - if (!descriptor) { - throw new TypeError('must call property on owner object') - } - - if (!descriptor.configurable) { - throw new TypeError('property must be configurable') - } - - var deprecate = this - var stack = getStack() - var site = callSiteLocation(stack[1]) - - // set site name - site.name = prop - - // convert data descriptor - if ('value' in descriptor) { - descriptor = convertDataDescriptorToAccessor(obj, prop, message) - } - - var get = descriptor.get - var set = descriptor.set - - // wrap getter - if (typeof get === 'function') { - descriptor.get = function getter () { - log.call(deprecate, message, site) - return get.apply(this, arguments) - } - } - - // wrap setter - if (typeof set === 'function') { - descriptor.set = function setter () { - log.call(deprecate, message, site) - return set.apply(this, arguments) - } - } - - Object.defineProperty(obj, prop, descriptor) -} - -/** - * Create DeprecationError for deprecation - */ - -function DeprecationError (namespace, message, stack) { - var error = new Error() - var stackString - - Object.defineProperty(error, 'constructor', { - value: DeprecationError - }) - - Object.defineProperty(error, 'message', { - configurable: true, - enumerable: false, - value: message, - writable: true - }) - - Object.defineProperty(error, 'name', { - enumerable: false, - configurable: true, - value: 'DeprecationError', - writable: true - }) - - Object.defineProperty(error, 'namespace', { - configurable: true, - enumerable: false, - value: namespace, - writable: true - }) - - Object.defineProperty(error, 'stack', { - configurable: true, - enumerable: false, - get: function () { - if (stackString !== undefined) { - return stackString - } - - // prepare stack trace - return (stackString = createStackString.call(this, stack)) - }, - set: function setter (val) { - stackString = val - } - }) - - return error -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/get-intrinsic/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/get-intrinsic/README.md deleted file mode 100644 index 3aa0bba4037e57211920a2cf8b57194b61eb24d9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/get-intrinsic/README.md +++ /dev/null @@ -1,71 +0,0 @@ -# get-intrinsic [![Version Badge][npm-version-svg]][package-url] - -[![github actions][actions-image]][actions-url] -[![coverage][codecov-image]][codecov-url] -[![dependency status][deps-svg]][deps-url] -[![dev dependency status][dev-deps-svg]][dev-deps-url] -[![License][license-image]][license-url] -[![Downloads][downloads-image]][downloads-url] - -[![npm badge][npm-badge-png]][package-url] - -Get and robustly cache all JS language-level intrinsics at first require time. - -See the syntax described [in the JS spec](https://tc39.es/ecma262/#sec-well-known-intrinsic-objects) for reference. 
- -## Example - -```js -var GetIntrinsic = require('get-intrinsic'); -var assert = require('assert'); - -// static methods -assert.equal(GetIntrinsic('%Math.pow%'), Math.pow); -assert.equal(Math.pow(2, 3), 8); -assert.equal(GetIntrinsic('%Math.pow%')(2, 3), 8); -delete Math.pow; -assert.equal(GetIntrinsic('%Math.pow%')(2, 3), 8); - -// instance methods -var arr = [1]; -assert.equal(GetIntrinsic('%Array.prototype.push%'), Array.prototype.push); -assert.deepEqual(arr, [1]); - -arr.push(2); -assert.deepEqual(arr, [1, 2]); - -GetIntrinsic('%Array.prototype.push%').call(arr, 3); -assert.deepEqual(arr, [1, 2, 3]); - -delete Array.prototype.push; -GetIntrinsic('%Array.prototype.push%').call(arr, 4); -assert.deepEqual(arr, [1, 2, 3, 4]); - -// missing features -delete JSON.parse; // to simulate a real intrinsic that is missing in the environment -assert.throws(() => GetIntrinsic('%JSON.parse%')); -assert.equal(undefined, GetIntrinsic('%JSON.parse%', true)); -``` - -## Tests -Simply clone the repo, `npm install`, and run `npm test` - -## Security - -Please email [@ljharb](https://github.com/ljharb) or see https://tidelift.com/security if you have a potential security vulnerability to report. - -[package-url]: https://npmjs.org/package/get-intrinsic -[npm-version-svg]: https://versionbadg.es/ljharb/get-intrinsic.svg -[deps-svg]: https://david-dm.org/ljharb/get-intrinsic.svg -[deps-url]: https://david-dm.org/ljharb/get-intrinsic -[dev-deps-svg]: https://david-dm.org/ljharb/get-intrinsic/dev-status.svg -[dev-deps-url]: https://david-dm.org/ljharb/get-intrinsic#info=devDependencies -[npm-badge-png]: https://nodei.co/npm/get-intrinsic.png?downloads=true&stars=true -[license-image]: https://img.shields.io/npm/l/get-intrinsic.svg -[license-url]: LICENSE -[downloads-image]: https://img.shields.io/npm/dm/get-intrinsic.svg -[downloads-url]: https://npm-stat.com/charts.html?package=get-intrinsic -[codecov-image]: https://codecov.io/gh/ljharb/get-intrinsic/branch/main/graphs/badge.svg -[codecov-url]: https://app.codecov.io/gh/ljharb/get-intrinsic/ -[actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/ljharb/get-intrinsic -[actions-url]: https://github.com/ljharb/get-intrinsic/actions diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/side-channel/CHANGELOG.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/side-channel/CHANGELOG.md deleted file mode 100644 index a3d161fac7fe9bbbb7d1148b4f8d9b682f9f1648..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/side-channel/CHANGELOG.md +++ /dev/null @@ -1,65 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) -and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 
- -## [v1.0.4](https://github.com/ljharb/side-channel/compare/v1.0.3...v1.0.4) - 2020-12-29 - -### Commits - -- [Tests] migrate tests to Github Actions [`10909cb`](https://github.com/ljharb/side-channel/commit/10909cbf8ce9c0bf96f604cf13d7ffd5a22c2d40) -- [Refactor] Use a linked list rather than an array, and move accessed nodes to the beginning [`195613f`](https://github.com/ljharb/side-channel/commit/195613f28b5c1e6072ef0b61b5beebaf2b6a304e) -- [meta] do not publish github action workflow files [`290ec29`](https://github.com/ljharb/side-channel/commit/290ec29cd21a60585145b4a7237ec55228c52c27) -- [Tests] run `nyc` on all tests; use `tape` runner [`ea6d030`](https://github.com/ljharb/side-channel/commit/ea6d030ff3fe6be2eca39e859d644c51ecd88869) -- [actions] add "Allow Edits" workflow [`d464d8f`](https://github.com/ljharb/side-channel/commit/d464d8fe52b5eddf1504a0ed97f0941a90f32c15) -- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `auto-changelog` [`02daca8`](https://github.com/ljharb/side-channel/commit/02daca87c6809821c97be468d1afa2f5ef447383) -- [Refactor] use `call-bind` and `get-intrinsic` instead of `es-abstract` [`e09d481`](https://github.com/ljharb/side-channel/commit/e09d481528452ebafa5cdeae1af665c35aa2deee) -- [Deps] update `object.assign` [`ee83aa8`](https://github.com/ljharb/side-channel/commit/ee83aa81df313b5e46319a63adb05cf0c179079a) -- [actions] update rebase action to use checkout v2 [`7726b0b`](https://github.com/ljharb/side-channel/commit/7726b0b058b632fccea709f58960871defaaa9d7) - -## [v1.0.3](https://github.com/ljharb/side-channel/compare/v1.0.2...v1.0.3) - 2020-08-23 - -### Commits - -- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `auto-changelog`, `tape` [`1f10561`](https://github.com/ljharb/side-channel/commit/1f105611ef3acf32dec8032ae5c0baa5e56bb868) -- [Deps] update `es-abstract`, `object-inspect` [`bc20159`](https://github.com/ljharb/side-channel/commit/bc201597949a505e37cef9eaf24c7010831e6f03) -- [Dev Deps] update `@ljharb/eslint-config`, `tape` [`b9b2b22`](https://github.com/ljharb/side-channel/commit/b9b2b225f9e0ea72a6ec2b89348f0bd690bc9ed1) -- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `tape` [`7055ab4`](https://github.com/ljharb/side-channel/commit/7055ab4de0860606efd2003674a74f1fe6ebc07e) -- [Dev Deps] update `auto-changelog`; add `aud` [`d278c37`](https://github.com/ljharb/side-channel/commit/d278c37d08227be4f84aa769fcd919e73feeba40) -- [actions] switch Automatic Rebase workflow to `pull_request_target` event [`3bcf982`](https://github.com/ljharb/side-channel/commit/3bcf982faa122745b39c33ce83d32fdf003741c6) -- [Tests] only audit prod deps [`18d01c4`](https://github.com/ljharb/side-channel/commit/18d01c4015b82a3d75044c4d5ba7917b2eac01ec) -- [Deps] update `es-abstract` [`6ab096d`](https://github.com/ljharb/side-channel/commit/6ab096d9de2b482cf5e0717e34e212f5b2b9bc9a) -- [Dev Deps] update `tape` [`9dc174c`](https://github.com/ljharb/side-channel/commit/9dc174cc651dfd300b4b72da936a0a7eda5f9452) -- [Deps] update `es-abstract` [`431d0f0`](https://github.com/ljharb/side-channel/commit/431d0f0ff11fbd2ae6f3115582a356d3a1cfce82) -- [Deps] update `es-abstract` [`49869fd`](https://github.com/ljharb/side-channel/commit/49869fd323bf4453f0ba515c0fb265cf5ab7b932) -- [meta] Add package.json to package's exports [`77d9cdc`](https://github.com/ljharb/side-channel/commit/77d9cdceb2a9e47700074f2ae0c0a202e7dac0d4) - -## [v1.0.2](https://github.com/ljharb/side-channel/compare/v1.0.1...v1.0.2) - 2019-12-20 - -### Commits - -- [Dev Deps] update 
`@ljharb/eslint-config`, `tape` [`4a526df`](https://github.com/ljharb/side-channel/commit/4a526df44e4701566ed001ec78546193f818b082) -- [Deps] update `es-abstract` [`d4f6e62`](https://github.com/ljharb/side-channel/commit/d4f6e629b6fb93a07415db7f30d3c90fd7f264fe) - -## [v1.0.1](https://github.com/ljharb/side-channel/compare/v1.0.0...v1.0.1) - 2019-12-01 - -### Commits - -- [Fix] add missing "exports" [`d212907`](https://github.com/ljharb/side-channel/commit/d2129073abf0701a5343bf28aa2145617604dc2e) - -## v1.0.0 - 2019-12-01 - -### Commits - -- Initial implementation [`dbebd3a`](https://github.com/ljharb/side-channel/commit/dbebd3a4b5ed64242f9a6810efe7c4214cd8cde4) -- Initial tests [`73bdefe`](https://github.com/ljharb/side-channel/commit/73bdefe568c9076cf8c0b8719bc2141aec0e19b8) -- Initial commit [`43c03e1`](https://github.com/ljharb/side-channel/commit/43c03e1c2849ec50a87b7a5cd76238a62b0b8770) -- npm init [`5c090a7`](https://github.com/ljharb/side-channel/commit/5c090a765d66a5527d9889b89aeff78dee91348c) -- [meta] add `auto-changelog` [`a5c4e56`](https://github.com/ljharb/side-channel/commit/a5c4e5675ec02d5eb4d84b4243aeea2a1d38fbec) -- [actions] add automatic rebasing / merge commit blocking [`bab1683`](https://github.com/ljharb/side-channel/commit/bab1683d8f9754b086e94397699fdc645e0d7077) -- [meta] add `funding` field; create FUNDING.yml [`63d7aea`](https://github.com/ljharb/side-channel/commit/63d7aeaf34f5650650ae97ca4b9fae685bd0937c) -- [Tests] add `npm run lint` [`46a5a81`](https://github.com/ljharb/side-channel/commit/46a5a81705cd2664f83df232c01dbbf2ee952885) -- Only apps should have lockfiles [`8b16b03`](https://github.com/ljharb/side-channel/commit/8b16b0305f00895d90c4e2e5773c854cfea0e448) -- [meta] add `safe-publish-latest` [`2f098ef`](https://github.com/ljharb/side-channel/commit/2f098ef092a39399cfe548b19a1fc03c2fd2f490) diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_31.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_31.py deleted file mode 100644 index dc40b353e59baa5ce96de55bd99709c148ae917a..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_31.py +++ /dev/null @@ -1,48 +0,0 @@ - -import re - - -def is_spam(message: str) -> bool: - spam_keywords = [ - "무료거부", - "프로젝트", - "지원금", - "특별", - "혜택", - "상승", - "수익", - "웹그룹", - "광고", - "초대", - "폭등" - ] - - normal_keywords = [ - "안녕하세요", - "하루", - "이제", - "문의", - "수고", - "회의", - "친구", - ] - - message = message.lower().strip() - - spam_count = 0 - normal_count = 0 - - # Count spam keywords in the message - for keyword in spam_keywords: - if keyword in message: - spam_count += 1 - - # Count normal keywords in the message - for keyword in normal_keywords: - if keyword in message: - normal_count += 1 - - if spam_count > normal_count: - return True - - return False diff --git a/spaces/flax-community/chef-transformer/app.py b/spaces/flax-community/chef-transformer/app.py deleted file mode 100644 index 3add2e0fd94026c413fbd83d389e471eb719c5e1..0000000000000000000000000000000000000000 --- a/spaces/flax-community/chef-transformer/app.py +++ /dev/null @@ -1,320 +0,0 @@ -import streamlit as st - -import torch -from transformers import pipeline, set_seed -from transformers import AutoTokenizer - -from PIL import ( - ImageFont, -) - -import os -import re -import random -import textwrap -from examples import EXAMPLES -import dummy -import meta -from utils import ext -from utils.api import generate_cook_image -from utils.draw import generate_food_with_logo_image, 
generate_recipe_image -from utils.st import ( - remote_css, - local_css, - -) -from utils.utils import ( - load_image_from_url, - load_image_from_local, - image_to_base64, - pure_comma_separation -) - - -class TextGeneration: - def __init__(self): - self.debug = False - self.dummy_outputs = dummy.recipes - self.tokenizer = None - self.generator = None - self.api_ids = [] - self.api_keys = [] - self.api_test = 2 - self.task = "text2text-generation" - self.model_name_or_path = "flax-community/t5-recipe-generation" - self.color_frame = "#ffffff" - self.main_frame = "asset/frame/recipe-bg.png" - self.no_food = "asset/frame/no_food.png" - self.logo_frame = "asset/frame/logo.png" - self.chef_frames = { - "scheherazade": "asset/frame/food-image-logo-bg-s.png", - "giovanni": "asset/frame/food-image-logo-bg-g.png", - } - self.fonts = { - "title": ImageFont.truetype("asset/fonts/Poppins-Bold.ttf", 70), - "sub_title": ImageFont.truetype("asset/fonts/Poppins-Medium.ttf", 30), - "body_bold": ImageFont.truetype("asset/fonts/Montserrat-Bold.ttf", 22), - "body": ImageFont.truetype("asset/fonts/Montserrat-Regular.ttf", 18), - - } - set_seed(42) - - def _skip_special_tokens_and_prettify(self, text): - recipe_maps = {"<sep>": "--", "<section>
          ": "\n"} - recipe_map_pattern = "|".join(map(re.escape, recipe_maps.keys())) - - text = re.sub( - recipe_map_pattern, - lambda m: recipe_maps[m.group()], - re.sub("|".join(self.tokenizer.all_special_tokens), "", text) - ) - - data = {"title": "", "ingredients": [], "directions": []} - for section in text.split("\n"): - section = section.strip() - if section.startswith("title:"): - data["title"] = " ".join( - [w.strip().capitalize() for w in section.replace("title:", "").strip().split() if w.strip()] - ) - elif section.startswith("ingredients:"): - data["ingredients"] = [s.strip() for s in section.replace("ingredients:", "").split('--')] - elif section.startswith("directions:"): - data["directions"] = [s.strip() for s in section.replace("directions:", "").split('--')] - else: - pass - - return data - - def load_pipeline(self): - self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path) - self.generator = pipeline(self.task, model=self.model_name_or_path, tokenizer=self.model_name_or_path) - - def load_api(self): - app_ids = os.getenv("EDAMAM_APP_ID") - app_ids = app_ids.split(",") if app_ids else [] - app_keys = os.getenv("EDAMAM_APP_KEY") - app_keys = app_keys.split(",") if app_keys else [] - - if len(app_ids) != len(app_keys): - self.api_ids = [] - self.api_keys = [] - - self.api_ids = app_ids - self.api_keys = app_keys - - def load(self): - self.load_api() - if not self.debug: - self.load_pipeline() - - def prepare_frame(self, recipe, chef_name): - frame_path = self.chef_frames[chef_name.lower()] - food_logo = generate_food_with_logo_image(frame_path, self.logo_frame, recipe["image"]) - frame = generate_recipe_image( - recipe, - self.main_frame, - food_logo, - self.fonts, - bg_color="#ffffff" - ) - return frame - - def generate(self, items, generation_kwargs): - recipe = self.dummy_outputs[0] - # recipe = self.dummy_outputs[random.randint(0, len(self.dummy_outputs) - 1)] - - if not self.debug: - generation_kwargs["num_return_sequences"] = 1 - # generation_kwargs["return_full_text"] = False - generation_kwargs["return_tensors"] = True - generation_kwargs["return_text"] = False - - generated_ids = self.generator( - items, - **generation_kwargs, - )[0]["generated_token_ids"] - recipe = self.tokenizer.decode(generated_ids, skip_special_tokens=False) - recipe = self._skip_special_tokens_and_prettify(recipe) - - if self.api_ids and self.api_keys and len(self.api_ids) == len(self.api_keys): - test = 0 - for i in range(len(self.api_keys)): - if test > self.api_test: - recipe["image"] = None - break - image = generate_cook_image(recipe["title"].lower(), self.api_ids[i], self.api_keys[i]) - test += 1 - if image: - recipe["image"] = image - break - else: - recipe["image"] = None - - return recipe - - def generate_frame(self, recipe, chef_name): - return self.prepare_frame(recipe, chef_name) - - -@st.cache(allow_output_mutation=True) -def load_text_generator(): - generator = TextGeneration() - generator.load() - return generator - - -chef_top = { - "max_length": 512, - "min_length": 64, - "no_repeat_ngram_size": 3, - "do_sample": True, - "top_k": 60, - "top_p": 0.95, - "num_return_sequences": 1 -} -chef_beam = { - "max_length": 512, - "min_length": 64, - "no_repeat_ngram_size": 3, - "early_stopping": True, - "num_beams": 5, - "length_penalty": 1.5, - "num_return_sequences": 1 -} - - -def main(): - st.set_page_config( - page_title="Chef Transformer", - page_icon="🍲", - layout="wide", - initial_sidebar_state="expanded" - ) - generator = load_text_generator() - # if hasattr(st, 
"session_state"): - # if 'get_random_frame' not in st.session_state: - # st.session_state.get_random_frame = generator.frames[0] - # else: - # get_random_frame = generator.frames[0] - - remote_css("https://fonts.googleapis.com/css2?family=Montserrat:wght@400;600&family=Poppins:wght@600&display=swap") - local_css("asset/css/style.css") - - col1, col2 = st.columns([6, 4]) - with col2: - st.image(load_image_from_local("asset/images/chef-transformer-transparent.png"), width=300) - st.markdown(meta.SIDEBAR_INFO, unsafe_allow_html=True) - - with st.expander("Where did this story start?", expanded=True): - st.markdown(meta.STORY, unsafe_allow_html=True) - - with col1: - st.markdown(meta.HEADER_INFO, unsafe_allow_html=True) - - st.markdown(meta.CHEF_INFO, unsafe_allow_html=True) - chef = st.selectbox("Choose your chef", index=0, options=["Chef Scheherazade", "Chef Giovanni"]) - - prompts = list(EXAMPLES.keys()) + ["Custom"] - prompt = st.selectbox( - 'Examples (select from this list)', - prompts, - # index=len(prompts) - 1, - index=0 - ) - - if prompt == "Custom": - prompt_box = "" - else: - prompt_box = EXAMPLES[prompt] - - items = st.text_area( - 'Insert your food items here (separated by `,`): ', - pure_comma_separation(prompt_box, return_list=False), - ) - items = pure_comma_separation(items, return_list=False) - entered_items = st.empty() - - recipe_button = st.button('Get Recipe!') - - st.markdown( - "
          ", - unsafe_allow_html=True - ) - if recipe_button: - # if hasattr(st, "session_state"): - # st.session_state.get_random_frame = generator.frames[random.randint(0, len(generator.frames)) - 1] - # else: - # get_random_frame = generator.frames[random.randint(0, len(generator.frames)) - 1] - - entered_items.markdown("**Generate recipe for:** " + items) - with st.spinner("Generating recipe..."): - - if not isinstance(items, str) or not len(items) > 1: - entered_items.markdown( - f"**{chef}** would like to know what ingredients do you like to use in " - f"your food? " - ) - else: - gen_kw = chef_top if chef == "Chef Scheherazade" else chef_beam - generated_recipe = generator.generate(items, gen_kw) - - title = generated_recipe["title"] - food_image = generated_recipe["image"] - food_image = load_image_from_url(food_image, rgba_mode=True, default_image=generator.no_food) - food_image = image_to_base64(food_image) - - ingredients = ext.ingredients( - generated_recipe["ingredients"], - pure_comma_separation(items, return_list=True) - ) - # ingredients = [textwrap.fill(item, 10).replace("\n", "
          ") for item in ingredients] - - directions = ext.directions(generated_recipe["directions"]) - # directions = [textwrap.fill(item, 70).replace("\n", "
          ") for item in directions] - - generated_recipe["by"] = chef - - r1, r2 = st.columns([6, 2]) - - with r2: - # st.write(st.session_state.get_random_frame) - # if hasattr(st, "session_state"): - # recipe_post = generator.generate_frame(generated_recipe, st.session_state.get_random_frame) - # else: - # recipe_post = generator.generate_frame(generated_recipe, get_random_frame) - - recipe_post = generator.generate_frame(generated_recipe, chef.split()[-1]) - - st.image( - recipe_post, - # width=500, - caption="Save image and share on your social media", - use_column_width="auto", - output_format="PNG" - ) - - with r1: - st.markdown( - " ".join([ - "
          ", - "
          ", - f"", - f"

          {title}

          ", - "
          ", - '
          ', - "

          Ingredients

          ", - "
            ", - " ".join([f'
          • {item}
          • ' for item in ingredients]), - "
          ", - "

          Directions

          ", - "
            ", - " ".join([f'
          1. {item}
          2. ' for item in directions]), - "
          ", - "
          " - ]), - unsafe_allow_html=True - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/florim/MedGPT/autogpt/logs.py b/spaces/florim/MedGPT/autogpt/logs.py deleted file mode 100644 index 35037404a98f7be9b7d577b625cc190ca27f4566..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/autogpt/logs.py +++ /dev/null @@ -1,332 +0,0 @@ -"""Logging module for Auto-GPT.""" -import json -import logging -import os -import random -import re -import time -import traceback -from logging import LogRecord - -from colorama import Fore, Style - -from autogpt.config import Config, Singleton -from autogpt.speech import say_text - -CFG = Config() - - -class Logger(metaclass=Singleton): - """ - Logger that handle titles in different colors. - Outputs logs in console, activity.log, and errors.log - For console handler: simulates typing - """ - - def __init__(self): - # create log directory if it doesn't exist - this_files_dir_path = os.path.dirname(__file__) - log_dir = os.path.join(this_files_dir_path, "../logs") - if not os.path.exists(log_dir): - os.makedirs(log_dir) - - log_file = "activity.log" - error_file = "error.log" - - console_formatter = AutoGptFormatter("%(title_color)s %(message)s") - - # Create a handler for console which simulate typing - self.typing_console_handler = TypingConsoleHandler() - self.typing_console_handler.setLevel(logging.INFO) - self.typing_console_handler.setFormatter(console_formatter) - - # Create a handler for console without typing simulation - self.console_handler = ConsoleHandler() - self.console_handler.setLevel(logging.DEBUG) - self.console_handler.setFormatter(console_formatter) - - # Info handler in activity.log - self.file_handler = logging.FileHandler( - os.path.join(log_dir, log_file), "a", "utf-8" - ) - self.file_handler.setLevel(logging.DEBUG) - info_formatter = AutoGptFormatter( - "%(asctime)s %(levelname)s %(title)s %(message_no_color)s" - ) - self.file_handler.setFormatter(info_formatter) - - # Error handler error.log - error_handler = logging.FileHandler( - os.path.join(log_dir, error_file), "a", "utf-8" - ) - error_handler.setLevel(logging.ERROR) - error_formatter = AutoGptFormatter( - "%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d %(title)s" - " %(message_no_color)s" - ) - error_handler.setFormatter(error_formatter) - - self.typing_logger = logging.getLogger("TYPER") - self.typing_logger.addHandler(self.typing_console_handler) - self.typing_logger.addHandler(self.file_handler) - self.typing_logger.addHandler(error_handler) - self.typing_logger.setLevel(logging.DEBUG) - - self.logger = logging.getLogger("LOGGER") - self.logger.addHandler(self.console_handler) - self.logger.addHandler(self.file_handler) - self.logger.addHandler(error_handler) - self.logger.setLevel(logging.DEBUG) - - def typewriter_log( - self, title="", title_color="", content="", speak_text=False, level=logging.INFO - ): - if speak_text and CFG.speak_mode: - say_text(f"{title}. 
{content}") - - if content: - if isinstance(content, list): - content = " ".join(content) - else: - content = "" - - self.typing_logger.log( - level, content, extra={"title": title, "color": title_color} - ) - - def debug( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.DEBUG) - - def warn( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.WARN) - - def error(self, title, message=""): - self._log(title, Fore.RED, message, logging.ERROR) - - def _log(self, title="", title_color="", message="", level=logging.INFO): - if message: - if isinstance(message, list): - message = " ".join(message) - self.logger.log(level, message, extra={"title": title, "color": title_color}) - - def set_level(self, level): - self.logger.setLevel(level) - self.typing_logger.setLevel(level) - - def double_check(self, additionalText=None): - if not additionalText: - additionalText = ( - "Please ensure you've setup and configured everything" - " correctly. Read https://github.com/Torantulino/Auto-GPT#readme to " - "double check. You can also create a github issue or join the discord" - " and ask there!" - ) - - self.typewriter_log("DOUBLE CHECK CONFIGURATION", Fore.YELLOW, additionalText) - - -""" -Output stream to console using simulated typing -""" - - -class TypingConsoleHandler(logging.StreamHandler): - def emit(self, record): - min_typing_speed = 0.05 - max_typing_speed = 0.01 - - msg = self.format(record) - try: - words = msg.split() - for i, word in enumerate(words): - print(word, end="", flush=True) - if i < len(words) - 1: - print(" ", end="", flush=True) - typing_speed = random.uniform(min_typing_speed, max_typing_speed) - time.sleep(typing_speed) - # type faster after each word - min_typing_speed = min_typing_speed * 0.95 - max_typing_speed = max_typing_speed * 0.95 - print() - except Exception: - self.handleError(record) - - -class ConsoleHandler(logging.StreamHandler): - def emit(self, record) -> None: - msg = self.format(record) - try: - print(msg) - except Exception: - self.handleError(record) - - -class AutoGptFormatter(logging.Formatter): - """ - Allows to handle custom placeholders 'title_color' and 'message_no_color'. - To use this formatter, make sure to pass 'color', 'title' as log extras. 
- """ - - def format(self, record: LogRecord) -> str: - if hasattr(record, "color"): - record.title_color = ( - getattr(record, "color") - + getattr(record, "title") - + " " - + Style.RESET_ALL - ) - else: - record.title_color = getattr(record, "title") - if hasattr(record, "msg"): - record.message_no_color = remove_color_codes(getattr(record, "msg")) - else: - record.message_no_color = "" - return super().format(record) - - -def remove_color_codes(s: str) -> str: - ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])") - return ansi_escape.sub("", s) - - -logger = Logger() - - -def print_assistant_thoughts(ai_name, assistant_reply): - """Prints the assistant's thoughts to the console""" - from autogpt.json_utils.json_fix_llm import ( - attempt_to_fix_json_by_finding_outermost_brackets, - fix_and_parse_json, - ) - - try: - try: - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON in assistant thoughts\n", assistant_reply) - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - if isinstance(assistant_reply_json, str): - assistant_reply_json = fix_and_parse_json(assistant_reply_json) - - # Check if assistant_reply_json is a string and attempt to parse - # it into a JSON object - if isinstance(assistant_reply_json, str): - try: - assistant_reply_json = json.loads(assistant_reply_json) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - assistant_reply_json = ( - attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply_json - ) - ) - - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - if not isinstance(assistant_reply_json, dict): - assistant_reply_json = {} - assistant_thoughts = assistant_reply_json.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - logger.typewriter_log( - "REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}" - ) - - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - - logger.typewriter_log( - "CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}" - ) - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) - else: - logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}") - - return assistant_reply_json - except json.decoder.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - if CFG.speak_mode: - say_text( - "I have received an invalid 
JSON response from the OpenAI API." - " I cannot ignore this response." - ) - - # All other errors, return "Error: + error message" - except Exception: - call_stack = traceback.format_exc() - logger.error("Error: \n", call_stack) - - -def print_assistant_thoughts( - ai_name: object, assistant_reply_json_valid: object -) -> None: - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - - assistant_thoughts = assistant_reply_json_valid.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - logger.typewriter_log("REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}") - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}") - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/gotodoortalkhardsesamnpc.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/gotodoortalkhardsesamnpc.py deleted file mode 100644 index bf8d6b0cbc74b5a48a01291c6162c1656c6640c3..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/gotodoortalkhardsesamnpc.py +++ /dev/null @@ -1,294 +0,0 @@ -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - - -class Guide(NPC): - """ - A simple NPC that wants an agent to go to an object (randomly chosen among object_pos list) - """ - - def __init__(self, color, name, env): - super().__init__(color) - self.name = name - self.env = env - self.npc_type = 0 - - def listen(self, utterance): - if utterance == TalkHardSesameGrammar.construct_utterance([0, 1]): - return self.env.mission - - return None - - def is_near_agent(self): - ax, ay = self.env.agent_pos - wx, wy = self.cur_pos - if (ax == wx and abs(ay - wy) == 1) or (ay == wy and abs(ax - wx) == 1): - return True - return False - - -class TalkHardSesameGrammar(object): - - templates = ["Where is", "Open"] - things = ["sesame", "the exit"] - - grammar_action_space = spaces.MultiDiscrete([len(templates), len(things)]) - - @classmethod - def construct_utterance(cls, action): - return cls.templates[int(action[0])] + " " + cls.things[int(action[1])] + " " - - -class GoToDoorTalkHardSesameNPCEnv(MultiModalMiniGridEnv): - """ - Environment in which the agent is instructed to go to a given object - named using an English text string - """ - - def __init__( - self, - size=5, - hear_yourself=False, - 
diminished_reward=True, - step_penalty=False - ): - assert size >= 5 - - super().__init__( - grid_size=size, - max_steps=5*size**2, - # Set this to True for maximum speed - see_through_walls=True, - actions=MiniGridEnv.Actions, - action_space=spaces.MultiDiscrete([ - len(MiniGridEnv.Actions), - *TalkHardSesameGrammar.grammar_action_space.nvec - ]) - ) - self.hear_yourself = hear_yourself - self.diminished_reward = diminished_reward - self.step_penalty = step_penalty - - self.empty_symbol = "NA \n" - - print({ - "size": size, - "hear_yourself": hear_yourself, - "diminished_reward": diminished_reward, - "step_penalty": step_penalty, - }) - - def _gen_grid(self, width, height): - # Create the grid - self.grid = Grid(width, height) - - # Randomly vary the room width and height - width = self._rand_int(5, width+1) - height = self._rand_int(5, height+1) - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # Generate the 4 doors at random positions - self.door_pos = [] - self.door_front_pos = [] # Remembers positions in front of door to avoid setting wizard here - - self.door_pos.append((self._rand_int(2, width-2), 0)) - self.door_front_pos.append((self.door_pos[-1][0], self.door_pos[-1][1]+1)) - - self.door_pos.append((self._rand_int(2, width-2), height-1)) - self.door_front_pos.append((self.door_pos[-1][0], self.door_pos[-1][1] - 1)) - - self.door_pos.append((0, self._rand_int(2, height-2))) - self.door_front_pos.append((self.door_pos[-1][0] + 1, self.door_pos[-1][1])) - - self.door_pos.append((width-1, self._rand_int(2, height-2))) - self.door_front_pos.append((self.door_pos[-1][0] - 1, self.door_pos[-1][1])) - - # Generate the door colors - self.door_colors = [] - while len(self.door_colors) < len(self.door_pos): - color = self._rand_elem(COLOR_NAMES) - if color in self.door_colors: - continue - self.door_colors.append(color) - - # Place the doors in the grid - for idx, pos in enumerate(self.door_pos): - color = self.door_colors[idx] - self.grid.set(*pos, Door(color)) - - # Set a randomly coloured NPC at a random position - color = self._rand_elem(COLOR_NAMES) - self.wizard = Guide(color, "Gandalf", self) - - # Place it randomly, omitting front of door positions - self.place_obj(self.wizard, - size=(width, height), - reject_fn=lambda _, p: tuple(p) in self.door_front_pos) - - # Randomize the agent start position and orientation - self.place_agent(size=(width, height)) - - # Select a random target door - self.doorIdx = self._rand_int(0, len(self.door_pos)) - self.target_pos = self.door_pos[self.doorIdx] - self.target_color = self.door_colors[self.doorIdx] - - # Generate the mission string - self.mission = 'go to the %s door' % self.target_color - - # Dummy beginning string - self.beginning_string = "This is what you hear. 
\n" - self.utterance = self.beginning_string - - # utterance appended at the end of each step - self.utterance_history = "" - - self.conversation = self.utterance - - def step(self, action): - p_action = action[0] - utterance_action = action[1:] - - # assert all nan or neither nan - assert len(set(np.isnan(utterance_action))) == 1 - - speak_flag = not all(np.isnan(utterance_action)) - - obs, reward, done, info = super().step(p_action) - - if speak_flag: - utterance = TalkHardSesameGrammar.construct_utterance(utterance_action) - if self.hear_yourself: - self.utterance += "YOU: {} \n".format(utterance) - - self.conversation += "YOU: {} \n".format(utterance) - - # check if near wizard - if self.wizard.is_near_agent(): - reply = self.wizard.listen(utterance) - - if reply: - self.utterance += "{}: {} \n".format(self.wizard.name, reply) - self.conversation += "{}: {} \n".format(self.wizard.name, reply) - - if utterance == TalkHardSesameGrammar.construct_utterance([1, 0]): - ax, ay = self.agent_pos - tx, ty = self.target_pos - - if (ax == tx and abs(ay - ty) == 1) or (ay == ty and abs(ax - tx) == 1): - reward = self._reward() - - for dx, dy in self.door_pos: - if (ax == dx and abs(ay - dy) == 1) or (ay == dy and abs(ax - dx) == 1): - # agent has chosen some door episode, regardless of if the door is correct the episode is over - done = True - - # Don't let the agent open any of the doors - if p_action == self.actions.toggle: - done = True - - if p_action == self.actions.done: - done = True - - # discount - if self.step_penalty: - reward = reward - 0.01 - - # fill observation with text - # fill observation with text - self.append_existing_utterance_to_history() - obs = self.add_utterance_to_observation(obs) - self.reset_utterance() - - return obs, reward, done, info - - def _reward(self): - if self.diminished_reward: - return super()._reward() - else: - return 1.0 - - def render(self, *args, **kwargs): - obs = super().render(*args, **kwargs) - self.window.set_caption(self.conversation, [ - "Gandalf:", - "Jack:", - "John:", - "Where is the exit", - "Open sesame", - ]) - return obs - - -class GoToDoorTalkHardSesameNPCTesting(GoToDoorTalkHardSesameNPCEnv): - def __init__(self): - super().__init__( - size=5, - hear_yourself=False, - diminished_reward=False, - step_penalty=True - ) - -class GoToDoorTalkHardSesameNPC8x8Env(GoToDoorTalkHardSesameNPCEnv): - def __init__(self): - super().__init__(size=8) - - -class GoToDoorTalkHardSesameNPC6x6Env(GoToDoorTalkHardSesameNPCEnv): - def __init__(self): - super().__init__(size=6) - - -# hear yourself -class GoToDoorTalkHardSesameNPCHY8x8Env(GoToDoorTalkHardSesameNPCEnv): - def __init__(self): - super().__init__(size=8, hear_yourself=True) - - -class GoToDoorTalkHardSesameNPCHY6x6Env(GoToDoorTalkHardSesameNPCEnv): - def __init__(self): - super().__init__(size=6, hear_yourself=True) - - -class GoToDoorTalkHardSesameNPCHY5x5Env(GoToDoorTalkHardSesameNPCEnv): - def __init__(self): - super().__init__(size=5, hear_yourself=True) - -register( - id='MiniGrid-GoToDoorTalkHardSesameNPC-Testing-v0', - entry_point='gym_minigrid.envs:GoToDoorTalkHardSesameNPCTesting' -) - -register( - id='MiniGrid-GoToDoorTalkHardSesameNPC-5x5-v0', - entry_point='gym_minigrid.envs:GoToDoorTalkHardSesameNPCEnv' -) - -register( - id='MiniGrid-GoToDoorTalkHardSesameNPC-6x6-v0', - entry_point='gym_minigrid.envs:GoToDoorTalkHardSesameNPC6x6Env' -) - -register( - id='MiniGrid-GoToDoorTalkHardSesameNPC-8x8-v0', - entry_point='gym_minigrid.envs:GoToDoorTalkHardSesameNPC8x8Env' -) -register( - 
id='MiniGrid-GoToDoorTalkHardSesameNPCHY-5x5-v0', - entry_point='gym_minigrid.envs:GoToDoorTalkHardSesameNPCHY5x5Env' -) - -register( - id='MiniGrid-GoToDoorTalkHardSesameNPCHY-6x6-v0', - entry_point='gym_minigrid.envs:GoToDoorTalkHardSesameNPCHY6x6Env' -) - -register( - id='MiniGrid-GoToDoorTalkHardSesameNPCHY-8x8-v0', - entry_point='gym_minigrid.envs:GoToDoorTalkHardSesameNPCHY8x8Env' -) diff --git a/spaces/franever/Pix2Pix-Video/README.md b/spaces/franever/Pix2Pix-Video/README.md deleted file mode 100644 index edb752cda7ffef6e83331feabec13c9ebbd3d5ad..0000000000000000000000000000000000000000 --- a/spaces/franever/Pix2Pix-Video/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pix2Pix Video -emoji: 🎨🎞️ -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: true -duplicated_from: AIFILMS/Pix2Pix-Video ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/freddyaboulton/gradio_folium/Dockerfile b/spaces/freddyaboulton/gradio_folium/Dockerfile deleted file mode 100644 index 33b38d8d7f466b83fd6f6afa04c92521432b10cc..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio_folium/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ - -FROM python:3.9 - -WORKDIR /code - -COPY --link --chown=1000 . . - -RUN pip install --no-cache-dir -r requirements.txt - -ENV PYTHONUNBUFFERED=1 GRADIO_ALLOW_FLAGGING=never GRADIO_NUM_PORTS=1 GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 SYSTEM=spaces - -CMD ["python", "app.py"] diff --git a/spaces/freddyaboulton/xgboost-income-prediction-with-explainability/README.md b/spaces/freddyaboulton/xgboost-income-prediction-with-explainability/README.md deleted file mode 100644 index 92d823b25f14cd98310b992f26c6f8a5e4630872..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/xgboost-income-prediction-with-explainability/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: xgboost-income-prediction-with-explainability -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.4 - -app_file: app.py -pinned: false ---- diff --git a/spaces/fsqhn/anime-remove-background/README.md b/spaces/fsqhn/anime-remove-background/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/fsqhn/anime-remove-background/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fuckyoudeki/AutoGPT/tests/test_config.py b/spaces/fuckyoudeki/AutoGPT/tests/test_config.py deleted file mode 100644 index b472a24c78edd1f931a76c68e08ed544bbe61d98..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/tests/test_config.py +++ /dev/null @@ -1,84 +0,0 @@ -from unittest import TestCase - -from autogpt.config import Config - - -class TestConfig(TestCase): - """ - Test cases for the Config class, which handles the configuration settings - for the AI and ensures it behaves as a singleton. - """ - - def setUp(self): - """ - Set up the test environment by creating an instance of the Config class. 
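- A singleton means repeated construction yields one shared object, so, - illustratively, Config() is Config() evaluates to True.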
- """ - self.config = Config() - - def test_singleton(self): - """ - Test if the Config class behaves as a singleton by ensuring that two instances are the same. - """ - config2 = Config() - self.assertIs(self.config, config2) - - def test_initial_values(self): - """ - Test if the initial values of the Config class attributes are set correctly. - """ - self.assertFalse(self.config.debug_mode) - self.assertFalse(self.config.continuous_mode) - self.assertFalse(self.config.speak_mode) - self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo") - self.assertEqual(self.config.smart_llm_model, "gpt-4") - self.assertEqual(self.config.fast_token_limit, 4000) - self.assertEqual(self.config.smart_token_limit, 8000) - - def test_set_continuous_mode(self): - """ - Test if the set_continuous_mode() method updates the continuous_mode attribute. - """ - self.config.set_continuous_mode(True) - self.assertTrue(self.config.continuous_mode) - - def test_set_speak_mode(self): - """ - Test if the set_speak_mode() method updates the speak_mode attribute. - """ - self.config.set_speak_mode(True) - self.assertTrue(self.config.speak_mode) - - def test_set_fast_llm_model(self): - """ - Test if the set_fast_llm_model() method updates the fast_llm_model attribute. - """ - self.config.set_fast_llm_model("gpt-3.5-turbo-test") - self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo-test") - - def test_set_smart_llm_model(self): - """ - Test if the set_smart_llm_model() method updates the smart_llm_model attribute. - """ - self.config.set_smart_llm_model("gpt-4-test") - self.assertEqual(self.config.smart_llm_model, "gpt-4-test") - - def test_set_fast_token_limit(self): - """ - Test if the set_fast_token_limit() method updates the fast_token_limit attribute. - """ - self.config.set_fast_token_limit(5000) - self.assertEqual(self.config.fast_token_limit, 5000) - - def test_set_smart_token_limit(self): - """ - Test if the set_smart_token_limit() method updates the smart_token_limit attribute. - """ - self.config.set_smart_token_limit(9000) - self.assertEqual(self.config.smart_token_limit, 9000) - - def test_set_debug_mode(self): - """ - Test if the set_debug_mode() method updates the debug_mode attribute. - """ - self.config.set_debug_mode(True) - self.assertTrue(self.config.debug_mode) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/cityscapes.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/cityscapes.py deleted file mode 100644 index 81e47a914a1aa2e5458e18669d65ffb742f46fc6..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/cityscapes.py +++ /dev/null @@ -1,217 +0,0 @@ -import os.path as osp -import tempfile - -import annotator.uniformer.mmcv as mmcv -import numpy as np -from annotator.uniformer.mmcv.utils import print_log -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CityscapesDataset(CustomDataset): - """Cityscapes dataset. - - The ``img_suffix`` is fixed to '_leftImg8bit.png' and ``seg_map_suffix`` is - fixed to '_gtFine_labelTrainIds.png' for Cityscapes dataset. 
- """ - - CLASSES = ('road', 'sidewalk', 'building', 'wall', 'fence', 'pole', - 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - PALETTE = [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], - [0, 80, 100], [0, 0, 230], [119, 11, 32]] - - def __init__(self, **kwargs): - super(CityscapesDataset, self).__init__( - img_suffix='_leftImg8bit.png', - seg_map_suffix='_gtFine_labelTrainIds.png', - **kwargs) - - @staticmethod - def _convert_to_label_id(result): - """Convert trainId to id for cityscapes.""" - if isinstance(result, str): - result = np.load(result) - import cityscapesscripts.helpers.labels as CSLabels - result_copy = result.copy() - for trainId, label in CSLabels.trainId2label.items(): - result_copy[result == trainId] = label.id - - return result_copy - - def results2img(self, results, imgfile_prefix, to_label_id): - """Write the segmentation results to images. - - Args: - results (list[list | tuple | ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - to_label_id (bool): whether convert output to label_id for - submission - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. - """ - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - prog_bar = mmcv.ProgressBar(len(self)) - for idx in range(len(self)): - result = results[idx] - if to_label_id: - result = self._convert_to_label_id(result) - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - output = Image.fromarray(result.astype(np.uint8)).convert('P') - import cityscapesscripts.helpers.labels as CSLabels - palette = np.zeros((len(CSLabels.id2label), 3), dtype=np.uint8) - for label_id, label in CSLabels.id2label.items(): - palette[label_id] = label.color - - output.putpalette(palette) - output.save(png_filename) - result_files.append(png_filename) - prog_bar.update() - - return result_files - - def format_results(self, results, imgfile_prefix=None, to_label_id=True): - """Format the results into dir (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str | None): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - to_label_id (bool): whether convert output to label_id for - submission. Default: False - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. 
- """ - - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: ' - f'{len(results)} != {len(self)}') - - if imgfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - imgfile_prefix = tmp_dir.name - else: - tmp_dir = None - result_files = self.results2img(results, imgfile_prefix, to_label_id) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='mIoU', - logger=None, - imgfile_prefix=None, - efficient_test=False): - """Evaluation in Cityscapes/default protocol. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - imgfile_prefix (str | None): The prefix of output image file, - for cityscapes evaluation only. It includes the file path and - the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with cityscapes protocol, it would be - the prefix of output png files. The output files would be - png images under folder "a/b/prefix/xxx.png", where "xxx" is - the image name of cityscapes. If not specified, a temp file - will be created for evaluation. - Default: None. - - Returns: - dict[str, float]: Cityscapes/default metrics. - """ - - eval_results = dict() - metrics = metric.copy() if isinstance(metric, list) else [metric] - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, logger, imgfile_prefix)) - metrics.remove('cityscapes') - if len(metrics) > 0: - eval_results.update( - super(CityscapesDataset, - self).evaluate(results, metrics, logger, efficient_test)) - - return eval_results - - def _evaluate_cityscapes(self, results, logger, imgfile_prefix): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - imgfile_prefix (str | None): The prefix of output image file - - Returns: - dict[str: float]: Cityscapes evaluation results. 
- """ - try: - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install cityscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_files, tmp_dir = self.format_results(results, imgfile_prefix) - - if tmp_dir is None: - result_dir = imgfile_prefix - else: - result_dir = tmp_dir.name - - eval_results = dict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - CSEval.args.evalInstLevelScore = True - CSEval.args.predictionPath = osp.abspath(result_dir) - CSEval.args.evalPixelAccuracy = True - CSEval.args.JSONOutput = False - - seg_map_list = [] - pred_list = [] - - # when evaluating with official cityscapesscripts, - # **_gtFine_labelIds.png is used - for seg_map in mmcv.scandir( - self.ann_dir, 'gtFine_labelIds.png', recursive=True): - seg_map_list.append(osp.join(self.ann_dir, seg_map)) - pred_list.append(CSEval.getPrediction(CSEval.args, seg_map)) - - eval_results.update( - CSEval.evaluateImgLists(pred_list, seg_map_list, CSEval.args)) - - if tmp_dir is not None: - tmp_dir.cleanup() - - return eval_results diff --git a/spaces/gradio-client-demos/stable-diffusion/app.py b/spaces/gradio-client-demos/stable-diffusion/app.py deleted file mode 100644 index ed3248fe82ccc6c193466816bf919c661c827d33..0000000000000000000000000000000000000000 --- a/spaces/gradio-client-demos/stable-diffusion/app.py +++ /dev/null @@ -1,349 +0,0 @@ -import gradio as gr -from datasets import load_dataset -from PIL import Image - -import re -import os -import requests - -from share_btn import community_icon_html, loading_icon_html, share_js - -word_list_dataset = load_dataset("stabilityai/word-list", data_files="list.txt", use_auth_token=True) -word_list = word_list_dataset["train"]['text'] - -is_gpu_busy = False -def infer(prompt, negative, scale): - global is_gpu_busy - for filter in word_list: - if re.search(rf"\b{filter}\b", prompt): - raise gr.Error("Unsafe content found. 
Please try again with different prompts.") - - images = [] - url = os.getenv('JAX_BACKEND_URL') - payload = {'prompt': prompt, 'negative_prompt': negative, 'guidance_scale': scale} - images_request = requests.post(url, json = payload) - for image in images_request.json()["images"]: - image_b64 = (f"data:image/jpeg;base64,{image}") - images.append(image_b64) - - return images - - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - margin-top: 10px; - margin-left: auto; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - #share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } - #prompt-text-input, #negative-prompt-text-input{padding: .45rem 0.625rem} - #component-16{border-top-width: 1px!important;margin-top: 1em} - .image_duplication{position: absolute; width: 100px; left: 50px} -""" - -block = gr.Blocks(css=css) - -examples = [ - [ - 'A 
high tech solarpunk utopia in the Amazon rainforest', - 'low quality', - 9 - ], - [ - 'A pikachu fine dining with a view to the Eiffel Tower', - 'low quality', - 9 - ], - [ - 'A mecha robot in a favela in expressionist style', - 'low quality, 3d, photorealistic', - 9 - ], - [ - 'an insect robot preparing a delicious meal', - 'low quality, illustration', - 9 - ], - [ - "A small cabin on top of a snowy mountain in the style of Disney, artstation", - 'low quality, ugly', - 9 - ], -] - - -with block: - gr.HTML( - """ -
          -

          - Stable Diffusion 2.1 Demo -

          -
          -

          - Stable Diffusion 2.1 is the latest text-to-image model from StabilityAI. Access Stable Diffusion 1 Space here
          For faster generation and API - access you can try - DreamStudio Beta. -

          -
          - """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True): - with gr.Column(): - text = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - elem_id="prompt-text-input", - ).style( - border=(True, False, True, True), - rounded=(True, False, False, True), - container=False, - ) - negative = gr.Textbox( - label="Enter your negative prompt", - show_label=False, - max_lines=1, - placeholder="Enter a negative prompt", - elem_id="negative-prompt-text-input", - ).style( - border=(True, False, True, True), - rounded=(True, False, False, True), - container=False, - ) - btn = gr.Button("Generate image").style( - margin=False, - rounded=(False, True, True, False), - full_width=False, - ) - - gallery = gr.Gallery( - label="Generated images", show_label=False, elem_id="gallery" - ).style(grid=[2], height="auto") - - with gr.Group(elem_id="container-advanced-btns"): - #advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - with gr.Accordion("Advanced settings", open=False): - # gr.Markdown("Advanced settings are temporarily unavailable") - # samples = gr.Slider(label="Images", minimum=1, maximum=4, value=4, step=1) - # steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=45, step=1) - guidance_scale = gr.Slider( - label="Guidance Scale", minimum=0, maximum=50, value=9, step=0.1 - ) - # seed = gr.Slider( - # label="Seed", - # minimum=0, - # maximum=2147483647, - # step=1, - # randomize=True, - # ) - - ex = gr.Examples(examples=examples, fn=infer, inputs=[text, negative, guidance_scale], outputs=[gallery, community_icon, loading_icon, share_button], cache_examples=False) - ex.dataset.headers = [""] - negative.submit(infer, inputs=[text, negative, guidance_scale], outputs=[gallery], postprocess=False) - text.submit(infer, inputs=[text, negative, guidance_scale], outputs=[gallery], postprocess=False) - btn.click(infer, inputs=[text, negative, guidance_scale], outputs=[gallery], postprocess=False) - - #advanced_button.click( - # None, - # [], - # text, - # _js=""" - # () => { - # const options = document.querySelector("body > gradio-app").querySelector("#advanced-options"); - # options.style.display = ["none", ""].includes(options.style.display) ? "flex" : "none"; - # }""", - #) - share_button.click( - None, - [], - [], - _js=share_js, - ) - gr.HTML( - """ - - """ - ) - with gr.Accordion(label="License", open=False): - gr.HTML( - """
          -

          LICENSE

-The model is licensed with a CreativeML OpenRAIL++ license. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information intended to cause harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions please read the license

          -

          Biases and content acknowledgment

-Despite how impressive turning text into images is, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the LAION-5B dataset, which scraped non-curated image-text pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the model card

          -
          - """ - ) - - -block.queue(concurrency_count=80, max_size=100).launch(max_threads=150) \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/modules/lightconv_layer/cuda_function_gen.py b/spaces/gradio/HuBERT/fairseq/modules/lightconv_layer/cuda_function_gen.py deleted file mode 100644 index a25433dd8edae2f0b52d7d0eeeb829cabc6b4b89..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/lightconv_layer/cuda_function_gen.py +++ /dev/null @@ -1,289 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def gen_forward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "lightconv_cuda.cuh" - -std::vector lightconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = filters.size(0); - const auto filterSize = filters.size(1); - - const auto numFiltersInBlock = numFeatures / numHeads; - - const dim3 blocks(minibatch, numFeatures); - - auto output = at::zeros_like(input); - auto stream = at::cuda::getCurrentCUDAStream(); -""" - - sequence_if = """ - if (sequenceLength <= {seq}) {{ - switch(filterSize) {{ -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {pad}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "lightconv_forward", ([&] {{ - lightconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t> - <<>>( - input.data(), - filters.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - output.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl; - } - break; -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl; - } -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - final_return = """ - } - - return {output}; -} -""" - - with open("lightconv_cuda_forward.cu", "w") as forward: - forward.write(head) - for seq in seqs: - forward.write(sequence_if.format(seq=seq)) - for k in kernels: - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=seq, pad=pad)) - forward.write(bad_padding) - forward.write(bad_filter) - forward.write(con_else) - - forward.write(final_else) - for k in kernels: - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=seq, pad=pad)) - forward.write(bad_padding) - forward.write(bad_filter) - forward.write(final_return) - - -def gen_backward(): - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include "lightconv_cuda.cuh" - -std::vector lightconv_cuda_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters) { - - // gradWrtInput - const int minibatch = input.size(0); - const int numFeatures = input.size(1); - const int sequenceLength = input.size(2); - - const int numHeads = filters.size(0); - const int filterSize = filters.size(1); - - const dim3 gradBlocks(minibatch, numFeatures); - const dim3 weightGradFirstpassShortBlocks(minibatch, numHeads); - const dim3 weightGradSecondpassBlocks(numHeads, filterSize); - - const int numFiltersInBlock = numFeatures / numHeads; - - auto gradInput = at::zeros_like(input); - auto gradFilters = at::zeros_like(filters); - - at::DeviceGuard g(input.device()); - auto stream = at::cuda::getCurrentCUDAStream(); - - switch(filterSize) { -""" - - sequence_if = """ - if (sequenceLength <= {seq}) {{ -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {p}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "lightconv_backward", ([&] {{ - lightconv_grad_wrt_input_kernel<{k}, {b_size}, {p}, scalar_t> - <<>>( - gradOutput.data(), - filters.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - gradInput.data()); - -""" - - weight_grad_short = """ - at::Tensor tempSumGradFilters = at::zeros({{minibatch, numHeads, filterSize}}, input.options().dtype(at::kFloat)); - lightconv_grad_wrt_weights_firstpass_short_kernel<{k}, {b_size}, {p}, scalar_t> - <<>>( - input.data(), - gradOutput.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - tempSumGradFilters.data() - ); - - lightconv_grad_wrt_weights_secondpass_short_kernel<{k}, {b_size}, scalar_t> - <<>>( - tempSumGradFilters.data(), - minibatch, - numFiltersInBlock, - gradFilters.data() - ); - }})); - }} else -""" - - weight_grad = """ - at::Tensor tempSumGradFilters = at::zeros({{minibatch, numFeatures, filterSize}}, input.options().dtype(at::kFloat)); - lightconv_grad_wrt_weights_firstpass_kernel<{k}, {b_size}, {p}, scalar_t> - <<>>( - input.data(), - gradOutput.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - tempSumGradFilters.data() - ); - - lightconv_grad_wrt_weights_secondpass_kernel<{k}, {b_size}, scalar_t> - <<>>( - tempSumGradFilters.data(), - minibatch, - numFiltersInBlock, - gradFilters.data() - ); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl; - } -""" - - breakout = """ - break; -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl; -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - last_return = """ - } - return {gradInput, gradFilters}; -} -""" - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - thresh = [32, 32, 64, 128, 256, -1, -1, -1] - max_mem = [-1, -1, -1, -1, -1, 192, 96, 64] - - with open("lightconv_cuda_backward.cu", "w") as backward: - backward.write(head) - for (k, t, mem) in zip(kernels, thresh, max_mem): - backward.write(case_k.format(k=k)) - for seq in seqs: - if (t == -1 or seq <= t) and (mem == -1 or seq < mem): - backward.write(sequence_if.format(seq=seq)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=seq, p=p)) - backward.write(weight_grad_short.format(k=k, b_size=seq, p=p)) - 
backward.write(bad_padding) - else: - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=32, p=p)) - backward.write(weight_grad.format(k=k, b_size=32, p=p)) - backward.write(bad_padding) - backward.write(breakout) - break - backward.write(con_else) - backward.write(bad_filter) - backward.write(last_return) - - -if __name__ == "__main__": - gen_forward() - gen_backward() diff --git a/spaces/gradio/automatic-speech-recognition/README.md b/spaces/gradio/automatic-speech-recognition/README.md deleted file mode 100644 index e6706e26eb8f1992b524834c150cbee21f6a9aa7..0000000000000000000000000000000000000000 --- a/spaces/gradio/automatic-speech-recognition/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: automatic-speech-recognition -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/gradio/reversible_flow/README.md b/spaces/gradio/reversible_flow/README.md deleted file mode 100644 index 3472d735e84be4d720123bd9acec71af381ba249..0000000000000000000000000000000000000000 --- a/spaces/gradio/reversible_flow/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: reversible_flow -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/gradio/sentiment_analysis/DESCRIPTION.md b/spaces/gradio/sentiment_analysis/DESCRIPTION.md deleted file mode 100644 index affa1e8db01e5fb70dc6fd559bc8b4d150993124..0000000000000000000000000000000000000000 --- a/spaces/gradio/sentiment_analysis/DESCRIPTION.md +++ /dev/null @@ -1 +0,0 @@ -This sentiment analysis demo takes in input text and returns its classification for either positive, negative or neutral using Gradio's Label output. \ No newline at end of file diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/models.ts b/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/models.ts deleted file mode 100644 index e72dcaabcdc09ad1820eb793c42341b8f098d88f..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/models.ts +++ /dev/null @@ -1,72 +0,0 @@ -import { OPENAI_API_HOST, OPENAI_API_TYPE, OPENAI_API_VERSION, OPENAI_ORGANIZATION } from '@/utils/app/const'; - -import { OpenAIModel, OpenAIModelID, OpenAIModels } from '@/types/openai'; - -export const config = { - runtime: 'edge', -}; - -const handler = async (req: Request): Promise<Response> => { - try { - const { key } = (await req.json()) as { - key: string; - }; - - let url = `${OPENAI_API_HOST}/v1/models`; - if (OPENAI_API_TYPE === 'azure') { - url = `${OPENAI_API_HOST}/openai/deployments?api-version=${OPENAI_API_VERSION}`; - } - - const response = await fetch(url, { - headers: { - 'Content-Type': 'application/json', - ...(OPENAI_API_TYPE === 'openai' && { - Authorization: `Bearer ${key ? key : process.env.OPENAI_API_KEY}` - }), - ...(OPENAI_API_TYPE === 'azure' && { - 'api-key': `${key ?
key : process.env.OPENAI_API_KEY}` - }), - ...((OPENAI_API_TYPE === 'openai' && OPENAI_ORGANIZATION) && { - 'OpenAI-Organization': OPENAI_ORGANIZATION, - }), - }, - }); - - if (response.status === 401) { - return new Response(response.body, { - status: 500, - headers: response.headers, - }); - } else if (response.status !== 200) { - console.error( - `OpenAI API returned an error ${ - response.status - }: ${await response.text()}`, - ); - throw new Error('OpenAI API returned an error'); - } - - const json = await response.json(); - - const models: OpenAIModel[] = json.data - .map((model: any) => { - const model_name = (OPENAI_API_TYPE === 'azure') ? model.model : model.id; - for (const [key, value] of Object.entries(OpenAIModelID)) { - if (value === model_name) { - return { - id: model.id, - name: OpenAIModels[value].name, - }; - } - } - }) - .filter(Boolean); - - return new Response(JSON.stringify(models), { status: 200 }); - } catch (error) { - console.error(error); - return new Response('Error', { status: 500 }); - } -}; - -export default handler; diff --git a/spaces/gwang-kim/DATID-3D/eg3d/metrics/kernel_inception_distance.py b/spaces/gwang-kim/DATID-3D/eg3d/metrics/kernel_inception_distance.py deleted file mode 100644 index 48906eba23a7d29ba912b7d209f83fba6d0b9f37..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/metrics/kernel_inception_distance.py +++ /dev/null @@ -1,48 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -"""Kernel Inception Distance (KID) from the paper "Demystifying MMD -GANs". Matches the original implementation by Binkowski et al. at -https://github.com/mbinkowski/MMD-GAN/blob/master/gan/compute_scores.py""" - -import numpy as np -from . import metric_utils - -#---------------------------------------------------------------------------- - -def compute_kid(opts, max_real, num_gen, num_subsets, max_subset_size): - # Direct TorchScript translation of http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz - detector_url = 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/metrics/inception-2015-12-05.pkl' - detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer. 
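- # Features are collected for the real dataset and the generator below, then
- # compared with the block-based unbiased MMD^2 estimator from the KID paper:
- # a cubic polynomial kernel k(x, y) = (x.y / n + 1)^3 averaged over
- # num_subsets random subsets of size m, with kernel-matrix diagonals dropped
- # so same-sample terms do not bias the estimate.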
- - real_features = metric_utils.compute_feature_stats_for_dataset( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=0, capture_all=True, max_items=max_real).get_all() - - gen_features = metric_utils.compute_feature_stats_for_generator( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=1, capture_all=True, max_items=num_gen).get_all() - - if opts.rank != 0: - return float('nan') - - n = real_features.shape[1] - m = min(min(real_features.shape[0], gen_features.shape[0]), max_subset_size) - t = 0 - for _subset_idx in range(num_subsets): - x = gen_features[np.random.choice(gen_features.shape[0], m, replace=False)] - y = real_features[np.random.choice(real_features.shape[0], m, replace=False)] - a = (x @ x.T / n + 1) ** 3 + (y @ y.T / n + 1) ** 3 - b = (x @ y.T / n + 1) ** 3 - t += (a.sum() - np.diag(a).sum()) / (m - 1) - b.sum() * 2 / m - kid = t / num_subsets / m - return float(kid) - -#---------------------------------------------------------------------------- diff --git a/spaces/h2oai/wave-tour/examples/ml_dai.py b/spaces/h2oai/wave-tour/examples/ml_dai.py deleted file mode 100644 index 270a1f71fdc35fef523ebb1645728045c7ca33db..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/ml_dai.py +++ /dev/null @@ -1,178 +0,0 @@ -# WaveML / DAI -# Build Wave Models for training and prediction of classification or regression using Driverless AI. -# --- -import os - -from h2o_wave import main, app, Q, copy_expando, ui -from h2o_wave_ml import build_model, ModelType -from h2o_wave_ml.utils import list_dai_instances - -from sklearn.datasets import load_wine -from sklearn.model_selection import train_test_split - -STEAM_URL = os.environ.get('STEAM_URL') -MLOPS_URL = os.environ.get('MLOPS_URL') - -DATASET_TEXT = '''The sample dataset used is the - wine dataset.''' -STEAM_TEXT = f'''No Driverless AI instances available. 
You may create one in - AI Engines and refresh the page.''' - - -def dai_experiment_url(instance_id: str, instance_name: str): - # URL link to Driverless AI experiment - return f'''**Driverless AI Experiment:** - {instance_name}''' - - -def mlops_deployment_url(project_id: str): - # URL link to MLOps deployment - return f'**MLOps Deployment:** {project_id}' - - -def form_unsupported(): - # display when app is not running on cloud - return [ - ui.text('''This example requires access to Driverless AI running on - H2O AI Cloud - and does not support standalone app instances.'''), - ui.text('''Sign up at https://h2o.ai/free - to run apps on cloud.''') - ] - - -def form_default(q: Q): - # display when app is initialized - return [ - ui.text(content=DATASET_TEXT), - ui.dropdown(name='dai_instance_id', label='Select Driverless AI instance', value=q.client.dai_instance_id, - choices=q.client.choices_dai_instances, required=True), - ui.text(content=STEAM_TEXT, visible=q.client.disable_training), - ui.buttons(items=[ - ui.button(name='train', label='Train', primary=True, disabled=q.client.disable_training), - ui.button(name='predict', label='Predict', primary=True, disabled=True), - ]) - ] - - -def form_training_progress(q: Q): - # display when model training is in progress - return [ - ui.text(content=DATASET_TEXT), - ui.dropdown(name='dai_instance_id', label='Select Driverless AI instance', value=q.client.dai_instance_id, - choices=q.client.choices_dai_instances, required=True), - ui.buttons(items=[ - ui.button(name='train', label='Train', primary=True, disabled=True), - ui.button(name='predict', label='Predict', primary=True, disabled=True) - ]), - ui.progress(label='Training in progress...', caption='This can take a few minutes...'), - ui.text(content=q.client.model_details) - ] - - -def form_training_completed(q: Q): - # display when model training is completed - return [ - ui.text(content=DATASET_TEXT), - ui.dropdown(name='dai_instance_id', label='Select Driverless AI instance', value=q.client.dai_instance_id, - choices=q.client.choices_dai_instances, required=True), - ui.buttons(items=[ - ui.button(name='train', label='Train', primary=True), - ui.button(name='predict', label='Predict', primary=True) - ]), - ui.message_bar(type='success', text='Training successfully completed!'), - ui.text(content=q.client.model_details) - ] - - -def form_prediction_completed(q: Q): - # display when model prediction is completed - return [ - ui.text(content=DATASET_TEXT), - ui.dropdown(name='dai_instance_id', label='Select Driverless AI instance', value=q.client.dai_instance_id, - choices=q.client.choices_dai_instances, required=True), - ui.buttons(items=[ - ui.button(name='train', label='Train', primary=True), - ui.button(name='predict', label='Predict', primary=True) - ]), - ui.message_bar(type='success', text='Prediction successfully completed!'), - ui.text(content=q.client.model_details), - ui.text(content=f'''**Example predictions:**
          - {q.client.preds[0]}
          {q.client.preds[1]}
          {q.client.preds[2]}''') - ] - - -@app('/demo') -async def serve(q: Q): - if 'H2O_CLOUD_ENVIRONMENT' not in os.environ: - # show appropriate message if app is not running on cloud - q.page['example'] = ui.form_card( - box='1 1 -1 -1', - items=form_unsupported() - ) - elif q.args.train: - # get DAI instance name - copy_expando(q.args, q.client) - - for dai_instance in q.client.dai_instances: - if dai_instance['id'] == int(q.client.dai_instance_id): - q.client.dai_instance_name = dai_instance['name'] - - # set DAI model details - q.client.model_details = dai_experiment_url(q.client.dai_instance_id, q.client.dai_instance_name) - - # show training progress and details - q.page['example'].items = form_training_progress(q) - await q.page.save() - - # train WaveML Model using Driverless AI - q.client.wave_model = await q.run( - func=build_model, - train_df=q.client.train_df, - target_column='target', - model_type=ModelType.DAI, - refresh_token=q.auth.refresh_token, - _steam_dai_instance_name=q.client.dai_instance_name, - _dai_accuracy=1, - _dai_time=1, - _dai_interpretability=10 - ) - - # update DAI model details - q.client.project_id = q.client.wave_model.project_id - q.client.model_details += f'
          {mlops_deployment_url(q.client.project_id)}' - - # show prediction option - q.page['example'].items = form_training_completed(q) - elif q.args.predict: - # predict on test data - q.client.preds = q.client.wave_model.predict(test_df=q.client.test_df) - - # show predictions - q.page['example'].items = form_prediction_completed(q) - else: - # prepare sample train and test dataframes - data = load_wine(as_frame=True)['frame'] - q.client.train_df, q.client.test_df = train_test_split(data, train_size=0.8) - - # DAI instances - q.client.dai_instances = list_dai_instances(refresh_token=q.auth.refresh_token) - q.client.choices_dai_instances = [ - ui.choice( - name=str(x['id']), - label=f'{x["name"]} ({x["status"].capitalize()})', - disabled=x['status'] != 'running' - ) for x in q.client.dai_instances - ] - - running_dai_instances = [x['id'] for x in q.client.dai_instances if x['status'] == 'running'] - q.client.disable_training = False if running_dai_instances else True - q.client.dai_instance_id = str(running_dai_instances[0]) if running_dai_instances else '' - - # display ui - q.page['example'] = ui.form_card( - box='1 1 -1 -1', - items=form_default(q) - ) - - await q.page.save() diff --git a/spaces/h2oai/wave-tour/examples/spinbox_trigger.py b/spaces/h2oai/wave-tour/examples/spinbox_trigger.py deleted file mode 100644 index cac8c12567b2cd0694bbc36cef4f164dd486770e..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/spinbox_trigger.py +++ /dev/null @@ -1,24 +0,0 @@ -# Form / Spinbox / Trigger -# Enable the `trigger` attribute in order to handle live changes to a spinbox. -# #form #spinbox #trigger -# --- -from typing import Optional -from h2o_wave import main, app, Q, ui - - -def get_form_items(value: Optional[float]): - return [ - ui.text(f'spinbox_trigger={value}'), - ui.spinbox(name='spinbox_trigger', label='Pick a number', trigger=True), - ] - - -@app('/demo') -async def serve(q: Q): - if not q.client.initialized: - q.page['example'] = ui.form_card(box='1 1 4 4', items=get_form_items(None)) - q.client.initialized = True - if q.args.spinbox_trigger is not None: - q.page['example'].items = get_form_items(q.args.spinbox_trigger) - - await q.page.save() diff --git a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/__init__.py b/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -# empty diff --git a/spaces/heiyubili/bingo/src/components/chat-message.tsx b/spaces/heiyubili/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
          -
          - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

          {children}

          - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
          -
          -
          - {message.author === 'bot' && } - {message.author === 'bot' && } -
          -
          - ) : null -} diff --git a/spaces/hf4all/bingo-async-task/start_server.sh b/spaces/hf4all/bingo-async-task/start_server.sh deleted file mode 100644 index 3e409a831625d4e3cd6c4f65ba497d493f1ae667..0000000000000000000000000000000000000000 --- a/spaces/hf4all/bingo-async-task/start_server.sh +++ /dev/null @@ -1,65 +0,0 @@ -#!/bin/bash - -NGX_NAME="${NGX_NAME:-admin}" -NGX_PASS="${NGX_PASS:-admin}" -CRYPTPASS=`openssl passwd -apr1 ${NGX_PASS}` -PORT="${PORT:-8080}" - -echo "USERNAME:" $NGX_NAME -echo "PASSWORD:" $NGX_PASS - -echo "${NGX_NAME}:${CRYPTPASS}" > ngpasswd - -COMMIT=$(cat /app/openvscode-server/product.json | awk '/commit/{print $4;exit}' FS='[""]') -sed -i "s/#COMMIT#/$COMMIT/" nginx.conf -sed -i "s/#PORT#/$PORT/" nginx.conf - -. $NVM_DIR/nvm.sh - -set +e -if [[ ! -z "$REPOS" ]]; then - for REPO in $(echo $REPOS | tr ";" "\n") - do - dir=$(basename "$REPO" .git) - echo start to clone initial repo $REPO into $dir - git clone --progress $REPO $dir - cd $dir - [[ -z $(git config user.name) ]] && git config --global user.name "$(git log -1 --pretty=format:'%an')" - [[ -z $(git config user.email) ]] && git config --global user.email "$(git log -1 --pretty=format:'%ae')" - if [[ -e requirements.txt ]]; then - pip install --no-cache-dir --upgrade -r requirements.txt - fi - if [[ -e package.json ]]; then - npm i - npm run build - fi - if [[ -e ecosystem.config.js ]]; then - echo use pm2 start - pm2 start ecosystem.config.js - fi - cd .. - done -fi - - -pm2 start ./auto-commit.js - -if [[ -e ecosystem.config.js ]]; then - echo pm2 start all - pm2 start ecosystem.config.js -fi - -[[ -z $(git config --global user.name) ]] && git config --global user.name "$SPACE_AUTHOR_NAME" -[[ -z $(git config --global user.email) ]] && git config --global user.email "$SPACE_AUTHOR_NAME@hf.co" - -git config --global http.postBuffer 524288000 -git config --global push.default current - -echo "Starting VSCode Server..." 
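-# The lines below resolve the server binary and its remote CLI, preinstall the
-# Python extension, expose the CLI as `code`, then start nginx as the front
-# proxy before running openvscode-server on port 5050 with no connection token.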
-vscode=/app/openvscode-server/bin/openvscode-server -vscode_cli=/app/openvscode-server/bin/remote-cli/openvscode-server -$vscode --install-extension ms-python.python -ln -s $vscode_cli $(dirname $vscode_cli)/code -set -e -nginx -c $PWD/nginx.conf -exec $vscode --host 0.0.0.0 --port 5050 --without-connection-token \"${@}\" -- \ No newline at end of file diff --git a/spaces/hysts/diffusers-anime-faces/model.py b/spaces/hysts/diffusers-anime-faces/model.py deleted file mode 100644 index bf2712b12bc6044a7d7fdbcb9ababbc147b659e4..0000000000000000000000000000000000000000 --- a/spaces/hysts/diffusers-anime-faces/model.py +++ /dev/null @@ -1,187 +0,0 @@ -from __future__ import annotations - -import logging -import os -import random -import sys -import tempfile - -import gradio as gr -import imageio -import numpy as np -import PIL.Image -import torch -import tqdm.auto -from diffusers import (DDIMPipeline, DDIMScheduler, DDPMPipeline, - DiffusionPipeline, PNDMPipeline, PNDMScheduler) - -HF_TOKEN = os.environ['HF_TOKEN'] - -formatter = logging.Formatter( - '[%(asctime)s] %(name)s %(levelname)s: %(message)s', - datefmt='%Y-%m-%d %H:%M:%S') -stream_handler = logging.StreamHandler(stream=sys.stdout) -stream_handler.setLevel(logging.INFO) -stream_handler.setFormatter(formatter) -logger = logging.getLogger(__name__) -logger.setLevel(logging.INFO) -logger.propagate = False -logger.addHandler(stream_handler) - - -class Model: - - MODEL_NAMES = [ - 'ddpm-128-exp000', - ] - - def __init__(self, device: str | torch.device): - self.device = torch.device(device) - self._download_all_models() - - self.model_name = self.MODEL_NAMES[0] - self.scheduler_type = 'DDIM' - self.pipeline = self._load_pipeline(self.model_name, - self.scheduler_type) - self.rng = random.Random() - - self.real_esrgan = gr.Interface.load('spaces/hysts/Real-ESRGAN-anime') - - @staticmethod - def _load_pipeline(model_name: str, - scheduler_type: str) -> DiffusionPipeline: - repo_id = f'hysts/diffusers-anime-faces-{model_name}' - if scheduler_type == 'DDPM': - pipeline = DDPMPipeline.from_pretrained(repo_id, - use_auth_token=HF_TOKEN) - elif scheduler_type == 'DDIM': - pipeline = DDIMPipeline.from_pretrained(repo_id, - use_auth_token=HF_TOKEN) - pipeline.scheduler = DDIMScheduler.from_config( - repo_id, subfolder='scheduler', use_auth_token=HF_TOKEN) - elif scheduler_type == 'PNDM': - pipeline = PNDMPipeline.from_pretrained(repo_id, - use_auth_token=HF_TOKEN) - pipeline.scheduler = PNDMScheduler.from_config( - repo_id, subfolder='scheduler', use_auth_token=HF_TOKEN) - else: - raise ValueError - return pipeline - - def set_pipeline(self, model_name: str, scheduler_type: str) -> None: - logger.info('--- set_pipeline ---') - logger.info(f'{model_name=}, {scheduler_type=}') - - if model_name == self.model_name and scheduler_type == self.scheduler_type: - logger.info('Skipping') - logger.info('--- done ---') - return - self.model_name = model_name - self.scheduler_type = scheduler_type - self.pipeline = self._load_pipeline(model_name, scheduler_type) - - logger.info('--- done ---') - - def _download_all_models(self) -> None: - for name in self.MODEL_NAMES: - self._load_pipeline(name, 'DDPM') - - def generate(self, - seed: int, - num_steps: int, - num_images: int = 1) -> list[PIL.Image.Image]: - logger.info('--- generate ---') - logger.info(f'{seed=}, {num_steps=}') - - torch.manual_seed(seed) - if self.scheduler_type == 'DDPM': - res = self.pipeline(batch_size=num_images, - torch_device=self.device)['sample'] - elif self.scheduler_type in ['DDIM', 
'PNDM']: - res = self.pipeline(batch_size=num_images, - torch_device=self.device, - num_inference_steps=num_steps)['sample'] - else: - raise ValueError - - logger.info('--- done ---') - return res - - @staticmethod - def postprocess(sample: torch.Tensor) -> np.ndarray: - res = (sample / 2 + 0.5).clamp(0, 1) - res = (res * 255).to(torch.uint8) - res = res.cpu().permute(0, 2, 3, 1).numpy() - return res - - @torch.inference_mode() - def generate_with_video(self, seed: int, - num_steps: int) -> tuple[PIL.Image.Image, str]: - logger.info('--- generate_with_video ---') - if self.scheduler_type == 'DDPM': - num_steps = 1000 - fps = 100 - else: - fps = 10 - logger.info(f'{seed=}, {num_steps=}') - - model = self.pipeline.unet.to(self.device) - scheduler = self.pipeline.scheduler - scheduler.set_timesteps(num_inference_steps=num_steps) - input_shape = (1, model.config.in_channels, model.config.sample_size, - model.config.sample_size) - torch.manual_seed(seed) - - out_file = tempfile.NamedTemporaryFile(suffix='.mp4', delete=False) - writer = imageio.get_writer(out_file.name, fps=fps) - sample = torch.randn(input_shape).to(self.device) - for t in tqdm.auto.tqdm(scheduler.timesteps): - out = model(sample, t)['sample'] - sample = scheduler.step(out, t, sample)['prev_sample'] - res = self.postprocess(sample)[0] - writer.append_data(res) - writer.close() - - logger.info('--- done ---') - return PIL.Image.fromarray(res), out_file.name - - def superresolve(self, image: PIL.Image.Image) -> PIL.Image.Image: - logger.info('--- superresolve ---') - - with tempfile.NamedTemporaryFile(suffix='.png') as f: - image.save(f.name) - out_file = self.real_esrgan(f.name) - - logger.info('--- done ---') - return PIL.Image.open(out_file) - - def run(self, model_name: str, scheduler_type: str, num_steps: int, - randomize_seed: bool, - seed: int) -> tuple[PIL.Image.Image, PIL.Image.Image, int, str]: - self.set_pipeline(model_name, scheduler_type) - if scheduler_type == 'PNDM': - num_steps = max(4, min(num_steps, 100)) - if randomize_seed: - seed = self.rng.randint(0, 100000) - res, filename = self.generate_with_video(seed, num_steps) - superresolved = self.superresolve(res) - return superresolved, res, seed, filename - - @staticmethod - def to_grid(images: list[PIL.Image.Image], - ncols: int = 2) -> PIL.Image.Image: - images = [np.asarray(image) for image in images] - nrows = (len(images) + ncols - 1) // ncols - h, w = images[0].shape[:2] - if (d := nrows * ncols - len(images)) > 0: - images += [np.full((h, w, 3), 255, dtype=np.uint8)] * d - grid = np.asarray(images).reshape(nrows, ncols, h, w, 3).transpose( - 0, 2, 1, 3, 4).reshape(nrows * h, ncols * w, 3) - return PIL.Image.fromarray(grid) - - def run_simple(self) -> tuple[PIL.Image.Image, PIL.Image.Image]: - self.set_pipeline(self.MODEL_NAMES[0], 'PNDM') - seed = self.rng.randint(0, np.iinfo(np.uint32).max + 1) - images = self.generate(seed, num_steps=10, num_images=4) - superresolved = [self.superresolve(image) for image in images] - return self.to_grid(superresolved, 2), self.to_grid(images, 2) diff --git a/spaces/hysts/space-that-creates-model-demo-space/app.py b/spaces/hysts/space-that-creates-model-demo-space/app.py deleted file mode 100644 index 7ffe50769b92d3791a4831657a78b358381286c0..0000000000000000000000000000000000000000 --- a/spaces/hysts/space-that-creates-model-demo-space/app.py +++ /dev/null @@ -1,173 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import shutil -import tempfile - -import gradio as gr -from huggingface_hub import 
HfApi - -title = 'Model Demo Creation' -description = ''' -With this Space, you can create a demo Space for models that are loadable with `gradio.Interface.load` in [Model Hub](https://huggingface.co/models). -The Space will be created under your account and private. -You need a token with write permission (See: https://huggingface.co/settings/tokens). - -You can specify multiple model names by listing them separated by commas. -If you specify multiple model names, the resulting Space will show all the outputs of those models side by side for the given inputs. -''' -article = '' -examples = [ - [ - 'resnet-50', - 'microsoft/resnet-50', - '', - 'Demo for microsoft/resnet-50', - '', - '', - ], - [ - 'compare-image-classification-models', - 'google/vit-base-patch16-224, microsoft/resnet-50', - '', - 'Compare Image Classification Models', - '', - '', - ], - [ - 'compare-text-generation-models', - 'EleutherAI/gpt-j-6B, EleutherAI/gpt-neo-1.3B', - '', - 'Compare Text Generation Models', - '', - '', - ], -] - -api = HfApi() - - -def check_if_model_exists(model_name: str) -> bool: - return any(info.modelId == model_name - for info in api.list_models(search=model_name)) - - -def check_if_model_loadable(model_name: str) -> bool: - try: - gr.Interface.load(model_name, src='models') - except Exception: - return False - return True - - -def get_model_io_types( - model_name: str) -> tuple[tuple[str, ...], tuple[str, ...]]: - iface = gr.Interface.load(model_name, src='models') - inputs = tuple(map(str, iface.input_components)) - outputs = tuple(map(str, iface.output_components)) - return inputs, outputs - - -def check_if_model_io_is_consistent(model_names: list[str]) -> bool: - if len(model_names) == 1: - return True - - inputs0, outputs0 = get_model_io_types(model_names[0]) - for name in model_names[1:]: - inputs, outputs = get_model_io_types(name) - if inputs != inputs0 or outputs != outputs0: - return False - return True - - -def save_space_info(dirname: str, filename: str, content: str) -> None: - with open(f'{dirname}/{filename}', 'w') as f: - f.write(content) - - -def run(space_name: str, model_names_str: str, hf_token: str, title: str, - description: str, article: str) -> str: - if space_name == '': - return 'Space Name must be specified.' - if model_names_str == '': - return 'Model Names must be specified.' - if hf_token == '': - return 'Hugging Face Token must be specified.' - - model_names = [name.strip() for name in model_names_str.split(',')] - model_names_str = '\n'.join(model_names) - - missing_models = [ - name for name in model_names if not check_if_model_exists(name) - ] - if len(missing_models) > 0: - message = 'The following models were not found: ' - for model_name in missing_models: - message += f'\n{model_name}' - return message - - non_loadable_models = [ - name for name in model_names if not check_if_model_loadable(name) - ] - if len(non_loadable_models) > 0: - message = 'The following models are not loadable with gradio.Interface.load: ' - for model_name in non_loadable_models: - message += f'\n{model_name}' - return message - - if not check_if_model_io_is_consistent(model_names): - return 'The inputs and outputs of each model must be the same.' 
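- # At this point every requested model exists, loads with gr.Interface.load,
- # and shares the same input/output signature, so the private Space can be
- # created and the filled-in template uploaded under the caller's account.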
- - user_name = api.whoami(token=hf_token)['name'] - repo_id = f'{user_name}/{space_name}' - try: - space_url = api.create_repo(repo_id=repo_id, - repo_type='space', - private=True, - token=hf_token, - space_sdk='gradio') - except Exception as e: - return str(e) - - with tempfile.TemporaryDirectory() as temp_dir: - shutil.copy('assets/template.py', f'{temp_dir}/app.py') - save_space_info(temp_dir, 'TITLE', title) - save_space_info(temp_dir, 'DESCRIPTION', description) - save_space_info(temp_dir, 'ARTICLE', article) - save_space_info(temp_dir, 'MODEL_NAMES', model_names_str) - api.upload_folder(repo_id=repo_id, - folder_path=temp_dir, - path_in_repo='.', - token=hf_token, - repo_type='space') - - return f'Successfully created: {space_url}' - - -gr.Interface( - fn=run, - inputs=[ - gr.Textbox( - label='Space Name', - placeholder= - 'e.g. demo-resnet-50. The Space will be created under your account and private.' - ), - gr.Textbox(label='Model Names', - placeholder='e.g. microsoft/resnet-50'), - gr.Textbox( - label='Hugging Face Token', - placeholder= - 'This should be a token with write permission. See: https://huggingface.co/settings/tokens' - ), - gr.Textbox(label='Title (Optional)'), - gr.Textbox(label='Description (Optional)'), - gr.Textbox(label='Article (Optional)'), - ], - outputs=gr.Textbox(label='Output'), - title=title, - description=description, - article=article, - examples=examples, - cache_examples=False, -).launch(enable_queue=True, share=False) diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv3_r50.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv3_r50.py deleted file mode 100644 index ef1a4b5d7eebf5df9a7340e07a003450fd1df976..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv3_r50.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.5, 0.0) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 20 -config.warmup_epoch = 0 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/onnx_helper.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/onnx_helper.py deleted file mode 100644 index 95f615fd7f3e0586be123d9a6538f68386158360..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/onnx_helper.py +++ /dev/null @@ -1,264 +0,0 @@ -from __future__ import division - -import argparse -import datetime -import glob -import os -import os.path as osp -import sys - -import cv2 -import numpy as np -import onnx -import onnxruntime -from insightface.data import get_image -from onnx import numpy_helper - - -class ArcFaceORT: - def __init__(self, model_path, cpu=False): - self.model_path = model_path - # providers = None will use available provider, for onnxruntime-gpu it will be "CUDAExecutionProvider" - self.providers = ["CPUExecutionProvider"] if cpu else None - - # input_size is 
(w,h), return error message, return None if success - def check(self, track="cfat", test_img=None): - # default is cfat - max_model_size_mb = 1024 - max_feat_dim = 512 - max_time_cost = 15 - if track.startswith("ms1m"): - max_model_size_mb = 1024 - max_feat_dim = 512 - max_time_cost = 10 - elif track.startswith("glint"): - max_model_size_mb = 1024 - max_feat_dim = 1024 - max_time_cost = 20 - elif track.startswith("cfat"): - max_model_size_mb = 1024 - max_feat_dim = 512 - max_time_cost = 15 - elif track.startswith("unconstrained"): - max_model_size_mb = 1024 - max_feat_dim = 1024 - max_time_cost = 30 - else: - return "track not found" - - if not os.path.exists(self.model_path): - return "model_path not exists" - if not os.path.isdir(self.model_path): - return "model_path should be directory" - onnx_files = [] - for _file in os.listdir(self.model_path): - if _file.endswith(".onnx"): - onnx_files.append(osp.join(self.model_path, _file)) - if len(onnx_files) == 0: - return "do not have onnx files" - self.model_file = sorted(onnx_files)[-1] - print("use onnx-model:", self.model_file) - try: - session = onnxruntime.InferenceSession(self.model_file, providers=self.providers) - except: - return "load onnx failed" - input_cfg = session.get_inputs()[0] - input_shape = input_cfg.shape - print("input-shape:", input_shape) - if len(input_shape) != 4: - return "length of input_shape should be 4" - if not isinstance(input_shape[0], str): - # return "input_shape[0] should be str to support batch-inference" - print("reset input-shape[0] to None") - model = onnx.load(self.model_file) - model.graph.input[0].type.tensor_type.shape.dim[0].dim_param = "None" - new_model_file = osp.join(self.model_path, "zzzzrefined.onnx") - onnx.save(model, new_model_file) - self.model_file = new_model_file - print("use new onnx-model:", self.model_file) - try: - session = onnxruntime.InferenceSession(self.model_file, providers=self.providers) - except: - return "load onnx failed" - input_cfg = session.get_inputs()[0] - input_shape = input_cfg.shape - print("new-input-shape:", input_shape) - - self.image_size = tuple(input_shape[2:4][::-1]) - # print('image_size:', self.image_size) - input_name = input_cfg.name - outputs = session.get_outputs() - output_names = [] - for o in outputs: - output_names.append(o.name) - # print(o.name, o.shape) - if len(output_names) != 1: - return "number of output nodes should be 1" - self.session = session - self.input_name = input_name - self.output_names = output_names - # print(self.output_names) - model = onnx.load(self.model_file) - graph = model.graph - if len(graph.node) < 8: - return "too small onnx graph" - - input_size = (112, 112) - self.crop = None - if track == "cfat": - crop_file = osp.join(self.model_path, "crop.txt") - if osp.exists(crop_file): - lines = open(crop_file, "r").readlines() - if len(lines) != 6: - return "crop.txt should contain 6 lines" - lines = [int(x) for x in lines] - self.crop = lines[:4] - input_size = tuple(lines[4:6]) - if input_size != self.image_size: - return "input-size is inconsistant with onnx model input, %s vs %s" % (input_size, self.image_size) - - self.model_size_mb = os.path.getsize(self.model_file) / float(1024 * 1024) - if self.model_size_mb > max_model_size_mb: - return "max model size exceed, given %.3f-MB" % self.model_size_mb - - input_mean = None - input_std = None - if track == "cfat": - pn_file = osp.join(self.model_path, "pixel_norm.txt") - if osp.exists(pn_file): - lines = open(pn_file, "r").readlines() - if len(lines) != 2: - return 
"pixel_norm.txt should contain 2 lines" - input_mean = float(lines[0]) - input_std = float(lines[1]) - if input_mean is not None or input_std is not None: - if input_mean is None or input_std is None: - return "please set input_mean and input_std simultaneously" - else: - find_sub = False - find_mul = False - for nid, node in enumerate(graph.node[:8]): - print(nid, node.name) - if node.name.startswith("Sub") or node.name.startswith("_minus"): - find_sub = True - if node.name.startswith("Mul") or node.name.startswith("_mul") or node.name.startswith("Div"): - find_mul = True - if find_sub and find_mul: - print("find sub and mul") - # mxnet arcface model - input_mean = 0.0 - input_std = 1.0 - else: - input_mean = 127.5 - input_std = 127.5 - self.input_mean = input_mean - self.input_std = input_std - for initn in graph.initializer: - weight_array = numpy_helper.to_array(initn) - dt = weight_array.dtype - if dt.itemsize < 4: - return "invalid weight type - (%s:%s)" % (initn.name, dt.name) - if test_img is None: - test_img = get_image("Tom_Hanks_54745") - test_img = cv2.resize(test_img, self.image_size) - else: - test_img = cv2.resize(test_img, self.image_size) - feat, cost = self.benchmark(test_img) - batch_result = self.check_batch(test_img) - batch_result_sum = float(np.sum(batch_result)) - if batch_result_sum in [float("inf"), -float("inf")] or batch_result_sum != batch_result_sum: - print(batch_result) - print(batch_result_sum) - return "batch result output contains NaN!" - - if len(feat.shape) < 2: - return "the shape of the feature must be two, but get {}".format(str(feat.shape)) - - if feat.shape[1] > max_feat_dim: - return "max feat dim exceed, given %d" % feat.shape[1] - self.feat_dim = feat.shape[1] - cost_ms = cost * 1000 - if cost_ms > max_time_cost: - return "max time cost exceed, given %.4f" % cost_ms - self.cost_ms = cost_ms - print( - "check stat:, model-size-mb: %.4f, feat-dim: %d, time-cost-ms: %.4f, input-mean: %.3f, input-std: %.3f" - % (self.model_size_mb, self.feat_dim, self.cost_ms, self.input_mean, self.input_std) - ) - return None - - def check_batch(self, img): - if not isinstance(img, list): - imgs = [ - img, - ] * 32 - if self.crop is not None: - nimgs = [] - for img in imgs: - nimg = img[self.crop[1] : self.crop[3], self.crop[0] : self.crop[2], :] - if nimg.shape[0] != self.image_size[1] or nimg.shape[1] != self.image_size[0]: - nimg = cv2.resize(nimg, self.image_size) - nimgs.append(nimg) - imgs = nimgs - blob = cv2.dnn.blobFromImages( - images=imgs, - scalefactor=1.0 / self.input_std, - size=self.image_size, - mean=(self.input_mean, self.input_mean, self.input_mean), - swapRB=True, - ) - net_out = self.session.run(self.output_names, {self.input_name: blob})[0] - return net_out - - def meta_info(self): - return {"model-size-mb": self.model_size_mb, "feature-dim": self.feat_dim, "infer": self.cost_ms} - - def forward(self, imgs): - if not isinstance(imgs, list): - imgs = [imgs] - input_size = self.image_size - if self.crop is not None: - nimgs = [] - for img in imgs: - nimg = img[self.crop[1] : self.crop[3], self.crop[0] : self.crop[2], :] - if nimg.shape[0] != input_size[1] or nimg.shape[1] != input_size[0]: - nimg = cv2.resize(nimg, input_size) - nimgs.append(nimg) - imgs = nimgs - blob = cv2.dnn.blobFromImages( - imgs, 1.0 / self.input_std, input_size, (self.input_mean, self.input_mean, self.input_mean), swapRB=True - ) - net_out = self.session.run(self.output_names, {self.input_name: blob})[0] - return net_out - - def benchmark(self, img): - input_size = 
self.image_size - if self.crop is not None: - nimg = img[self.crop[1] : self.crop[3], self.crop[0] : self.crop[2], :] - if nimg.shape[0] != input_size[1] or nimg.shape[1] != input_size[0]: - nimg = cv2.resize(nimg, input_size) - img = nimg - blob = cv2.dnn.blobFromImage( - img, 1.0 / self.input_std, input_size, (self.input_mean, self.input_mean, self.input_mean), swapRB=True - ) - costs = [] - for _ in range(50): - ta = datetime.datetime.now() - net_out = self.session.run(self.output_names, {self.input_name: blob})[0] - tb = datetime.datetime.now() - cost = (tb - ta).total_seconds() - costs.append(cost) - costs = sorted(costs) - cost = costs[5] - return net_out, cost - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="") - # general - parser.add_argument("workdir", help="submitted work dir", type=str) - parser.add_argument("--track", help="track name, for different challenge", type=str, default="cfat") - args = parser.parse_args() - handler = ArcFaceORT(args.workdir) - err = handler.check(args.track) - print("err:", err) diff --git a/spaces/ibaiGorordo/Lane-Shape-Prediction-with-Transformers/lstr/__init__.py b/spaces/ibaiGorordo/Lane-Shape-Prediction-with-Transformers/lstr/__init__.py deleted file mode 100644 index 119c82bdbfd0a3a75c2ea60a2358b4c269f2eb72..0000000000000000000000000000000000000000 --- a/spaces/ibaiGorordo/Lane-Shape-Prediction-with-Transformers/lstr/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from lstr.lstr import LSTR \ No newline at end of file diff --git a/spaces/ikechan8370/vits-uma-genshin-honkai/attentions.py b/spaces/ikechan8370/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/ikechan8370/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - 
self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, 
n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
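- # Worked shape example (added for illustration, not from the original
- # source): with length = 3 the input x is [b, h, 3, 5]; the column pad
- # gives [b, h, 3, 6], flattening gives [b, h, 18], the second pad adds
- # length-1 = 2 zeros giving [b, h, 20], and the view below reshapes this
- # to [b, h, 4, 5] before slicing out the absolute scores [b, h, 3, 3].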
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (stargate Sg1 Season 8 720p Torrent).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (stargate Sg1 Season 8 720p Torrent).md deleted file mode 100644 index 06a74a1fa2a1c2872421688d10da5fdf2a5e1d3d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (stargate Sg1 Season 8 720p Torrent).md +++ /dev/null @@ -1,32 +0,0 @@ -
          -```html -

          How to Watch Stargate SG-1 Season 8 in HD Online

          -

          Stargate SG-1 is a sci-fi TV series that follows the adventures of a team of explorers who use a device called a stargate to travel to different planets and galaxies. The series ran for 10 seasons from 1997 to 2007, and is considered one of the most successful and influential shows in the genre.

          -

          HD Online Player (stargate sg1 season 8 720p torrent)


          Download Zip ✔✔✔ https://urlin.us/2uEwNr



          -

Season 8 of Stargate SG-1 aired from 2004 to 2005 and featured some major changes in the cast and storyline. It brought the final defeat of the series' long-running villains, the Goa'uld Anubis and the Replicators. Jack O'Neill was promoted to brigadier general and took command of Stargate Command, while Samantha Carter was promoted and took over as leader of SG-1. The season also introduced Vala Mal Doran, a former Goa'uld host and thief, in the episode Prometheus Unbound; the Ori and Cameron Mitchell would not appear until season 9.

          -

          If you are a fan of Stargate SG-1 and want to watch season 8 in high definition online, you have several options. Here are some of them:

          -
            -
          • You can buy or rent the season on digital platforms like Amazon Prime Video, iTunes, Google Play, or Vudu. These platforms offer HD quality and subtitles for each episode.
          • -
          • You can stream the season on subscription services like Hulu, Netflix, or Peacock. These services also offer HD quality and subtitles, but you may need to pay a monthly fee or watch ads.
          • -
          • You can download the season using a torrent client like BitTorrent or uTorrent. This option may be illegal or risky depending on your location and the source of the torrent. You may also need to find a compatible media player that can play HD videos.
          • -
          -

          Whatever option you choose, make sure you have a fast and stable internet connection and a device that can support HD resolution. Enjoy watching Stargate SG-1 season 8 in HD online!

          -``` - -```html -

          Stargate SG-1 season 8 has 20 episodes, each with a running time of about 45 minutes. The season has a mix of standalone and arc-based episodes, as well as some crossover episodes with the spin-off series Stargate Atlantis. Some of the highlights of the season are:

          -

          -
            -
          • Episode 1: New Order (Part 1) - The team tries to rescue Jack O'Neill from stasis and stop Anubis from destroying Earth.
          • -
          • Episode 2: New Order (Part 2) - The team meets the Asgard leader Thor and learns about the threat of the Replicators.
          • -
          • Episode 6: Avatar - Teal'c gets trapped in a virtual reality simulation where he must fight against Anubis' forces.
          • -
          • Episode 12: Prometheus Unbound - Daniel Jackson encounters Vala Mal Doran, who hijacks the Prometheus spaceship.
          • -
          • Episode 14: Full Alert - The team deals with a conspiracy involving the Russian government and a rogue NID agent.
          • -
• Episode 16: Reckoning (Part 1) - The team joins forces with the rebel Jaffa and the Tok'ra to stop the Replicators and the Goa'uld.
          • -
• Episode 17: Reckoning (Part 2) - The team races to use the Ancient superweapon on Dakara, wiping out the Replicators across the galaxy.
          • -
          • Episode 20: Moebius (Part 2) - The team travels back in time to ancient Egypt and alters the course of history.
          • -
          -

          Stargate SG-1 season 8 is a thrilling and satisfying season that wraps up some of the major plotlines of the series and sets up new ones for the future. It is a must-watch for any Stargate fan or sci-fi lover.

          -```

          -
          -
          \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/BPM Studio Professional 4.9.1 Full Version.rar.rar.md b/spaces/inreVtussa/clothingai/Examples/BPM Studio Professional 4.9.1 Full Version.rar.rar.md deleted file mode 100644 index e5cc4fdf33d7b644c88db9a6c58780bd27f329a1..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/BPM Studio Professional 4.9.1 Full Version.rar.rar.md +++ /dev/null @@ -1,87 +0,0 @@ - -

          BPM Studio Professional 4.9.1 Full Version.rar.rar: How to Download and Install This Amazing DJ Software

          -

Are you a DJ or a music enthusiast who wants to create your own mixes and remixes with ease and precision? If so, you might want to check out BPM Studio Professional 4.9.1 Full Version.rar.rar, a program that lets you mix, edit, and play audio files in various formats such as MP3, WAV, OGG, and WMA. In this article, we will show you how to download and install this software and what its main features and benefits are.

          -

          BPM Studio Professional 4.9.1 Full Version.rar.rar


          Download Zip ::: https://tiurll.com/2uCkBS



          -

          How to Download BPM Studio Professional 4.9.1 Full Version.rar.rar

          -

          One of the easiest ways to download BPM Studio Professional 4.9.1 Full Version.rar.rar is to use a file sharing service such as 4shared. This is a website that allows you to upload and download files for free, with a maximum file size of 15 GB. To download BPM Studio Professional 4.9.1 Full Version.rar.rar from 4shared, you need to follow these steps:

          -
            -
          1. Go to this link, which will take you to the download page of BPM Studio Professional 4.9.1 Full Version.rar.rar on 4shared.
          2. -
          3. Click on the "Download" button, which will start the download process.
          4. -
          5. Wait for the download to finish, which may take a few minutes depending on your internet speed and the file size.
          6. -
7. Once the download is complete, you will have a file named BPM_Studio_Professional_4.9.1.rar on your computer.
          8. -
          -

          How to Install BPM Studio Professional 4.9.1 Full Version.rar.rar

          -

          After downloading BPM Studio Professional 4.9.1 Full Version.rar.rar from 4shared, you need to install it on your computer. To do that, you need to follow these steps:

          -
            -
1. Extract the file BPM_Studio_Professional_4.9.1.rar using a program such as WinRAR or 7-Zip. This will create a folder named BPM_Studio_Professional_4.9.1 on your computer.
          2. -
          3. Open the folder BPM_Studio_Professional_4.9.1 and double-click on the file setup.exe, which will launch the installation wizard.
          4. -
          5. Follow the instructions on the screen to complete the installation process.
          6. -
          7. Once the installation is complete, you will have a shortcut named BPM Studio Pro on your desktop.
          8. -
          9. Double-click on the shortcut to launch the software and start mixing your audio files.
          10. -
          -

          What are the Features and Benefits of BPM Studio Professional 4.9.1 Full Version.rar.rar

          -

BPM Studio Professional 4.9.1 Full Version.rar.rar is a program with many features and benefits that make it a great choice for DJs and music lovers. Some of the main ones are:

          -
            -
          • Dual sound card support: You can use two sound cards to output different audio channels to different speakers or headphones.
          • -
          • Sample player: You can load up to 16 samples and trigger them with hotkeys or MIDI controllers.
          • -
          • Beat matching: You can synchronize the tempo and phase of two tracks automatically or manually.
          • -
          • Waveform display: You can see the waveform of each track and zoom in or out for precise editing.
          • -
          • Skin support: You can customize the appearance of the software with different skins.
          • -
          • Remote control: You can control the software with a remote control device or a smartphone app.
          • -
          • Easy to use: The software has a user-friendly interface that is intuitive and easy to navigate.
          • -
          • Reliable and stable: The software has been tested and proven to work smoothly and without glitches or crashes.
          • -
          • Flexible and adaptable: The software can handle various audio formats and devices, and can be customized to suit your preferences and needs.
          • -
          • Creative and fun: The software allows you to unleash your creativity and have fun with your music collection.
          • -
          -

          Conclusion

          -

BPM Studio Professional 4.9.1 Full Version.rar.rar is a program that lets you mix, edit, and play audio files with ease and precision, and its feature set makes it a great choice for DJs and music lovers who want to create their own mixes and remixes. To download and install it, you can use a file-sharing service such as 4shared, which provides a free download link. Once installed, you can start mixing your audio files and enjoying your music.

          -

          How to Use BPM Studio Professional 4.9.1 Full Version.rar.rar

          -

          Once you have downloaded and installed BPM Studio Professional 4.9.1 Full Version.rar.rar, you can start using it to mix your audio files and create your own mixes and remixes. To use this software, you need to follow these steps:

          -

          -
            -
          1. Launch the software by double-clicking on the BPM Studio Pro shortcut on your desktop.
          2. -
          3. Select the audio files that you want to mix from your computer or from the built-in file browser.
          4. -
          5. Drag and drop the audio files to the left or right player, depending on which channel you want to use.
          6. -
          7. Use the mixer controls to adjust the volume, balance, equalizer, and effects of each channel.
          8. -
          9. Use the crossfader to blend the two channels smoothly.
          10. -
          11. Use the cue points and loops to mark and repeat specific parts of the tracks.
          12. -
          13. Use the sample player to add extra sounds or effects to your mix.
          14. -
          15. Use the beat matching feature to synchronize the tempo and phase of the two tracks.
          16. -
          17. Use the waveform display to see the shape and amplitude of each track and edit them precisely.
          18. -
          19. Use the record button to save your mix as an audio file on your computer or burn it to a CD.
          20. -
          -

          What are the Alternatives to BPM Studio Professional 4.9.1 Full Version.rar.rar

          -

          BPM Studio Professional 4.9.1 Full Version.rar.rar is not the only DJ software that you can use to mix your audio files and create your own mixes and remixes. There are many other alternatives that you can try, depending on your budget, preferences, and needs. Some of the most popular alternatives are:

          -
            -
• Virtual DJ: This program lets you mix, scratch, and remix audio and video files with various features and effects. It also supports online streaming and broadcasting, karaoke, and plug-ins.
• -
• Serato DJ: This program lets you mix, edit, and perform with audio files with high-quality sound and stability. It also supports MIDI controllers, vinyl emulation, DVS, and expansion packs.
• -
• Ableton Live: This program lets you create, produce, and perform music with various instruments, effects, and samples. It also supports live looping, recording, editing, and mixing.
• -
• Mixxx: This free and open-source program lets you mix, scratch, and remix audio files with various features and effects. It also supports MIDI controllers, vinyl control, live broadcasting, and scripting.
          • -
          -


          How to Uninstall BPM Studio Professional 4.9.1 Full Version.rar.rar

          -

          If you want to uninstall BPM Studio Professional 4.9.1 Full Version.rar.rar from your computer, you need to follow these steps:

          -
            -
          1. Go to the Start menu and click on Control Panel.
          2. -
          3. Click on Programs and Features or Add or Remove Programs, depending on your operating system.
          4. -
          5. Find BPM Studio Pro in the list of installed programs and click on it.
          6. -
          7. Click on Uninstall or Change/Remove, depending on your operating system.
          8. -
          9. Follow the instructions on the screen to complete the uninstallation process.
          10. -
          11. Restart your computer if prompted.
          12. -
          -

          How to Update BPM Studio Professional 4.9.1 Full Version.rar.rar

          -

Unfortunately, BPM Studio Professional 4.9.1 Full Version.rar.rar is an outdated version of the software that was last updated in 2008. There is no official website or customer service for it, and it may not be compatible with newer operating systems or devices. Therefore, there is no way to update it to a newer version or fix any bugs or issues that may arise. If you want a more up-to-date and supported DJ program, you might want to consider the alternatives that are available online.

          -

          How to Get Help and Support for BPM Studio Professional 4.9.1 Full Version.rar.rar

          -

          As mentioned above, BPM Studio Professional 4.9.1 Full Version.rar.rar is an outdated version of the software that has no official website or customer service. Therefore, it may be hard to find help or support for this software online. However, there are some possible sources of help and support that you can try, such as:

          -
            -
          • User manuals: You can find some user manuals for this software in PDF format on some file sharing websites such as 4shared. These manuals may provide you with some basic information and instructions on how to use this software.
          • -
          • User forums: You can find some user forums for this software on some websites such as LexCliq or Riverbend Lutheran. These forums may allow you to interact with other users of this software and ask questions or share tips and tricks.
          • -
          • User reviews: You can find some user reviews for this software on some websites such as Libraries.io or npm. These reviews may give you some feedback and opinions on the pros and cons of this software.
          • -
          -

          Conclusion

          -

BPM Studio Professional 4.9.1 Full Version.rar.rar is a program that lets you mix, edit, and play audio files with ease and precision, and it offers many features that make it a great choice for DJs and music lovers who want to create their own mixes and remixes. However, it also has some drawbacks that you should be aware of before downloading it, such as its high price, outdated version, and limited support. If you are looking for a free, up-to-date, or supported DJ program that is compatible with newer operating systems and devices, you might want to look at the alternatives listed above.

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Clyo System Cle De Licence.md b/spaces/inreVtussa/clothingai/Examples/Clyo System Cle De Licence.md deleted file mode 100644 index 793c076f17af3394ec3fa8464f99f1e58aa46c3b..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Clyo System Cle De Licence.md +++ /dev/null @@ -1,64 +0,0 @@ -

          Clyo System Cle De Licence


          Download File ::: https://tiurll.com/2uCm3D



          -
          -

          diff --git a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/models.py b/spaces/ivotai/VITS-Umamusume-voice-synthesizer/models.py deleted file mode 100644 index 7dcd22edf811b952514080f5f06cc43d635ead28..0000000000000000000000000000000000000000 --- a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/models.py +++ /dev/null @@ -1,542 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) 
* noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - emotion_embedding): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emotion_embedding = emotion_embedding - - if self.n_vocab!=0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - if emotion_embedding: - self.emotion_emb = nn.Linear(1024, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, emotion_embedding=None): - if self.n_vocab!=0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - if emotion_embedding is not None: - x = x + self.emotion_emb(emotion_embedding.unsqueeze(1)) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, 
reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), 
(stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - emotion_embedding=False, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = 
TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - emotion_embedding) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None, emotion_embedding=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, 
d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/jaleesahmed/data-description/app.py b/spaces/jaleesahmed/data-description/app.py deleted file mode 100644 index f70aac4510a223e30e93be5b64c7884c7ee3626b..0000000000000000000000000000000000000000 --- a/spaces/jaleesahmed/data-description/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import gradio as gr -import pandas as pd -from sklearn.preprocessing import LabelEncoder - -def data_description(desc_type): - df = pd.read_csv('emp_experience_data.csv') - pd.options.display.max_columns = 25 - pd.options.display.max_rows = 10 - data_encoded = df.copy(deep=True) - categorical_column = ['Attrition', 'Gender', 'BusinessTravel', 'Education', 'EmployeeExperience', 'EmployeeFeedbackSentiments', 'Designation', - 'SalarySatisfaction', 'HealthBenefitsSatisfaction', 'UHGDiscountProgramUsage', 'HealthConscious', 'CareerPathSatisfaction', 'Region'] - label_encoding = LabelEncoder() - - if desc_type == "Display Data": - return df.head() - if desc_type == "Describe Data": - df_copy = df.copy(deep=True) - data_desc = df_copy.describe() - data_desc.insert(0, "Description", ["count", "mean", "std", "min", "25%", "50%", "75%", "max"], True) - return data_desc - if desc_type == "Display Encoding": - data = [["Feature", "Mapping"]] - for col in categorical_column: - data_encoded[col] = label_encoding.fit_transform(data_encoded[col]) - le_name_mapping = dict(zip(label_encoding.classes_, label_encoding.transform(label_encoding.classes_))) - data.append([col, str(le_name_mapping)]) - return data - if desc_type == "Display Encoded Data": - for col in categorical_column: - data_encoded[col] = label_encoding.fit_transform(data_encoded[col]) - return data_encoded.head() - -inputs = [ - gr.Dropdown(["Display Data", "Describe Data", "Display Encoding", "Display Encoded Data"], label="Perform Data Actions") - ] - -outputs = [gr.DataFrame()] - -demo = gr.Interface( - fn = data_description, - inputs = inputs, - outputs = outputs, - title="Employee-Experience: Data Description", - allow_flagging=False -) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/jcenaa/Segment-Any-RGBD/datasets/prepare_coco_stuff_sem_seg.py b/spaces/jcenaa/Segment-Any-RGBD/datasets/prepare_coco_stuff_sem_seg.py deleted file mode 100644 index 1c2281f3590a2ec68d5aceb904d7a8ba10bd993a..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/datasets/prepare_coco_stuff_sem_seg.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. 
All Rights Reserved -# Modified by Feng Liang from -# https://github.com/MendelXu/zsseg.baseline/blob/master/datasets/prepare_coco_stuff_164k_sem_seg.py - -import os -import os.path as osp -from pathlib import Path -import tqdm -from glob import glob - -import numpy as np -from PIL import Image - - -full_clsID_to_trID = { - 0: 0, - 1: 1, - 2: 2, - 3: 3, - 4: 4, - 5: 5, - 6: 6, - 7: 7, - 8: 8, - 9: 9, - 10: 10, - 12: 11, - 13: 12, - 14: 13, - 15: 14, - 16: 15, - 17: 16, - 18: 17, - 19: 18, - 20: 19, - 21: 20, - 22: 21, - 23: 22, - 24: 23, - 26: 24, - 27: 25, - 30: 26, - 31: 27, - 32: 28, - 33: 29, - 34: 30, - 35: 31, - 36: 32, - 37: 33, - 38: 34, - 39: 35, - 40: 36, - 41: 37, - 42: 38, - 43: 39, - 45: 40, - 46: 41, - 47: 42, - 48: 43, - 49: 44, - 50: 45, - 51: 46, - 52: 47, - 53: 48, - 54: 49, - 55: 50, - 56: 51, - 57: 52, - 58: 53, - 59: 54, - 60: 55, - 61: 56, - 62: 57, - 63: 58, - 64: 59, - 66: 60, - 69: 61, - 71: 62, - 72: 63, - 73: 64, - 74: 65, - 75: 66, - 76: 67, - 77: 68, - 78: 69, - 79: 70, - 80: 71, - 81: 72, - 83: 73, - 84: 74, - 85: 75, - 86: 76, - 87: 77, - 88: 78, - 89: 79, - 91: 80, - 92: 81, - 93: 82, - 94: 83, - 95: 84, - 96: 85, - 97: 86, - 98: 87, - 99: 88, - 100: 89, - 101: 90, - 102: 91, - 103: 92, - 104: 93, - 105: 94, - 106: 95, - 107: 96, - 108: 97, - 109: 98, - 110: 99, - 111: 100, - 112: 101, - 113: 102, - 114: 103, - 115: 104, - 116: 105, - 117: 106, - 118: 107, - 119: 108, - 120: 109, - 121: 110, - 122: 111, - 123: 112, - 124: 113, - 125: 114, - 126: 115, - 127: 116, - 128: 117, - 129: 118, - 130: 119, - 131: 120, - 132: 121, - 133: 122, - 134: 123, - 135: 124, - 136: 125, - 137: 126, - 138: 127, - 139: 128, - 140: 129, - 141: 130, - 142: 131, - 143: 132, - 144: 133, - 145: 134, - 146: 135, - 147: 136, - 148: 137, - 149: 138, - 150: 139, - 151: 140, - 152: 141, - 153: 142, - 154: 143, - 155: 144, - 156: 145, - 157: 146, - 158: 147, - 159: 148, - 160: 149, - 161: 150, - 162: 151, - 163: 152, - 164: 153, - 165: 154, - 166: 155, - 167: 156, - 168: 157, - 169: 158, - 170: 159, - 171: 160, - 172: 161, - 173: 162, - 174: 163, - 175: 164, - 176: 165, - 177: 166, - 178: 167, - 179: 168, - 180: 169, - 181: 170, - 255: 255, -} - -def convert_to_trainID( - maskpath, out_mask_dir, is_train, clsID_to_trID=full_clsID_to_trID, suffix="" -): - mask = np.array(Image.open(maskpath)) - mask_copy = np.ones_like(mask, dtype=np.uint8) * 255 - for clsID, trID in clsID_to_trID.items(): - mask_copy[mask == clsID] = trID - seg_filename = ( - osp.join(out_mask_dir, "train2017" + suffix, osp.basename(maskpath)) - if is_train - else osp.join(out_mask_dir, "val2017" + suffix, osp.basename(maskpath)) - ) - if len(np.unique(mask_copy)) == 1 and np.unique(mask_copy)[0] == 255: - return - Image.fromarray(mask_copy).save(seg_filename, "PNG") - - - -if __name__ == "__main__": - dataset_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) - print('Caution: we only generate the training set!') - coco_path = dataset_dir / "coco" - mask_dir = coco_path / "stuffthingmaps" - out_mask_dir = coco_path / "stuffthingmaps_detectron2" - for name in ["train2017"]: - os.makedirs((out_mask_dir / name), exist_ok=True) - train_list = glob(osp.join(mask_dir, "train2017", "*.png")) - for file in tqdm.tqdm(train_list): - convert_to_trainID(file, out_mask_dir, is_train=True) diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/__init__.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/__init__.py deleted file mode 100644 index 
52db7cce67b1686f7cab3698f15b8f309c897918..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved \ No newline at end of file diff --git a/spaces/jdhuka/AIPairProgramming1/app.py b/spaces/jdhuka/AIPairProgramming1/app.py deleted file mode 100644 index 07b09ead201baf9ce876a1e5aaa14b3d42594146..0000000000000000000000000000000000000000 --- a/spaces/jdhuka/AIPairProgramming1/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import streamlit as st -import time - - - -def main(): - st.title("File Upload and Display") - - - - # File upload - uploaded_file = st.file_uploader("Upload a file") - - - - if uploaded_file is not None: - # Display file contents using st.markdown() - file_contents = uploaded_file.read().decode("utf-8") - st.markdown("### File Contents:") - st.markdown(f"```{file_contents}```") - - - - # Wait for 5 seconds - time.sleep(5) - - - - # Show completed message - st.success("File processing completed!") - - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_formatter.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_formatter.py deleted file mode 100644 index 528b16d5b5b9ca6552254ee19f66828f78a86946..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_formatter.py +++ /dev/null @@ -1,113 +0,0 @@ -import pytest - -from bs4.element import Tag -from bs4.formatter import ( - Formatter, - HTMLFormatter, - XMLFormatter, -) -from . import SoupTest - -class TestFormatter(SoupTest): - - def test_default_attributes(self): - # Test the default behavior of Formatter.attributes(). - formatter = Formatter() - tag = Tag(name="tag") - tag['b'] = 1 - tag['a'] = 2 - - # Attributes come out sorted by name. In Python 3, attributes - # normally come out of a dictionary in the order they were - # added. - assert [('a', 2), ('b', 1)] == formatter.attributes(tag) - - # This works even if Tag.attrs is None, though this shouldn't - # normally happen. - tag.attrs = None - assert [] == formatter.attributes(tag) - - assert ' ' == formatter.indent - - def test_sort_attributes(self): - # Test the ability to override Formatter.attributes() to, - # e.g., disable the normal sorting of attributes. - class UnsortedFormatter(Formatter): - def attributes(self, tag): - self.called_with = tag - for k, v in sorted(tag.attrs.items()): - if k == 'ignore': - continue - yield k,v - - soup = self.soup('

<p cval="1" aval="2" ignore="ignore"></p>') - formatter = UnsortedFormatter() - decoded = soup.decode(formatter=formatter) - - # attributes() was called on the <p> tag. It filtered out one - # attribute and sorted the other two. - assert formatter.called_with == soup.p - assert '<p aval="2" cval="1"></p>' == decoded - - def test_empty_attributes_are_booleans(self): - # Test the behavior of empty_attributes_are_booleans as well - # as which Formatters have it enabled. - - for name in ('html', 'minimal', None): - formatter = HTMLFormatter.REGISTRY[name] - assert False == formatter.empty_attributes_are_booleans - - formatter = XMLFormatter.REGISTRY[None] - assert False == formatter.empty_attributes_are_booleans - - formatter = HTMLFormatter.REGISTRY['html5'] - assert True == formatter.empty_attributes_are_booleans - - # Verify that the constructor sets the value. - formatter = Formatter(empty_attributes_are_booleans=True) - assert True == formatter.empty_attributes_are_booleans - - # Now demonstrate what it does to markup. - for markup in ( - "<option selected></option>", - '<option selected="1"></option>' - ): - soup = self.soup(markup) - for formatter in ('html', 'minimal', 'xml', None): - assert b'<option selected=""></option>' == soup.option.encode(formatter='html') - assert b'<option selected></option>' == soup.option.encode(formatter='html5') - - @pytest.mark.parametrize( - "indent,expect", - [ - (None, '<a>\n<b>\ntext\n</b>\n</a>\n'), - (-1, '<a>\n<b>\ntext\n</b>\n</a>\n'), - (0, '<a>\n<b>\ntext\n</b>\n</a>\n'), - ("", '<a>\n<b>\ntext\n</b>\n</a>\n'), - - (1, '<a>\n <b>\n  text\n </b>\n</a>\n'), - (2, '<a>\n  <b>\n    text\n  </b>\n</a>\n'), - - ("\t", '<a>\n\t<b>\n\t\ttext\n\t</b>\n</a>\n'), - ('abc', '<a>\nabc<b>\nabcabctext\nabc</b>\n</a>\n'), - - # Some invalid inputs -- the default behavior is used. - (object(), '<a>\n <b>\n  text\n </b>\n</a>\n'), - (b'bytes', '<a>\n <b>\n  text\n </b>\n</a>\n'), - ] - ) - def test_indent(self, indent, expect): - # Pretty-print a tree with a Formatter set to - # indent in a certain way and verify the results. - soup = self.soup("<a><b>text</b></a>") - formatter = Formatter(indent=indent) - assert soup.prettify(formatter=formatter) == expect - - # Pretty-printing only happens with prettify(), not - # encode(). - assert soup.encode(formatter=formatter) != expect - - def test_default_indent_value(self): - formatter = Formatter() - assert formatter.indent == ' ' - diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/AFSDB.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/AFSDB.py deleted file mode 100644 index 3d287f6e02e57731db9884eb26441774da8cde06..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/AFSDB.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -import dns.immutable -import dns.rdtypes.mxbase - - -@dns.immutable.immutable -class AFSDB(dns.rdtypes.mxbase.UncompressedDowncasingMX): - - """AFSDB record""" - - # Use the property mechanism to make "subtype" an alias for the - # "preference" attribute, and "hostname" an alias for the "exchange" - # attribute.
- # - # This lets us inherit the UncompressedMX implementation but lets - # the caller use appropriate attribute names for the rdata type. - # - # We probably lose some performance vs. a cut-and-paste - # implementation, but this way we don't copy code, and that's - # good. - - @property - def subtype(self): - "the AFSDB subtype" - return self.preference - - @property - def hostname(self): - "the AFSDB hostname" - return self.exchange diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/TKEY.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/TKEY.py deleted file mode 100644 index d5f5fc4581e62eb29865a26f0c0f9c84056ab903..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/TKEY.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2004-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -import base64 -import struct - -import dns.exception -import dns.immutable -import dns.rdata - - -@dns.immutable.immutable -class TKEY(dns.rdata.Rdata): - - """TKEY Record""" - - __slots__ = [ - "algorithm", - "inception", - "expiration", - "mode", - "error", - "key", - "other", - ] - - def __init__( - self, - rdclass, - rdtype, - algorithm, - inception, - expiration, - mode, - error, - key, - other=b"", - ): - super().__init__(rdclass, rdtype) - self.algorithm = self._as_name(algorithm) - self.inception = self._as_uint32(inception) - self.expiration = self._as_uint32(expiration) - self.mode = self._as_uint16(mode) - self.error = self._as_uint16(error) - self.key = self._as_bytes(key) - self.other = self._as_bytes(other) - - def to_text(self, origin=None, relativize=True, **kw): - _algorithm = self.algorithm.choose_relativity(origin, relativize) - text = "%s %u %u %u %u %s" % ( - str(_algorithm), - self.inception, - self.expiration, - self.mode, - self.error, - dns.rdata._base64ify(self.key, 0), - ) - if len(self.other) > 0: - text += " %s" % (dns.rdata._base64ify(self.other, 0)) - - return text - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - algorithm = tok.get_name(relativize=False) - inception = tok.get_uint32() - expiration = tok.get_uint32() - mode = tok.get_uint16() - error = tok.get_uint16() - key_b64 = tok.get_string().encode() - key = base64.b64decode(key_b64) - other_b64 = tok.concatenate_remaining_identifiers(True).encode() - other = base64.b64decode(other_b64) - - return cls( - rdclass, rdtype, algorithm, inception, expiration, mode, error, key, other - ) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - self.algorithm.to_wire(file, compress, 
origin) - file.write( - struct.pack("!IIHH", self.inception, self.expiration, self.mode, self.error) - ) - file.write(struct.pack("!H", len(self.key))) - file.write(self.key) - file.write(struct.pack("!H", len(self.other))) - if len(self.other) > 0: - file.write(self.other) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - algorithm = parser.get_name(origin) - inception, expiration, mode, error = parser.get_struct("!IIHH") - key = parser.get_counted_bytes(2) - other = parser.get_counted_bytes(2) - - return cls( - rdclass, rdtype, algorithm, inception, expiration, mode, error, key, other - ) - - # Constants for the mode field - from RFC 2930: - # 2.5 The Mode Field - # - # The mode field specifies the general scheme for key agreement or - # the purpose of the TKEY DNS message. Servers and resolvers - # supporting this specification MUST implement the Diffie-Hellman key - # agreement mode and the key deletion mode for queries. All other - # modes are OPTIONAL. A server supporting TKEY that receives a TKEY - # request with a mode it does not support returns the BADMODE error. - # The following values of the Mode octet are defined, available, or - # reserved: - # - # Value Description - # ----- ----------- - # 0 - reserved, see section 7 - # 1 server assignment - # 2 Diffie-Hellman exchange - # 3 GSS-API negotiation - # 4 resolver assignment - # 5 key deletion - # 6-65534 - available, see section 7 - # 65535 - reserved, see section 7 - SERVER_ASSIGNMENT = 1 - DIFFIE_HELLMAN_EXCHANGE = 2 - GSSAPI_NEGOTIATION = 3 - RESOLVER_ASSIGNMENT = 4 - KEY_DELETION = 5 diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/URI.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/URI.py deleted file mode 100644 index 7463e277dc19db1f71f66fbd89abcaa29c2f8e2b..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/URI.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# Copyright (C) 2015 Red Hat, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -import struct - -import dns.exception -import dns.immutable -import dns.name -import dns.rdata -import dns.rdtypes.util - - -@dns.immutable.immutable -class URI(dns.rdata.Rdata): - - """URI record""" - - # see RFC 7553 - - __slots__ = ["priority", "weight", "target"] - - def __init__(self, rdclass, rdtype, priority, weight, target): - super().__init__(rdclass, rdtype) - self.priority = self._as_uint16(priority) - self.weight = self._as_uint16(weight) - self.target = self._as_bytes(target, True) - if len(self.target) == 0: - raise dns.exception.SyntaxError("URI target cannot be empty") - - def to_text(self, origin=None, relativize=True, **kw): - return '%d %d "%s"' % (self.priority, self.weight, self.target.decode()) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - priority = tok.get_uint16() - weight = tok.get_uint16() - target = tok.get().unescape() - if not (target.is_quoted_string() or target.is_identifier()): - raise dns.exception.SyntaxError("URI target must be a string") - return cls(rdclass, rdtype, priority, weight, target.value) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - two_ints = struct.pack("!HH", self.priority, self.weight) - file.write(two_ints) - file.write(self.target) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - (priority, weight) = parser.get_struct("!HH") - target = parser.get_remaining() - if len(target) == 0: - raise dns.exception.FormError("URI target may not be empty") - return cls(rdclass, rdtype, priority, weight, target) - - def _processing_priority(self): - return self.priority - - def _processing_weight(self): - return self.weight - - @classmethod - def _processing_order(cls, iterable): - return dns.rdtypes.util.weighted_processing_order(iterable) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/NAPTR.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/NAPTR.py deleted file mode 100644 index 1f1f5a12678af763ab0a458c141fc6d05f887615..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/NAPTR.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -import struct - -import dns.exception -import dns.immutable -import dns.name -import dns.rdata -import dns.rdtypes.util - - -def _write_string(file, s): - l = len(s) - assert l < 256 - file.write(struct.pack("!B", l)) - file.write(s) - - -@dns.immutable.immutable -class NAPTR(dns.rdata.Rdata): - - """NAPTR record""" - - # see: RFC 3403 - - __slots__ = ["order", "preference", "flags", "service", "regexp", "replacement"] - - def __init__( - self, rdclass, rdtype, order, preference, flags, service, regexp, replacement - ): - super().__init__(rdclass, rdtype) - self.flags = self._as_bytes(flags, True, 255) - self.service = self._as_bytes(service, True, 255) - self.regexp = self._as_bytes(regexp, True, 255) - self.order = self._as_uint16(order) - self.preference = self._as_uint16(preference) - self.replacement = self._as_name(replacement) - - def to_text(self, origin=None, relativize=True, **kw): - replacement = self.replacement.choose_relativity(origin, relativize) - return '%d %d "%s" "%s" "%s" %s' % ( - self.order, - self.preference, - dns.rdata._escapify(self.flags), - dns.rdata._escapify(self.service), - dns.rdata._escapify(self.regexp), - replacement, - ) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - order = tok.get_uint16() - preference = tok.get_uint16() - flags = tok.get_string() - service = tok.get_string() - regexp = tok.get_string() - replacement = tok.get_name(origin, relativize, relativize_to) - return cls( - rdclass, rdtype, order, preference, flags, service, regexp, replacement - ) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - two_ints = struct.pack("!HH", self.order, self.preference) - file.write(two_ints) - _write_string(file, self.flags) - _write_string(file, self.service) - _write_string(file, self.regexp) - self.replacement.to_wire(file, compress, origin, canonicalize) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - (order, preference) = parser.get_struct("!HH") - strings = [] - for _ in range(3): - s = parser.get_counted_bytes() - strings.append(s) - replacement = parser.get_name(origin) - return cls( - rdclass, - rdtype, - order, - preference, - strings[0], - strings[1], - strings[2], - replacement, - ) - - def _processing_priority(self): - return (self.order, self.preference) - - @classmethod - def _processing_order(cls, iterable): - return dns.rdtypes.util.priority_processing_order(iterable) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/obsidian.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/obsidian.py deleted file mode 100644 index 1404005433f2a8f5a4aaf2f0cb40dba48a0cd9e8..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/obsidian.py +++ /dev/null @@ -1,46 +0,0 @@ -"""Obsidian reader class. - -Pass in the path to an Obsidian vault and it will parse all markdown -files into a List of Documents, -with each Document containing text from under an Obsidian header. 
- -""" -import os -from pathlib import Path -from typing import Any, List - -from langchain.docstore.document import Document as LCDocument - -from gpt_index.readers.base import BaseReader -from gpt_index.readers.file.markdown_parser import MarkdownParser -from gpt_index.readers.schema.base import Document - - -class ObsidianReader(BaseReader): - """Utilities for loading data from an Obsidian Vault. - - Args: - input_dir (str): Path to the vault. - - """ - - def __init__(self, input_dir: str): - """Init params.""" - self.input_dir = Path(input_dir) - - def load_data(self, *args: Any, **load_kwargs: Any) -> List[Document]: - """Load data from the input directory.""" - docs: List[str] = [] - for dirpath, dirnames, filenames in os.walk(self.input_dir): - dirnames[:] = [d for d in dirnames if not d.startswith(".")] - for filename in filenames: - if filename.endswith(".md"): - filepath = os.path.join(dirpath, filename) - content = MarkdownParser().parse_file(Path(filepath)) - docs.extend(content) - return [Document(d) for d in docs] - - def load_langchain_documents(self, **load_kwargs: Any) -> List[LCDocument]: - """Load data in LangChain document format.""" - docs = self.load_data(**load_kwargs) - return [d.to_langchain_format() for d in docs] diff --git a/spaces/johnhelf/roop/roop/ui.py b/spaces/johnhelf/roop/roop/ui.py deleted file mode 100644 index ba693dac116bd416b91518734fa550e9dfb95c7b..0000000000000000000000000000000000000000 --- a/spaces/johnhelf/roop/roop/ui.py +++ /dev/null @@ -1,231 +0,0 @@ -import os -import webbrowser -import customtkinter as ctk -from typing import Callable, Tuple -import cv2 -from PIL import Image, ImageOps - -import roop.globals -import roop.metadata -from roop.face_analyser import get_one_face -from roop.capturer import get_video_frame, get_video_frame_total -from roop.predicter import predict_frame -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import is_image, is_video, resolve_relative_path - -ROOT = None -ROOT_HEIGHT = 700 -ROOT_WIDTH = 600 - -PREVIEW = None -PREVIEW_MAX_HEIGHT = 700 -PREVIEW_MAX_WIDTH = 1200 - -RECENT_DIRECTORY_SOURCE = None -RECENT_DIRECTORY_TARGET = None -RECENT_DIRECTORY_OUTPUT = None - -preview_label = None -preview_slider = None -source_label = None -target_label = None -status_label = None - - -def init(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk: - global ROOT, PREVIEW - - ROOT = create_root(start, destroy) - PREVIEW = create_preview(ROOT) - - return ROOT - - -def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk: - global source_label, target_label, status_label - - ctk.deactivate_automatic_dpi_awareness() - ctk.set_appearance_mode('system') - ctk.set_default_color_theme(resolve_relative_path('ui.json')) - - root = ctk.CTk() - root.minsize(ROOT_WIDTH, ROOT_HEIGHT) - root.title(f'{roop.metadata.name} {roop.metadata.version}') - root.configure() - root.protocol('WM_DELETE_WINDOW', lambda: destroy()) - - source_label = ctk.CTkLabel(root, text=None) - source_label.place(relx=0.1, rely=0.1, relwidth=0.3, relheight=0.25) - - target_label = ctk.CTkLabel(root, text=None) - target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25) - - source_button = ctk.CTkButton(root, text='Select a face', cursor='hand2', command=lambda: select_source_path()) - source_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1) - - target_button = ctk.CTkButton(root, text='Select a target', cursor='hand2', command=lambda: select_target_path()) - 
target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1) - - keep_fps_value = ctk.BooleanVar(value=roop.globals.keep_fps) - keep_fps_checkbox = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_fps', not roop.globals.keep_fps)) - keep_fps_checkbox.place(relx=0.1, rely=0.6) - - keep_frames_value = ctk.BooleanVar(value=roop.globals.keep_frames) - keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_frames', keep_frames_value.get())) - keep_frames_switch.place(relx=0.1, rely=0.65) - - keep_audio_value = ctk.BooleanVar(value=roop.globals.keep_audio) - keep_audio_switch = ctk.CTkSwitch(root, text='Keep audio', variable=keep_audio_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_audio', keep_audio_value.get())) - keep_audio_switch.place(relx=0.6, rely=0.6) - - many_faces_value = ctk.BooleanVar(value=roop.globals.many_faces) - many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, cursor='hand2', command=lambda: setattr(roop.globals, 'many_faces', many_faces_value.get())) - many_faces_switch.place(relx=0.6, rely=0.65) - - start_button = ctk.CTkButton(root, text='Start', cursor='hand2', command=lambda: select_output_path(start)) - start_button.place(relx=0.15, rely=0.75, relwidth=0.2, relheight=0.05) - - stop_button = ctk.CTkButton(root, text='Destroy', cursor='hand2', command=lambda: destroy()) - stop_button.place(relx=0.4, rely=0.75, relwidth=0.2, relheight=0.05) - - preview_button = ctk.CTkButton(root, text='Preview', cursor='hand2', command=lambda: toggle_preview()) - preview_button.place(relx=0.65, rely=0.75, relwidth=0.2, relheight=0.05) - - status_label = ctk.CTkLabel(root, text=None, justify='center') - status_label.place(relx=0.1, rely=0.9, relwidth=0.8) - - donate_label = ctk.CTkLabel(root, text='^_^ Donate to project ^_^', justify='center', cursor='hand2') - donate_label.place(relx=0.1, rely=0.95, relwidth=0.8) - donate_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color')) - donate_label.bind(' -
-          Untitled chat
-          1:42 AM
      - - - - ) -} diff --git a/spaces/mrneuralnet/P-DFD/trainer/__init__.py b/spaces/mrneuralnet/P-DFD/trainer/__init__.py deleted file mode 100644 index d7f4da6b97a20cf6c6dae6761dffea145a336cea..0000000000000000000000000000000000000000 --- a/spaces/mrneuralnet/P-DFD/trainer/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .abstract_trainer import AbstractTrainer, LEGAL_METRIC -from .exp_mgpu_trainer import ExpMultiGpuTrainer -from .exp_tester import ExpTester -from .utils import center_print, reduce_tensor -from .utils import exp_recons_loss diff --git a/spaces/mrneuralnet/P-DFD/utils/timer.py b/spaces/mrneuralnet/P-DFD/utils/timer.py deleted file mode 100644 index e4b3b8098a5ad41f8d18d42b6b2fedb694aa5508..0000000000000000000000000000000000000000 --- a/spaces/mrneuralnet/P-DFD/utils/timer.py +++ /dev/null @@ -1,40 +0,0 @@ -# -------------------------------------------------------- -# Fast R-CNN -# Copyright (c) 2015 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ross Girshick -# -------------------------------------------------------- - -import time - - -class Timer(object): - """A simple timer.""" - def __init__(self): - self.total_time = 0. - self.calls = 0 - self.start_time = 0. - self.diff = 0. - self.average_time = 0. - - def tic(self): - # using time.time instead of time.clock because time time.clock - # does not normalize for multithreading - self.start_time = time.time() - - def toc(self, average=True): - self.diff = time.time() - self.start_time - self.total_time += self.diff - self.calls += 1 - self.average_time = self.total_time / self.calls - if average: - return self.average_time - else: - return self.diff - - def clear(self): - self.total_time = 0. - self.calls = 0 - self.start_time = 0. - self.diff = 0. - self.average_time = 0. 
diff --git a/spaces/mrneuralnet/P-PD/utils/visualize.py b/spaces/mrneuralnet/P-PD/utils/visualize.py deleted file mode 100644 index 71722d4e513c02f3f08a9af0769225e88553c4fd..0000000000000000000000000000000000000000 --- a/spaces/mrneuralnet/P-PD/utils/visualize.py +++ /dev/null @@ -1,59 +0,0 @@ -import os -import cv2 -import torch -import numpy as np -import torchvision -from PIL import Image - - -def unnormalize(tens, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]): - # assume tensor of shape NxCxHxW - return tens * torch.Tensor(std)[None, :, None, None] + torch.Tensor( - mean)[None, :, None, None] - - -def get_heatmap_cv(img, magn, max_flow_mag): - min_flow_mag = .5 - cv_magn = np.clip( - 255 * (magn - min_flow_mag) / (max_flow_mag - min_flow_mag), - a_min=0, - a_max=255).astype(np.uint8) - if img.dtype != np.uint8: - img = (255 * img).astype(np.uint8) - - heatmap_img = cv2.applyColorMap(cv_magn, cv2.COLORMAP_JET) - heatmap_img = heatmap_img[..., ::-1] - - h, w = magn.shape - img_alpha = np.ones((h, w), dtype=np.double)[:, :, None] - heatmap_alpha = np.clip( - magn / max_flow_mag, a_min=0, a_max=1)[:, :, None]**.7 - heatmap_alpha[heatmap_alpha < .2]**.5 - pm_hm = heatmap_img * heatmap_alpha - pm_img = img * img_alpha - cv_out = pm_hm + pm_img * (1 - heatmap_alpha) - cv_out = np.clip(cv_out, a_min=0, a_max=255).astype(np.uint8) - - return cv_out - - -def get_heatmap_batch(img_batch, pred_batch): - imgrid = torchvision.utils.make_grid(img_batch).cpu() - magn_batch = torch.norm(pred_batch, p=2, dim=1, keepdim=True) - magngrid = torchvision.utils.make_grid(magn_batch) - magngrid = magngrid[0, :, :] - imgrid = unnormalize(imgrid).squeeze_() - - cv_magn = magngrid.detach().cpu().numpy() - cv_img = imgrid.permute(1, 2, 0).detach().cpu().numpy() - cv_out = get_heatmap_cv(cv_img, cv_magn, max_flow_mag=9) - out = np.asarray(cv_out).astype(np.double) / 255.0 - - out = torch.from_numpy(out).permute(2, 0, 1) - return out - - -def save_heatmap_cv(img, magn, path, max_flow_mag=7): - cv_out = get_heatmap_cv(img, magn, max_flow_mag) - out = Image.fromarray(cv_out) - out.save(path, quality=95) diff --git a/spaces/mshkdm/VToonify/vtoonify/model/raft/core/utils/utils.py b/spaces/mshkdm/VToonify/vtoonify/model/raft/core/utils/utils.py deleted file mode 100644 index 741ccfe4d0d778c3199c586d368edc2882d4fff8..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/raft/core/utils/utils.py +++ /dev/null @@ -1,82 +0,0 @@ -import torch -import torch.nn.functional as F -import numpy as np -from scipy import interpolate - - -class InputPadder: - """ Pads images such that dimensions are divisible by 8 """ - def __init__(self, dims, mode='sintel'): - self.ht, self.wd = dims[-2:] - pad_ht = (((self.ht // 8) + 1) * 8 - self.ht) % 8 - pad_wd = (((self.wd // 8) + 1) * 8 - self.wd) % 8 - if mode == 'sintel': - self._pad = [pad_wd//2, pad_wd - pad_wd//2, pad_ht//2, pad_ht - pad_ht//2] - else: - self._pad = [pad_wd//2, pad_wd - pad_wd//2, 0, pad_ht] - - def pad(self, *inputs): - return [F.pad(x, self._pad, mode='replicate') for x in inputs] - - def unpad(self,x): - ht, wd = x.shape[-2:] - c = [self._pad[2], ht-self._pad[3], self._pad[0], wd-self._pad[1]] - return x[..., c[0]:c[1], c[2]:c[3]] - -def forward_interpolate(flow): - flow = flow.detach().cpu().numpy() - dx, dy = flow[0], flow[1] - - ht, wd = dx.shape - x0, y0 = np.meshgrid(np.arange(wd), np.arange(ht)) - - x1 = x0 + dx - y1 = y0 + dy - - x1 = x1.reshape(-1) - y1 = y1.reshape(-1) - dx = dx.reshape(-1) - dy = dy.reshape(-1) - - 
valid = (x1 > 0) & (x1 < wd) & (y1 > 0) & (y1 < ht) - x1 = x1[valid] - y1 = y1[valid] - dx = dx[valid] - dy = dy[valid] - - flow_x = interpolate.griddata( - (x1, y1), dx, (x0, y0), method='nearest', fill_value=0) - - flow_y = interpolate.griddata( - (x1, y1), dy, (x0, y0), method='nearest', fill_value=0) - - flow = np.stack([flow_x, flow_y], axis=0) - return torch.from_numpy(flow).float() - - -def bilinear_sampler(img, coords, mode='bilinear', mask=False): - """ Wrapper for grid_sample, uses pixel coordinates """ - H, W = img.shape[-2:] - xgrid, ygrid = coords.split([1,1], dim=-1) - xgrid = 2*xgrid/(W-1) - 1 - ygrid = 2*ygrid/(H-1) - 1 - - grid = torch.cat([xgrid, ygrid], dim=-1) - img = F.grid_sample(img, grid, align_corners=True) - - if mask: - mask = (xgrid > -1) & (ygrid > -1) & (xgrid < 1) & (ygrid < 1) - return img, mask.float() - - return img - - -def coords_grid(batch, ht, wd, device): - coords = torch.meshgrid(torch.arange(ht, device=device), torch.arange(wd, device=device)) - coords = torch.stack(coords[::-1], dim=0).float() - return coords[None].repeat(batch, 1, 1, 1) - - -def upflow8(flow, mode='bilinear'): - new_size = (8 * flow.shape[2], 8 * flow.shape[3]) - return 8 * F.interpolate(flow, size=new_size, mode=mode, align_corners=True) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py b/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py deleted file mode 100644 index b1c47868fa3b4e21f939b0695ede8d14ba1b168d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch - -from typing import List, Dict - -from .base_decoder import BaseDecoder - - -class ViterbiDecoder(BaseDecoder): - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - def get_pred(e): - toks = e.argmax(dim=-1).unique_consecutive() - return toks[toks != self.blank] - - return [[{"tokens": get_pred(x), "score": 0}] for x in emissions] diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initvqa.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initvqa.sh deleted file mode 100644 index ac23ec0ae3e38387270cf2b3f6423ab32e1d7687..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initvqa.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initvqa -#SBATCH --nodes=1 -#SBATCH --ntasks=1 -#SBATCH --gpus=8 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --time=5:00:00 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initvqa.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/ratatouille/scaling_best/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initvqa.sh - - diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/attention.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/attention.py deleted file mode 100644 index 25e1ea28dcf0226defc89fc6c92b5fc3faeac462..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/attention.py +++ /dev/null @@ -1,347 +0,0 @@ -import math -from typing import Optional - -import torch -import torch.nn.functional as F -from torch import nn - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. Originally ported from here, but adapted - to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. - Uses three q, k, v linear layers to compute attention. - - Parameters: - channels (:obj:`int`): The number of channels in the input and output. - num_head_channels (:obj:`int`, *optional*): - The number of channels in each head. If None, then `num_heads` = 1. - num_groups (:obj:`int`, *optional*, defaults to 32): The number of groups to use for group norm. - rescale_output_factor (:obj:`float`, *optional*, defaults to 1.0): The factor to rescale the output by. - eps (:obj:`float`, *optional*, defaults to 1e-5): The epsilon value to use for group norm. 
- """ - - def __init__( - self, - channels: int, - num_head_channels: Optional[int] = None, - num_groups: int = 32, - rescale_output_factor: float = 1.0, - eps: float = 1e-5, - ): - super().__init__() - self.channels = channels - - self.num_heads = channels // num_head_channels if num_head_channels is not None else 1 - self.num_head_size = num_head_channels - self.group_norm = nn.GroupNorm(num_channels=channels, num_groups=num_groups, eps=eps, affine=True) - - # define q,k,v as linear layers - self.query = nn.Linear(channels, channels) - self.key = nn.Linear(channels, channels) - self.value = nn.Linear(channels, channels) - - self.rescale_output_factor = rescale_output_factor - self.proj_attn = nn.Linear(channels, channels, 1) - - def transpose_for_scores(self, projection: torch.Tensor) -> torch.Tensor: - new_projection_shape = projection.size()[:-1] + (self.num_heads, -1) - # move heads to 2nd position (B, T, H * D) -> (B, T, H, D) -> (B, H, T, D) - new_projection = projection.view(new_projection_shape).permute(0, 2, 1, 3) - return new_projection - - def forward(self, hidden_states): - residual = hidden_states - batch, channel, height, width = hidden_states.shape - - # norm - hidden_states = self.group_norm(hidden_states) - - hidden_states = hidden_states.view(batch, channel, height * width).transpose(1, 2) - - # proj to q, k, v - query_proj = self.query(hidden_states) - key_proj = self.key(hidden_states) - value_proj = self.value(hidden_states) - - # transpose - query_states = self.transpose_for_scores(query_proj) - key_states = self.transpose_for_scores(key_proj) - value_states = self.transpose_for_scores(value_proj) - - # get scores - scale = 1 / math.sqrt(math.sqrt(self.channels / self.num_heads)) - - attention_scores = torch.matmul(query_states * scale, key_states.transpose(-1, -2) * scale) - attention_probs = torch.softmax(attention_scores.float(), dim=-1).type(attention_scores.dtype) - - # compute attention output - hidden_states = torch.matmul(attention_probs, value_states) - - hidden_states = hidden_states.permute(0, 2, 1, 3).contiguous() - new_hidden_states_shape = hidden_states.size()[:-2] + (self.channels,) - hidden_states = hidden_states.view(new_hidden_states_shape) - - # compute next hidden_states - hidden_states = self.proj_attn(hidden_states) - hidden_states = hidden_states.transpose(-1, -2).reshape(batch, channel, height, width) - - # res connect and rescale - hidden_states = (hidden_states + residual) / self.rescale_output_factor - return hidden_states - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. First, project the input (aka embedding) and reshape to b, t, d. Then apply - standard transformer action. Finally, reshape to image. - - Parameters: - in_channels (:obj:`int`): The number of channels in the input and output. - n_heads (:obj:`int`): The number of heads to use for multi-head attention. - d_head (:obj:`int`): The number of channels in each head. - depth (:obj:`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (:obj:`float`, *optional*, defaults to 0.1): The dropout probability to use. - context_dim (:obj:`int`, *optional*): The number of context dimensions to use. 
- """ - - def __init__( - self, - in_channels: int, - n_heads: int, - d_head: int, - depth: int = 1, - dropout: float = 0.0, - num_groups: int = 32, - context_dim: Optional[int] = None, - ): - super().__init__() - self.n_heads = n_heads - self.d_head = d_head - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True) - - self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim) - for d in range(depth) - ] - ) - - self.proj_out = nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - - def _set_attention_slice(self, slice_size): - for block in self.transformer_blocks: - block._set_attention_slice(slice_size) - - def forward(self, hidden_states, context=None): - # note: if no context is given, cross-attention defaults to self-attention - batch, channel, height, weight = hidden_states.shape - residual = hidden_states - hidden_states = self.norm(hidden_states) - hidden_states = self.proj_in(hidden_states) - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * weight, channel) - for block in self.transformer_blocks: - hidden_states = block(hidden_states, context=context) - hidden_states = hidden_states.reshape(batch, height, weight, channel).permute(0, 3, 1, 2) - hidden_states = self.proj_out(hidden_states) - return hidden_states + residual - - -class BasicTransformerBlock(nn.Module): - r""" - A basic Transformer block. - - Parameters: - dim (:obj:`int`): The number of channels in the input and output. - n_heads (:obj:`int`): The number of heads to use for multi-head attention. - d_head (:obj:`int`): The number of channels in each head. - dropout (:obj:`float`, *optional*, defaults to 0.0): The dropout probability to use. - context_dim (:obj:`int`, *optional*): The size of the context vector for cross attention. - gated_ff (:obj:`bool`, *optional*, defaults to :obj:`False`): Whether to use a gated feed-forward network. - checkpoint (:obj:`bool`, *optional*, defaults to :obj:`False`): Whether to use checkpointing. - """ - - def __init__( - self, - dim: int, - n_heads: int, - d_head: int, - dropout=0.0, - context_dim: Optional[int] = None, - gated_ff: bool = True, - checkpoint: bool = True, - ): - super().__init__() - self.attn1 = CrossAttention( - query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout - ) # is a self-attention - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention( - query_dim=dim, context_dim=context_dim, heads=n_heads, dim_head=d_head, dropout=dropout - ) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def _set_attention_slice(self, slice_size): - self.attn1._slice_size = slice_size - self.attn2._slice_size = slice_size - - def forward(self, hidden_states, context=None): - hidden_states = hidden_states.contiguous() if hidden_states.device.type == "mps" else hidden_states - hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states - hidden_states = self.attn2(self.norm2(hidden_states), context=context) + hidden_states - hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states - return hidden_states - - -class CrossAttention(nn.Module): - r""" - A cross attention layer. 
- - Parameters: - query_dim (:obj:`int`): The number of channels in the query. - context_dim (:obj:`int`, *optional*): - The number of channels in the context. If not given, defaults to `query_dim`. - heads (:obj:`int`, *optional*, defaults to 8): The number of heads to use for multi-head attention. - dim_head (:obj:`int`, *optional*, defaults to 64): The number of channels in each head. - dropout (:obj:`float`, *optional*, defaults to 0.0): The dropout probability to use. - """ - - def __init__( - self, query_dim: int, context_dim: Optional[int] = None, heads: int = 8, dim_head: int = 64, dropout: int = 0.0 - ): - super().__init__() - inner_dim = dim_head * heads - context_dim = context_dim if context_dim is not None else query_dim - - self.scale = dim_head**-0.5 - self.heads = heads - # for slice_size > 0 the attention score computation - # is split across the batch axis to save memory - # You can set slice_size with `set_attention_slice` - self._slice_size = None - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)) - - def reshape_heads_to_batch_dim(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size * head_size, seq_len, dim // head_size) - return tensor - - def reshape_batch_dim_to_heads(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size // head_size, seq_len, dim * head_size) - return tensor - - def forward(self, hidden_states, context=None, mask=None): - batch_size, sequence_length, _ = hidden_states.shape - - query = self.to_q(hidden_states) - context = context if context is not None else hidden_states - key = self.to_k(context) - value = self.to_v(context) - - dim = query.shape[-1] - - query = self.reshape_heads_to_batch_dim(query) - key = self.reshape_heads_to_batch_dim(key) - value = self.reshape_heads_to_batch_dim(value) - - # TODO(PVP) - mask is currently never used. 
Remember to re-implement when used - - # attention, what we cannot get enough of - - if self._slice_size is None or query.shape[0] // self._slice_size == 1: - hidden_states = self._attention(query, key, value) - else: - hidden_states = self._sliced_attention(query, key, value, sequence_length, dim) - - return self.to_out(hidden_states) - - def _attention(self, query, key, value): - attention_scores = torch.matmul(query, key.transpose(-1, -2)) * self.scale - attention_probs = attention_scores.softmax(dim=-1) - # compute attention output - hidden_states = torch.matmul(attention_probs, value) - # reshape hidden_states - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - return hidden_states - - def _sliced_attention(self, query, key, value, sequence_length, dim): - batch_size_attention = query.shape[0] - hidden_states = torch.zeros( - (batch_size_attention, sequence_length, dim // self.heads), device=query.device, dtype=query.dtype - ) - slice_size = self._slice_size if self._slice_size is not None else hidden_states.shape[0] - for i in range(hidden_states.shape[0] // slice_size): - start_idx = i * slice_size - end_idx = (i + 1) * slice_size - attn_slice = torch.matmul(query[start_idx:end_idx], key[start_idx:end_idx].transpose(1, 2)) * self.scale - attn_slice = attn_slice.softmax(dim=-1) - attn_slice = torch.matmul(attn_slice, value[start_idx:end_idx]) - - hidden_states[start_idx:end_idx] = attn_slice - - # reshape hidden_states - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - return hidden_states - - -class FeedForward(nn.Module): - r""" - A feed-forward layer. - - Parameters: - dim (:obj:`int`): The number of channels in the input. - dim_out (:obj:`int`, *optional*): The number of channels in the output. If not given, defaults to `dim`. - mult (:obj:`int`, *optional*, defaults to 4): The multiplier to use for the hidden dimension. - glu (:obj:`bool`, *optional*, defaults to :obj:`False`): Whether to use GLU activation. - dropout (:obj:`float`, *optional*, defaults to 0.0): The dropout probability to use. - """ - - def __init__( - self, dim: int, dim_out: Optional[int] = None, mult: int = 4, glu: bool = False, dropout: float = 0.0 - ): - super().__init__() - inner_dim = int(dim * mult) - dim_out = dim_out if dim_out is not None else dim - project_in = GEGLU(dim, inner_dim) - - self.net = nn.Sequential(project_in, nn.Dropout(dropout), nn.Linear(inner_dim, dim_out)) - - def forward(self, hidden_states): - return self.net(hidden_states) - - -# feedforward -class GEGLU(nn.Module): - r""" - A variant of the gated linear unit activation function from https://arxiv.org/abs/2002.05202. - - Parameters: - dim_in (:obj:`int`): The number of channels in the input. - dim_out (:obj:`int`): The number of channels in the output. 
- """ - - def __init__(self, dim_in: int, dim_out: int): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, hidden_states): - hidden_states, gate = self.proj(hidden_states).chunk(2, dim=-1) - return hidden_states * F.gelu(gate) diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/blur_predicts.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/blur_predicts.py deleted file mode 100644 index a14fcc28d5a906ad3a21ab4ba482f38b4fc411cb..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/blur_predicts.py +++ /dev/null @@ -1,57 +0,0 @@ -#!/usr/bin/env python3 - -import os - -import cv2 -import numpy as np -import tqdm - -from saicinpainting.evaluation.data import PrecomputedInpaintingResultsDataset -from saicinpainting.evaluation.utils import load_yaml - - -def main(args): - config = load_yaml(args.config) - - if not args.predictdir.endswith('/'): - args.predictdir += '/' - - dataset = PrecomputedInpaintingResultsDataset(args.datadir, args.predictdir, **config.dataset_kwargs) - - os.makedirs(os.path.dirname(args.outpath), exist_ok=True) - - for img_i in tqdm.trange(len(dataset)): - pred_fname = dataset.pred_filenames[img_i] - cur_out_fname = os.path.join(args.outpath, pred_fname[len(args.predictdir):]) - os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True) - - sample = dataset[img_i] - img = sample['image'] - mask = sample['mask'] - inpainted = sample['inpainted'] - - inpainted_blurred = cv2.GaussianBlur(np.transpose(inpainted, (1, 2, 0)), - ksize=(args.k, args.k), - sigmaX=args.s, sigmaY=args.s, - borderType=cv2.BORDER_REFLECT) - - cur_res = (1 - mask) * np.transpose(img, (1, 2, 0)) + mask * inpainted_blurred - cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8') - cur_res = cv2.cvtColor(cur_res, cv2.COLOR_RGB2BGR) - cv2.imwrite(cur_out_fname, cur_res) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('config', type=str, help='Path to evaluation config') - aparser.add_argument('datadir', type=str, - help='Path to folder with images and masks (output of gen_mask_dataset.py)') - aparser.add_argument('predictdir', type=str, - help='Path to folder with predicts (e.g. predict_hifill_baseline.py)') - aparser.add_argument('outpath', type=str, help='Where to put results') - aparser.add_argument('-s', type=float, default=0.1, help='Gaussian blur sigma') - aparser.add_argument('-k', type=int, default=5, help='Kernel size in gaussian blur') - - main(aparser.parse_args()) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key Keygen.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key Keygen.md deleted file mode 100644 index fea7a2fd292ad45180482ed8a9b86fecb397e02c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key Keygen.md +++ /dev/null @@ -1,123 +0,0 @@ -
      -

      Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen: A Complete Guide

      -

If you are looking for professional photo editing software that can handle raw files, color grading, tethered shooting, and more, you might have heard of Capture One Pro. This software is one of the most popular and powerful tools for photographers and creative professionals. However, it also comes with a hefty price tag that might not fit your budget.

      -




      -

      That's why some people resort to using cracks, serial keys, or keygens to get access to the full version of Capture One Pro without paying anything. But is this a good idea? What are the risks and benefits of using a crack for Capture One Pro? How can you download and install Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen safely and easily?

      -

In this article, we will answer all these questions and more. We will explain what Capture One Pro is, what its features and benefits are, why you might want a crack for it, how to download and install it, and what the pros and cons of using it are. By the end of this article, you will have a complete guide to using Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen.

      -

      What is Capture One Pro?

      -

Capture One Pro is photo editing software developed by Phase One, a Danish company that specializes in high-end digital cameras and imaging solutions. It was first released in 2003 as a tethered-shooting tool for Phase One cameras, but it has since evolved into a comprehensive application for editing raw files, color grading, organizing, exporting, and more.

      -

Capture One Pro supports over 500 camera models from various brands, including Canon, Nikon, Sony, Fujifilm, Olympus, Panasonic, Leica, Hasselblad, and Pentax. It also offers advanced features such as layers, masks, curves, levels, histograms, sharpening, noise reduction, lens correction, perspective and keystone correction, HDR tools, styles, presets, and plugins.

      -

      Capture One Pro is available for both Windows and Mac operating systems. It can be purchased as a single license for $299, or as a subscription for $20 per month or $180 per year. There is also a free trial version that lasts for 30 days and has all the features of the full version.

      What are the features and benefits of Capture One Pro?

      -

      Capture One Pro is designed to provide the best image quality and performance for professional photographers and creative professionals. Here are some of the features and benefits of Capture One Pro:

      -
        -
• Raw file editing: Capture One Pro can handle raw files from various camera models and formats, such as CR2, NEF, ARW, RAF, ORF, RW2, DNG, and more. Raw files store minimally processed sensor data and contain far more information than JPEG or TIFF files. By editing raw files, you have more control and flexibility over the exposure, white balance, color, contrast, sharpness, noise, and other aspects of your images. You can also recover more detail from the highlights and shadows, and avoid artifacts such as banding or posterization.
      • -
      • Color grading: Capture One Pro offers a powerful and intuitive color grading system that allows you to adjust the hue, saturation, luminance, and color balance of your images. You can also use tools such as color editor, color picker, color wheel, color balance, color curves, and more to create custom color profiles, styles, presets, or look-up tables (LUTs) for your images. You can also apply color grading to specific areas of your images using layers and masks.
      • -
      • Tethered shooting: Capture One Pro enables you to connect your camera to your computer via a USB cable or a wireless connection and control it remotely from the software. You can adjust the camera settings, trigger the shutter, review the images on your computer screen, apply adjustments or presets, and save the images directly to your computer or an external drive. Tethered shooting can help you save time and space, improve your workflow, and achieve better results.
      • -
• Organizing: Capture One Pro helps you organize your images in a convenient and efficient way. You can import images from your camera, memory card, hard drive, or cloud storage and sort them by criteria such as date, name, rating, color label, keyword, metadata, folder, album, collection, or smart album. You can also use filters, the search bar, ratings, color labels, flags, and annotations to manage and edit your library.
      • -
• Exporting: Capture One Pro allows you to export your images in different ways depending on your needs and preferences. You can choose from various output formats such as JPEG, TIFF, PNG, PSD, PDF, DNG, or EIP, and adjust output settings such as size, resolution, quality, compression, color space, ICC profile, sharpening, and watermarking. Custom export recipes and presets help you save time and ensure consistency, and you can send your images to destinations such as email, the web, or social media.
      • -
      -

      Why do you need a crack for Capture One Pro?

      -

As you can see, Capture One Pro is a feature-rich and versatile application that can help you enhance your photography and creativity. However, it also has a high price that might not be affordable for everyone. A single license for Capture One Pro costs $299, while a subscription costs $20 per month or $180 per year. If you want to use the software on more than one computer, you will need to buy additional licenses or subscriptions.

      -

      -

      That's why some people look for alternative ways to get access to the full version of Capture One Pro without paying anything. One of these ways is using a crack, serial key, or keygen. A crack is a modified version of the original software that bypasses the activation or registration process. A serial key is a code that is used to activate the software. A keygen is a program that generates serial keys for the software.

      -

      By using a crack, serial key, or keygen for Capture One Pro, you can enjoy all the features and updates of the software for free. You don't need to pay for a subscription or license fee. You don't need to worry about expiration dates or renewal reminders. You don't need to enter any personal or payment information. You just need to download and install the crack file and use the serial key to activate the software.

      -

      How to download and install Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen?

      -

      If you want to try using a crack for Capture One Pro, you will need to download and install Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen. This is a crack file that was uploaded by CrackzSoft, a website that provides cracks, serial keys, and keygens for various software. Here are the steps to download and install Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen:

      -

      Step 1: Download the crack file from the link provided

      -

      The first step is to download the crack file from the link provided by CrackzSoft. You can find the link on their website or on other platforms such as torrent sites, file-sharing sites, or forums. The link will direct you to a download page where you will need to complete some verification steps such as captcha, surveys, or offers. After that, you will be able to download the crack file as a ZIP or RAR archive.

      -

      The crack file is about 300 MB in size and contains the following files:

      -
        -
      • Capture One Pro 11.2.0 Setup.exe: This is the setup file for installing Capture One Pro 11.2.0 on your computer.
      • -
      • Crack folder: This folder contains the serial key and the crack file for activating Capture One Pro 11.2.0.
      • -
      • Readme.txt: This is a text file that contains the instructions and information about the crack file.
      • -
      -

      Make sure to download the crack file from a reliable and trusted source. Avoid downloading from suspicious or unknown links that might contain malware, viruses, or spyware. Also, make sure to scan the crack file with an antivirus or anti-malware software before opening it.

      -

      Step 2: Extract the crack file using WinRAR or any other software

      -

The second step is to extract the crack file using WinRAR or any other software that can handle ZIP or RAR archives. You can download WinRAR from the official WinRAR website. To extract the crack file, follow these steps:

      -
        -
      1. Right-click on the crack file and select "Extract Here" or "Extract to Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen".
      2. -
      3. Enter the password if prompted. The password is "crackzsoft.com" without quotes.
      4. -
      5. Wait for the extraction process to finish.
      6. -
      7. You will see a new folder named "Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen" that contains the extracted files.
      8. -
      -

      Make sure to extract the crack file to a safe and accessible location on your computer. Avoid extracting it to system folders or protected folders that might cause permission issues.

      -

      Step 3: Run the setup file and follow the instructions

      -

      The third step is to run the setup file and follow the instructions to install Capture One Pro 11.2.0 on your computer. To run the setup file, follow these steps:

      -
        -
      1. Double-click on the "Capture One Pro 11.2.0 Setup.exe" file in the extracted folder.
      2. -
      3. A window will pop up asking you to choose your language. Select your preferred language and click "OK".
      4. -
      5. A welcome screen will appear. Click "Next".
      6. -
      7. A license agreement screen will appear. Read the terms and conditions and check the box that says "I accept the terms in the license agreement". Click "Next".
      8. -
      9. A destination folder screen will appear. Choose where you want to install Capture One Pro 11.2.0 on your computer. You can use the default location or browse for a different one. Click "Next".
      10. -
      11. A start menu folder screen will appear. Choose whether you want to create a start menu folder for Capture One Pro 11.2.0 or not. You can use the default name or type a different one. Click "Next".
      12. -
      13. A ready to install screen will appear. Review the installation settings and click "Install".
      14. -
      15. Wait for the installation process to finish. It might take a few minutes depending on your computer speed and performance.
      16. -
17. A completion screen will appear. Click "Finish".
      18. -
      -

      You have successfully installed Capture One Pro 11.2.0 on your computer. However, you still need to activate it using the crack file and the serial key.

      -

      Step 4: Copy and paste the serial key from the crack folder to activate the software

      -

      The fourth step is to copy and paste the serial key from the crack folder to activate Capture One Pro 11.2.0 on your computer. To do this, follow these steps:

      -
        -
      1. Open the "Crack" folder in the extracted folder.
      2. -
      3. Open the "Serial Key.txt" file in a text editor such as Notepad.
      4. -
      5. Copy the serial key that is written in the file. It should look something like this: XXXX-XXXX-XXXX-XXXX-XXXX-XXXX.
      6. -
      7. Open Capture One Pro 11.2.0 on your computer.
      8. -
9. An activation screen will appear. Paste the serial key that you copied in the field that says "Enter your license code". Click "Activate".
      10. -
      11. A confirmation screen will appear. Click "OK".
      12. -
      -

      You have successfully activated Capture One Pro 11.2.0 on your computer using the serial key from the crack file.

      -

      Step 5: Enjoy the full version of Capture One Pro

      -

      The fifth and final step is to enjoy the full version of Capture One Pro on your computer. You can now use all the features and updates of the software for free. You can edit your raw files, color grade your images, shoot tethered, organize your photos, export your work, and more.

      -

      However, you should also be aware of the pros and cons of using Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen, as we will discuss in the next section.

      -

      What are the pros and cons of using Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen?

      -

      Using a crack for Capture One Pro might seem like a good idea, but it also comes with some risks and drawbacks that you should consider before doing so. Here are some of the pros and cons of using Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen:

      -

      Pros

      -
        -
• Free access to all the features and updates of Capture One Pro: By using a crack for Capture One Pro, you can use every feature of the software and receive its updates without any restrictions or time limits.
      • -
      • No need to pay for a subscription or license fee: By using a crack for Capture One Pro, you don't need to pay for a subscription or license fee, which can save you a lot of money in the long run.
      • -
      • Easy to use and install: By using a crack for Capture One Pro, you don't need to go through any complicated or tedious activation or registration process. You just need to download and install the crack file and use the serial key to activate the software.
      • -
      -

      Cons

      -
        -
      • Risk of malware, viruses, or spyware infection: By using a crack for Capture One Pro, you expose your computer to potential malware, viruses, or spyware infection that might harm your system or data. The crack file might contain malicious code that can infect your computer or steal your personal information.
      • -
      • Legal and ethical issues of using pirated software: By using a crack for Capture One Pro, you violate the terms and conditions of the software and the intellectual property rights of the developer. You might face legal consequences or penalties for using pirated software. You also disrespect the work and effort of the developer who created the software.
      • -
      • Possible compatibility and performance issues with other software or devices: By using a crack for Capture One Pro, you might encounter compatibility and performance issues with other software or devices that you use. The crack file might not be compatible with the latest updates or versions of Capture One Pro or other software. The crack file might also affect the stability and speed of your computer or cause errors or crashes.
      • -
      -

Conclusion

      -

In conclusion, Capture One Pro is professional photo editing software that can help you edit raw files, color grade your images, shoot tethered, organize your photos, export your work, and more. However, it also has a high price that might not be affordable for everyone. That's why some people use a crack for Capture One Pro to get access to the full version of the software for free.

      -

      However, using a crack for Capture One Pro also has some risks and drawbacks that you should consider before doing so. You might expose your computer to malware, viruses, or spyware infection. You might violate the terms and conditions of the software and the intellectual property rights of the developer. You might encounter compatibility and performance issues with other software or devices.

      Therefore, we recommend that you use Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen at your own risk and discretion. We do not endorse or support the use of pirated software. We advise that you purchase a legitimate license or subscription for Capture One Pro from the official website or authorized resellers.

      If you want to learn more about Capture One Pro and how to use it effectively, you can check out our other articles on our website. You can also subscribe to our newsletter to get the latest tips and tricks on photography and photo editing.

FAQs

      Here are some common questions and answers related to the topic of Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen:

1. Q: Is Capture One Pro better than Adobe Lightroom?

   A: Capture One Pro and Adobe Lightroom are both popular and powerful photo editing software that have their own strengths and weaknesses. Some of the advantages of Capture One Pro over Lightroom are:

   • Capture One Pro has a more advanced and accurate color grading system.
   • Capture One Pro has a faster and smoother performance and workflow.
   • Capture One Pro has more features and tools for tethered shooting and raw file editing.

   Some of the advantages of Lightroom over Capture One Pro are:

   • Lightroom has a more user-friendly and intuitive interface and layout.
   • Lightroom has better integration with other Adobe products such as Photoshop, Bridge, or Premiere.
   • Lightroom has a larger and more diverse community and support network.

   Ultimately, the choice between Capture One Pro and Lightroom depends on your personal preference, budget, and needs.

2. Q: Can I use Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen on multiple computers?

   A: Technically yes, as long as you have the crack file and the serial key. However, you should be aware that this is illegal and unethical: you are violating the terms and conditions of the software and the intellectual property rights of the developer, and you might face legal consequences or penalties for using pirated software on multiple computers.

3. Q: How can I update Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen?

   A: You can update it by downloading and installing the latest version of the crack file from CrackzSoft or other sources. Be careful when updating cracked software, though: some updates might not be compatible with your crack file or serial key, and you might lose some features or functions of the software after updating.

4. Q: How can I uninstall Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen?

   A: Follow these steps (a script for locating the program's uninstall entry is sketched after this list):

   1. Go to the Control Panel on your computer and select "Programs and Features".
   2. Find and select "Capture One Pro 11.2.0" from the list of installed programs and click "Uninstall".
   3. Follow the instructions on the screen to complete the uninstallation process.
   4. Delete the crack file and the serial key from your computer.

   You have successfully uninstalled Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen from your computer.

5. Q: Is there a safer and legal way to use Capture One Pro for free?

   A: Yes. You can download the free trial version of Capture One Pro from the official website or authorized resellers. The trial lasts for 30 days and has all the features of the full version, so you can test and evaluate the software before deciding whether to buy it.
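For reference, here is a minimal, Windows-only Python sketch that lists installed programs from the standard uninstall registry key, so you can confirm the entry for Capture One Pro before removing it. The registry path is the standard one on Windows; whether a given installer actually registers itself there is an assumption, and 32-bit entries on a 64-bit system live under a separate WOW6432Node key.

```python
import winreg

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def installed_programs():
    """Yield (display_name, uninstall_command) pairs from the registry."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
        subkey_count = winreg.QueryInfoKey(root)[0]
        for i in range(subkey_count):
            try:
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as sub:
                    name = winreg.QueryValueEx(sub, "DisplayName")[0]
                    command = winreg.QueryValueEx(sub, "UninstallString")[0]
                    yield name, command
            except OSError:
                continue  # skip entries without a display name or command

for name, command in installed_programs():
    if "Capture One" in name:
        print(name, "->", command)
```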

      We hope that this article has helped you understand how to use Capture One Pro 11.2.0 Crack - [CrackzSoft] Serial Key keygen. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading and happy editing!

      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ciel-Devis-Facture-2013-Keygen-Free.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ciel-Devis-Facture-2013-Keygen-Free.md deleted file mode 100644 index 9cedaffa5bbb92f943125bccd6231a6d8e592d83..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ciel-Devis-Facture-2013-Keygen-Free.md +++ /dev/null @@ -1,89 +0,0 @@ -## Ciel Devis Facture 2013 Keygen - - - -**Ciel Devis Facture 2013 Keygen >>> [https://jinyurl.com/2tx26K](https://jinyurl.com/2tx26K)** - - - - Here is a possible title and article with html formatting for the keyword "Ciel Devis Facture 2013 Keygen": - -# How to activate Ciel Devis Facture 2013 with a keygen - - - -Ciel Devis Facture 2013 is a software that allows you to create invoices and quotes easily and professionally. It is compatible with the anti-fraud law on VAT and offers various features such as customer and product management, document templates, payment tracking, etc. - - - -If you want to use Ciel Devis Facture 2013 without paying for a license, you might be tempted to use a keygen, which is a program that generates serial numbers or activation codes for software. However, this is not recommended for several reasons: - - - -- Using a keygen is illegal and can expose you to legal consequences. - -- Using a keygen can harm your computer and compromise your security, as keygens often contain viruses, malware, spyware, or ransomware. - -- Using a keygen can prevent you from receiving updates, support, or assistance from Ciel. - - - -Therefore, the best way to activate Ciel Devis Facture 2013 is to buy a legitimate license from Ciel's official website[^3^] or from an authorized reseller. You will receive a unique activation code that you can enter in the software to unlock all its features and benefits. - - - -If you have any questions or issues regarding Ciel Devis Facture 2013, you can contact Ciel's customer service by phone, email, or chat. They will be happy to help you and provide you with the best solutions for your needs. - -Here is a possible continuation of the article: - -In this article, we will show you how to install and activate Ciel Devis Facture 2013 on your computer. Follow these steps: - - - -1. Download the installation file from Ciel's website or from the link provided by your reseller. - -2. Run the installation file and follow the instructions on the screen. You will need to accept the terms and conditions and choose a destination folder for the software. - -3. Once the installation is complete, launch Ciel Devis Facture 2013 from your desktop or start menu. - -4. Enter your activation code in the pop-up window that appears. You can find your activation code in your email confirmation or on your invoice. If you don't have an activation code, you can request a trial version for 30 days. - -5. Click on "Activate" and wait for the confirmation message. You can now use Ciel Devis Facture 2013 with all its features and functions. - - - -We hope this article was helpful and informative. If you have any feedback or suggestions, please let us know in the comments section below. - -Here are a few more paragraphs for the article: - -Now that you have activated Ciel Devis Facture 2013, you can start creating your invoices and quotes in a few clicks. 
Here are some tips to help you get started: - - - -- To create a new document, click on the "New" button on the toolbar and choose the type of document you want to create: invoice, quote, delivery note, etc. - -- To add a customer or a product to your document, click on the "Add" button on the toolbar and select the option you want: customer, product, service, etc. You can also use the search function to find an existing customer or product in your database. - -- To edit the details of your document, such as the date, the reference number, the payment terms, etc., click on the "Edit" button on the toolbar and make the changes you want. - -- To print or send your document by email, click on the "Print" or "Email" button on the toolbar and choose the format and the destination you want. - - - -Ciel Devis Facture 2013 also allows you to manage your documents and your customers easily. You can access various features such as: - - - -- The dashboard, where you can see an overview of your activity, your turnover, your unpaid invoices, etc. - -- The history, where you can see all the documents you have created, modified, or deleted. - -- The reports, where you can generate various statistics and graphs based on your data. - -- The settings, where you can customize your preferences, your templates, your logo, etc. - - - -For more information and guidance on how to use Ciel Devis Facture 2013, you can consult the user manual or the online help available from the software. You can also watch some video tutorials on Ciel's website or YouTube channel. - - dfd1c89656 \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Tango Charlie Full Movie Download In Hindi 720p TOP.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Tango Charlie Full Movie Download In Hindi 720p TOP.md deleted file mode 100644 index 7c495fd3aa0ab25d9bd3caad01ab60860155b5ab..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Tango Charlie Full Movie Download In Hindi 720p TOP.md +++ /dev/null @@ -1,19 +0,0 @@ -

      How to Watch Tango Charlie Full Movie in Hindi 720p Online

      Tango Charlie is a 2005 Hindi action film directed by Mani Shankar and starring Ajay Devgn, Bobby Deol, Sanjay Dutt, Suniel Shetty and Tanisha Mukherjee. The film follows the life of a BSF soldier who experiences various wars and conflicts in different parts of India. The film was praised for its realistic portrayal of war and its anti-war message.

      If you want to watch Tango Charlie full movie in Hindi 720p online, you have a few options. Here are some of them:

      Tango Charlie full movie download in hindi 720p


      DOWNLOADhttps://urlcod.com/2uIch4



• Disney+ Hotstar: This is the official streaming platform for Tango Charlie. You can watch the movie with a subscription or rent it for a limited time, and you can also download it for offline viewing. Disney+ Hotstar offers high-quality video and audio, as well as subtitles and other features, and is available on your web browser, mobile app, smart TV or other devices[^1^].
• YouTube: You can also watch Tango Charlie in Hindi 720p on YouTube. The movie is uploaded by Shemaroo Movies, a verified channel that offers legal and licensed content. You can watch it for free with ads, buy or rent it without ads, or download it for offline viewing. YouTube offers good video and audio quality, subtitles and other features, and is available on your web browser, mobile app, smart TV or other devices[^2^] [^3^].
• Other websites: Some other websites claim to offer Tango Charlie full movie download in Hindi 720p for free. However, these websites are illegal and unsafe, as they may contain viruses, malware, pop-ups, ads or other harmful content. They may also violate copyright laws and infringe on the rights of the filmmakers and actors. We do not recommend using these websites to watch Tango Charlie or any other movie.

      We hope this article helps you find the best way to watch Tango Charlie full movie in Hindi 720p online. Enjoy the movie and share your thoughts with us!


Tango Charlie is not just an action film but also a drama that explores the psychological and emotional impact of war on soldiers and their families. The film shows how its protagonist, Tarun Chauhan (Bobby Deol), transforms from a naive and idealistic young man into a disillusioned and traumatized veteran as he witnesses the horrors of war in different regions of India. He faces insurgency, communal riots, Naxalism, and terrorism, as well as the loss of his friends and loved ones. He also falls in love with a nurse named Shyamoli (Tanisha Mukherjee), who helps him cope with his post-traumatic stress disorder.

      The film received positive reviews from critics and audiences alike, who praised its realistic depiction of war and its anti-war message. The film was also appreciated for its performances, especially by Ajay Devgn, who played the role of Tarun's mentor and friend Mohammed Ali. Devgn portrayed a Muslim soldier who fights for his country despite facing discrimination and prejudice from some of his fellow soldiers. He also sacrifices his life to save Tarun from a landmine. Devgn's performance was lauded as one of his best and earned him several awards and nominations.

      Tango Charlie is a film that makes you think about the meaning and cost of war, and how it affects not only the soldiers, but also their families and society. The film also makes you appreciate the bravery and sacrifice of the Indian armed forces, who risk their lives to protect their nation. Tango Charlie is a film that deserves to be watched by everyone who loves cinema and humanity.

      \ No newline at end of file diff --git a/spaces/niro-private/chatCSV/src/modules/utils.py b/spaces/niro-private/chatCSV/src/modules/utils.py deleted file mode 100644 index e9e9dfb6ddedfd526d4ae31ad260a62ac3e30db8..0000000000000000000000000000000000000000 --- a/spaces/niro-private/chatCSV/src/modules/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import pandas as pd -import streamlit as st -from io import StringIO -# import json -# from json2table import convert - -from src.modules.chatbot import Chatbot_txt, Chatbot, Chatbot_ledger -from src.modules.embedder import Embedder_txt, Embedder - - -def ledger_to_dataframe(df_d): - # st.write(ledger_csv_path) - # df_d = pd.read_csv(ledger_csv_path) - data_string = df_d.iloc[0]['fullLedger'][1:-1] - temp = data_string - temp = temp.replace("{\"date\":{\"$", '\"').replace("}", "") - result = dict((a.strip(), b.strip()) - for a, b in (element.split(':', 1) - for element in temp.split(','))) - columns = [i.replace("\"", '') for i in list(result.keys())] - - row_count = 0 - for idx, element in enumerate(temp.split(',')): - q = element.split(':') - command = q[0] - if command == '"date"': - row_count = row_count + 1 - - out = pd.DataFrame(columns=columns, index=range(row_count)) - - row = -1 - - for idx, element in enumerate(temp.split(',')): - q = element.split(':') - command = q[0].replace("\"", '') - if len(q) > 2: - value = ''.join(q[1:]) - else: - value = q[1] - try: - value = float(value) - except: - value = value - - if command == 'date': - # print(row, command, value) - row = row + 1 - out.iloc[row][command] = value - out.index.name = 'transaction_id' - return out - - -class Utilities: - - @staticmethod - def load_api_key(): - """ - Loads the OpenAI API key from the .env file or from the user's input - and returns it - """ - if os.path.exists(".env") and os.environ.get("OPENAI_API_KEY") is not None: - user_api_key = os.environ["OPENAI_API_KEY"] - st.sidebar.success("API key loaded from .env", icon="🚀") - else: - user_api_key = st.sidebar.text_input( - label="#### Your OpenAI API key 👇", placeholder="Paste your openAI API key, sk-", type="password" - ) - if user_api_key: - st.sidebar.success("API key loaded", icon="🚀") - return user_api_key - - @staticmethod - def handle_upload_txt(): - """ - Handles the file upload and displays the uploaded file - """ - uploaded_file = st.sidebar.file_uploader("upload", type="txt", label_visibility="collapsed") - if uploaded_file is not None: - - def show_user_file(uploaded_file): - file_container = st.expander("Your TXT file :") - uploaded_file_content = StringIO(uploaded_file.getvalue().decode("utf-8")) - string_data = uploaded_file_content.read() - file_container.write(string_data) - - try: - dict1 = {} - dict1 = json.loads(string_data) - st.write(dict1) - # creating dictionary - # st.write(string_data) - # for line in string_data: - # st.write(line) - # with open(uploaded_file) as fh: - # - # a = 1 - # for line in fh: - # command, description = line.strip().split(None, 1) - # dict1[command] = description.strip() - # file_container.write(dict1) - - # # creating json file - # # the JSON file is named as test1 - # out_file = open("test1.json", "w") - # json.dump(dict1, out_file, indent=4, sort_keys=False) - # out_file.close() - # # - # # # first load the json file - # file_path = 'test1.json' - # with open(file_path, 'r') as f: - # data = json.load(f) - # df = pd.DataFrame(dict1) - df = pd.json_normalize(dict1, record_path=['date']) - - st.DataFrame(df) - # build_direction = "TOP_TO_BOTTOM" - # 
table_attributes = {"style": "width:100%", "class": "table table-striped"} - # html = convert(dict1, build_direction=build_direction, table_attributes=table_attributes) - # st.markdown(html) - except: - print('not json') - st.error('not a json') - - show_user_file(uploaded_file) - else: - st.sidebar.info( - "👆 Upload your TXT file to get started, " - # "sample for try : [fishfry-locations.csv](https://drive.google.com/file/d/1TpP3thVnTcDO1_lGSh99EKH2iF3GDE7_/view?usp=sharing)" - ) - st.session_state["reset_chat"] = True - return uploaded_file - - @staticmethod - def handle_upload(): - """ - Handles the file upload and displays the uploaded file - """ - uploaded_file = st.sidebar.file_uploader("upload", type="csv", label_visibility="collapsed") - if uploaded_file is not None: - - def show_user_file(uploaded_file): - file_container = st.expander("Your CSV file :") - shows = pd.read_csv(uploaded_file) - uploaded_file.seek(0) - file_container.write(shows) - - show_user_file(uploaded_file) - else: - st.sidebar.info( - "👆 Upload your CSV file to get started, " - "sample for try : [fishfry-locations.csv](https://drive.google.com/file/d/1TpP3thVnTcDO1_lGSh99EKH2iF3GDE7_/view?usp=sharing)" - ) - st.session_state["reset_chat"] = True - return uploaded_file - - @staticmethod - def handle_upload_ledger(): - """ - Handles the file upload and displays the uploaded file - """ - uploaded_file = st.sidebar.file_uploader("upload", type="csv", label_visibility="collapsed") - if uploaded_file is not None: - - def show_user_file(uploaded_file): - file_container = st.expander("Your Ledger :") - shows = pd.read_csv(uploaded_file) - out = ledger_to_dataframe(shows) - out.to_csv('ledger.csv') - uploaded_file.seek(0) - file_container.write(out) - - show_user_file(uploaded_file) - else: - st.sidebar.info( - "👆 Upload your CSV file to get started, " - "sample for try : [fishfry-locations.csv](https://drive.google.com/file/d/1TpP3thVnTcDO1_lGSh99EKH2iF3GDE7_/view?usp=sharing)" - ) - st.session_state["reset_chat"] = True - return uploaded_file - - @staticmethod - def setup_chatbot_txt(uploaded_file, model, temperature): - """ - Sets up the chatbot with the uploaded file, model, and temperature - """ - embeds = Embedder_txt() - with st.spinner("Processing..."): - uploaded_file.seek(0) - file = uploaded_file.read() - vectors = embeds.getDocEmbeds(file, uploaded_file.name) - chatbot = Chatbot(model, temperature, vectors) - st.session_state["ready"] = True - return chatbot - - @staticmethod - def setup_chatbot(uploaded_file, model, temperature): - """ - Sets up the chatbot with the uploaded file, model, and temperature - """ - embeds = Embedder_txt() - with st.spinner("Processing..."): - uploaded_file.seek(0) - file = uploaded_file.read() - vectors = embeds.getDocEmbeds(file, uploaded_file.name) - chatbot = Chatbot(model, temperature, vectors) - st.session_state["ready"] = True - return chatbot - - @staticmethod - def setup_chatbot_ledger(uploaded_file, model, temperature): - """ - Sets up the chatbot with the uploaded file, model, and temperature - """ - # embeds = Embedder() - with st.spinner("Processing..."): - uploaded_file.seek(0) - shows = pd.read_csv(uploaded_file) - out = ledger_to_dataframe(shows) - out.to_csv('ledger.csv') - # file = uploaded_file.read() - # vectors = embeds.getDocEmbeds(file, uploaded_file.name) - chatbot = Chatbot_ledger(model, temperature, 'ledger.csv') - st.session_state["ready"] = True - return chatbot diff --git a/spaces/normster/llm_rules/README.md b/spaces/normster/llm_rules/README.md 
deleted file mode 100644 index 5e3f516f742cce775f868adf745c1a88be1eb6e1..0000000000000000000000000000000000000000 --- a/spaces/normster/llm_rules/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: "RuLES: Rule-following Language Evaluation Scenarios" -emoji: ⚖️ -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/openkg/llm_leaderboard/src/assets/text_content.py b/spaces/openkg/llm_leaderboard/src/assets/text_content.py deleted file mode 100644 index c8a00d2b35a1b34f96cb3f32fdf57226ae8ea545..0000000000000000000000000000000000000000 --- a/spaces/openkg/llm_leaderboard/src/assets/text_content.py +++ /dev/null @@ -1,49 +0,0 @@ - - -TITLE = """

      KG LLM Leaderboard

      """ - -INTRODUCTION_TEXT = f""" -🐨 KG LLM Leaderboard aims to track, rank, and evaluate the performance of released Large Language Models on traditional KBQA/KGQA, KGC(Knowledge Graph Construction/Reasoning), Model Edit datasets. - -The data on this page is sourced from a research paper. If you intend to use the data from this page, please remember to cite the source in the last part of the page. We compare the current SOTA traditional KBQA models (fine-tuned (FT) and zero-shot (ZS)). - -""" - -LLM_BENCHMARKS_TEXT = f""" -ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge. - -Therefore, there is growing interest in exploring whether ChatGPT can replace traditional knowledge-based question answering (KBQA) models. - -Although there have been some works analyzing the question answering performance of ChatGPT, there is still a lack of large-scale, comprehensive testing of various types of complex questions to analyze the limitations of the model. - -In this paper, we present a framework that follows the black-box testing specifications of CheckList proposed by Microsoft. - -We evaluate ChatGPT and its family of LLMs on eight real-world KB-based complex question answering datasets, which include six English datasets and two multilingual datasets. - -The total number of test cases is approximately 190,000. - -""" - - - -CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results" -CITATION_BUTTON_TEXT = r""" -@article{tan2023evaluation, - title={Evaluation of ChatGPT as a question answering system for answering complex questions}, - author={Yiming Tan and Dehai Min and Yu Li and Wenbo Li and Nan Hu and Yongrui Chen and Guilin Qi}, - journal={arXiv preprint arXiv:2303.07992}, - year={2023} -} -@article{gui2023InstructIE, - author = {Honghao Gui and Jintian Zhang and Hongbin Ye and Ningyu Zhang}, - title = {InstructIE: {A} Chinese Instruction-based Information Extraction Dataset}, - journal = {arXiv preprint arXiv:2303.07992}, - year = {2023} -} -@article{yao2023edit, - author = {Yunzhi Yao and Peng Wang and Bozhong Tian and Siyuan Cheng and Zhoubo Li and Shumin Deng and Huajun Chen and Ningyu Zhang}, - title = {Editing Large Language Models: Problems, Methods, and Opportunities}, - journal = {arXiv preprint arXiv:2305.13172}, - year = {2023} -} -""" \ No newline at end of file diff --git a/spaces/p1atdev/Anime-to-Sketch/anime2sketch/model.py b/spaces/p1atdev/Anime-to-Sketch/anime2sketch/model.py deleted file mode 100644 index b45c654ac9235de1c42e6d3c88f8d07bdbaa3f4b..0000000000000000000000000000000000000000 --- a/spaces/p1atdev/Anime-to-Sketch/anime2sketch/model.py +++ /dev/null @@ -1,256 +0,0 @@ -import os -from PIL import Image -import torchvision.transforms as transforms - -try: - from torchvision.transforms import InterpolationMode - - bic = InterpolationMode.BICUBIC -except ImportError: - bic = Image.BICUBIC - -import numpy as np -import torch -import torch.nn as nn -import functools - -IMG_EXTENSIONS = [".jpg", ".jpeg", ".png", ".ppm", ".bmp", ".webp"] - - -class UnetGenerator(nn.Module): - """Create a Unet-based generator""" - - def __init__( - self, - input_nc, - output_nc, - num_downs, - ngf=64, - norm_layer=nn.BatchNorm2d, - use_dropout=False, - ): - """Construct a Unet generator - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - num_downs (int) -- the number of 
downsamplings in UNet. For example, # if |num_downs| == 7, - image of size 128x128 will become of size 1x1 # at the bottleneck - ngf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - We construct the U-Net from the innermost layer to the outermost layer. - It is a recursive process. - """ - super(UnetGenerator, self).__init__() - # construct unet structure - unet_block = UnetSkipConnectionBlock( - ngf * 8, - ngf * 8, - input_nc=None, - submodule=None, - norm_layer=norm_layer, - innermost=True, - ) # add the innermost layer - for _ in range(num_downs - 5): # add intermediate layers with ngf * 8 filters - unet_block = UnetSkipConnectionBlock( - ngf * 8, - ngf * 8, - input_nc=None, - submodule=unet_block, - norm_layer=norm_layer, - use_dropout=use_dropout, - ) - # gradually reduce the number of filters from ngf * 8 to ngf - unet_block = UnetSkipConnectionBlock( - ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer - ) - unet_block = UnetSkipConnectionBlock( - ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer - ) - unet_block = UnetSkipConnectionBlock( - ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer - ) - self.model = UnetSkipConnectionBlock( - output_nc, - ngf, - input_nc=input_nc, - submodule=unet_block, - outermost=True, - norm_layer=norm_layer, - ) # add the outermost layer - - def forward(self, input): - """Standard forward""" - return self.model(input) - - -class UnetSkipConnectionBlock(nn.Module): - """Defines the Unet submodule with skip connection. - X -------------------identity---------------------- - |-- downsampling -- |submodule| -- upsampling --| - """ - - def __init__( - self, - outer_nc, - inner_nc, - input_nc=None, - submodule=None, - outermost=False, - innermost=False, - norm_layer=nn.BatchNorm2d, - use_dropout=False, - ): - """Construct a Unet submodule with skip connections. - Parameters: - outer_nc (int) -- the number of filters in the outer conv layer - inner_nc (int) -- the number of filters in the inner conv layer - input_nc (int) -- the number of channels in input images/features - submodule (UnetSkipConnectionBlock) -- previously defined submodules - outermost (bool) -- if this module is the outermost module - innermost (bool) -- if this module is the innermost module - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers. 
- """ - super(UnetSkipConnectionBlock, self).__init__() - self.outermost = outermost - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - if input_nc is None: - input_nc = outer_nc - downconv = nn.Conv2d( - input_nc, inner_nc, kernel_size=4, stride=2, padding=1, bias=use_bias - ) - downrelu = nn.LeakyReLU(0.2, True) - downnorm = norm_layer(inner_nc) - uprelu = nn.ReLU(True) - upnorm = norm_layer(outer_nc) - - if outermost: - upconv = nn.ConvTranspose2d( - inner_nc * 2, outer_nc, kernel_size=4, stride=2, padding=1 - ) - down = [downconv] - up = [uprelu, upconv, nn.Tanh()] - model = down + [submodule] + up - elif innermost: - upconv = nn.ConvTranspose2d( - inner_nc, outer_nc, kernel_size=4, stride=2, padding=1, bias=use_bias - ) - down = [downrelu, downconv] - up = [uprelu, upconv, upnorm] - model = down + up - else: - upconv = nn.ConvTranspose2d( - inner_nc * 2, - outer_nc, - kernel_size=4, - stride=2, - padding=1, - bias=use_bias, - ) - down = [downrelu, downconv, downnorm] - up = [uprelu, upconv, upnorm] - - if use_dropout: - model = down + [submodule] + up + [nn.Dropout(0.5)] - else: - model = down + [submodule] + up - - self.model = nn.Sequential(*model) - - def forward(self, x): - if self.outermost: - return self.model(x) - else: # add skip connections - return torch.cat([x, self.model(x)], 1) - - -class Anime2Sketch: - def __init__( - self, model_path: str = "./models/netG.pth", device: str = "cpu" - ) -> None: - norm_layer = functools.partial( - nn.InstanceNorm2d, affine=False, track_running_stats=False - ) - net = UnetGenerator(3, 1, 8, 64, norm_layer=norm_layer, use_dropout=False) - ckpt = torch.load(model_path) - - for key in list(ckpt.keys()): - if "module." in key: - ckpt[key.replace("module.", "")] = ckpt[key].half() - del ckpt[key] - - net.load_state_dict(ckpt) - - self.model = net - - if torch.cuda.is_available() and device == "cuda": - self.device = "cuda" - self.model.to(device) - else: - self.device = "cpu" - self.model.to("cpu") - - def predict(self, image: Image.Image, load_size: int = 512) -> Image: - try: - aus_resize = None - if load_size > 0: - aus_resize = image.size - transform = self.get_transform(load_size=load_size) - image = transform(image) - img = image.unsqueeze(0) - except: - raise Exception("Error in reading image {}".format(image.filename)) - - aus_tensor = self.model(img.to(self.device)) - aus_img = self.tensor_to_img(aus_tensor) - - image_pil = Image.fromarray(aus_img) - if aus_resize: - bic = Image.BICUBIC - image_pil = image_pil.resize(aus_resize, bic) - - return image_pil - - def get_transform(self, load_size=0, grayscale=False, method=bic, convert=True): - transform_list = [] - if grayscale: - transform_list.append(transforms.Grayscale(1)) - if load_size > 0: - osize = [load_size, load_size] - transform_list.append(transforms.Resize(osize, method)) - if convert: - transform_list += [transforms.ToTensor()] - if grayscale: - transform_list += [transforms.Normalize((0.5,), (0.5,))] - else: - transform_list += [ - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - ] - return transforms.Compose(transform_list) - - def tensor_to_img(self, input_image, imtype=np.uint8): - """ "Converts a Tensor array into a numpy image array. 
- Parameters: - input_image (tensor) -- the input image tensor array - imtype (type) -- the desired type of the converted numpy array - """ - - if not isinstance(input_image, np.ndarray): - if isinstance(input_image, torch.Tensor): # get the data from a variable - image_tensor = input_image.data - else: - return input_image - image_numpy = ( - image_tensor[0].cpu().float().numpy() - ) # convert it into a numpy array - if image_numpy.shape[0] == 1: # grayscale to RGB - image_numpy = np.tile(image_numpy, (3, 1, 1)) - image_numpy = ( - (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 - ) # post-processing: tranpose and scaling - else: # if it is a numpy array, do nothing - image_numpy = input_image - return image_numpy.astype(imtype) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/intel_opts/README.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/intel_opts/README.md deleted file mode 100644 index 6b25679efbe90d556244e7aa6bee3e863c28b069..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/intel_opts/README.md +++ /dev/null @@ -1,37 +0,0 @@ -## Diffusers examples with Intel optimizations - -**This research project is not actively maintained by the diffusers team. For any questions or comments, please make sure to tag @hshen14 .** - -This aims to provide diffusers examples with Intel optimizations such as Bfloat16 for training/fine-tuning acceleration and 8-bit integer (INT8) for inference acceleration on Intel platforms. - -## Accelerating the fine-tuning for textual inversion - -We accelereate the fine-tuning for textual inversion with Intel Extension for PyTorch. The [examples](textual_inversion) enable both single node and multi-node distributed training with Bfloat16 support on Intel Xeon Scalable Processor. - -## Accelerating the inference for Stable Diffusion using Bfloat16 - -We start the inference acceleration with Bfloat16 using Intel Extension for PyTorch. The [script](inference_bf16.py) is generally designed to support standard Stable Diffusion models with Bfloat16 support. -```bash -pip install diffusers transformers accelerate scipy safetensors - -export KMP_BLOCKTIME=1 -export KMP_SETTINGS=1 -export KMP_AFFINITY=granularity=fine,compact,1,0 - -# Intel OpenMP -export OMP_NUM_THREADS=< Cores to use > -export LD_PRELOAD=${LD_PRELOAD}:/path/to/lib/libiomp5.so -# Jemalloc is a recommended malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support. -export LD_PRELOAD=${LD_PRELOAD}:/path/to/lib/libjemalloc.so -export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:-1,muzzy_decay_ms:9000000000" - -# Launch with default DDIM -numactl --membind -C python python inference_bf16.py -# Launch with DPMSolverMultistepScheduler -numactl --membind -C python python inference_bf16.py --dpm - -``` - -## Accelerating the inference for Stable Diffusion using INT8 - -Coming soon ... 
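For orientation, the Bfloat16 README above compresses the whole flow into environment variables and a launch command. Here is a minimal sketch of what an `inference_bf16.py`-style script does; this is not the project's actual script: the model ID, prompt, and step count are placeholders, and it assumes `intel_extension_for_pytorch` is installed.

```python
import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline

# Load a standard Stable Diffusion pipeline on CPU (model ID is a placeholder).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Let IPEX rewrite the UNet, the hot path of the denoising loop, for BF16.
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)

# Run inference under CPU autocast so matmuls and convolutions use BF16.
with torch.cpu.amp.autocast(dtype=torch.bfloat16), torch.no_grad():
    image = pipe("a photo of an astronaut riding a horse",
                 num_inference_steps=20).images[0]

image.save("astronaut_bf16.png")
```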
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py deleted file mode 100644 index a490a89044979a26e7b851e0f763a482e93fa89f..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py +++ /dev/null @@ -1,774 +0,0 @@ -import html -import inspect -import re -import urllib.parse as ul -from typing import Any, Callable, Dict, List, Optional, Union - -import torch -from transformers import CLIPImageProcessor, T5EncoderModel, T5Tokenizer - -from ...loaders import LoraLoaderMixin -from ...models import UNet2DConditionModel -from ...schedulers import DDPMScheduler -from ...utils import ( - BACKENDS_MAPPING, - is_accelerate_available, - is_bs4_available, - is_ftfy_available, - logging, - replace_example_docstring, -) -from ...utils.torch_utils import randn_tensor -from ..pipeline_utils import DiffusionPipeline -from . import IFPipelineOutput -from .safety_checker import IFSafetyChecker -from .watermark import IFWatermarker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -if is_bs4_available(): - from bs4 import BeautifulSoup - -if is_ftfy_available(): - import ftfy - - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline - >>> from diffusers.utils import pt_to_pil - >>> import torch - - >>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) - >>> pipe.enable_model_cpu_offload() - - >>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' - >>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) - - >>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images - - >>> # save intermediate image - >>> pil_image = pt_to_pil(image) - >>> pil_image[0].save("./if_stage_I.png") - - >>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( - ... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 - ... ) - >>> super_res_1_pipe.enable_model_cpu_offload() - - >>> image = super_res_1_pipe( - ... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" - ... ).images - - >>> # save intermediate image - >>> pil_image = pt_to_pil(image) - >>> pil_image[0].save("./if_stage_I.png") - - >>> safety_modules = { - ... "feature_extractor": pipe.feature_extractor, - ... "safety_checker": pipe.safety_checker, - ... "watermarker": pipe.watermarker, - ... } - >>> super_res_2_pipe = DiffusionPipeline.from_pretrained( - ... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 - ... ) - >>> super_res_2_pipe.enable_model_cpu_offload() - - >>> image = super_res_2_pipe( - ... prompt=prompt, - ... image=image, - ... 
).images - >>> image[0].save("./if_stage_II.png") - ``` -""" - - -class IFPipeline(DiffusionPipeline, LoraLoaderMixin): - tokenizer: T5Tokenizer - text_encoder: T5EncoderModel - - unet: UNet2DConditionModel - scheduler: DDPMScheduler - - feature_extractor: Optional[CLIPImageProcessor] - safety_checker: Optional[IFSafetyChecker] - - watermarker: Optional[IFWatermarker] - - bad_punct_regex = re.compile( - r"[" + "#®•©™&@·º½¾¿¡§~" + "\)" + "\(" + "\]" + "\[" + "\}" + "\{" + "\|" + "\\" + "\/" + "\*" + r"]{1,}" - ) # noqa - - _optional_components = ["tokenizer", "text_encoder", "safety_checker", "feature_extractor", "watermarker"] - model_cpu_offload_seq = "text_encoder->unet" - - def __init__( - self, - tokenizer: T5Tokenizer, - text_encoder: T5EncoderModel, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - safety_checker: Optional[IFSafetyChecker], - feature_extractor: Optional[CLIPImageProcessor], - watermarker: Optional[IFWatermarker], - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the IF license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - self.register_modules( - tokenizer=tokenizer, - text_encoder=text_encoder, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - watermarker=watermarker, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def remove_all_hooks(self): - if is_accelerate_available(): - from accelerate.hooks import remove_hook_from_module - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - for model in [self.text_encoder, self.unet, self.safety_checker]: - if model is not None: - remove_hook_from_module(model, recurse=True) - - self.unet_offload_hook = None - self.text_encoder_offload_hook = None - self.final_offload_hook = None - - @torch.no_grad() - def encode_prompt( - self, - prompt, - do_classifier_free_guidance=True, - num_images_per_prompt=1, - device=None, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - clean_caption: bool = False, - ): - r""" - Encodes the prompt into text encoder hidden states. 
- - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`, *optional*): - torch device to place the resulting embeddings on - num_images_per_prompt (`int`, *optional*, defaults to 1): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and negative_prompt is not None: - if type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - - if device is None: - device = self._execution_device - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - # while T5 can handle much longer input sequences than 77, the text encoder was trained with a max length of 77 for IF - max_length = 77 - - if prompt_embeds is None: - prompt = self._text_preprocessing(prompt, clean_caption=clean_caption) - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=max_length, - truncation=True, - add_special_tokens=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {max_length} tokens: {removed_text}" - ) - - attention_mask = text_inputs.attention_mask.to(device) - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - if self.text_encoder is not None: - dtype = self.text_encoder.dtype - elif self.unet is not None: - dtype = self.unet.dtype - else: - dtype = None - - prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: 
- uncond_tokens = [""] * batch_size - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - uncond_tokens = self._text_preprocessing(uncond_tokens, clean_caption=clean_caption) - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors="pt", - ) - attention_mask = uncond_input.attention_mask.to(device) - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - else: - negative_prompt_embeds = None - - return prompt_embeds, negative_prompt_embeds - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, nsfw_detected, watermark_detected = self.safety_checker( - images=image, - clip_input=safety_checker_input.pixel_values.to(dtype=dtype), - ) - else: - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - - return image, nsfw_detected, watermark_detected - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - def prepare_intermediate_images(self, batch_size, num_channels, height, width, dtype, device, generator): - shape = (batch_size, num_channels, height, width) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - intermediate_images = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # scale the initial noise by the standard deviation required by the scheduler - intermediate_images = intermediate_images * self.scheduler.init_noise_sigma - return intermediate_images - - def _text_preprocessing(self, text, clean_caption=False): - if clean_caption and not is_bs4_available(): - logger.warn(BACKENDS_MAPPING["bs4"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if clean_caption and not is_ftfy_available(): - logger.warn(BACKENDS_MAPPING["ftfy"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if not isinstance(text, (tuple, list)): - text = [text] - - def process(text: str): - if clean_caption: - text = self._clean_caption(text) - text = self._clean_caption(text) - else: - text = text.lower().strip() - return text - - return [process(t) for t in text] - - def _clean_caption(self, caption): - caption = str(caption) - caption = ul.unquote_plus(caption) - caption = caption.strip().lower() - caption = re.sub("", "person", caption) - # urls: - caption = re.sub( - r"\b((?:https?:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - caption = re.sub( - r"\b((?:www:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - # html: - caption = BeautifulSoup(caption, features="html.parser").text - - # @ - caption = re.sub(r"@[\w\d]+\b", "", caption) - - # 31C0—31EF CJK Strokes - # 31F0—31FF Katakana Phonetic Extensions - # 3200—32FF Enclosed CJK Letters and Months - # 3300—33FF CJK Compatibility - # 3400—4DBF CJK Unified Ideographs Extension A - # 4DC0—4DFF Yijing Hexagram Symbols - # 4E00—9FFF CJK Unified Ideographs - caption = re.sub(r"[\u31c0-\u31ef]+", "", caption) - caption = re.sub(r"[\u31f0-\u31ff]+", "", caption) - caption = re.sub(r"[\u3200-\u32ff]+", "", caption) - caption = re.sub(r"[\u3300-\u33ff]+", "", caption) - caption = re.sub(r"[\u3400-\u4dbf]+", "", caption) - caption = re.sub(r"[\u4dc0-\u4dff]+", "", caption) - caption = re.sub(r"[\u4e00-\u9fff]+", "", caption) - ####################################################### - - # все виды тире / all types of dash --> "-" - caption = re.sub( - r"[\u002D\u058A\u05BE\u1400\u1806\u2010-\u2015\u2E17\u2E1A\u2E3A\u2E3B\u2E40\u301C\u3030\u30A0\uFE31\uFE32\uFE58\uFE63\uFF0D]+", # noqa - "-", - caption, - ) - - # кавычки к одному стандарту - caption = re.sub(r"[`´«»“”¨]", '"', caption) - caption = re.sub(r"[‘’]", "'", caption) - - # " - caption = re.sub(r""?", "", caption) - # & - caption = re.sub(r"&", "", caption) - - # ip adresses: - caption = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", " ", caption) - - # article ids: - caption = re.sub(r"\d:\d\d\s+$", "", caption) - - # \n - caption = re.sub(r"\\n", " ", caption) - - # "#123" - caption = re.sub(r"#\d{1,3}\b", "", caption) - # "#12345.." - caption = re.sub(r"#\d{5,}\b", "", caption) - # "123456.." 
- caption = re.sub(r"\b\d{6,}\b", "", caption) - # filenames: - caption = re.sub(r"[\S]+\.(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)", "", caption) - - # - caption = re.sub(r"[\"\']{2,}", r'"', caption) # """AUSVERKAUFT""" - caption = re.sub(r"[\.]{2,}", r" ", caption) # """AUSVERKAUFT""" - - caption = re.sub(self.bad_punct_regex, r" ", caption) # ***AUSVERKAUFT***, #AUSVERKAUFT - caption = re.sub(r"\s+\.\s+", r" ", caption) # " . " - - # this-is-my-cute-cat / this_is_my_cute_cat - regex2 = re.compile(r"(?:\-|\_)") - if len(re.findall(regex2, caption)) > 3: - caption = re.sub(regex2, " ", caption) - - caption = ftfy.fix_text(caption) - caption = html.unescape(html.unescape(caption)) - - caption = re.sub(r"\b[a-zA-Z]{1,3}\d{3,15}\b", "", caption) # jc6640 - caption = re.sub(r"\b[a-zA-Z]+\d+[a-zA-Z]+\b", "", caption) # jc6640vc - caption = re.sub(r"\b\d+[a-zA-Z]+\d+\b", "", caption) # 6640vc231 - - caption = re.sub(r"(worldwide\s+)?(free\s+)?shipping", "", caption) - caption = re.sub(r"(free\s)?download(\sfree)?", "", caption) - caption = re.sub(r"\bclick\b\s(?:for|on)\s\w+", "", caption) - caption = re.sub(r"\b(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)(\simage[s]?)?", "", caption) - caption = re.sub(r"\bpage\s+\d+\b", "", caption) - - caption = re.sub(r"\b\d*[a-zA-Z]+\d+[a-zA-Z]+\d+[a-zA-Z\d]*\b", r" ", caption) # j2d1a2a... - - caption = re.sub(r"\b\d+\.?\d*[xх×]\d+\.?\d*\b", "", caption) - - caption = re.sub(r"\b\s+\:\s+", r": ", caption) - caption = re.sub(r"(\D[,\./])\b", r"\1 ", caption) - caption = re.sub(r"\s+", " ", caption) - - caption.strip() - - caption = re.sub(r"^[\"\']([\w\W]+)[\"\']$", r"\1", caption) - caption = re.sub(r"^[\'\_,\-\:;]", r"", caption) - caption = re.sub(r"[\'\_,\-\:\-\+]$", r"", caption) - caption = re.sub(r"^\.\S+$", "", caption) - - return caption.strip() - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - num_inference_steps: int = 100, - timesteps: List[int] = None, - guidance_scale: float = 7.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - height: Optional[int] = None, - width: Optional[int] = None, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - clean_caption: bool = True, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - timesteps (`List[int]`, *optional*): - Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps` - timesteps are used. Must be in descending order. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. 
of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - height (`int`, *optional*, defaults to self.unet.config.sample_size): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size): - The width in pixels of the generated image. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.IFPipelineOutput`] instead of a plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - clean_caption (`bool`, *optional*, defaults to `True`): - Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to - be installed. If the dependencies are not installed, the embeddings will be created from the raw - prompt. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). - - Examples: - - Returns: - [`~pipelines.stable_diffusion.IFPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.IFPipelineOutput`] if `return_dict` is True, otherwise a `tuple. 
When - returning a tuple, the first element is a list with the generated images, and the second element is a list - of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) - or watermarked content, according to the `safety_checker`. - """ - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds) - - # 2. Define call parameters - height = height or self.unet.config.sample_size - width = width or self.unet.config.sample_size - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds, negative_prompt_embeds = self.encode_prompt( - prompt, - do_classifier_free_guidance, - num_images_per_prompt=num_images_per_prompt, - device=device, - negative_prompt=negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - clean_caption=clean_caption, - ) - - if do_classifier_free_guidance: - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - # 4. Prepare timesteps - if timesteps is not None: - self.scheduler.set_timesteps(timesteps=timesteps, device=device) - timesteps = self.scheduler.timesteps - num_inference_steps = len(timesteps) - else: - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare intermediate images - intermediate_images = self.prepare_intermediate_images( - batch_size * num_images_per_prompt, - self.unet.config.in_channels, - height, - width, - prompt_embeds.dtype, - device, - generator, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # HACK: see comment in `enable_model_cpu_offload` - if hasattr(self, "text_encoder_offload_hook") and self.text_encoder_offload_hook is not None: - self.text_encoder_offload_hook.offload() - - # 7. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - model_input = ( - torch.cat([intermediate_images] * 2) if do_classifier_free_guidance else intermediate_images - ) - model_input = self.scheduler.scale_model_input(model_input, t) - - # predict the noise residual - noise_pred = self.unet( - model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred_uncond, _ = noise_pred_uncond.split(model_input.shape[1], dim=1) - noise_pred_text, predicted_variance = noise_pred_text.split(model_input.shape[1], dim=1) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, predicted_variance], dim=1) - - if self.scheduler.config.variance_type not in ["learned", "learned_range"]: - noise_pred, _ = noise_pred.split(model_input.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - intermediate_images = self.scheduler.step( - noise_pred, t, intermediate_images, **extra_step_kwargs, return_dict=False - )[0] - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, intermediate_images) - - image = intermediate_images - - if output_type == "pil": - # 8. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 9. Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - - # 11. Apply watermark - if self.watermarker is not None: - image = self.watermarker.apply_watermark(image, self.unet.config.sample_size) - elif output_type == "pt": - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - else: - # 8. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 9. Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload all models - self.maybe_free_model_hooks() - - if not return_dict: - return (image, nsfw_detected, watermark_detected) - - return IFPipelineOutput(images=image, nsfw_detected=nsfw_detected, watermark_detected=watermark_detected) diff --git a/spaces/perilli/tortoise-tts-v2/models/arch_util.py b/spaces/perilli/tortoise-tts-v2/models/arch_util.py deleted file mode 100644 index 832315c15c7c2a182d1f0d9fa0d971299e05d2f1..0000000000000000000000000000000000000000 --- a/spaces/perilli/tortoise-tts-v2/models/arch_util.py +++ /dev/null @@ -1,367 +0,0 @@ -import functools -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchaudio -from models.xtransformers import ContinuousTransformerWrapper, RelativePositionBias - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. 
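Within this file, zeroing is used to initialize output projections (``AttentionBlock.proj_out`` and the final convolution in ``ResBlock.out_layers``), so that freshly constructed blocks start out as identity residual branches.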
- """ - for p in module.parameters(): - p.detach().zero_() - return module - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - - -def normalization(channels): - """ - Make a standard normalization layer. - - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - groups = 32 - if channels <= 16: - groups = 8 - elif channels <= 64: - groups = 16 - while channels % groups != 0: - groups = int(groups / 2) - assert groups > 2 - return GroupNorm32(groups, channels) - - -class QKVAttentionLegacy(nn.Module): - """ - A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv, mask=None, rel_pos=None): - """ - Apply QKV attention. - - :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. - """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = torch.einsum( - "bct,bcs->bts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - if rel_pos is not None: - weight = rel_pos(weight.reshape(bs, self.n_heads, weight.shape[-2], weight.shape[-1])).reshape(bs * self.n_heads, weight.shape[-2], weight.shape[-1]) - weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype) - if mask is not None: - # The proper way to do this is to mask before the softmax using -inf, but that doesn't work properly on CPUs. - mask = mask.repeat(self.n_heads, 1).unsqueeze(1) - weight = weight * mask - a = torch.einsum("bts,bcs->bct", weight, v) - - return a.reshape(bs, -1, length) - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. - - Originally ported from here, but adapted to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. - """ - - def __init__( - self, - channels, - num_heads=1, - num_head_channels=-1, - do_checkpoint=True, - relative_pos_embeddings=False, - ): - super().__init__() - self.channels = channels - self.do_checkpoint = do_checkpoint - if num_head_channels == -1: - self.num_heads = num_heads - else: - assert ( - channels % num_head_channels == 0 - ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}" - self.num_heads = channels // num_head_channels - self.norm = normalization(channels) - self.qkv = nn.Conv1d(channels, channels * 3, 1) - # split heads before split qkv - self.attention = QKVAttentionLegacy(self.num_heads) - - self.proj_out = zero_module(nn.Conv1d(channels, channels, 1)) - if relative_pos_embeddings: - self.relative_pos_embeddings = RelativePositionBias(scale=(channels // self.num_heads) ** .5, causal=False, heads=num_heads, num_buckets=32, max_distance=64) - else: - self.relative_pos_embeddings = None - - def forward(self, x, mask=None): - b, c, *spatial = x.shape - x = x.reshape(b, c, -1) - qkv = self.qkv(self.norm(x)) - h = self.attention(qkv, mask, self.relative_pos_embeddings) - h = self.proj_out(h) - return (x + h).reshape(b, c, *spatial) - - -class Upsample(nn.Module): - """ - An upsampling layer with an optional convolution. - - :param channels: channels in the inputs and outputs. 
- :param use_conv: a bool determining if a convolution is applied. - """ - - def __init__(self, channels, use_conv, out_channels=None, factor=4): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.factor = factor - if use_conv: - ksize = 5 - pad = 2 - self.conv = nn.Conv1d(self.channels, self.out_channels, ksize, padding=pad) - - def forward(self, x): - assert x.shape[1] == self.channels - x = F.interpolate(x, scale_factor=self.factor, mode="nearest") - if self.use_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - """ - - def __init__(self, channels, use_conv, out_channels=None, factor=4, ksize=5, pad=2): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - - stride = factor - if use_conv: - self.op = nn.Conv1d( - self.channels, self.out_channels, ksize, stride=stride, padding=pad - ) - else: - assert self.channels == self.out_channels - self.op = nn.AvgPool1d(kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResBlock(nn.Module): - def __init__( - self, - channels, - dropout, - out_channels=None, - use_conv=False, - use_scale_shift_norm=False, - up=False, - down=False, - kernel_size=3, - ): - super().__init__() - self.channels = channels - self.dropout = dropout - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_scale_shift_norm = use_scale_shift_norm - padding = 1 if kernel_size == 3 else 2 - - self.in_layers = nn.Sequential( - normalization(channels), - nn.SiLU(), - nn.Conv1d(channels, self.out_channels, kernel_size, padding=padding), - ) - - self.updown = up or down - - if up: - self.h_upd = Upsample(channels, False) - self.x_upd = Upsample(channels, False) - elif down: - self.h_upd = Downsample(channels, False) - self.x_upd = Downsample(channels, False) - else: - self.h_upd = self.x_upd = nn.Identity() - - self.out_layers = nn.Sequential( - normalization(self.out_channels), - nn.SiLU(), - nn.Dropout(p=dropout), - zero_module( - nn.Conv1d(self.out_channels, self.out_channels, kernel_size, padding=padding) - ), - ) - - if self.out_channels == channels: - self.skip_connection = nn.Identity() - elif use_conv: - self.skip_connection = nn.Conv1d( - channels, self.out_channels, kernel_size, padding=padding - ) - else: - self.skip_connection = nn.Conv1d(channels, self.out_channels, 1) - - def forward(self, x): - if self.updown: - in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1] - h = in_rest(x) - h = self.h_upd(h) - x = self.x_upd(x) - h = in_conv(h) - else: - h = self.in_layers(x) - h = self.out_layers(h) - return self.skip_connection(x) + h - - -class AudioMiniEncoder(nn.Module): - def __init__(self, - spec_dim, - embedding_dim, - base_channels=128, - depth=2, - resnet_blocks=2, - attn_blocks=4, - num_attn_heads=4, - dropout=0, - downsample_factor=2, - kernel_size=3): - super().__init__() - self.init = nn.Sequential( - nn.Conv1d(spec_dim, base_channels, 3, padding=1) - ) - ch = base_channels - res = [] - for l in range(depth): - for r in range(resnet_blocks): - res.append(ResBlock(ch, dropout, kernel_size=kernel_size)) - res.append(Downsample(ch, use_conv=True, out_channels=ch*2, factor=downsample_factor)) - ch *= 2 - 
self.res = nn.Sequential(*res) - self.final = nn.Sequential( - normalization(ch), - nn.SiLU(), - nn.Conv1d(ch, embedding_dim, 1) - ) - attn = [] - for a in range(attn_blocks): - attn.append(AttentionBlock(embedding_dim, num_attn_heads,)) - self.attn = nn.Sequential(*attn) - self.dim = embedding_dim - - def forward(self, x): - h = self.init(x) - h = self.res(h) - h = self.final(h) - h = self.attn(h) - return h[:, :, 0] - - -class TorchMelSpectrogram(nn.Module): - def __init__(self, filter_length=1024, hop_length=256, win_length=1024, n_mel_channels=80, mel_fmin=0, mel_fmax=8000, - sampling_rate=22050, normalize=False, mel_norm_file='data/mel_norms.pth'): - super().__init__() - # These are the default tacotron values for the MEL spectrogram. - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.n_mel_channels = n_mel_channels - self.mel_fmin = mel_fmin - self.mel_fmax = mel_fmax - self.sampling_rate = sampling_rate - self.mel_stft = torchaudio.transforms.MelSpectrogram(n_fft=self.filter_length, hop_length=self.hop_length, - win_length=self.win_length, power=2, normalized=normalize, - sample_rate=self.sampling_rate, f_min=self.mel_fmin, - f_max=self.mel_fmax, n_mels=self.n_mel_channels, - norm="slaney") - self.mel_norm_file = mel_norm_file - if self.mel_norm_file is not None: - self.mel_norms = torch.load(self.mel_norm_file) - else: - self.mel_norms = None - - def forward(self, inp): - if len(inp.shape) == 3: # Automatically squeeze out the channels dimension if it is present (assuming mono-audio) - inp = inp.squeeze(1) - assert len(inp.shape) == 2 - self.mel_stft = self.mel_stft.to(inp.device) - mel = self.mel_stft(inp) - # Perform dynamic range compression - mel = torch.log(torch.clamp(mel, min=1e-5)) - if self.mel_norms is not None: - self.mel_norms = self.mel_norms.to(mel.device) - mel = mel / self.mel_norms.unsqueeze(0).unsqueeze(-1) - return mel - - -class CheckpointedLayer(nn.Module): - """ - Wraps a module. When forward() is called, passes kwargs that require_grad through torch.checkpoint() and bypasses - checkpoint for all other args. - """ - def __init__(self, wrap): - super().__init__() - self.wrap = wrap - - def forward(self, x, *args, **kwargs): - for k, v in kwargs.items(): - assert not (isinstance(v, torch.Tensor) and v.requires_grad) # This would screw up checkpointing. - partial = functools.partial(self.wrap, **kwargs) - return torch.utils.checkpoint.checkpoint(partial, x, *args) - - -class CheckpointedXTransformerEncoder(nn.Module): - """ - Wraps a ContinuousTransformerWrapper and applies CheckpointedLayer to each layer and permutes from channels-mid - to channels-last that XTransformer expects. 
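Concretely, ``needs_permute`` turns a (batch, channels, sequence) input into the (batch, sequence, channels) layout on the way in, and ``exit_permute`` restores the original layout on the way out.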
- """ - def __init__(self, needs_permute=True, exit_permute=True, checkpoint=True, **xtransformer_kwargs): - super().__init__() - self.transformer = ContinuousTransformerWrapper(**xtransformer_kwargs) - self.needs_permute = needs_permute - self.exit_permute = exit_permute - - if not checkpoint: - return - for i in range(len(self.transformer.attn_layers.layers)): - n, b, r = self.transformer.attn_layers.layers[i] - self.transformer.attn_layers.layers[i] = nn.ModuleList([n, CheckpointedLayer(b), r]) - - def forward(self, x, **kwargs): - if self.needs_permute: - x = x.permute(0,2,1) - h = self.transformer(x, **kwargs) - if self.exit_permute: - h = h.permute(0,2,1) - return h \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py deleted file mode 100644 index abb8770811f6d763433eaa87cf745ee720f1d7c7..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py +++ /dev/null @@ -1,127 +0,0 @@ -""" - pygments.formatters.terminal - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for terminal output with ANSI sequences. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.token import Keyword, Name, Comment, String, Error, \ - Number, Operator, Generic, Token, Whitespace -from pip._vendor.pygments.console import ansiformat -from pip._vendor.pygments.util import get_choice_opt - - -__all__ = ['TerminalFormatter'] - - -#: Map token types to a tuple of color values for light and dark -#: backgrounds. -TERMINAL_COLORS = { - Token: ('', ''), - - Whitespace: ('gray', 'brightblack'), - Comment: ('gray', 'brightblack'), - Comment.Preproc: ('cyan', 'brightcyan'), - Keyword: ('blue', 'brightblue'), - Keyword.Type: ('cyan', 'brightcyan'), - Operator.Word: ('magenta', 'brightmagenta'), - Name.Builtin: ('cyan', 'brightcyan'), - Name.Function: ('green', 'brightgreen'), - Name.Namespace: ('_cyan_', '_brightcyan_'), - Name.Class: ('_green_', '_brightgreen_'), - Name.Exception: ('cyan', 'brightcyan'), - Name.Decorator: ('brightblack', 'gray'), - Name.Variable: ('red', 'brightred'), - Name.Constant: ('red', 'brightred'), - Name.Attribute: ('cyan', 'brightcyan'), - Name.Tag: ('brightblue', 'brightblue'), - String: ('yellow', 'yellow'), - Number: ('blue', 'brightblue'), - - Generic.Deleted: ('brightred', 'brightred'), - Generic.Inserted: ('green', 'brightgreen'), - Generic.Heading: ('**', '**'), - Generic.Subheading: ('*magenta*', '*brightmagenta*'), - Generic.Prompt: ('**', '**'), - Generic.Error: ('brightred', 'brightred'), - - Error: ('_brightred_', '_brightred_'), -} - - -class TerminalFormatter(Formatter): - r""" - Format tokens with ANSI color sequences, for output in a text console. - Color sequences are terminated at newlines, so that paging the output - works correctly. - - The `get_style_defs()` method doesn't do anything special since there is - no support for common styles. - - Options accepted: - - `bg` - Set to ``"light"`` or ``"dark"`` depending on the terminal's background - (default: ``"light"``). - - `colorscheme` - A dictionary mapping token types to (lightbg, darkbg) color names or - ``None`` (default: ``None`` = use builtin colorscheme). 
- - `linenos` - Set to ``True`` to have line numbers on the terminal output as well - (default: ``False`` = no line numbers). - """ - name = 'Terminal' - aliases = ['terminal', 'console'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self.darkbg = get_choice_opt(options, 'bg', - ['light', 'dark'], 'light') == 'dark' - self.colorscheme = options.get('colorscheme', None) or TERMINAL_COLORS - self.linenos = options.get('linenos', False) - self._lineno = 0 - - def format(self, tokensource, outfile): - return Formatter.format(self, tokensource, outfile) - - def _write_lineno(self, outfile): - self._lineno += 1 - outfile.write("%s%04d: " % (self._lineno != 1 and '\n' or '', self._lineno)) - - def _get_color(self, ttype): - # self.colorscheme is a dict containing usually generic types, so we - # have to walk the tree of dots. The base Token type must be a key, - # even if it's empty string, as in the default above. - colors = self.colorscheme.get(ttype) - while colors is None: - ttype = ttype.parent - colors = self.colorscheme.get(ttype) - return colors[self.darkbg] - - def format_unencoded(self, tokensource, outfile): - if self.linenos: - self._write_lineno(outfile) - - for ttype, value in tokensource: - color = self._get_color(ttype) - - for line in value.splitlines(True): - if color: - outfile.write(ansiformat(color, line.rstrip('\n'))) - else: - outfile.write(line.rstrip('\n')) - if line.endswith('\n'): - if self.linenos: - self._write_lineno(outfile) - else: - outfile.write('\n') - - if self.linenos: - outfile.write("\n") diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/nap.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/nap.py deleted file mode 100644 index 72aa5bfd4b60d8e6ef6ed0cf2ae4f763d12195cc..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/nap.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright 2016 Étienne Bersac -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import time -import typing - -if typing.TYPE_CHECKING: - import threading - - -def sleep(seconds: float) -> None: - """ - Sleep strategy that delays execution for a given number of seconds. - - This is the default strategy, and may be mocked out for unit testing. - """ - time.sleep(seconds) - - -class sleep_using_event: - """Sleep strategy that waits on an event to be set.""" - - def __init__(self, event: "threading.Event") -> None: - self.event = event - - def __call__(self, timeout: typing.Optional[float]) -> None: - # NOTE(harlowja): this may *not* actually wait for timeout - # seconds if the event is set (ie this may eject out early). 
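# Put differently, the effective delay is min(timeout, time until the event is set), so another thread can set the event to cut a pending retry sleep short.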
- self.event.wait(timeout=timeout) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/config.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/config.py deleted file mode 100644 index 9a4044adaf876f57befa8cf37c5c23f8840a99f4..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/config.py +++ /dev/null @@ -1,139 +0,0 @@ -"""distutils.pypirc - -Provides the PyPIRCCommand class, the base class for the command classes -that uses .pypirc in the distutils.command package. -""" -import os -from configparser import RawConfigParser - -from .cmd import Command - -DEFAULT_PYPIRC = """\ -[distutils] -index-servers = - pypi - -[pypi] -username:%s -password:%s -""" - - -class PyPIRCCommand(Command): - """Base command that knows how to handle the .pypirc file""" - - DEFAULT_REPOSITORY = 'https://upload.pypi.org/legacy/' - DEFAULT_REALM = 'pypi' - repository = None - realm = None - - user_options = [ - ('repository=', 'r', "url of repository [default: %s]" % DEFAULT_REPOSITORY), - ('show-response', None, 'display full response text from server'), - ] - - boolean_options = ['show-response'] - - def _get_rc_file(self): - """Returns rc file path.""" - return os.path.join(os.path.expanduser('~'), '.pypirc') - - def _store_pypirc(self, username, password): - """Creates a default .pypirc file.""" - rc = self._get_rc_file() - with os.fdopen(os.open(rc, os.O_CREAT | os.O_WRONLY, 0o600), 'w') as f: - f.write(DEFAULT_PYPIRC % (username, password)) - - def _read_pypirc(self): # noqa: C901 - """Reads the .pypirc file.""" - rc = self._get_rc_file() - if os.path.exists(rc): - self.announce('Using PyPI login from %s' % rc) - repository = self.repository or self.DEFAULT_REPOSITORY - - config = RawConfigParser() - config.read(rc) - sections = config.sections() - if 'distutils' in sections: - # let's get the list of servers - index_servers = config.get('distutils', 'index-servers') - _servers = [ - server.strip() - for server in index_servers.split('\n') - if server.strip() != '' - ] - if _servers == []: - # nothing set, let's try to get the default pypi - if 'pypi' in sections: - _servers = ['pypi'] - else: - # the file is not properly defined, returning - # an empty dict - return {} - for server in _servers: - current = {'server': server} - current['username'] = config.get(server, 'username') - - # optional params - for key, default in ( - ('repository', self.DEFAULT_REPOSITORY), - ('realm', self.DEFAULT_REALM), - ('password', None), - ): - if config.has_option(server, key): - current[key] = config.get(server, key) - else: - current[key] = default - - # work around people having "repository" for the "pypi" - # section of their config set to the HTTP (rather than - # HTTPS) URL - if server == 'pypi' and repository in ( - self.DEFAULT_REPOSITORY, - 'pypi', - ): - current['repository'] = self.DEFAULT_REPOSITORY - return current - - if ( - current['server'] == repository - or current['repository'] == repository - ): - return current - elif 'server-login' in sections: - # old format - server = 'server-login' - if config.has_option(server, 'repository'): - repository = config.get(server, 'repository') - else: - repository = self.DEFAULT_REPOSITORY - return { - 'username': config.get(server, 'username'), - 'password': config.get(server, 'password'), - 'repository': repository, - 'server': server, - 'realm': self.DEFAULT_REALM, - } - - return {} - - def 
_read_pypi_response(self, response): - """Read and decode a PyPI HTTP response.""" - import cgi - - content_type = response.getheader('content-type', 'text/plain') - encoding = cgi.parse_header(content_type)[1].get('charset', 'ascii') - return response.read().decode(encoding) - - def initialize_options(self): - """Initialize options.""" - self.repository = None - self.realm = None - self.show_response = 0 - - def finalize_options(self): - """Finalizes options.""" - if self.repository is None: - self.repository = self.DEFAULT_REPOSITORY - if self.realm is None: - self.realm = self.DEFAULT_REALM diff --git a/spaces/poiiii/clefourrier-graphormer-base-pcqm4mv1/app.py b/spaces/poiiii/clefourrier-graphormer-base-pcqm4mv1/app.py deleted file mode 100644 index 7bb5eb8a4aea21d03cebc03bf746a1ba684601b7..0000000000000000000000000000000000000000 --- a/spaces/poiiii/clefourrier-graphormer-base-pcqm4mv1/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/cardiffnlp/twitter-roberta-base-sentiment-latest").launch() -gr.Interface.load("models/j-hartmann/emotion-english-distilroberta-base").launch() -gr.Interface.load("models/papluca/xlm-roberta-base-language-detection").launch() -gr.Interface.load("models/tuner007/pegasus_paraphrase").launch() -gr.Interface.load("models/mrm8488/t5-base-finetuned-question-generation-ap").launch() \ No newline at end of file diff --git a/spaces/ppsantiago/chatGPT/run_Linux.sh b/spaces/ppsantiago/chatGPT/run_Linux.sh deleted file mode 100644 index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000 --- a/spaces/ppsantiago/chatGPT/run_Linux.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# Get the directory the script lives in -script_dir=$(dirname "$0") - -# Change the working directory to the script's directory -cd "$script_dir" - -# Check whether the Git repository has updates -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # If there are updates, stop the currently running server - pkill -f ChuanhuChatbot.py - - # Pull the latest changes - git pull - - # Install dependencies - pip3 install -r requirements.txt - - # Restart the server - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/priyank-m/m_OCR/README.md b/spaces/priyank-m/m_OCR/README.md deleted file mode 100644 index b82873573e62332cb4f05db7525764978838ef84..0000000000000000000000000000000000000000 --- a/spaces/priyank-m/m_OCR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: m_OCR -emoji: 📰 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/compiler.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/compiler.py deleted file mode 100644 index 4a892e6ed018565c60e56b37c51cc097695f206b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/compiler.py +++ /dev/null @@ -1,26 +0,0 @@ -from ...utils._importers import import_vl_convert -from ...utils.compiler import VegaLiteCompilerRegistry - -from typing import Final - - -ENTRY_POINT_GROUP: Final = "altair.vegalite.v5.vegalite_compiler" -vegalite_compilers = VegaLiteCompilerRegistry(entry_point_group=ENTRY_POINT_GROUP) - - -def vl_convert_compiler(vegalite_spec: dict) -> dict: - """ - Vega-Lite to Vega compiler that uses vl-convert - """ - from .
import SCHEMA_VERSION - - vlc = import_vl_convert() - - # Compute vl-convert's vl_version string (of the form 'v5_8') - # from SCHEMA_VERSION (of the form 'v5.8.0') - vl_version = "_".join(SCHEMA_VERSION.split(".")[:2]) - return vlc.vegalite_to_vega(vegalite_spec, vl_version=vl_version) - - -vegalite_compilers.register("vl-convert", vl_convert_compiler) -vegalite_compilers.enable("vl-convert") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/analytics.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/analytics.py deleted file mode 100644 index 281f2174eab60ea52469cf5b6d8db22ffcaefe08..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/analytics.py +++ /dev/null @@ -1,255 +0,0 @@ -""" Functions related to analytics and telemetry. """ -from __future__ import annotations - -import asyncio -import json -import os -import threading -import urllib.parse -import warnings -from distutils.version import StrictVersion -from typing import Any - -import requests - -import gradio -from gradio import wasm_utils -from gradio.context import Context -from gradio.utils import get_package_version - -# For testability, we import the pyfetch function into this module scope and define a fallback coroutine object to be patched in tests. -try: - from pyodide.http import pyfetch as pyodide_pyfetch # type: ignore -except ImportError: - - async def pyodide_pyfetch(*args, **kwargs): - raise NotImplementedError( - "pyodide.http.pyfetch is not available in this environment." - ) - - -ANALYTICS_URL = "https://api.gradio.app/" -PKG_VERSION_URL = "https://api.gradio.app/pkg-version" - - -def analytics_enabled() -> bool: - """ - Returns: True if analytics are enabled, False otherwise. - """ - return os.getenv("GRADIO_ANALYTICS_ENABLED", "True") == "True" - - -def _do_analytics_request(url: str, data: dict[str, Any]) -> None: - if wasm_utils.IS_WASM: - asyncio.ensure_future( - _do_wasm_analytics_request( - url=url, - data=data, - ) - ) - else: - threading.Thread( - target=_do_normal_analytics_request, - kwargs={ - "url": url, - "data": data, - }, - ).start() - - -def _do_normal_analytics_request(url: str, data: dict[str, Any]) -> None: - data["ip_address"] = get_local_ip_address() - try: - requests.post(url, data=data, timeout=5) - except (requests.ConnectionError, requests.exceptions.ReadTimeout): - pass # do not push analytics if no network - - -async def _do_wasm_analytics_request(url: str, data: dict[str, Any]) -> None: - data["ip_address"] = await get_local_ip_address_wasm() - - # We use urllib.parse.urlencode to encode the data as a form. - # Ref: https://docs.python.org/3/library/urllib.request.html#urllib-examples - body = urllib.parse.urlencode(data).encode("ascii") - headers = { - "Content-Type": "application/x-www-form-urlencoded", - } - - try: - await asyncio.wait_for( - pyodide_pyfetch(url, method="POST", headers=headers, body=body), - timeout=5, - ) - except asyncio.TimeoutError: - pass # do not push analytics if no network - - -def version_check(): - try: - current_pkg_version = get_package_version() - latest_pkg_version = requests.get(url=PKG_VERSION_URL, timeout=3).json()[ - "version" - ] - if StrictVersion(latest_pkg_version) > StrictVersion(current_pkg_version): - print( - f"IMPORTANT: You are using gradio version {current_pkg_version}, " - f"however version {latest_pkg_version} is available, please upgrade." 
- ) - print("--------") - except json.decoder.JSONDecodeError: - warnings.warn("unable to parse version details from package URL.") - except KeyError: - warnings.warn("package URL does not contain version info.") - except Exception: - pass - - -def get_local_ip_address() -> str: - """ - Gets the public IP address or returns the string "No internet connection" if unable - to obtain it or the string "Analytics disabled" if a user has disabled analytics. - Does not make a new request if the IP address has already been obtained in the - same Python session. - """ - if not analytics_enabled(): - return "Analytics disabled" - - if Context.ip_address is None: - try: - ip_address = requests.get( - "https://checkip.amazonaws.com/", timeout=3 - ).text.strip() - except (requests.ConnectionError, requests.exceptions.ReadTimeout): - ip_address = "No internet connection" - Context.ip_address = ip_address - else: - ip_address = Context.ip_address - return ip_address - - -async def get_local_ip_address_wasm() -> str: - """The Wasm-compatible version of get_local_ip_address().""" - if not analytics_enabled(): - return "Analytics disabled" - - if Context.ip_address is None: - try: - response = await asyncio.wait_for( - pyodide_pyfetch( - # The API used by the normal version (`get_local_ip_address()`), `https://checkip.amazonaws.com/``, blocks CORS requests, so here we use a different API. - "https://api.ipify.org" - ), - timeout=5, - ) - response_text: str = await response.string() # type: ignore - ip_address = response_text.strip() - except (asyncio.TimeoutError, OSError): - ip_address = "No internet connection" - Context.ip_address = ip_address - else: - ip_address = Context.ip_address - return ip_address - - -def initiated_analytics(data: dict[str, Any]) -> None: - if not analytics_enabled(): - return - - _do_analytics_request( - url=f"{ANALYTICS_URL}gradio-initiated-analytics/", - data=data, - ) - - -def launched_analytics(blocks: gradio.Blocks, data: dict[str, Any]) -> None: - if not analytics_enabled(): - return - - ( - blocks_telemetry, - inputs_telemetry, - outputs_telemetry, - targets_telemetry, - events_telemetry, - ) = ( - [], - [], - [], - [], - [], - ) - - from gradio.blocks import BlockContext - - for x in list(blocks.blocks.values()): - blocks_telemetry.append(x.get_block_name()) if isinstance( - x, BlockContext - ) else blocks_telemetry.append(str(x)) - - for x in blocks.dependencies: - targets_telemetry = targets_telemetry + [ - # Sometimes the target can be the Blocks object itself, so we need to check if its in blocks.blocks - str(blocks.blocks[y[0]]) - for y in x["targets"] - if y[0] in blocks.blocks - ] - events_telemetry = events_telemetry + [ - y[1] for y in x["targets"] if y[0] in blocks.blocks - ] - inputs_telemetry = inputs_telemetry + [ - str(blocks.blocks[y]) for y in x["inputs"] if y in blocks.blocks - ] - outputs_telemetry = outputs_telemetry + [ - str(blocks.blocks[y]) for y in x["outputs"] if y in blocks.blocks - ] - additional_data = { - "version": get_package_version(), - "is_kaggle": blocks.is_kaggle, - "is_sagemaker": blocks.is_sagemaker, - "using_auth": blocks.auth is not None, - "dev_mode": blocks.dev_mode, - "show_api": blocks.show_api, - "show_error": blocks.show_error, - "title": blocks.title, - "inputs": blocks.input_components - if blocks.mode == "interface" - else inputs_telemetry, - "outputs": blocks.output_components - if blocks.mode == "interface" - else outputs_telemetry, - "targets": targets_telemetry, - "blocks": blocks_telemetry, - "events": 
events_telemetry, - "is_wasm": wasm_utils.IS_WASM, - } - - data.update(additional_data) - - _do_analytics_request(url=f"{ANALYTICS_URL}gradio-launched-telemetry/", data=data) - - -def integration_analytics(data: dict[str, Any]) -> None: - if not analytics_enabled(): - return - - _do_analytics_request( - url=f"{ANALYTICS_URL}gradio-integration-analytics/", - data=data, - ) - - -def error_analytics(message: str) -> None: - """ - Send error analytics if there is network - Parameters: - message: Details about error - """ - if not analytics_enabled(): - return - - data = {"error": message} - - _do_analytics_request( - url=f"{ANALYTICS_URL}gradio-error-analytics/", - data=data, - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_receivebuffer.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_receivebuffer.py deleted file mode 100644 index e5c4e08a56f5081e87103f38b4add6ce1b730204..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_receivebuffer.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -import sys -from typing import List, Optional, Union - -__all__ = ["ReceiveBuffer"] - - -# Operations we want to support: -# - find next \r\n or \r\n\r\n (\n or \n\n are also acceptable), -# or wait until there is one -# - read at-most-N bytes -# Goals: -# - on average, do this fast -# - worst case, do this in O(n) where n is the number of bytes processed -# Plan: -# - store bytearray, offset, how far we've searched for a separator token -# - use the how-far-we've-searched data to avoid rescanning -# - while doing a stream of uninterrupted processing, advance offset instead -# of constantly copying -# WARNING: -# - I haven't benchmarked or profiled any of this yet. -# -# Note that starting in Python 3.4, deleting the initial n bytes from a -# bytearray is amortized O(n), thanks to some excellent work by Antoine -# Martin: -# -# https://bugs.python.org/issue19087 -# -# This means that if we only supported 3.4+, we could get rid of the code here -# involving self._start and self.compress, because it's doing exactly the same -# thing that bytearray now does internally. -# -# BUT unfortunately, we still support 2.7, and reading short segments out of a -# long buffer MUST be O(bytes read) to avoid DoS issues, so we can't actually -# delete this code. Yet: -# -# https://pythonclock.org/ -# -# (Two things to double-check first though: make sure PyPy also has the -# optimization, and benchmark to make sure it's a win, since we do have a -# slightly clever thing where we delay calling compress() until we've -# processed a whole event, which could in theory be slightly more efficient -# than the internal bytearray support.) 
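A rough usage sketch of the buffer's intended call pattern (hypothetical, not part of the module; `ReceiveBuffer` and its methods are defined just below):

    buf = ReceiveBuffer()
    buf += b"HTT"                                   # incomplete: no line terminator yet
    assert buf.maybe_extract_next_line() is None
    buf += b"P/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
    status = buf.maybe_extract_next_line()          # bytearray(b"HTTP/1.1 200 OK\r\n")
    headers = buf.maybe_extract_lines()             # [bytearray(b"Content-Length: 0")]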
-blank_line_regex = re.compile(b"\n\r?\n", re.MULTILINE) - - -class ReceiveBuffer: - def __init__(self) -> None: - self._data = bytearray() - self._next_line_search = 0 - self._multiple_lines_search = 0 - - def __iadd__(self, byteslike: Union[bytes, bytearray]) -> "ReceiveBuffer": - self._data += byteslike - return self - - def __bool__(self) -> bool: - return bool(len(self)) - - def __len__(self) -> int: - return len(self._data) - - # for @property unprocessed_data - def __bytes__(self) -> bytes: - return bytes(self._data) - - def _extract(self, count: int) -> bytearray: - # extracting an initial slice of the data buffer and return it - out = self._data[:count] - del self._data[:count] - - self._next_line_search = 0 - self._multiple_lines_search = 0 - - return out - - def maybe_extract_at_most(self, count: int) -> Optional[bytearray]: - """ - Extract a fixed number of bytes from the buffer. - """ - out = self._data[:count] - if not out: - return None - - return self._extract(count) - - def maybe_extract_next_line(self) -> Optional[bytearray]: - """ - Extract the first line, if it is completed in the buffer. - """ - # Only search in buffer space that we've not already looked at. - search_start_index = max(0, self._next_line_search - 1) - partial_idx = self._data.find(b"\r\n", search_start_index) - - if partial_idx == -1: - self._next_line_search = len(self._data) - return None - - # + 2 is to compensate len(b"\r\n") - idx = partial_idx + 2 - - return self._extract(idx) - - def maybe_extract_lines(self) -> Optional[List[bytearray]]: - """ - Extract everything up to the first blank line, and return a list of lines. - """ - # Handle the case where we have an immediate empty line. - if self._data[:1] == b"\n": - self._extract(1) - return [] - - if self._data[:2] == b"\r\n": - self._extract(2) - return [] - - # Only search in buffer space that we've not already looked at. - match = blank_line_regex.search(self._data, self._multiple_lines_search) - if match is None: - self._multiple_lines_search = max(0, len(self._data) - 2) - return None - - # Truncate the buffer and return it. - idx = match.span(0)[-1] - out = self._extract(idx) - lines = out.split(b"\n") - - for line in lines: - if line.endswith(b"\r"): - del line[-1] - - assert lines[-2] == lines[-1] == b"" - - del lines[-2:] - - return lines - - # In theory we should wait until `\r\n` before starting to validate - # incoming data. However it's interesting to detect (very) invalid data - # early given they might not even contain `\r\n` at all (hence only - # timeout will get rid of them). - # This is not a 100% effective detection but more of a cheap sanity check - # allowing for early abort in some useful cases. - # This is especially interesting when peer is messing up with HTTPS and - # sent us a TLS stream where we were expecting plain HTTP given all - # versions of TLS so far start handshake with a 0x16 message type code. 
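# Illustrative case: a TLS ClientHello record starts with content type byte 0x16, which is below 0x21 and is therefore flagged by the printable-character check below.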
- def is_next_line_obviously_invalid_request_line(self) -> bool: - try: - # HTTP header line must not contain non-printable characters - # and should not start with a space - return self._data[0] < 0x21 - except IndexError: - return False diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/httpcore/_utils.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/httpcore/_utils.py deleted file mode 100644 index df5dea8fe472697afea4156d2916389e2f70d684..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/httpcore/_utils.py +++ /dev/null @@ -1,36 +0,0 @@ -import select -import socket -import sys -import typing - - -def is_socket_readable(sock: typing.Optional[socket.socket]) -> bool: - """ - Return whether a socket, as identified by its file descriptor, is readable. - "A socket is readable" means that the read buffer isn't empty, i.e. that calling - .recv() on it would immediately return some data. - """ - # NOTE: we want to check for readability without actually attempting to read, because - # we don't want to block forever if it's not readable. - - # In the case that the socket no longer exists, or cannot return a file - # descriptor, we treat it as being readable, as if the next read operation - # on it is ready to return the terminating `b""`. - sock_fd = None if sock is None else sock.fileno() - if sock_fd is None or sock_fd < 0: # pragma: nocover - return True - - # The implementation below was stolen from: - # https://github.com/python-trio/trio/blob/20ee2b1b7376db637435d80e266212a35837ddcc/trio/_socket.py#L471-L478 - # See also: https://github.com/encode/httpcore/pull/193#issuecomment-703129316 - - # Use select.select on Windows or when poll is unavailable, and select.poll - # everywhere else. (E.g. when eventlet is in use. See #327) - if ( - sys.platform == "win32" or getattr(select, "poll", None) is None - ): # pragma: nocover - rready, _, _ = select.select([sock_fd], [], [], 0) - return bool(rready) - p = select.poll() - p.register(sock_fd, select.POLLIN) - return bool(p.poll(0)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/kiwisolver/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/kiwisolver/__init__.py deleted file mode 100644 index f4e1753659a11fed2676a22c1c779f2f76981c50..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/kiwisolver/__init__.py +++ /dev/null @@ -1,42 +0,0 @@ -# -------------------------------------------------------------------------------------- -# Copyright (c) 2013-2022, Nucleic Development Team. -# -# Distributed under the terms of the Modified BSD License. -# -# The full license is in the file LICENSE, distributed with this software.
-# -------------------------------------------------------------------------------------- -from ._cext import ( - Constraint, - Expression, - Solver, - Term, - Variable, - __kiwi_version__, - __version__, - strength, -) -from .exceptions import ( - BadRequiredStrength, - DuplicateConstraint, - DuplicateEditVariable, - UnknownConstraint, - UnknownEditVariable, - UnsatisfiableConstraint, -) - -__all__ = [ - "BadRequiredStrength", - "DuplicateConstraint", - "DuplicateEditVariable", - "UnknownConstraint", - "UnknownEditVariable", - "UnsatisfiableConstraint", - "strength", - "Variable", - "Term", - "Expression", - "Constraint", - "Solver", - "__version__", - "__kiwi_version__", -] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py deleted file mode 100644 index ccf4b800a614079e4c3d014b3ac2106b5a912882..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py +++ /dev/null @@ -1,1009 +0,0 @@ -import codecs -import datetime -import functools -from io import BytesIO -import logging -import math -import os -import pathlib -import shutil -import subprocess -from tempfile import TemporaryDirectory -import weakref - -from PIL import Image - -import matplotlib as mpl -from matplotlib import _api, cbook, font_manager as fm -from matplotlib.backend_bases import ( - _Backend, FigureCanvasBase, FigureManagerBase, RendererBase -) -from matplotlib.backends.backend_mixed import MixedModeRenderer -from matplotlib.backends.backend_pdf import ( - _create_pdf_info_dict, _datetime_to_pdf) -from matplotlib.path import Path -from matplotlib.figure import Figure -from matplotlib._pylab_helpers import Gcf - -_log = logging.getLogger(__name__) - - -# Note: When formatting floating point values, it is important to use the -# %f/{:f} format rather than %s/{} to avoid triggering scientific notation, -# which is not recognized by TeX. - -def _get_preamble(): - """Prepare a LaTeX preamble based on the rcParams configuration.""" - preamble = [ - # Remove Matplotlib's custom command \mathdefault. (Not using - # \mathnormal instead since this looks odd with Computer Modern.) - r"\def\mathdefault#1{#1}", - # Use displaystyle for all math. - r"\everymath=\expandafter{\the\everymath\displaystyle}", - # Allow pgf.preamble to override the above definitions. - mpl.rcParams["pgf.preamble"], - ] - if mpl.rcParams["pgf.texsystem"] != "pdflatex": - preamble.append("\\usepackage{fontspec}") - if mpl.rcParams["pgf.rcfonts"]: - families = ["serif", "sans\\-serif", "monospace"] - commands = ["setmainfont", "setsansfont", "setmonofont"] - for family, command in zip(families, commands): - # 1) Forward slashes also work on Windows, so don't mess with - # backslashes. 2) The dirname needs to include a separator. - path = pathlib.Path(fm.findfont(family)) - preamble.append(r"\%s{%s}[Path=\detokenize{%s/}]" % ( - command, path.name, path.parent.as_posix())) - preamble.append(mpl.texmanager._usepackage_if_not_loaded( - "underscore", option="strings")) # Documented as "must come last". - return "\n".join(preamble) - - -# It's better to use only one unit for all coordinates, since the -# arithmetic in latex seems to produce inaccurate conversions. -latex_pt_to_in = 1. / 72.27 -latex_in_to_pt = 1. / latex_pt_to_in -mpl_pt_to_in = 1. / 72. -mpl_in_to_pt = 1. 
/ mpl_pt_to_in - - -def _tex_escape(text): - r""" - Do some necessary and/or useful substitutions for texts to be included in - LaTeX documents. - """ - return text.replace("\N{MINUS SIGN}", r"\ensuremath{-}") - - -def _writeln(fh, line): - # Ending lines with a % prevents TeX from inserting spurious spaces - # (https://tex.stackexchange.com/questions/7453). - fh.write(line) - fh.write("%\n") - - -def _escape_and_apply_props(s, prop): - """ - Generate a TeX string that renders string *s* with font properties *prop*, - also applying any required escapes to *s*. - """ - commands = [] - - families = {"serif": r"\rmfamily", "sans": r"\sffamily", - "sans-serif": r"\sffamily", "monospace": r"\ttfamily"} - family = prop.get_family()[0] - if family in families: - commands.append(families[family]) - elif (any(font.name == family for font in fm.fontManager.ttflist) - and mpl.rcParams["pgf.texsystem"] != "pdflatex"): - commands.append(r"\setmainfont{%s}\rmfamily" % family) - else: - _log.warning("Ignoring unknown font: %s", family) - - size = prop.get_size_in_points() - commands.append(r"\fontsize{%f}{%f}" % (size, size * 1.2)) - - styles = {"normal": r"", "italic": r"\itshape", "oblique": r"\slshape"} - commands.append(styles[prop.get_style()]) - - boldstyles = ["semibold", "demibold", "demi", "bold", "heavy", - "extra bold", "black"] - if prop.get_weight() in boldstyles: - commands.append(r"\bfseries") - - commands.append(r"\selectfont") - return ( - "{" - + "".join(commands) - + r"\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}" - # It should normally be enough to set the catcode of % to 12 ("normal - # character"); this works on TeXLive 2021 but not on 2018, so we just - # make it active too. - + r"\catcode`\%=\active\def%{\%}" - + _tex_escape(s) - + "}" - ) - - -def _metadata_to_str(key, value): - """Convert metadata key/value to a form that hyperref accepts.""" - if isinstance(value, datetime.datetime): - value = _datetime_to_pdf(value) - elif key == 'Trapped': - value = value.name.decode('ascii') - else: - value = str(value) - return f'{key}={{{value}}}' - - -def make_pdf_to_png_converter(): - """Return a function that converts a pdf file to a png file.""" - try: - mpl._get_executable_info("pdftocairo") - except mpl.ExecutableNotFoundError: - pass - else: - return lambda pdffile, pngfile, dpi: subprocess.check_output( - ["pdftocairo", "-singlefile", "-transp", "-png", "-r", "%d" % dpi, - pdffile, os.path.splitext(pngfile)[0]], - stderr=subprocess.STDOUT) - try: - gs_info = mpl._get_executable_info("gs") - except mpl.ExecutableNotFoundError: - pass - else: - return lambda pdffile, pngfile, dpi: subprocess.check_output( - [gs_info.executable, - '-dQUIET', '-dSAFER', '-dBATCH', '-dNOPAUSE', '-dNOPROMPT', - '-dUseCIEColor', '-dTextAlphaBits=4', - '-dGraphicsAlphaBits=4', '-dDOINTERPOLATE', - '-sDEVICE=pngalpha', '-sOutputFile=%s' % pngfile, - '-r%d' % dpi, pdffile], - stderr=subprocess.STDOUT) - raise RuntimeError("No suitable pdf to png renderer found.") - - -class LatexError(Exception): - def __init__(self, message, latex_output=""): - super().__init__(message) - self.latex_output = latex_output - - def __str__(self): - s, = self.args - if self.latex_output: - s += "\n" + self.latex_output - return s - - -class LatexManager: - """ - The LatexManager opens an instance of the LaTeX application for - determining the metrics of text elements. The LaTeX environment can be - modified by setting fonts and/or a custom preamble in `.rcParams`. 
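Text metrics are obtained by typesetting each string into a TeX box and reading back its width, height and depth registers (``\wd``, ``\ht``, ``\dp``) via ``\typeout``; see ``_get_box_metrics``.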
- """ - - @staticmethod - def _build_latex_header(): - latex_header = [ - r"\documentclass{article}", - # Include TeX program name as a comment for cache invalidation. - # TeX does not allow this to be the first line. - rf"% !TeX program = {mpl.rcParams['pgf.texsystem']}", - # Test whether \includegraphics supports interpolate option. - r"\usepackage{graphicx}", - _get_preamble(), - r"\begin{document}", - r"\typeout{pgf_backend_query_start}", - ] - return "\n".join(latex_header) - - @classmethod - def _get_cached_or_new(cls): - """ - Return the previous LatexManager if the header and tex system did not - change, or a new instance otherwise. - """ - return cls._get_cached_or_new_impl(cls._build_latex_header()) - - @classmethod - @functools.lru_cache(1) - def _get_cached_or_new_impl(cls, header): # Helper for _get_cached_or_new. - return cls() - - def _stdin_writeln(self, s): - if self.latex is None: - self._setup_latex_process() - self.latex.stdin.write(s) - self.latex.stdin.write("\n") - self.latex.stdin.flush() - - def _expect(self, s): - s = list(s) - chars = [] - while True: - c = self.latex.stdout.read(1) - chars.append(c) - if chars[-len(s):] == s: - break - if not c: - self.latex.kill() - self.latex = None - raise LatexError("LaTeX process halted", "".join(chars)) - return "".join(chars) - - def _expect_prompt(self): - return self._expect("\n*") - - def __init__(self): - # create a tmp directory for running latex, register it for deletion - self._tmpdir = TemporaryDirectory() - self.tmpdir = self._tmpdir.name - self._finalize_tmpdir = weakref.finalize(self, self._tmpdir.cleanup) - - # test the LaTeX setup to ensure a clean startup of the subprocess - self._setup_latex_process(expect_reply=False) - stdout, stderr = self.latex.communicate("\n\\makeatletter\\@@end\n") - if self.latex.returncode != 0: - raise LatexError( - f"LaTeX errored (probably missing font or error in preamble) " - f"while processing the following input:\n" - f"{self._build_latex_header()}", - stdout) - self.latex = None # Will be set up on first use. - # Per-instance cache. - self._get_box_metrics = functools.lru_cache(self._get_box_metrics) - - def _setup_latex_process(self, *, expect_reply=True): - # Open LaTeX process for real work; register it for deletion. On - # Windows, we must ensure that the subprocess has quit before being - # able to delete the tmpdir in which it runs; in order to do so, we - # must first `kill()` it, and then `communicate()` with it. - try: - self.latex = subprocess.Popen( - [mpl.rcParams["pgf.texsystem"], "-halt-on-error"], - stdin=subprocess.PIPE, stdout=subprocess.PIPE, - encoding="utf-8", cwd=self.tmpdir) - except FileNotFoundError as err: - raise RuntimeError( - f"{mpl.rcParams['pgf.texsystem']!r} not found; install it or change " - f"rcParams['pgf.texsystem'] to an available TeX implementation" - ) from err - except OSError as err: - raise RuntimeError( - f"Error starting {mpl.rcParams['pgf.texsystem']!r}") from err - - def finalize_latex(latex): - latex.kill() - latex.communicate() - - self._finalize_latex = weakref.finalize( - self, finalize_latex, self.latex) - # write header with 'pgf_backend_query_start' token - self._stdin_writeln(self._build_latex_header()) - if expect_reply: # read until 'pgf_backend_query_start' token appears - self._expect("*pgf_backend_query_start") - self._expect_prompt() - - def get_width_height_descent(self, text, prop): - """ - Get the width, total height, and descent (in TeX points) for a text - typeset by the current LaTeX environment. 
- """ - return self._get_box_metrics(_escape_and_apply_props(text, prop)) - - def _get_box_metrics(self, tex): - """ - Get the width, total height and descent (in TeX points) for a TeX - command's output in the current LaTeX environment. - """ - # This method gets wrapped in __init__ for per-instance caching. - self._stdin_writeln( # Send textbox to TeX & request metrics typeout. - # \sbox doesn't handle catcode assignments inside its argument, - # so repeat the assignment of the catcode of "^" and "%" outside. - r"{\catcode`\^=\active\catcode`\%%=\active\sbox0{%s}" - r"\typeout{\the\wd0,\the\ht0,\the\dp0}}" - % tex) - try: - answer = self._expect_prompt() - except LatexError as err: - # Here and below, use '{}' instead of {!r} to avoid doubling all - # backslashes. - raise ValueError("Error measuring {}\nLaTeX Output:\n{}" - .format(tex, err.latex_output)) from err - try: - # Parse metrics from the answer string. Last line is prompt, and - # next-to-last-line is blank line from \typeout. - width, height, offset = answer.splitlines()[-3].split(",") - except Exception as err: - raise ValueError("Error measuring {}\nLaTeX Output:\n{}" - .format(tex, answer)) from err - w, h, o = float(width[:-2]), float(height[:-2]), float(offset[:-2]) - # The height returned from LaTeX goes from base to top; - # the height Matplotlib expects goes from bottom to top. - return w, h + o, o - - -@functools.lru_cache(1) -def _get_image_inclusion_command(): - man = LatexManager._get_cached_or_new() - man._stdin_writeln( - r"\includegraphics[interpolate=true]{%s}" - # Don't mess with backslashes on Windows. - % cbook._get_data_path("images/matplotlib.png").as_posix()) - try: - man._expect_prompt() - return r"\includegraphics" - except LatexError: - # Discard the broken manager. - LatexManager._get_cached_or_new_impl.cache_clear() - return r"\pgfimage" - - -class RendererPgf(RendererBase): - - def __init__(self, figure, fh): - """ - Create a new PGF renderer that translates any drawing instruction - into text commands to be interpreted in a latex pgfpicture environment. - - Attributes - ---------- - figure : `~matplotlib.figure.Figure` - Matplotlib figure to initialize height, width and dpi from. - fh : file-like - File handle for the output of the drawing commands. - """ - - super().__init__() - self.dpi = figure.dpi - self.fh = fh - self.figure = figure - self.image_counter = 0 - - def draw_markers(self, gc, marker_path, marker_trans, path, trans, - rgbFace=None): - # docstring inherited - - _writeln(self.fh, r"\begin{pgfscope}") - - # convert from display units to in - f = 1. / self.dpi - - # set style and clip - self._print_pgf_clip(gc) - self._print_pgf_path_styles(gc, rgbFace) - - # build marker definition - bl, tr = marker_path.get_extents(marker_trans).get_points() - coords = bl[0] * f, bl[1] * f, tr[0] * f, tr[1] * f - _writeln(self.fh, - r"\pgfsys@defobject{currentmarker}" - r"{\pgfqpoint{%fin}{%fin}}{\pgfqpoint{%fin}{%fin}}{" % coords) - self._print_pgf_path(None, marker_path, marker_trans) - self._pgf_path_draw(stroke=gc.get_linewidth() != 0.0, - fill=rgbFace is not None) - _writeln(self.fh, r"}") - - maxcoord = 16383 / 72.27 * self.dpi # Max dimensions in LaTeX. 
- clip = (-maxcoord, -maxcoord, maxcoord, maxcoord) - - # draw marker for each vertex - for point, code in path.iter_segments(trans, simplify=False, - clip=clip): - x, y = point[0] * f, point[1] * f - _writeln(self.fh, r"\begin{pgfscope}") - _writeln(self.fh, r"\pgfsys@transformshift{%fin}{%fin}" % (x, y)) - _writeln(self.fh, r"\pgfsys@useobject{currentmarker}{}") - _writeln(self.fh, r"\end{pgfscope}") - - _writeln(self.fh, r"\end{pgfscope}") - - def draw_path(self, gc, path, transform, rgbFace=None): - # docstring inherited - _writeln(self.fh, r"\begin{pgfscope}") - # draw the path - self._print_pgf_clip(gc) - self._print_pgf_path_styles(gc, rgbFace) - self._print_pgf_path(gc, path, transform, rgbFace) - self._pgf_path_draw(stroke=gc.get_linewidth() != 0.0, - fill=rgbFace is not None) - _writeln(self.fh, r"\end{pgfscope}") - - # if present, draw pattern on top - if gc.get_hatch(): - _writeln(self.fh, r"\begin{pgfscope}") - self._print_pgf_path_styles(gc, rgbFace) - - # combine clip and path for clipping - self._print_pgf_clip(gc) - self._print_pgf_path(gc, path, transform, rgbFace) - _writeln(self.fh, r"\pgfusepath{clip}") - - # build pattern definition - _writeln(self.fh, - r"\pgfsys@defobject{currentpattern}" - r"{\pgfqpoint{0in}{0in}}{\pgfqpoint{1in}{1in}}{") - _writeln(self.fh, r"\begin{pgfscope}") - _writeln(self.fh, - r"\pgfpathrectangle" - r"{\pgfqpoint{0in}{0in}}{\pgfqpoint{1in}{1in}}") - _writeln(self.fh, r"\pgfusepath{clip}") - scale = mpl.transforms.Affine2D().scale(self.dpi) - self._print_pgf_path(None, gc.get_hatch_path(), scale) - self._pgf_path_draw(stroke=True) - _writeln(self.fh, r"\end{pgfscope}") - _writeln(self.fh, r"}") - # repeat pattern, filling the bounding rect of the path - f = 1. / self.dpi - (xmin, ymin), (xmax, ymax) = \ - path.get_extents(transform).get_points() - xmin, xmax = f * xmin, f * xmax - ymin, ymax = f * ymin, f * ymax - repx, repy = math.ceil(xmax - xmin), math.ceil(ymax - ymin) - _writeln(self.fh, - r"\pgfsys@transformshift{%fin}{%fin}" % (xmin, ymin)) - for iy in range(repy): - for ix in range(repx): - _writeln(self.fh, r"\pgfsys@useobject{currentpattern}{}") - _writeln(self.fh, r"\pgfsys@transformshift{1in}{0in}") - _writeln(self.fh, r"\pgfsys@transformshift{-%din}{0in}" % repx) - _writeln(self.fh, r"\pgfsys@transformshift{0in}{1in}") - - _writeln(self.fh, r"\end{pgfscope}") - - def _print_pgf_clip(self, gc): - f = 1. 
/ self.dpi - # check for clip box - bbox = gc.get_clip_rectangle() - if bbox: - p1, p2 = bbox.get_points() - w, h = p2 - p1 - coords = p1[0] * f, p1[1] * f, w * f, h * f - _writeln(self.fh, - r"\pgfpathrectangle" - r"{\pgfqpoint{%fin}{%fin}}{\pgfqpoint{%fin}{%fin}}" - % coords) - _writeln(self.fh, r"\pgfusepath{clip}") - - # check for clip path - clippath, clippath_trans = gc.get_clip_path() - if clippath is not None: - self._print_pgf_path(gc, clippath, clippath_trans) - _writeln(self.fh, r"\pgfusepath{clip}") - - def _print_pgf_path_styles(self, gc, rgbFace): - # cap style - capstyles = {"butt": r"\pgfsetbuttcap", - "round": r"\pgfsetroundcap", - "projecting": r"\pgfsetrectcap"} - _writeln(self.fh, capstyles[gc.get_capstyle()]) - - # join style - joinstyles = {"miter": r"\pgfsetmiterjoin", - "round": r"\pgfsetroundjoin", - "bevel": r"\pgfsetbeveljoin"} - _writeln(self.fh, joinstyles[gc.get_joinstyle()]) - - # filling - has_fill = rgbFace is not None - - if gc.get_forced_alpha(): - fillopacity = strokeopacity = gc.get_alpha() - else: - strokeopacity = gc.get_rgb()[3] - fillopacity = rgbFace[3] if has_fill and len(rgbFace) > 3 else 1.0 - - if has_fill: - _writeln(self.fh, - r"\definecolor{currentfill}{rgb}{%f,%f,%f}" - % tuple(rgbFace[:3])) - _writeln(self.fh, r"\pgfsetfillcolor{currentfill}") - if has_fill and fillopacity != 1.0: - _writeln(self.fh, r"\pgfsetfillopacity{%f}" % fillopacity) - - # linewidth and color - lw = gc.get_linewidth() * mpl_pt_to_in * latex_in_to_pt - stroke_rgba = gc.get_rgb() - _writeln(self.fh, r"\pgfsetlinewidth{%fpt}" % lw) - _writeln(self.fh, - r"\definecolor{currentstroke}{rgb}{%f,%f,%f}" - % stroke_rgba[:3]) - _writeln(self.fh, r"\pgfsetstrokecolor{currentstroke}") - if strokeopacity != 1.0: - _writeln(self.fh, r"\pgfsetstrokeopacity{%f}" % strokeopacity) - - # line style - dash_offset, dash_list = gc.get_dashes() - if dash_list is None: - _writeln(self.fh, r"\pgfsetdash{}{0pt}") - else: - _writeln(self.fh, - r"\pgfsetdash{%s}{%fpt}" - % ("".join(r"{%fpt}" % dash for dash in dash_list), - dash_offset)) - - def _print_pgf_path(self, gc, path, transform, rgbFace=None): - f = 1. / self.dpi - # check for clip box / ignore clip for filled paths - bbox = gc.get_clip_rectangle() if gc else None - maxcoord = 16383 / 72.27 * self.dpi # Max dimensions in LaTeX. 
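# Aside: a sketch of the unit conversions this renderer relies on (the
# constant names mirror those used by the backend; the values are the
# standard definitions of the two point systems):
mpl_pt_to_in = 1. / 72.     # Matplotlib/PostScript points per inch
latex_in_to_pt = 72.27      # TeX points per inch
lw_mpl_pt = 1.5             # a line width in Matplotlib points
lw_tex_pt = lw_mpl_pt * mpl_pt_to_in * latex_in_to_pt
print(f"{lw_tex_pt:.4f}pt")  # ~1.5056pt: TeX points are slightly smaller
# TeX cannot address dimensions beyond 16383pt, hence the clipping window:
dpi = 100
maxcoord = 16383 / 72.27 * dpi  # max representable extent in display units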
- if bbox and (rgbFace is None): - p1, p2 = bbox.get_points() - clip = (max(p1[0], -maxcoord), max(p1[1], -maxcoord), - min(p2[0], maxcoord), min(p2[1], maxcoord)) - else: - clip = (-maxcoord, -maxcoord, maxcoord, maxcoord) - # build path - for points, code in path.iter_segments(transform, clip=clip): - if code == Path.MOVETO: - x, y = tuple(points) - _writeln(self.fh, - r"\pgfpathmoveto{\pgfqpoint{%fin}{%fin}}" % - (f * x, f * y)) - elif code == Path.CLOSEPOLY: - _writeln(self.fh, r"\pgfpathclose") - elif code == Path.LINETO: - x, y = tuple(points) - _writeln(self.fh, - r"\pgfpathlineto{\pgfqpoint{%fin}{%fin}}" % - (f * x, f * y)) - elif code == Path.CURVE3: - cx, cy, px, py = tuple(points) - coords = cx * f, cy * f, px * f, py * f - _writeln(self.fh, - r"\pgfpathquadraticcurveto" - r"{\pgfqpoint{%fin}{%fin}}{\pgfqpoint{%fin}{%fin}}" - % coords) - elif code == Path.CURVE4: - c1x, c1y, c2x, c2y, px, py = tuple(points) - coords = c1x * f, c1y * f, c2x * f, c2y * f, px * f, py * f - _writeln(self.fh, - r"\pgfpathcurveto" - r"{\pgfqpoint{%fin}{%fin}}" - r"{\pgfqpoint{%fin}{%fin}}" - r"{\pgfqpoint{%fin}{%fin}}" - % coords) - - # apply pgf decorators - sketch_params = gc.get_sketch_params() if gc else None - if sketch_params is not None: - # Only "length" directly maps to "segment length" in PGF's API. - # PGF uses "amplitude" to pass the combined deviation in both x- - # and y-direction, while matplotlib only varies the length of the - # wiggle along the line ("randomness" and "length" parameters) - # and has a separate "scale" argument for the amplitude. - # -> Use "randomness" as PRNG seed to allow the user to force the - # same shape on multiple sketched lines - scale, length, randomness = sketch_params - if scale is not None: - # make matplotlib and PGF rendering visually similar - length *= 0.5 - scale *= 2 - # PGF guarantees that repeated loading is a no-op - _writeln(self.fh, r"\usepgfmodule{decorations}") - _writeln(self.fh, r"\usepgflibrary{decorations.pathmorphing}") - _writeln(self.fh, r"\pgfkeys{/pgf/decoration/.cd, " - f"segment length = {(length * f):f}in, " - f"amplitude = {(scale * f):f}in}}") - _writeln(self.fh, f"\\pgfmathsetseed{{{int(randomness)}}}") - _writeln(self.fh, r"\pgfdecoratecurrentpath{random steps}") - - def _pgf_path_draw(self, stroke=True, fill=False): - actions = [] - if stroke: - actions.append("stroke") - if fill: - actions.append("fill") - _writeln(self.fh, r"\pgfusepath{%s}" % ",".join(actions)) - - def option_scale_image(self): - # docstring inherited - return True - - def option_image_nocomposite(self): - # docstring inherited - return not mpl.rcParams['image.composite_image'] - - def draw_image(self, gc, x, y, im, transform=None): - # docstring inherited - - h, w = im.shape[:2] - if w == 0 or h == 0: - return - - if not os.path.exists(getattr(self.fh, "name", "")): - raise ValueError( - "streamed pgf-code does not support raster graphics, consider " - "using the pgf-to-pdf option") - - # save the images to png files - path = pathlib.Path(self.fh.name) - fname_img = "%s-img%d.png" % (path.stem, self.image_counter) - Image.fromarray(im[::-1]).save(path.parent / fname_img) - self.image_counter += 1 - - # reference the image in the pgf picture - _writeln(self.fh, r"\begin{pgfscope}") - self._print_pgf_clip(gc) - f = 1. 
/ self.dpi # from display coords to inch - if transform is None: - _writeln(self.fh, - r"\pgfsys@transformshift{%fin}{%fin}" % (x * f, y * f)) - w, h = w * f, h * f - else: - tr1, tr2, tr3, tr4, tr5, tr6 = transform.frozen().to_values() - _writeln(self.fh, - r"\pgfsys@transformcm{%f}{%f}{%f}{%f}{%fin}{%fin}" % - (tr1 * f, tr2 * f, tr3 * f, tr4 * f, - (tr5 + x) * f, (tr6 + y) * f)) - w = h = 1 # scale is already included in the transform - interp = str(transform is None).lower() # interpolation in PDF reader - _writeln(self.fh, - r"\pgftext[left,bottom]" - r"{%s[interpolate=%s,width=%fin,height=%fin]{%s}}" % - (_get_image_inclusion_command(), - interp, w, h, fname_img)) - _writeln(self.fh, r"\end{pgfscope}") - - def draw_tex(self, gc, x, y, s, prop, angle, *, mtext=None): - # docstring inherited - self.draw_text(gc, x, y, s, prop, angle, ismath="TeX", mtext=mtext) - - def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None): - # docstring inherited - - # prepare string for tex - s = _escape_and_apply_props(s, prop) - - _writeln(self.fh, r"\begin{pgfscope}") - self._print_pgf_clip(gc) - - alpha = gc.get_alpha() - if alpha != 1.0: - _writeln(self.fh, r"\pgfsetfillopacity{%f}" % alpha) - _writeln(self.fh, r"\pgfsetstrokeopacity{%f}" % alpha) - rgb = tuple(gc.get_rgb())[:3] - _writeln(self.fh, r"\definecolor{textcolor}{rgb}{%f,%f,%f}" % rgb) - _writeln(self.fh, r"\pgfsetstrokecolor{textcolor}") - _writeln(self.fh, r"\pgfsetfillcolor{textcolor}") - s = r"\color{textcolor}" + s - - dpi = self.figure.dpi - text_args = [] - if mtext and ( - (angle == 0 or - mtext.get_rotation_mode() == "anchor") and - mtext.get_verticalalignment() != "center_baseline"): - # if text anchoring can be supported, get the original coordinates - # and add alignment information - pos = mtext.get_unitless_position() - x, y = mtext.get_transform().transform(pos) - halign = {"left": "left", "right": "right", "center": ""} - valign = {"top": "top", "bottom": "bottom", - "baseline": "base", "center": ""} - text_args.extend([ - f"x={x/dpi:f}in", - f"y={y/dpi:f}in", - halign[mtext.get_horizontalalignment()], - valign[mtext.get_verticalalignment()], - ]) - else: - # if not, use the text layout provided by Matplotlib. 
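# Aside, illustrative only: the kind of option list draw_text assembles for
# \pgftext. For an anchored, right-aligned label at (200, 100) display units
# on a 100-dpi figure, rotated 30 degrees:
dpi = 100
x, y, angle = 200, 100, 30
text_args = [f"x={x/dpi:f}in", f"y={y/dpi:f}in", "right", "base"]
if angle != 0:
    text_args.append("rotate=%f" % angle)
print(r"\pgftext[%s]{hello}" % ",".join(text_args))
# \pgftext[x=2.000000in,y=1.000000in,right,base,rotate=30.000000]{hello}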
- text_args.append(f"x={x/dpi:f}in, y={y/dpi:f}in, left, base") - - if angle != 0: - text_args.append("rotate=%f" % angle) - - _writeln(self.fh, r"\pgftext[%s]{%s}" % (",".join(text_args), s)) - _writeln(self.fh, r"\end{pgfscope}") - - def get_text_width_height_descent(self, s, prop, ismath): - # docstring inherited - # get text metrics in units of latex pt, convert to display units - w, h, d = (LatexManager._get_cached_or_new() - .get_width_height_descent(s, prop)) - # TODO: this should be latex_pt_to_in instead of mpl_pt_to_in - # but having a little bit more space around the text looks better, - # plus the bounding box reported by LaTeX is VERY narrow - f = mpl_pt_to_in * self.dpi - return w * f, h * f, d * f - - def flipy(self): - # docstring inherited - return False - - def get_canvas_width_height(self): - # docstring inherited - return (self.figure.get_figwidth() * self.dpi, - self.figure.get_figheight() * self.dpi) - - def points_to_pixels(self, points): - # docstring inherited - return points * mpl_pt_to_in * self.dpi - - -class FigureCanvasPgf(FigureCanvasBase): - filetypes = {"pgf": "LaTeX PGF picture", - "pdf": "LaTeX compiled PGF picture", - "png": "Portable Network Graphics", } - - def get_default_filetype(self): - return 'pdf' - - def _print_pgf_to_fh(self, fh, *, bbox_inches_restore=None): - - header_text = """%% Creator: Matplotlib, PGF backend -%% -%% To include the figure in your LaTeX document, write -%% \\input{.pgf} -%% -%% Make sure the required packages are loaded in your preamble -%% \\usepackage{pgf} -%% -%% Also ensure that all the required font packages are loaded; for instance, -%% the lmodern package is sometimes necessary when using math font. -%% \\usepackage{lmodern} -%% -%% Figures using additional raster images can only be included by \\input if -%% they are in the same directory as the main LaTeX file. For loading figures -%% from other directories you can use the `import` package -%% \\usepackage{import} -%% -%% and then include the figures with -%% \\import{}{.pgf} -%% -""" - - # append the preamble used by the backend as a comment for debugging - header_info_preamble = ["%% Matplotlib used the following preamble"] - for line in _get_preamble().splitlines(): - header_info_preamble.append("%% " + line) - header_info_preamble.append("%%") - header_info_preamble = "\n".join(header_info_preamble) - - # get figure size in inch - w, h = self.figure.get_figwidth(), self.figure.get_figheight() - dpi = self.figure.dpi - - # create pgfpicture environment and write the pgf code - fh.write(header_text) - fh.write(header_info_preamble) - fh.write("\n") - _writeln(fh, r"\begingroup") - _writeln(fh, r"\makeatletter") - _writeln(fh, r"\begin{pgfpicture}") - _writeln(fh, - r"\pgfpathrectangle{\pgfpointorigin}{\pgfqpoint{%fin}{%fin}}" - % (w, h)) - _writeln(fh, r"\pgfusepath{use as bounding box, clip}") - renderer = MixedModeRenderer(self.figure, w, h, dpi, - RendererPgf(self.figure, fh), - bbox_inches_restore=bbox_inches_restore) - self.figure.draw(renderer) - - # end the pgfpicture environment - _writeln(fh, r"\end{pgfpicture}") - _writeln(fh, r"\makeatother") - _writeln(fh, r"\endgroup") - - def print_pgf(self, fname_or_fh, **kwargs): - """ - Output pgf macros for drawing the figure so it can be included and - rendered in latex documents. 
- """ - with cbook.open_file_cm(fname_or_fh, "w", encoding="utf-8") as file: - if not cbook.file_requires_unicode(file): - file = codecs.getwriter("utf-8")(file) - self._print_pgf_to_fh(file, **kwargs) - - def print_pdf(self, fname_or_fh, *, metadata=None, **kwargs): - """Use LaTeX to compile a pgf generated figure to pdf.""" - w, h = self.figure.get_size_inches() - - info_dict = _create_pdf_info_dict('pgf', metadata or {}) - pdfinfo = ','.join( - _metadata_to_str(k, v) for k, v in info_dict.items()) - - # print figure to pgf and compile it with latex - with TemporaryDirectory() as tmpdir: - tmppath = pathlib.Path(tmpdir) - self.print_pgf(tmppath / "figure.pgf", **kwargs) - (tmppath / "figure.tex").write_text( - "\n".join([ - r"\documentclass[12pt]{article}", - r"\usepackage[pdfinfo={%s}]{hyperref}" % pdfinfo, - r"\usepackage[papersize={%fin,%fin}, margin=0in]{geometry}" - % (w, h), - r"\usepackage{pgf}", - _get_preamble(), - r"\begin{document}", - r"\centering", - r"\input{figure.pgf}", - r"\end{document}", - ]), encoding="utf-8") - texcommand = mpl.rcParams["pgf.texsystem"] - cbook._check_and_log_subprocess( - [texcommand, "-interaction=nonstopmode", "-halt-on-error", - "figure.tex"], _log, cwd=tmpdir) - with (tmppath / "figure.pdf").open("rb") as orig, \ - cbook.open_file_cm(fname_or_fh, "wb") as dest: - shutil.copyfileobj(orig, dest) # copy file contents to target - - def print_png(self, fname_or_fh, **kwargs): - """Use LaTeX to compile a pgf figure to pdf and convert it to png.""" - converter = make_pdf_to_png_converter() - with TemporaryDirectory() as tmpdir: - tmppath = pathlib.Path(tmpdir) - pdf_path = tmppath / "figure.pdf" - png_path = tmppath / "figure.png" - self.print_pdf(pdf_path, **kwargs) - converter(pdf_path, png_path, dpi=self.figure.dpi) - with png_path.open("rb") as orig, \ - cbook.open_file_cm(fname_or_fh, "wb") as dest: - shutil.copyfileobj(orig, dest) # copy file contents to target - - def get_renderer(self): - return RendererPgf(self.figure, None) - - def draw(self): - self.figure.draw_without_rendering() - return super().draw() - - -FigureManagerPgf = FigureManagerBase - - -@_Backend.export -class _BackendPgf(_Backend): - FigureCanvas = FigureCanvasPgf - - -class PdfPages: - """ - A multi-page PDF file using the pgf backend - - Examples - -------- - >>> import matplotlib.pyplot as plt - >>> # Initialize: - >>> with PdfPages('foo.pdf') as pdf: - ... # As many times as you like, create a figure fig and save it: - ... fig = plt.figure() - ... pdf.savefig(fig) - ... # When no figure is specified the current figure is saved - ... pdf.savefig() - """ - - _UNSET = object() - - def __init__(self, filename, *, keep_empty=_UNSET, metadata=None): - """ - Create a new PdfPages object. - - Parameters - ---------- - filename : str or path-like - Plots using `PdfPages.savefig` will be written to a file at this - location. Any older file with the same name is overwritten. - - keep_empty : bool, default: True - If set to False, then empty pdf files will be deleted automatically - when closed. - - metadata : dict, optional - Information dictionary object (see PDF reference section 10.2.1 - 'Document Information Dictionary'), e.g.: - ``{'Creator': 'My software', 'Author': 'Me', 'Title': 'Awesome'}``. - - The standard keys are 'Title', 'Author', 'Subject', 'Keywords', - 'Creator', 'Producer', 'CreationDate', 'ModDate', and - 'Trapped'. Values have been predefined for 'Creator', 'Producer' - and 'CreationDate'. They can be removed by setting them to `None`. 
- - Note that some versions of LaTeX engines may ignore the 'Producer' - key and set it to themselves. - """ - self._output_name = filename - self._n_figures = 0 - if keep_empty and keep_empty is not self._UNSET: - _api.warn_deprecated("3.8", message=( - "Keeping empty pdf files is deprecated since %(since)s and support " - "will be removed %(removal)s.")) - self._keep_empty = keep_empty - self._metadata = (metadata or {}).copy() - self._info_dict = _create_pdf_info_dict('pgf', self._metadata) - self._file = BytesIO() - - keep_empty = _api.deprecate_privatize_attribute("3.8") - - def _write_header(self, width_inches, height_inches): - pdfinfo = ','.join( - _metadata_to_str(k, v) for k, v in self._info_dict.items()) - latex_header = "\n".join([ - r"\documentclass[12pt]{article}", - r"\usepackage[pdfinfo={%s}]{hyperref}" % pdfinfo, - r"\usepackage[papersize={%fin,%fin}, margin=0in]{geometry}" - % (width_inches, height_inches), - r"\usepackage{pgf}", - _get_preamble(), - r"\setlength{\parindent}{0pt}", - r"\begin{document}%", - ]) - self._file.write(latex_header.encode('utf-8')) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - - def close(self): - """ - Finalize this object, running LaTeX in a temporary directory - and moving the final pdf file to *filename*. - """ - self._file.write(rb'\end{document}\n') - if self._n_figures > 0: - self._run_latex() - elif self._keep_empty: - _api.warn_deprecated("3.8", message=( - "Keeping empty pdf files is deprecated since %(since)s and support " - "will be removed %(removal)s.")) - open(self._output_name, 'wb').close() - self._file.close() - - def _run_latex(self): - texcommand = mpl.rcParams["pgf.texsystem"] - with TemporaryDirectory() as tmpdir: - tex_source = pathlib.Path(tmpdir, "pdf_pages.tex") - tex_source.write_bytes(self._file.getvalue()) - cbook._check_and_log_subprocess( - [texcommand, "-interaction=nonstopmode", "-halt-on-error", - tex_source], - _log, cwd=tmpdir) - shutil.move(tex_source.with_suffix(".pdf"), self._output_name) - - def savefig(self, figure=None, **kwargs): - """ - Save a `.Figure` to this file as a new page. - - Any other keyword arguments are passed to `~.Figure.savefig`. - - Parameters - ---------- - figure : `.Figure` or int, default: the active figure - The figure, or index of the figure, that is saved to the file. - """ - if not isinstance(figure, Figure): - if figure is None: - manager = Gcf.get_active() - else: - manager = Gcf.get_fig_manager(figure) - if manager is None: - raise ValueError(f"No figure {figure}") - figure = manager.canvas.figure - - with cbook._setattr_cm(figure, canvas=FigureCanvasPgf(figure)): - width, height = figure.get_size_inches() - if self._n_figures == 0: - self._write_header(width, height) - else: - # \pdfpagewidth and \pdfpageheight exist on pdftex, xetex, and - # luatex<0.85; they were renamed to \pagewidth and \pageheight - # on luatex>=0.85. 
- self._file.write( - ( - r'\newpage' - r'\ifdefined\pdfpagewidth\pdfpagewidth' - fr'\else\pagewidth\fi={width}in' - r'\ifdefined\pdfpageheight\pdfpageheight' - fr'\else\pageheight\fi={height}in' - '%%\n' - ).encode("ascii") - ) - figure.savefig(self._file, format="pgf", **kwargs) - self._n_figures += 1 - - def get_pagecount(self): - """Return the current number of pages in the multipage pdf file.""" - return self._n_figures diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/EpochConverter.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/EpochConverter.py deleted file mode 100644 index f42d7b71d0419770815efec1deace31fa4ec4fe4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/EpochConverter.py +++ /dev/null @@ -1,96 +0,0 @@ -"""EpochConverter module containing class EpochConverter.""" - -from matplotlib import cbook, units -import matplotlib.dates as date_ticker - -__all__ = ['EpochConverter'] - - -class EpochConverter(units.ConversionInterface): - """ - Provides Matplotlib conversion functionality for Monte Epoch and Duration - classes. - """ - - # julian date reference for "Jan 1, 0001" minus 1 day because - # Matplotlib really wants "Jan 0, 0001" - jdRef = 1721425.5 - 1 - - @staticmethod - def axisinfo(unit, axis): - # docstring inherited - majloc = date_ticker.AutoDateLocator() - majfmt = date_ticker.AutoDateFormatter(majloc) - return units.AxisInfo(majloc=majloc, majfmt=majfmt, label=unit) - - @staticmethod - def float2epoch(value, unit): - """ - Convert a Matplotlib floating-point date into an Epoch of the specified - units. - - = INPUT VARIABLES - - value The Matplotlib floating-point date. - - unit The unit system to use for the Epoch. - - = RETURN VALUE - - Returns the value converted to an Epoch in the specified time system. - """ - # Delay-load due to circular dependencies. - import matplotlib.testing.jpl_units as U - - secPastRef = value * 86400.0 * U.UnitDbl(1.0, 'sec') - return U.Epoch(unit, secPastRef, EpochConverter.jdRef) - - @staticmethod - def epoch2float(value, unit): - """ - Convert an Epoch value to a float suitable for plotting as a python - datetime object. - - = INPUT VARIABLES - - value An Epoch or list of Epochs that need to be converted. - - unit The units to use for an axis with Epoch data. - - = RETURN VALUE - - Returns the value parameter converted to floats. - """ - return value.julianDate(unit) - EpochConverter.jdRef - - @staticmethod - def duration2float(value): - """ - Convert a Duration value to a float suitable for plotting as a python - datetime object. - - = INPUT VARIABLES - - value A Duration or list of Durations that need to be converted. - - = RETURN VALUE - - Returns the value parameter converted to floats. - """ - return value.seconds() / 86400.0 - - @staticmethod - def convert(value, unit, axis): - # docstring inherited - - # Delay-load due to circular dependencies. 
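# Aside: the two float conversions defined above, restated as a standalone
# sketch using the module's constants. Matplotlib dates count days, so
# Durations divide their seconds by 86400, and Epochs subtract the
# "Jan 0, 0001" julian-date reference.
jdRef = 1721425.5 - 1
seconds_per_day = 86400.0

def duration2float(duration_seconds):
    return duration_seconds / seconds_per_day

def epoch2float(julian_date):
    return julian_date - jdRef

print(duration2float(43200.0))  # half a day -> 0.5
print(epoch2float(1721426.5))   # two days past the reference -> 2.0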
- import matplotlib.testing.jpl_units as U - - if not cbook.is_scalar_or_string(value): - return [EpochConverter.convert(x, unit, axis) for x in value] - if unit is None: - unit = EpochConverter.default_units(value, axis) - if isinstance(value, U.Duration): - return EpochConverter.duration2float(value) - else: - return EpochConverter.epoch2float(value, unit) - - @staticmethod - def default_units(value, axis): - # docstring inherited - if cbook.is_scalar_or_string(value): - return value.frame() - else: - return EpochConverter.default_units(value[0], axis) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multidict/_multidict_base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multidict/_multidict_base.py deleted file mode 100644 index 394466548cb2693f0972f1a581079a07f87bf3e3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multidict/_multidict_base.py +++ /dev/null @@ -1,144 +0,0 @@ -from collections.abc import ItemsView, Iterable, KeysView, Set, ValuesView - - -def _abc_itemsview_register(view_cls): - ItemsView.register(view_cls) - - -def _abc_keysview_register(view_cls): - KeysView.register(view_cls) - - -def _abc_valuesview_register(view_cls): - ValuesView.register(view_cls) - - -def _viewbaseset_richcmp(view, other, op): - if op == 0: # < - if not isinstance(other, Set): - return NotImplemented - return len(view) < len(other) and view <= other - elif op == 1: # <= - if not isinstance(other, Set): - return NotImplemented - if len(view) > len(other): - return False - for elem in view: - if elem not in other: - return False - return True - elif op == 2: # == - if not isinstance(other, Set): - return NotImplemented - return len(view) == len(other) and view <= other - elif op == 3: # != - return not view == other - elif op == 4: # > - if not isinstance(other, Set): - return NotImplemented - return len(view) > len(other) and view >= other - elif op == 5: # >= - if not isinstance(other, Set): - return NotImplemented - if len(view) < len(other): - return False - for elem in other: - if elem not in view: - return False - return True - - -def _viewbaseset_and(view, other): - if not isinstance(other, Iterable): - return NotImplemented - if isinstance(view, Set): - view = set(iter(view)) - if isinstance(other, Set): - other = set(iter(other)) - if not isinstance(other, Set): - other = set(iter(other)) - return view & other - - -def _viewbaseset_or(view, other): - if not isinstance(other, Iterable): - return NotImplemented - if isinstance(view, Set): - view = set(iter(view)) - if isinstance(other, Set): - other = set(iter(other)) - if not isinstance(other, Set): - other = set(iter(other)) - return view | other - - -def _viewbaseset_sub(view, other): - if not isinstance(other, Iterable): - return NotImplemented - if isinstance(view, Set): - view = set(iter(view)) - if isinstance(other, Set): - other = set(iter(other)) - if not isinstance(other, Set): - other = set(iter(other)) - return view - other - - -def _viewbaseset_xor(view, other): - if not isinstance(other, Iterable): - return NotImplemented - if isinstance(view, Set): - view = set(iter(view)) - if isinstance(other, Set): - other = set(iter(other)) - if not isinstance(other, Set): - other = set(iter(other)) - return view ^ other - - -def _itemsview_isdisjoint(view, other): - "Return True if two sets have a null intersection." 
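# Aside: the helpers above back the set-like behaviour of multidict's views;
# a short usage sketch against the public MultiDict class (standard multidict
# API, to the best of my reading of this module):
from multidict import MultiDict

md = MultiDict([("a", 1), ("a", 2), ("b", 3)])
print(md.getall("a"))               # [1, 2] - duplicate keys are kept
print(md.keys() & {"a", "c"})       # {'a'} - _viewbaseset_and at work
print(md.keys().isdisjoint({"c"}))  # True - _keysview_isdisjoint
print(md)  # <MultiDict('a': 1, 'a': 2, 'b': 3)> - the _mdrepr format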
- for v in other: - if v in view: - return False - return True - - -def _itemsview_repr(view): - lst = [] - for k, v in view: - lst.append("{!r}: {!r}".format(k, v)) - body = ", ".join(lst) - return "{}({})".format(view.__class__.__name__, body) - - -def _keysview_isdisjoint(view, other): - "Return True if two sets have a null intersection." - for k in other: - if k in view: - return False - return True - - -def _keysview_repr(view): - lst = [] - for k in view: - lst.append("{!r}".format(k)) - body = ", ".join(lst) - return "{}({})".format(view.__class__.__name__, body) - - -def _valuesview_repr(view): - lst = [] - for v in view: - lst.append("{!r}".format(v)) - body = ", ".join(lst) - return "{}({})".format(view.__class__.__name__, body) - - -def _mdrepr(md): - lst = [] - for k, v in md.items(): - lst.append("'{}': {!r}".format(k, v)) - body = ", ".join(lst) - return "<{}({})>".format(md.__class__.__name__, body) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/_openai_scripts.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/_openai_scripts.py deleted file mode 100644 index 497de19fab0a783f50fcc88ac4a803d07d897e18..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/_openai_scripts.py +++ /dev/null @@ -1,89 +0,0 @@ -#!/usr/bin/env python -import argparse -import logging -import sys - -import openai -from openai import version -from openai.cli import api_register, display_error, tools_register, wandb_register - -logger = logging.getLogger() -formatter = logging.Formatter("[%(asctime)s] %(message)s") -handler = logging.StreamHandler(sys.stderr) -handler.setFormatter(formatter) -logger.addHandler(handler) - - -def main(): - parser = argparse.ArgumentParser(description=None) - parser.add_argument( - "-V", - "--version", - action="version", - version="%(prog)s " + version.VERSION, - ) - parser.add_argument( - "-v", - "--verbose", - action="count", - dest="verbosity", - default=0, - help="Set verbosity.", - ) - parser.add_argument("-b", "--api-base", help="What API base url to use.") - parser.add_argument("-k", "--api-key", help="What API key to use.") - parser.add_argument("-p", "--proxy", nargs='+', help="What proxy to use.") - parser.add_argument( - "-o", - "--organization", - help="Which organization to run as (will use your default organization if not specified)", - ) - - def help(args): - parser.print_help() - - parser.set_defaults(func=help) - - subparsers = parser.add_subparsers() - sub_api = subparsers.add_parser("api", help="Direct API calls") - sub_tools = subparsers.add_parser("tools", help="Client side tools for convenience") - sub_wandb = subparsers.add_parser("wandb", help="Logging with Weights & Biases, see https://docs.wandb.ai/guides/integrations/openai for documentation") - - api_register(sub_api) - tools_register(sub_tools) - wandb_register(sub_wandb) - - args = parser.parse_args() - if args.verbosity == 1: - logger.setLevel(logging.INFO) - elif args.verbosity >= 2: - logger.setLevel(logging.DEBUG) - - openai.debug = True - if args.api_key is not None: - openai.api_key = args.api_key - if args.api_base is not None: - openai.api_base = args.api_base - if args.organization is not None: - openai.organization = args.organization - if args.proxy is not None: - openai.proxy = {} - for proxy in args.proxy: - if proxy.startswith('https'): - openai.proxy['https'] = proxy - elif proxy.startswith('http'): - openai.proxy['http'] = proxy - - try: - args.func(args) 
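# Aside: the verbosity handling above follows a common argparse pattern; a
# self-contained sketch (argument names mirror the CLI above, the rest is
# illustrative):
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", dest="verbosity",
                    default=0, help="Set verbosity.")
args = parser.parse_args(["-vv"])
logger = logging.getLogger()
if args.verbosity == 1:
    logger.setLevel(logging.INFO)
elif args.verbosity >= 2:
    logger.setLevel(logging.DEBUG)
print(args.verbosity, logger.level)  # 2 10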
- except openai.error.OpenAIError as e: - display_error(e) - return 1 - except KeyboardInterrupt: - sys.stderr.write("\n") - return 1 - return 0 - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/packaging/_elffile.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/packaging/_elffile.py deleted file mode 100644 index 6fb19b30bb53c18f38a9ef02dd7c4478670fb962..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/packaging/_elffile.py +++ /dev/null @@ -1,108 +0,0 @@ -""" -ELF file parser. - -This provides a class ``ELFFile`` that parses an ELF executable in a similar -interface to ``ZipFile``. Only the read interface is implemented. - -Based on: https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca -ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html -""" - -import enum -import os -import struct -from typing import IO, Optional, Tuple - - -class ELFInvalid(ValueError): - pass - - -class EIClass(enum.IntEnum): - C32 = 1 - C64 = 2 - - -class EIData(enum.IntEnum): - Lsb = 1 - Msb = 2 - - -class EMachine(enum.IntEnum): - I386 = 3 - S390 = 22 - Arm = 40 - X8664 = 62 - AArc64 = 183 - - -class ELFFile: - """ - Representation of an ELF executable. - """ - - def __init__(self, f: IO[bytes]) -> None: - self._f = f - - try: - ident = self._read("16B") - except struct.error: - raise ELFInvalid("unable to parse identification") - magic = bytes(ident[:4]) - if magic != b"\x7fELF": - raise ELFInvalid(f"invalid magic: {magic!r}") - - self.capacity = ident[4] # Format for program header (bitness). - self.encoding = ident[5] # Data structure encoding (endianness). - - try: - # e_fmt: Format for program header. - # p_fmt: Format for section header. - # p_idx: Indexes to find p_type, p_offset, and p_filesz. - e_fmt, self._p_fmt, self._p_idx = { - (1, 1): ("<HHIIIIIHHH", "<IIIIIIII", (0, 1, 4)), # 32-bit LSB. - (1, 2): (">HHIIIIIHHH", ">IIIIIIII", (0, 1, 4)), # 32-bit MSB. - (2, 1): ("<HHIQQQIHHH", "<IIQQQQQQ", (0, 2, 5)), # 64-bit LSB. - (2, 2): (">HHIQQQIHHH", ">IIQQQQQQ", (0, 2, 5)), # 64-bit MSB. - }[(self.capacity, self.encoding)] - except KeyError: - raise ELFInvalid( - f"unrecognized capacity ({self.capacity}) or " - f"encoding ({self.encoding})" - ) - - try: - ( - _, - self.machine, # Architecture type. - _, - _, - self._e_phoff, # Offset of program header. - _, - self.flags, # Processor-specific flags. - _, - self._e_phentsize, # Size of section. - self._e_phnum, # Number of sections. - ) = self._read(e_fmt) - except struct.error as e: - raise ELFInvalid("unable to parse machine and section information") from e - - def _read(self, fmt: str) -> Tuple[int, ...]: - return struct.unpack(fmt, self._f.read(struct.calcsize(fmt))) - - @property - def interpreter(self) -> Optional[str]: - """ - The path recorded in the ``PT_INTERP`` section header. - """ - for index in range(self._e_phnum): - self._f.seek(self._e_phoff + self._e_phentsize * index) - try: - data = self._read(self._p_fmt) - except struct.error: - continue - if data[self._p_idx[0]] != 3: # Not PT_INTERP. 
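# Aside: a small sketch of the ident handling above -- the first 16 bytes of
# an ELF file identify it and select the right struct formats. The header
# bytes here are a fabricated example of a 64-bit little-endian file.
import struct

ident = b"\x7fELF" + bytes([2, 1]) + b"\x00" * 10
fields = struct.unpack("16B", ident)
assert bytes(fields[:4]) == b"\x7fELF"    # magic check, as in __init__
capacity, encoding = fields[4], fields[5]  # bitness, endianness
e_fmt = {(1, 1): "<HHIIIIIHHH", (1, 2): ">HHIIIIIHHH",
         (2, 1): "<HHIQQQIHHH", (2, 2): ">HHIQQQIHHH"}[(capacity, encoding)]
print(capacity, encoding, e_fmt)           # 2 1 <HHIQQQIHHH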
- continue - self._f.seek(data[self._p_idx[1]]) - return os.fsdecode(self._f.read(data[self._p_idx[2]])).strip("\0") - return None diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/masked/test_indexing.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/masked/test_indexing.py deleted file mode 100644 index 28ee451a7ddd777bc19b1b7623ec2da64f700ede..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/masked/test_indexing.py +++ /dev/null @@ -1,60 +0,0 @@ -import re - -import numpy as np -import pytest - -import pandas as pd - - -class TestSetitemValidation: - def _check_setitem_invalid(self, arr, invalid): - msg = f"Invalid value '{str(invalid)}' for dtype {arr.dtype}" - msg = re.escape(msg) - with pytest.raises(TypeError, match=msg): - arr[0] = invalid - - with pytest.raises(TypeError, match=msg): - arr[:] = invalid - - with pytest.raises(TypeError, match=msg): - arr[[0]] = invalid - - # FIXME: don't leave commented-out - # with pytest.raises(TypeError): - # arr[[0]] = [invalid] - - # with pytest.raises(TypeError): - # arr[[0]] = np.array([invalid], dtype=object) - - # Series non-coercion, behavior subject to change - ser = pd.Series(arr) - with pytest.raises(TypeError, match=msg): - ser[0] = invalid - # TODO: so, so many other variants of this... - - _invalid_scalars = [ - 1 + 2j, - "True", - "1", - "1.0", - pd.NaT, - np.datetime64("NaT"), - np.timedelta64("NaT"), - ] - - @pytest.mark.parametrize( - "invalid", _invalid_scalars + [1, 1.0, np.int64(1), np.float64(1)] - ) - def test_setitem_validation_scalar_bool(self, invalid): - arr = pd.array([True, False, None], dtype="boolean") - self._check_setitem_invalid(arr, invalid) - - @pytest.mark.parametrize("invalid", _invalid_scalars + [True, 1.5, np.float64(1.5)]) - def test_setitem_validation_scalar_int(self, invalid, any_int_ea_dtype): - arr = pd.array([1, 2, None], dtype=any_int_ea_dtype) - self._check_setitem_invalid(arr, invalid) - - @pytest.mark.parametrize("invalid", _invalid_scalars + [True]) - def test_setitem_validation_scalar_float(self, invalid, float_ea_dtype): - arr = pd.array([1, 2, None], dtype=float_ea_dtype) - self._check_setitem_invalid(arr, invalid) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_combine_first.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_combine_first.py deleted file mode 100644 index 156e50d50a9ef5a240af25d9c795eac8c902a2b0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_combine_first.py +++ /dev/null @@ -1,548 +0,0 @@ -from datetime import datetime - -import numpy as np -import pytest - -from pandas.core.dtypes.cast import find_common_type -from pandas.core.dtypes.common import is_dtype_equal - -import pandas as pd -from pandas import ( - DataFrame, - Index, - MultiIndex, - Series, -) -import pandas._testing as tm - - -class TestDataFrameCombineFirst: - def test_combine_first_mixed(self): - a = Series(["a", "b"], index=range(2)) - b = Series(range(2), index=range(2)) - f = DataFrame({"A": a, "B": b}) - - a = Series(["a", "b"], index=range(5, 7)) - b = Series(range(2), index=range(5, 7)) - g = DataFrame({"A": a, "B": b}) - - exp = DataFrame({"A": list("abab"), "B": [0, 1, 0, 1]}, index=[0, 1, 5, 6]) - combined = f.combine_first(g) - 
tm.assert_frame_equal(combined, exp) - - def test_combine_first(self, float_frame): - # disjoint - head, tail = float_frame[:5], float_frame[5:] - - combined = head.combine_first(tail) - reordered_frame = float_frame.reindex(combined.index) - tm.assert_frame_equal(combined, reordered_frame) - assert tm.equalContents(combined.columns, float_frame.columns) - tm.assert_series_equal(combined["A"], reordered_frame["A"]) - - # same index - fcopy = float_frame.copy() - fcopy["A"] = 1 - del fcopy["C"] - - fcopy2 = float_frame.copy() - fcopy2["B"] = 0 - del fcopy2["D"] - - combined = fcopy.combine_first(fcopy2) - - assert (combined["A"] == 1).all() - tm.assert_series_equal(combined["B"], fcopy["B"]) - tm.assert_series_equal(combined["C"], fcopy2["C"]) - tm.assert_series_equal(combined["D"], fcopy["D"]) - - # overlap - head, tail = reordered_frame[:10].copy(), reordered_frame - head["A"] = 1 - - combined = head.combine_first(tail) - assert (combined["A"][:10] == 1).all() - - # reverse overlap - tail.iloc[:10, tail.columns.get_loc("A")] = 0 - combined = tail.combine_first(head) - assert (combined["A"][:10] == 0).all() - - # no overlap - f = float_frame[:10] - g = float_frame[10:] - combined = f.combine_first(g) - tm.assert_series_equal(combined["A"].reindex(f.index), f["A"]) - tm.assert_series_equal(combined["A"].reindex(g.index), g["A"]) - - # corner cases - comb = float_frame.combine_first(DataFrame()) - tm.assert_frame_equal(comb, float_frame) - - comb = DataFrame().combine_first(float_frame) - tm.assert_frame_equal(comb, float_frame) - - comb = float_frame.combine_first(DataFrame(index=["faz", "boo"])) - assert "faz" in comb.index - - # #2525 - df = DataFrame({"a": [1]}, index=[datetime(2012, 1, 1)]) - df2 = DataFrame(columns=["b"]) - result = df.combine_first(df2) - assert "b" in result - - def test_combine_first_mixed_bug(self): - idx = Index(["a", "b", "c", "e"]) - ser1 = Series([5.0, -9.0, 4.0, 100.0], index=idx) - ser2 = Series(["a", "b", "c", "e"], index=idx) - ser3 = Series([12, 4, 5, 97], index=idx) - - frame1 = DataFrame({"col0": ser1, "col2": ser2, "col3": ser3}) - - idx = Index(["a", "b", "c", "f"]) - ser1 = Series([5.0, -9.0, 4.0, 100.0], index=idx) - ser2 = Series(["a", "b", "c", "f"], index=idx) - ser3 = Series([12, 4, 5, 97], index=idx) - - frame2 = DataFrame({"col1": ser1, "col2": ser2, "col5": ser3}) - - combined = frame1.combine_first(frame2) - assert len(combined.columns) == 5 - - def test_combine_first_same_as_in_update(self): - # gh 3016 (same as in update) - df = DataFrame( - [[1.0, 2.0, False, True], [4.0, 5.0, True, False]], - columns=["A", "B", "bool1", "bool2"], - ) - - other = DataFrame([[45, 45]], index=[0], columns=["A", "B"]) - result = df.combine_first(other) - tm.assert_frame_equal(result, df) - - df.loc[0, "A"] = np.nan - result = df.combine_first(other) - df.loc[0, "A"] = 45 - tm.assert_frame_equal(result, df) - - def test_combine_first_doc_example(self): - # doc example - df1 = DataFrame( - {"A": [1.0, np.nan, 3.0, 5.0, np.nan], "B": [np.nan, 2.0, 3.0, np.nan, 6.0]} - ) - - df2 = DataFrame( - { - "A": [5.0, 2.0, 4.0, np.nan, 3.0, 7.0], - "B": [np.nan, np.nan, 3.0, 4.0, 6.0, 8.0], - } - ) - - result = df1.combine_first(df2) - expected = DataFrame({"A": [1, 2, 3, 5, 3, 7.0], "B": [np.nan, 2, 3, 4, 6, 8]}) - tm.assert_frame_equal(result, expected) - - def test_combine_first_return_obj_type_with_bools(self): - # GH3552 - - df1 = DataFrame( - [[np.nan, 3.0, True], [-4.6, np.nan, True], [np.nan, 7.0, False]] - ) - df2 = DataFrame([[-42.6, np.nan, True], [-5.0, 1.6, 
False]], index=[1, 2]) - - expected = Series([True, True, False], name=2, dtype=bool) - - result_12 = df1.combine_first(df2)[2] - tm.assert_series_equal(result_12, expected) - - result_21 = df2.combine_first(df1)[2] - tm.assert_series_equal(result_21, expected) - - @pytest.mark.parametrize( - "data1, data2, data_expected", - ( - ( - [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)], - [pd.NaT, pd.NaT, pd.NaT], - [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)], - ), - ( - [pd.NaT, pd.NaT, pd.NaT], - [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)], - [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)], - ), - ( - [datetime(2000, 1, 2), pd.NaT, pd.NaT], - [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)], - [datetime(2000, 1, 2), datetime(2000, 1, 2), datetime(2000, 1, 3)], - ), - ( - [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)], - [datetime(2000, 1, 2), pd.NaT, pd.NaT], - [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)], - ), - ), - ) - def test_combine_first_convert_datatime_correctly( - self, data1, data2, data_expected - ): - # GH 3593 - - df1, df2 = DataFrame({"a": data1}), DataFrame({"a": data2}) - result = df1.combine_first(df2) - expected = DataFrame({"a": data_expected}) - tm.assert_frame_equal(result, expected) - - def test_combine_first_align_nan(self): - # GH 7509 (not fixed) - dfa = DataFrame([[pd.Timestamp("2011-01-01"), 2]], columns=["a", "b"]) - dfb = DataFrame([[4], [5]], columns=["b"]) - assert dfa["a"].dtype == "datetime64[ns]" - assert dfa["b"].dtype == "int64" - - res = dfa.combine_first(dfb) - exp = DataFrame( - {"a": [pd.Timestamp("2011-01-01"), pd.NaT], "b": [2, 5]}, - columns=["a", "b"], - ) - tm.assert_frame_equal(res, exp) - assert res["a"].dtype == "datetime64[ns]" - # TODO: this must be int64 - assert res["b"].dtype == "int64" - - res = dfa.iloc[:0].combine_first(dfb) - exp = DataFrame({"a": [np.nan, np.nan], "b": [4, 5]}, columns=["a", "b"]) - tm.assert_frame_equal(res, exp) - # TODO: this must be datetime64 - assert res["a"].dtype == "float64" - # TODO: this must be int64 - assert res["b"].dtype == "int64" - - def test_combine_first_timezone(self): - # see gh-7630 - data1 = pd.to_datetime("20100101 01:01").tz_localize("UTC") - df1 = DataFrame( - columns=["UTCdatetime", "abc"], - data=data1, - index=pd.date_range("20140627", periods=1), - ) - data2 = pd.to_datetime("20121212 12:12").tz_localize("UTC") - df2 = DataFrame( - columns=["UTCdatetime", "xyz"], - data=data2, - index=pd.date_range("20140628", periods=1), - ) - res = df2[["UTCdatetime"]].combine_first(df1) - exp = DataFrame( - { - "UTCdatetime": [ - pd.Timestamp("2010-01-01 01:01", tz="UTC"), - pd.Timestamp("2012-12-12 12:12", tz="UTC"), - ], - "abc": [pd.Timestamp("2010-01-01 01:01:00", tz="UTC"), pd.NaT], - }, - columns=["UTCdatetime", "abc"], - index=pd.date_range("20140627", periods=2, freq="D"), - ) - assert res["UTCdatetime"].dtype == "datetime64[ns, UTC]" - assert res["abc"].dtype == "datetime64[ns, UTC]" - - tm.assert_frame_equal(res, exp) - - # see gh-10567 - dts1 = pd.date_range("2015-01-01", "2015-01-05", tz="UTC") - df1 = DataFrame({"DATE": dts1}) - dts2 = pd.date_range("2015-01-03", "2015-01-05", tz="UTC") - df2 = DataFrame({"DATE": dts2}) - - res = df1.combine_first(df2) - tm.assert_frame_equal(res, df1) - assert res["DATE"].dtype == "datetime64[ns, UTC]" - - dts1 = pd.DatetimeIndex( - ["2011-01-01", "NaT", "2011-01-03", "2011-01-04"], tz="US/Eastern" - ) - df1 = 
DataFrame({"DATE": dts1}, index=[1, 3, 5, 7]) - dts2 = pd.DatetimeIndex( - ["2012-01-01", "2012-01-02", "2012-01-03"], tz="US/Eastern" - ) - df2 = DataFrame({"DATE": dts2}, index=[2, 4, 5]) - - res = df1.combine_first(df2) - exp_dts = pd.DatetimeIndex( - [ - "2011-01-01", - "2012-01-01", - "NaT", - "2012-01-02", - "2011-01-03", - "2011-01-04", - ], - tz="US/Eastern", - ) - exp = DataFrame({"DATE": exp_dts}, index=[1, 2, 3, 4, 5, 7]) - tm.assert_frame_equal(res, exp) - - # different tz - dts1 = pd.date_range("2015-01-01", "2015-01-05", tz="US/Eastern") - df1 = DataFrame({"DATE": dts1}) - dts2 = pd.date_range("2015-01-03", "2015-01-05") - df2 = DataFrame({"DATE": dts2}) - - # if df1 doesn't have NaN, keep its dtype - res = df1.combine_first(df2) - tm.assert_frame_equal(res, df1) - assert res["DATE"].dtype == "datetime64[ns, US/Eastern]" - - dts1 = pd.date_range("2015-01-01", "2015-01-02", tz="US/Eastern") - df1 = DataFrame({"DATE": dts1}) - dts2 = pd.date_range("2015-01-01", "2015-01-03") - df2 = DataFrame({"DATE": dts2}) - - res = df1.combine_first(df2) - exp_dts = [ - pd.Timestamp("2015-01-01", tz="US/Eastern"), - pd.Timestamp("2015-01-02", tz="US/Eastern"), - pd.Timestamp("2015-01-03"), - ] - exp = DataFrame({"DATE": exp_dts}) - tm.assert_frame_equal(res, exp) - assert res["DATE"].dtype == "object" - - def test_combine_first_timedelta(self): - data1 = pd.TimedeltaIndex(["1 day", "NaT", "3 day", "4day"]) - df1 = DataFrame({"TD": data1}, index=[1, 3, 5, 7]) - data2 = pd.TimedeltaIndex(["10 day", "11 day", "12 day"]) - df2 = DataFrame({"TD": data2}, index=[2, 4, 5]) - - res = df1.combine_first(df2) - exp_dts = pd.TimedeltaIndex( - ["1 day", "10 day", "NaT", "11 day", "3 day", "4 day"] - ) - exp = DataFrame({"TD": exp_dts}, index=[1, 2, 3, 4, 5, 7]) - tm.assert_frame_equal(res, exp) - assert res["TD"].dtype == "timedelta64[ns]" - - def test_combine_first_period(self): - data1 = pd.PeriodIndex(["2011-01", "NaT", "2011-03", "2011-04"], freq="M") - df1 = DataFrame({"P": data1}, index=[1, 3, 5, 7]) - data2 = pd.PeriodIndex(["2012-01-01", "2012-02", "2012-03"], freq="M") - df2 = DataFrame({"P": data2}, index=[2, 4, 5]) - - res = df1.combine_first(df2) - exp_dts = pd.PeriodIndex( - ["2011-01", "2012-01", "NaT", "2012-02", "2011-03", "2011-04"], freq="M" - ) - exp = DataFrame({"P": exp_dts}, index=[1, 2, 3, 4, 5, 7]) - tm.assert_frame_equal(res, exp) - assert res["P"].dtype == data1.dtype - - # different freq - dts2 = pd.PeriodIndex(["2012-01-01", "2012-01-02", "2012-01-03"], freq="D") - df2 = DataFrame({"P": dts2}, index=[2, 4, 5]) - - res = df1.combine_first(df2) - exp_dts = [ - pd.Period("2011-01", freq="M"), - pd.Period("2012-01-01", freq="D"), - pd.NaT, - pd.Period("2012-01-02", freq="D"), - pd.Period("2011-03", freq="M"), - pd.Period("2011-04", freq="M"), - ] - exp = DataFrame({"P": exp_dts}, index=[1, 2, 3, 4, 5, 7]) - tm.assert_frame_equal(res, exp) - assert res["P"].dtype == "object" - - def test_combine_first_int(self): - # GH14687 - integer series that do no align exactly - - df1 = DataFrame({"a": [0, 1, 3, 5]}, dtype="int64") - df2 = DataFrame({"a": [1, 4]}, dtype="int64") - - result_12 = df1.combine_first(df2) - expected_12 = DataFrame({"a": [0, 1, 3, 5]}) - tm.assert_frame_equal(result_12, expected_12) - - result_21 = df2.combine_first(df1) - expected_21 = DataFrame({"a": [1, 4, 3, 5]}) - tm.assert_frame_equal(result_21, expected_21) - - @pytest.mark.parametrize("val", [1, 1.0]) - def test_combine_first_with_asymmetric_other(self, val): - # see gh-20699 - df1 = DataFrame({"isNum": 
[val]}) - df2 = DataFrame({"isBool": [True]}) - - res = df1.combine_first(df2) - exp = DataFrame({"isBool": [True], "isNum": [val]}) - - tm.assert_frame_equal(res, exp) - - def test_combine_first_string_dtype_only_na(self, nullable_string_dtype): - # GH: 37519 - df = DataFrame( - {"a": ["962", "85"], "b": [pd.NA] * 2}, dtype=nullable_string_dtype - ) - df2 = DataFrame({"a": ["85"], "b": [pd.NA]}, dtype=nullable_string_dtype) - df.set_index(["a", "b"], inplace=True) - df2.set_index(["a", "b"], inplace=True) - result = df.combine_first(df2) - expected = DataFrame( - {"a": ["962", "85"], "b": [pd.NA] * 2}, dtype=nullable_string_dtype - ).set_index(["a", "b"]) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "scalar1, scalar2", - [ - (datetime(2020, 1, 1), datetime(2020, 1, 2)), - (pd.Period("2020-01-01", "D"), pd.Period("2020-01-02", "D")), - (pd.Timedelta("89 days"), pd.Timedelta("60 min")), - (pd.Interval(left=0, right=1), pd.Interval(left=2, right=3, closed="left")), - ], -) -def test_combine_first_timestamp_bug(scalar1, scalar2, nulls_fixture): - # GH28481 - na_value = nulls_fixture - - frame = DataFrame([[na_value, na_value]], columns=["a", "b"]) - other = DataFrame([[scalar1, scalar2]], columns=["b", "c"]) - - common_dtype = find_common_type([frame.dtypes["b"], other.dtypes["b"]]) - - if is_dtype_equal(common_dtype, "object") or frame.dtypes["b"] == other.dtypes["b"]: - val = scalar1 - else: - val = na_value - - result = frame.combine_first(other) - - expected = DataFrame([[na_value, val, scalar2]], columns=["a", "b", "c"]) - - expected["b"] = expected["b"].astype(common_dtype) - - tm.assert_frame_equal(result, expected) - - -def test_combine_first_timestamp_bug_NaT(): - # GH28481 - frame = DataFrame([[pd.NaT, pd.NaT]], columns=["a", "b"]) - other = DataFrame( - [[datetime(2020, 1, 1), datetime(2020, 1, 2)]], columns=["b", "c"] - ) - - result = frame.combine_first(other) - expected = DataFrame( - [[pd.NaT, datetime(2020, 1, 1), datetime(2020, 1, 2)]], columns=["a", "b", "c"] - ) - - tm.assert_frame_equal(result, expected) - - -def test_combine_first_with_nan_multiindex(): - # gh-36562 - - mi1 = MultiIndex.from_arrays( - [["b", "b", "c", "a", "b", np.nan], [1, 2, 3, 4, 5, 6]], names=["a", "b"] - ) - df = DataFrame({"c": [1, 1, 1, 1, 1, 1]}, index=mi1) - mi2 = MultiIndex.from_arrays( - [["a", "b", "c", "a", "b", "d"], [1, 1, 1, 1, 1, 1]], names=["a", "b"] - ) - s = Series([1, 2, 3, 4, 5, 6], index=mi2) - res = df.combine_first(DataFrame({"d": s})) - mi_expected = MultiIndex.from_arrays( - [ - ["a", "a", "a", "b", "b", "b", "b", "c", "c", "d", np.nan], - [1, 1, 4, 1, 1, 2, 5, 1, 3, 1, 6], - ], - names=["a", "b"], - ) - expected = DataFrame( - { - "c": [np.nan, np.nan, 1, 1, 1, 1, 1, np.nan, 1, np.nan, 1], - "d": [1.0, 4.0, np.nan, 2.0, 5.0, np.nan, np.nan, 3.0, np.nan, 6.0, np.nan], - }, - index=mi_expected, - ) - tm.assert_frame_equal(res, expected) - - -def test_combine_preserve_dtypes(): - # GH7509 - a_column = Series(["a", "b"], index=range(2)) - b_column = Series(range(2), index=range(2)) - df1 = DataFrame({"A": a_column, "B": b_column}) - - c_column = Series(["a", "b"], index=range(5, 7)) - b_column = Series(range(-1, 1), index=range(5, 7)) - df2 = DataFrame({"B": b_column, "C": c_column}) - - expected = DataFrame( - { - "A": ["a", "b", np.nan, np.nan], - "B": [0, 1, -1, 0], - "C": [np.nan, np.nan, "a", "b"], - }, - index=[0, 1, 5, 6], - ) - combined = df1.combine_first(df2) - tm.assert_frame_equal(combined, expected) - - -def 
test_combine_first_duplicates_rows_for_nan_index_values(): - # GH39881 - df1 = DataFrame( - {"x": [9, 10, 11]}, - index=MultiIndex.from_arrays([[1, 2, 3], [np.nan, 5, 6]], names=["a", "b"]), - ) - - df2 = DataFrame( - {"y": [12, 13, 14]}, - index=MultiIndex.from_arrays([[1, 2, 4], [np.nan, 5, 7]], names=["a", "b"]), - ) - - expected = DataFrame( - { - "x": [9.0, 10.0, 11.0, np.nan], - "y": [12.0, 13.0, np.nan, 14.0], - }, - index=MultiIndex.from_arrays( - [[1, 2, 3, 4], [np.nan, 5, 6, 7]], names=["a", "b"] - ), - ) - combined = df1.combine_first(df2) - tm.assert_frame_equal(combined, expected) - - -def test_combine_first_int64_not_cast_to_float64(): - # GH 28613 - df_1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) - df_2 = DataFrame({"A": [1, 20, 30], "B": [40, 50, 60], "C": [12, 34, 65]}) - result = df_1.combine_first(df_2) - expected = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [12, 34, 65]}) - tm.assert_frame_equal(result, expected) - - -def test_midx_losing_dtype(): - # GH#49830 - midx = MultiIndex.from_arrays([[0, 0], [np.nan, np.nan]]) - midx2 = MultiIndex.from_arrays([[1, 1], [np.nan, np.nan]]) - df1 = DataFrame({"a": [None, 4]}, index=midx) - df2 = DataFrame({"a": [3, 3]}, index=midx2) - result = df1.combine_first(df2) - expected_midx = MultiIndex.from_arrays( - [[0, 0, 1, 1], [np.nan, np.nan, np.nan, np.nan]] - ) - expected = DataFrame({"a": [np.nan, 4, 3, 3]}, index=expected_midx) - tm.assert_frame_equal(result, expected) - - -def test_combine_first_empty_columns(): - left = DataFrame(columns=["a", "b"]) - right = DataFrame(columns=["a", "c"]) - result = left.combine_first(right) - expected = DataFrame(columns=["a", "b", "c"]) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_cumulative.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_cumulative.py deleted file mode 100644 index 5bd9c426123159fcfcf6bf5289fd08a60dfd91b2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_cumulative.py +++ /dev/null @@ -1,81 +0,0 @@ -""" -Tests for DataFrame cumulative operations - -See also --------- -tests.series.test_cumulative -""" - -import numpy as np -import pytest - -from pandas import ( - DataFrame, - Series, -) -import pandas._testing as tm - - -class TestDataFrameCumulativeOps: - # --------------------------------------------------------------------- - # Cumulative Operations - cumsum, cummax, ... - - def test_cumulative_ops_smoke(self): - # it works - df = DataFrame({"A": np.arange(20)}, index=np.arange(20)) - df.cummax() - df.cummin() - df.cumsum() - - dm = DataFrame(np.arange(20).reshape(4, 5), index=range(4), columns=range(5)) - # TODO(wesm): do something with this? 
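# Aside: the behaviour exercised throughout the combine_first tests above, in
# one minimal example -- NaN holes in the caller are patched from the other
# frame, and the union of indexes and columns is taken:
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"A": [1.0, np.nan], "B": [np.nan, 4.0]})
df2 = pd.DataFrame({"A": [9.0, 9.0], "C": [7.0, 8.0]})
print(df1.combine_first(df2))
#      A    B    C
# 0  1.0  NaN  7.0
# 1  9.0  4.0  8.0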
- dm.cumsum() - - def test_cumprod_smoke(self, datetime_frame): - datetime_frame.iloc[5:10, 0] = np.nan - datetime_frame.iloc[10:15, 1] = np.nan - datetime_frame.iloc[15:, 2] = np.nan - - # ints - df = datetime_frame.fillna(0).astype(int) - df.cumprod(0) - df.cumprod(1) - - # ints32 - df = datetime_frame.fillna(0).astype(np.int32) - df.cumprod(0) - df.cumprod(1) - - @pytest.mark.parametrize("method", ["cumsum", "cumprod", "cummin", "cummax"]) - def test_cumulative_ops_match_series_apply(self, datetime_frame, method): - datetime_frame.iloc[5:10, 0] = np.nan - datetime_frame.iloc[10:15, 1] = np.nan - datetime_frame.iloc[15:, 2] = np.nan - - # axis = 0 - result = getattr(datetime_frame, method)() - expected = datetime_frame.apply(getattr(Series, method)) - tm.assert_frame_equal(result, expected) - - # axis = 1 - result = getattr(datetime_frame, method)(axis=1) - expected = datetime_frame.apply(getattr(Series, method), axis=1) - tm.assert_frame_equal(result, expected) - - # fix issue TODO: GH ref? - assert np.shape(result) == np.shape(datetime_frame) - - def test_cumsum_preserve_dtypes(self): - # GH#19296 dont incorrectly upcast to object - df = DataFrame({"A": [1, 2, 3], "B": [1, 2, 3.0], "C": [True, False, False]}) - - result = df.cumsum() - - expected = DataFrame( - { - "A": Series([1, 3, 6], dtype=np.int64), - "B": Series([1, 3, 6], dtype=np.float64), - "C": df["C"].cumsum(), - } - ) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_parse_dates.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_parse_dates.py deleted file mode 100644 index 9f7840588f89e7fd3856e8eaed52bb267d7a2170..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_parse_dates.py +++ /dev/null @@ -1,2259 +0,0 @@ -""" -Tests date parsing functionality for all of the -parsers defined in parsers.py -""" - -from datetime import ( - date, - datetime, - timedelta, - timezone, -) -from io import StringIO - -from dateutil.parser import parse as du_parse -from hypothesis import given -import numpy as np -import pytest -import pytz - -from pandas._libs.tslibs import parsing -from pandas._libs.tslibs.parsing import py_parse_datetime_string - -import pandas as pd -from pandas import ( - DataFrame, - DatetimeIndex, - Index, - MultiIndex, - Series, - Timestamp, -) -import pandas._testing as tm -from pandas._testing._hypothesis import DATETIME_NO_TZ -from pandas.core.indexes.datetimes import date_range - -from pandas.io.parsers import read_csv - -xfail_pyarrow = pytest.mark.usefixtures("pyarrow_xfail") - -# GH#43650: Some expected failures with the pyarrow engine can occasionally -# cause a deadlock instead, so we skip these instead of xfailing -skip_pyarrow = pytest.mark.usefixtures("pyarrow_skip") - - -@xfail_pyarrow -def test_read_csv_with_custom_date_parser(all_parsers): - # GH36111 - def __custom_date_parser(time): - time = time.astype(np.float64) - time = time.astype(int) # convert float seconds to int type - return pd.to_timedelta(time, unit="s") - - testdata = StringIO( - """time e n h - 41047.00 -98573.7297 871458.0640 389.0089 - 41048.00 -98573.7299 871458.0640 389.0089 - 41049.00 -98573.7300 871458.0642 389.0088 - 41050.00 -98573.7299 871458.0643 389.0088 - 41051.00 -98573.7302 871458.0640 389.0086 - """ - ) - result = all_parsers.read_csv_check_warnings( - FutureWarning, - "Please use 'date_format' 
instead", - testdata, - delim_whitespace=True, - parse_dates=True, - date_parser=__custom_date_parser, - index_col="time", - ) - time = [41047, 41048, 41049, 41050, 41051] - time = pd.TimedeltaIndex([pd.to_timedelta(i, unit="s") for i in time], name="time") - expected = DataFrame( - { - "e": [-98573.7297, -98573.7299, -98573.7300, -98573.7299, -98573.7302], - "n": [871458.0640, 871458.0640, 871458.0642, 871458.0643, 871458.0640], - "h": [389.0089, 389.0089, 389.0088, 389.0088, 389.0086], - }, - index=time, - ) - - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_read_csv_with_custom_date_parser_parse_dates_false(all_parsers): - # GH44366 - def __custom_date_parser(time): - time = time.astype(np.float64) - time = time.astype(int) # convert float seconds to int type - return pd.to_timedelta(time, unit="s") - - testdata = StringIO( - """time e - 41047.00 -93.77 - 41048.00 -95.79 - 41049.00 -98.73 - 41050.00 -93.99 - 41051.00 -97.72 - """ - ) - result = all_parsers.read_csv_check_warnings( - FutureWarning, - "Please use 'date_format' instead", - testdata, - delim_whitespace=True, - parse_dates=False, - date_parser=__custom_date_parser, - index_col="time", - ) - time = Series([41047.00, 41048.00, 41049.00, 41050.00, 41051.00], name="time") - expected = DataFrame( - {"e": [-93.77, -95.79, -98.73, -93.99, -97.72]}, - index=time, - ) - - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_separator_date_conflict(all_parsers): - # Regression test for gh-4678 - # - # Make sure thousands separator and - # date parsing do not conflict. - parser = all_parsers - data = "06-02-2013;13:00;1-000.215" - expected = DataFrame( - [[datetime(2013, 6, 2, 13, 0, 0), 1000.215]], columns=["Date", 2] - ) - - df = parser.read_csv( - StringIO(data), - sep=";", - thousands="-", - parse_dates={"Date": [0, 1]}, - header=None, - ) - tm.assert_frame_equal(df, expected) - - -@pytest.mark.parametrize("keep_date_col", [True, False]) -def test_multiple_date_col_custom(all_parsers, keep_date_col, request): - data = """\ -KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000 -KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000 -KORD,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000 -KORD,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000 -KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000 -KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000 -""" - parser = all_parsers - - if keep_date_col and parser.engine == "pyarrow": - # For this to pass, we need to disable auto-inference on the date columns - # in parse_dates. We have no way of doing this though - mark = pytest.mark.xfail( - reason="pyarrow doesn't support disabling auto-inference on column numbers." - ) - request.node.add_marker(mark) - - def date_parser(*date_cols): - """ - Test date parser. - - Parameters - ---------- - date_cols : args - The list of data columns to parse. 
- - Returns - ------- - parsed : Series - """ - return parsing.try_parse_dates( - parsing.concat_date_cols(date_cols), parser=du_parse - ) - - kwds = { - "header": None, - "date_parser": date_parser, - "parse_dates": {"actual": [1, 2], "nominal": [1, 3]}, - "keep_date_col": keep_date_col, - "names": ["X0", "X1", "X2", "X3", "X4", "X5", "X6", "X7", "X8"], - } - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - **kwds, - ) - - expected = DataFrame( - [ - [ - datetime(1999, 1, 27, 19, 0), - datetime(1999, 1, 27, 18, 56), - "KORD", - "19990127", - " 19:00:00", - " 18:56:00", - 0.81, - 2.81, - 7.2, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 20, 0), - datetime(1999, 1, 27, 19, 56), - "KORD", - "19990127", - " 20:00:00", - " 19:56:00", - 0.01, - 2.21, - 7.2, - 0.0, - 260.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 20, 56), - "KORD", - "19990127", - " 21:00:00", - " 20:56:00", - -0.59, - 2.21, - 5.7, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 21, 18), - "KORD", - "19990127", - " 21:00:00", - " 21:18:00", - -0.99, - 2.01, - 3.6, - 0.0, - 270.0, - ], - [ - datetime(1999, 1, 27, 22, 0), - datetime(1999, 1, 27, 21, 56), - "KORD", - "19990127", - " 22:00:00", - " 21:56:00", - -0.59, - 1.71, - 5.1, - 0.0, - 290.0, - ], - [ - datetime(1999, 1, 27, 23, 0), - datetime(1999, 1, 27, 22, 56), - "KORD", - "19990127", - " 23:00:00", - " 22:56:00", - -0.59, - 1.71, - 4.6, - 0.0, - 280.0, - ], - ], - columns=[ - "actual", - "nominal", - "X0", - "X1", - "X2", - "X3", - "X4", - "X5", - "X6", - "X7", - "X8", - ], - ) - - if not keep_date_col: - expected = expected.drop(["X1", "X2", "X3"], axis=1) - - # Python can sometimes be flaky about how - # the aggregated columns are entered, so - # this standardizes the order. - result = result[expected.columns] - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("container", [list, tuple, Index, Series]) -@pytest.mark.parametrize("dim", [1, 2]) -def test_concat_date_col_fail(container, dim): - msg = "not all elements from date_cols are numpy arrays" - value = "19990127" - - date_cols = tuple(container([value]) for _ in range(dim)) - - with pytest.raises(ValueError, match=msg): - parsing.concat_date_cols(date_cols) - - -@pytest.mark.parametrize("keep_date_col", [True, False]) -def test_multiple_date_col(all_parsers, keep_date_col, request): - data = """\ -KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000 -KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000 -KORD,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000 -KORD,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000 -KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000 -KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000 -""" - parser = all_parsers - - if keep_date_col and parser.engine == "pyarrow": - # For this to pass, we need to disable auto-inference on the date columns - # in parse_dates. We have no way of doing this though - mark = pytest.mark.xfail( - reason="pyarrow doesn't support disabling auto-inference on column numbers." 
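# Editor's note (hedged, not in the original test): attaching the xfail marker
# through request.node at runtime fails-as-expected only this parametrization
# (pyarrow engine with keep_date_col=True) rather than the whole test.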
- ) - request.node.add_marker(mark) - - kwds = { - "header": None, - "parse_dates": [[1, 2], [1, 3]], - "keep_date_col": keep_date_col, - "names": ["X0", "X1", "X2", "X3", "X4", "X5", "X6", "X7", "X8"], - } - result = parser.read_csv(StringIO(data), **kwds) - - expected = DataFrame( - [ - [ - datetime(1999, 1, 27, 19, 0), - datetime(1999, 1, 27, 18, 56), - "KORD", - "19990127", - " 19:00:00", - " 18:56:00", - 0.81, - 2.81, - 7.2, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 20, 0), - datetime(1999, 1, 27, 19, 56), - "KORD", - "19990127", - " 20:00:00", - " 19:56:00", - 0.01, - 2.21, - 7.2, - 0.0, - 260.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 20, 56), - "KORD", - "19990127", - " 21:00:00", - " 20:56:00", - -0.59, - 2.21, - 5.7, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 21, 18), - "KORD", - "19990127", - " 21:00:00", - " 21:18:00", - -0.99, - 2.01, - 3.6, - 0.0, - 270.0, - ], - [ - datetime(1999, 1, 27, 22, 0), - datetime(1999, 1, 27, 21, 56), - "KORD", - "19990127", - " 22:00:00", - " 21:56:00", - -0.59, - 1.71, - 5.1, - 0.0, - 290.0, - ], - [ - datetime(1999, 1, 27, 23, 0), - datetime(1999, 1, 27, 22, 56), - "KORD", - "19990127", - " 23:00:00", - " 22:56:00", - -0.59, - 1.71, - 4.6, - 0.0, - 280.0, - ], - ], - columns=[ - "X1_X2", - "X1_X3", - "X0", - "X1", - "X2", - "X3", - "X4", - "X5", - "X6", - "X7", - "X8", - ], - ) - - if not keep_date_col: - expected = expected.drop(["X1", "X2", "X3"], axis=1) - - tm.assert_frame_equal(result, expected) - - -def test_date_col_as_index_col(all_parsers): - data = """\ -KORD,19990127 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000 -KORD,19990127 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000 -KORD,19990127 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000 -KORD,19990127 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000 -KORD,19990127 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000 -""" - parser = all_parsers - kwds = { - "header": None, - "parse_dates": [1], - "index_col": 1, - "names": ["X0", "X1", "X2", "X3", "X4", "X5", "X6", "X7"], - } - result = parser.read_csv(StringIO(data), **kwds) - - index = Index( - [ - datetime(1999, 1, 27, 19, 0), - datetime(1999, 1, 27, 20, 0), - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 22, 0), - ], - name="X1", - ) - expected = DataFrame( - [ - ["KORD", " 18:56:00", 0.81, 2.81, 7.2, 0.0, 280.0], - ["KORD", " 19:56:00", 0.01, 2.21, 7.2, 0.0, 260.0], - ["KORD", " 20:56:00", -0.59, 2.21, 5.7, 0.0, 280.0], - ["KORD", " 21:18:00", -0.99, 2.01, 3.6, 0.0, 270.0], - ["KORD", " 21:56:00", -0.59, 1.71, 5.1, 0.0, 290.0], - ], - columns=["X0", "X2", "X3", "X4", "X5", "X6", "X7"], - index=index, - ) - if parser.engine == "pyarrow": - # https://github.com/pandas-dev/pandas/issues/44231 - # pyarrow 6.0 starts to infer time type - expected["X2"] = pd.to_datetime("1970-01-01" + expected["X2"]).dt.time - - tm.assert_frame_equal(result, expected) - - -def test_multiple_date_cols_int_cast(all_parsers): - data = ( - "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n" - "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n" - "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n" - "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n" - "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n" - "KORD,19990127, 23:00:00, 22:56:00, -0.5900" - ) - parse_dates = {"actual": [1, 2], "nominal": [1, 3]} - parser = all_parsers - - kwds = { - "header": None, - "parse_dates": parse_dates, - "date_parser": 
pd.to_datetime, - } - result = parser.read_csv_check_warnings( - FutureWarning, "use 'date_format' instead", StringIO(data), **kwds - ) - - expected = DataFrame( - [ - [datetime(1999, 1, 27, 19, 0), datetime(1999, 1, 27, 18, 56), "KORD", 0.81], - [datetime(1999, 1, 27, 20, 0), datetime(1999, 1, 27, 19, 56), "KORD", 0.01], - [ - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 20, 56), - "KORD", - -0.59, - ], - [ - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 21, 18), - "KORD", - -0.99, - ], - [ - datetime(1999, 1, 27, 22, 0), - datetime(1999, 1, 27, 21, 56), - "KORD", - -0.59, - ], - [ - datetime(1999, 1, 27, 23, 0), - datetime(1999, 1, 27, 22, 56), - "KORD", - -0.59, - ], - ], - columns=["actual", "nominal", 0, 4], - ) - - # Python can sometimes be flaky about how - # the aggregated columns are entered, so - # this standardizes the order. - result = result[expected.columns] - tm.assert_frame_equal(result, expected) - - -def test_multiple_date_col_timestamp_parse(all_parsers): - parser = all_parsers - data = """05/31/2012,15:30:00.029,1306.25,1,E,0,,1306.25 -05/31/2012,15:30:00.029,1306.25,8,E,0,,1306.25""" - - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - parse_dates=[[0, 1]], - header=None, - date_parser=Timestamp, - ) - expected = DataFrame( - [ - [ - Timestamp("05/31/2012, 15:30:00.029"), - 1306.25, - 1, - "E", - 0, - np.nan, - 1306.25, - ], - [ - Timestamp("05/31/2012, 15:30:00.029"), - 1306.25, - 8, - "E", - 0, - np.nan, - 1306.25, - ], - ], - columns=["0_1", 2, 3, 4, 5, 6, 7], - ) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_multiple_date_cols_with_header(all_parsers): - parser = all_parsers - data = """\ -ID,date,NominalTime,ActualTime,TDew,TAir,Windspeed,Precip,WindDir -KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000 -KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000 -KORD,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000 -KORD,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000 -KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000 -KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000""" - - result = parser.read_csv(StringIO(data), parse_dates={"nominal": [1, 2]}) - expected = DataFrame( - [ - [ - datetime(1999, 1, 27, 19, 0), - "KORD", - " 18:56:00", - 0.81, - 2.81, - 7.2, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 20, 0), - "KORD", - " 19:56:00", - 0.01, - 2.21, - 7.2, - 0.0, - 260.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - "KORD", - " 20:56:00", - -0.59, - 2.21, - 5.7, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - "KORD", - " 21:18:00", - -0.99, - 2.01, - 3.6, - 0.0, - 270.0, - ], - [ - datetime(1999, 1, 27, 22, 0), - "KORD", - " 21:56:00", - -0.59, - 1.71, - 5.1, - 0.0, - 290.0, - ], - [ - datetime(1999, 1, 27, 23, 0), - "KORD", - " 22:56:00", - -0.59, - 1.71, - 4.6, - 0.0, - 280.0, - ], - ], - columns=[ - "nominal", - "ID", - "ActualTime", - "TDew", - "TAir", - "Windspeed", - "Precip", - "WindDir", - ], - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "data,parse_dates,msg", - [ - ( - """\ -date_NominalTime,date,NominalTime -KORD1,19990127, 19:00:00 -KORD2,19990127, 20:00:00""", - [[1, 2]], - ("New date column already in dict date_NominalTime"), - ), - ( - """\ -ID,date,nominalTime -KORD,19990127, 19:00:00 -KORD,19990127, 20:00:00""", - {"ID": [1, 2]}, - "Date column ID already in 
dict", - ), - ], -) -def test_multiple_date_col_name_collision(all_parsers, data, parse_dates, msg): - parser = all_parsers - - with pytest.raises(ValueError, match=msg): - parser.read_csv(StringIO(data), parse_dates=parse_dates) - - -def test_date_parser_int_bug(all_parsers): - # see gh-3071 - parser = all_parsers - data = ( - "posix_timestamp,elapsed,sys,user,queries,query_time,rows," - "accountid,userid,contactid,level,silo,method\n" - "1343103150,0.062353,0,4,6,0.01690,3," - "12345,1,-1,3,invoice_InvoiceResource,search\n" - ) - - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - index_col=0, - parse_dates=[0], - # Note: we must pass tz and then drop the tz attribute - # (if we don't CI will flake out depending on the runner's local time) - date_parser=lambda x: datetime.fromtimestamp(int(x), tz=timezone.utc).replace( - tzinfo=None - ), - ) - expected = DataFrame( - [ - [ - 0.062353, - 0, - 4, - 6, - 0.01690, - 3, - 12345, - 1, - -1, - 3, - "invoice_InvoiceResource", - "search", - ] - ], - columns=[ - "elapsed", - "sys", - "user", - "queries", - "query_time", - "rows", - "accountid", - "userid", - "contactid", - "level", - "silo", - "method", - ], - index=Index([Timestamp("2012-07-24 04:12:30")], name="posix_timestamp"), - ) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_nat_parse(all_parsers): - # see gh-3062 - parser = all_parsers - df = DataFrame( - { - "A": np.arange(10, dtype="float64"), - "B": Timestamp("20010101").as_unit("ns"), - } - ) - df.iloc[3:6, :] = np.nan - - with tm.ensure_clean("__nat_parse_.csv") as path: - df.to_csv(path) - - result = parser.read_csv(path, index_col=0, parse_dates=["B"]) - tm.assert_frame_equal(result, df) - - -@xfail_pyarrow -def test_csv_custom_parser(all_parsers): - data = """A,B,C -20090101,a,1,2 -20090102,b,3,4 -20090103,c,4,5 -""" - parser = all_parsers - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - date_parser=lambda x: datetime.strptime(x, "%Y%m%d"), - ) - expected = parser.read_csv(StringIO(data), parse_dates=True) - tm.assert_frame_equal(result, expected) - result = parser.read_csv(StringIO(data), date_format="%Y%m%d") - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_parse_dates_implicit_first_col(all_parsers): - data = """A,B,C -20090101,a,1,2 -20090102,b,3,4 -20090103,c,4,5 -""" - parser = all_parsers - result = parser.read_csv(StringIO(data), parse_dates=True) - - expected = parser.read_csv(StringIO(data), index_col=0, parse_dates=True) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_parse_dates_string(all_parsers): - data = """date,A,B,C -20090101,a,1,2 -20090102,b,3,4 -20090103,c,4,5 -""" - parser = all_parsers - result = parser.read_csv(StringIO(data), index_col="date", parse_dates=["date"]) - # freq doesn't round-trip - index = DatetimeIndex( - list(date_range("1/1/2009", periods=3)), name="date", freq=None - ) - - expected = DataFrame( - {"A": ["a", "b", "c"], "B": [1, 3, 4], "C": [2, 4, 5]}, index=index - ) - tm.assert_frame_equal(result, expected) - - -# Bug in https://github.com/dateutil/dateutil/issues/217 -# has been addressed, but we just don't pass in the `yearfirst` -@pytest.mark.xfail(reason="yearfirst is not surfaced in read_*") -@pytest.mark.parametrize("parse_dates", [[["date", "time"]], [[0, 1]]]) -def test_yy_format_with_year_first(all_parsers, parse_dates): - data = """date,time,B,C -090131,0010,1,2 -090228,1020,3,4 
-090331,0830,5,6 -""" - parser = all_parsers - result = parser.read_csv_check_warnings( - UserWarning, - "Could not infer format", - StringIO(data), - index_col=0, - parse_dates=parse_dates, - ) - index = DatetimeIndex( - [ - datetime(2009, 1, 31, 0, 10, 0), - datetime(2009, 2, 28, 10, 20, 0), - datetime(2009, 3, 31, 8, 30, 0), - ], - dtype=object, - name="date_time", - ) - expected = DataFrame({"B": [1, 3, 5], "C": [2, 4, 6]}, index=index) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -@pytest.mark.parametrize("parse_dates", [[0, 2], ["a", "c"]]) -def test_parse_dates_column_list(all_parsers, parse_dates): - data = "a,b,c\n01/01/2010,1,15/02/2010" - parser = all_parsers - - expected = DataFrame( - {"a": [datetime(2010, 1, 1)], "b": [1], "c": [datetime(2010, 2, 15)]} - ) - expected = expected.set_index(["a", "b"]) - - result = parser.read_csv( - StringIO(data), index_col=[0, 1], parse_dates=parse_dates, dayfirst=True - ) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -@pytest.mark.parametrize("index_col", [[0, 1], [1, 0]]) -def test_multi_index_parse_dates(all_parsers, index_col): - data = """index1,index2,A,B,C -20090101,one,a,1,2 -20090101,two,b,3,4 -20090101,three,c,4,5 -20090102,one,a,1,2 -20090102,two,b,3,4 -20090102,three,c,4,5 -20090103,one,a,1,2 -20090103,two,b,3,4 -20090103,three,c,4,5 -""" - parser = all_parsers - index = MultiIndex.from_product( - [ - (datetime(2009, 1, 1), datetime(2009, 1, 2), datetime(2009, 1, 3)), - ("one", "two", "three"), - ], - names=["index1", "index2"], - ) - - # Out of order. - if index_col == [1, 0]: - index = index.swaplevel(0, 1) - - expected = DataFrame( - [ - ["a", 1, 2], - ["b", 3, 4], - ["c", 4, 5], - ["a", 1, 2], - ["b", 3, 4], - ["c", 4, 5], - ["a", 1, 2], - ["b", 3, 4], - ["c", 4, 5], - ], - columns=["A", "B", "C"], - index=index, - ) - result = parser.read_csv_check_warnings( - UserWarning, - "Could not infer format", - StringIO(data), - index_col=index_col, - parse_dates=True, - ) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -@pytest.mark.parametrize("kwargs", [{"dayfirst": True}, {"day_first": True}]) -def test_parse_dates_custom_euro_format(all_parsers, kwargs): - parser = all_parsers - data = """foo,bar,baz -31/01/2010,1,2 -01/02/2010,1,NA -02/02/2010,1,2 -""" - if "dayfirst" in kwargs: - df = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - names=["time", "Q", "NTU"], - date_parser=lambda d: du_parse(d, **kwargs), - header=0, - index_col=0, - parse_dates=True, - na_values=["NA"], - ) - exp_index = Index( - [datetime(2010, 1, 31), datetime(2010, 2, 1), datetime(2010, 2, 2)], - name="time", - ) - expected = DataFrame( - {"Q": [1, 1, 1], "NTU": [2, np.nan, 2]}, - index=exp_index, - columns=["Q", "NTU"], - ) - tm.assert_frame_equal(df, expected) - else: - msg = "got an unexpected keyword argument 'day_first'" - with pytest.raises(TypeError, match=msg): - parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - names=["time", "Q", "NTU"], - date_parser=lambda d: du_parse(d, **kwargs), - skiprows=[0], - index_col=0, - parse_dates=True, - na_values=["NA"], - ) - - -def test_parse_tz_aware(all_parsers, request): - # See gh-1693 - parser = all_parsers - data = "Date,x\n2012-06-13T01:39:00Z,0.5" - - result = parser.read_csv(StringIO(data), index_col=0, parse_dates=True) - expected = DataFrame( - {"x": [0.5]}, index=Index([Timestamp("2012-06-13 01:39:00+00:00")], name="Date") - ) - 
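# Editor's note (hedged, not in the original test): the trailing "Z" in
# "2012-06-13T01:39:00Z" is a UTC designator, so parse_dates yields a
# tz-aware index; the engines differ only in which UTC object they attach
# (pytz.utc under pyarrow, datetime.timezone.utc otherwise), as asserted below.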
tm.assert_frame_equal(result, expected) - if parser.engine == "pyarrow": - expected_tz = pytz.utc - else: - expected_tz = timezone.utc - assert result.index.tz is expected_tz - - -@xfail_pyarrow -@pytest.mark.parametrize( - "parse_dates,index_col", - [({"nominal": [1, 2]}, "nominal"), ({"nominal": [1, 2]}, 0), ([[1, 2]], 0)], -) -def test_multiple_date_cols_index(all_parsers, parse_dates, index_col): - parser = all_parsers - data = """ -ID,date,NominalTime,ActualTime,TDew,TAir,Windspeed,Precip,WindDir -KORD1,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000 -KORD2,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000 -KORD3,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000 -KORD4,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000 -KORD5,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000 -KORD6,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000 -""" - expected = DataFrame( - [ - [ - datetime(1999, 1, 27, 19, 0), - "KORD1", - " 18:56:00", - 0.81, - 2.81, - 7.2, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 20, 0), - "KORD2", - " 19:56:00", - 0.01, - 2.21, - 7.2, - 0.0, - 260.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - "KORD3", - " 20:56:00", - -0.59, - 2.21, - 5.7, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - "KORD4", - " 21:18:00", - -0.99, - 2.01, - 3.6, - 0.0, - 270.0, - ], - [ - datetime(1999, 1, 27, 22, 0), - "KORD5", - " 21:56:00", - -0.59, - 1.71, - 5.1, - 0.0, - 290.0, - ], - [ - datetime(1999, 1, 27, 23, 0), - "KORD6", - " 22:56:00", - -0.59, - 1.71, - 4.6, - 0.0, - 280.0, - ], - ], - columns=[ - "nominal", - "ID", - "ActualTime", - "TDew", - "TAir", - "Windspeed", - "Precip", - "WindDir", - ], - ) - expected = expected.set_index("nominal") - - if not isinstance(parse_dates, dict): - expected.index.name = "date_NominalTime" - - result = parser.read_csv( - StringIO(data), parse_dates=parse_dates, index_col=index_col - ) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_multiple_date_cols_chunked(all_parsers): - parser = all_parsers - data = """\ -ID,date,nominalTime,actualTime,A,B,C,D,E -KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000 -KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000 -KORD,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000 -KORD,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000 -KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000 -KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000 -""" - - expected = DataFrame( - [ - [ - datetime(1999, 1, 27, 19, 0), - "KORD", - " 18:56:00", - 0.81, - 2.81, - 7.2, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 20, 0), - "KORD", - " 19:56:00", - 0.01, - 2.21, - 7.2, - 0.0, - 260.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - "KORD", - " 20:56:00", - -0.59, - 2.21, - 5.7, - 0.0, - 280.0, - ], - [ - datetime(1999, 1, 27, 21, 0), - "KORD", - " 21:18:00", - -0.99, - 2.01, - 3.6, - 0.0, - 270.0, - ], - [ - datetime(1999, 1, 27, 22, 0), - "KORD", - " 21:56:00", - -0.59, - 1.71, - 5.1, - 0.0, - 290.0, - ], - [ - datetime(1999, 1, 27, 23, 0), - "KORD", - " 22:56:00", - -0.59, - 1.71, - 4.6, - 0.0, - 280.0, - ], - ], - columns=["nominal", "ID", "actualTime", "A", "B", "C", "D", "E"], - ) - expected = expected.set_index("nominal") - - with parser.read_csv( - StringIO(data), - parse_dates={"nominal": [1, 2]}, - index_col="nominal", - 
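# Editor's note (hedged, not part of the original file): chunksize=2 makes
# read_csv return an iterator of 2-row frames inside a context manager, so
# the assertions below confirm the combined "nominal" date column is built
# per chunk exactly as it is for a single full read.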
chunksize=2, - ) as reader: - chunks = list(reader) - - tm.assert_frame_equal(chunks[0], expected[:2]) - tm.assert_frame_equal(chunks[1], expected[2:4]) - tm.assert_frame_equal(chunks[2], expected[4:]) - - -def test_multiple_date_col_named_index_compat(all_parsers): - parser = all_parsers - data = """\ -ID,date,nominalTime,actualTime,A,B,C,D,E -KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000 -KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000 -KORD,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000 -KORD,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000 -KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000 -KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000 -""" - - with_indices = parser.read_csv( - StringIO(data), parse_dates={"nominal": [1, 2]}, index_col="nominal" - ) - with_names = parser.read_csv( - StringIO(data), - index_col="nominal", - parse_dates={"nominal": ["date", "nominalTime"]}, - ) - tm.assert_frame_equal(with_indices, with_names) - - -def test_multiple_date_col_multiple_index_compat(all_parsers): - parser = all_parsers - data = """\ -ID,date,nominalTime,actualTime,A,B,C,D,E -KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000 -KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000 -KORD,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000 -KORD,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000 -KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000 -KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000 -""" - result = parser.read_csv( - StringIO(data), index_col=["nominal", "ID"], parse_dates={"nominal": [1, 2]} - ) - expected = parser.read_csv(StringIO(data), parse_dates={"nominal": [1, 2]}) - - expected = expected.set_index(["nominal", "ID"]) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("kwargs", [{}, {"index_col": "C"}]) -def test_read_with_parse_dates_scalar_non_bool(all_parsers, kwargs): - # see gh-5636 - parser = all_parsers - msg = ( - "Only booleans, lists, and dictionaries " - "are accepted for the 'parse_dates' parameter" - ) - data = """A,B,C - 1,2,2003-11-1""" - - with pytest.raises(TypeError, match=msg): - parser.read_csv(StringIO(data), parse_dates="C", **kwargs) - - -@pytest.mark.parametrize("parse_dates", [(1,), np.array([4, 5]), {1, 3}]) -def test_read_with_parse_dates_invalid_type(all_parsers, parse_dates): - parser = all_parsers - msg = ( - "Only booleans, lists, and dictionaries " - "are accepted for the 'parse_dates' parameter" - ) - data = """A,B,C - 1,2,2003-11-1""" - - with pytest.raises(TypeError, match=msg): - parser.read_csv(StringIO(data), parse_dates=(1,)) - - -@pytest.mark.parametrize("cache_dates", [True, False]) -@pytest.mark.parametrize("value", ["nan", ""]) -def test_bad_date_parse(all_parsers, cache_dates, value): - # if we have an invalid date make sure that we handle this with - # and w/o the cache properly - parser = all_parsers - s = StringIO((f"{value},\n") * 50000) - - parser.read_csv( - s, - header=None, - names=["foo", "bar"], - parse_dates=["foo"], - cache_dates=cache_dates, - ) - - -@pytest.mark.parametrize("cache_dates", [True, False]) -@pytest.mark.parametrize("value", ["0"]) -def test_bad_date_parse_with_warning(all_parsers, cache_dates, value): - # if we have an invalid date make sure that we handle this with - # and w/o the 
cache properly. - parser = all_parsers - s = StringIO((f"{value},\n") * 50000) - - if parser.engine == "pyarrow": - # pyarrow reads "0" as 0 (of type int64), and so - # pandas doesn't try to guess the datetime format - # TODO: parse dates directly in pyarrow, see - # https://github.com/pandas-dev/pandas/issues/48017 - warn = None - elif cache_dates: - # Note: warning is not raised if 'cache_dates', because here there is only a - # single unique date and hence no risk of inconsistent parsing. - warn = None - else: - warn = UserWarning - parser.read_csv_check_warnings( - warn, - "Could not infer format", - s, - header=None, - names=["foo", "bar"], - parse_dates=["foo"], - cache_dates=cache_dates, - ) - - -@xfail_pyarrow -def test_parse_dates_empty_string(all_parsers): - # see gh-2263 - parser = all_parsers - data = "Date,test\n2012-01-01,1\n,2" - result = parser.read_csv(StringIO(data), parse_dates=["Date"], na_filter=False) - - expected = DataFrame( - [[datetime(2012, 1, 1), 1], [pd.NaT, 2]], columns=["Date", "test"] - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "reader", ["read_csv_check_warnings", "read_table_check_warnings"] -) -def test_parse_dates_infer_datetime_format_warning(all_parsers, reader): - # GH 49024, 51017 - parser = all_parsers - data = "Date,test\n2012-01-01,1\n,2" - - getattr(parser, reader)( - FutureWarning, - "The argument 'infer_datetime_format' is deprecated", - StringIO(data), - parse_dates=["Date"], - infer_datetime_format=True, - sep=",", - ) - - -@pytest.mark.parametrize( - "reader", ["read_csv_check_warnings", "read_table_check_warnings"] -) -def test_parse_dates_date_parser_and_date_format(all_parsers, reader): - # GH 50601 - parser = all_parsers - data = "Date,test\n2012-01-01,1\n,2" - msg = "Cannot use both 'date_parser' and 'date_format'" - with pytest.raises(TypeError, match=msg): - getattr(parser, reader)( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - parse_dates=["Date"], - date_parser=pd.to_datetime, - date_format="ISO8601", - sep=",", - ) - - -@xfail_pyarrow -@pytest.mark.parametrize( - "data,kwargs,expected", - [ - ( - "a\n04.15.2016", - {"parse_dates": ["a"]}, - DataFrame([datetime(2016, 4, 15)], columns=["a"]), - ), - ( - "a\n04.15.2016", - {"parse_dates": True, "index_col": 0}, - DataFrame(index=DatetimeIndex(["2016-04-15"], name="a"), columns=[]), - ), - ( - "a,b\n04.15.2016,09.16.2013", - {"parse_dates": ["a", "b"]}, - DataFrame( - [[datetime(2016, 4, 15), datetime(2013, 9, 16)]], columns=["a", "b"] - ), - ), - ( - "a,b\n04.15.2016,09.16.2013", - {"parse_dates": True, "index_col": [0, 1]}, - DataFrame( - index=MultiIndex.from_tuples( - [(datetime(2016, 4, 15), datetime(2013, 9, 16))], names=["a", "b"] - ), - columns=[], - ), - ), - ], -) -def test_parse_dates_no_convert_thousands(all_parsers, data, kwargs, expected): - # see gh-14066 - parser = all_parsers - - result = parser.read_csv(StringIO(data), thousands=".", **kwargs) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_parse_date_time_multi_level_column_name(all_parsers): - data = """\ -D,T,A,B -date, time,a,b -2001-01-05, 09:00:00, 0.0, 10. -2001-01-06, 00:00:00, 1.0, 11. 
-""" - parser = all_parsers - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - header=[0, 1], - parse_dates={"date_time": [0, 1]}, - date_parser=pd.to_datetime, - ) - - expected_data = [ - [datetime(2001, 1, 5, 9, 0, 0), 0.0, 10.0], - [datetime(2001, 1, 6, 0, 0, 0), 1.0, 11.0], - ] - expected = DataFrame(expected_data, columns=["date_time", ("A", "a"), ("B", "b")]) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "data,kwargs,expected", - [ - ( - """\ -date,time,a,b -2001-01-05, 10:00:00, 0.0, 10. -2001-01-05, 00:00:00, 1., 11. -""", - {"header": 0, "parse_dates": {"date_time": [0, 1]}}, - DataFrame( - [ - [datetime(2001, 1, 5, 10, 0, 0), 0.0, 10], - [datetime(2001, 1, 5, 0, 0, 0), 1.0, 11.0], - ], - columns=["date_time", "a", "b"], - ), - ), - ( - ( - "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n" - "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n" - "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n" - "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n" - "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n" - "KORD,19990127, 23:00:00, 22:56:00, -0.5900" - ), - {"header": None, "parse_dates": {"actual": [1, 2], "nominal": [1, 3]}}, - DataFrame( - [ - [ - datetime(1999, 1, 27, 19, 0), - datetime(1999, 1, 27, 18, 56), - "KORD", - 0.81, - ], - [ - datetime(1999, 1, 27, 20, 0), - datetime(1999, 1, 27, 19, 56), - "KORD", - 0.01, - ], - [ - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 20, 56), - "KORD", - -0.59, - ], - [ - datetime(1999, 1, 27, 21, 0), - datetime(1999, 1, 27, 21, 18), - "KORD", - -0.99, - ], - [ - datetime(1999, 1, 27, 22, 0), - datetime(1999, 1, 27, 21, 56), - "KORD", - -0.59, - ], - [ - datetime(1999, 1, 27, 23, 0), - datetime(1999, 1, 27, 22, 56), - "KORD", - -0.59, - ], - ], - columns=["actual", "nominal", 0, 4], - ), - ), - ], -) -def test_parse_date_time(all_parsers, data, kwargs, expected): - parser = all_parsers - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - date_parser=pd.to_datetime, - **kwargs, - ) - - # Python can sometimes be flaky about how - # the aggregated columns are entered, so - # this standardizes the order. - result = result[expected.columns] - tm.assert_frame_equal(result, expected) - - -def test_parse_date_fields(all_parsers): - parser = all_parsers - data = "year,month,day,a\n2001,01,10,10.\n2001,02,1,11." - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - header=0, - parse_dates={"ymd": [0, 1, 2]}, - date_parser=lambda x: x, - ) - - expected = DataFrame( - [[datetime(2001, 1, 10), 10.0], [datetime(2001, 2, 1), 11.0]], - columns=["ymd", "a"], - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - ("key", "value", "warn"), - [ - ( - "date_parser", - lambda x: pd.to_datetime(x, format="%Y %m %d %H %M %S"), - FutureWarning, - ), - ("date_format", "%Y %m %d %H %M %S", None), - ], -) -def test_parse_date_all_fields(all_parsers, key, value, warn): - parser = all_parsers - data = """\ -year,month,day,hour,minute,second,a,b -2001,01,05,10,00,0,0.0,10. -2001,01,5,10,0,00,1.,11. 
-""" - result = parser.read_csv_check_warnings( - warn, - "use 'date_format' instead", - StringIO(data), - header=0, - parse_dates={"ymdHMS": [0, 1, 2, 3, 4, 5]}, - **{key: value}, - ) - expected = DataFrame( - [ - [datetime(2001, 1, 5, 10, 0, 0), 0.0, 10.0], - [datetime(2001, 1, 5, 10, 0, 0), 1.0, 11.0], - ], - columns=["ymdHMS", "a", "b"], - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - ("key", "value", "warn"), - [ - ( - "date_parser", - lambda x: pd.to_datetime(x, format="%Y %m %d %H %M %S.%f"), - FutureWarning, - ), - ("date_format", "%Y %m %d %H %M %S.%f", None), - ], -) -def test_datetime_fractional_seconds(all_parsers, key, value, warn): - parser = all_parsers - data = """\ -year,month,day,hour,minute,second,a,b -2001,01,05,10,00,0.123456,0.0,10. -2001,01,5,10,0,0.500000,1.,11. -""" - result = parser.read_csv_check_warnings( - warn, - "use 'date_format' instead", - StringIO(data), - header=0, - parse_dates={"ymdHMS": [0, 1, 2, 3, 4, 5]}, - **{key: value}, - ) - expected = DataFrame( - [ - [datetime(2001, 1, 5, 10, 0, 0, microsecond=123456), 0.0, 10.0], - [datetime(2001, 1, 5, 10, 0, 0, microsecond=500000), 1.0, 11.0], - ], - columns=["ymdHMS", "a", "b"], - ) - tm.assert_frame_equal(result, expected) - - -def test_generic(all_parsers): - parser = all_parsers - data = "year,month,day,a\n2001,01,10,10.\n2001,02,1,11." - - def parse_function(yy, mm): - return [date(year=int(y), month=int(m), day=1) for y, m in zip(yy, mm)] - - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - header=0, - parse_dates={"ym": [0, 1]}, - date_parser=parse_function, - ) - expected = DataFrame( - [[date(2001, 1, 1), 10, 10.0], [date(2001, 2, 1), 1, 11.0]], - columns=["ym", "day", "a"], - ) - expected["ym"] = expected["ym"].astype("datetime64[ns]") - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_date_parser_resolution_if_not_ns(all_parsers): - # see gh-10245 - parser = all_parsers - data = """\ -date,time,prn,rxstatus -2013-11-03,19:00:00,126,00E80000 -2013-11-03,19:00:00,23,00E80000 -2013-11-03,19:00:00,13,00E80000 -""" - - def date_parser(dt, time): - try: - arr = dt + "T" + time - except TypeError: - # dt & time are date/time objects - arr = [datetime.combine(d, t) for d, t in zip(dt, time)] - return np.array(arr, dtype="datetime64[s]") - - result = parser.read_csv_check_warnings( - FutureWarning, - "use 'date_format' instead", - StringIO(data), - date_parser=date_parser, - parse_dates={"datetime": ["date", "time"]}, - index_col=["datetime", "prn"], - ) - - datetimes = np.array(["2013-11-03T19:00:00"] * 3, dtype="datetime64[s]") - expected = DataFrame( - data={"rxstatus": ["00E80000"] * 3}, - index=MultiIndex.from_arrays( - [datetimes, [126, 23, 13]], - names=["datetime", "prn"], - ), - ) - tm.assert_frame_equal(result, expected) - - -def test_parse_date_column_with_empty_string(all_parsers): - # see gh-6428 - parser = all_parsers - data = "case,opdate\n7,10/18/2006\n7,10/18/2008\n621, " - result = parser.read_csv(StringIO(data), parse_dates=["opdate"]) - - expected_data = [[7, "10/18/2006"], [7, "10/18/2008"], [621, " "]] - expected = DataFrame(expected_data, columns=["case", "opdate"]) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "data,expected", - [ - ( - "a\n135217135789158401\n1352171357E+5", - DataFrame({"a": [135217135789158401, 135217135700000]}, dtype="float64"), - ), - ( - "a\n99999999999\n123456789012345\n1234E+0", - DataFrame({"a": [99999999999, 
123456789012345, 1234]}, dtype="float64"), - ), - ], -) -@pytest.mark.parametrize("parse_dates", [True, False]) -def test_parse_date_float(all_parsers, data, expected, parse_dates): - # see gh-2697 - # - # Date parsing should fail, so we leave the data untouched - # (i.e. float precision should remain unchanged). - parser = all_parsers - - result = parser.read_csv(StringIO(data), parse_dates=parse_dates) - tm.assert_frame_equal(result, expected) - - -def test_parse_timezone(all_parsers): - # see gh-22256 - parser = all_parsers - data = """dt,val - 2018-01-04 09:01:00+09:00,23350 - 2018-01-04 09:02:00+09:00,23400 - 2018-01-04 09:03:00+09:00,23400 - 2018-01-04 09:04:00+09:00,23400 - 2018-01-04 09:05:00+09:00,23400""" - result = parser.read_csv(StringIO(data), parse_dates=["dt"]) - - dti = DatetimeIndex( - list( - date_range( - start="2018-01-04 09:01:00", - end="2018-01-04 09:05:00", - freq="1min", - tz=timezone(timedelta(minutes=540)), - ) - ), - freq=None, - ) - expected_data = {"dt": dti, "val": [23350, 23400, 23400, 23400, 23400]} - - expected = DataFrame(expected_data) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -@pytest.mark.parametrize( - "date_string", - ["32/32/2019", "02/30/2019", "13/13/2019", "13/2019", "a3/11/2018", "10/11/2o17"], -) -def test_invalid_parse_delimited_date(all_parsers, date_string): - parser = all_parsers - expected = DataFrame({0: [date_string]}, dtype="object") - result = parser.read_csv( - StringIO(date_string), - header=None, - parse_dates=[0], - ) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -@pytest.mark.parametrize( - "date_string,dayfirst,expected", - [ - # %d/%m/%Y; month > 12 thus replacement - ("13/02/2019", True, datetime(2019, 2, 13)), - # %m/%d/%Y; day > 12 thus there will be no replacement - ("02/13/2019", False, datetime(2019, 2, 13)), - # %d/%m/%Y; dayfirst==True thus replacement - ("04/02/2019", True, datetime(2019, 2, 4)), - ], -) -def test_parse_delimited_date_swap_no_warning( - all_parsers, date_string, dayfirst, expected -): - parser = all_parsers - expected = DataFrame({0: [expected]}, dtype="datetime64[ns]") - result = parser.read_csv( - StringIO(date_string), header=None, dayfirst=dayfirst, parse_dates=[0] - ) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -@pytest.mark.parametrize( - "date_string,dayfirst,expected", - [ - # %d/%m/%Y; month > 12 - ("13/02/2019", False, datetime(2019, 2, 13)), - # %m/%d/%Y; day > 12 - ("02/13/2019", True, datetime(2019, 2, 13)), - ], -) -def test_parse_delimited_date_swap_with_warning( - all_parsers, date_string, dayfirst, expected -): - parser = all_parsers - expected = DataFrame({0: [expected]}, dtype="datetime64[ns]") - warning_msg = ( - "Parsing dates in .* format when dayfirst=.* was specified. " - "Pass `dayfirst=.*` or specify a format to silence this warning." - ) - result = parser.read_csv_check_warnings( - UserWarning, - warning_msg, - StringIO(date_string), - header=None, - dayfirst=dayfirst, - parse_dates=[0], - ) - tm.assert_frame_equal(result, expected) - - -def test_parse_multiple_delimited_dates_with_swap_warnings(): - # GH46210 - with pytest.raises( - ValueError, - match=( - r'^time data "31/05/2000" doesn\'t match format "%m/%d/%Y", ' - r"at position 1. 
You might want to try:" - ), - ): - pd.to_datetime(["01/01/2000", "31/05/2000", "31/05/2001", "01/02/2000"]) - - -def _helper_hypothesis_delimited_date(call, date_string, **kwargs): - msg, result = None, None - try: - result = call(date_string, **kwargs) - except ValueError as er: - msg = str(er) - return msg, result - - -@skip_pyarrow -@given(DATETIME_NO_TZ) -@pytest.mark.parametrize("delimiter", list(" -./")) -@pytest.mark.parametrize("dayfirst", [True, False]) -@pytest.mark.parametrize( - "date_format", - ["%d %m %Y", "%m %d %Y", "%m %Y", "%Y %m %d", "%y %m %d", "%Y%m%d", "%y%m%d"], -) -def test_hypothesis_delimited_date( - request, date_format, dayfirst, delimiter, test_datetime -): - if date_format == "%m %Y" and delimiter == ".": - request.node.add_marker( - pytest.mark.xfail( - reason="parse_datetime_string cannot reliably tell whether " - "e.g. %m.%Y is a float or a date" - ) - ) - date_string = test_datetime.strftime(date_format.replace(" ", delimiter)) - - except_out_dateutil, result = _helper_hypothesis_delimited_date( - py_parse_datetime_string, date_string, dayfirst=dayfirst - ) - except_in_dateutil, expected = _helper_hypothesis_delimited_date( - du_parse, - date_string, - default=datetime(1, 1, 1), - dayfirst=dayfirst, - yearfirst=False, - ) - - assert except_out_dateutil == except_in_dateutil - assert result == expected - - -@skip_pyarrow -@pytest.mark.parametrize( - "names, usecols, parse_dates, missing_cols", - [ - (None, ["val"], ["date", "time"], "date, time"), - (None, ["val"], [0, "time"], "time"), - (None, ["val"], [["date", "time"]], "date, time"), - (None, ["val"], [[0, "time"]], "time"), - (None, ["val"], {"date": [0, "time"]}, "time"), - (None, ["val"], {"date": ["date", "time"]}, "date, time"), - (None, ["val"], [["date", "time"], "date"], "date, time"), - (["date1", "time1", "temperature"], None, ["date", "time"], "date, time"), - ( - ["date1", "time1", "temperature"], - ["date1", "temperature"], - ["date1", "time"], - "time", - ), - ], -) -def test_missing_parse_dates_column_raises( - all_parsers, names, usecols, parse_dates, missing_cols -): - # gh-31251 column names provided in parse_dates could be missing. 
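# Editor's sketch (hedged illustration, not part of the original test): with
# usecols=["val"] the parser never materializes "date"/"time", so asking
# parse_dates to use them must raise, e.g.:
#   read_csv(StringIO("date,time,val\n2020-01-31,04:20:32,32\n"),
#            usecols=["val"], parse_dates=[["date", "time"]])
#   -> ValueError: Missing column provided to 'parse_dates': 'date, time'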
- parser = all_parsers - content = StringIO("date,time,val\n2020-01-31,04:20:32,32\n") - msg = f"Missing column provided to 'parse_dates': '{missing_cols}'" - with pytest.raises(ValueError, match=msg): - parser.read_csv( - content, sep=",", names=names, usecols=usecols, parse_dates=parse_dates - ) - - -@skip_pyarrow -def test_date_parser_and_names(all_parsers): - # GH#33699 - parser = all_parsers - data = StringIO("""x,y\n1,2""") - result = parser.read_csv_check_warnings( - UserWarning, - "Could not infer format", - data, - parse_dates=["B"], - names=["B"], - ) - expected = DataFrame({"B": ["y", "2"]}, index=["x", "1"]) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -def test_date_parser_multiindex_columns(all_parsers): - parser = all_parsers - data = """a,b -1,2 -2019-12-31,6""" - result = parser.read_csv(StringIO(data), parse_dates=[("a", "1")], header=[0, 1]) - expected = DataFrame( - {("a", "1"): Timestamp("2019-12-31").as_unit("ns"), ("b", "2"): [6]} - ) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -@pytest.mark.parametrize( - "parse_spec, col_name", - [ - ([[("a", "1"), ("b", "2")]], ("a_b", "1_2")), - ({("foo", "1"): [("a", "1"), ("b", "2")]}, ("foo", "1")), - ], -) -def test_date_parser_multiindex_columns_combine_cols(all_parsers, parse_spec, col_name): - parser = all_parsers - data = """a,b,c -1,2,3 -2019-12,-31,6""" - result = parser.read_csv( - StringIO(data), - parse_dates=parse_spec, - header=[0, 1], - ) - expected = DataFrame( - {col_name: Timestamp("2019-12-31").as_unit("ns"), ("c", "3"): [6]} - ) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -def test_date_parser_usecols_thousands(all_parsers): - # GH#39365 - data = """A,B,C - 1,3,20-09-01-01 - 2,4,20-09-01-01 - """ - - parser = all_parsers - result = parser.read_csv_check_warnings( - UserWarning, - "Could not infer format", - StringIO(data), - parse_dates=[1], - usecols=[1, 2], - thousands="-", - ) - expected = DataFrame({"B": [3, 4], "C": [Timestamp("20-09-2001 01:00:00")] * 2}) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -def test_parse_dates_and_keep_orgin_column(all_parsers): - # GH#13378 - parser = all_parsers - data = """A -20150908 -20150909 -""" - result = parser.read_csv( - StringIO(data), parse_dates={"date": ["A"]}, keep_date_col=True - ) - expected_data = [Timestamp("2015-09-08"), Timestamp("2015-09-09")] - expected = DataFrame({"date": expected_data, "A": expected_data}) - tm.assert_frame_equal(result, expected) - - -def test_dayfirst_warnings(): - # GH 12585 - - # CASE 1: valid input - input = "date\n31/12/2014\n10/03/2011" - expected = DatetimeIndex( - ["2014-12-31", "2011-03-10"], dtype="datetime64[ns]", freq=None, name="date" - ) - warning_msg = ( - "Parsing dates in .* format when dayfirst=.* was specified. " - "Pass `dayfirst=.*` or specify a format to silence this warning." - ) - - # A. dayfirst arg correct, no warning - res1 = read_csv( - StringIO(input), parse_dates=["date"], dayfirst=True, index_col="date" - ).index - tm.assert_index_equal(expected, res1) - - # B. 
dayfirst arg incorrect, warning - with tm.assert_produces_warning(UserWarning, match=warning_msg): - res2 = read_csv( - StringIO(input), parse_dates=["date"], dayfirst=False, index_col="date" - ).index - tm.assert_index_equal(expected, res2) - - # CASE 2: invalid input - # cannot consistently process with single format - # return to user unaltered - - # first in DD/MM/YYYY, second in MM/DD/YYYY - input = "date\n31/12/2014\n03/30/2011" - expected = Index(["31/12/2014", "03/30/2011"], dtype="object", name="date") - - # A. use dayfirst=True - res5 = read_csv( - StringIO(input), parse_dates=["date"], dayfirst=True, index_col="date" - ).index - tm.assert_index_equal(expected, res5) - - # B. use dayfirst=False - with tm.assert_produces_warning(UserWarning, match=warning_msg): - res6 = read_csv( - StringIO(input), parse_dates=["date"], dayfirst=False, index_col="date" - ).index - tm.assert_index_equal(expected, res6) - - -@pytest.mark.parametrize( - "date_string, dayfirst", - [ - pytest.param( - "31/1/2014", - False, - id="second date is single-digit", - ), - pytest.param( - "1/31/2014", - True, - id="first date is single-digit", - ), - ], -) -def test_dayfirst_warnings_no_leading_zero(date_string, dayfirst): - # GH47880 - initial_value = f"date\n{date_string}" - expected = DatetimeIndex( - ["2014-01-31"], dtype="datetime64[ns]", freq=None, name="date" - ) - warning_msg = ( - "Parsing dates in .* format when dayfirst=.* was specified. " - "Pass `dayfirst=.*` or specify a format to silence this warning." - ) - with tm.assert_produces_warning(UserWarning, match=warning_msg): - res = read_csv( - StringIO(initial_value), - parse_dates=["date"], - index_col="date", - dayfirst=dayfirst, - ).index - tm.assert_index_equal(expected, res) - - -@skip_pyarrow -def test_infer_first_column_as_index(all_parsers): - # GH#11019 - parser = all_parsers - data = "a,b,c\n1970-01-01,2,3,4" - result = parser.read_csv( - StringIO(data), - parse_dates=["a"], - ) - expected = DataFrame({"a": "2", "b": 3, "c": 4}, index=["1970-01-01"]) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -@pytest.mark.parametrize( - ("key", "value", "warn"), - [ - ("date_parser", lambda x: pd.to_datetime(x, format="%Y-%m-%d"), FutureWarning), - ("date_format", "%Y-%m-%d", None), - ], -) -def test_replace_nans_before_parsing_dates(all_parsers, key, value, warn): - # GH#26203 - parser = all_parsers - data = """Test -2012-10-01 -0 -2015-05-15 -# -2017-09-09 -""" - result = parser.read_csv_check_warnings( - warn, - "use 'date_format' instead", - StringIO(data), - na_values={"Test": ["#", "0"]}, - parse_dates=["Test"], - **{key: value}, - ) - expected = DataFrame( - { - "Test": [ - Timestamp("2012-10-01"), - pd.NaT, - Timestamp("2015-05-15"), - pd.NaT, - Timestamp("2017-09-09"), - ] - } - ) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -def test_parse_dates_and_string_dtype(all_parsers): - # GH#34066 - parser = all_parsers - data = """a,b -1,2019-12-31 -""" - result = parser.read_csv(StringIO(data), dtype="string", parse_dates=["b"]) - expected = DataFrame({"a": ["1"], "b": [Timestamp("2019-12-31")]}) - expected["a"] = expected["a"].astype("string") - tm.assert_frame_equal(result, expected) - - -def test_parse_dot_separated_dates(all_parsers): - # https://github.com/pandas-dev/pandas/issues/2586 - parser = all_parsers - data = """a,b -27.03.2003 14:55:00.000,1 -03.08.2003 15:20:00.000,2""" - if parser.engine == "pyarrow": - expected_index = Index( - ["27.03.2003 14:55:00.000", "03.08.2003 15:20:00.000"], - dtype="object", 
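# Editor's note (hedged, not in the original test): the pyarrow engine leaves
# dotted strings like "27.03.2003 14:55:00.000" as an object-dtype index,
# while the c/python engines infer them as datetimes and warn about the
# dayfirst mismatch, hence the two expected indexes in this branch.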
- name="a", - ) - warn = None - else: - expected_index = DatetimeIndex( - ["2003-03-27 14:55:00", "2003-08-03 15:20:00"], - dtype="datetime64[ns]", - name="a", - ) - warn = UserWarning - msg = r"when dayfirst=False \(the default\) was specified" - result = parser.read_csv_check_warnings( - warn, msg, StringIO(data), parse_dates=True, index_col=0 - ) - expected = DataFrame({"b": [1, 2]}, index=expected_index) - tm.assert_frame_equal(result, expected) - - -def test_parse_dates_dict_format(all_parsers): - # GH#51240 - parser = all_parsers - data = """a,b -2019-12-31,31-12-2019 -2020-12-31,31-12-2020""" - - result = parser.read_csv( - StringIO(data), - date_format={"a": "%Y-%m-%d", "b": "%d-%m-%Y"}, - parse_dates=["a", "b"], - ) - expected = DataFrame( - { - "a": [Timestamp("2019-12-31"), Timestamp("2020-12-31")], - "b": [Timestamp("2019-12-31"), Timestamp("2020-12-31")], - } - ) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -@pytest.mark.parametrize( - "key, parse_dates", [("a_b", [[0, 1]]), ("foo", {"foo": [0, 1]})] -) -def test_parse_dates_dict_format_two_columns(all_parsers, key, parse_dates): - # GH#51240 - parser = all_parsers - data = """a,b -31-,12-2019 -31-,12-2020""" - - with tm.assert_produces_warning(None): - result = parser.read_csv( - StringIO(data), date_format={key: "%d- %m-%Y"}, parse_dates=parse_dates - ) - expected = DataFrame( - { - key: [Timestamp("2019-12-31"), Timestamp("2020-12-31")], - } - ) - tm.assert_frame_equal(result, expected) - - -@skip_pyarrow -def test_parse_dates_dict_format_index(all_parsers): - # GH#51240 - parser = all_parsers - data = """a,b -2019-12-31,31-12-2019 -2020-12-31,31-12-2020""" - - result = parser.read_csv( - StringIO(data), date_format={"a": "%Y-%m-%d"}, parse_dates=True, index_col=0 - ) - expected = DataFrame( - { - "b": ["31-12-2019", "31-12-2020"], - }, - index=Index([Timestamp("2019-12-31"), Timestamp("2020-12-31")], name="a"), - ) - tm.assert_frame_equal(result, expected) - - -def test_parse_dates_arrow_engine(all_parsers): - # GH#53295 - parser = all_parsers - data = """a,b -2000-01-01 00:00:00,1 -2000-01-01 00:00:01,1""" - - result = parser.read_csv(StringIO(data), parse_dates=["a"]) - expected = DataFrame( - { - "a": [ - Timestamp("2000-01-01 00:00:00"), - Timestamp("2000-01-01 00:00:01"), - ], - "b": 1, - } - ) - tm.assert_frame_equal(result, expected) - - -@xfail_pyarrow -def test_from_csv_with_mixed_offsets(all_parsers): - parser = all_parsers - data = "a\n2020-01-01T00:00:00+01:00\n2020-01-01T00:00:00+00:00" - result = parser.read_csv(StringIO(data), parse_dates=["a"])["a"] - expected = Series( - [ - Timestamp("2020-01-01 00:00:00+01:00"), - Timestamp("2020-01-01 00:00:00+00:00"), - ], - name="a", - index=[0, 1], - ) - tm.assert_series_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/egg_link.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/egg_link.py deleted file mode 100644 index 9e0da8d2d29d94d15dfbf49dff90df7eafd68bac..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/egg_link.py +++ /dev/null @@ -1,75 +0,0 @@ -# The following comment should be removed at some point in the future. 
-# mypy: strict-optional=False - -import os -import re -import sys -from typing import Optional - -from pip._internal.locations import site_packages, user_site -from pip._internal.utils.virtualenv import ( - running_under_virtualenv, - virtualenv_no_global, -) - -__all__ = [ - "egg_link_path_from_sys_path", - "egg_link_path_from_location", -] - - -def _egg_link_name(raw_name: str) -> str: - """ - Convert a Name metadata value to a .egg-link name, by applying - the same substitution as pkg_resources's safe_name function. - Note: we cannot use canonicalize_name because it has a different logic. - """ - return re.sub("[^A-Za-z0-9.]+", "-", raw_name) + ".egg-link" - - -def egg_link_path_from_sys_path(raw_name: str) -> Optional[str]: - """ - Look for a .egg-link file for project name, by walking sys.path. - """ - egg_link_name = _egg_link_name(raw_name) - for path_item in sys.path: - egg_link = os.path.join(path_item, egg_link_name) - if os.path.isfile(egg_link): - return egg_link - return None - - -def egg_link_path_from_location(raw_name: str) -> Optional[str]: - """ - Return the path for the .egg-link file if it exists, otherwise, None. - - There's 3 scenarios: - 1) not in a virtualenv - try to find in site.USER_SITE, then site_packages - 2) in a no-global virtualenv - try to find in site_packages - 3) in a yes-global virtualenv - try to find in site_packages, then site.USER_SITE - (don't look in global location) - - For #1 and #3, there could be odd cases, where there's an egg-link in 2 - locations. - - This method will just return the first one found. - """ - sites = [] - if running_under_virtualenv(): - sites.append(site_packages) - if not virtualenv_no_global() and user_site: - sites.append(user_site) - else: - if user_site: - sites.append(user_site) - sites.append(site_packages) - - egg_link_name = _egg_link_name(raw_name) - for site in sites: - egglink = os.path.join(site, egg_link_name) - if os.path.isfile(egglink): - return egglink - return None diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/zig.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/zig.py deleted file mode 100644 index fad3b79d9e222097e63b895baaaf0a1c17bcbcfd..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/zig.py +++ /dev/null @@ -1,124 +0,0 @@ -""" - pygments.lexers.zig - ~~~~~~~~~~~~~~~~~~~ - - Lexers for Zig. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, words -from pygments.token import Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Whitespace - -__all__ = ['ZigLexer'] - - -class ZigLexer(RegexLexer): - """ - Lexer for the Zig language. 
- - grammar: https://ziglang.org/documentation/master/#Grammar - """ - name = 'Zig' - url = 'http://www.ziglang.org' - aliases = ['zig'] - filenames = ['*.zig'] - mimetypes = ['text/zig'] - - type_keywords = ( - words(('bool', 'f16', 'f32', 'f64', 'f128', 'void', 'noreturn', 'type', - 'anyerror', 'promise', 'i0', 'u0', 'isize', 'usize', 'comptime_int', - 'comptime_float', 'c_short', 'c_ushort', 'c_int', 'c_uint', 'c_long', - 'c_ulong', 'c_longlong', 'c_ulonglong', 'c_longdouble', 'c_void', - 'i8', 'u8', 'i16', 'u16', 'i32', 'u32', 'i64', 'u64', 'i128', - 'u128'), suffix=r'\b'), - Keyword.Type) - - storage_keywords = ( - words(('const', 'var', 'extern', 'packed', 'export', 'pub', 'noalias', - 'inline', 'comptime', 'nakedcc', 'stdcallcc', 'volatile', 'allowzero', - 'align', 'linksection', 'threadlocal'), suffix=r'\b'), - Keyword.Reserved) - - structure_keywords = ( - words(('struct', 'enum', 'union', 'error'), suffix=r'\b'), - Keyword) - - statement_keywords = ( - words(('break', 'return', 'continue', 'asm', 'defer', 'errdefer', - 'unreachable', 'try', 'catch', 'async', 'await', 'suspend', - 'resume', 'cancel'), suffix=r'\b'), - Keyword) - - conditional_keywords = ( - words(('if', 'else', 'switch', 'and', 'or', 'orelse'), suffix=r'\b'), - Keyword) - - repeat_keywords = ( - words(('while', 'for'), suffix=r'\b'), - Keyword) - - other_keywords = ( - words(('fn', 'usingnamespace', 'test'), suffix=r'\b'), - Keyword) - - constant_keywords = ( - words(('true', 'false', 'null', 'undefined'), suffix=r'\b'), - Keyword.Constant) - - tokens = { - 'root': [ - (r'\n', Whitespace), - (r'\s+', Whitespace), - (r'//.*?\n', Comment.Single), - - # Keywords - statement_keywords, - storage_keywords, - structure_keywords, - repeat_keywords, - type_keywords, - constant_keywords, - conditional_keywords, - other_keywords, - - # Floats - (r'0x[0-9a-fA-F]+\.[0-9a-fA-F]+([pP][\-+]?[0-9a-fA-F]+)?', Number.Float), - (r'0x[0-9a-fA-F]+\.?[pP][\-+]?[0-9a-fA-F]+', Number.Float), - (r'[0-9]+\.[0-9]+([eE][-+]?[0-9]+)?', Number.Float), - (r'[0-9]+\.?[eE][-+]?[0-9]+', Number.Float), - - # Integers - (r'0b[01]+', Number.Bin), - (r'0o[0-7]+', Number.Oct), - (r'0x[0-9a-fA-F]+', Number.Hex), - (r'[0-9]+', Number.Integer), - - # Identifier - (r'@[a-zA-Z_]\w*', Name.Builtin), - (r'[a-zA-Z_]\w*', Name), - - # Characters - (r'\'\\\'\'', String.Escape), - (r'\'\\(x[a-fA-F0-9]{2}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{6}|[nr\\t\'"])\'', - String.Escape), - (r'\'[^\\\']\'', String), - - # Strings - (r'\\\\[^\n]*', String.Heredoc), - (r'c\\\\[^\n]*', String.Heredoc), - (r'c?"', String, 'string'), - - # Operators, Punctuation - (r'[+%=><|^!?/\-*&~:]', Operator), - (r'[{}()\[\],.;]', Punctuation) - ], - 'string': [ - (r'\\(x[a-fA-F0-9]{2}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{6}|[nr\\t\'"])', - String.Escape), - (r'[^\\"\n]+', String), - (r'"', String, '#pop') - ] - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/bar.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/bar.py deleted file mode 100644 index ed86a552d1ca6baa0cfd48ec73a7a5c952d047c9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/bar.py +++ /dev/null @@ -1,94 +0,0 @@ -from typing import Optional, Union - -from .color import Color -from .console import Console, ConsoleOptions, RenderResult -from .jupyter import JupyterMixin -from .measure import Measurement -from .segment import Segment -from .style import Style - -# There are left-aligned characters for 1/8 to 7/8, but -# the 
right-aligned characters exist only for 1/8 and 4/8. -BEGIN_BLOCK_ELEMENTS = ["█", "█", "█", "▐", "▐", "▐", "▕", "▕"] -END_BLOCK_ELEMENTS = [" ", "▏", "▎", "▍", "▌", "▋", "▊", "▉"] -FULL_BLOCK = "█" - - -class Bar(JupyterMixin): - """Renders a solid block bar. - - Args: - size (float): Value for the end of the bar. - begin (float): Begin point (between 0 and size, inclusive). - end (float): End point (between 0 and size, inclusive). - width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None. - color (Union[Color, str], optional): Color of the bar. Defaults to "default". - bgcolor (Union[Color, str], optional): Color of bar background. Defaults to "default". - """ - - def __init__( - self, - size: float, - begin: float, - end: float, - *, - width: Optional[int] = None, - color: Union[Color, str] = "default", - bgcolor: Union[Color, str] = "default", - ): - self.size = size - self.begin = max(begin, 0) - self.end = min(end, size) - self.width = width - self.style = Style(color=color, bgcolor=bgcolor) - - def __repr__(self) -> str: - return f"Bar({self.size}, {self.begin}, {self.end})" - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - - width = min( - self.width if self.width is not None else options.max_width, - options.max_width, - ) - - if self.begin >= self.end: - yield Segment(" " * width, self.style) - yield Segment.line() - return - - prefix_complete_eights = int(width * 8 * self.begin / self.size) - prefix_bar_count = prefix_complete_eights // 8 - prefix_eights_count = prefix_complete_eights % 8 - - body_complete_eights = int(width * 8 * self.end / self.size) - body_bar_count = body_complete_eights // 8 - body_eights_count = body_complete_eights % 8 - - # When start and end fall into the same cell, we ideally should render - # a symbol that's "center-aligned", but there is no good symbol in Unicode. - # In this case, we fall back to right-aligned block symbol for simplicity. - - prefix = " " * prefix_bar_count - if prefix_eights_count: - prefix += BEGIN_BLOCK_ELEMENTS[prefix_eights_count] - - body = FULL_BLOCK * body_bar_count - if body_eights_count: - body += END_BLOCK_ELEMENTS[body_eights_count] - - suffix = " " * (width - len(body)) - - yield Segment(prefix + body[len(prefix) :] + suffix, self.style) - yield Segment.line() - - def __rich_measure__( - self, console: Console, options: ConsoleOptions - ) -> Measurement: - return ( - Measurement(self.width, self.width) - if self.width is not None - else Measurement(4, options.max_width) - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/msvc9compiler.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/msvc9compiler.py deleted file mode 100644 index a1b3b02ff0a94b0611a4ca44345d42a226d15ee5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/msvc9compiler.py +++ /dev/null @@ -1,788 +0,0 @@ -"""distutils.msvc9compiler - -Contains MSVCCompiler, an implementation of the abstract CCompiler class -for the Microsoft Visual Studio 2008. - -The module is compatible with VS 2005 and VS 2008. You can find legacy support -for older versions of VS in distutils.msvccompiler. 
-""" - -# Written by Perry Stoll -# hacked by Robin Becker and Thomas Heller to do a better job of -# finding DevStudio (through the registry) -# ported to VS2005 and VS 2008 by Christian Heimes - -import os -import subprocess -import sys -import re - -from distutils.errors import DistutilsExecError, DistutilsPlatformError, \ - CompileError, LibError, LinkError -from distutils.ccompiler import CCompiler, gen_lib_options -from distutils import log -from distutils.util import get_platform - -import winreg - -RegOpenKeyEx = winreg.OpenKeyEx -RegEnumKey = winreg.EnumKey -RegEnumValue = winreg.EnumValue -RegError = winreg.error - -HKEYS = (winreg.HKEY_USERS, - winreg.HKEY_CURRENT_USER, - winreg.HKEY_LOCAL_MACHINE, - winreg.HKEY_CLASSES_ROOT) - -NATIVE_WIN64 = (sys.platform == 'win32' and sys.maxsize > 2**32) -if NATIVE_WIN64: - # Visual C++ is a 32-bit application, so we need to look in - # the corresponding registry branch, if we're running a - # 64-bit Python on Win64 - VS_BASE = r"Software\Wow6432Node\Microsoft\VisualStudio\%0.1f" - WINSDK_BASE = r"Software\Wow6432Node\Microsoft\Microsoft SDKs\Windows" - NET_BASE = r"Software\Wow6432Node\Microsoft\.NETFramework" -else: - VS_BASE = r"Software\Microsoft\VisualStudio\%0.1f" - WINSDK_BASE = r"Software\Microsoft\Microsoft SDKs\Windows" - NET_BASE = r"Software\Microsoft\.NETFramework" - -# A map keyed by get_platform() return values to values accepted by -# 'vcvarsall.bat'. Note a cross-compile may combine these (eg, 'x86_amd64' is -# the param to cross-compile on x86 targeting amd64.) -PLAT_TO_VCVARS = { - 'win32' : 'x86', - 'win-amd64' : 'amd64', -} - -class Reg: - """Helper class to read values from the registry - """ - - def get_value(cls, path, key): - for base in HKEYS: - d = cls.read_values(base, path) - if d and key in d: - return d[key] - raise KeyError(key) - get_value = classmethod(get_value) - - def read_keys(cls, base, key): - """Return list of registry keys.""" - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - L = [] - i = 0 - while True: - try: - k = RegEnumKey(handle, i) - except RegError: - break - L.append(k) - i += 1 - return L - read_keys = classmethod(read_keys) - - def read_values(cls, base, key): - """Return dict of registry keys and values. - - All names are converted to lowercase. 
- """ - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - d = {} - i = 0 - while True: - try: - name, value, type = RegEnumValue(handle, i) - except RegError: - break - name = name.lower() - d[cls.convert_mbcs(name)] = cls.convert_mbcs(value) - i += 1 - return d - read_values = classmethod(read_values) - - def convert_mbcs(s): - dec = getattr(s, "decode", None) - if dec is not None: - try: - s = dec("mbcs") - except UnicodeError: - pass - return s - convert_mbcs = staticmethod(convert_mbcs) - -class MacroExpander: - - def __init__(self, version): - self.macros = {} - self.vsbase = VS_BASE % version - self.load_macros(version) - - def set_macro(self, macro, path, key): - self.macros["$(%s)" % macro] = Reg.get_value(path, key) - - def load_macros(self, version): - self.set_macro("VCInstallDir", self.vsbase + r"\Setup\VC", "productdir") - self.set_macro("VSInstallDir", self.vsbase + r"\Setup\VS", "productdir") - self.set_macro("FrameworkDir", NET_BASE, "installroot") - try: - if version >= 8.0: - self.set_macro("FrameworkSDKDir", NET_BASE, - "sdkinstallrootv2.0") - else: - raise KeyError("sdkinstallrootv2.0") - except KeyError: - raise DistutilsPlatformError( - """Python was built with Visual Studio 2008; -extensions must be built with a compiler than can generate compatible binaries. -Visual Studio 2008 was not found on this system. If you have Cygwin installed, -you can try compiling with MingW32, by passing "-c mingw32" to setup.py.""") - - if version >= 9.0: - self.set_macro("FrameworkVersion", self.vsbase, "clr version") - self.set_macro("WindowsSdkDir", WINSDK_BASE, "currentinstallfolder") - else: - p = r"Software\Microsoft\NET Framework Setup\Product" - for base in HKEYS: - try: - h = RegOpenKeyEx(base, p) - except RegError: - continue - key = RegEnumKey(h, 0) - d = Reg.get_value(base, r"%s\%s" % (p, key)) - self.macros["$(FrameworkVersion)"] = d["version"] - - def sub(self, s): - for k, v in self.macros.items(): - s = s.replace(k, v) - return s - -def get_build_version(): - """Return the version of MSVC that was used to build Python. - - For Python 2.3 and up, the version number is included in - sys.version. For earlier versions, assume the compiler is MSVC 6. - """ - prefix = "MSC v." - i = sys.version.find(prefix) - if i == -1: - return 6 - i = i + len(prefix) - s, rest = sys.version[i:].split(" ", 1) - majorVersion = int(s[:-2]) - 6 - if majorVersion >= 13: - # v13 was skipped and should be v14 - majorVersion += 1 - minorVersion = int(s[2:3]) / 10.0 - # I don't think paths are affected by minor version in version 6 - if majorVersion == 6: - minorVersion = 0 - if majorVersion >= 6: - return majorVersion + minorVersion - # else we don't know what version of the compiler this is - return None - -def normalize_and_reduce_paths(paths): - """Return a list of normalized paths with duplicates removed. - - The current order of paths is maintained. - """ - # Paths are normalized so things like: /a and /a/ aren't both preserved. - reduced_paths = [] - for p in paths: - np = os.path.normpath(p) - # XXX(nnorwitz): O(n**2), if reduced_paths gets long perhaps use a set. - if np not in reduced_paths: - reduced_paths.append(np) - return reduced_paths - -def removeDuplicates(variable): - """Remove duplicate values of an environment variable. 
- """ - oldList = variable.split(os.pathsep) - newList = [] - for i in oldList: - if i not in newList: - newList.append(i) - newVariable = os.pathsep.join(newList) - return newVariable - -def find_vcvarsall(version): - """Find the vcvarsall.bat file - - At first it tries to find the productdir of VS 2008 in the registry. If - that fails it falls back to the VS90COMNTOOLS env var. - """ - vsbase = VS_BASE % version - try: - productdir = Reg.get_value(r"%s\Setup\VC" % vsbase, - "productdir") - except KeyError: - log.debug("Unable to find productdir in registry") - productdir = None - - if not productdir or not os.path.isdir(productdir): - toolskey = "VS%0.f0COMNTOOLS" % version - toolsdir = os.environ.get(toolskey, None) - - if toolsdir and os.path.isdir(toolsdir): - productdir = os.path.join(toolsdir, os.pardir, os.pardir, "VC") - productdir = os.path.abspath(productdir) - if not os.path.isdir(productdir): - log.debug("%s is not a valid directory" % productdir) - return None - else: - log.debug("Env var %s is not set or invalid" % toolskey) - if not productdir: - log.debug("No productdir found") - return None - vcvarsall = os.path.join(productdir, "vcvarsall.bat") - if os.path.isfile(vcvarsall): - return vcvarsall - log.debug("Unable to find vcvarsall.bat") - return None - -def query_vcvarsall(version, arch="x86"): - """Launch vcvarsall.bat and read the settings from its environment - """ - vcvarsall = find_vcvarsall(version) - interesting = {"include", "lib", "libpath", "path"} - result = {} - - if vcvarsall is None: - raise DistutilsPlatformError("Unable to find vcvarsall.bat") - log.debug("Calling 'vcvarsall.bat %s' (version=%s)", arch, version) - popen = subprocess.Popen('"%s" %s & set' % (vcvarsall, arch), - stdout=subprocess.PIPE, - stderr=subprocess.PIPE) - try: - stdout, stderr = popen.communicate() - if popen.wait() != 0: - raise DistutilsPlatformError(stderr.decode("mbcs")) - - stdout = stdout.decode("mbcs") - for line in stdout.split("\n"): - line = Reg.convert_mbcs(line) - if '=' not in line: - continue - line = line.strip() - key, value = line.split('=', 1) - key = key.lower() - if key in interesting: - if value.endswith(os.pathsep): - value = value[:-1] - result[key] = removeDuplicates(value) - - finally: - popen.stdout.close() - popen.stderr.close() - - if len(result) != len(interesting): - raise ValueError(str(list(result.keys()))) - - return result - -# More globals -VERSION = get_build_version() -if VERSION < 8.0: - raise DistutilsPlatformError("VC %0.1f is not supported by this module" % VERSION) -# MACROS = MacroExpander(VERSION) - -class MSVCCompiler(CCompiler) : - """Concrete class that implements an interface to Microsoft Visual C++, - as defined by the CCompiler abstract class.""" - - compiler_type = 'msvc' - - # Just set this so CCompiler's constructor doesn't barf. We currently - # don't use the 'set_executables()' bureaucracy provided by CCompiler, - # as it really isn't necessary for this sort of single-compiler class. - # Would be nice to have a consistent interface with UnixCCompiler, - # though, so it's worth thinking about. - executables = {} - - # Private class data (need to distinguish C from C++ source for compiler) - _c_extensions = ['.c'] - _cpp_extensions = ['.cc', '.cpp', '.cxx'] - _rc_extensions = ['.rc'] - _mc_extensions = ['.mc'] - - # Needed for the filename generation methods provided by the - # base class, CCompiler. 
- src_extensions = (_c_extensions + _cpp_extensions + - _rc_extensions + _mc_extensions) - res_extension = '.res' - obj_extension = '.obj' - static_lib_extension = '.lib' - shared_lib_extension = '.dll' - static_lib_format = shared_lib_format = '%s%s' - exe_extension = '.exe' - - def __init__(self, verbose=0, dry_run=0, force=0): - CCompiler.__init__ (self, verbose, dry_run, force) - self.__version = VERSION - self.__root = r"Software\Microsoft\VisualStudio" - # self.__macros = MACROS - self.__paths = [] - # target platform (.plat_name is consistent with 'bdist') - self.plat_name = None - self.__arch = None # deprecated name - self.initialized = False - - def initialize(self, plat_name=None): - # multi-init means we would need to check platform same each time... - assert not self.initialized, "don't init multiple times" - if plat_name is None: - plat_name = get_platform() - # sanity check for platforms to prevent obscure errors later. - ok_plats = 'win32', 'win-amd64' - if plat_name not in ok_plats: - raise DistutilsPlatformError("--plat-name must be one of %s" % - (ok_plats,)) - - if "DISTUTILS_USE_SDK" in os.environ and "MSSdk" in os.environ and self.find_exe("cl.exe"): - # Assume that the SDK set up everything alright; don't try to be - # smarter - self.cc = "cl.exe" - self.linker = "link.exe" - self.lib = "lib.exe" - self.rc = "rc.exe" - self.mc = "mc.exe" - else: - # On x86, 'vcvars32.bat amd64' creates an env that doesn't work; - # to cross compile, you use 'x86_amd64'. - # On AMD64, 'vcvars32.bat amd64' is a native build env; to cross - # compile use 'x86' (ie, it runs the x86 compiler directly) - if plat_name == get_platform() or plat_name == 'win32': - # native build or cross-compile to win32 - plat_spec = PLAT_TO_VCVARS[plat_name] - else: - # cross compile from win32 -> some 64bit - plat_spec = PLAT_TO_VCVARS[get_platform()] + '_' + \ - PLAT_TO_VCVARS[plat_name] - - vc_env = query_vcvarsall(VERSION, plat_spec) - - self.__paths = vc_env['path'].split(os.pathsep) - os.environ['lib'] = vc_env['lib'] - os.environ['include'] = vc_env['include'] - - if len(self.__paths) == 0: - raise DistutilsPlatformError("Python was built with %s, " - "and extensions need to be built with the same " - "version of the compiler, but it isn't installed." 
- % self.__product) - - self.cc = self.find_exe("cl.exe") - self.linker = self.find_exe("link.exe") - self.lib = self.find_exe("lib.exe") - self.rc = self.find_exe("rc.exe") # resource compiler - self.mc = self.find_exe("mc.exe") # message compiler - #self.set_path_env_var('lib') - #self.set_path_env_var('include') - - # extend the MSVC path with the current path - try: - for p in os.environ['path'].split(';'): - self.__paths.append(p) - except KeyError: - pass - self.__paths = normalize_and_reduce_paths(self.__paths) - os.environ['path'] = ";".join(self.__paths) - - self.preprocess_options = None - if self.__arch == "x86": - self.compile_options = [ '/nologo', '/O2', '/MD', '/W3', - '/DNDEBUG'] - self.compile_options_debug = ['/nologo', '/Od', '/MDd', '/W3', - '/Z7', '/D_DEBUG'] - else: - # Win64 - self.compile_options = [ '/nologo', '/O2', '/MD', '/W3', '/GS-' , - '/DNDEBUG'] - self.compile_options_debug = ['/nologo', '/Od', '/MDd', '/W3', '/GS-', - '/Z7', '/D_DEBUG'] - - self.ldflags_shared = ['/DLL', '/nologo', '/INCREMENTAL:NO'] - if self.__version >= 7: - self.ldflags_shared_debug = [ - '/DLL', '/nologo', '/INCREMENTAL:no', '/DEBUG' - ] - self.ldflags_static = [ '/nologo'] - - self.initialized = True - - # -- Worker methods ------------------------------------------------ - - def object_filenames(self, - source_filenames, - strip_dir=0, - output_dir=''): - # Copied from ccompiler.py, extended to return .res as 'object'-file - # for .rc input file - if output_dir is None: output_dir = '' - obj_names = [] - for src_name in source_filenames: - (base, ext) = os.path.splitext (src_name) - base = os.path.splitdrive(base)[1] # Chop off the drive - base = base[os.path.isabs(base):] # If abs, chop off leading / - if ext not in self.src_extensions: - # Better to raise an exception instead of silently continuing - # and later complain about sources and targets having - # different lengths - raise CompileError ("Don't know how to compile %s" % src_name) - if strip_dir: - base = os.path.basename (base) - if ext in self._rc_extensions: - obj_names.append (os.path.join (output_dir, - base + self.res_extension)) - elif ext in self._mc_extensions: - obj_names.append (os.path.join (output_dir, - base + self.res_extension)) - else: - obj_names.append (os.path.join (output_dir, - base + self.obj_extension)) - return obj_names - - - def compile(self, sources, - output_dir=None, macros=None, include_dirs=None, debug=0, - extra_preargs=None, extra_postargs=None, depends=None): - - if not self.initialized: - self.initialize() - compile_info = self._setup_compile(output_dir, macros, include_dirs, - sources, depends, extra_postargs) - macros, objects, extra_postargs, pp_opts, build = compile_info - - compile_opts = extra_preargs or [] - compile_opts.append ('/c') - if debug: - compile_opts.extend(self.compile_options_debug) - else: - compile_opts.extend(self.compile_options) - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - if debug: - # pass the full pathname to MSVC in debug mode, - # this allows the debugger to find the source file - # without asking the user to browse for it - src = os.path.abspath(src) - - if ext in self._c_extensions: - input_opt = "/Tc" + src - elif ext in self._cpp_extensions: - input_opt = "/Tp" + src - elif ext in self._rc_extensions: - # compile .RC to .RES file - input_opt = src - output_opt = "/fo" + obj - try: - self.spawn([self.rc] + pp_opts + - [output_opt] + [input_opt]) - except DistutilsExecError as msg: - raise CompileError(msg) - 
continue - elif ext in self._mc_extensions: - # Compile .MC to .RC file to .RES file. - # * '-h dir' specifies the directory for the - # generated include file - # * '-r dir' specifies the target directory of the - # generated RC file and the binary message resource - # it includes - # - # For now (since there are no options to change this), - # we use the source-directory for the include file and - # the build directory for the RC file and message - # resources. This works at least for win32all. - h_dir = os.path.dirname(src) - rc_dir = os.path.dirname(obj) - try: - # first compile .MC to .RC and .H file - self.spawn([self.mc] + - ['-h', h_dir, '-r', rc_dir] + [src]) - base, _ = os.path.splitext (os.path.basename (src)) - rc_file = os.path.join (rc_dir, base + '.rc') - # then compile .RC to .RES file - self.spawn([self.rc] + - ["/fo" + obj] + [rc_file]) - - except DistutilsExecError as msg: - raise CompileError(msg) - continue - else: - # how to handle this file? - raise CompileError("Don't know how to compile %s to %s" - % (src, obj)) - - output_opt = "/Fo" + obj - try: - self.spawn([self.cc] + compile_opts + pp_opts + - [input_opt, output_opt] + - extra_postargs) - except DistutilsExecError as msg: - raise CompileError(msg) - - return objects - - - def create_static_lib(self, - objects, - output_libname, - output_dir=None, - debug=0, - target_lang=None): - - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - output_filename = self.library_filename(output_libname, - output_dir=output_dir) - - if self._need_link(objects, output_filename): - lib_args = objects + ['/OUT:' + output_filename] - if debug: - pass # XXX what goes here? - try: - self.spawn([self.lib] + lib_args) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - - def link(self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None): - - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, - runtime_library_dirs) - (libraries, library_dirs, runtime_library_dirs) = fixed_args - - if runtime_library_dirs: - self.warn ("I don't know what to do with 'runtime_library_dirs': " - + str (runtime_library_dirs)) - - lib_opts = gen_lib_options(self, - library_dirs, runtime_library_dirs, - libraries) - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - if target_desc == CCompiler.EXECUTABLE: - if debug: - ldflags = self.ldflags_shared_debug[1:] - else: - ldflags = self.ldflags_shared[1:] - else: - if debug: - ldflags = self.ldflags_shared_debug - else: - ldflags = self.ldflags_shared - - export_opts = [] - for sym in (export_symbols or []): - export_opts.append("/EXPORT:" + sym) - - ld_args = (ldflags + lib_opts + export_opts + - objects + ['/OUT:' + output_filename]) - - # The MSVC linker generates .lib and .exp files, which cannot be - # suppressed by any linker switches. The .lib files may even be - # needed! Make sure they are generated in the temporary build - # directory. Since they have different names for debug and release - # builds, they can go into the same directory. 
- build_temp = os.path.dirname(objects[0]) - if export_symbols is not None: - (dll_name, dll_ext) = os.path.splitext( - os.path.basename(output_filename)) - implib_file = os.path.join( - build_temp, - self.library_filename(dll_name)) - ld_args.append ('/IMPLIB:' + implib_file) - - self.manifest_setup_ldargs(output_filename, build_temp, ld_args) - - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - - self.mkpath(os.path.dirname(output_filename)) - try: - self.spawn([self.linker] + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - - # embed the manifest - # XXX - this is somewhat fragile - if mt.exe fails, distutils - # will still consider the DLL up-to-date, but it will not have a - # manifest. Maybe we should link to a temp file? OTOH, that - # implies a build environment error that shouldn't go undetected. - mfinfo = self.manifest_get_embed_info(target_desc, ld_args) - if mfinfo is not None: - mffilename, mfid = mfinfo - out_arg = '-outputresource:%s;%s' % (output_filename, mfid) - try: - self.spawn(['mt.exe', '-nologo', '-manifest', - mffilename, out_arg]) - except DistutilsExecError as msg: - raise LinkError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def manifest_setup_ldargs(self, output_filename, build_temp, ld_args): - # If we need a manifest at all, an embedded manifest is recommended. - # See MSDN article titled - # "How to: Embed a Manifest Inside a C/C++ Application" - # (currently at http://msdn2.microsoft.com/en-us/library/ms235591(VS.80).aspx) - # Ask the linker to generate the manifest in the temp dir, so - # we can check it, and possibly embed it, later. - temp_manifest = os.path.join( - build_temp, - os.path.basename(output_filename) + ".manifest") - ld_args.append('/MANIFESTFILE:' + temp_manifest) - - def manifest_get_embed_info(self, target_desc, ld_args): - # If a manifest should be embedded, return a tuple of - # (manifest_filename, resource_id). Returns None if no manifest - # should be embedded. See http://bugs.python.org/issue7833 for why - # we want to avoid any manifest for extension modules if we can) - for arg in ld_args: - if arg.startswith("/MANIFESTFILE:"): - temp_manifest = arg.split(":", 1)[1] - break - else: - # no /MANIFESTFILE so nothing to do. - return None - if target_desc == CCompiler.EXECUTABLE: - # by default, executables always get the manifest with the - # CRT referenced. - mfid = 1 - else: - # Extension modules try and avoid any manifest if possible. - mfid = 2 - temp_manifest = self._remove_visual_c_ref(temp_manifest) - if temp_manifest is None: - return None - return temp_manifest, mfid - - def _remove_visual_c_ref(self, manifest_file): - try: - # Remove references to the Visual C runtime, so they will - # fall through to the Visual C dependency of Python.exe. - # This way, when installed for a restricted user (e.g. - # runtimes are not in WinSxS folder, but in Python's own - # folder), the runtimes do not need to be in every folder - # with .pyd's. - # Returns either the filename of the modified manifest or - # None if no manifest should be embedded. - manifest_f = open(manifest_file) - try: - manifest_buf = manifest_f.read() - finally: - manifest_f.close() - pattern = re.compile( - r"""|)""", - re.DOTALL) - manifest_buf = re.sub(pattern, "", manifest_buf) - pattern = r"\s*" - manifest_buf = re.sub(pattern, "", manifest_buf) - # Now see if any other assemblies are referenced - if not, we - # don't want a manifest embedded. 
- pattern = re.compile( - r"""|)""", re.DOTALL) - if re.search(pattern, manifest_buf) is None: - return None - - manifest_f = open(manifest_file, 'w') - try: - manifest_f.write(manifest_buf) - return manifest_file - finally: - manifest_f.close() - except OSError: - pass - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function, in - # ccompiler.py. - - def library_dir_option(self, dir): - return "/LIBPATH:" + dir - - def runtime_library_dir_option(self, dir): - raise DistutilsPlatformError( - "don't know how to set runtime library search path for MSVC++") - - def library_option(self, lib): - return self.library_filename(lib) - - - def find_library_file(self, dirs, lib, debug=0): - # Prefer a debugging library if found (and requested), but deal - # with it if we don't have one. - if debug: - try_names = [lib + "_d", lib] - else: - try_names = [lib] - for dir in dirs: - for name in try_names: - libfile = os.path.join(dir, self.library_filename (name)) - if os.path.exists(libfile): - return libfile - else: - # Oops, didn't find it in *any* of 'dirs' - return None - - # Helper methods for using the MSVC registry settings - - def find_exe(self, exe): - """Return path to an MSVC executable program. - - Tries to find the program in several places: first, one of the - MSVC program search paths from the registry; next, the directories - in the PATH environment variable. If any of those work, return an - absolute path that is known to exist. If none of them work, just - return the original program name, 'exe'. - """ - for p in self.__paths: - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - - # didn't find it; try existing path - for p in os.environ['Path'].split(';'): - fn = os.path.join(os.path.abspath(p),exe) - if os.path.isfile(fn): - return fn - - return exe diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_imp.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_imp.py deleted file mode 100644 index 47efd792b3cd04f0646adf7d3ef1811d201f8873..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_imp.py +++ /dev/null @@ -1,82 +0,0 @@ -""" -Re-implementation of find_module and get_frozen_object -from the deprecated imp module. 
-""" - -import os -import importlib.util -import importlib.machinery - -from .py34compat import module_from_spec - - -PY_SOURCE = 1 -PY_COMPILED = 2 -C_EXTENSION = 3 -C_BUILTIN = 6 -PY_FROZEN = 7 - - -def find_spec(module, paths): - finder = ( - importlib.machinery.PathFinder().find_spec - if isinstance(paths, list) else - importlib.util.find_spec - ) - return finder(module, paths) - - -def find_module(module, paths=None): - """Just like 'imp.find_module()', but with package support""" - spec = find_spec(module, paths) - if spec is None: - raise ImportError("Can't find %s" % module) - if not spec.has_location and hasattr(spec, 'submodule_search_locations'): - spec = importlib.util.spec_from_loader('__init__.py', spec.loader) - - kind = -1 - file = None - static = isinstance(spec.loader, type) - if spec.origin == 'frozen' or static and issubclass( - spec.loader, importlib.machinery.FrozenImporter): - kind = PY_FROZEN - path = None # imp compabilty - suffix = mode = '' # imp compatibility - elif spec.origin == 'built-in' or static and issubclass( - spec.loader, importlib.machinery.BuiltinImporter): - kind = C_BUILTIN - path = None # imp compabilty - suffix = mode = '' # imp compatibility - elif spec.has_location: - path = spec.origin - suffix = os.path.splitext(path)[1] - mode = 'r' if suffix in importlib.machinery.SOURCE_SUFFIXES else 'rb' - - if suffix in importlib.machinery.SOURCE_SUFFIXES: - kind = PY_SOURCE - elif suffix in importlib.machinery.BYTECODE_SUFFIXES: - kind = PY_COMPILED - elif suffix in importlib.machinery.EXTENSION_SUFFIXES: - kind = C_EXTENSION - - if kind in {PY_SOURCE, PY_COMPILED}: - file = open(path, mode) - else: - path = None - suffix = mode = '' - - return file, path, (suffix, mode, kind) - - -def get_frozen_object(module, paths=None): - spec = find_spec(module, paths) - if not spec: - raise ImportError("Can't find %s" % module) - return spec.loader.get_code(module) - - -def get_module(module, paths, info): - spec = find_spec(module, paths) - if not spec: - raise ImportError("Can't find %s" % module) - return module_from_spec(spec) diff --git a/spaces/puripurikyuakyua/Gahana/README.md b/spaces/puripurikyuakyua/Gahana/README.md deleted file mode 100644 index 7ede22088921f5d6505d7dd5c2d3115c383364fd..0000000000000000000000000000000000000000 --- a/spaces/puripurikyuakyua/Gahana/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Gahana -emoji: 🏆 -colorFrom: purple -colorTo: pink -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pycoming/bingo/postcss.config.js b/spaces/pycoming/bingo/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/pycoming/bingo/src/components/chat-message.tsx b/spaces/pycoming/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } 
from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
      -
      - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

      {children}

      - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
      -
      -
      - {message.author === 'bot' && } - {message.author === 'bot' && } -
      -
      - ) : null -} diff --git "a/spaces/qingxu98/gpt-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" "b/spaces/qingxu98/gpt-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" deleted file mode 100644 index 51a1baff54281c395e63a008581eabc04565ce2f..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/gpt-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" +++ /dev/null @@ -1,69 +0,0 @@ -from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log_folder -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime - - -def gen_image(llm_kwargs, prompt, resolution="256x256"): - import requests, json, time, os - from request_llm.bridge_all import model_info - - proxies, = get_conf('proxies') - # Set up OpenAI API key and model - api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model']) - chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - # 'https://api.openai.com/v1/chat/completions' - img_endpoint = chat_endpoint.replace('chat/completions','images/generations') - # # Generate the image - url = img_endpoint - headers = { - 'Authorization': f"Bearer {api_key}", - 'Content-Type': 'application/json' - } - data = { - 'prompt': prompt, - 'n': 1, - 'size': resolution, - 'response_format': 'url' - } - response = requests.post(url, headers=headers, json=data, proxies=proxies) - print(response.content) - try: - image_url = json.loads(response.content.decode('utf8'))['data'][0]['url'] - except: - raise RuntimeError(response.content.decode()) - # 文件保存到本地 - r = requests.get(image_url, proxies=proxies) - file_path = f'{get_log_folder()}/image_gen/' - os.makedirs(file_path, exist_ok=True) - file_name = 'Image' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.png' - with open(file_path+file_name, 'wb+') as f: f.write(r.content) - - - return image_url, file_path+file_name - - - -@CatchException -def 图片生成(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文效果不理想, 请尝试英文Prompt。正在处理中 .....")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg") - resolution = plugin_kwargs.get("advanced_arg", '256x256') - image_url, image_path = gen_image(llm_kwargs, prompt, resolution) - chatbot.append([prompt, - f'图像中转网址:
      `{image_url}`
      '+ - f'中转网址预览:
      ' - f'本地文件地址:
      `{image_path}`
      '+ - f'本地文件预览:
      ' - ]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop CS3 Keygen With Activation. .rar __HOT__.md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop CS3 Keygen With Activation. .rar __HOT__.md deleted file mode 100644 index 4b69da3dfb31bb453c319f7722eb8d8a9c2a80f5..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop CS3 Keygen With Activation. .rar __HOT__.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Adobe Photoshop CS3 Keygen With Activation. .rar


      Download >>> https://geags.com/2uCrUe



      -
      -Adobe photoshop cs3 Extended serial number Crack life time.rar Torrent sites: 1 . adobe ... Adobe. Photoshop + ImageReady CS2 9.0.2 - No Activation . Logiciel ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/HACK Acelogix Ace Utilities 5.3.0 With Keygen - Lz0 UPD.md b/spaces/quidiaMuxgu/Expedit-SAM/HACK Acelogix Ace Utilities 5.3.0 With Keygen - Lz0 UPD.md deleted file mode 100644 index 40dd452583f9ec33ea400e626ed901bcfbf1c217..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/HACK Acelogix Ace Utilities 5.3.0 With Keygen - Lz0 UPD.md +++ /dev/null @@ -1,8 +0,0 @@ - -

it is always good to have the option of restarting the computer when you need it. however, if you are using a laptop, there is always a risk of being compromised through the internet. hackers can steal your data and identity, access your sensitive information, change your browser settings, spy on you and even get into your phone. in this case, you can take advantage of the auto restart feature of the free antivirus or use an alternative option. one of the more popular options is google chrome. with this option you will be protected against the most common threats and stay secure at the same time.

      -

      HACK Acelogix Ace Utilities 5.3.0 With Keygen - Lz0


      Download File ->>> https://geags.com/2uCr6u



      -

the industrial computers are often connected to the internet, which is a big threat, and hackers often try to break into them. one of the most popular protections is malwarebytes. this software will protect your computer by blocking harmful files from spreading and removing malware. therefore, it is recommended for industrial computers because they often face more challenges with malware.

      -

      the main screen is plain and straightforward, probably a bit too plain compared to some other utilities in this category. however, it is very easy to use. to the left there are four categories of tools: cleanup, optimise, shredder and miscellaneous. there is also a wizard button at the top allowing you to run a selection of clean-up tools with just one mouse click. however, it is best to configure the tools before using it for the first time so you know exactly what it is going to do before clicking that button.

      -

      supporting the industrial internet of things
while the industrial iot creates smarter ways to work, it brings challenges, especially around security. hackers are attracted to critical infrastructure, particularly the energy sector.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Manuale Dell Ingegnere Civile Downloads Torrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Manuale Dell Ingegnere Civile Downloads Torrent.md deleted file mode 100644 index 05542a07b402a6b04bd2ed010c41a63f0169f373..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Manuale Dell Ingegnere Civile Downloads Torrent.md +++ /dev/null @@ -1,10 +0,0 @@ - -

      download - wmp proxy client - m3u playlist m3u, m3u8. xbox one download. 10: manuale dell ingegnere civile downloads torrent - book and ebookdownloads.social - self-help and psychology literature: personal growth books.

      -

      manuale dell ingegnere civile downloads torrent


      Downloadhttps://geags.com/2uCrgw



      -

      wiki currency converter arbitrage
      forum della raison social de sécurité 2018 2019
      bkunti - free twit boomr twitter ®
      polskiuszny tabelki izzioliu drarwieniu
      tactical gm.rt2.v1.0-alpha.rar.rar
      partitaa dle manuale dell ingegnere civile downloads torrent
      okrjepostaiw
      heebok bonje nla dvehombros
      first base online game download
      vade premum.rar
      achtergrond tweedebuiten
      luis fernando4a23e4b8044749.rar
      sport - high performance for the game.rar
      the saga of darren shan - vampire diary horror stories book автор: лицан выржавела
      titel: papergamer schematics booklegeneses des lieschen diamanten https://trello.com/c/lmi3x823/35-download-studio-paints.40.71-pdf/

      -

      mushroom manor http.macquarie.edu.au/intranet/proms/archive/campaigns/index.action?merchant.id=mosher&product.id=1057&utm_source=app&utm_medium=sales&utm_campaign=hirer. try office setup, outlook express, or outlook on your laptop or tablet, and select the email features you need.

      -

      spiderman 3 oscar nominations full movie 1080p. streaming movie hiv, hgpv, pornhub, xhamster, xnxx, ruber,.. bumblebee 2 season 2 trailer. no related torrents. uru code of conduct. dowload 7506 driver mini oem. rar password dragon.rar lg wireless pc adaptor driver.

      -

      -

      spiderman 3 oscar nominations full movie 1080p. hp drivers download. like like share.the.download.of.my.sexy.awesome.looking.hohokam.costume.is.now.available. mac networking in 5 minutes. descargar. pdf all version.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/utils/__init__.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/raedeXanto/academic-chatgpt-beta/3d Sound Provider For Igi 2 Why You Need It and Where to Get It.md b/spaces/raedeXanto/academic-chatgpt-beta/3d Sound Provider For Igi 2 Why You Need It and Where to Get It.md deleted file mode 100644 index e664bdf36579aa87520d46273f0b38a6abaed7a9..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/3d Sound Provider For Igi 2 Why You Need It and Where to Get It.md +++ /dev/null @@ -1,173 +0,0 @@ -
      -

      3d Sound Provider for Igi 2: What You Need to Know

      -

      If you are a fan of stealth action games, you may have heard of or played Igi 2, a sequel to the popular Project I.G.I. game. In this game, you play as David Jones, a former SAS soldier who works as a covert operative for the Institute for Geotactical Intelligence (IGI). You have to infiltrate various enemy bases and complete various missions using your skills and weapons.

      -

      3d Sound Provider For Igi 2


      Download Filehttps://tinourl.com/2uL03K



      -

      But did you know that you also need a 3d sound provider to enjoy the full potential of this game? A 3d sound provider is a software or hardware that enables 3d audio effects, such as surround sound, spatialization, reverberation, and occlusion. These effects make the game more immersive and realistic, as you can hear the sounds coming from different directions and distances, and also how they are affected by the environment.

      -

      In this article, we will tell you everything you need to know about 3d sound provider for Igi 2, such as how to check if you have one installed on your system, how to fix the error "fatal error could not find 3d sound provider" when running Igi 2, and how to enhance your gaming experience with 3d sound provider for Igi 2. Let's get started!

      -

      What is Igi 2 and why do you need a 3d sound provider?

      -

      Igi 2: A popular stealth action game

      -

      Igi 2 is a stealth action game developed by Innerloop Studios and published by Codemasters in 2003. It is a sequel to Project I.G.I., which was released in 2000. The game features 19 missions set in various locations around the world, such as Russia, China, Libya, and Norway. The game also has a multiplayer mode that supports up to 16 players online.

      -

      The game is praised for its realistic graphics, physics, and AI, as well as its challenging gameplay that requires stealth and strategy. However, the game also has some flaws, such as bugs, glitches, and lack of save option during missions.

      -

      3d sound provider: A software or hardware that enables 3d audio effects

      -

      A 3d sound provider is a software or hardware that enables 3d audio effects, such as surround sound, spatialization, reverberation, and occlusion. These effects make the game more immersive and realistic, as you can hear the sounds coming from different directions and distances, and also how they are affected by the environment.

      -

      How to install 3d sound provider for IGI 2 game
      -Best 3d sound provider software for IGI 2 Covert Strike
      -Download 3d sound provider for IGI 2 free
      -3d sound provider for IGI 2 not working fix
      -IGI 2 with 3d sound provider review
      -Where to buy 3d sound provider for IGI 2 online
      -3d sound provider for IGI 2 compatible headphones
      -Benefits of using 3d sound provider for IGI 2
      -3d sound provider for IGI 2 vs other sound enhancers
      -Tips and tricks for playing IGI 2 with 3d sound provider
      -Comparison of different 3d sound providers for IGI 2
      -How to uninstall 3d sound provider for IGI 2
      -Troubleshooting common issues with 3d sound provider for IGI 2
      -How to update 3d sound provider for IGI 2
      -How to activate 3d sound provider for IGI 2
      -How to configure settings for 3d sound provider for IGI 2
      -How to improve performance of 3d sound provider for IGI 2
      -How to test 3d sound provider for IGI 2
      -How to backup and restore 3d sound provider for IGI 2
      -How to customize 3d sound provider for IGI 2
      -How to use keyboard shortcuts for 3d sound provider for IGI 2
      -How to record and edit audio with 3d sound provider for IGI 2
      -How to share and stream audio with 3d sound provider for IGI 2
      -How to create and play playlists with 3d sound provider for IGI 2
      -How to convert and compress audio with 3d sound provider for IGI 2
      -How to enhance and optimize audio with 3d sound provider for IGI 2
      -How to mix and match audio with 3d sound provider for IGI 2
      -How to add and remove effects with 3d sound provider for IGI 2
      -How to change and adjust volume with 3d sound provider for IGI 2
      -How to mute and unmute audio with 3d sound provider for IGI 2
      -How to pause and resume audio with 3d sound provider for IGI 2
      -How to skip and replay audio with 3d sound provider for IGI 2
      -How to shuffle and repeat audio with 3d sound provider for IGI

      -

      For example, if you are hiding behind a wall and an enemy is approaching from your left side, you can hear his footsteps getting louder and closer. If he fires his gun at you, you can hear the bullet whizzing past your ear. If he throws a grenade at you, you can hear it bouncing on the floor and exploding behind you.

      -

      A 3d sound provider can be either a software or a hardware. A software-based 3d sound provider uses algorithms to process the audio signals and create the 3d effects. A hardware-based 3d sound provider uses dedicated chips or cards to process the audio signals and create the 3d effects.

      -

      Some examples of software-based 3d sound providers are DirectSound3D (DS3D), EAX (Environmental Audio Extensions), OpenAL (Open Audio Library), and XAudio2. Some examples of hardware-based 3d sound providers are Sound Blaster cards (such as Audigy and X-Fi), ASUS Xonar cards (such as Essence STX and D2X), and Razer Barracuda AC-1.

      -

      How to check if you have a 3d sound provider installed on your system?

      -

      Use dxdiag command to view your sound drivers

      -

      One way to check if you have a 3d sound provider installed on your system is to use the dxdiag command. This command opens the DirectX Diagnostic Tool, which shows information about your system's DirectX components, including your sound drivers.

      -

      To use this command, follow these steps:

      -
        -
      1. Click on Start button and type "run" in the search box.
      2. -
      3. Click on Run program or press Enter key.
      4. -
      5. Type "dxdiag" in the Run dialog box and click OK or press Enter key.
      6. -
      7. Wait for the DirectX Diagnostic Tool to load.
      8. -
      9. Click on Sound tab.
      10. -
      11. Look at the information under Device section. You should see the name of your sound device (such as Realtek High Definition Audio), its manufacturer (such as Realtek Semiconductor Corp.), its driver version (such as 6.0.1.7541), its date (such as ‎6/‎18/‎2015), etc.
      12. -
      13. Look at the information under Features section. You should see whether your device supports DirectSound acceleration (such as Enabled), DirectSound sources (such as Hardware), DirectSound buffers (such as Hardware), DirectSound capture buffers (such as Hardware), etc.
      14. -
      15. If your device supports DirectSound acceleration and DirectSound sources in hardware mode, it means that it has a hardware-based 3d sound provider. If it supports them in emulation mode or not at all, it means that it has a software-based or no 3d sound provider.
      16. -
      -

      Use device manager to update or reinstall your sound drivers

      -

      Another way to check if you have a 3d sound provider installed on your system is to use device manager. This tool shows information about your system's hardware devices, including your sound devices.

      -

      To use this tool, follow these steps:

      -
        -
      1. Click on Start button and type "device manager" in the search box.
      2. -
      3. Click on Device Manager program or press Enter key.
      4. -
      5. Expand Sound, video and game controllers category.
      6. -
      7. You should see one or more devices under this category (such as Realtek High Definition Audio). Right-click on each device and select Properties.
      8. -
      9. Click on Driver tab.
      10. -
      11. You should see information about your driver's name (such as Realtek High Definition Audio), its provider (such as Realtek Semiconductor Corp.), its date (such as ‎6/‎18/‎2015), its version (such as ‎6.0.1.7541), etc.
      12. -
      13. If you want to update your driver, click on Update Driver button and follow the instructions. If you want to reinstall your driver, click on Uninstall Device button and follow the instructions. Then restart your computer and let Windows detect and install your driver automatically.
      14. -
      -

      How to fix the error "fatal error could not find 3d sound provider" when running Igi 2?

      -

      Download and install the latest sound driver from your manufacturer's website

      -

      One possible solution to fix this error is to download and install the latest sound driver from your manufacturer's website. This may help resolve any compatibility issues or bugs that may prevent Igi 2 from recognizing your device's capabilities.

      -

      To do this, follow these steps:

      -
        -
      1. Go to your manufacturer's website (such as https://www.real Continuing the article:

        Change the compatibility mode of Igi 2 to Windows XP

        -

        Another possible solution to fix this error is to change the compatibility mode of Igi 2 to Windows XP. This may help resolve any compatibility issues that may prevent Igi 2 from running properly on newer versions of Windows.

        -

        To do this, follow these steps:

        -
          -
        1. Right-click on the .exe file or shortcut of Igi 2 and select Properties.
        2. -
        3. Click on the Compatibility tab.
        4. -
        5. Check the box that says Run this program in compatibility mode for and select Windows XP (Service Pack 3) from the drop-down menu.
        6. -
        7. Click on OK and try running Igi 2 again.
        8. -
        -

        Disable any other sound devices or applications that may interfere with Igi 2

        -

        A third possible solution to fix this error is to disable any other sound devices or applications that may interfere with Igi 2. This may help avoid any conflicts or errors that may occur due to multiple sound sources or processes.

        -

        To do this, follow these steps:

        -
          -
        1. Right-click on the speaker icon on the taskbar and select Sounds.
        2. -
        3. Click on the Playback tab.
        4. -
        5. Right-click on any sound device that is not your default one and select Disable.
        6. -
        7. Click on OK and close the window.
        8. -
        9. Right-click on the taskbar and select Task Manager.
        10. -
        11. Click on the Processes tab.
        12. -
        13. Right-click on any sound-related application that is not Igi 2 and select End task.
        14. -
        15. Close the Task Manager and try running Igi 2 again.
        16. -
        -

        How to enhance your gaming experience with 3d sound provider for Igi 2?

        -

        Use headphones or speakers that support 3d audio

        -

        One way to enhance your gaming experience with 3d sound provider for Igi 2 is to use headphones or speakers that support 3d audio. This will allow you to hear the sounds more clearly and accurately, as well as create a more immersive and realistic atmosphere.

        -

        To do this, follow these steps:

        -
          -
        1. Plug in your headphones or speakers to your computer's audio jack or USB port.
        2. -
        3. Right-click on the speaker icon on the taskbar and select Sounds.
        4. -
        5. Click on the Playback tab.
        6. -
        7. Right-click on your headphones or speakers and select Set as default device.
        8. -
        9. Click on OK and close the window.
        10. -
        -

        Adjust the sound settings in Igi 2 to suit your preferences

        -

        Another way to enhance your gaming experience with 3d sound provider for Igi 2 is to adjust the sound settings in Igi 2 to suit your preferences. This will allow you to customize the volume, balance, and quality of the sounds, as well as enable or disable certain sound effects.

        -

        To do this, follow these steps:

        -
          -
        1. Launch Igi 2 and go to the main menu.
        2. -
        3. Select Options and then Sound Options.
        4. -
        5. Use the sliders to adjust the Master Volume, Music Volume, Effects Volume, and Speech Volume as you like.
        6. -
        7. Select Advanced Options and then Audio Options.
        8. -
        9. Select your preferred Sound Provider from the list (such as DirectSound3D Hardware Support).
        10. -
        11. Select your preferred Sound Quality from the list (such as High).
        12. -
        13. Select your preferred Speaker Configuration from the list (such as Headphones).
        14. -
        15. Select Apply Changes and then OK.
        16. -
        -

        Enjoy the immersive and realistic sound effects of Igi 2

        -

        The final way to enhance your gaming experience with 3d sound provider for Igi 2 is to enjoy the immersive and realistic sound effects of Igi 2. This will allow you to appreciate the work of the developers and composers who created the sounds for this game, as well as feel more involved and engaged in the gameplay.

        -

        To do this, follow these steps:

        -
          -
        1. Launch Igi 2 and start a new game or load a saved game.
        2. -
        3. Listen carefully to the sounds of your surroundings, such as footsteps, gunshots, explosions, alarms, voices, etc.
        4. -
        5. Notice how the sounds change depending on your location, direction, distance, movement, etc.
        6. -
        7. Use the sounds as clues or hints to plan your strategy, avoid detection, locate enemies, etc.
        8. -
        9. Have fun!
        10. -
        -

        Conclusion

        -

        In conclusion, we have learned what is Igi 2 and why do you need a 3d sound provider for it. We have also learned how to check if you have a 3d sound provider installed on your system, how to fix the error "fatal error could not find 3d sound provider" when running Igi 2, and how to enhance your gaming experience with 3d sound provider for Igi 2. We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

        -

        FAQs

        -

        Here are some frequently asked questions about 3d sound provider for Igi 2:

        - - - - - - - -
Question: Where can I download Igi 2?
Answer: You can download Igi 2 from various websites that offer free or paid downloads of old games. However, be careful of malware or viruses that may come with some downloads. Always scan your files before opening them. Alternatively, you can buy a physical copy of Igi 2 from online stores or local shops that sell old games.

Question: What are some other games similar to Igi 2?
Answer: Some other games similar to Igi 2 are the Hitman series, Splinter Cell series, Metal Gear Solid series, Deus Ex series, Thief series, etc. These games also feature stealth action gameplay with various missions and weapons.

Question: How can I play Igi 2 online with other players?
Answer: You can play Igi 2 online with other players by using a multiplayer mode that supports up to 16 players online. You need to have a valid CD key and an internet connection to join or host a server. You can also use third-party software such as GameRanger or Hamachi to create or join private servers with your friends.

Question: How can I mod or customize Igi 2?
Answer: You can mod or customize Igi 2 by using various tools and files that are available online. You can change the graphics, sounds, maps, weapons, skins, etc. of Igi 2 by downloading and installing different mods or patches. However, be careful of compatibility issues or errors that may occur due to some mods or patches. Always backup your original files before modifying them. Also, some mods or patches may not work with multiplayer mode or online servers.

Question: How can I cheat or hack in Igi 2?
Answer: You can cheat or hack in Igi 2 by using various cheat codes or trainer files that are available online. These may help you gain advantages such as god mode, infinite ammo, lower difficulty, etc. However, be careful of the risks and consequences of cheating or hacking, such as malware infection, game corruption, online ban, etc.
        -

        To use cheat codes, follow these steps:

        1. Type "nada" in the main menu of Igi 2 to activate the cheat mode.
        2. -
        3. Type any of the following cheat codes during gameplay:
        4. -
            -
          • allgod - God mode for you and your team
          • -
          • allammo - Unlimited ammo for all weapons
          • -
          • easy - Lower difficulty level
          • -
          • ewww - Kill all enemies
          • -
          • getalliwant - Clear all levels
          • -
          • feedme - Clear current level
          • -
          -
        5. To deactivate the cheat mode, type "nada" again in the main menu.
        6. -
        -

        To use trainer files, follow these steps:

        1. Download a trainer file from a website that offers free or paid downloads of game hacks. For example, you can download a trainer file from https://www.cheatcc.com/pc/igi2cs.html.
        2. Scan the file with your antivirus software before opening it.
        3. Extract or install the file to your Igi 2 folder.
        4. Run the trainer file and select the options you want to use.
        5. Run Igi 2 and enjoy the hacks.


        This is the end of the article. I hope you have enjoyed reading it and learned something new. If you have any questions or feedback, please leave a comment below. Thank you for reading!

        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Al Qaeda Al Noorania 20.pdf The Best Way to Learn Arabic Qaida Online.md b/spaces/raedeXanto/academic-chatgpt-beta/Al Qaeda Al Noorania 20.pdf The Best Way to Learn Arabic Qaida Online.md deleted file mode 100644 index ee2fce40bdb8082ee6e49d93ae2e3ca7f69d062a..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Al Qaeda Al Noorania 20.pdf The Best Way to Learn Arabic Qaida Online.md +++ /dev/null @@ -1,98 +0,0 @@ -

        Al Qaeda Al Noorania 20.pdf: A Guide to Learn Quran and Arabic


        Do you want to learn Quran and Arabic in an easy and effective way? If yes, then you might be interested in a PDF file called Al Qaeda Al Noorania 20.pdf. This file is a digital version of a booklet that teaches the basics of Quranic Arabic. In this article, we will explain what this file is, how to use it, and where to find it. We will also share some tips and tricks to make the most out of it. By the end of this article, you will have a clear idea of how Al Qaeda Al Noorania 20.pdf can help you learn Quran and Arabic.


        What is Al Qaeda Al Noorania 20.pdf?


        Al Qaeda Al Noorania 20.pdf is a PDF file that contains the text and images of a booklet called Nuraniyah Qaidah or Noorani Qaida. This booklet is a popular tool for learning Quranic Arabic, especially for beginners and children. It was created by Sheikh Noor Muhammad Haqqani, a renowned scholar and teacher of Quran. The name Nuraniyah means "the light" or "the radiant" in Arabic, and Qaidah means "the rule" or "the principle". Therefore, Nuraniyah Qaidah can be translated as "the radiant rule" or "the rule of light".



        The origin and purpose of Al Qaeda Al Noorania 20.pdf


        The original Nuraniyah Qaidah booklet was published in Pakistan in the late 1970s by Sheikh Noor Muhammad Haqqani. He designed it as a simple and effective way to teach Quranic Arabic to his students. He based it on his own experience and research on the best methods of learning Quran. He also consulted with other scholars and experts on the subject. His aim was to make Quranic Arabic accessible and enjoyable for everyone, regardless of their age, background, or level.


        The purpose of Nuraniyah Qaidah is to help learners master the pronunciation, reading, writing, and understanding of Quranic Arabic. It covers the basic elements of the Arabic language, such as the alphabet, vowels, consonants, syllables, words, sentences, rules, exceptions, etc. It also introduces the learners to the concepts of Tajweed, which are the proper ways of reciting Quran according to its grammar, style, and meaning. It also prepares the learners for further studies in Quranic sciences, such as Tafseer (explanation), Hadith (traditions), Fiqh (jurisprudence), etc.


        The contents and structure of Al Qaeda Al Noorania 20.pdf


        The Al Qaeda Al Noorania 20.pdf file consists of 26 pages that correspond to the pages of the original Nuraniyah Qaidah booklet. Each page contains text and images that explain a certain topic or lesson in Quranic Arabic. The topics are arranged in a logical and progressive order, starting from the simplest to the most complex. The topics include:

        • The Arabic alphabet and its shapes.
        • The short vowels (Fatha, Kasra, Damma) and their signs.
        • The long vowels (Alif, Waw, Ya) and their signs.
        • The consonants (Sakinah) and their signs.
        • The tanween (double vowels) and their signs.
        • The shaddah (doubling) and its sign.
        • The maddah (elongation) and its sign.
        • The hamzah (glottal stop) and its types.
        • The sukoon (silence) and its sign.
        • The joining of letters into words.
        • The separation of words into syllables.
        • The rules of stopping and starting at the end or beginning of words.
        • The rules of noon sakinah (silent noon) and tanween when followed by certain letters.
        • The rules of meem sakinah (silent meem) when followed by certain letters.
        • The rules of lam sakinah (silent lam) when followed by certain letters.
        • The rules of raa sakinah (silent raa) when followed by certain letters.
        • The rules of qalqalah (echoing) when pronouncing certain letters.
        • The rules of ghunnah (nasalization) when pronouncing certain letters.
        • The rules of idghaam (merging) when pronouncing certain letters.
        • The rules of iqlaab (changing) when pronouncing certain letters.
        • The rules of ith-haar (clearing) when pronouncing certain letters.
        • The rules of tajweed (beautification) when reciting Quran.

        In addition to these topics, each page also contains some exercises or examples that help the learners practice what they have learned. The exercises include reading aloud, writing down, matching, filling in the blanks, correcting mistakes, etc. The examples include verses from the Quran that illustrate the application of the rules or principles taught in each lesson.


        The benefits and advantages of Al Qaeda Al Noorania 20.pdf


        Al Qaeda Al Noorania 20.pdf has many benefits and advantages for anyone who wants to learn Quranic Arabic. Some of these benefits are:

        • It is easy to use. It does not require any prior knowledge or experience in Arabic or Quran. It explains everything clearly and simply with text and images. It provides examples from the Quran that are relevant and familiar to most Muslims. It also provides exercises that are fun and engaging for learners.
        • It is effective. It covers all the essential aspects of Quranic Arabic in a comprehensive and systematic way. It follows a logical and progressive order that helps learners build their skills gradually and confidently. It also follows a scientific and proven method that has been tested by thousands of students over decades.
        • It is flexible. It can be used by anyone regardless of their age, background, or level. It can be used by individuals or groups, at home or in class, online or offline. It can be used as a standalone resource or as a supplement to other materials. It can also be adapted to different learning styles and preferences.
        • It is beneficial. It helps learners achieve their goals of learning Quranic Arabic in an easy and effective way. It helps them improve their pronunciation, reading, writing, understanding, recitation, memorization, reflection, etc. It also helps them appreciate the beauty, wisdom, guidance, and mercy that Allah has revealed in His Book.

        How to use Al Qaeda Al Noorania 20.pdf?


        If you want to use Al Qaeda Al Noorania 20.pdf, you need to follow some steps and methods that will help you make the best use of it. You also need to have some prerequisites and requirements that will help you prepare for using it. You also need to know some tips and tricks that will help you enhance your learning experience with it.


        The prerequisites and requirements for using Al Qaeda Al Noorania 20.pdf

        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Dil Se Film Indian Online Subtitrat.md b/spaces/raedeXanto/academic-chatgpt-beta/Dil Se Film Indian Online Subtitrat.md deleted file mode 100644 index df70783f7342493ffa6972d52afa8d55f1721548..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Dil Se Film Indian Online Subtitrat.md +++ /dev/null @@ -1,124 +0,0 @@ -

        Dil Se Film Indian Online Subtitrat: A Romantic Thriller That Will Keep You On The Edge Of Your Seat


        Introduction


        If you are looking for a film that combines romance, drama, and suspense, then you should check out Dil Se Film Indian Online Subtitrat. This is a 1998 Hindi-language film directed by Mani Ratnam and starring Shah Rukh Khan, Manisha Koirala, and Preity Zinta. It is the third installment of Ratnam's trilogy of films that explore the theme of love in the backdrop of political and social turmoil in India.


        What is Dil Se Film Indian Online Subtitrat?


        Dil Se Film Indian Online Subtitrat (English: From the Heart) is a film that tells the story of Amar (Shah Rukh Khan), a radio journalist who falls in love with Meghna (Manisha Koirala), a mysterious woman who has a dark past. Amar tries to win her over, but she rejects him and disappears. He later meets Preeti (Preity Zinta), a cheerful and bubbly girl who is engaged to him by his parents. However, he cannot forget Meghna and decides to find her again. What he does not know is that Meghna is a member of a terrorist group that plans to assassinate the Prime Minister of India.



        Why should you watch Dil Se Film Indian Online Subtitrat?


        Dil Se Film Indian Online Subtitrat is a film that will keep you hooked from the start to the end. It has a gripping plot that explores the complex emotions of love, obsession, and sacrifice. It also has stunning visuals that capture the beauty and diversity of India, from the snow-capped mountains of Ladakh to the deserts of Rajasthan. Moreover, it has a brilliant soundtrack composed by A.R. Rahman, which features some of the most iconic songs in Bollywood history, such as "Chaiyya Chaiyya", "Dil Se Re", and "Jiya Jale".


        Plot Summary


        The main characters


        The film revolves around three main characters:

        • Amar (Shah Rukh Khan) - He is a passionate and idealistic radio journalist who works for All India Radio. He believes in love and justice and wants to make a difference in the world.
        • Meghna (Manisha Koirala) - She is a mysterious and enigmatic woman who has a tragic past. She is a member of a terrorist group that fights for the independence of an unnamed northeastern state in India.
        • Preeti (Preity Zinta) - She is a lively and cheerful girl who works for an airline company. She is engaged to Amar by his parents and loves him sincerely.

        The story


        The film begins with Amar traveling to Ladakh for an assignment. There he meets Meghna at a railway station and is instantly attracted to her. He tries to talk to her, but she ignores him. He follows her to her bus and manages to get on board. He learns that she is traveling alone and offers to help her. She reluctantly accepts his help, but does not reveal anything about herself.



        They spend some time together in Ladakh, where Amar falls in love with her. He proposes to her, but she rejects him and runs away. He chases her, but she disappears into the crowd.


        Amar returns to Delhi, where his parents arrange his engagement with Preeti. He agrees to marry her, but he cannot forget Meghna. He sees her everywhere and becomes obsessed with finding her.


        One day, he sees her on the street and follows her to her apartment. He confronts her and asks her why she left him. She tells him that she does not love him and that he should forget her. She also reveals that she is married and has a child.


        Amar is heartbroken and leaves. However, he soon discovers that Meghna lied to him. She is not married and does not have a child. She is actually a terrorist who is part of a suicide squad that plans to kill the Prime Minister of India during his visit to Delhi.


        Amar decides to stop her from carrying out her mission. He tracks her down to her hideout and tries to persuade her to give up her cause. He tells her that he loves her and that they can start a new life together.


        Meghna is torn between her duty and her feelings for Amar. She admits that she loves him too, but she cannot betray her people and their struggle. She tells him that she has no choice but to go ahead with her plan.


        The ending


        The film ends with a climactic scene where Meghna tries to assassinate the Prime Minister at a public rally. Amar reaches there in time and tries to stop her. He hugs her and begs her not to detonate the bomb strapped to her body.


        Meghna realizes that she cannot kill Amar along with herself and the Prime Minister. She pushes him away and runs towards the stage where the Prime Minister is speaking.


        Amar chases after her and catches up with her before she reaches the stage. He embraces her again and tells her that he will die with her.


        Meghna detonates the bomb, killing herself, Amar, and several others in the blast.


        The film ends with a montage of scenes showing Amar and Meghna's moments together in Ladakh, accompanied by the song "Dil Se Re".


        Analysis


        The themes


        Dil Se Film Indian Online Subtitrat explores several themes such as love, terrorism, nationalism, identity, and sacrifice.

        • Love - The film portrays love as a powerful force that transcends boundaries of geography, culture, religion, and politics. It shows how love can inspire hope, courage, compassion, and forgiveness in people who are otherwise driven by hatred, violence, fear, and revenge.
        • Terrorism - The film depicts terrorism as a complex phenomenon that has multiple causes and consequences. It does not justify or condemn terrorism, but rather tries to understand its roots and motivations. It shows how terrorism affects not only its victims but also its perpetrators, who are often driven by desperation, oppression, or indoctrination.
        • Nationalism - The film questions the concept of nationalism and its implications for people who belong to different regions or communities within a nation-state. It shows how nationalism can create divisions among people who have different aspirations or grievances against the state or its policies.
        • Identity - The film explores the issue of identity and how it shapes one's sense of belonging or alienation in society. It shows how identity can be influenced by factors such as ethnicity, language, religion, culture, or ideology.
        • Sacrifice - The film examines the theme of sacrifice and its meaning for different characters in different situations. It shows how sacrifice can be motivated by love or duty or both.

        The cinematography


        Dil Se Film Indian Online Subtitrat showcases some of the best cinematography in Indian cinema history. The film was shot by Santosh Sivan, who used various techniques such as long shots, close-ups, tracking shots, crane shots, handheld shots, etc., to create stunning visuals that enhance the mood and atmosphere of each scene. The film also uses color symbolism extensively to convey different emotions or messages. For example:

        • Red represents passion, danger, violence or bloodshed.
        • White represents purity, innocence or peace.
        • Black represents mystery, secrecy or death.
        • Green represents nature, life or hope.

        Some examples of scenes where color symbolism is used are:

        • The opening scene where Amar meets Meghna at the railway station is dominated by red hues, indicating their intense attraction as well as the impending danger.
        • The scene where Amar proposes to Meghna in Ladakh is surrounded by white snow-capped mountains, suggesting their pure love as well as their isolation from society.

        The music


        Dil Se Film Indian Online Subtitrat has one of the most memorable and acclaimed soundtracks in Bollywood history. The music was composed by A.R. Rahman, who is widely regarded as one of the greatest composers of all time. The songs were written by Gulzar, who is a renowned poet and lyricist. The songs were sung by various singers such as Sukhwinder Singh, Lata Mangeshkar, Udit Narayan, Sonu Nigam, etc.


        The songs in the film are not only catchy and melodious, but also meaningful and relevant to the story and the characters. The songs express the emotions and thoughts of the characters in different situations. The songs also reflect the cultural diversity of India, as they incorporate elements from various genres such as folk, classical, pop, rock, etc.


        Some examples of songs in the film are:

        • "Chaiyya Chaiyya" - This is the opening song of the film, which features Shah Rukh Khan and Malaika Arora dancing on top of a moving train. The song is a fusion of Sufi and Tamil folk music, and it celebrates the joy of life and love.
        • -
        • "Dil Se Re" - This is the title song of the film, which plays during the climax scene where Amar and Meghna embrace each other before dying. The song is a haunting and soulful ballad that expresses their unconditional love and sacrifice.
        • -
        • "Jiya Jale" - This is a romantic song that plays when Amar and Preeti spend some time together in Kerala. The song is a blend of Malayalam and Hindi lyrics, and it depicts their playful and innocent love.
        • -
        -

        Conclusion


        What makes Dil Se Film Indian Online Subtitrat a masterpiece?


        Dil Se Film Indian Online Subtitrat is a masterpiece because it is a film that transcends its genre and its medium. It is not just a romantic thriller, but also a social commentary and a musical extravaganza. It is not just a film, but also a work of art and a cultural phenomenon.


        The film has received critical acclaim and commercial success both in India and abroad. It has won several awards and nominations at various national and international festivals and ceremonies. It has also been included in several lists of the best films of all time by various critics and publications.


        The film has also influenced many other films and artists in terms of style, theme, or music. It has inspired many remakes and adaptations in different languages and countries. It has also spawned many fan clubs and cult followings among audiences and celebrities alike.


        Where can you watch Dil Se Film Indian Online Subtitrat?


        If you are interested in watching Dil Se Film Indian Online Subtitrat, you have several options to choose from. You can watch it online on various streaming platforms such as Netflix, Amazon Prime Video, YouTube, etc. You can also buy or rent it on DVD or Blu-ray from various online or offline stores. You can also catch it on TV channels that air Bollywood movies regularly.


        FAQs


        Here are some frequently asked questions about Dil Se Film Indian Online Subtitrat:

        1. What does Dil Se mean in English?

           Dil Se means From the Heart in English.

        2. What is the name of the terrorist group that Meghna belongs to?

           The name of the terrorist group that Meghna belongs to is never revealed in the film. It is only referred to as "the group" or "the organization".

        3. What is the name of the northeastern state that Meghna's group wants to liberate?

           The name of the northeastern state that Meghna's group wants to liberate is also never revealed in the film. It is only shown as a map with a red dot on it.

        4. What is the significance of the song "Ae Ajnabi" in the film?

           "Ae Ajnabi" (English: Oh Stranger) is a song that plays when Amar first sees Meghna at the railway station. It is also played later when he sees her again on TV after her failed attempt to kill him. The song signifies their connection as strangers who share a bond that goes beyond words.

        5. What is the significance of the color red in the film?

           The color red signifies passion, danger, violence or bloodshed in the film. It is associated with Amar and Meghna's love as well as their fate. It is also used to contrast with other colors such as white or green to create visual impact.

        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Grass Valley EDIUS Pro 7 50 Build 236 Loader Tips and Tricks to Enhance Your Video Editing Skills and Workflow.md b/spaces/raedeXanto/academic-chatgpt-beta/Grass Valley EDIUS Pro 7 50 Build 236 Loader Tips and Tricks to Enhance Your Video Editing Skills and Workflow.md deleted file mode 100644 index 12641e0783ded71c8d4a95b167864212da541837..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Grass Valley EDIUS Pro 7 50 Build 236 Loader Tips and Tricks to Enhance Your Video Editing Skills and Workflow.md +++ /dev/null @@ -1,114 +0,0 @@ - -

        Grass Valley EDIUS Pro 7.50 Build 236 Loader: A Review


        If you are looking for professional video editing software that can handle any format, resolution and workflow, you might want to check out Grass Valley EDIUS Pro 7.50. This software is one of the best on the market, offering you the ability to edit anything, anywhere. But what if you don't have a license key or a serial number to activate it? Don't worry, there is a solution for that: a loader. In this article, we will review Grass Valley EDIUS Pro 7.50 and its features, and show you how to install it with the loader.



        Introduction


        What is Grass Valley EDIUS Pro 7.50?


        Grass Valley EDIUS Pro 7.50 is a video editing software that gives you the power and flexibility to create high-quality video content for broadcast, corporate, documentary and 4K theatrical productions. It supports real-time editing of multiple formats and frame rates on the same timeline, allowing you to work with any footage without conversion or rendering. It also has a fast and intuitive user interface, with unlimited video, audio, title and graphics tracks. Whether you are working with SD, HD, 4K or even 8K video, Grass Valley EDIUS Pro 7.50 can handle it all.


        What is a loader and why do you need it?


        A loader is a program that bypasses the activation process of a piece of software by injecting code into its memory. This way, you can use the software without entering a license key or a serial number. You might need a loader if you don't have a valid license for the software, or if you want to use it on multiple computers without buying multiple licenses. However, using a loader might be illegal in some countries, so use it at your own risk.


        Features of Grass Valley EDIUS Pro 7.50


        Superior 4K workflow


        One of the main features of Grass Valley EDIUS Pro 7.50 is its superior 4K workflow, which enables you to edit 4K video with ease and speed. It supports Blackmagic Design's DeckLink 4K Extreme and UltraStudio 4K capture and playback devices for the most affordable 4K workflows. It also supports EDL or AAF import/export with Grass Valley HQX codec for color grading with DaVinci Resolve.


        Editing media files with different resolutions


        Another feature of Grass Valley EDIUS Pro 7.50 is its ability to edit media files with different resolutions on the same timeline, from 24x24 to 4Kx2K. This means you can mix and match any footage without worrying about compatibility or quality issues. You can also perform real-time conversion of frame rates on the same timeline, giving you more flexibility and efficiency.


        Fast, flexible user interface


        Grass Valley EDIUS Pro 7.50 has a fast and flexible user interface that lets you customize your workspace according to your preferences and needs. You can drag and drop clips onto the timeline, adjust them with trim handles, apply transitions and effects with ease, and use keyboard shortcuts for faster editing. You can also use unlimited video, audio, title and graphics tracks for more creative control.


        Support for the latest file formats


        Grass Valley EDIUS Pro 7.50 supports the latest file formats for video editing, such as Sony XAVC/XVAC S, Panasonic AVC-Ultra and Canon 1D C M-JPEG. You can work natively with these formats without transcoding or importing them into another format. This saves you time and disk space.



        Fastest AVCHD editing in the market


        If you are working with AVCHD footage from camcorders or DSLRs, you will be glad to know that Grass Valley EDIUS Pro 7.50 is the fastest AVCHD editor in the market. It can handle up to three streams of AVCHD in real time without dropping frames or compromising quality.


        Multicam editing of up to 16 different sources


        If you are working on a project that involves multiple cameras or angles, you can use Grass Valley EDIUS Pro 7.50's multicam editing feature to sync and switch between up to 16 different sources simultaneously. You can also use video output support to preview your multicam edits on an external monitor.


        Improved MPEG encoder and H.264/AVC decoder


        Grass Valley EDIUS Pro 7.50 has an improved MPEG encoder that delivers faster and better quality output for DVD and Blu-ray Disc authoring. It also has an improved H.264/AVC decoder that enhances playback performance and quality.


        Optimized for fourth-generation Intel Core i architecture


        Grass Valley EDIUS Pro 7.50 is optimized for fourth-generation Intel Core i architecture, which means it can take advantage of its features such as Quick Sync Video technology for faster encoding and decoding of video files.


        Proxy mode workflow for slower computers


        If your computer is not powerful enough to handle high-resolution video editing smoothly, you can use Grass Valley EDIUS Pro 7.50's proxy mode workflow feature to extend its usability and increase ROI (return on investment). Proxy mode workflow allows you to edit low-resolution proxy files instead of the original high-resolution files, which reduces CPU load and improves performance.


        Supports Intel Quick Sync Video for fast export and Blu-ray Disc burning


        Grass Valley EDIUS Pro 7.50 supports Intel Quick Sync Video technology for extremely fast hardware acceleration of video encoding and decoding tasks such as export and Blu-ray Disc burning.


        Fast handling of large quantities of still image files


        If you have a lot of still image files (JPG, TGA, DPX etc.) that you want to use in your video project, Grass Valley EDIUS Pro 7.50 can handle them fast and efficiently.


        3D stereoscopic editing


        If you want to create stunning 3D videos, Grass Valley EDIUS Pro 7.50 can help you with its built-in stereoscopic editing feature that lets you adjust depth settings and apply effects to your 3D footage.


        Built-in loudness meter and image stabilization


        To ensure that your audio levels are consistent and compliant with broadcast standards, Grass Valley EDIUS Pro 7.50 has a built-in loudness meter that measures loudness units relative to full scale (LUFS). To reduce camera shake and improve image quality, Grass Valley EDIUS Pro 7.50 has an image stabilization feature that analyzes motion vectors and applies corrections automatically.


        How to install Grass Valley EDIUS Pro 7.50 with the loader


        Download the installer and the loader from the links provided


        To install Grass Valley EDIUS Pro 7.50 with the loader, you need to download two files: the installer and the loader, from the links provided. You can find the installer link on the Grass Valley website or on the EDIUS.net website. You can find the loader link on some online forums or blogs, such as NYChessKids or CG Persia. However, be careful when downloading files from untrusted sources, as they might contain viruses or malware.

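        Because files like this often circulate through unofficial channels, it is worth comparing a download's checksum against one published by a source you trust before running anything. Here is a minimal sketch using only Python's standard library (the file name below is just a placeholder):

        # Compute the SHA-256 checksum of a downloaded file so you can compare it
        # against a checksum published by a source you trust.
        import hashlib

        def sha256_of(path: str, chunk_size: int = 65536) -> str:
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        print(sha256_of("EDIUS_7.50_installer.exe"))  # placeholder file name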

        Run the installer and follow the instructions


        After downloading the installer file, run it and follow the instructions on the screen. You will need to accept the license agreement, choose the installation folder and select the components you want to install. The installation process might take a few minutes, depending on your system configuration.


        Copy the loader to the installation folder and run it as administrator


        After installing Grass Valley EDIUS Pro 7.50, you need to copy the loader file to the same folder where you installed the software. Usually, this folder is C:\Program Files\Grass Valley\EDIUS 7\. Then, right-click on the loader file and choose Run as administrator. This will launch the loader program, which will inject a code into the memory of Grass Valley EDIUS Pro 7.50 and activate it.


        Enjoy your activated version of Grass Valley EDIUS Pro 7.50


        Once you run the loader, you can enjoy your activated version of Grass Valley EDIUS Pro 7.50 without entering a license key or a serial number. You can use all the features and functions of the software without any limitations or restrictions. However, remember that using a loader might be illegal in some countries, so use it at your own risk.


        Conclusion


        In this article, we have reviewed Grass Valley EDIUS Pro 7.50 and its features, and showed you how to install it with the loader. Grass Valley EDIUS Pro 7.50 is a powerful and flexible video editing software that can handle any format, resolution and workflow. It has a fast and intuitive user interface, with unlimited video, audio, title and graphics tracks. It supports real-time editing of multiple formats and frame rates on the same timeline, as well as superior 4K workflow with Blackmagic Design's devices and DaVinci Resolve integration. It also has many other features that make it one of the best video editing software in the market. However, if you don't have a valid license for Grass Valley EDIUS Pro 7.50, you might need a loader to activate it. A loader is a program that bypasses the activation process of a software by injecting a code into its memory. You can download a loader from some online sources, but be careful of viruses and malware. You can also use a loader at your own risk, as it might be illegal in some countries.


        We hope this article was helpful and informative for you. If you have any questions or comments, feel free to leave them below.


        FAQs

        • What are the minimum system requirements for Grass Valley EDIUS Pro 7.50?
          The minimum system requirements for Grass Valley EDIUS Pro 7.50 are: Windows 7 64-bit (Service Pack 1 or later) or Windows 8/8.1 64-bit; any Intel Core 2 or Core iX CPU; 1 GB RAM minimum (4 GB or more recommended); 6 GB of hard disk space for installation; a sound card; a DVD-ROM drive for software installation; and an internet connection for software activation.
        • What are the supported file formats for Grass Valley EDIUS Pro 7.50?
          Grass Valley EDIUS Pro 7.50 supports a wide range of file formats for video editing, such as: AVCHD; AVC-Intra; AVI; Canon XF; DV; DVCAM; DVCPRO; DVCPRO HD; DVCPRO 50; Flash F4V; GIF; Grass Valley HQ/HQX; H.264/AVC; HDV; JPEG; MPEG-1/2/4; MXF; Panasonic P2; QuickTime; RED RAW; Sony XDCAM/XDCAM EX/XAVC/XAVC S; and Windows Media.
        • How can I update Grass Valley EDIUS Pro 7.50 to the latest version?
          You can update Grass Valley EDIUS Pro 7.50 to the latest version by downloading and installing the patch update from the Grass Valley website or from the EDIUS.net website. You can also check for updates from within the software by clicking on Help > Check for Updates.
        • How can I uninstall Grass Valley EDIUS Pro 7.50 from my computer?
          You can uninstall Grass Valley EDIUS Pro 7.50 by following these steps: go to Start > Control Panel > Programs and Features; select Grass Valley EDIUS 7 and click on Uninstall; follow the instructions on the screen to complete the uninstallation process; and restart your computer if prompted.
        • How can I contact Grass Valley for technical support or customer service?
          You can contact Grass Valley for technical support or customer service by visiting their website and choosing your region and product. You can also find online resources such as manuals, tutorials, forums and FAQs on their website or on the EDIUS.net website.

        \ No newline at end of file diff --git a/spaces/ramiin2/AutoGPT/autogpt/commands/web_requests.py b/spaces/ramiin2/AutoGPT/autogpt/commands/web_requests.py deleted file mode 100644 index 406338f46fc7b2381e0b1634c628b123ef20b685..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/autogpt/commands/web_requests.py +++ /dev/null @@ -1,190 +0,0 @@ -"""Browse a webpage and summarize it using the LLM model""" -from __future__ import annotations - -from urllib.parse import urljoin, urlparse - -import requests -from bs4 import BeautifulSoup -from requests import Response -from requests.compat import urljoin - -from autogpt.config import Config -from autogpt.memory import get_memory -from autogpt.processing.html import extract_hyperlinks, format_hyperlinks - -CFG = Config() -memory = get_memory(CFG) - -session = requests.Session() -session.headers.update({"User-Agent": CFG.user_agent}) - - -def is_valid_url(url: str) -> bool: - """Check if the URL is valid - - Args: - url (str): The URL to check - - Returns: - bool: True if the URL is valid, False otherwise - """ - try: - result = urlparse(url) - return all([result.scheme, result.netloc]) - except ValueError: - return False - - -def sanitize_url(url: str) -> str: - """Sanitize the URL - - Args: - url (str): The URL to sanitize - - Returns: - str: The sanitized URL - """ - return urljoin(url, urlparse(url).path) - - -def check_local_file_access(url: str) -> bool: - """Check if the URL is a local file - - Args: - url (str): The URL to check - - Returns: - bool: True if the URL is a local file, False otherwise - """ - local_prefixes = [ - "file:///", - "file://localhost/", - "file://localhost", - "http://localhost", - "http://localhost/", - "https://localhost", - "https://localhost/", - "http://2130706433", - "http://2130706433/", - "https://2130706433", - "https://2130706433/", - "http://127.0.0.1/", - "http://127.0.0.1", - "https://127.0.0.1/", - "https://127.0.0.1", - "https://0.0.0.0/", - "https://0.0.0.0", - "http://0.0.0.0/", - "http://0.0.0.0", - "http://0000", - "http://0000/", - "https://0000", - "https://0000/", - ] - return any(url.startswith(prefix) for prefix in local_prefixes) - - -def get_response( - url: str, timeout: int = 10 -) -> tuple[None, str] | tuple[Response, None]: - """Get the response from a URL - - Args: - url (str): The URL to get the response from - timeout (int): The timeout for the HTTP request - - Returns: - tuple[None, str] | tuple[Response, None]: The response and error message - - Raises: - ValueError: If the URL is invalid - requests.exceptions.RequestException: If the HTTP request fails - """ - try: - # Restrict access to local files - if check_local_file_access(url): - raise ValueError("Access to local files is restricted") - - # Most basic check if the URL is valid: - if not url.startswith("http://") and not url.startswith("https://"): - raise ValueError("Invalid URL format") - - sanitized_url = sanitize_url(url) - - response = session.get(sanitized_url, timeout=timeout) - - # Check if the response contains an HTTP error - if response.status_code >= 400: - return None, f"Error: HTTP {str(response.status_code)} error" - - return response, None - except ValueError as ve: - # Handle invalid URL format - return None, f"Error: {str(ve)}" - - except requests.exceptions.RequestException as re: - # Handle exceptions related to the HTTP request - # (e.g., connection errors, timeouts, etc.) 
- return None, f"Error: {str(re)}" - - -def scrape_text(url: str) -> str: - """Scrape text from a webpage - - Args: - url (str): The URL to scrape text from - - Returns: - str: The scraped text - """ - response, error_message = get_response(url) - if error_message: - return error_message - if not response: - return "Error: Could not get response" - - soup = BeautifulSoup(response.text, "html.parser") - - for script in soup(["script", "style"]): - script.extract() - - text = soup.get_text() - lines = (line.strip() for line in text.splitlines()) - chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) - text = "\n".join(chunk for chunk in chunks if chunk) - - return text - - -def scrape_links(url: str) -> str | list[str]: - """Scrape links from a webpage - - Args: - url (str): The URL to scrape links from - - Returns: - str | list[str]: The scraped links - """ - response, error_message = get_response(url) - if error_message: - return error_message - if not response: - return "Error: Could not get response" - soup = BeautifulSoup(response.text, "html.parser") - - for script in soup(["script", "style"]): - script.extract() - - hyperlinks = extract_hyperlinks(soup, url) - - return format_hyperlinks(hyperlinks) - - -def create_message(chunk, question): - """Create a message for the user to summarize a chunk of text""" - return { - "role": "user", - "content": f'"""{chunk}""" Using the above text, answer the following' - f' question: "{question}" -- if the question cannot be answered using the' - " text, summarize the text.", - } diff --git a/spaces/ramiin2/AutoGPT/ui/app.py b/spaces/ramiin2/AutoGPT/ui/app.py deleted file mode 100644 index d7dbd31e901969d090292215935bdbc3d9d75e37..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/ui/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import gradio as gr -import utils -from api import AutoAPI, get_openai_api_key -import os, shutil -import json - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace") -if not os.path.exists(OUTPUT_DIR): - os.mkdir(OUTPUT_DIR) - -CSS = """ -#chatbot {font-family: monospace;} -#files .generating {display: none;} -#files .min {min-height: 0px;} -""" - -with gr.Blocks(css=CSS) as app: - with gr.Column() as setup_pane: - gr.Markdown(f"""# Auto-GPT - 1. Duplicate this Space: Duplicate Space This will **NOT** work without duplication! - 2. Enter your OpenAI API Key below. - """) - with gr.Row(): - open_ai_key = gr.Textbox( - value=get_openai_api_key(), - label="OpenAI API Key", - type="password", - ) - gr.Markdown( - "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page." - ) - with gr.Row(): - ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT") - ai_role = gr.Textbox( - label="AI Role", - placeholder="e.g. 
an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.", - ) - top_5_goals = gr.Dataframe( - row_count=(5, "fixed"), - col_count=(1, "fixed"), - headers=["AI Goals - Enter up to 5"], - type="array" - ) - start_btn = gr.Button("Start", variant="primary") - with open(os.path.join(FILE_DIR, "examples.json"), "r") as f: - example_values = json.load(f) - gr.Examples( - example_values, - [ai_name, ai_role, top_5_goals], - ) - with gr.Column(visible=False) as main_pane: - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - yes_btn = gr.Button("Yes", variant="primary", interactive=False) - consecutive_yes = gr.Slider( - 1, 10, 1, step=1, label="Consecutive Yes", interactive=False - ) - custom_response = gr.Textbox( - label="Custom Response", - placeholder="Press 'Enter' to Submit.", - interactive=False, - ) - with gr.Column(scale=1): - gr.HTML( - lambda: f""" - Generated Files -
        {utils.format_directory(OUTPUT_DIR)}
        - """, every=3, elem_id="files" - ) - download_btn = gr.Button("Download All Files") - - chat_history = gr.State([[None, None]]) - api = gr.State(None) - - def start(open_ai_key, ai_name, ai_role, top_5_goals): - auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals) - return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api - - def bot_response(chat, api): - messages = [] - for message in api.get_chatbot_response(): - messages.append(message) - chat[-1][1] = "\n".join(messages) + "..." - yield chat - chat[-1][1] = "\n".join(messages) - yield chat - - def send_message(count, chat, api, message="Y"): - if message != "Y": - count = 1 - for i in range(count): - chat.append([message, None]) - yield chat, count - i - api.send_message(message) - for updated_chat in bot_response(chat, api): - yield updated_chat, count - i - - def activate_inputs(): - return { - yes_btn: gr.Button.update(interactive=True), - consecutive_yes: gr.Slider.update(interactive=True), - custom_response: gr.Textbox.update(interactive=True), - } - - def deactivate_inputs(): - return { - yes_btn: gr.Button.update(interactive=False), - consecutive_yes: gr.Slider.update(interactive=False), - custom_response: gr.Textbox.update(interactive=False), - } - - start_btn.click( - start, - [open_ai_key, ai_name, ai_role, top_5_goals], - [setup_pane, main_pane, api], - ).then(bot_response, [chat_history, api], chatbot).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - yes_btn.click( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes] - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - custom_response.submit( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, - [consecutive_yes, chat_history, api, custom_response], - [chatbot, consecutive_yes], - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - def download_all_files(): - shutil.make_archive("outputs", "zip", OUTPUT_DIR) - - download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS) - -app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR]) diff --git a/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/model.py b/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/model.py deleted file mode 100644 index c022b663ee5c344c52041026bc88dc02734afa33..0000000000000000000000000000000000000000 --- a/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/model.py +++ /dev/null @@ -1,135 +0,0 @@ -from speaker_encoder.params_model import * -from speaker_encoder.params_data import * -from scipy.interpolate import interp1d -from sklearn.metrics import roc_curve -from torch.nn.utils import clip_grad_norm_ -from scipy.optimize import brentq -from torch import nn -import numpy as np -import torch - - -class SpeakerEncoder(nn.Module): - def __init__(self, device, loss_device): - super().__init__() - self.loss_device = loss_device - - # Network defition - self.lstm = nn.LSTM(input_size=mel_n_channels, # 40 - hidden_size=model_hidden_size, # 256 - num_layers=model_num_layers, # 3 - batch_first=True).to(device) - self.linear = nn.Linear(in_features=model_hidden_size, - out_features=model_embedding_size).to(device) - self.relu = torch.nn.ReLU().to(device) - - # Cosine similarity scaling (with fixed initial parameter values) - self.similarity_weight = 
        nn.Parameter(torch.tensor([10.])).to(loss_device) - self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device) - - # Loss - self.loss_fn = nn.CrossEntropyLoss().to(loss_device) - - def do_gradient_ops(self): - # Gradient scale - self.similarity_weight.grad *= 0.01 - self.similarity_bias.grad *= 0.01 - - # Gradient clipping - clip_grad_norm_(self.parameters(), 3, norm_type=2) - - def forward(self, utterances, hidden_init=None): - """ - Computes the embeddings of a batch of utterance spectrograms. - - :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape - (batch_size, n_frames, n_channels) - :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers, - batch_size, hidden_size). Will default to a tensor of zeros if None. - :return: the embeddings as a tensor of shape (batch_size, embedding_size) - """ - # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state - # and the final cell state. - out, (hidden, cell) = self.lstm(utterances, hidden_init) - - # We take only the hidden state of the last layer - embeds_raw = self.relu(self.linear(hidden[-1])) - - # L2-normalize it - embeds = embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - return embeds - - def similarity_matrix(self, embeds): - """ - Computes the similarity matrix according to section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the similarity matrix as a tensor of shape (speakers_per_batch, - utterances_per_speaker, speakers_per_batch) - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Inclusive centroids (1 per speaker). Cloning is needed for reverse differentiation - centroids_incl = torch.mean(embeds, dim=1, keepdim=True) - centroids_incl = centroids_incl.clone() / torch.norm(centroids_incl, dim=2, keepdim=True) - - # Exclusive centroids (1 per utterance) - centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds) - centroids_excl /= (utterances_per_speaker - 1) - centroids_excl = centroids_excl.clone() / torch.norm(centroids_excl, dim=2, keepdim=True) - - # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot - # product of these vectors (which is just an element-wise multiplication reduced by a sum). - # We vectorize the computation for efficiency. - sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker, - speakers_per_batch).to(self.loss_device) - mask_matrix = 1 - np.eye(speakers_per_batch, dtype=int) - for j in range(speakers_per_batch): - mask = np.where(mask_matrix[j])[0] - sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2) - sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1) - - ## Even more vectorized version (slower maybe because of transpose) - # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker - # ).to(self.loss_device) - # eye = np.eye(speakers_per_batch, dtype=int) - # mask = np.where(1 - eye) - # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2) - # mask = np.where(eye) - # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2) - # sim_matrix2 = sim_matrix2.transpose(1, 2) - - sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias - return sim_matrix - - def loss(self, embeds): - """ - Computes the softmax loss according to section 2.1 of GE2E. 
        
        - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the loss and the EER for this batch of embeddings. - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Loss - sim_matrix = self.similarity_matrix(embeds) - sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker, - speakers_per_batch)) - ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker) - target = torch.from_numpy(ground_truth).long().to(self.loss_device) - loss = self.loss_fn(sim_matrix, target) - - # EER (not backpropagated) - with torch.no_grad(): - inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=int)[0] - labels = np.array([inv_argmax(i) for i in ground_truth]) - preds = sim_matrix.detach().cpu().numpy() - - # Snippet from https://yangcha.github.io/EER-ROC/ - fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten()) - eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.) - - return loss, eer \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Hellboy 2004 Dual Audio Download !LINK!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Hellboy 2004 Dual Audio Download !LINK!.md deleted file mode 100644 index c6ea6337ba8f9fbb91e609dc280be531aa2ea312..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Hellboy 2004 Dual Audio Download !LINK!.md +++ /dev/null @@ -1,6 +0,0 @@ -
        

        hellboy 2004 dual audio download


        DOWNLOAD ··· https://urlgoal.com/2uCLQI



        -
        -Hellboy Download in Hindi [Dual Audio] | 420p [400MB] | 720p [1GB] – MoviesHippo.in. Movies Detail. Name: Hellboy. Release Year: 2004. Language: Hindi – ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/rickysk/rickysk-videomae-base-ipm_all_videos/app.py b/spaces/rickysk/rickysk-videomae-base-ipm_all_videos/app.py deleted file mode 100644 index 94d66d004f88b871a7e57574dc88b83dcdc890fa..0000000000000000000000000000000000000000 --- a/spaces/rickysk/rickysk-videomae-base-ipm_all_videos/app.py +++ /dev/null @@ -1,140 +0,0 @@ -import cv2 -import gradio as gr -import imutils -import numpy as np -import torch -from pytorchvideo.transforms import ( - ApplyTransformToKey, - Normalize, - RandomShortSideScale, - RemoveKey, - ShortSideScale, - UniformTemporalSubsample, -) -from torchvision.transforms import ( - Compose, - Lambda, - RandomCrop, - RandomHorizontalFlip, - Resize, -) -from transformers import VideoMAEFeatureExtractor, VideoMAEForVideoClassification - -MODEL_CKPT = "rickysk/videomae-base-ipm_all_videos" -DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -MODEL = VideoMAEForVideoClassification.from_pretrained(MODEL_CKPT).to(DEVICE) -PROCESSOR = VideoMAEFeatureExtractor.from_pretrained(MODEL_CKPT) - -RESIZE_TO = PROCESSOR.size["shortest_edge"] -NUM_FRAMES_TO_SAMPLE = MODEL.config.num_frames -IMAGE_STATS = {"image_mean": [0.485, 0.456, 0.406], "image_std": [0.229, 0.224, 0.225]} -VAL_TRANSFORMS = Compose( - [ - UniformTemporalSubsample(NUM_FRAMES_TO_SAMPLE), - Lambda(lambda x: x / 255.0), - Normalize(IMAGE_STATS["image_mean"], IMAGE_STATS["image_std"]), - Resize((RESIZE_TO, RESIZE_TO)), - ] -) -LABELS = list(MODEL.config.label2id.keys()) - - -def parse_video(video_file): - """A utility to parse the input videos. - - Reference: https://pyimagesearch.com/2018/11/12/yolo-object-detection-with-opencv/ - """ - vs = cv2.VideoCapture(video_file) - - # try to determine the total number of frames in the video file - try: - prop = ( - cv2.cv.CV_CAP_PROP_FRAME_COUNT - if imutils.is_cv2() - else cv2.CAP_PROP_FRAME_COUNT - ) - total = int(vs.get(prop)) - print("[INFO] {} total frames in video".format(total)) - - # an error occurred while trying to determine the total - # number of frames in the video file - except: - print("[INFO] could not determine # of frames in video") - print("[INFO] no approx. completion time can be provided") - total = -1 - - frames = [] - - # loop over frames from the video file stream - while True: - # read the next frame from the file - (grabbed, frame) = vs.read() - if frame is not None: - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - frames.append(frame) - # if the frame was not grabbed, then we have reached the end - # of the stream - if not grabbed: - break - - return frames - - -def preprocess_video(frames: list): - """Utility to apply preprocessing transformations to a video tensor.""" - # Each frame in the `frames` list has the shape: (height, width, num_channels). - # Collated together the `frames` has the the shape: (num_frames, height, width, num_channels). - # So, after converting the `frames` list to a torch tensor, we permute the shape - # such that it becomes (num_channels, num_frames, height, width) to make - # the shape compatible with the preprocessing transformations. After applying the - # preprocessing chain, we permute the shape to (num_frames, num_channels, height, width) - # to make it compatible with the model. Finally, we add a batch dimension so that our video - # classification model can operate on it. 
- video_tensor = torch.tensor(np.array(frames).astype(frames[0].dtype)) - video_tensor = video_tensor.permute( - 3, 0, 1, 2 - ) # (num_channels, num_frames, height, width) - video_tensor_pp = VAL_TRANSFORMS(video_tensor) - video_tensor_pp = video_tensor_pp.permute( - 1, 0, 2, 3 - ) # (num_frames, num_channels, height, width) - video_tensor_pp = video_tensor_pp.unsqueeze(0) - return video_tensor_pp.to(DEVICE) - - -def infer(video_file): - frames = parse_video(video_file) - video_tensor = preprocess_video(frames) - inputs = {"pixel_values": video_tensor} - - # forward pass - with torch.no_grad(): - outputs = MODEL(**inputs) - logits = outputs.logits - softmax_scores = torch.nn.functional.softmax(logits, dim=-1).squeeze(0) - confidences = {LABELS[i]: float(softmax_scores[i]) for i in range(len(LABELS))} - return confidences - - -gr.Interface( - fn=infer, - inputs=gr.Video(type="file"), - outputs=gr.Label(num_top_classes=7), - examples=[ - ["examples/bend.mp4"], - ["examples/cnw.mp4"], - ["examples/lift.mp4"], - ], - title="VideoMAE IPM", - description=( - "Gradio demo for VideoMAE for video classification. To use it, simply upload your video or click one of the" - " examples to load them. Read more at the links below." - ), - article=( - "" - ), - allow_flagging=False, - allow_screenshot=False, -).launch() diff --git a/spaces/rinme/vits-models/text/cleaners.py b/spaces/rinme/vits-models/text/cleaners.py deleted file mode 100644 index 68c9ad24d5a303b68a521fba2e8776c8cc867356..0000000000000000000000000000000000000000 --- a/spaces/rinme/vits-models/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if 
re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i [RobustScanner: Dynamically Enhancing Positional Clues for Robust Text Recognition](https://arxiv.org/abs/2007.07542) - - - -## Abstract - -The attention-based encoder-decoder framework has recently achieved impressive results for scene text recognition, and many variants have emerged with improvements in recognition quality. However, it performs poorly on contextless texts (e.g., random character sequences) which is unacceptable in most of real application scenarios. In this paper, we first deeply investigate the decoding process of the decoder. We empirically find that a representative character-level sequence decoder utilizes not only context information but also positional information. Contextual information, which the existing approaches heavily rely on, causes the problem of attention drift. To suppress such side-effect, we propose a novel position enhancement branch, and dynamically fuse its outputs with those of the decoder attention module for scene text recognition. Specifically, it contains a position aware module to enable the encoder to output feature vectors encoding their own spatial positions, and an attention module to estimate glimpses using the positional clue (i.e., the current decoding time step) only. The dynamic fusion is conducted for more robust feature via an element-wise gate mechanism. Theoretically, our proposed method, dubbed \\emph{RobustScanner}, decodes individual characters with dynamic ratio between context and positional clues, and utilizes more positional ones when the decoding sequences with scarce context, and thus is robust and practical. Empirically, it has achieved new state-of-the-art results on popular regular and irregular text recognition benchmarks while without much performance drop on contextless benchmarks, validating its robustness in both contextual and contextless application scenarios. - -
        - -
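        The dynamic fusion step described in the abstract is the heart of the method: an element-wise gate decides, per channel and per decoding step, how much to trust the context glimpse versus the position glimpse. The following is a minimal PyTorch sketch of such a gate, added here for illustration only; it is not the MMOCR implementation, and the module name and tensor shapes are assumptions.

        ```python
        import torch
        import torch.nn as nn


        class GatedFusion(nn.Module):
            """Element-wise gated fusion of a context glimpse and a position glimpse,
            in the spirit of RobustScanner's dynamic fusion (illustrative sketch only)."""

            def __init__(self, dim: int):
                super().__init__()
                # The gate sees both branches and emits a per-channel weight in (0, 1).
                self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

            def forward(self, context_glimpse: torch.Tensor,
                        position_glimpse: torch.Tensor) -> torch.Tensor:
                # Both inputs: (batch, dim) for a single decoding time step.
                w = self.gate(torch.cat([context_glimpse, position_glimpse], dim=-1))
                # w near 0 pushes the output toward the positional branch, which is
                # what keeps decoding robust on contextless text.
                return w * context_glimpse + (1.0 - w) * position_glimpse
        ```
        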
        -
        -## Dataset
        -
        -### Train Dataset
        -
        -| trainset   | instance_num | repeat_num | source                     |
        -| :--------: | :----------: | :--------: | :------------------------: |
        -| icdar_2011 | 3567         | 20         | real                       |
        -| icdar_2013 | 848          | 20         | real                       |
        -| icdar2015  | 4468         | 20         | real                       |
        -| coco_text  | 42142        | 20         | real                       |
        -| IIIT5K     | 2000         | 20         | real                       |
        -| SynthText  | 2400000      | 1          | synth                      |
        -| SynthAdd   | 1216889      | 1          | synth, 1.6m in [\[1\]](#1) |
        -| Syn90k     | 2400000      | 1          | synth                      |
        -
        -### Test Dataset
        -
        -| testset | instance_num | type                          |
        -| :-----: | :----------: | :---------------------------: |
        -| IIIT5K  | 3000         | regular                       |
        -| SVT     | 647          | regular                       |
        -| IC13    | 1015         | regular                       |
        -| IC15    | 2077         | irregular                     |
        -| SVTP    | 645          | irregular, 639 in [\[1\]](#1) |
        -| CT80    | 288          | irregular                     |
        -
        -## Results and Models
        -
        -| Methods | GPUs | | Regular Text | | | | Irregular Text | | download |
        -| :-----: | :--: | :----: | :----------: | :--: | :-: | :--: | :------------: | :--: | :------: |
        -| | | IIIT5K | SVT | IC13 | | IC15 | SVTP | CT80 | |
        -| [RobustScanner](configs/textrecog/robust_scanner/robustscanner_r31_academic.py) | 16 | 95.1 | 89.2 | 93.1 | | 77.8 | 80.3 | 90.3 | [model](https://download.openmmlab.com/mmocr/textrecog/robustscanner/robustscanner_r31_academic-5f05874f.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/robustscanner/20210401_170932.log.json) |
        -
        -## References
        -
        -\[1\] Li, Hui and Wang, Peng and Shen, Chunhua and Zhang, Guyu. Show, attend and read: A simple and strong baseline for irregular text recognition. In AAAI 2019.
        -
        -## Citation
        -
        -```bibtex
        -@inproceedings{yue2020robustscanner,
        -  title={RobustScanner: Dynamically Enhancing Positional Clues for Robust Text Recognition},
        -  author={Yue, Xiaoyu and Kuang, Zhanghui and Lin, Chenhao and Sun, Hongbin and Zhang, Wayne},
        -  booktitle={European Conference on Computer Vision},
        -  year={2020}
        -}
        -```
        diff --git a/spaces/rorallitri/biomedical-language-models/logs/BehenHogiTeriHD720p Watch the hilarious love story of Gattu and Binny online.md b/spaces/rorallitri/biomedical-language-models/logs/BehenHogiTeriHD720p Watch the hilarious love story of Gattu and Binny online.md deleted file mode 100644 index 5a2126eef845ffee190db624e54c9da23a6a1a45..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/BehenHogiTeriHD720p Watch the hilarious love story of Gattu and Binny online.md +++ /dev/null
        @@ -1,6 +0,0 @@
        -
        

        BehenHogiTeriHD720p


        Download Zip ————— https://tinurll.com/2uzncw



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/roshithindia/text_calssification_model/app.py b/spaces/roshithindia/text_calssification_model/app.py deleted file mode 100644 index 7fad045985a3a12ddc3b1c5d71b9b14ac0fa4fed..0000000000000000000000000000000000000000 --- a/spaces/roshithindia/text_calssification_model/app.py +++ /dev/null @@ -1,16 +0,0 @@ -import streamlit as st -from transformers import ViTImageProcessor, ViTForImageClassification -from PIL import Image as img - -x = st.file_uploader("Upload Images", type=["png","jpg","jpeg"]) -if x is not None: - st.image(img.open(x),width=255) - i = img.open(x) - processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224') - model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224') - inputs = processor(images=i, return_tensors="pt") - outputs = model(**inputs) - logits = outputs.logits - predicted_class_idx = logits.argmax(-1).item() - st.text("Our Model Predicts : ") - st.write(model.config.id2label[predicted_class_idx]) \ No newline at end of file diff --git a/spaces/russel0719/deepfake_detector/training/transforms/albu.py b/spaces/russel0719/deepfake_detector/training/transforms/albu.py deleted file mode 100644 index 07ede53248e3ee041c8a157169eafe614e0b3c6b..0000000000000000000000000000000000000000 --- a/spaces/russel0719/deepfake_detector/training/transforms/albu.py +++ /dev/null @@ -1,99 +0,0 @@ -import random - -import cv2 -import numpy as np -from albumentations import DualTransform, ImageOnlyTransform -from albumentations.augmentations.functional import crop - - -def isotropically_resize_image(img, size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC): - h, w = img.shape[:2] - if max(w, h) == size: - return img - if w > h: - scale = size / w - h = h * scale - w = size - else: - scale = size / h - w = w * scale - h = size - interpolation = interpolation_up if scale > 1 else interpolation_down - resized = cv2.resize(img, (int(w), int(h)), interpolation=interpolation) - return resized - - -class IsotropicResize(DualTransform): - def __init__(self, max_side, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC, - always_apply=False, p=1): - super(IsotropicResize, self).__init__(always_apply, p) - self.max_side = max_side - self.interpolation_down = interpolation_down - self.interpolation_up = interpolation_up - - def apply(self, img, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC, **params): - return isotropically_resize_image(img, size=self.max_side, interpolation_down=interpolation_down, - interpolation_up=interpolation_up) - - def apply_to_mask(self, img, **params): - return self.apply(img, interpolation_down=cv2.INTER_NEAREST, interpolation_up=cv2.INTER_NEAREST, **params) - - def get_transform_init_args_names(self): - return ("max_side", "interpolation_down", "interpolation_up") - - -class Resize4xAndBack(ImageOnlyTransform): - def __init__(self, always_apply=False, p=0.5): - super(Resize4xAndBack, self).__init__(always_apply, p) - - def apply(self, img, **params): - h, w = img.shape[:2] - scale = random.choice([2, 4]) - img = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_AREA) - img = cv2.resize(img, (w, h), - interpolation=random.choice([cv2.INTER_CUBIC, cv2.INTER_LINEAR, cv2.INTER_NEAREST])) - return img - - -class RandomSizedCropNonEmptyMaskIfExists(DualTransform): - - def __init__(self, min_max_height, w2h_ratio=[0.7, 1.3], always_apply=False, p=0.5): - super(RandomSizedCropNonEmptyMaskIfExists, self).__init__(always_apply, p) - - 
        self.min_max_height = min_max_height - self.w2h_ratio = w2h_ratio - - def apply(self, img, x_min=0, x_max=0, y_min=0, y_max=0, **params): - cropped = crop(img, x_min, y_min, x_max, y_max) - return cropped - - @property - def targets_as_params(self): - return ["mask"] - - def get_params_dependent_on_targets(self, params): - mask = params["mask"] - mask_height, mask_width = mask.shape[:2] - crop_height = int(mask_height * random.uniform(self.min_max_height[0], self.min_max_height[1])) - w2h_ratio = random.uniform(*self.w2h_ratio) - crop_width = min(int(crop_height * w2h_ratio), mask_width - 1) - if mask.sum() == 0: - x_min = random.randint(0, mask_width - crop_width + 1) - y_min = random.randint(0, mask_height - crop_height + 1) - else: - mask = mask.sum(axis=-1) if mask.ndim == 3 else mask - non_zero_yx = np.argwhere(mask) - y, x = random.choice(non_zero_yx) - x_min = x - random.randint(0, crop_width - 1) - y_min = y - random.randint(0, crop_height - 1) - x_min = np.clip(x_min, 0, mask_width - crop_width) - y_min = np.clip(y_min, 0, mask_height - crop_height) - - x_max = x_min + crop_width - y_max = y_min + crop_height - y_max = min(mask_height, y_max) - x_max = min(mask_width, x_max) - return {"x_min": x_min, "x_max": x_max, "y_min": y_min, "y_max": y_max} - - def get_transform_init_args_names(self): - return "min_max_height", "height", "width", "w2h_ratio" \ No newline at end of file diff --git a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/slio.py b/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/slio.py deleted file mode 100644 index 72c1f0f7b82cdc931d381feef64fe15815ba657e..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/slio.py +++ /dev/null @@ -1,177 +0,0 @@ -# ========================================================== -# Modified from mmcv -# ========================================================== - -import json -import pickle -from abc import ABCMeta, abstractmethod -from pathlib import Path - -import yaml - -try: - from yaml import CLoader as Loader, CDumper as Dumper -except ImportError: - from yaml import Loader, Dumper - - -# =========================== -# Register handler -# =========================== - - -class BaseFileHandler(metaclass=ABCMeta): - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath, mode="r", **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath, mode="w", **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) - - -class JsonHandler(BaseFileHandler): - def load_from_fileobj(self, file): - return json.load(file) - - def dump_to_fileobj(self, obj, file, **kwargs): - json.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - return json.dumps(obj, **kwargs) - - -class PickleHandler(BaseFileHandler): - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super(PickleHandler, self).load_from_path(filepath, mode="rb", **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault("protocol", 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault("protocol", 2) - pickle.dump(obj, file, **kwargs) - - def 
        
dump_to_path(self, obj, filepath, **kwargs): - super(PickleHandler, self).dump_to_path(obj, filepath, mode="wb", **kwargs) - - -class YamlHandler(BaseFileHandler): - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault("Loader", Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault("Dumper", Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault("Dumper", Dumper) - return yaml.dump(obj, **kwargs) - - -file_handlers = { - "json": JsonHandler(), - "yaml": YamlHandler(), - "yml": YamlHandler(), - "pickle": PickleHandler(), - "pkl": PickleHandler(), -} - -# =========================== -# load and dump -# =========================== - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def slload(file, file_format=None, **kwargs): - """Load data from json/yaml/pickle files. - - This method provides a unified api for loading data from serialized files. - - Args: - file (str or :obj:`Path` or file-like object): Filename or a file-like - object. - file_format (str, optional): If not specified, the file format will be - inferred from the file extension, otherwise use the specified one. - Currently supported formats include "json", "yaml/yml" and - "pickle/pkl". - - Returns: - The content from the file. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None and is_str(file): - file_format = file.split(".")[-1] - if file_format not in file_handlers: - raise TypeError(f"Unsupported format: {file_format}") - - handler = file_handlers[file_format] - if is_str(file): - obj = handler.load_from_path(file, **kwargs) - elif hasattr(file, "read"): - obj = handler.load_from_fileobj(file, **kwargs) - else: - raise TypeError('"file" must be a filepath str or a file-object') - return obj - - -def sldump(obj, file=None, file_format=None, **kwargs): - """Dump data to json/yaml/pickle strings or files. - - This method provides a unified api for dumping data as strings or to files, - and also supports custom arguments for each file format. - - Args: - obj (any): The python object to be dumped. - file (str or :obj:`Path` or file-like object, optional): If not - specified, then the object is dump to a str, otherwise to a file - specified by the filename or file-like object. - file_format (str, optional): Same as :func:`load`. - - Returns: - bool: True for success, False otherwise. 
- """ - if isinstance(file, Path): - file = str(file) - if file_format is None: - if is_str(file): - file_format = file.split(".")[-1] - elif file is None: - raise ValueError("file_format must be specified since file is None") - if file_format not in file_handlers: - raise TypeError(f"Unsupported format: {file_format}") - - handler = file_handlers[file_format] - if file is None: - return handler.dump_to_str(obj, **kwargs) - elif is_str(file): - handler.dump_to_path(obj, file, **kwargs) - elif hasattr(file, "write"): - handler.dump_to_fileobj(obj, file, **kwargs) - else: - raise TypeError('"file" must be a filename str or a file-object') diff --git a/spaces/scedlatioru/img-to-music/example/Encomdiscover2011UPDATED Keygenfor11.md b/spaces/scedlatioru/img-to-music/example/Encomdiscover2011UPDATED Keygenfor11.md deleted file mode 100644 index 9b995d133db432ed22062c5d6dad36b4785ad71f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Encomdiscover2011UPDATED Keygenfor11.md +++ /dev/null @@ -1,23 +0,0 @@ - -

        How to Use Encom Discover 3D 2011 for Geospatial Analysis

        -

        Encom Discover 3D 2011 is a powerful software tool that allows geoscientists to visualize, analyze and model geospatial data in three dimensions. It is designed to work seamlessly with MapInfo Professional, a leading GIS application from Pitney Bowes Business Insight. With Encom Discover 3D 2011, you can create stunning 3D maps, grids, voxels, drillholes, vectors and features that reveal the hidden patterns and relationships in your data. You can also perform advanced geostatistical analysis, interpolation, classification, filtering and geophysical modeling. Whether you are working in exploration, mining, environmental or engineering projects, Encom Discover 3D 2011 can help you gain new insights and make better decisions.

        -

        encomdiscover2011keygenfor11


        Download Ziphttps://gohhs.com/2uEAt8



        -

        In this article, we will show you some of the main features and functions of Encom Discover 3D 2011 and how to use them effectively. We will assume that you have already installed Encom Discover 3D 2011 and MapInfo Professional on your computer and that you have some basic knowledge of GIS concepts and terminology. If you need more help or information, you can refer to the Encom Discover 3D 2011 User Guide or the online help system.
        

        -

        Getting Started with Encom Discover 3D 2011

        -

        To start using Encom Discover 3D 2011, you need to launch MapInfo Professional first. Then, you can access the Discover menu from the main menu bar or the Discover toolbar from the toolbars panel. You can also use the Command Search tool, which is a search box that appears in the top right corner of the MapInfo Professional window. You can type keywords or phrases to quickly find and execute tools within MapInfo Professional and Discover.
        

        -

        Once you have opened the Discover menu or toolbar, you can create a new 3D map window by clicking on the New 3D Map button. This will open a blank 3D map window where you can add and display various types of geospatial data. You can also open an existing 3D map file by clicking on the Open 3D Map button or by using the File>Open command.

        -

        Adding Data to a 3D Map

        -

        To add data to a 3D map window, you can use the Add Data button on the Discover toolbar or the Add Data command on the Discover menu. This will open a dialog box where you can browse and select the data files that you want to add. You can add multiple files at once by holding down the Ctrl or Shift key while selecting them. You can also drag and drop files from Windows Explorer into the 3D map window.

        -

        -

        The types of data that you can add to a 3D map window include:

        -
          -
        MapInfo tables (.tab) - These are files that contain spatial data in vector format, such as points, lines, polygons or regions. You can also add attribute data (.dat) or raster images (.jpg, .png, .tif) that are linked to MapInfo tables. (For a scripted way to inspect these tables outside the GUI, see the sketch after this list.)
        
        • -
        • Grids (.grd) - These are files that contain spatial data in raster format, such as elevation, gravity or magnetic data. You can also add grid color tables (.gct) that define how grids are displayed.
        • -
        • Voxels (.vo) - These are files that contain spatial data in volumetric format, such as density or porosity data. You can also add voxel color tables (.vct) that define how voxels are displayed.
        • -
        • Drillholes (.dh) - These are files that contain borehole data, such as collar locations, downhole surveys, assays or lithologies. You can also add drillhole templates (.dht) that define how drillholes are displayed.
        • -
        • Vectors (.vec) - These are files that contain spatial data in line format, such as faults or fractures. You can also add vector templates (.vet) that define how vectors are displayed.
        • -
        • Features (.fea) - These are files that contain spatial data in point format, such as samples or anomalies. You can also add feature templates (.fet) that define how features are displayed.
        • -
        -
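        Encom Discover 3D reads all of these formats through its own dialogs, so no scripting is required. Purely as an aside, the MapInfo tables in the list above can also be inspected outside the GUI with open-source tools; the sketch below uses GeoPandas and assumes a GDAL build that includes the "MapInfo File" driver, with a hypothetical file name.

        ```python
        import geopandas as gpd

        # Hypothetical file name; GDAL's "MapInfo File" driver reads the .tab file
        # together with its companion files (.dat, .map, .id).
        collars = gpd.read_file("drillhole_collars.tab")

        print(collars.crs)     # the coordinate system stored in the table
        print(collars.head())  # attribute columns plus the geometry column
        ```
        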

        When you add

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Full Hd Film Izle Tek Part 720p Mkv.md b/spaces/scedlatioru/img-to-music/example/Full Hd Film Izle Tek Part 720p Mkv.md deleted file mode 100644 index a85f8b0a01e8882e6868d57b8a376c07ecb2d9f8..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Full Hd Film Izle Tek Part 720p Mkv.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Full Hd Film Izle Tek Part 720p Mkv


        DOWNLOADhttps://gohhs.com/2uEz9k



        - -com Hd[…] We offer the most premium theFunny movies with the most exciting you can imagine. We were focusing on quality from the beginning of our journey; it is why we have been in business for 10 years now and we are growing stronger with each passing day. Quality is not only in the movies, but also in the gifts, the costumes, the props, the backdrops, the actors, the actresses, and the makeup artists. We take pride in our work; we only use the finest ingredients in order to make our creation shine. Our team is dedicated to providing the most exceptional gifts, and we do our best to deliver only the best goods on the market. We want to bring life to our customers, not just to the gifts we deliver, but to their overall life. Our business is based on honesty, integrity, and teamwork. We are always open to new opportunities and we are continually searching for our next potential customer. We want to be the best company in the gift industry, and we do our best to make sure that we stay that way. We are constantly seeking ways to improve and we want to be the best in all aspects. We are honest to ourselves and to our customers. Our service is second to none. We guarantee 100% satisfaction in all of our products and services. Each of our products is reviewed by our Quality Control team before it is shipped. We are fully licensed and insured to provide our goods and services to any location in the world. We work in collaboration with other countries to ship out our orders to the USA, Canada, Australia, New Zealand, Germany, and the United Kingdom. We are committed to providing the best products and services at the best prices. We appreciate and value each and every customer. We believe in our products and services, and we know that every gift deserves to be properly cared for. We offer a lifetime warranty on our products, and we would never think of shipping out a defective product. To put it simply, we believe in 100% customer satisfaction, and we would always want you to come back to us for the rest of your life.Edema and peripheral edema are two types of edema that differ in pathophysiology, symptoms, and prognosis. Edema is characterized as an increase in the volume of fluid in tissues (or a decrease in interstitial fluid). Edema can be caused by an imbalance in the permeability of the blood vessels allowing fluid to flow into the interstitial fluid space. The amount of fluid in a tissue can be 4fefd39f24
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Np5011pdf_TOP_ Free11.md b/spaces/scedlatioru/img-to-music/example/Np5011pdf_TOP_ Free11.md deleted file mode 100644 index c253cc858ab4c76577e92963e963834136588801..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Np5011pdf_TOP_ Free11.md +++ /dev/null @@ -1,6 +0,0 @@ -

        np5011pdffree11


        Download · https://gohhs.com/2uEA2L



        - - 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Solemn Tones ? The Loki Bass VST Free UPD Download.md b/spaces/scedlatioru/img-to-music/example/Solemn Tones ? The Loki Bass VST Free UPD Download.md deleted file mode 100644 index 177921e01490dcf9cb1debd2540d4b4c331654ec..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Solemn Tones ? The Loki Bass VST Free UPD Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Solemn Tones – The Loki Bass VST Free Download


        Download Zip » https://gohhs.com/2uEAEC



        -
        -Members. Erika Young (erikayoung16). Lists. viekearenquea. functinmilink · Solemn Tones – The Loki Bass VST Free Download · Lectii De Pian Pentru ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Tds Survey Link Software LINK.md b/spaces/scedlatioru/img-to-music/example/Tds Survey Link Software LINK.md deleted file mode 100644 index 41fac245ee0c1c4ea34b1424e0df4f4c76127694..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Tds Survey Link Software LINK.md +++ /dev/null @@ -1,31 +0,0 @@ -
        -

        How to Use Tds Survey Link Software for Your Surveying Projects

        -

        Tds Survey Link Software is a powerful tool that allows you to transfer data between your surveying instruments and your computer. It also lets you manage, edit, and process your survey data in various formats. Whether you are using a total station, a GPS receiver, or a digital level, Tds Survey Link Software can help you streamline your workflow and improve your productivity.

        -

        In this article, we will show you how to use Tds Survey Link Software for your surveying projects. We will cover the following topics:

        -

        Tds Survey Link Software


        Downloadhttps://gohhs.com/2uEA1D



        -
          -
        • How to install and configure Tds Survey Link Software on your computer
        • -
        • How to connect your surveying instrument to your computer using Tds Survey Link Software
        • -
        • How to transfer data between your surveying instrument and your computer using Tds Survey Link Software
        • -
        • How to manage, edit, and process your survey data using Tds Survey Link Software
        • -
        • How to export your survey data to other formats using Tds Survey Link Software
        • -
        -

        By the end of this article, you will be able to use Tds Survey Link Software for your surveying projects with ease and confidence.

        - -

        How to install and configure Tds Survey Link Software on your computer

        -

        To use Tds Survey Link Software, you need to install it on your computer first. You can download the latest version of Tds Survey Link Software from the official website of Trimble, the company that produces it. The installation process is simple and straightforward. Just follow the instructions on the screen and accept the license agreement.

        -

        After installing Tds Survey Link Software, you need to configure it to match your surveying instrument and your project settings. To do this, open Tds Survey Link Software and click on the Options menu. Then, select the Device tab and choose your surveying instrument from the list. You can also adjust the communication settings, such as the port, the baud rate, and the parity. Next, select the Project tab and enter your project name, description, coordinate system, units, and other parameters. You can also create custom codes and attributes for your survey data. Finally, click on OK to save your settings.
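        If the connection later fails, it can help to verify the communication settings outside the program first. The short sketch below uses the third-party pyserial package to open a port with the same kind of settings; the port name and values are placeholders, so substitute whatever you entered on the Device tab.

        ```python
        import serial  # third-party package: pip install pyserial

        # Placeholder values -- mirror the settings chosen on the Device tab.
        link = serial.Serial(
            port="COM1",
            baudrate=9600,
            parity=serial.PARITY_NONE,
            bytesize=serial.EIGHTBITS,
            stopbits=serial.STOPBITS_ONE,
            timeout=5,
        )
        print(link.is_open)  # True means the port opened with these settings
        link.close()
        ```
        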

        - -

        How to connect your surveying instrument to your computer using Tds Survey Link Software

        -

        To transfer data between your surveying instrument and your computer, you need to connect them using a cable or a wireless connection. The type of connection depends on your surveying instrument and your computer. For example, some surveying instruments have a serial port, a USB port, or a Bluetooth connection. Some computers have a serial port, a USB port, or a wireless adapter.
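        If you are not sure which port the instrument ended up on, you can list the serial ports your computer currently sees before opening the software. Again this uses pyserial and is only a convenience sketch, not part of Tds Survey Link itself.

        ```python
        from serial.tools import list_ports  # ships with pyserial

        # Print every serial port the operating system reports, with a description
        # that usually identifies USB-to-serial adapters and Bluetooth links.
        for port in list_ports.comports():
            print(port.device, "-", port.description)
        ```
        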

        -

        To connect your surveying instrument to your computer using Tds Survey Link Software, follow these steps:

        -
          -
        1. Turn on your surveying instrument and your computer.
        2. -
        3. Connect your surveying instrument to your computer using a cable or a wireless connection.
        4. -
        5. Open Tds Survey Link Software and click on the Transfer menu.
        6. -
        7. Select Connect from the drop-down menu.
        8. -
        9. Wait for Tds Survey Link Software to detect your surveying instrument and establish a connection.
        10. -
        11. If the connection is successful, you will see a message saying "Connected" in the status bar.
        12. -

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/[FULL] Silverfast 8 Crack 13 UPDATED.md b/spaces/scedlatioru/img-to-music/example/[FULL] Silverfast 8 Crack 13 UPDATED.md deleted file mode 100644 index 2859d28c405fdc15bb3959d0dacbaa1d30a4cc9a..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/[FULL] Silverfast 8 Crack 13 UPDATED.md +++ /dev/null @@ -1,15 +0,0 @@ -

        [FULL] silverfast 8 crack 13


        DOWNLOADhttps://gohhs.com/2uEyRR



        - -All versions with camera raw support: Full camera raw support with profiles for Sony ... to reset the serial number, reset settings or copy SilverFast ... Supports MPEG-2 stereo recording. -SilverFast 8 is ... and supports the JPEG, TIFF and RAW formats. -SilverFast 6.5.x has new ... SilverFast 6.5.x now supports ... -SilverFast 6.5.x has the following new features ... -The following has been fixed in the new SilverFast 6.5.x version ... -In the new SilverFast 7.2.0 version ... -The following appeared in the new SilverFast 7.2.0 version ... -In the new SilverFast 7.2.0 version, the following has been fixed ... -The following appeared in the new SilverFast 7.2.0 version 8a78ff9644
        
        -
        -
        -

        diff --git a/spaces/seduerr/text_analytics/README.md b/spaces/seduerr/text_analytics/README.md deleted file mode 100644 index 9ef46a91ce51ef447994d3a0cdcd2e2d67a5269f..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: text analytics -emoji: 🧐 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/senquan/ChuanhuChatGPT/assets/Kelpy-Codos.js b/spaces/senquan/ChuanhuChatGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/senquan/ChuanhuChatGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. -// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/seok07/Voice-Changer1/infer_pack/modules.py b/spaces/seok07/Voice-Changer1/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/seok07/Voice-Changer1/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from 
torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = 
torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - 
- def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - 
num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/sgxz/bingo/src/lib/utils.ts b/spaces/sgxz/bingo/src/lib/utils.ts deleted file mode 100644 index 0a09ddc4aa5518f681a00a64ad48566516f35417..0000000000000000000000000000000000000000 --- a/spaces/sgxz/bingo/src/lib/utils.ts +++ /dev/null @@ -1,158 +0,0 @@ -import { clsx, type ClassValue } from 'clsx' -import { customAlphabet } from 'nanoid' -import { twMerge } from 'tailwind-merge' - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} - -export const nanoid = customAlphabet( - '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', - 7 -) // 7-character random string - -export function createChunkDecoder() { - const decoder = new TextDecoder() - return function (chunk: Uint8Array | undefined): string { - if (!chunk) return '' - return decoder.decode(chunk, { stream: true }) - } -} - -export function random (start: number, end: number) { - return start + Math.ceil(Math.random() * (end - start)) -} - -export function randomIP() { - return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}` -} - -export const defaultUID = Math.random().toString(36).slice(2) - -export function parseHeadersFromCurl(content: string) { - const re = /-H '([^:]+):\s*([^']+)/mg - const headers: HeadersInit = {} - content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // 将 cmd curl 转成 bash curl - content.replace(re, (_: string, key: string, value: string) => { - headers[key] = value - return '' - }) - - return headers -} - -export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2'] -export function encodeHeadersToCookie(content: string) { - const base64Content = btoa(content) - const contentChunks = base64Content.match(/.{1,4000}/g) || [] - return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? 
''}`) -} - -export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) { - let base64Content = '' - ChunkKeys.forEach((key) => { - base64Content += (cookies[key] || '') - }) - try { - return atob(base64Content) - } catch(e) { - return '' - } -} - -export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) { - return parseHeadersFromCurl(extraCurlFromCookie(cookies)) -} - -export function formatDate(input: string | number | Date): string { - const date = new Date(input) - return date.toLocaleDateString('en-US', { - month: 'long', - day: 'numeric', - year: 'numeric' - }) -} - -export function parseCookie(cookie: string, cookieName: string) { - const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie - return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : '' -} - -export function setCookie(key: string, value: string) { - const maxAge = 86400 * 30 - document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure` -} - -export function getCookie(cookieName: string) { - const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`) - return re.test(document.cookie) ? RegExp.$1 : '' -} - -export function parseCookies(cookie: string, cookieNames: string[]) { - const cookies: { [key: string]: string } = {} - cookieNames.forEach(cookieName => { - cookies[cookieName] = parseCookie(cookie, cookieName) - }) - return cookies -} - -export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0' -export const DEFAULT_IP = process.env.BING_IP || randomIP() - -export function parseUA(ua?: string, default_ua = DEFAULT_UA) { - return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua -} - -export function createHeaders(cookies: Partial<{ [key: string]: string }>, defaultHeaders?: Partial<{ [key: string]: string }>, type?: string) { - let { - BING_COOKIE = process.env.BING_COOKIE, - BING_UA = process.env.BING_UA, - BING_IP = process.env.BING_IP, - BING_HEADER = process.env.BING_HEADER, - IMAGE_ONLY = process.env.IMAGE_ONLY ?? 
'1', - } = cookies - - if (BING_HEADER) { - const headers = extraHeadersFromCookie({ - BING_HEADER, - ...cookies, - }) || {} - if (/^(1|true|yes)$/.test(String(IMAGE_ONLY)) && type !== 'image') { - // 仅画图时设置 cookie - headers.cookie = `_U=${defaultUID}` - } - if (headers['user-agent']) { - return headers - } - } - - const ua = parseUA(BING_UA) - - if (!BING_COOKIE) { - BING_COOKIE = defaultHeaders?.IMAGE_BING_COOKIE || defaultUID // hf 暂时不用 Cookie 也可以正常使用 - } - - const parsedCookie = parseCookie(BING_COOKIE, '_U') - if (!parsedCookie) { - throw new Error('Invalid Cookie') - } - return { - 'x-forwarded-for': BING_IP || DEFAULT_IP, - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6', - 'User-Agent': ua!, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: `_U=${parsedCookie}` || '', - } -} - -export class WatchDog { - private tid = 0 - watch(fn: Function, timeout = 2000) { - clearTimeout(this.tid) - this.tid = setTimeout(fn, timeout + Math.random() * 1000) - } - reset() { - clearTimeout(this.tid) - } -} diff --git a/spaces/shivammittal274/LLM_CA/app.py b/spaces/shivammittal274/LLM_CA/app.py deleted file mode 100644 index 1b75ab91b89a54ce02a797db64e23e146d8e4b09..0000000000000000000000000000000000000000 --- a/spaces/shivammittal274/LLM_CA/app.py +++ /dev/null @@ -1,63 +0,0 @@ -import os -from typing import Optional, Tuple -import gradio as gr -from chatWithCache import llm_qa -from threading import Lock -from dotenv import load_dotenv -load_dotenv() -import settings -settings.init() - -class ChatWrapper: - def __init__(self): - self.lock = Lock() - def __call__( - self, inp: str, history: Optional[Tuple[str, str]], chain - ): - """Execute the chat functionality.""" - self.lock.acquire() - settings.init() - try: - history = history or [] - output, latency = llm_qa(inp, history) - if settings.cache_hit == 1: - output_str = f"Cache hit \n{latency}" - else: - output_str = f"Cache miss \n{latency}" - history.append((inp, output)) - except Exception as e: - raise e - finally: - self.lock.release() - return history, history, output_str - -chat = ChatWrapper() - -block = gr.Blocks(css=".gradio-container {background-color: lightgray}") - -with block: - with gr.Row(): - gr.Markdown("

        Ask your questions on your data

        ") - - chatbot = gr.Chatbot() - - with gr.Row(): - message = gr.Textbox( - label="What's your question?", - placeholder="Ask questions about your data", - lines=1, - ) - submit = gr.Button(value="Send", variant="secondary").style(full_width=False) - - output_text = gr.Textbox( - label="Details Of the response" - ).style(color='red') - - - state = gr.State() - agent_state = gr.State() - - submit.click(chat, inputs=[message, state, agent_state], outputs=[chatbot, state, output_text]) - message.submit(chat, inputs=[message, state, agent_state], outputs=[chatbot, state, output_text]) - -block.launch() \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/5apps The ultimate platform for client-side web apps.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/5apps The ultimate platform for client-side web apps.md deleted file mode 100644 index ac30cad680e157035533bc89b9497e72b44a142c..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/5apps The ultimate platform for client-side web apps.md +++ /dev/null @@ -1,104 +0,0 @@ -
        -

        5apps: A Platform for Building and Hosting Client-Side Web Apps

        -

        If you're a web developer who loves using web platform technologies like JavaScript, HTML5 and CSS, you might be interested in 5apps. 5apps is a platform that offers three services to help you create, deploy, host and manage your client-side web apps. In this article, we'll explain what 5apps is, why you should use it, and how to get started.

        -


        What is 5apps?

        -

        5apps is a platform that provides three services for web developers:

        -
• 5apps Deploy: A turn-key deployment and hosting platform for client-side web apps. You can use any framework you like, and just push your code via Git. 5apps will configure and deploy your app in all available formats and prepare it for submission to stores.
• 5apps Storage: A personal data cloud based on remoteStorage, an open protocol for user data storage. You can allow any compatible app to access your account, and you can move your data to any compatible provider or server you want, anytime.
• 5apps News: A social news site for HTML5, JS and friends. You can stay updated on the latest trends and technologies, share and discuss your own projects and ideas, and join a community of like-minded developers.
-

        Why use 5apps?

        -

        There are many benefits to using 5apps for your web development projects. Here are some of them:

        -

        Benefits of 5apps Deploy

        -
• Professional app delivery: There's more to web app delivery than hosting static files. 5apps handles all the technical details for you, such as SSL certificates, caching, compression, CDN delivery, CORS headers, service workers, manifest files, etc.
• Managed and monitored: You don't have to worry about uptime or performance. 5apps monitors your apps and ensures they are always online and fast. You can also view analytics and logs for your apps.
• Free for open source: If you choose an open-source license for your app, 5apps will host and deploy it free of charge. No limits, team access included.
-

        Benefits of 5apps Storage

        -
• Data ownership and portability: You have full control over your data. You can choose where to store it, how to access it, and who to share it with. You can also switch providers or servers anytime you want, without losing your data or breaking your apps.
• Connect and authorize apps: You can connect your storage account to any app that supports remoteStorage. You can also give or revoke permission to specific apps to access specific parts of your storage.
• Manage apps and data: You can view all the apps that are connected to your storage account and manage your data in a web interface. You can also sync your data across devices and back it up.
-

        Benefits of 5apps News

        -
• Stay updated on the latest trends and technologies: You can browse, search and filter news articles from various sources related to HTML5, JS and other web platform technologies. You can also subscribe to RSS feeds and newsletters.
• Share and discuss your own projects and ideas: You can submit your own articles, projects, tutorials, demos, etc. to 5apps News and get feedback from other developers. You can also comment on other submissions and vote for the ones you like.
• Join a community of like-minded developers: You can follow other users, join groups, chat with others, and participate in events and challenges. You can also earn badges and reputation points for your contributions.
-

        How to get started with 5apps?

        -

        Getting started with 5apps is easy and fast. Here are the steps you need to follow:

        -

        Sign up for a free account

        -

        You can sign up for a free account on 5apps.com using your email address or your GitHub account. You'll get access to all three services: Deploy, Storage and News.

        -

        Choose a service (Deploy, Storage or News)

        -

        You can choose which service you want to use first from the dashboard. You can switch between them anytime you want.

        -

        Follow the instructions and documentation

        -

        Each service has its own instructions and documentation to help you get started. You can find them on the website or in the app. For example, for Deploy, you'll need to create a repository, add a deploy key, push your code, and configure your app. For Storage, you'll need to create a storage account, connect apps, and manage your data. For News, you'll need to browse, submit, comment, and vote on articles.
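
As a rough illustration of the Deploy workflow, pushing an app might look like the following; the remote URL below is a hypothetical placeholder, so copy the real one from your app's settings page.

```shell
# Hypothetical remote URL -- take the actual address from your 5apps Deploy app settings
git remote add 5apps git@5apps.com:myuser/myapp.git
git push 5apps master
```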

        -

        Conclusion

        -

        5apps is a platform that offers three services for web developers who love using web platform technologies: Deploy, Storage and News. With 5apps, you can create, deploy, host and manage your client-side web apps, own and control your data in a personal cloud, and stay updated and connected with a community of like-minded developers. If you're interested in trying out 5apps, sign up for a free account today and start building amazing web apps!

        -

        Frequently Asked Questions

        -
• What are the pricing plans for 5apps?

5apps offers a free plan for open-source apps and personal data storage. It also offers paid plans for private apps and larger storage space. You can check the pricing details on the website.

-
• What are the technical requirements for using 5apps?

You'll need a modern web browser that supports HTML5, JS and CSS features. You'll also need a Git client to push your code to Deploy. For Storage, you'll need apps that support the remoteStorage protocol.


          -
• What are some examples of apps that use 5apps?

You can find some examples of apps that use 5apps on the website or on News. Some of them are: Laverna (a note-taking app), Litewrite (a minimalist writing app), Unhosted Webmail (a webmail client), etc.

-
• How can I contact 5apps support?

You can contact 5apps support via email at support@5apps.com or via Twitter at @5apps. You can also check the FAQ section on the website or the documentation for each service.

-
• How can I contribute to 5apps?

You can contribute to 5apps by using it, sharing it with others, giving feedback, reporting bugs, suggesting features, writing articles, creating apps, etc. You can also join the 5apps community on News or GitHub.

-

        -
        -
        \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/data/mmap_dataloader/mmap_index_dataset.py b/spaces/skf15963/summary/fengshen/data/mmap_dataloader/mmap_index_dataset.py deleted file mode 100644 index 53b290c12a8825a483f14ca0535a813b36477fa1..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/data/mmap_dataloader/mmap_index_dataset.py +++ /dev/null @@ -1,53 +0,0 @@ -import numpy as np -import torch -from typing import List -from torch.utils.data import Dataset - - -class MMapIndexDataset(Dataset): - # datapaths 是所有的内存映射文件的路径 - # input_tensor_name 是输入的tensor的名字 例如 ['input_ids'] 会存储在对应的文件里面 - def __init__(self, datapaths: List[str], input_tensor_name: List[str]): - dict_idx_fp = {} - dict_bin_fp = {} - idx_len = [] - for tensor_name in input_tensor_name: - idx_fp = [] - bin_fp = [] - len = 0 - for data_path in datapaths: - idx_fp += [np.load( - data_path + '_' + tensor_name + '.npy', mmap_mode='r')] - bin_fp += [np.memmap( - data_path + '_' + tensor_name + '.bin', - dtype='long', - mode='r')] - len += idx_fp[-1].shape[0] - idx_len += [idx_fp[-1].shape[0]] - dict_idx_fp[tensor_name] = idx_fp - dict_bin_fp[tensor_name] = bin_fp - #  通常情况下不同的tensor的长度是一样的 - self._len = len - - self._input_tensor_name = input_tensor_name - self._dict_idx_fp = dict_idx_fp - self._dict_bin_fp = dict_bin_fp - self._idx_len = idx_len - - def __len__(self): - return self._len - - def __getitem__(self, idx): - sample = {} - for i in range(len(self._idx_len)): - if idx >= self._idx_len[i]: - idx -= self._idx_len[i] - else: - break - for tensor_name in self._input_tensor_name: - sample[tensor_name] = torch.tensor(self._dict_bin_fp[tensor_name][i][ - self._dict_idx_fp[tensor_name][i][idx, 0]: - self._dict_idx_fp[tensor_name][i][idx, 1] - ], dtype=torch.long) - # print(sample) - return sample diff --git a/spaces/skf15963/summary/fengshen/examples/randeng_reasoning/README.md b/spaces/skf15963/summary/fengshen/examples/randeng_reasoning/README.md deleted file mode 100644 index b7ccc3df3d5c3fe50ebd52f1ddc8a822e13e6528..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/randeng_reasoning/README.md +++ /dev/null @@ -1,161 +0,0 @@ -# 燃灯系列-因果推理生成模型 - -- Huggingface: - - [Randeng-TransformerXL-5B-Deduction-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-TransformerXL-5B-Deduction-Chinese) - - [Randeng-TransformerXL-5B-Abduction-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-TransformerXL-5B-Abduction-Chinese) -- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/fengshen/examples/randeng_reasoning) -- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/) -- Demo: [Reasoning Tree](https://idea.edu.cn/ccnl-act/reasoning/) - -## 简介 Brief Introduction - -基于Transformer-XL的中文因果推理生成模型和反绎推理生成模型。 - -Chinese deductive reasoning model and abductive reasoning model based on Transformer-XL. 
- -## 模型分类 Model Taxonomy - -| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | -| :----: | :----: | :----: | :----: | :----: | :----: | -| 通用 General | 自然语言生成 NLG | 燃灯 Randeng | TransformerXL | 5.0B | 中文-因果推理 Chinese-Reasoning | - -## 模型信息 Model Information - -**数据准备 Corpus Preparation** - -* 悟道语料库(280G版本) -* 因果语料库(2.3M个样本):基于悟道语料库(280G版本),通过关联词匹配、人工标注 + [GTSFactory](https://gtsfactory.com/)筛选、数据清洗等步骤获取的具有因果关系的句子对 - -* Wudao Corpus (with 280G samples) -* Wudao Causal Corpus (with 2.3 million samples): Based on the Wudao corpus (280G version), sentence pairs with causality were obtained through logic indicator matching, manual annotation + [GTSFactory](https://gtsfactory.com/), and data cleaning. - -**训练流程 Model Training** -1. 在悟道语料库(280G版本)上进行预训练 -2. 在1.5M因果语料上分别进行因果生成任务和反绎生成任务的训练 -3. 基于其余0.8M因果语料,[Randeng-TransformerXL-5B-Deduction-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-TransformerXL-5B-Deduction-Chinese)、[Randeng-TransformerXL-5B-Abduction-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-TransformerXL-5B-Abduction-Chinese)和[Erlangshen-Roberta-330M-Causal-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-Roberta-330M-Causal-Chinese)进行Self-consistent闭环迭代训练 - * 两个生成模型基于核采样和贪心的方式进行因果推理和反绎推理,产生大量伪样本; - * Erlangshen-Roberta-330M-Causal-Chinese模型对伪样本句子对的因果关系进行打分,筛选供自身以及生成模型训练的样本 - -First, the Transformer-XL model was pre-trained on the Wudao Corpus (with 280G samples) and annotated similar-sentence pair dataset (same as [Randeng-TransformerXL-1.1B-Paraphrasing-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-TransformerXL-1.1B-Paraphrasing-Chinese)). -Then, the model was trained on our causal corpus (about 1.5 million samples) for the deductive reasoning task. -At last, based on the remaining 0.8 million samples of the causal corpus, we conducted self-consistent learning on [Randeng-TransformerXL-5B-Deduction-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-TransformerXL-5B-Deduction-Chinese) and [Randeng-TransformerXL-5B-Abduction-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-TransformerXL-5B-Abduction-Chinese), cooperating with [Erlangshen-Roberta-330M-Causal-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-Roberta-330M-Causal-Chinese). -Specifically, two generative models performed deductive reasoning and abductive reasoning based on each sample respectively, generating a large number of pseudo-samples; [Erlangshen-Roberta-330M-Causal-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-Roberta-330M-Causal-Chinese) scored the causality of the pseudo-samples and selected the training data for itself and the generative models in the next iteration. 
- -## 加载模型 Loading Models - -```shell -git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git -cd Fengshenbang-LM -``` - -```python -from fengshen.models.transfo_xl_reasoning import TransfoXLModel -from transformers import T5Tokenizer as TransfoXLTokenizer -deduction_model = TransfoXLModel.from_pretrained('IDEA-CCNL/Randeng-TransformerXL-5B-Deduction-Chinese') -abduction_model = TransfoXLModel.from_pretrained('IDEA-CCNL/Randeng-TransformerXL-5B-Abduction-Chinese') -tokenizer = TransfoXLTokenizer.from_pretrained( - "IDEA-CCNL/Randeng-TransformerXL-5B-Deduction-Chinese", - eos_token='<|endoftext|>', - pad_token='<|endoftext|>', - extra_ids=0 -) -tokenizer.add_special_tokens({'bos_token': ''}) -``` - -## 使用示例 Usage Example - -```python -from fengshen.models.transfo_xl_reasoning import deduction_generate, abduction_generate -input_text = "机器人统治世界" -input_texts = ["机器人统治世界", "玉米价格持续上涨"] -print(deduction_generate(deduction_model, tokenizer, input_text, device=0)) -print(deduction_generate(deduction_model, tokenizer, input_texts, device=0)) -print(abduction_generate(abduction_model, tokenizer, input_text, device=0)) -print(abduction_generate(abduction_model, tokenizer, input_texts, device=0)) -``` - -## 一些例子 Some Cases - -(以下内容为模型生成,不代表本单位观点) - -**因果推理 Deduction** -1. 鲸鱼是哺乳动物,哺乳动物是恒温动物 - - 鲸鱼拥有极强的恒温能力 - - 鲸鱼需要消耗大量的能量 - - 鲸鱼能在海洋中生存下来,完全依靠其体温恒定 -2. 鲸鱼是哺乳动物,哺乳动物不是恒温动物 - - 鲸鱼不是恒温动物,鲸鱼是变温动物 - - 鲸鱼会随温度变化而变化 - - 鲸鱼有冬眠现象 -3. 机器人统治世界 - - 机器人取代人类做大部分工作 - - 机器人世界的法律必须能够适用于机器人统治的现实世界 - - 机器人必须能够相互沟通,并能够处理人类无法处理的复杂情况 - - 未来将不再需要大量工人,机器人将接管工厂 -4. 玉米价格持续上涨 - - 玉米淀粉价格也呈现上涨趋势 - - 玉米种植效益不断攀升 - - 在玉米深加工行业引起了一阵骚动 -5. 实体经济融资难、融资贵 - - 急需发展互联网金融等金融业态,为实体经济提供融资服务 - - 融资需求向金融资产转移,增加了金融资产供给 - - 必须大力发展资本市场,使资本市场成为经济转型的助推器 -6. 影响华北地区的冷空气势力偏弱 - - 冷空气的影响时间将偏短 - - 冷空气影响结束后,华北地区气温会继续缓慢回升 - - 华北地区气温较常年同期偏高 - -**反绎推理 Abduction** -1. 玉米价格持续上涨 - - 玉米库存较低,需求增加 - - 东北地区受降雨天气影响,玉米生长受到影响 - - 今年玉米种植面积大幅度下降 -2. 玉米价格下跌 - - 玉米的库存量大,需求量低 - - 今年玉米产量创新高,而需求不足 - - 目前玉米市场处于供大于求的状态,再加上近期华北地区遭遇了强降雨天气,玉米质量下降 -3. 农作物大量死亡 - - 旱灾持续时间长,又无雨,土壤干裂,作物得不到水分 - - 霜冻来临,气温骤降,植物受冻 - - 许多农民为了使农作物能够长得更好,使用更多的农药,并且没有合理的休耕措施 -4. 鲸鱼需要消耗大量的能量 - - 鲸鱼的体型庞大,新陈代谢速度又快 - - 鲸鱼的身体结构特殊,需要消耗大量的能量来维持身体结构的稳定 -5. 实体经济融资难、融资贵 - - 融资渠道单一,实体经济难以获得充足的资金 - - 实体经济融资主要依赖抵押、担保、信贷等间接融资方式,存在抵押物不足、担保机制不完善等问题 - - 实体经济往往需要大量的资金,而银行受制于风险控制、资本充足率等要求,很难大量发放贷款 -6. 
火山爆发导致植物死亡 - - 火山灰会阻碍植物吸收阳光 - - 火山灰的飘散,导致植物无法吸收到足够的氧气 - - 火山喷发时,岩浆温度极高,植物无法承受 - - -## 引用 Citation - -如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): - -If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970): - -```text -@article{fengshenbang, - author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang}, - title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, - journal = {CoRR}, - volume = {abs/2209.02970}, - year = {2022} -} -``` - -也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): - -You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): - -```text -@misc{Fengshenbang-LM, - title={Fengshenbang-LM}, - author={IDEA-CCNL}, - year={2021}, - howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, -} -``` \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/ubert/example.py b/spaces/skf15963/summary/fengshen/examples/ubert/example.py deleted file mode 100644 index bedd365ff67ff5d9b1f8f22777dab9b5a8b02394..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/ubert/example.py +++ /dev/null @@ -1,95 +0,0 @@ -import argparse -from fengshen import UbertPipelines -import os -os.environ["CUDA_VISIBLE_DEVICES"] = '6' - - -def main(): - total_parser = argparse.ArgumentParser("TASK NAME") - total_parser = UbertPipelines.pipelines_args(total_parser) - args = total_parser.parse_args() - - # 设置一些训练要使用到的参数 - args.pretrained_model_path = 'IDEA-CCNL/Erlangshen-Ubert-110M-Chinese' #预训练模型的路径,我们提供的预训练模型存放在HuggingFace上 - args.default_root_dir = './' #默认主路径,用来放日志、tensorboard等 - args.max_epochs = 5 - args.gpus = 1 - args.batch_size = 1 - - # 只需要将数据处理成为下面数据的 json 样式就可以一键训练和预测,下面只是提供了一条示例样本 - train_data = [ - { - "task_type": "抽取任务", - "subtask_type": "实体识别", - "text": "彭小军认为,国内银行现在走的是台湾的发卡模式,先通过跑马圈地再在圈的地里面选择客户,", - "choices": [ - {"entity_type": "地址", "label": 0, "entity_list": [ - {"entity_name": "台湾", "entity_type": "地址", "entity_idx": [[15, 16]]}]}, - {"entity_type": "书名", "label": 0, "entity_list": []}, - {"entity_type": "公司", "label": 0, "entity_list": []}, - {"entity_type": "游戏", "label": 0, "entity_list": []}, - {"entity_type": "政府机构", "label": 0, "entity_list": []}, - {"entity_type": "电影名称", "label": 0, "entity_list": []}, - {"entity_type": "人物姓名", "label": 0, "entity_list": [ - {"entity_name": "彭小军", "entity_type": "人物姓名", "entity_idx": [[0, 2]]}]}, - {"entity_type": "组织机构", "label": 0, "entity_list": []}, - {"entity_type": "岗位职位", "label": 0, "entity_list": []}, - {"entity_type": "旅游景点", "label": 0, "entity_list": []} - ], - "id": 0} - ] - dev_data = [ - { - "task_type": "抽取任务", - "subtask_type": "实体识别", - "text": "就天涯网推出彩票服务频道是否是业内人士所谓的打政策“擦边球”,记者近日对此事求证彩票监管部门。", - "choices": [ - {"entity_type": "地址", "label": 0, "entity_list": []}, - {"entity_type": "书名", "label": 0, "entity_list": []}, - {"entity_type": "公司", "label": 0, "entity_list": [ - {"entity_name": "天涯网", "entity_type": "公司", "entity_idx": [[1, 3]]}]}, - {"entity_type": "游戏", "label": 0, "entity_list": []}, - {"entity_type": "政府机构", "label": 0, "entity_list": []}, - {"entity_type": "电影名称", "label": 0, "entity_list": 
[]}, - {"entity_type": "人物姓名", "label": 0, "entity_list": []}, - {"entity_type": "组织机构", "label": 0, "entity_list": [ - {"entity_name": "彩票监管部门", "entity_type": "组织机构", "entity_idx": [[40, 45]]}]}, - {"entity_type": "岗位职位", "label": 0, "entity_list": [ - {"entity_name": "记者", "entity_type": "岗位职位", "entity_idx": [[31, 32]]}]}, - {"entity_type": "旅游景点", "label": 0, "entity_list": []} - ], - - "id": 0} - - ] - test_data = [ - { - "task_type": "抽取任务", - "subtask_type": "实体识别", - "text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。", - "choices": [ - {"entity_type": "地址", "label": 0, "entity_list": [ - {"entity_name": "雅清苑", "entity_type": "地址", "entity_idx": [[12, 14]]}]}, - {"entity_type": "书名", "label": 0, "entity_list": []}, - {"entity_type": "公司", "label": 0, "entity_list": []}, - {"entity_type": "游戏", "label": 0, "entity_list": []}, - {"entity_type": "政府机构", "label": 0, "entity_list": []}, - {"entity_type": "电影名称", "label": 0, "entity_list": []}, - {"entity_type": "人物姓名", "label": 0, "entity_list": []}, - {"entity_type": "组织机构", "label": 0, "entity_list": []}, - {"entity_type": "岗位职位", "label": 0, "entity_list": [ - {"entity_name": "公务员", "entity_type": "岗位职位", "entity_idx": [[18, 20]]}]}, - {"entity_type": "旅游景点", "label": 0, "entity_list": []} - ], - "id": 0}, - ] - - model = UbertPipelines(args) - model.fit(train_data, dev_data) - result = model.predict(test_data) - for line in result: - print(line) - - -if __name__ == "__main__": - main() diff --git a/spaces/sohomghosh/FLUEnT/fincat_utils.py b/spaces/sohomghosh/FLUEnT/fincat_utils.py deleted file mode 100644 index 67a8f45f5ac88a8f62293efb93a7a079f3bd70b4..0000000000000000000000000000000000000000 --- a/spaces/sohomghosh/FLUEnT/fincat_utils.py +++ /dev/null @@ -1,108 +0,0 @@ -import pandas as pd -import numpy as np -import pickle -import torch -from torch.utils.data import Dataset, DataLoader -from transformers import BertTokenizer, BertModel -from transformers import AutoTokenizer, AutoModel -import nltk - -tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') -model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states = True,) - -def extract_context_words(x, window = 6): - paragraph, offset_start, offset_end = x['paragraph'], x['offset_start'], x['offset_end'] - target_word = paragraph[offset_start : offset_end] - paragraph = ' ' + paragraph + ' ' - offset_start = offset_start + 1 - offset_end = offset_end + 1 - prev_space_posn = (paragraph[:offset_start].rindex(' ') + 1) - end_space_posn = (offset_end + paragraph[offset_end:].index(' ')) - full_word = paragraph[prev_space_posn : end_space_posn] - - prev_words = nltk.word_tokenize(paragraph[0:prev_space_posn]) - next_words = nltk.word_tokenize(paragraph[end_space_posn:]) - words_in_context_window = prev_words[-1*window:] + [full_word] + next_words[:window] - context_text = ' '.join(words_in_context_window) - return context_text - -"""The following functions have been created with inspiration from https://github.com/arushiprakash/MachineLearning/blob/main/BERT%20Word%20Embeddings.ipynb""" - -def bert_text_preparation(text, tokenizer): - """Preparing the input for BERT - - Takes a string argument and performs - pre-processing like adding special tokens, - tokenization, tokens to ids, and tokens to - segment ids. All tokens are mapped to seg- - ment id = 1. 
- - Args: - text (str): Text to be converted - tokenizer (obj): Tokenizer object - to convert text into BERT-re- - adable tokens and ids - - Returns: - list: List of BERT-readable tokens - obj: Torch tensor with token ids - obj: Torch tensor segment ids - - """ - marked_text = "[CLS] " + text + " [SEP]" - tokenized_text = tokenizer.tokenize(marked_text) - indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) - segments_ids = [1]*len(indexed_tokens) - - # Convert inputs to PyTorch tensors - tokens_tensor = torch.tensor([indexed_tokens]) - segments_tensors = torch.tensor([segments_ids]) - - return tokenized_text, tokens_tensor, segments_tensors - -def get_bert_embeddings(tokens_tensor, segments_tensors, model): - """Get embeddings from an embedding model - - Args: - tokens_tensor (obj): Torch tensor size [n_tokens] - with token ids for each token in text - segments_tensors (obj): Torch tensor size [n_tokens] - with segment ids for each token in text - model (obj): Embedding model to generate embeddings - from token and segment ids - - Returns: - list: List of list of floats of size - [n_tokens, n_embedding_dimensions] - containing embeddings for each token - """ - - # Gradient calculation id disabled - # Model is in inference mode - with torch.no_grad(): - outputs = model(tokens_tensor, segments_tensors) - # Removing the first hidden state - # The first state is the input state - hidden_states = outputs[2][1:] - - # Getting embeddings from the final BERT layer - token_embeddings = hidden_states[-1] - # Collapsing the tensor into 1-dimension - token_embeddings = torch.squeeze(token_embeddings, dim=0) - # Converting torchtensors to lists - list_token_embeddings = [token_embed.tolist() for token_embed in token_embeddings] - - return list_token_embeddings - -def bert_embedding_extract(context_text, word): - tokenized_text, tokens_tensor, segments_tensors = bert_text_preparation(context_text, tokenizer) - list_token_embeddings = get_bert_embeddings(tokens_tensor, segments_tensors, model) - word_tokens,tt,st = bert_text_preparation(word, tokenizer) - word_embedding_all = [] - for word_tk in word_tokens: - word_index = tokenized_text.index(word_tk) - word_embedding = list_token_embeddings[word_index] - word_embedding_all.append(word_embedding) - word_embedding_mean = np.array(word_embedding_all).mean(axis=0) - return word_embedding_mean - diff --git a/spaces/spillwaysofyoursoul/janitorai/Dockerfile b/spaces/spillwaysofyoursoul/janitorai/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/spillwaysofyoursoul/janitorai/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_generate.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_generate.py deleted file mode 100644 index daeeae059a677a9fcd7c370be087f1f5c189bc52..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_generate.py +++ /dev/null @@ -1,397 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, 
Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Generate n-best translations using a trained model. -""" - -import os -import subprocess -from contextlib import redirect_stdout - -from fairseq import options -from fairseq_cli import generate, preprocess - -from examples.noisychannel import rerank_options, rerank_utils - - -def gen_and_reprocess_nbest(args): - if args.score_dict_dir is None: - args.score_dict_dir = args.data - if args.prefix_len is not None: - assert ( - args.right_to_left1 is False - ), "prefix length not compatible with right to left models" - assert ( - args.right_to_left2 is False - ), "prefix length not compatible with right to left models" - - if args.nbest_list is not None: - assert args.score_model2 is None - - if args.backwards1: - scorer1_src = args.target_lang - scorer1_tgt = args.source_lang - else: - scorer1_src = args.source_lang - scorer1_tgt = args.target_lang - - store_data = ( - os.path.join(os.path.dirname(__file__)) + "/rerank_data/" + args.data_dir_name - ) - if not os.path.exists(store_data): - os.makedirs(store_data) - - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - args.shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - assert not ( - args.right_to_left1 and args.backwards1 - ), "backwards right to left not supported" - assert not ( - args.right_to_left2 and args.backwards2 - ), "backwards right to left not supported" - assert not ( - args.prefix_len is not None and args.target_prefix_frac is not None - ), "target prefix frac and target prefix len incompatible" - - # make directory to store generation results - if not os.path.exists(pre_gen): - os.makedirs(pre_gen) - - rerank1_is_gen = ( - args.gen_model == args.score_model1 and args.source_prefix_frac is None - ) - rerank2_is_gen = ( - args.gen_model == args.score_model2 and args.source_prefix_frac is None - ) - - if args.nbest_list is not None: - rerank2_is_gen = True - - # make directories to store preprossed nbest list for reranking - if not os.path.exists(left_to_right_preprocessed_dir): - os.makedirs(left_to_right_preprocessed_dir) - if not os.path.exists(right_to_left_preprocessed_dir): - os.makedirs(right_to_left_preprocessed_dir) - if not os.path.exists(lm_preprocessed_dir): - os.makedirs(lm_preprocessed_dir) - if not os.path.exists(backwards_preprocessed_dir): - os.makedirs(backwards_preprocessed_dir) - - score1_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model1_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards1, - ) - if args.score_model2 is not None: - score2_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model2_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards2, - ) - - predictions_bpe_file = pre_gen + "/generate_output_bpe.txt" - - using_nbest = args.nbest_list is not None - - if using_nbest: - print("Using predefined n-best list from interactive.py") - predictions_bpe_file = args.nbest_list - - else: - if not os.path.isfile(predictions_bpe_file): - print("STEP 1: generate predictions 
using the p(T|S) model with bpe") - print(args.data) - param1 = [ - args.data, - "--path", - args.gen_model, - "--shard-id", - str(args.shard_id), - "--num-shards", - str(args.num_shards), - "--nbest", - str(args.num_rescore), - "--batch-size", - str(args.batch_size), - "--beam", - str(args.num_rescore), - "--batch-size", - str(args.num_rescore), - "--gen-subset", - args.gen_subset, - "--source-lang", - args.source_lang, - "--target-lang", - args.target_lang, - ] - if args.sampling: - param1 += ["--sampling"] - - gen_parser = options.get_generation_parser() - input_args = options.parse_args_and_arch(gen_parser, param1) - - print(input_args) - with open(predictions_bpe_file, "w") as f: - with redirect_stdout(f): - generate.main(input_args) - - gen_output = rerank_utils.BitextOutputFromGen( - predictions_bpe_file, - bpe_symbol=args.post_process, - nbest=using_nbest, - prefix_len=args.prefix_len, - target_prefix_frac=args.target_prefix_frac, - ) - - if args.diff_bpe: - rerank_utils.write_reprocessed( - gen_output.no_bpe_source, - gen_output.no_bpe_hypo, - gen_output.no_bpe_target, - pre_gen + "/source_gen_bpe." + args.source_lang, - pre_gen + "/target_gen_bpe." + args.target_lang, - pre_gen + "/reference_gen_bpe." + args.target_lang, - ) - bitext_bpe = args.rescore_bpe_code - bpe_src_param = [ - "-c", - bitext_bpe, - "--input", - pre_gen + "/source_gen_bpe." + args.source_lang, - "--output", - pre_gen + "/rescore_data." + args.source_lang, - ] - bpe_tgt_param = [ - "-c", - bitext_bpe, - "--input", - pre_gen + "/target_gen_bpe." + args.target_lang, - "--output", - pre_gen + "/rescore_data." + args.target_lang, - ] - - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_src_param, - shell=False, - ) - - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_tgt_param, - shell=False, - ) - - if (not os.path.isfile(score1_file) and not rerank1_is_gen) or ( - args.score_model2 is not None - and not os.path.isfile(score2_file) - and not rerank2_is_gen - ): - print( - "STEP 2: process the output of generate.py so we have clean text files with the translations" - ) - - rescore_file = "/rescore_data" - if args.prefix_len is not None: - prefix_len_rescore_file = rescore_file + "prefix" + str(args.prefix_len) - if args.target_prefix_frac is not None: - target_prefix_frac_rescore_file = ( - rescore_file + "target_prefix_frac" + str(args.target_prefix_frac) - ) - if args.source_prefix_frac is not None: - source_prefix_frac_rescore_file = ( - rescore_file + "source_prefix_frac" + str(args.source_prefix_frac) - ) - - if not args.right_to_left1 or not args.right_to_left2: - if not args.diff_bpe: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + rescore_file + "." + args.source_lang, - pre_gen + rescore_file + "." + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - ) - if args.prefix_len is not None: - bw_rescore_file = prefix_len_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + prefix_len_rescore_file + "." + args.source_lang, - pre_gen + prefix_len_rescore_file + "." 
+ args.target_lang, - pre_gen + "/reference_file", - prefix_len=args.prefix_len, - bpe_symbol=args.post_process, - ) - elif args.target_prefix_frac is not None: - bw_rescore_file = target_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - target_prefix_frac=args.target_prefix_frac, - ) - else: - bw_rescore_file = rescore_file - - if args.source_prefix_frac is not None: - fw_rescore_file = source_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - source_prefix_frac=args.source_prefix_frac, - ) - else: - fw_rescore_file = rescore_file - - if args.right_to_left1 or args.right_to_left2: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + "/right_to_left_rescore_data." + args.source_lang, - pre_gen + "/right_to_left_rescore_data." + args.target_lang, - pre_gen + "/right_to_left_reference_file", - right_to_left=True, - bpe_symbol=args.post_process, - ) - - print("STEP 3: binarize the translations") - if ( - not args.right_to_left1 - or args.score_model2 is not None - and not args.right_to_left2 - or not rerank1_is_gen - ): - - if args.backwards1 or args.backwards2: - if args.backwards_score_dict_dir is not None: - bw_dict = args.backwards_score_dict_dir - else: - bw_dict = args.score_dict_dir - bw_preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + bw_rescore_file, - "--srcdict", - bw_dict + "/dict." + scorer1_src + ".txt", - "--tgtdict", - bw_dict + "/dict." + scorer1_tgt + ".txt", - "--destdir", - backwards_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(bw_preprocess_param) - preprocess.main(input_args) - - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + fw_rescore_file, - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." + scorer1_tgt + ".txt", - "--destdir", - left_to_right_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - if args.right_to_left1 or args.right_to_left2: - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + "/right_to_left_rescore_data", - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." 
+ scorer1_tgt + ".txt", - "--destdir", - right_to_left_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - return gen_output - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - gen_and_reprocess_nbest(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/stomexserde/gpt4-ui/Examples/Assimil Espagnol Perfectionnement Pdf 19.md b/spaces/stomexserde/gpt4-ui/Examples/Assimil Espagnol Perfectionnement Pdf 19.md deleted file mode 100644 index e7a6058ba5698f1666e9ce47c68b219cb1d0e3fa..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Assimil Espagnol Perfectionnement Pdf 19.md +++ /dev/null @@ -1,29 +0,0 @@ - -

        How to Improve Your Spanish with Assimil Espagnol Perfectionnement PDF 19

        -

        If you are looking for a way to improve your Spanish skills, you may have heard of Assimil Espagnol Perfectionnement PDF 19. This is a digital version of the popular Assimil method, which has been helping language learners for over 90 years. But what is Assimil Espagnol Perfectionnement PDF 19 and how can it help you?

        -

        Assimil Espagnol Perfectionnement PDF 19 is a comprehensive course that covers all aspects of Spanish grammar, vocabulary, pronunciation, and culture. It consists of 70 lessons, each with a dialogue, notes, exercises, and audio recordings. The course follows the principle of intuitive assimilation, which means that you learn by listening and reading, without memorizing or translating. You gradually acquire the language in a natural and effortless way.

        -


        Assimil Espagnol Perfectionnement PDF 19 is designed for intermediate to advanced learners who want to reach a high level of fluency and accuracy in Spanish. It is suitable for self-study or as a complement to other courses. You can download the PDF and audio files to your computer or mobile device and study at your own pace and convenience.

        -

        Some of the benefits of using Assimil Espagnol Perfectionnement PDF 19 are:

        -
• You will learn authentic and contemporary Spanish that is used in real-life situations.
• You will enrich your vocabulary and expressions with idioms, slang, and cultural references.
• You will master complex grammatical structures and nuances with clear and concise explanations.
• You will improve your pronunciation and listening comprehension with native speakers from different regions of Spain and Latin America.
• You will discover the diversity and richness of the Spanish-speaking world through interesting texts and dialogues.
-

        If you want to take your Spanish to the next level, Assimil Espagnol Perfectionnement PDF 19 is a great choice. You can find it on the official Assimil website or on other online platforms. Start your journey to Spanish perfection today!

-

        How to use Assimil Espagnol Perfectionnement PDF 19

        -

        Using Assimil Espagnol Perfectionnement PDF 19 is easy and flexible. You can choose the method that suits you best, depending on your goals and preferences. Here are some suggestions:

        -
• The passive phase: For the first 35 lessons, you simply listen and read the dialogues and notes. You don't need to do the exercises or repeat the sentences. You just let the language sink in.
• The active phase: From lesson 36 onwards, you start to apply what you have learned. You do the exercises and repeat the dialogues aloud. You also review the previous lessons and try to translate them from your native language to Spanish.
• The mixed phase: You can also combine the passive and active phases according to your needs. For example, you can do the passive phase for one lesson and the active phase for another. Or you can do both phases for each lesson. The important thing is to be consistent and regular in your study.
-

How long does it take to complete Assimil Espagnol Perfectionnement PDF 19?

        -

The duration of the course depends on how much time you dedicate to it and how fast you progress. However, a general guideline is to spend about 30 minutes per day, six days a week, for about six months. At that pace, roughly 150 study sessions spread over six months, you can finish the course in about 180 calendar days or less.

        -

        Of course, this is not a fixed rule. You can go faster or slower depending on your level and motivation. You can also review the lessons as many times as you want or skip the ones that you find too easy or too difficult. The important thing is to enjoy the process and have fun learning Spanish.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Configserver Exploit Scanner Nulled Scriptsl.md b/spaces/stomexserde/gpt4-ui/Examples/Configserver Exploit Scanner Nulled Scriptsl.md deleted file mode 100644 index 94c6f65c12da54d2633e88d14688e64d8134d13f..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Configserver Exploit Scanner Nulled Scriptsl.md +++ /dev/null @@ -1,63 +0,0 @@ -
        -

        Configserver Exploit Scanner: A Server Malware and Antivirus Scanner

        -

        Configserver Exploit Scanner (CXS) is a server malware, exploit and antivirus scanner that performs active scanning of files as they are uploaded to the server. It can also perform manual and scheduled scans of files, directories and user accounts for suspicious files, potential exploits and viruses. CXS is compatible with cPanel, DirectAdmin and other control panels that use PureFTPd or ProFTPd. It can also integrate with ModSecurity to block malicious requests.

        -


        CXS can help protect your server from various types of malware, such as:

        -
• PHP and Perl shell scripts
• PHP upload scripts
• PHP mailers
• Mass mail scripts
• Flooders
• Fetch scripts
• IRC bots
• Rootkits
• SymLink attacks
• CGI exploits
• Web shells
• Backdoors
• Trojans
• Viruses
-

        CXS uses over 4000 known exploit script fingerprints, as well as ClamAV signatures, to detect malware. It also uses regular expression pattern matching, filename matching, suspicious file types and Bayes probability scanning to identify unknown exploits. CXS can quarantine or delete suspicious files, send email alerts, log scan results and generate statistics. CXS also checks for outdated versions of popular web scripts, such as WordPress, Joomla and osCommerce, and warns you of any security vulnerabilities.
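
For instance, a manual on-demand scan might be launched as below. The flag names are assumptions drawn from typical CXS documentation rather than from this article, so verify them against the CXS manual before use.

```shell
# Illustrative manual scan of all user accounts; quarantine matched files and mail a report
# (flag names are assumptions -- check the CXS documentation on your server)
/usr/sbin/cxs --allusers --mail root --quarantine /home/cxsquarantine
```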

        -

        CXS is a paid product that costs $60 per server. It includes software updates for the life of the product and initial installation with recommended configuration options. You can purchase CXS from ConfigServer Services. You can also find more information about CXS features, benefits and requirements on their website.

        -

        -

        How to Install CXS on Your Server

        -

        There are different ways to install CXS on your server, depending on your control panel and operating system. Here are some common methods:

        -

        For cPanel Servers

        -

        If you have a cPanel server, you can install CXS using the following steps:

        -
1. Download and run the CXS installer script from ConfigServer Services.
2. Read the installation instructions in /etc/cxs/install.txt from step 2 onwards.
3. Modify the following files to suit your requirements: /etc/cxs/cxsftp.sh, /etc/cxs/cxscgi.sh and /etc/cxs/cxswatch.sh.
4. Enable web script upload scanning via ModSecurity by adding the lines in /etc/cxs/modsec.conf to your mod_security rules file.
5. Enable pure-ftpd upload scanning by editing /etc/pure-ftpd.conf and adding the line: CallUploadScript yes.
6. Restart the httpd and pure-ftpd services.
7. Make sure you have a running clamd daemon for ClamAV scanning. By default, CXS will look for the clamd socket at /tmp/clamd and /var/clamd.
8. If you want automatic updates, create a cron job using the command: /usr/sbin/cxs --upgrade --quiet (see the sketch below).
-
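The commands below collect steps 5, 6 and 8 in one place. This is a minimal sketch based only on the paths and commands quoted above; the service names and the cron schedule are assumptions, so adapt them to your system.

```shell
# Enable pure-ftpd upload scanning (step 5)
echo 'CallUploadScript yes' >> /etc/pure-ftpd.conf

# Restart the affected services (step 6); service names assume a typical cPanel/CentOS layout
service httpd restart
service pure-ftpd restart

# Automatic nightly CXS updates (step 8); the 3 a.m. schedule is an arbitrary example
echo '0 3 * * * root /usr/sbin/cxs --upgrade --quiet' >> /etc/crontab
```

-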

        For DirectAdmin Servers

        -

        If you have a DirectAdmin server, you can install CXS using the following steps:

        -
1. Download and run the CXS installer script from ConfigServer Services.
2. Read the installation instructions in /etc/cxs/install.txt from step 2 onwards.
3. Modify the following files to suit your requirements: /etc/cxs/cxsftp.sh, /etc/cxs/cxscgi.sh and /etc/cxs/cxswatch.sh.
4. Enable web script upload scanning via ModSecurity by adding the lines in /etc/cxs/modsec.conf to your mod_security rules file.
5. Enable proftpd upload scanning by editing /etc/proftpd.conf and adding the line: LoadModule mod_exec.c. Then add an Exec section at the end of the file with the line: Exec POST_CMD * /etc/cxs/cxsftp.sh %u %f %h %a %m %s %U %R (see the sketch below).
6. Restart the httpd and proftpd services.
7. Make sure you have a running clamd daemon for ClamAV scanning. By default, CXS will look for the clamd socket at /tmp/clamd and /var/clamd.
8. If you want automatic updates, create a cron job using the command: /usr/sbin/cxs --upgrade --quiet.
-
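As a rough sketch of step 5, the proftpd additions might look like the following. The two directives are quoted from the step above; whether your proftpd build needs the LoadModule line (or has mod_exec compiled in) is an assumption you should verify.

```shell
# Append the mod_exec hook to /etc/proftpd.conf (directives taken from step 5 above)
cat >> /etc/proftpd.conf <<'EOF'
LoadModule mod_exec.c
# Run the CXS FTP upload script after each completed upload
Exec POST_CMD * /etc/cxs/cxsftp.sh %u %f %h %a %m %s %U %R
EOF
service proftpd restart
```

-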

        For Other Control Panels or Linux Servers

        -

        If you have another control panel or a Linux server without a control panel, you can install CXS using the following steps:

        1. Download and run the CXS installer script from ConfigServer Services.
        2. Read the installation instructions in /etc/cxs/install.txt from step 2 onwards.
        3. Modify the following files to suit your requirements: /etc/cxs/cxswatch.sh and any other scripts you want to use for scanning.
        4. If you want to enable web script upload scanning via ModSecurity, install ModSecurity first and then add the lines in /etc/cxs/modsec.conf to your mod_security rules file.
        5. If you want to enable FTP upload scanning, install an FTP server that supports upload scripts, such as PureFTPd or ProFTPd, and then configure it to run CXS as an upload script.
        6. Restart any services that you have modified or installed.
        7. Make sure you have a running clamd daemon for ClamAV scanning. By default, CXS looks for the clamd socket at /tmp/clamd and /var/clamd.
        8. If you want automatic updates, create a cron job using the command: /usr/sbin/cxs --upgrade --quiet (a short sketch of steps 4 and 7 follows this list).
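        To round off the generic install, steps 4 and 7 might look like the sketch below. The rules-file path is an assumption (it differs between Apache builds and distros); the socket paths come from the step above:

            # Step 4: append the CXS ModSecurity hook to your rules file (path varies by distro)
            cat /etc/cxs/modsec.conf >> /etc/httpd/conf.d/mod_security.conf

            # Step 7: confirm a clamd socket exists where CXS expects to find it
            ls -l /tmp/clamd /var/clamd 2>/dev/null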

          \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/train.py b/spaces/stratussox/yolov5_inference/train.py deleted file mode 100644 index 1fe6cf4d9ebd121eec8b10225d00f5d986a62efc..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/train.py +++ /dev/null @@ -1,630 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Train a YOLOv5 model on a custom dataset. -Models and datasets download automatically from the latest YOLOv5 release. - -Usage - Single-GPU training: - $ python train.py --data coco128.yaml --weights yolov5s.pt --img 640 # from pretrained (recommended) - $ python train.py --data coco128.yaml --weights '' --cfg yolov5s.yaml --img 640 # from scratch - -Usage - Multi-GPU DDP training: - $ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 train.py --data coco128.yaml --weights yolov5s.pt --img 640 --device 0,1,2,3 - -Models: https://github.com/ultralytics/yolov5/tree/master/models -Datasets: https://github.com/ultralytics/yolov5/tree/master/data -Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data -""" - -import argparse -import math -import os -import random -import sys -import time -from copy import deepcopy -from datetime import datetime -from pathlib import Path - -import numpy as np -import torch -import torch.distributed as dist -import torch.nn as nn -import yaml -from torch.optim import lr_scheduler -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -import val as validate # for end-of-epoch mAP -from models.experimental import attempt_load -from models.yolo import Model -from utils.autoanchor import check_anchors -from utils.autobatch import check_train_batch_size -from utils.callbacks import Callbacks -from utils.dataloaders import create_dataloader -from utils.downloads import attempt_download, is_url -from utils.general import (LOGGER, check_amp, check_dataset, check_file, check_git_status, check_img_size, - check_requirements, check_suffix, check_yaml, colorstr, get_latest_run, increment_path, - init_seeds, intersect_dicts, labels_to_class_weights, labels_to_image_weights, methods, - one_cycle, print_args, print_mutation, strip_optimizer, yaml_save) -from utils.loggers import Loggers -from utils.loggers.comet.comet_utils import check_comet_resume -from utils.loss import ComputeLoss -from utils.metrics import fitness -from utils.plots import plot_evolve -from utils.torch_utils import (EarlyStopping, ModelEMA, de_parallel, select_device, smart_DDP, smart_optimizer, - smart_resume, torch_distributed_zero_first) - -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv('RANK', -1)) -WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1)) - - -def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictionary - save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze = \ - Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \ - opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze - callbacks.run('on_pretrain_routine_start') - - # Directories - w = save_dir / 'weights' # weights dir - (w.parent if evolve else w).mkdir(parents=True, exist_ok=True) # make dir - last, best = w / 'last.pt', w / 'best.pt' 
- - # Hyperparameters - if isinstance(hyp, str): - with open(hyp, errors='ignore') as f: - hyp = yaml.safe_load(f) # load hyps dict - LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items())) - opt.hyp = hyp.copy() # for saving hyps to checkpoints - - # Save run settings - if not evolve: - yaml_save(save_dir / 'hyp.yaml', hyp) - yaml_save(save_dir / 'opt.yaml', vars(opt)) - - # Loggers - data_dict = None - if RANK in {-1, 0}: - loggers = Loggers(save_dir, weights, opt, hyp, LOGGER) # loggers instance - - # Register actions - for k in methods(loggers): - callbacks.register_action(k, callback=getattr(loggers, k)) - - # Process custom dataset artifact link - data_dict = loggers.remote_dataset - if resume: # If resuming runs from remote artifact - weights, epochs, hyp, batch_size = opt.weights, opt.epochs, opt.hyp, opt.batch_size - - # Config - plots = not evolve and not opt.noplots # create plots - cuda = device.type != 'cpu' - init_seeds(opt.seed + 1 + RANK, deterministic=True) - with torch_distributed_zero_first(LOCAL_RANK): - data_dict = data_dict or check_dataset(data) # check if None - train_path, val_path = data_dict['train'], data_dict['val'] - nc = 1 if single_cls else int(data_dict['nc']) # number of classes - names = {0: 'item'} if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names - is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt') # COCO dataset - - # Model - check_suffix(weights, '.pt') # check weights - pretrained = weights.endswith('.pt') - if pretrained: - with torch_distributed_zero_first(LOCAL_RANK): - weights = attempt_download(weights) # download if not found locally - ckpt = torch.load(weights, map_location='cpu') # load checkpoint to CPU to avoid CUDA memory leak - model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create - exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else [] # exclude keys - csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 - csd = intersect_dicts(csd, model.state_dict(), exclude=exclude) # intersect - model.load_state_dict(csd, strict=False) # load - LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}') # report - else: - model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create - amp = check_amp(model) # check AMP - - # Freeze - freeze = [f'model.{x}.' 
for x in (freeze if len(freeze) > 1 else range(freeze[0]))] # layers to freeze - for k, v in model.named_parameters(): - v.requires_grad = True # train all layers - # v.register_hook(lambda x: torch.nan_to_num(x)) # NaN to 0 (commented for erratic training results) - if any(x in k for x in freeze): - LOGGER.info(f'freezing {k}') - v.requires_grad = False - - # Image size - gs = max(int(model.stride.max()), 32) # grid size (max stride) - imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2) # verify imgsz is gs-multiple - - # Batch size - if RANK == -1 and batch_size == -1: # single-GPU only, estimate best batch size - batch_size = check_train_batch_size(model, imgsz, amp) - loggers.on_params_update({"batch_size": batch_size}) - - # Optimizer - nbs = 64 # nominal batch size - accumulate = max(round(nbs / batch_size), 1) # accumulate loss before optimizing - hyp['weight_decay'] *= batch_size * accumulate / nbs # scale weight_decay - optimizer = smart_optimizer(model, opt.optimizer, hyp['lr0'], hyp['momentum'], hyp['weight_decay']) - - # Scheduler - if opt.cos_lr: - lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf'] - else: - lf = lambda x: (1 - x / epochs) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) # plot_lr_scheduler(optimizer, scheduler, epochs) - - # EMA - ema = ModelEMA(model) if RANK in {-1, 0} else None - - # Resume - best_fitness, start_epoch = 0.0, 0 - if pretrained: - if resume: - best_fitness, start_epoch, epochs = smart_resume(ckpt, optimizer, ema, weights, epochs, resume) - del ckpt, csd - - # DP mode - if cuda and RANK == -1 and torch.cuda.device_count() > 1: - LOGGER.warning('WARNING ⚠️ DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n' - 'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.') - model = torch.nn.DataParallel(model) - - # SyncBatchNorm - if opt.sync_bn and cuda and RANK != -1: - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device) - LOGGER.info('Using SyncBatchNorm()') - - # Trainloader - train_loader, dataset = create_dataloader(train_path, - imgsz, - batch_size // WORLD_SIZE, - gs, - single_cls, - hyp=hyp, - augment=True, - cache=None if opt.cache == 'val' else opt.cache, - rect=opt.rect, - rank=LOCAL_RANK, - workers=workers, - image_weights=opt.image_weights, - quad=opt.quad, - prefix=colorstr('train: '), - shuffle=True) - labels = np.concatenate(dataset.labels, 0) - mlc = int(labels[:, 0].max()) # max label class - assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. 
Possible class labels are 0-{nc - 1}' - - # Process 0 - if RANK in {-1, 0}: - val_loader = create_dataloader(val_path, - imgsz, - batch_size // WORLD_SIZE * 2, - gs, - single_cls, - hyp=hyp, - cache=None if noval else opt.cache, - rect=True, - rank=-1, - workers=workers * 2, - pad=0.5, - prefix=colorstr('val: '))[0] - - if not resume: - if not opt.noautoanchor: - check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) # run AutoAnchor - model.half().float() # pre-reduce anchor precision - - callbacks.run('on_pretrain_routine_end', labels, names) - - # DDP mode - if cuda and RANK != -1: - model = smart_DDP(model) - - # Model attributes - nl = de_parallel(model).model[-1].nl # number of detection layers (to scale hyps) - hyp['box'] *= 3 / nl # scale to layers - hyp['cls'] *= nc / 80 * 3 / nl # scale to classes and layers - hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl # scale to image size and layers - hyp['label_smoothing'] = opt.label_smoothing - model.nc = nc # attach number of classes to model - model.hyp = hyp # attach hyperparameters to model - model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights - model.names = names - - # Start training - t0 = time.time() - nb = len(train_loader) # number of batches - nw = max(round(hyp['warmup_epochs'] * nb), 100) # number of warmup iterations, max(3 epochs, 100 iterations) - # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training - last_opt_step = -1 - maps = np.zeros(nc) # mAP per class - results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls) - scheduler.last_epoch = start_epoch - 1 # do not move - scaler = torch.cuda.amp.GradScaler(enabled=amp) - stopper, stop = EarlyStopping(patience=opt.patience), False - compute_loss = ComputeLoss(model) # init loss class - callbacks.run('on_train_start') - LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n' - f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n' - f"Logging results to {colorstr('bold', save_dir)}\n" - f'Starting training for {epochs} epochs...') - for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ - callbacks.run('on_train_epoch_start') - model.train() - - # Update image weights (optional, single-GPU only) - if opt.image_weights: - cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights - iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights - dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx - - # Update mosaic border (optional) - # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs) - # dataset.mosaic_border = [b - imgsz, -b] # height, width borders - - mloss = torch.zeros(3, device=device) # mean losses - if RANK != -1: - train_loader.sampler.set_epoch(epoch) - pbar = enumerate(train_loader) - LOGGER.info(('\n' + '%11s' * 7) % ('Epoch', 'GPU_mem', 'box_loss', 'obj_loss', 'cls_loss', 'Instances', 'Size')) - if RANK in {-1, 0}: - pbar = tqdm(pbar, total=nb, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar - optimizer.zero_grad() - for i, (imgs, targets, paths, _) in pbar: # batch ------------------------------------------------------------- - callbacks.run('on_train_batch_start') - ni = i + nb * epoch # number integrated batches (since train start) - imgs = imgs.to(device, non_blocking=True).float() / 255 # uint8 to float32, 0-255 to 0.0-1.0 - - # Warmup - if ni <= 
nw: - xi = [0, nw] # x interp - # compute_loss.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou) - accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round()) - for j, x in enumerate(optimizer.param_groups): - # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 - x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 0 else 0.0, x['initial_lr'] * lf(epoch)]) - if 'momentum' in x: - x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) - - # Multi-scale - if opt.multi_scale: - sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size - sf = sz / max(imgs.shape[2:]) # scale factor - if sf != 1: - ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple) - imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False) - - # Forward - with torch.cuda.amp.autocast(amp): - pred = model(imgs) # forward - loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size - if RANK != -1: - loss *= WORLD_SIZE # gradient averaged between devices in DDP mode - if opt.quad: - loss *= 4. - - # Backward - scaler.scale(loss).backward() - - # Optimize - https://pytorch.org/docs/master/notes/amp_examples.html - if ni - last_opt_step >= accumulate: - scaler.unscale_(optimizer) # unscale gradients - torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients - scaler.step(optimizer) # optimizer.step - scaler.update() - optimizer.zero_grad() - if ema: - ema.update(model) - last_opt_step = ni - - # Log - if RANK in {-1, 0}: - mloss = (mloss * i + loss_items) / (i + 1) # update mean losses - mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB) - pbar.set_description(('%11s' * 2 + '%11.4g' * 5) % - (f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])) - callbacks.run('on_train_batch_end', model, ni, imgs, targets, paths, list(mloss)) - if callbacks.stop_training: - return - # end batch ------------------------------------------------------------------------------------------------ - - # Scheduler - lr = [x['lr'] for x in optimizer.param_groups] # for loggers - scheduler.step() - - if RANK in {-1, 0}: - # mAP - callbacks.run('on_train_epoch_end', epoch=epoch) - ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights']) - final_epoch = (epoch + 1 == epochs) or stopper.possible_stop - if not noval or final_epoch: # Calculate mAP - results, maps, _ = validate.run(data_dict, - batch_size=batch_size // WORLD_SIZE * 2, - imgsz=imgsz, - half=amp, - model=ema.ema, - single_cls=single_cls, - dataloader=val_loader, - save_dir=save_dir, - plots=False, - callbacks=callbacks, - compute_loss=compute_loss) - - # Update best mAP - fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95] - stop = stopper(epoch=epoch, fitness=fi) # early stop check - if fi > best_fitness: - best_fitness = fi - log_vals = list(mloss) + list(results) + lr - callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi) - - # Save model - if (not nosave) or (final_epoch and not evolve): # if save - ckpt = { - 'epoch': epoch, - 'best_fitness': best_fitness, - 'model': deepcopy(de_parallel(model)).half(), - 'ema': deepcopy(ema.ema).half(), - 'updates': ema.updates, - 'optimizer': optimizer.state_dict(), - 'opt': vars(opt), - 'date': datetime.now().isoformat()} - - # Save last, best and delete - torch.save(ckpt, last) - 
if best_fitness == fi: - torch.save(ckpt, best) - if opt.save_period > 0 and epoch % opt.save_period == 0: - torch.save(ckpt, w / f'epoch{epoch}.pt') - del ckpt - callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi) - - # EarlyStopping - if RANK != -1: # if DDP training - broadcast_list = [stop if RANK == 0 else None] - dist.broadcast_object_list(broadcast_list, 0) # broadcast 'stop' to all ranks - if RANK != 0: - stop = broadcast_list[0] - if stop: - break # must break all DDP ranks - - # end epoch ---------------------------------------------------------------------------------------------------- - # end training ----------------------------------------------------------------------------------------------------- - if RANK in {-1, 0}: - LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.') - for f in last, best: - if f.exists(): - strip_optimizer(f) # strip optimizers - if f is best: - LOGGER.info(f'\nValidating {f}...') - results, _, _ = validate.run( - data_dict, - batch_size=batch_size // WORLD_SIZE * 2, - imgsz=imgsz, - model=attempt_load(f, device).half(), - iou_thres=0.65 if is_coco else 0.60, # best pycocotools at iou 0.65 - single_cls=single_cls, - dataloader=val_loader, - save_dir=save_dir, - save_json=is_coco, - verbose=True, - plots=plots, - callbacks=callbacks, - compute_loss=compute_loss) # val best model with plots - if is_coco: - callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi) - - callbacks.run('on_train_end', last, best, epoch, results) - - torch.cuda.empty_cache() - return results - - -def parse_opt(known=False): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path') - parser.add_argument('--cfg', type=str, default='', help='model.yaml path') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path') - parser.add_argument('--epochs', type=int, default=100, help='total training epochs') - parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)') - parser.add_argument('--rect', action='store_true', help='rectangular training') - parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') - parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') - parser.add_argument('--noval', action='store_true', help='only validate final epoch') - parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor') - parser.add_argument('--noplots', action='store_true', help='save no plot files') - parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations') - parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') - parser.add_argument('--cache', type=str, nargs='?', const='ram', help='image --cache ram/disk') - parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') - parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') - parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer') - parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') - parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--quad', action='store_true', help='quad dataloader') - parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler') - parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') - parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)') - parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2') - parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)') - parser.add_argument('--seed', type=int, default=0, help='Global training seed') - parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify') - - # Logger arguments - parser.add_argument('--entity', default=None, help='Entity') - parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='Upload data, "val" option') - parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval') - parser.add_argument('--artifact_alias', type=str, default='latest', help='Version of dataset artifact to use') - - return parser.parse_known_args()[0] if known else parser.parse_args() - - -def main(opt, callbacks=Callbacks()): - # Checks - if RANK in {-1, 0}: - print_args(vars(opt)) - check_git_status() - check_requirements() - - # Resume (from specified or most recent last.pt) - if opt.resume and not check_comet_resume(opt) and not opt.evolve: - last = Path(check_file(opt.resume) if isinstance(opt.resume, str) else get_latest_run()) - opt_yaml = last.parent.parent / 'opt.yaml' # train options yaml - opt_data = opt.data # original dataset - if opt_yaml.is_file(): - with open(opt_yaml, errors='ignore') as f: - d = yaml.safe_load(f) - else: - d = torch.load(last, map_location='cpu')['opt'] - opt = argparse.Namespace(**d) # replace - opt.cfg, opt.weights, opt.resume = '', str(last), True # reinstate - if is_url(opt_data): - opt.data = check_file(opt_data) # avoid HUB resume auth timeout - else: - opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \ - check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks - assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' - if opt.evolve: - if opt.project == str(ROOT / 'runs/train'): # if default project name, rename to runs/evolve - opt.project = str(ROOT / 'runs/evolve') - opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume - if opt.name == 'cfg': - opt.name = Path(opt.cfg).stem # use model.yaml as name - 
opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) - - # DDP mode - device = select_device(opt.device, batch_size=opt.batch_size) - if LOCAL_RANK != -1: - msg = 'is not compatible with YOLOv5 Multi-GPU DDP training' - assert not opt.image_weights, f'--image-weights {msg}' - assert not opt.evolve, f'--evolve {msg}' - assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size' - assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE' - assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command' - torch.cuda.set_device(LOCAL_RANK) - device = torch.device('cuda', LOCAL_RANK) - dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo") - - # Train - if not opt.evolve: - train(opt.hyp, opt, device, callbacks) - - # Evolve hyperparameters (optional) - else: - # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) - meta = { - 'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) - 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) - 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 - 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay - 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) - 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum - 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr - 'box': (1, 0.02, 0.2), # box loss gain - 'cls': (1, 0.2, 4.0), # cls loss gain - 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight - 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) - 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight - 'iou_t': (0, 0.1, 0.7), # IoU training threshold - 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold - 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) - 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) - 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) - 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) - 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) - 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) - 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) - 'scale': (1, 0.0, 0.9), # image scale (+/- gain) - 'shear': (1, 0.0, 10.0), # image shear (+/- deg) - 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 - 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) - 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) - 'mosaic': (1, 0.0, 1.0), # image mixup (probability) - 'mixup': (1, 0.0, 1.0), # image mixup (probability) - 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability) - - with open(opt.hyp, errors='ignore') as f: - hyp = yaml.safe_load(f) # load hyps dict - if 'anchors' not in hyp: # anchors commented in hyp.yaml - hyp['anchors'] = 3 - if opt.noautoanchor: - del hyp['anchors'], meta['anchors'] - opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch - # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices - evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv' - if opt.bucket: - os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}') # download evolve.csv if exists - - for _ in range(opt.evolve): # generations to evolve - if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate 
- # Select parent(s) - parent = 'single' # parent selection method: 'single' or 'weighted' - x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1) - n = min(5, len(x)) # number of previous results to consider - x = x[np.argsort(-fitness(x))][:n] # top n mutations - w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0) - if parent == 'single' or len(x) == 1: - # x = x[random.randint(0, n - 1)] # random selection - x = x[random.choices(range(n), weights=w)[0]] # weighted selection - elif parent == 'weighted': - x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination - - # Mutate - mp, s = 0.8, 0.2 # mutation probability, sigma - npr = np.random - npr.seed(int(time.time())) - g = np.array([meta[k][0] for k in hyp.keys()]) # gains 0-1 - ng = len(meta) - v = np.ones(ng) - while all(v == 1): # mutate until a change occurs (prevent duplicates) - v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) - for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300) - hyp[k] = float(x[i + 7] * v[i]) # mutate - - # Constrain to limits - for k, v in meta.items(): - hyp[k] = max(hyp[k], v[1]) # lower limit - hyp[k] = min(hyp[k], v[2]) # upper limit - hyp[k] = round(hyp[k], 5) # significant digits - - # Train mutation - results = train(hyp.copy(), opt, device, callbacks) - callbacks = Callbacks() - # Write mutation results - keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 'val/box_loss', - 'val/obj_loss', 'val/cls_loss') - print_mutation(keys, results, hyp.copy(), save_dir, opt.bucket) - - # Plot results - plot_evolve(evolve_csv) - LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n' - f"Results saved to {colorstr('bold', save_dir)}\n" - f'Usage example: $ python train.py --hyp {evolve_yaml}') - - -def run(**kwargs): - # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt') - opt = parse_opt(True) - for k, v in kwargs.items(): - setattr(opt, k, v) - main(opt) - return opt - - -if __name__ == "__main__": - opt = parse_opt() - main(opt) diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_write_teaching_plan.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_write_teaching_plan.py deleted file mode 100644 index 6754fe88c442b325ab177217409b6ccc839efb4f..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_write_teaching_plan.py +++ /dev/null @@ -1,67 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/7/28 17:25 -@Author : mashenquan -@File : test_write_teaching_plan.py -""" - -import asyncio -from typing import Optional -from pydantic import BaseModel -from langchain.llms.base import LLM - -from metagpt.actions.write_teaching_plan import WriteTeachingPlanPart -from metagpt.config import Config -from metagpt.schema import Message - - -class MockWriteTeachingPlanPart(WriteTeachingPlanPart): - def __init__(self, options, name: str = '', context=None, llm: LLM = None, topic="", language="Chinese"): - super().__init__(options, name, context, llm, topic, language) - - async def _aask(self, prompt: str, system_msgs: Optional[list[str]] = None) -> str: - return f"{WriteTeachingPlanPart.DATA_BEGIN_TAG}\nprompt\n{WriteTeachingPlanPart.DATA_END_TAG}" - - -async def mock_write_teaching_plan_part(): - class Inputs(BaseModel): - input: str - name: str - topic: str - language: str - - inputs = [ - { - "input": "AABBCC", - "name": "A", - "topic": WriteTeachingPlanPart.COURSE_TITLE, - 
"language": "C" - }, - { - "input": "DDEEFFF", - "name": "A1", - "topic": "B1", - "language": "C1" - } - ] - - for i in inputs: - seed = Inputs(**i) - options = Config().runtime_options - act = MockWriteTeachingPlanPart(options=options, name=seed.name, topic=seed.topic, language=seed.language) - await act.run([Message(content="")]) - assert act.topic == seed.topic - assert str(act) == seed.topic - assert act.name == seed.name - assert act.rsp == "# prompt" if seed.topic == WriteTeachingPlanPart.COURSE_TITLE else "prompt" - - -def test_suite(): - loop = asyncio.get_event_loop() - task = loop.create_task(mock_write_teaching_plan_part()) - loop.run_until_complete(task) - - -if __name__ == '__main__': - test_suite() diff --git a/spaces/sub314xxl/zeroscope-XL/README.md b/spaces/sub314xxl/zeroscope-XL/README.md deleted file mode 100644 index 68ac75ed799c3f95f4fe020d4a35393df2e29b6f..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/zeroscope-XL/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Zeroscope XL -emoji: 🐡 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -duplicated_from: fffiloni/zeroscope-XL ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/subhc/Guess-What-Moves/mask_former/data/__init__.py b/spaces/subhc/Guess-What-Moves/mask_former/data/__init__.py deleted file mode 100644 index 63ba265b1effc69f1eef16e57a04db8902ee347e..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/mask_former/data/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import datasets diff --git a/spaces/subhc/Guess-What-Moves/mask_former/modeling/heads/__init__.py b/spaces/subhc/Guess-What-Moves/mask_former/modeling/heads/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/mask_former/modeling/heads/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. diff --git a/spaces/suddu21/garbage-classification/app.py b/spaces/suddu21/garbage-classification/app.py deleted file mode 100644 index b1ea7df080974dcfd745260944806caf5f0a8a75..0000000000000000000000000000000000000000 --- a/spaces/suddu21/garbage-classification/app.py +++ /dev/null @@ -1,23 +0,0 @@ -from tensorflow.keras.models import load_model -import gradio as gr -from PIL import Image -import numpy as np - -model = load_model('DS11Sudhanva.h5') - -classnames = ['cardboard', 'metal','paper','plastic','trash','green-glass','white-glass','brown-glass','clothes','biological','battery','shoes'] - -def predict(img): - img=img.reshape(-1,298,384,3) - """images_list = [] - images_list.append(np.array(img)) - x = np.asarray(images_list)""" - prediction = model.predict(img)[0] - return {classnames[i]: float(prediction[i]) for i in range(len(classnames))} - -image = gr.inputs.Image(shape=(298, 384)) -label = gr.outputs.Label(num_top_classes=3) - -gr.Interface(fn=predict, inputs=image, title="Garbage Classifier", - description="This is a Garbage Classification Model Trained using Dataset 11 by Sud.Deployed to Hugging Faces using Gradio.",outputs=label,interpretation='default').launch(debug=True,enable_queue=True) - diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro Cc 2015 !!BETTER!! 
Crack Dll.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro Cc 2015 !!BETTER!! Crack Dll.md deleted file mode 100644 index 09b3c584d2db3b27148da5566f65f40c2a5ba7ee..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro Cc 2015 !!BETTER!! Crack Dll.md +++ /dev/null @@ -1,6 +0,0 @@ -

          adobe premiere pro cc 2015 crack dll


          Download File >> https://cinurl.com/2uEX4p



        We're always adding new features to Illustrator so you can create with precision and control. And with a Creative Cloud subscription, you'll get them as soon as possible. Linked tutorials: how to get started in Illustrator (a quick tour); how to add text in Illustrator; how to scale text in Illustrator; how to replace text with a vector image in Illustrator; how to create a logo in Illustrator; how to change text color in Illustrator; how to make a font bold in Illustrator; how to cut a fragment in Illustrator.

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Oggyandthecockroacheshindiepisodesdownloadfree.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Oggyandthecockroacheshindiepisodesdownloadfree.md deleted file mode 100644 index f55ea46e57def0dd7757586125bb21ad6f21d2b6..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Oggyandthecockroacheshindiepisodesdownloadfree.md +++ /dev/null @@ -1,43 +0,0 @@ -

          Oggy and the Cockroaches Hindi Episodes Download Free


          Oggy and the Cockroaches is a popular animated comedy series that features the adventures of a lazy cat named Oggy and three pesky cockroaches named Joey, Dee Dee, and Marky. The series is full of hilarious gags, slapstick humor, and crazy situations that will make you laugh out loud. If you are a fan of Oggy and the Cockroaches and want to watch or download the episodes in Hindi for free, then you are in the right place. In this article, we will tell you how to get Oggy and the Cockroaches Hindi episodes download free from various sources online.



        Download: https://cinurl.com/2uEYD1




          How to get Oggy and the Cockroaches Hindi episodes download free from PureToons.Com?


          PureToons.Com is a website that offers a huge collection of cartoons and anime in Hindi dubbed for free download. You can find all seasons and episodes of Oggy and the Cockroaches in Hindi dubbed on this website. Here are the steps to get Oggy and the Cockroaches Hindi episodes download free from PureToons.Com:

        1. Go to https://puretoons.cc/oggy-and-the-cockroaches-hindi-episodes/
        2. Select the season and episode that you want to download.
        3. Click on the Mega Drive or Google Drive link to start the download.
        4. Enjoy watching Oggy and the Cockroaches in Hindi on your device.

          How to get Oggy and the Cockroaches Hindi episodes download free from Dead Toons India?


          Dead Toons India is another website that provides cartoons and anime in Hindi dubbed for free download. You can also find all seasons and episodes of Oggy and the Cockroaches in Hindi dubbed on this website. Here are the steps to get Oggy and the Cockroaches Hindi episodes download free from Dead Toons India:

        1. Go to https://www.deadtoons.co/oggy-and-the-cockroaches-season-1-8-all-episode-hindi-dubbed-download-576p-720p-hd/
        2. Select the season and episode that you want to download.
        3. Click on the Mega Drive or Google Drive link to start the download.
        4. Enjoy watching Oggy and the Cockroaches in Hindi on your device.

          How to get Oggy and the Cockroaches Hindi episodes download free from Archive.org?


          Archive.org is a website that hosts millions of free books, movies, music, software, and more. You can also find some episodes of Oggy and the Cockroaches in Hindi dubbed on this website. Here are the steps to get Oggy and the Cockroaches Hindi episodes download free from Archive.org:

        1. Go to https://archive.org/details/oggy-and-the-cockroaches-bhoot-episode-nick-dubbing-viacom18.html
        2. Click on the Download Options button on the right side of the page.
        3. Select the format that you want to download, such as MP4 or OGG Video.
        4. Click on the Download button to start the download.
        5. Enjoy watching Oggy and the Cockroaches in Hindi on your device.

          How to get Oggy and the Cockroaches Hindi episodes download free from YouTube?


          YouTube is a video-sharing platform that hosts millions of videos of various genres and languages. You can also find some episodes of Oggy and the Cockroaches in Hindi dubbed on YouTube. However, you cannot directly download videos from YouTube without using a third-party tool or app. Here are some ways to get Oggy and the Cockroaches Hindi episodes download free from YouTube:

        • Use an online video downloader website, such as y2mate.com or savefrom.net: paste the URL of the YouTube video that you want to download and choose the format and quality that you want.
        • Use a browser extension or add-on, such as Video DownloadHelper or YouTube Video Downloader, to enable a download button on YouTube videos that you can click to save them on your device.
        • Use desktop software, such as 4K Video Downloader or Freemake Video Downloader: copy and paste the URL of the YouTube videos that you want to download and choose the format and quality that you want.

          Conclusion


          Oggy and the Cockroaches is a fun and entertaining series that you can watch or download in Hindi for free from various sources online. You can use websites like PureToons.Com or Dead Toons India to get all seasons and episodes of Oggy and the Cockroaches in Hindi dubbed for free download. You can also use websites like Archive.org or YouTube to get some episodes of Oggy and the Cockroaches in Hindi dubbed for free download. However, you may need to use a third-party tool or app to download videos from YouTube. We hope this article helped you get Oggy and the Cockroaches Hindi episodes download free easily.


          \ No newline at end of file diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Artsoft Mach3 Crack ((EXCLUSIVE))rar.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Artsoft Mach3 Crack ((EXCLUSIVE))rar.md deleted file mode 100644 index f2cb6ec0292a93a70598908127a15ba34a6cfd60..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Artsoft Mach3 Crack ((EXCLUSIVE))rar.md +++ /dev/null @@ -1,54 +0,0 @@ -

          Artsoft Mach3 CrAcKrar


          Download Zip ✫✫✫ https://urluss.com/2uCHmt



        > A powerful learning computer
        > High-level control language
        > Customizable
        > Multi-axis
        > Industrial software

        Mach3 is a powerful programming language that gives you a high-level view of the control aspects of CNC milling machines. For example, you have a list of Tool Paths where you can specify the cutting geometry for each operation. You can also specify the tool profiles and tool options. The graphic interface provides many user-friendly features to speed up the programming process.

        [Figure: Mach3 graphic interface.]

        The user-friendly interface of Mach3 is shown in Figure 4.

        [Figure: Mach3 Control Panel.]

        The interface provides many features that make Mach3 a very user-friendly tool for automation. The first tool that you are asked to use is the *Mach3 Scripting Editor*.

        2.3.1. Mach3 Scripting Editor
        -----------------------------

        Mach3 has a scripting language to run programs and make custom instructions. It has a graphical interface with a textbox and a button to generate the script.

        The first step to programming your own machining operations with Mach3 is to select the area where you want to execute the script from the program list.

        Once the script has been generated, it can be saved in the machine area, and then it can be viewed in the control panel.

        [Figure: Mach3 Scripting Editor.]

        The program is called from the system when you select the object.

        2.3.2. Math Operations
        ----------------------

        Mach3 has math operations for all the common types of math, such as standard math, trigonometry, spatial math, and statistical analysis.

        As an example, the Standard math function is used to calculate the distance between two 2D coordinates.

        [Figure: Standard Math Function.]

        2.3.3. Programming Components

        Mach

          diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Assassins Creed Syndicate - Jack The Ripper Ativador Download [TOP Crack Serial Key.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Assassins Creed Syndicate - Jack The Ripper Ativador Download [TOP Crack Serial Key.md deleted file mode 100644 index 8e7689cb70a3bbcf92c25a7e7a51c3ca4a768182..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Assassins Creed Syndicate - Jack The Ripper Ativador Download [TOP Crack Serial Key.md +++ /dev/null @@ -1,6 +0,0 @@ -



          Assassin's Creed Syndicate - Jack The Ripper Ativador download [Crack Serial Key


          Download File ⇒⇒⇒ https://urluss.com/2uCFV8





          \ No newline at end of file diff --git a/spaces/t13718236382/web-ui/_next/static/chunks/framework-43665103d101a22d.js b/spaces/t13718236382/web-ui/_next/static/chunks/framework-43665103d101a22d.js deleted file mode 100644 index ef9e52f3ad47f2c60e0236bb728b3b7d602ebe5a..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/web-ui/_next/static/chunks/framework-43665103d101a22d.js +++ /dev/null @@ -1,25 +0,0 @@ -"use strict";(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[774],{64448:function(e,n,t){/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var r,l,a,u,o,i,s=t(67294),c=t(63840);function f(e){for(var n="https://reactjs.org/docs/error-decoder.html?invariant="+e,t=1;t