diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ccie Torrent Achieve Your CCIE Goals with Expert Guidance and Support.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ccie Torrent Achieve Your CCIE Goals with Expert Guidance and Support.md deleted file mode 100644 index 4f046ae960c825d64cbc4ec65ae9d9562aee5c71..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ccie Torrent Achieve Your CCIE Goals with Expert Guidance and Support.md +++ /dev/null @@ -1,149 +0,0 @@ -
-

What is Ccie Torrent?

-

If you are looking for a way to prepare for the Cisco Certified Internetwork Expert (CCIE) certification exams, you might have heard of Ccie Torrent. But what is it exactly and how can it help you?

-

Ccie Torrent is a collection of video courses, lab exercises, practice tests, and other resources that are designed to help you master the skills and knowledge required for the CCIE exams. These exams are among the most challenging and prestigious in the IT industry, covering topics such as routing and switching, service provider, security, wireless, data center, collaboration, and enterprise infrastructure.

-

Ccie Torrent


DOWNLOAD 🔗 https://byltly.com/2uKyVk



-

By downloading Ccie Torrent, you can access high-quality and up-to-date materials that are created by experts and instructors who have years of experience in teaching and working with Cisco technologies. You can learn at your own pace, review the concepts as many times as you need, and practice with realistic scenarios that simulate the real exam environment. You can also save money and time by avoiding expensive and inconvenient classroom training.

-

In this article, we will show you how to download Ccie Torrent, what the best sources for it are, and how to use it effectively. By following these steps, you will be able to boost your confidence and readiness for the CCIE exams.

-

How to download Ccie Torrent?

-

Before you can download Ccie Torrent, you need to keep a few requirements and precautions in mind. Here are some tips to ensure a smooth and safe downloading process:

- -

What are the best sources for Ccie Torrent?

-

There are many websites that offer Ccie Torrent, but not all of them are equally good. Some of them might have outdated or incomplete content, poor video quality, broken links, or few seeders (users who share files). To help you find the best sources for Ccie Torrent, we have compiled a list of four reliable and popular websites that offer high-quality content:

-

SolidTorrents

-

SolidTorrents is a torrent search engine that indexes millions of torrents from various sources. It has a simple and user-friendly interface that allows you to filter your results by category, size, date, seeders, leechers (users who download files), etc. You can also sort your results by relevance or popularity.

-

One of the advantages of SolidTorrents is that it offers a lot of content related to CCIE Service Provider certification. You can find video courses, lab exercises, practice tests, diagrams, PDFs, etc., that cover topics such as MPLS VPNs (Virtual Private Networks), L2VPN (Layer 2 VPN), MPLS TE (Traffic Engineering), QoS (Quality of Service), Multicast VPNs (MVPN), etc.

-

ccie security v6 torrent
-ccie enterprise infrastructure torrent
-ccie data center torrent download
-ccie routing and switching torrent
-ccie collaboration torrent
-ccie service provider torrent
-ccie wireless torrent
-ccie lab exam torrent
-ccie security written exam torrent
-ccie enterprise wireless torrent
-ccie data center workbook torrent
-ccie routing and switching workbook torrent
-ccie collaboration workbook torrent
-ccie service provider workbook torrent
-ccie wireless workbook torrent
-ccie security lab dumps torrent
-ccie enterprise infrastructure lab dumps torrent
-ccie data center lab dumps torrent
-ccie routing and switching lab dumps torrent
-ccie collaboration lab dumps torrent
-ccie service provider lab dumps torrent
-ccie wireless lab dumps torrent
-ccie security video course torrent
-ccie enterprise infrastructure video course torrent
-ccie data center video course torrent
-ccie routing and switching video course torrent
-ccie collaboration video course torrent
-ccie service provider video course torrent
-ccie wireless video course torrent
-ccie security study guide torrent
-ccie enterprise infrastructure study guide torrent
-ccie data center study guide torrent
-ccie routing and switching study guide torrent
-ccie collaboration study guide torrent
-ccie service provider study guide torrent
-ccie wireless study guide torrent
-ccie security practice test torrent
-ccie enterprise infrastructure practice test torrent
-ccie data center practice test torrent
-ccie routing and switching practice test torrent
-ccie collaboration practice test torrent
-ccie service provider practice test torrent
-ccie wireless practice test torrent
-best site for ccie torrents
-how to download ccie torrents safely
-how to prepare for ccie exams with torrents
-how to pass ccie exams with torrents
-how to get free access to ccie torrents
-how to avoid fake or outdated ccie torrents
-how to find latest and updated ccie torrents

-

To download Ccie Torrent from SolidTorrents:

-
    -
  1. Go to https://solidtorrents.to/.
  2. Type "Ccie" in the search box and press Enter.
  3. Browse through the results and select the one that matches your needs.
  4. Click on the "Download" button next to the result.
  5. Open the downloaded torrent file with your torrent client.
  6. Wait for the download to complete.
-

UPW.IO

-

UPW.IO is a torrent hosting service that allows users to upload and download torrents without registration or login. It has a minimalist and modern design that makes it easy to use. It also has a fast and secure server that ensures a smooth downloading experience.

-

GitHub

-

GitHub is a platform that hosts and manages software development projects using Git, a version control system. It allows users to collaborate, share, and review code, as well as host and distribute software applications. It has a large and active community of developers and users who contribute to various open-source and proprietary projects.

-

One of the advantages of GitHub is that it offers a lot of content related to CCIE Data Center certification. You can find repositories that contain scripts, tools, templates, guides, books, etc., that cover topics such as Cisco ACI (Application Centric Infrastructure), Cisco UCS (Unified Computing System), Cisco Nexus switches, Cisco MDS (Multilayer Director Switches), storage networking, network virtualization, etc.

-

To download Ccie Torrent from GitHub:

-
    -
  1. Go to https://github.com/.
  2. Type "Ccie" in the search box and press Enter.
  3. Browse through the results and select the one that matches your needs.
  4. Click on the "Code" button on the top right corner of the repository.
  5. Select "Download ZIP" from the dropdown menu.
  6. Extract the downloaded ZIP file to your desired location.
-
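As an alternative to clicking through the web interface, a repository's archive can also be fetched with a short script. The following is a minimal Python sketch; the repository owner, name, and branch are placeholders, and it assumes the default branch is called main:

```python
import urllib.request
import zipfile

# Placeholder repository -- substitute the owner and name of the repo you found
owner, repo, branch = "example-user", "ccie-study-notes", "main"
archive_url = f"https://github.com/{owner}/{repo}/archive/refs/heads/{branch}.zip"

# Download the archive and unpack it next to the script
local_zip = f"{repo}.zip"
urllib.request.urlretrieve(archive_url, local_zip)
with zipfile.ZipFile(local_zip) as archive:
    archive.extractall(repo)
print(f"Repository extracted into ./{repo}/")
```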

Kbits.Live

-

Kbits.Live is a website that offers online training courses for Cisco certifications. It has a team of experienced and certified instructors who teach live classes via Zoom. It also provides recorded videos, lab exercises, practice tests, study materials, and support forums for students.

-

One of the advantages of Kbits.Live is that it offers a lot of content related to CCIE Routing and Switching certification. You can find courses that cover topics such as IP routing (RIP (Routing Information Protocol), EIGRP, OSPF, BGP), IP services (DHCP, NAT, NTP, etc.), IP multicast (IGMP (Internet Group Management Protocol), PIM (Protocol Independent Multicast), etc.), switching technologies (VLANs, STP, EtherChannel, etc.), network security (ACLs, VPNs, IPSec, etc.), network troubleshooting (ping (Packet Internet Groper), traceroute (Trace Route), debug (Debugging), etc.), etc.

-

To download Ccie Torrent from Kbits.Live:

-
    -
  1. Go to https://kbits.live/.
  2. Select the course that matches your needs.
  3. Click on the "Buy Now" button and complete the payment process.
  4. Access the course dashboard and download the videos and materials.
-

How to use Ccie Torrent?

-

After you have downloaded Ccie Torrent from one or more sources, you need to know how to use it effectively. Here are some steps to guide you:

-

How to install Ccie Torrent?

-

The installation process of Ccie Torrent depends on the type and format of the files you have downloaded. Some files might be ready to use without any installation, while others might require some additional steps. Here are some general instructions on how to install Ccie Torrent on different operating systems:

-

Windows

-

If you have downloaded video files in MP4 or AVI format, you can play them with any media player such as VLC or Windows Media Player. If you have downloaded PDF files or other documents, you can open them with any PDF reader such as Adobe Acrobat or Microsoft Edge. If you have downloaded ZIP or RAR files, you need to extract them with any compression software such as WinRAR or 7-Zip. If you have downloaded ISO files or other disk images, you need to mount them with any virtual drive software such as Daemon Tools or PowerISO. If you have downloaded EXE files or other executable files, you need to run them with administrator privileges and follow the installation wizard.

-

Linux

-

If you have downloaded video files in MP4 or AVI format, you can play them with any media player such as VLC or MPlayer. If you have downloaded PDF files or other documents, you can open them with any PDF reader such as Evince or Okular. If you have downloaded ZIP or RAR files, you need to extract them with any compression software such as unzip or unrar. If you have downloaded ISO files or other disk images, you need to mount them with any virtual drive software such as Furius ISO Mount or AcetoneISO. If you have downloaded BIN files or other executable files, you need to make them executable with chmod +x command and run them with sudo command.
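If you prefer to script these steps, here is a minimal Python sketch (the file names are placeholders, not actual course files) that unpacks a downloaded ZIP archive and marks a file as executable, mirroring the unzip and chmod +x commands mentioned above:

```python
import os
import stat
import zipfile

# Hypothetical paths -- replace with the files you actually downloaded
archive_path = "ccie-lab-materials.zip"
extract_dir = "ccie-lab-materials"
installer_path = os.path.join(extract_dir, "setup.bin")

# Unpack the archive (equivalent to running `unzip`)
with zipfile.ZipFile(archive_path) as archive:
    archive.extractall(extract_dir)

# Mark the installer as executable (equivalent to `chmod +x`)
current_mode = os.stat(installer_path).st_mode
os.chmod(installer_path, current_mode | stat.S_IXUSR)
print(f"Extracted to {extract_dir} and made {installer_path} executable")
```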

-

Mac

-

If you have downloaded video files in MP4 or AVI format, you can play them with any media player such as VLC or QuickTime Player. If you have downloaded PDF files or other documents, you can open them with any PDF reader such as Preview or Adobe Acrobat. If you have downloaded ZIP or RAR files, you need to extract them with any compression software such as The Unarchiver or Keka. If you have downloaded ISO files or other disk images, you need to mount them with any virtual drive software such as DAEMON Tools Lite or VirtualBox. If you have downloaded DMG files or other installer files, you need to double-click them and follow the installation wizard.

-

How to configure Ccie Torrent?

-

The configuration process of Ccie Torrent depends on the type and format of the files you have downloaded. Some files might be ready to use without any configuration, while others might require some additional steps. Here are some general instructions on how to configure Ccie Torrent for optimal performance and security:

-

General settings

-

Some of the general settings that you might want to adjust are:

- -

Advanced settings

-

Some of the advanced settings that you might want to adjust are:

- -

Conclusion

-

Ccie Torrent is a great way to prepare for the CCIE certification exams. It offers a lot of high-quality and up-to-date content that covers all the topics and skills required for the exams. By downloading Ccie Torrent from reliable and verified sources, you can access video courses, lab exercises, practice tests, and other resources that will help you master Cisco technologies and boost your confidence and readiness for the exams.

-

If you want to download Ccie Torrent, you need to have a torrent client installed on your computer, enough disk space and internet bandwidth, and a careful selection of sources. You also need to know how to install and configure Ccie Torrent for optimal performance and security. By following these steps, you will be able to enjoy a smooth and safe downloading experience.

-

We hope this article has helped you understand what Ccie Torrent is, how to download it, where to find the best sources for it, and how to use it effectively. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy learning!

-

Frequently Asked Questions

-

Here are some of the most common questions that people ask about Ccie Torrent:

-

What is CCIE?

-

CCIE stands for Cisco Certified Internetwork Expert. It is a certification program that validates the skills and knowledge of network engineers who can plan, design, implement, operate, troubleshoot, and optimize complex network infrastructures using Cisco technologies.

-

How many CCIE certifications are there?

-

There are eight CCIE certifications available: Routing and Switching, Service Provider, Security, Wireless, Data Center, Collaboration, Enterprise Infrastructure, and Enterprise Wireless.

-

How do I get CCIE certified?

-

To get CCIE certified, you need to pass two exams: a written exam that tests your theoretical knowledge of network concepts and technologies, and a lab exam that tests your practical skills in configuring and troubleshooting network scenarios using real equipment.

-

How much does CCIE certification cost?

-

The cost of CCIE certification varies depending on the exam location and currency. The written exam costs $450 USD per attempt, while the lab exam costs $1600 USD per attempt.

-

How long does CCIE certification last?

-

CCIE certification lasts for three years from the date of passing the lab exam. To maintain your certification status, you need to recertify before the expiration date by passing any current CCIE written exam or lab exam.

-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download MS Project 2019 for Free The Best Project Management Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download MS Project 2019 for Free The Best Project Management Software.md deleted file mode 100644 index 93606695e2db332a9edd9d1eb923c14287785e51..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download MS Project 2019 for Free The Best Project Management Software.md +++ /dev/null @@ -1,31 +0,0 @@ -
-

How to Get MS Project 2019 for Free

-

MS Project 2019 is powerful project management software that helps you plan, track, and manage your projects effectively. Whether you are working on a small or large-scale project, MS Project 2019 can help you organize your tasks, resources, costs, and deadlines. However, MS Project 2019 is not cheap software, as it costs $620 for the standard version and $1,030 for the professional version. If you are looking for a way to get MS Project 2019 for free, you might be interested in this article. We will show you some of the possible methods to download and install MS Project 2019 for free on your Windows 10 device.

-

free download ms project 2019


Download File ————— https://byltly.com/2uKuYZ



-

Method 1: Use the Microsoft Support Website

-

One of the easiest and safest ways to get MS Project 2019 for free is to use the Microsoft Support website. This website provides you with the official download links and installation instructions for MS Project 2019. However, you will need a valid product key or a subscription license to activate the software. If you have purchased MS Project 2019 from a retail store or online, you should have received a product key with your purchase. If you have subscribed to one of the cloud-based solutions of MS Project, such as Project Online Professional or Project Online Premium, you should have an assigned license from your Microsoft 365 admin.

-

To use this method, follow these steps:

-
    -
  1. Go to https://support.microsoft.com/en-us/office/install-project-7059249b-d9fe-4d61-ab96-5c5bf435f281 and sign in with your Microsoft account or your work or school account.
  2. Select your version of MS Project 2019 from the list of products.
  3. Follow the instructions on the website to download and install MS Project 2019 on your device.
  4. Enter your product key or sign in with your subscription account to activate MS Project 2019.
-

Method 2: Use a Third-Party Website

-

Another way to get MS Project 2019 for free is to use a third-party website that offers free or discounted downloads of the software. However, this method is not recommended, as it may involve some risks and drawbacks. Some of these websites may contain malware or viruses that can harm your device or compromise your personal information. Some of these websites may also provide illegal or pirated copies of MS Project 2019 that can result in legal consequences. Moreover, some of these websites may not offer the latest version or updates of MS Project 2019, which can affect its performance and functionality.

-

If you still want to try this method, here are some examples of third-party websites that claim to offer free downloads of MS Project 2019:

- -

To use this method, follow these steps:

-

-
    -
  1. Choose a third-party website that offers free downloads of MS Project 2019 and visit its link.
  2. Read the terms and conditions and the user reviews of the website carefully before proceeding.
  3. Click on the download button or link and save the file on your device.
  4. Run the file and follow the installation wizard to install MS Project 2019 on your device.
  5. If prompted, enter a product key or crack code to activate MS Project 2019.

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Arthdal Chronicles Sub Indo Drama Korea Penuh Misteri dan Legenda - Drakorindo.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Arthdal Chronicles Sub Indo Drama Korea Penuh Misteri dan Legenda - Drakorindo.md deleted file mode 100644 index ba37d635ef2c780698659fbbf93d8713e973a1e5..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Arthdal Chronicles Sub Indo Drama Korea Penuh Misteri dan Legenda - Drakorindo.md +++ /dev/null @@ -1,111 +0,0 @@ - -

    How to Download Drama Arthdal Chronicles Sub Indo Drakorindo

    -

    Are you a fan of Korean dramas? Do you love historical fantasy genres? Do you want to watch one of the most epic and ambitious Korean dramas ever made? If you answered yes to any of these questions, then you should definitely check out Arthdal Chronicles, a 2019 drama that tells the story of a mythical land called Arth and its inhabitants who vie for power and survival. In this article, we will tell you everything you need to know about Arthdal Chronicles, why you should watch it with subtitles in Indonesian, and how to download it from Drakorindo, one of the best sources for Korean drama downloads.

    -

    What is Arthdal Chronicles?

    -

    Arthdal Chronicles is a Korean drama that aired on tvN from June 1 to September 22, 2019, for 18 episodes. It was also streamed internationally on Netflix. It is regarded as the first Korean ancient fantasy drama, as it takes place during the Bronze Age and is loosely based on the story of Dangun, the founder of the first Korean kingdom of Gojoseon. It was written by Kim Young-hyun and Park Sang-yeon, who also wrote the acclaimed historical dramas Six Flying Dragons and Tree With Deep Roots, and directed by Kim Won-seok, who also helmed Signal and Misaeng. It starred Jang Dong-gun, Song Joong-ki, Kim Ji-won, and Kim Ok-vin in the main roles.

    -

    download drama arthdal chronicles sub indo drakorindo


    Download Zip » https://urlin.us/2uT0lV



    -

    Synopsis

    -

    The drama follows the lives and struggles of the people of Arthdal, an ancient city that is the center of civilization and power in Arth. There are four main factions in Arthdal: the Saram, who are the dominant human race; the Neanthal, who are a blue-eyed race with superhuman abilities; the Igutu, who are half-human and half-Neanthal; and the Wahan, who are a peaceful tribe that lives in harmony with nature. The drama focuses on three main characters: Ta-gon, a charismatic and ruthless warrior who aims to become the king of Arthdal; Eun-seom, a brave and innocent Igutu who is destined to bring change to Arth; and Tan-ya, a clever and spirited Wahan girl who becomes the high priestess of Arth.

    -

    Cast and characters

    -

    The drama boasts an impressive cast of actors who bring their characters to life with their skills and charisma. Here are some of the main cast members and their roles:

    - -

    Other notable cast members include Park Hae-joon, Park Byung-eun, Cho Seong-ha, Choi Moo-sung, Lee Do-kyung, Kim Eui-sung, Park Hyoung-soo, Shin Joo-hwan, and Yoo Teo.

    -

    Production and reception

    -

    Arthdal Chronicles was one of the most expensive and ambitious Korean dramas ever produced, with a budget of over 54 billion won (about 46 million USD). It was filmed in various locations in South Korea and Brunei, and used extensive computer graphics and visual effects to create the mythical world of Arth. It also featured a diverse and talented crew of writers, directors, cinematographers, composers, costume designers, and makeup artists who worked together to bring the drama to life.

    -

    The drama received mixed reviews from critics and viewers, who praised its originality, scale, and performances, but also criticized its complex plot, slow pace, and lack of emotional connection. It also faced some controversies regarding its similarity to other works such as Game of Thrones and Avatar, as well as its alleged mistreatment of staff and animals. Despite these issues, the drama still garnered a loyal fan base and high ratings, especially on Netflix, where it became one of the most popular Korean dramas globally. It also won several awards and nominations, including the Best Art Award at the 12th Korea Drama Awards and the Technical Award at the 56th Baeksang Arts Awards.

    -

    Why watch Arthdal Chronicles sub Indo?

    -

    If you are interested in watching Arthdal Chronicles, you might wonder why you should watch it with subtitles in Indonesian, or sub Indo for short. Here are some of the reasons why watching Arthdal Chronicles sub Indo is a good idea:

    -

    The benefits of watching Korean dramas with subtitles

    -

    Watching Korean dramas with subtitles can help you improve your language skills, cultural awareness, and cognitive abilities. By reading the subtitles, you can learn new words and phrases, as well as grammar and pronunciation. You can also compare and contrast the differences between Korean and Indonesian cultures, such as their values, customs, and expressions. Moreover, you can enhance your memory, attention span, and critical thinking by following the subtitles and the story.

    -

    Download Arthdal Chronicles Subtitle Indonesia Full Episode
    -Nonton Drama Korea Arthdal Chronicles Sub Indo Drakorstation
    -Streaming Arthdal Chronicles Sub Indo Gratis Drakorid
    -Cara Download Arthdal Chronicles Sub Indo di Drakorindo
    -Arthdal Chronicles Sub Indo Batch 360p 480p 720p Drakorasia
    -Review Drama Arthdal Chronicles Sub Indo Drakorindo
    -Sinopsis Arthdal Chronicles Sub Indo Episode 1-18 Drakorindo
    -Download OST Arthdal Chronicles Sub Indo Mp3 Drakorindo
    -Download Arthdal Chronicles Part 1 2 3 Sub Indo Drakorindo
    -Link Download Arthdal Chronicles Sub Indo Google Drive Drakorindo
    -Download Drama Korea Arthdal Chronicles Hardsub Indo Drakorindo
    -Nonton Online Arthdal Chronicles Sub Indo Viu Drakorindo
    -Download Arthdal Chronicles Sub Indo Lengkap dengan Subtitle English Drakorindo
    -Download Arthdal Chronicles Sub Indo Kualitas HD Drakorindo
    -Download Arthdal Chronicles Sub Indo Tanpa Iklan Drakorindo
    -Download Arthdal Chronicles Sub Indo Terbaru 2023 Drakorindo
    -Download Arthdal Chronicles Sub Indo Eps Terakhir Drakorindo
    -Download Arthdal Chronicles Sub Indo Format Mp4 Drakorindo
    -Download Arthdal Chronicles Sub Indo di HP Android Drakorindo
    -Download Arthdal Chronicles Sub Indo di Laptop Drakorindo
    -Download Arthdal Chronicles Sub Indo di Telegram Drakorindo
    -Download Arthdal Chronicles Sub Indo di Facebook Drakorindo
    -Download Arthdal Chronicles Sub Indo di Instagram Drakorindo
    -Download Arthdal Chronicles Sub Indo di Twitter Drakorindo
    -Download Arthdal Chronicles Sub Indo di Youtube Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Mudah dan Cepat Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Server Terbaik Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Koneksi Stabil Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Resolusi Tinggi Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Suara Jernih Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Gambar Bagus Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Cerita Menarik Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Pemain Terkenal Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Genre Fantasi Sejarah Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Rating Tinggi Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Review Positif Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Komunitas Fans Besar Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Bonus Behind The Scene Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Bonus Interview Cast Drakorindo
    -Download Arthdal Chronicles Sub Indo dengan Bonus Bloopers dan NG Scene Drakorindo

    -

    The unique and captivating story of Arthdal Chronicles

    -

Arthdal Chronicles is not your typical Korean drama. It is a rare example of the historical fantasy genre, exploring the origins of civilization and humanity. It has a rich and complex plot that spans different times and places, as well as different races and tribes. It also has a lot of twists and turns that will keep you on the edge of your seat. If you are looking for a refreshing and innovative story that will challenge your imagination and emotions, Arthdal Chronicles is the drama for you.

    -

    The stunning visuals and performances of Arthdal Chronicles

    -

    Another reason to watch Arthdal Chronicles is its amazing visuals and performances. The drama showcases some of the best cinematography, special effects, and costumes that create a realistic and immersive world of Arth. The drama also features some of the best actors in Korea, who deliver outstanding performances that portray their characters' personalities, emotions, and conflicts. You will be impressed by the quality and professionalism of the production and the cast of Arthdal Chronicles.

    -

    How to download Arthdal Chronicles sub Indo from Drakorindo?

    -

    Now that you know why you should watch Arthdal Chronicles sub Indo, you might wonder how to download it from Drakorindo, one of the most popular and reliable websites for Korean drama downloads. Here are the steps to follow:

    -

    What is Drakorindo?

    -

    Drakorindo is a website that provides free downloads of Korean dramas with various subtitles, including Indonesian. It has a large and updated collection of dramas from different genres and networks. It also has a simple and user-friendly interface that makes it easy to find and download your favorite dramas. You can access Drakorindo from any device, such as a computer, a smartphone, or a tablet.

    -

    The steps to download Arthdal Chronicles sub Indo from Drakorindo

    -

    To download Arthdal Chronicles sub Indo from Drakorindo, you need to follow these steps:

    -
      -
    1. Go to the official website of Drakorindo at https://drakorindo.cc/.
    2. Type "Arthdal Chronicles" in the search box and click on the magnifying glass icon.
    3. Select the episode you want to download from the list of results.
    4. Scroll down to the bottom of the page and click on the link that says "Download Subtitle Indonesia/English".
    5. Choose the subtitle language you prefer (Indonesian or English) and click on the download button.
    6. Save the subtitle file to your device.
    7. Go back to the previous page and click on the link that says "Download Episode".
    8. Choose the video quality you prefer (360p, 540p, or 720p) and click on the download button.
    9. Save the video file to your device.
    10. Open the video file with a media player that supports subtitles, such as VLC or MX Player.
    11. Load the subtitle file from your device and enjoy watching Arthdal Chronicles sub Indo.
    -

    The alternative ways to watch Arthdal Chronicles sub Indo online

    -

    If you don't want to download Arthdal Chronicles sub Indo from Drakorindo, you can also watch it online from other sources. Here are some of the alternative ways to watch Arthdal Chronicles sub Indo online:

    - -

    Conclusion

    -

    Arthdal Chronicles is a Korean drama that you should not miss if you love historical fantasy genres. It has a unique and captivating story, stunning visuals and performances, and a lot of benefits for watching it with subtitles in Indonesian. You can download Arthdal Chronicles sub Indo from Drakorindo easily and quickly, or watch it online from other sources. Either way, you will enjoy watching this epic and ambitious drama that will take you to a mythical land of Arth.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Arthdal Chronicles sub Indo:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/English Movies Download 2017 The Most Popular and Critically Acclaimed Films of the Year.md b/spaces/1phancelerku/anime-remove-background/English Movies Download 2017 The Most Popular and Critically Acclaimed Films of the Year.md deleted file mode 100644 index 8a0bfb6742223c5b85db26ed88fac52b631083b6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/English Movies Download 2017 The Most Popular and Critically Acclaimed Films of the Year.md +++ /dev/null @@ -1,145 +0,0 @@ - -

    How to Download English Movies from 2017

    -

If you are a movie lover, you might be interested in downloading some of the best English movies from 2017. Whether you want to watch them offline, save them for later, or share them with your friends, downloading movies can be a convenient way to enjoy cinema. However, you might also wonder how to download movies safely and legally, without risking your device or breaking the law. In this article, we will show you how to download English movies from 2017, highlight the top 10 movies from that year, and cover the best practices for downloading movies online.

    -

    english movies download 2017


    DOWNLOAD >>> https://jinyurl.com/2uNNG4



    -

    Introduction

    -

    Why download movies from 2017?

    -

    2017 was a great year for English movies, with many genres, themes, and styles represented. From action-packed superhero flicks to heartwarming animated features, from dark comedies to thrilling dramas, there was something for everyone in 2017. Some of the movies from that year were critically acclaimed, winning awards and nominations, while others were commercially successful, breaking box office records and becoming cult classics. Downloading movies from 2017 can allow you to revisit some of your favorites, discover new gems, or catch up on what you missed.

    -

    What are the best sources for downloading movies?

    -

    There are many sources for downloading movies online, but not all of them are reliable, safe, or legal. Some of the best sources for downloading movies are:

    - -

    Top 10 English Movies from 2017

    -

    Logan

    -

    Logan is a superhero movie based on the Marvel Comics character Wolverine. It is the tenth installment in the X-Men film series and the third and final film in the Wolverine trilogy. It stars Hugh Jackman as Logan/Wolverine, Patrick Stewart as Charles Xavier/Professor X, and Dafne Keen as Laura/X-23. The movie is set in a dystopian future where mutants are nearly extinct and Logan is an aging and weary loner who must protect a young mutant girl from a sinister organization. Logan is widely regarded as one of the best superhero movies of all time, praised for its dark tone, emotional depth, and powerful performances.

    -

    Coco

    -

Coco is an animated movie produced by Pixar Animation Studios and distributed by Walt Disney Pictures. It is inspired by the Mexican holiday of Día de los Muertos (Day of the Dead). It features the voices of Anthony Gonzalez as Miguel Rivera, Gael García Bernal as Hector Rivera, Benjamin Bratt as Ernesto de la Cruz, and Alanna Ubach as Imelda Rivera. The movie follows Miguel, a young boy who dreams of becoming a musician, as he journeys into the Land of the Dead to find his great-great-grandfather and learn the truth about his family's history. Coco is widely regarded as one of the best animated movies of all time, praised for its colorful animation, musical score, cultural representation, and emotional impact.

    -

    Thor: Ragnarok

    -

    Thor: Ragnarok is a superhero movie based on the Marvel Comics character Thor. It is the third installment in the Thor film series and the seventeenth film in the Marvel Cinematic Universe. It stars Chris Hemsworth as Thor, Tom Hiddleston as Loki, Cate Blanchett as Hela, Mark Ruffalo as Bruce Banner/Hulk, and Tessa Thompson as Valkyrie. The movie follows Thor as he tries to stop Hela, the goddess of death, from destroying his home world of Asgard and triggering the prophesied Ragnarok, the end of all things. Thor: Ragnarok is widely regarded as one of the best superhero movies of all time, praised for its humor, action, visuals, and direction.

    -

    The Meyerowitz Stories

    -

    The Meyerowitz Stories is a comedy-drama movie written and directed by Noah Baumbach. It stars Adam Sandler, Ben Stiller, Dustin Hoffman, Elizabeth Marvel, and Emma Thompson. The movie follows the dysfunctional Meyerowitz family as they reunite to celebrate their father's artistic legacy and cope with his declining health. The Meyerowitz Stories is widely regarded as one of the best comedy-drama movies of all time, praised for its witty script, realistic characters, and poignant themes.

    -

    Wind River

    -

    Wind River is a neo-Western thriller movie written and directed by Taylor Sheridan. It stars Jeremy Renner as Cory Lambert, Elizabeth Olsen as Jane Banner, and Gil Birmingham as Martin Hanson. The movie follows Cory, a wildlife tracker and hunter, and Jane, an FBI agent, as they investigate the murder of a young Native American woman on the Wind River Indian Reservation in Wyoming. Wind River is widely regarded as one of the best thriller movies of all time, praised for its suspenseful plot, atmospheric setting, and social commentary.

    -

    Dunkirk

    -

    Dunkirk is a war movie written and directed by Christopher Nolan. It stars Fionn Whitehead, Tom Glynn-Carney, Jack Lowden, Harry Styles, Aneurin Barnard, James D'Arcy, Barry Keoghan, Kenneth Branagh, Cillian Murphy, Mark Rylance, and Tom Hardy. The movie depicts the Dunkirk evacuation of World War II from three perspectives: land, sea, and air. It shows the struggles and sacrifices of the Allied soldiers and civilians who were trapped on the beaches of Dunkirk and rescued by a flotilla of boats. Dunkirk is widely regarded as one of the best war movies of all time, praised for its immersive cinematography, sound design, editing, and storytelling.

    -

    english movies download 2017 imdb
    -english movies download 2017 free
    -english movies download 2017 hd
    -english movies download 2017 mp4
    -english movies download 2017 torrent
    -english movies download 2017 action
    -english movies download 2017 comedy
    -english movies download 2017 horror
    -english movies download 2017 romance
    -english movies download 2017 thriller
    -english movies download 2017 drama
    -english movies download 2017 sci-fi
    -english movies download 2017 fantasy
    -english movies download 2017 adventure
    -english movies download 2017 animation
    -english movies download 2017 war
    -english movies download 2017 crime
    -english movies download 2017 mystery
    -english movies download 2017 musical
    -english movies download 2017 western
    -english movies download 2017 biopic
    -english movies download 2017 documentary
    -english movies download 2017 sports
    -english movies download 2017 family
    -english movies download 2017 history
    -english movies download 2017 superhero
    -english movies download 2017 netflix
    -english movies download 2017 amazon prime
    -english movies download 2017 hulu
    -english movies download 2017 disney plus
    -english movies download 2017 youtube
    -english movies download 2017 google drive
    -english movies download 2017 dailymotion
    -english movies download 2017 vimeo
    -english movies download 2017 archive.org
    -english movies download 2017 dual audio
    -english movies download 2017 subtitles
    -english movies download 2017 hindi dubbed
    -english movies download 2017 tamil dubbed
    -english movies download 2017 telugu dubbed
    -english movies download 2017 malayalam dubbed
    -english movies download 2017 bengali dubbed
    -english movies download 2017 kannada dubbed
    -english movies download 2017 marathi dubbed
    -english movies download 2017 punjabi dubbed
    -english movies download 2017 urdu dubbed
    -english movies download 2017 chinese dubbed
    -english movies download 2017 korean dubbed
    -english movies download 2017 japanese dubbed

    -

    Get Out

    -

    Get Out is a horror movie written and directed by Jordan Peele. It stars Daniel Kaluuya as Chris Washington, Allison Williams as Rose Armitage, Lil Rel Howery as Rod Williams, Bradley Whitford as Dean Armitage, Caleb Landry Jones as Jeremy Armitage, Stephen Root as Jim Hudson, and Catherine Keener as Missy Armitage. The movie follows Chris, a young black man, who visits the family of his white girlfriend, Rose, and uncovers a horrifying secret involving their true intentions. Get Out is widely regarded as one of the best horror movies of all time, praised for its originality, satire, social critique, and performances.

    -

    The Shape of Water

    -

    The Shape of Water is a fantasy romance movie directed by Guillermo del Toro and written by del Toro and Vanessa Taylor. It stars Sally Hawkins as Elisa Esposito, Doug Jones as the Amphibian Man, Michael Shannon as Richard Strickland, Richard Jenkins as Giles, Octavia Spencer as Zelda Fuller, and Michael Stuhlbarg as Robert Hoffstetler. The movie is set in the Cold War era and follows Elisa, a mute janitor who works at a secret government facility, where she forms a bond with a mysterious aquatic creature that is held captive there. The Shape of Water is widely regarded as one of the best fantasy romance movies of all time, praised for its visual style, direction, score, and themes.

    -

    Lady Bird

    -

    Lady Bird is a coming-of-age comedy-drama movie written and directed by Greta Gerwig. It stars Saoirse Ronan as Christine "Lady Bird" McPherson, Laurie Metcalf as Marion McPherson, Tracy Letts as Larry McPherson, Lucas Hedges as Danny O'Neill, Timothée Chalamet as Kyle Scheible, Beanie Feldstein as Julianne "Julie" Steffans, and Lois Smith as Sister Sarah Joan. The movie follows Lady Bird, a rebellious and artistic teenager who navigates her senior year of high school in Sacramento, California in 2002. Lady Bird is widely regarded as one of the best coming-of-age movies of all time, praised for its humor, authenticity, and performances.

    -

    Three Billboards Outside Ebbing, Missouri

    -

    Three Billboards Outside Ebbing, Missouri is a black comedy-drama movie written and directed by Martin McDonagh. It stars Frances McDormand as Mildred Hayes, Woody Harrelson as Bill Willoughby, Sam Rockwell as Jason Dixon, John Hawkes as Charlie Hayes, and Peter Dinklage as James. The movie follows Mildred, a grieving mother who rents three billboards to call attention to the unsolved murder of her daughter and the lack of action by the local police. Three Billboards Outside Ebbing, Missouri is widely regarded as one of the best black comedy-drama movies of all time, praised for its sharp dialogue, dark humor, and performances.

    -

    How to Download Movies Safely and Legally

    -

    Use a VPN service

    -

    A VPN (Virtual Private Network) is a service that encrypts your internet traffic and hides your IP address, making you anonymous and secure online. A VPN can help you download movies safely and legally by:

    - -

    Some of the best VPN services for downloading movies are ExpressVPN, NordVPN, Surfshark, and CyberGhost.

    -

    Choose a reputable site or platform

    -

    Another way to download movies safely and legally is to choose a reputable site or platform that offers high-quality and legal content. Some of the factors to consider when choosing a site or platform are:

    - -

    Check the file format and size

    -

    A third way to download movies safely and legally is to check the file format and size before downloading them. Some of the factors to consider when checking the file format and size are:

    - -
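Common video container formats such as MP4, MKV, and AVI are widely supported, and a feature-length movie is usually hundreds of megabytes to a few gigabytes in size. As an illustration only, here is a small, hypothetical Python sketch of such a pre-check: it compares a file's extension against a short whitelist and reports its size so you can judge whether the download looks plausible. The extension list and size threshold are assumptions, not fixed rules:

```python
from pathlib import Path

# Assumed whitelist of common, widely supported video container formats
ALLOWED_EXTENSIONS = {".mp4", ".mkv", ".avi"}
MIN_PLAUSIBLE_SIZE_MB = 200  # a feature-length movie is rarely smaller than this

def check_movie_file(path_str: str) -> bool:
    path = Path(path_str)
    size_mb = path.stat().st_size / (1024 * 1024)
    if path.suffix.lower() not in ALLOWED_EXTENSIONS:
        print(f"Warning: unexpected file type {path.suffix!r}")
        return False
    if size_mb < MIN_PLAUSIBLE_SIZE_MB:
        print(f"Warning: file is only {size_mb:.1f} MB, which looks too small")
        return False
    print(f"{path.name}: {size_mb:.1f} MB, extension looks OK")
    return True

# Example usage with a placeholder file name
check_movie_file("downloaded-movie.mp4")
```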

    Scan the file for viruses or malware

    -

    A fourth way to download movies safely and legally is to scan the file for viruses or malware before opening or playing them. Some of the risks of downloading movies from untrusted sources are:

    - -

To avoid these risks, you should always scan the file for viruses or malware before opening or playing it. You can use reliable antivirus or anti-malware software, such as Norton, McAfee, Kaspersky, or Malwarebytes, to scan the file and remove any threats.
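An actual malware scan has to be done with dedicated security software such as the programs named above; the sketch below shows a different, complementary precaution that is easy to script yourself: verifying a downloaded file's SHA-256 checksum against the value published by the uploader, so you can at least confirm the file was not altered in transit. The file name and expected hash are placeholders:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        # Read in chunks so large video files do not need to fit in memory
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values -- substitute the real file and the checksum the uploader published
expected = "0123456789abcdef..."  # published SHA-256 checksum (placeholder)
actual = sha256_of_file("downloaded-movie.mp4")
print("Checksum matches" if actual == expected else "Checksum mismatch: do not open the file")
```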

    -

    Conclusion

    -

    Downloading English movies from 2017 can be a fun and rewarding way to enjoy some of the best cinema of that year. However, you should also be careful and responsible when downloading movies online, as there are many risks and challenges involved. By following the tips and steps in this article, you can download movies safely and legally, and avoid any problems or issues. Happy watching!

    -

    FAQs

    -

    Here are some of the frequently asked questions about downloading English movies from 2017:

    -
      -
    1. What is the best streaming platform for downloading English movies from 2017?

      The answer to this question depends on your preferences, budget, and availability. However, some of the most popular and reputable streaming platforms for downloading English movies from 2017 are Netflix, Amazon Prime Video, Hulu, Disney+, and HBO Max. These platforms offer a wide range of movies from different genres, styles, and countries, as well as original and exclusive content. They also offer high-quality and legal downloads, as well as other features such as offline viewing, multiple devices, parental controls, and subtitles.

      -
    2. What is the best online archive for downloading English movies from 2017?

      The answer to this question depends on your interests, tastes, and curiosity. However, some of the most interesting and reliable online archives for downloading English movies from 2017 are Internet Archive, Open Culture, and Public Domain Torrents. These websites offer free access to public domain or creative commons movies that are not protected by copyright. They also offer a variety of movies from different eras, cultures, and genres, as well as rare and independent movies that you might not find elsewhere.

      -
    3. What is the best torrent site for downloading English movies from 2017?

The answer depends on your risk tolerance, ethics, and the laws where you live. Some of the most popular and notorious torrent sites for downloading English movies from 2017 are The Pirate Bay, RARBG, and 1337x. These websites offer a huge collection of movies from different sources, qualities, and languages, as well as fast and easy downloads. However, they also pose many dangers and challenges, such as viruses, malware, legal issues, and ethical dilemmas.

      -
    4. How can I download movies faster and easier?

      There are some tips and tricks that can help you download movies faster and easier, such as:

      -
        -
      • Use a download manager: A download manager is a program that can help you manage, organize, and accelerate your downloads. Some examples are Internet Download Manager, Free Download Manager, and JDownloader.
      • Use a torrent client: A torrent client is a program that can help you download files from torrent sites. Some examples are BitTorrent, uTorrent, and qBittorrent.
      • Use a Wi-Fi connection: A Wi-Fi connection is usually faster and more stable than a mobile data connection. It can also help you save your data plan and avoid extra charges.
      • Choose the right time: The speed and availability of downloads can vary depending on the time of the day, the traffic of the site or platform, and the demand of the movie. It is usually better to download movies during off-peak hours, such as late at night or early in the morning.
      -
    5. How can I watch downloaded movies on my TV?

      There are some ways that you can watch downloaded movies on your TV, such as:

      -
        -
      • Use an HDMI cable: An HDMI cable connects your device to your TV and transmits audio and video signals. You can use an HDMI cable to connect your laptop, tablet, or smartphone to your TV and play the downloaded movie on your device.
      • Use a streaming device: A streaming device is a device that can connect to your TV and stream content from the internet or your device. Some examples are Chromecast, Roku, Fire TV Stick, and Apple TV. You can use a streaming device to cast or mirror the downloaded movie from your device to your TV.
      • Use a USB drive: A USB drive is a device that can store data and plug into your TV or other devices. You can use a USB drive to transfer the downloaded movie from your device to your TV and play it using the TV's media player.
      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/232labs/VToonify/vtoonify/model/raft/core/extractor.py b/spaces/232labs/VToonify/vtoonify/model/raft/core/extractor.py deleted file mode 100644 index 9a9c759d1243d4694e8656c2f6f8a37e53edd009..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/raft/core/extractor.py +++ /dev/null @@ -1,267 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class ResidualBlock(nn.Module): - def __init__(self, in_planes, planes, norm_fn='group', stride=1): - super(ResidualBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1) - self.relu = nn.ReLU(inplace=True) - - num_groups = planes // 8 - - if norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - if not stride == 1: - self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - - elif norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(planes) - self.norm2 = nn.BatchNorm2d(planes) - if not stride == 1: - self.norm3 = nn.BatchNorm2d(planes) - - elif norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(planes) - self.norm2 = nn.InstanceNorm2d(planes) - if not stride == 1: - self.norm3 = nn.InstanceNorm2d(planes) - - elif norm_fn == 'none': - self.norm1 = nn.Sequential() - self.norm2 = nn.Sequential() - if not stride == 1: - self.norm3 = nn.Sequential() - - if stride == 1: - self.downsample = None - - else: - self.downsample = nn.Sequential( - nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3) - - - def forward(self, x): - y = x - y = self.relu(self.norm1(self.conv1(y))) - y = self.relu(self.norm2(self.conv2(y))) - - if self.downsample is not None: - x = self.downsample(x) - - return self.relu(x+y) - - - -class BottleneckBlock(nn.Module): - def __init__(self, in_planes, planes, norm_fn='group', stride=1): - super(BottleneckBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_planes, planes//4, kernel_size=1, padding=0) - self.conv2 = nn.Conv2d(planes//4, planes//4, kernel_size=3, padding=1, stride=stride) - self.conv3 = nn.Conv2d(planes//4, planes, kernel_size=1, padding=0) - self.relu = nn.ReLU(inplace=True) - - num_groups = planes // 8 - - if norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4) - self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4) - self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - if not stride == 1: - self.norm4 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - - elif norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(planes//4) - self.norm2 = nn.BatchNorm2d(planes//4) - self.norm3 = nn.BatchNorm2d(planes) - if not stride == 1: - self.norm4 = nn.BatchNorm2d(planes) - - elif norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(planes//4) - self.norm2 = nn.InstanceNorm2d(planes//4) - self.norm3 = nn.InstanceNorm2d(planes) - if not stride == 1: - self.norm4 = nn.InstanceNorm2d(planes) - - elif norm_fn == 'none': - self.norm1 = nn.Sequential() - self.norm2 = nn.Sequential() - self.norm3 = nn.Sequential() - if not stride == 1: - self.norm4 = nn.Sequential() - - if stride == 1: - self.downsample = None - - else: - self.downsample = nn.Sequential( - nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm4) - - - def forward(self, x): - y = x - y = 
self.relu(self.norm1(self.conv1(y))) - y = self.relu(self.norm2(self.conv2(y))) - y = self.relu(self.norm3(self.conv3(y))) - - if self.downsample is not None: - x = self.downsample(x) - - return self.relu(x+y) - -class BasicEncoder(nn.Module): - def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0): - super(BasicEncoder, self).__init__() - self.norm_fn = norm_fn - - if self.norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64) - - elif self.norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(64) - - elif self.norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(64) - - elif self.norm_fn == 'none': - self.norm1 = nn.Sequential() - - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) - self.relu1 = nn.ReLU(inplace=True) - - self.in_planes = 64 - self.layer1 = self._make_layer(64, stride=1) - self.layer2 = self._make_layer(96, stride=2) - self.layer3 = self._make_layer(128, stride=2) - - # output convolution - self.conv2 = nn.Conv2d(128, output_dim, kernel_size=1) - - self.dropout = None - if dropout > 0: - self.dropout = nn.Dropout2d(p=dropout) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): - if m.weight is not None: - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _make_layer(self, dim, stride=1): - layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride) - layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1) - layers = (layer1, layer2) - - self.in_planes = dim - return nn.Sequential(*layers) - - - def forward(self, x): - - # if input is list, combine batch dimension - is_list = isinstance(x, tuple) or isinstance(x, list) - if is_list: - batch_dim = x[0].shape[0] - x = torch.cat(x, dim=0) - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - - x = self.conv2(x) - - if self.training and self.dropout is not None: - x = self.dropout(x) - - if is_list: - x = torch.split(x, [batch_dim, batch_dim], dim=0) - - return x - - -class SmallEncoder(nn.Module): - def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0): - super(SmallEncoder, self).__init__() - self.norm_fn = norm_fn - - if self.norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=8, num_channels=32) - - elif self.norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(32) - - elif self.norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(32) - - elif self.norm_fn == 'none': - self.norm1 = nn.Sequential() - - self.conv1 = nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3) - self.relu1 = nn.ReLU(inplace=True) - - self.in_planes = 32 - self.layer1 = self._make_layer(32, stride=1) - self.layer2 = self._make_layer(64, stride=2) - self.layer3 = self._make_layer(96, stride=2) - - self.dropout = None - if dropout > 0: - self.dropout = nn.Dropout2d(p=dropout) - - self.conv2 = nn.Conv2d(96, output_dim, kernel_size=1) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): - if m.weight is not None: - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _make_layer(self, dim, stride=1): - layer1 = BottleneckBlock(self.in_planes, dim, self.norm_fn, stride=stride) - layer2 = BottleneckBlock(dim, dim, 
self.norm_fn, stride=1) - layers = (layer1, layer2) - - self.in_planes = dim - return nn.Sequential(*layers) - - - def forward(self, x): - - # if input is list, combine batch dimension - is_list = isinstance(x, tuple) or isinstance(x, list) - if is_list: - batch_dim = x[0].shape[0] - x = torch.cat(x, dim=0) - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.conv2(x) - - if self.training and self.dropout is not None: - x = self.dropout(x) - - if is_list: - x = torch.split(x, [batch_dim, batch_dim], dim=0) - - return x diff --git a/spaces/4H17Joycelyn/text_generater/app.py b/spaces/4H17Joycelyn/text_generater/app.py deleted file mode 100644 index 26292935892fc29d8ad978b0d3d25afe7cea6f63..0000000000000000000000000000000000000000 --- a/spaces/4H17Joycelyn/text_generater/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr -from transformers import pipeline - -generator = pipeline('text-generation', model='gpt2') - -def generate(text): - result=generator(text) - return result[0]['generated_text'] - -gr.Interface(fn=generate, inputs=gr.inputs.Textbox(), outputs=gr.outputs.Textbox()).launch() \ No newline at end of file diff --git a/spaces/7hao/bingo/src/components/ui/codeblock.tsx b/spaces/7hao/bingo/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. 
- return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
-    <div>
-      <div>
-        <span>{language}</span>
-        <Button onClick={downloadAsFile}>
-          <IconDownload />
-        </Button>
-        <Button onClick={onCopy}>
-          {isCopied ? <IconCheck /> : <IconCopy />}
-        </Button>
-      </div>
-      <SyntaxHighlighter language={language} style={coldarkDark} PreTag="div">
-        {value}
-      </SyntaxHighlighter>
-    </div>
    - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index eb60d8830714338448be009d1075e3594337db15..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,90 +0,0 @@ -from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/Message.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/Message.ts deleted file mode 100644 index 61e517da14ff331f268b308c73293e3ef706dd5a..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/Message.ts +++ /dev/null @@ -1,10 +0,0 @@ -import type { Timestamps } from "./Timestamps"; - -export type Message = Partial & { - from: "user" | "assistant"; - id: ReturnType; - content: string; - webSearchId?: string; - score?: -1 | 0 | 1; - isCode: boolean; -}; diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/encoders/adapter.py 
b/spaces/Adapter/T2I-Adapter/ldm/modules/encoders/adapter.py deleted file mode 100644 index 0eef97edcaca1186835f32dc1b0c7bcb9c4bd3ec..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/modules/encoders/adapter.py +++ /dev/null @@ -1,258 +0,0 @@ -import torch -import torch.nn as nn -from collections import OrderedDict - - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd( - dims, self.channels, self.out_channels, 3, stride=stride, padding=padding - ) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResnetBlock(nn.Module): - def __init__(self, in_c, out_c, down, ksize=3, sk=False, use_conv=True): - super().__init__() - ps = ksize // 2 - if in_c != out_c or sk == False: - self.in_conv = nn.Conv2d(in_c, out_c, ksize, 1, ps) - else: - # print('n_in') - self.in_conv = None - self.block1 = nn.Conv2d(out_c, out_c, 3, 1, 1) - self.act = nn.ReLU() - self.block2 = nn.Conv2d(out_c, out_c, ksize, 1, ps) - if sk == False: - self.skep = nn.Conv2d(in_c, out_c, ksize, 1, ps) - else: - self.skep = None - - self.down = down - if self.down == True: - self.down_opt = Downsample(in_c, use_conv=use_conv) - - def forward(self, x): - if self.down == True: - x = self.down_opt(x) - if self.in_conv is not None: # edit - x = self.in_conv(x) - - h = self.block1(x) - h = self.act(h) - h = self.block2(h) - if self.skep is not None: - return h + self.skep(x) - else: - return h + x - - -class Adapter(nn.Module): - def __init__(self, channels=[320, 640, 1280, 1280], nums_rb=3, cin=64, ksize=3, sk=False, use_conv=True): - super(Adapter, self).__init__() - self.unshuffle = nn.PixelUnshuffle(8) - self.channels = channels - self.nums_rb = nums_rb - self.body = [] - for i in range(len(channels)): - for j in range(nums_rb): - if (i != 0) and (j == 0): - self.body.append( - ResnetBlock(channels[i - 1], channels[i], down=True, ksize=ksize, sk=sk, use_conv=use_conv)) - else: - self.body.append( - ResnetBlock(channels[i], channels[i], down=False, ksize=ksize, sk=sk, use_conv=use_conv)) - self.body = nn.ModuleList(self.body) - self.conv_in = nn.Conv2d(cin, channels[0], 3, 1, 1) - - def forward(self, x): - # unshuffle - x = self.unshuffle(x) - # 
extract features - features = [] - x = self.conv_in(x) - for i in range(len(self.channels)): - for j in range(self.nums_rb): - idx = i * self.nums_rb + j - x = self.body[idx](x) - features.append(x) - - return features - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential( - OrderedDict([("c_fc", nn.Linear(d_model, d_model * 4)), ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model))])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class StyleAdapter(nn.Module): - - def __init__(self, width=1024, context_dim=768, num_head=8, n_layes=3, num_token=4): - super().__init__() - - scale = width ** -0.5 - self.transformer_layes = nn.Sequential(*[ResidualAttentionBlock(width, num_head) for _ in range(n_layes)]) - self.num_token = num_token - self.style_embedding = nn.Parameter(torch.randn(1, num_token, width) * scale) - self.ln_post = LayerNorm(width) - self.ln_pre = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, context_dim)) - - def forward(self, x): - # x shape [N, HW+1, C] - style_embedding = self.style_embedding + torch.zeros( - (x.shape[0], self.num_token, self.style_embedding.shape[-1]), device=x.device) - x = torch.cat([x, style_embedding], dim=1) - x = self.ln_pre(x) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer_layes(x) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, -self.num_token:, :]) - x = x @ self.proj - - return x - - -class ResnetBlock_light(nn.Module): - def __init__(self, in_c): - super().__init__() - self.block1 = nn.Conv2d(in_c, in_c, 3, 1, 1) - self.act = nn.ReLU() - self.block2 = nn.Conv2d(in_c, in_c, 3, 1, 1) - - def forward(self, x): - h = self.block1(x) - h = self.act(h) - h = self.block2(h) - - return h + x - - -class extractor(nn.Module): - def __init__(self, in_c, inter_c, out_c, nums_rb, down=False): - super().__init__() - self.in_conv = nn.Conv2d(in_c, inter_c, 1, 1, 0) - self.body = [] - for _ in range(nums_rb): - self.body.append(ResnetBlock_light(inter_c)) - self.body = nn.Sequential(*self.body) - self.out_conv = nn.Conv2d(inter_c, out_c, 1, 1, 0) - self.down = down - if self.down == True: - self.down_opt = Downsample(in_c, use_conv=False) - - def forward(self, x): - if self.down == True: - x = self.down_opt(x) - x = self.in_conv(x) - x = self.body(x) - x = self.out_conv(x) - - return x - - -class Adapter_light(nn.Module): - def __init__(self, channels=[320, 640, 1280, 1280], nums_rb=3, cin=64): - super(Adapter_light, self).__init__() - self.unshuffle = nn.PixelUnshuffle(8) - self.channels = channels - self.nums_rb = nums_rb - self.body = [] - for i in range(len(channels)): - if i == 0: - 
self.body.append(extractor(in_c=cin, inter_c=channels[i]//4, out_c=channels[i], nums_rb=nums_rb, down=False)) - else: - self.body.append(extractor(in_c=channels[i-1], inter_c=channels[i]//4, out_c=channels[i], nums_rb=nums_rb, down=True)) - self.body = nn.ModuleList(self.body) - - def forward(self, x): - # unshuffle - x = self.unshuffle(x) - # extract features - features = [] - for i in range(len(self.channels)): - x = self.body[i](x) - features.append(x) - - return features diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/methods/BoardMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/methods/BoardMethods.js deleted file mode 100644 index a4d91ec17c408da4a3ad5c1fd7f7c65b9d9d8e23..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/methods/BoardMethods.js +++ /dev/null @@ -1,41 +0,0 @@ -export default { - setBoardSize(width, height) { - this.board.setBoardWidth(width).setBoardHeight(height); - return this; - }, - - // Chess properties - getChessMoveTo(chess) { - return (chess) ? chess.rexMoveTo : undefined; - }, - - getChessTileZ() { - return this.board.chessTileZ; - }, - - worldXYToChess(worldX, worldY) { - return this.board.worldXYToChess(worldX, worldY); - }, - - tileXYToChess(tileX, tileY) { - return this.board.tileXYToChess(tileX, tileY); - }, - - getNeighborChessAtAngle(chess, angle) { - return this.board.getNeighborChessAtAngle(chess, angle); - }, - - getNeighborChessAtDirection(chess, direction) { - return this.board.getNeighborChessAtDirection(chess, direction); - }, - - // Expose board instance - getBoard() { - return this.board.board; - }, - - // Expose match instance - getMatch() { - return this.board.match; - } -} \ No newline at end of file diff --git a/spaces/AiMimicry/sovits-models/inference/slicer.py b/spaces/AiMimicry/sovits-models/inference/slicer.py deleted file mode 100644 index b05840bcf6bdced0b6e2adbecb1a1dd5b3dee462..0000000000000000000000000000000000000000 --- a/spaces/AiMimicry/sovits-models/inference/slicer.py +++ /dev/null @@ -1,142 +0,0 @@ -import librosa -import torch -import torchaudio - - -class Slicer: - def __init__(self, - sr: int, - threshold: float = -40., - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000): - if not min_length >= min_interval >= hop_size: - raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size') - if not max_sil_kept >= hop_size: - raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size') - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.) 
- self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)] - else: - return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = librosa.to_mono(waveform) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. - if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start: i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin() - pos += i - self.max_sil_kept - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if silence_start is not None and total_frames - silence_start >= self.min_interval: - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. 
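-        # note: the chunks assembled below map a string index to {"slice": bool, "split_time": "start,end"},
-        # where "slice" is True for silent spans and the offsets are sample positions in the waveform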
- if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - else: - chunks = [] - # 第一段静音并非从头开始,补上有声片段 - if sil_tags[0][0]: - chunks.append( - {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, - "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"}) - # 标识所有静音片段 - chunks.append({"slice": True, - "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] * self.hop_size < len(waveform): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000): - audio, sr = librosa.load(audio_path, sr=None) - slicer = Slicer( - sr=sr, - threshold=db_thresh, - min_length=min_len - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - if tag[0] != tag[1]: - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr diff --git a/spaces/AkitoP/umamusume_bert_vits2/README.md b/spaces/AkitoP/umamusume_bert_vits2/README.md deleted file mode 100644 index fd92ae264c2b383621a8dead1231619a84187b15..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Umamusume Bert Vits2 -emoji: 📊 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/voice_upload.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/voice_upload.py deleted file mode 100644 index 5c825a933a7970e17e57c381b59a5fc4e06ea569..0000000000000000000000000000000000000000 --- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/voice_upload.py +++ /dev/null @@ -1,28 +0,0 @@ -from google.colab import files -import shutil -import os -import argparse -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--type", type=str, required=True, help="type of file to upload") - args = parser.parse_args() - file_type = args.type - - basepath = os.getcwd() - uploaded = files.upload() # 上传文件 - assert(file_type in ['zip', 'audio', 'video']) - if file_type == "zip": - upload_path = "./custom_character_voice/" - for filename in uploaded.keys(): - #将上传的文件移动到指定的位置上 - shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, "custom_character_voice.zip")) - elif file_type == "audio": - upload_path = "./raw_audio/" - for filename in uploaded.keys(): - #将上传的文件移动到指定的位置上 - shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, filename)) - elif file_type == "video": - upload_path = "./video_data/" - for filename in uploaded.keys(): - # 将上传的文件移动到指定的位置上 - shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, filename)) \ No newline at end of file diff --git 
a/spaces/Alex123aaa/1234/app.py b/spaces/Alex123aaa/1234/app.py deleted file mode 100644 index a2d60613a9f4f83ff82eaaf5d13d25484d3dd03c..0000000000000000000000000000000000000000 --- a/spaces/Alex123aaa/1234/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import os -os.system("pip uninstall -y gradio") -os.system("pip install gradio==3.44.4") -import gradio as gr -from sklearn.datasets import fetch_california_housing -from sklearn.model_selection import train_test_split -from sklearn.linear_model import LinearRegression - - -data = fetch_california_housing() -X = data.data -y = data.target - - -X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) - - -X_train = X_train[:, :5] -X_test = X_test[:, :5] - -model = LinearRegression() -model.fit(X_train, y_train) - - -def predict_housing_price(sqft, bedrooms, bathrooms, latitude, longitude): - - if sqft is None or bedrooms is None or bathrooms is None or latitude is None or longitude is None: - return "Please provide all input values." - - - try: - sqft = float(sqft) - bedrooms = float(bedrooms) - bathrooms = float(bathrooms) - latitude = float(latitude) - longitude = float(longitude) - except ValueError: - return "Invalid input. Please provide numeric values." - - - input_features = [sqft, bedrooms, bathrooms, latitude, longitude] - predicted_price = model.predict([input_features])[0] - - - return f"Predicted Price: ${predicted_price:.2f}" - -input_components = [ - gr.inputs.Slider(label="Sqft", minimum=0, maximum=5000), - gr.inputs.Slider(label="Bedrooms", minimum=0, maximum=10), - gr.inputs.Slider(label="Bathrooms", minimum=0, maximum=5), - gr.inputs.Slider(label="Latitude", minimum=32.5, maximum=35), - gr.inputs.Slider(label="Longitude", minimum=-125, maximum=-120) -] - -iface = gr.Interface( - fn=predict_housing_price, - inputs=input_components, - outputs="text" -) - -iface.launch() \ No newline at end of file diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/shanghainese.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/shanghainese.py deleted file mode 100644 index cb29c24a08d2e406e8399cf7bc9fe5cb43cb9c61..0000000000000000000000000000000000000000 --- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = 
re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/Amite5h/EuroSAT_/README.md b/spaces/Amite5h/EuroSAT_/README.md deleted file mode 100644 index 5af3c062c31de996681f0156e283d593630920be..0000000000000000000000000000000000000000 --- a/spaces/Amite5h/EuroSAT_/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: EuroSAT -emoji: ⚡ -colorFrom: gray -colorTo: purple -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Amon1/ChatGPTForAcadamic/functional.py b/spaces/Amon1/ChatGPTForAcadamic/functional.py deleted file mode 100644 index eccc0ac251784f4611c60ae754194448fca2e9e8..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/functional.py +++ /dev/null @@ -1,70 +0,0 @@ -# 'primary' 颜色对应 theme.py 中的 primary_hue -# 'secondary' 颜色对应 theme.py 中的 neutral_hue -# 'stop' 颜色对应 theme.py 中的 color_er -# 默认按钮颜色是 secondary -from toolbox import clear_line_break - -def get_functionals(): - return { - "英语学术润色": { - # 前言 - "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " + - r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " + - r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n", - # 后语 - "Suffix": r"", - "Color": r"secondary", # 按钮颜色 - }, - "中文学术润色": { - "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," + - r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n", - "Suffix": r"", - }, - "查找语法错误": { - "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " + - r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." + - r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " + - r"put the original text the first column, " + - r"put the corrected text in the second column and highlight the key words you fixed.""\n" - r"Example:""\n" - r"Paragraph: How is you? Do you knows what is it?""\n" - r"| Original sentence | Corrected sentence |""\n" - r"| :--- | :--- |""\n" - r"| How **is** you? | How **are** you? |""\n" - r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n" - r"Below is a paragraph from an academic paper. " - r"You need to report all grammar and spelling mistakes as the example before." - + "\n\n", - "Suffix": r"", - "PreProcess": clear_line_break, # 预处理:清除换行符 - }, - "中译英": { - "Prefix": r"Please translate following sentence to English:" + "\n\n", - "Suffix": r"", - }, - "学术中英互译": { - "Prefix": r"I want you to act as a scientific English-Chinese translator, " + - r"I will provide you with some paragraphs in one language " + - r"and your task is to accurately and academically translate the paragraphs only into the other language. " + - r"Do not repeat the original provided paragraphs after translation. " + - r"You should use artificial intelligence tools, " + - r"such as natural language processing, and rhetorical knowledge " + - r"and experience about effective writing techniques to reply. 
" + - r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n", - "Suffix": "", - "Color": "secondary", - }, - "英译中": { - "Prefix": r"请翻译成中文:" + "\n\n", - "Suffix": r"", - }, - "找图片": { - "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," + - r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n", - "Suffix": r"", - }, - "解释代码": { - "Prefix": r"请解释以下代码:" + "\n```\n", - "Suffix": "\n```\n", - }, - } diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/__init__.py b/spaces/Amrrs/DragGan-Inversion/PTI/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py deleted file mode 100644 index 28f983c29edd071b32a50f18ac7b3f5c1bfdda88..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -model = dict( - bbox_head=dict( - _delete_=True, - type='FreeAnchorRetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=0.75))) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/sampling_result.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/sampling_result.py deleted file mode 100644 index 419a8e39a3c307a7cd9cfd0565a20037ded0d646..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/sampling_result.py +++ /dev/null @@ -1,152 +0,0 @@ -import torch - -from mmdet.utils import util_mixins - - -class SamplingResult(util_mixins.NiceRepr): - """Bbox sampling result. 
- - Example: - >>> # xdoctest: +IGNORE_WANT - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random(rng=10) - >>> print(f'self = {self}') - self = - """ - - def __init__(self, pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, - gt_flags): - self.pos_inds = pos_inds - self.neg_inds = neg_inds - self.pos_bboxes = bboxes[pos_inds] - self.neg_bboxes = bboxes[neg_inds] - self.pos_is_gt = gt_flags[pos_inds] - - self.num_gts = gt_bboxes.shape[0] - self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1 - - if gt_bboxes.numel() == 0: - # hack for index error case - assert self.pos_assigned_gt_inds.numel() == 0 - self.pos_gt_bboxes = torch.empty_like(gt_bboxes).view(-1, 4) - else: - if len(gt_bboxes.shape) < 2: - gt_bboxes = gt_bboxes.view(-1, 4) - - self.pos_gt_bboxes = gt_bboxes[self.pos_assigned_gt_inds, :] - - if assign_result.labels is not None: - self.pos_gt_labels = assign_result.labels[pos_inds] - else: - self.pos_gt_labels = None - - @property - def bboxes(self): - """torch.Tensor: concatenated positive and negative boxes""" - return torch.cat([self.pos_bboxes, self.neg_bboxes]) - - def to(self, device): - """Change the device of the data inplace. - - Example: - >>> self = SamplingResult.random() - >>> print(f'self = {self.to(None)}') - >>> # xdoctest: +REQUIRES(--gpu) - >>> print(f'self = {self.to(0)}') - """ - _dict = self.__dict__ - for key, value in _dict.items(): - if isinstance(value, torch.Tensor): - _dict[key] = value.to(device) - return self - - def __nice__(self): - data = self.info.copy() - data['pos_bboxes'] = data.pop('pos_bboxes').shape - data['neg_bboxes'] = data.pop('neg_bboxes').shape - parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())] - body = ' ' + ',\n '.join(parts) - return '{\n' + body + '\n}' - - @property - def info(self): - """Returns a dictionary of info about the object.""" - return { - 'pos_inds': self.pos_inds, - 'neg_inds': self.neg_inds, - 'pos_bboxes': self.pos_bboxes, - 'neg_bboxes': self.neg_bboxes, - 'pos_is_gt': self.pos_is_gt, - 'num_gts': self.num_gts, - 'pos_assigned_gt_inds': self.pos_assigned_gt_inds, - } - - @classmethod - def random(cls, rng=None, **kwargs): - """ - Args: - rng (None | int | numpy.random.RandomState): seed or state. - kwargs (keyword arguments): - - num_preds: number of predicted boxes - - num_gts: number of true boxes - - p_ignore (float): probability of a predicted box assinged to \ - an ignored truth. - - p_assigned (float): probability of a predicted box not being \ - assigned. - - p_use_label (float | bool): with labels or not. - - Returns: - :obj:`SamplingResult`: Randomly generated sampling result. - - Example: - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random() - >>> print(self.__dict__) - """ - from mmdet.core.bbox.samplers.random_sampler import RandomSampler - from mmdet.core.bbox.assigners.assign_result import AssignResult - from mmdet.core.bbox import demodata - rng = demodata.ensure_rng(rng) - - # make probabalistic? 
- num = 32 - pos_fraction = 0.5 - neg_pos_ub = -1 - - assign_result = AssignResult.random(rng=rng, **kwargs) - - # Note we could just compute an assignment - bboxes = demodata.random_boxes(assign_result.num_preds, rng=rng) - gt_bboxes = demodata.random_boxes(assign_result.num_gts, rng=rng) - - if rng.rand() > 0.2: - # sometimes algorithms squeeze their data, be robust to that - gt_bboxes = gt_bboxes.squeeze() - bboxes = bboxes.squeeze() - - if assign_result.labels is None: - gt_labels = None - else: - gt_labels = None # todo - - if gt_labels is None: - add_gt_as_proposals = False - else: - add_gt_as_proposals = True # make probabalistic? - - sampler = RandomSampler( - num, - pos_fraction, - neg_pos_ub=neg_pos_ub, - add_gt_as_proposals=add_gt_as_proposals, - rng=rng) - self = sampler.sample(assign_result, bboxes, gt_bboxes, gt_labels) - return self diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 6b4cc571294fa45b4442c2bfeb9fda13a14fc5c2..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fcn_r50-d8_769x769_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnishKumbhar/DogDiseasePredictor/app.py b/spaces/AnishKumbhar/DogDiseasePredictor/app.py deleted file mode 100644 index de74067280d49b869258ce1176c9694ab16b5ef3..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/DogDiseasePredictor/app.py +++ /dev/null @@ -1,80 +0,0 @@ -import torch -import fastapi -import numpy as np -from PIL import Image - -class TorchTensor(torch.Tensor): - pass - -class Prediction: - prediction: TorchTensor - -app = fastapi.FastAPI(docs_url="/") -from transformers import ViTForImageClassification - -# Define the number of classes in your custom dataset -num_classes = 20 - -# Initialize the ViTForImageClassification model -model = ViTForImageClassification.from_pretrained( - 'google/vit-base-patch16-224-in21k', - num_labels=num_classes # Specify the number of classes -) - -class_names = [ - "Acral Lick Dermatitis", - "Acute moist dermatitis", - "Canine atopic dermatitis", - "Cherry Eye", - "Ear infections", - "External Parasites", - "Folliculitis", - "Healthy", - "Leishmaniasis", - "Lupus", - "Nuclear sclerosis", - "Otitis externa", - "Pruritus", - "Pyoderma", - "Rabies", - "Ringworm", - "Sarcoptic Mange", - "Sebaceous adenitis", - "Seborrhea", - "Skin tumor" -] - -model.load_state_dict(torch.load('best_model.pth', map_location='cpu')) -# Define a function to preprocess the input image -def preprocess_input(input: fastapi.UploadFile): - image = Image.open(input.file) - image = image.resize((224, 224)).convert("RGB") - input = np.array(image) - input = np.transpose(input, (2, 0, 1)) - input = torch.from_numpy(input).float() - input = input.unsqueeze(0) - return input - -# Define an endpoint to make predictions -@app.post("/predict") -async def predict_endpoint(input:fastapi.UploadFile): - """Make a prediction on an image uploaded by the user.""" - - # Preprocess the input image - input = preprocess_input(input) - - # Make a prediction - prediction = model(input) - - - logits = prediction.logits - num_top_predictions = 3 - top_predictions = torch.topk(logits, k=num_top_predictions, dim=1) - top_indices = 
top_predictions.indices.squeeze().tolist() - top_probabilities = torch.softmax(top_predictions.values, dim=1).squeeze().tolist() - - # Return the top N class indices and their probabilities in JSON format - response_data = [{"class_index": class_names[idx], "probability": prob} for idx, prob in zip(top_indices, top_probabilities)] - return {"predictions": response_data} - - diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/registry.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/registry.py deleted file mode 100644 index 39eabc58db4b5954478a2ac1ab91cea5e45ab055..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/registry.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from annotator.uniformer.mmcv.utils import Registry - -CONV_LAYERS = Registry('conv layer') -NORM_LAYERS = Registry('norm layer') -ACTIVATION_LAYERS = Registry('activation layer') -PADDING_LAYERS = Registry('padding layer') -UPSAMPLE_LAYERS = Registry('upsample layer') -PLUGIN_LAYERS = Registry('plugin layer') - -DROPOUT_LAYERS = Registry('drop out layers') -POSITIONAL_ENCODING = Registry('position encoding') -ATTENTION = Registry('attention') -FEEDFORWARD_NETWORK = Registry('feed-forward Network') -TRANSFORMER_LAYER = Registry('transformerLayer') -TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence') diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/universaldetector.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/universaldetector.py deleted file mode 100644 index 30c441dc28ee327076a850b1d3c88a9a2c8f04f0..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/universaldetector.py +++ /dev/null @@ -1,362 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Universal charset detector code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 2001 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# Shy Shalom - original C code -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### -""" -Module containing the UniversalDetector detector class, which is the primary -class a user of ``chardet`` should use. 
- -:author: Mark Pilgrim (initial port to Python) -:author: Shy Shalom (original C code) -:author: Dan Blanchard (major refactoring for 3.0) -:author: Ian Cordasco -""" - - -import codecs -import logging -import re -from typing import List, Optional, Union - -from .charsetgroupprober import CharSetGroupProber -from .charsetprober import CharSetProber -from .enums import InputState, LanguageFilter, ProbingState -from .escprober import EscCharSetProber -from .latin1prober import Latin1Prober -from .macromanprober import MacRomanProber -from .mbcsgroupprober import MBCSGroupProber -from .resultdict import ResultDict -from .sbcsgroupprober import SBCSGroupProber -from .utf1632prober import UTF1632Prober - - -class UniversalDetector: - """ - The ``UniversalDetector`` class underlies the ``chardet.detect`` function - and coordinates all of the different charset probers. - - To get a ``dict`` containing an encoding and its confidence, you can simply - run: - - .. code:: - - u = UniversalDetector() - u.feed(some_bytes) - u.close() - detected = u.result - - """ - - MINIMUM_THRESHOLD = 0.20 - HIGH_BYTE_DETECTOR = re.compile(b"[\x80-\xFF]") - ESC_DETECTOR = re.compile(b"(\033|~{)") - WIN_BYTE_DETECTOR = re.compile(b"[\x80-\x9F]") - ISO_WIN_MAP = { - "iso-8859-1": "Windows-1252", - "iso-8859-2": "Windows-1250", - "iso-8859-5": "Windows-1251", - "iso-8859-6": "Windows-1256", - "iso-8859-7": "Windows-1253", - "iso-8859-8": "Windows-1255", - "iso-8859-9": "Windows-1254", - "iso-8859-13": "Windows-1257", - } - # Based on https://encoding.spec.whatwg.org/#names-and-labels - # but altered to match Python names for encodings and remove mappings - # that break tests. - LEGACY_MAP = { - "ascii": "Windows-1252", - "iso-8859-1": "Windows-1252", - "tis-620": "ISO-8859-11", - "iso-8859-9": "Windows-1254", - "gb2312": "GB18030", - "euc-kr": "CP949", - "utf-16le": "UTF-16", - } - - def __init__( - self, - lang_filter: LanguageFilter = LanguageFilter.ALL, - should_rename_legacy: bool = False, - ) -> None: - self._esc_charset_prober: Optional[EscCharSetProber] = None - self._utf1632_prober: Optional[UTF1632Prober] = None - self._charset_probers: List[CharSetProber] = [] - self.result: ResultDict = { - "encoding": None, - "confidence": 0.0, - "language": None, - } - self.done = False - self._got_data = False - self._input_state = InputState.PURE_ASCII - self._last_char = b"" - self.lang_filter = lang_filter - self.logger = logging.getLogger(__name__) - self._has_win_bytes = False - self.should_rename_legacy = should_rename_legacy - self.reset() - - @property - def input_state(self) -> int: - return self._input_state - - @property - def has_win_bytes(self) -> bool: - return self._has_win_bytes - - @property - def charset_probers(self) -> List[CharSetProber]: - return self._charset_probers - - def reset(self) -> None: - """ - Reset the UniversalDetector and all of its probers back to their - initial states. This is called by ``__init__``, so you only need to - call this directly in between analyses of different documents. 
- """ - self.result = {"encoding": None, "confidence": 0.0, "language": None} - self.done = False - self._got_data = False - self._has_win_bytes = False - self._input_state = InputState.PURE_ASCII - self._last_char = b"" - if self._esc_charset_prober: - self._esc_charset_prober.reset() - if self._utf1632_prober: - self._utf1632_prober.reset() - for prober in self._charset_probers: - prober.reset() - - def feed(self, byte_str: Union[bytes, bytearray]) -> None: - """ - Takes a chunk of a document and feeds it through all of the relevant - charset probers. - - After calling ``feed``, you can check the value of the ``done`` - attribute to see if you need to continue feeding the - ``UniversalDetector`` more data, or if it has made a prediction - (in the ``result`` attribute). - - .. note:: - You should always call ``close`` when you're done feeding in your - document if ``done`` is not already ``True``. - """ - if self.done: - return - - if not byte_str: - return - - if not isinstance(byte_str, bytearray): - byte_str = bytearray(byte_str) - - # First check for known BOMs, since these are guaranteed to be correct - if not self._got_data: - # If the data starts with BOM, we know it is UTF - if byte_str.startswith(codecs.BOM_UTF8): - # EF BB BF UTF-8 with BOM - self.result = { - "encoding": "UTF-8-SIG", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith((codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE)): - # FF FE 00 00 UTF-32, little-endian BOM - # 00 00 FE FF UTF-32, big-endian BOM - self.result = {"encoding": "UTF-32", "confidence": 1.0, "language": ""} - elif byte_str.startswith(b"\xFE\xFF\x00\x00"): - # FE FF 00 00 UCS-4, unusual octet order BOM (3412) - self.result = { - # TODO: This encoding is not supported by Python. Should remove? - "encoding": "X-ISO-10646-UCS-4-3412", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith(b"\x00\x00\xFF\xFE"): - # 00 00 FF FE UCS-4, unusual octet order BOM (2143) - self.result = { - # TODO: This encoding is not supported by Python. Should remove? 
- "encoding": "X-ISO-10646-UCS-4-2143", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith((codecs.BOM_LE, codecs.BOM_BE)): - # FF FE UTF-16, little endian BOM - # FE FF UTF-16, big endian BOM - self.result = {"encoding": "UTF-16", "confidence": 1.0, "language": ""} - - self._got_data = True - if self.result["encoding"] is not None: - self.done = True - return - - # If none of those matched and we've only see ASCII so far, check - # for high bytes and escape sequences - if self._input_state == InputState.PURE_ASCII: - if self.HIGH_BYTE_DETECTOR.search(byte_str): - self._input_state = InputState.HIGH_BYTE - elif ( - self._input_state == InputState.PURE_ASCII - and self.ESC_DETECTOR.search(self._last_char + byte_str) - ): - self._input_state = InputState.ESC_ASCII - - self._last_char = byte_str[-1:] - - # next we will look to see if it is appears to be either a UTF-16 or - # UTF-32 encoding - if not self._utf1632_prober: - self._utf1632_prober = UTF1632Prober() - - if self._utf1632_prober.state == ProbingState.DETECTING: - if self._utf1632_prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": self._utf1632_prober.charset_name, - "confidence": self._utf1632_prober.get_confidence(), - "language": "", - } - self.done = True - return - - # If we've seen escape sequences, use the EscCharSetProber, which - # uses a simple state machine to check for known escape sequences in - # HZ and ISO-2022 encodings, since those are the only encodings that - # use such sequences. - if self._input_state == InputState.ESC_ASCII: - if not self._esc_charset_prober: - self._esc_charset_prober = EscCharSetProber(self.lang_filter) - if self._esc_charset_prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": self._esc_charset_prober.charset_name, - "confidence": self._esc_charset_prober.get_confidence(), - "language": self._esc_charset_prober.language, - } - self.done = True - # If we've seen high bytes (i.e., those with values greater than 127), - # we need to do more complicated checks using all our multi-byte and - # single-byte probers that are left. The single-byte probers - # use character bigram distributions to determine the encoding, whereas - # the multi-byte probers use a combination of character unigram and - # bigram distributions. - elif self._input_state == InputState.HIGH_BYTE: - if not self._charset_probers: - self._charset_probers = [MBCSGroupProber(self.lang_filter)] - # If we're checking non-CJK encodings, use single-byte prober - if self.lang_filter & LanguageFilter.NON_CJK: - self._charset_probers.append(SBCSGroupProber()) - self._charset_probers.append(Latin1Prober()) - self._charset_probers.append(MacRomanProber()) - for prober in self._charset_probers: - if prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": prober.charset_name, - "confidence": prober.get_confidence(), - "language": prober.language, - } - self.done = True - break - if self.WIN_BYTE_DETECTOR.search(byte_str): - self._has_win_bytes = True - - def close(self) -> ResultDict: - """ - Stop analyzing the current document and come up with a final - prediction. - - :returns: The ``result`` attribute, a ``dict`` with the keys - `encoding`, `confidence`, and `language`. 
- """ - # Don't bother with checks if we're already done - if self.done: - return self.result - self.done = True - - if not self._got_data: - self.logger.debug("no data received!") - - # Default to ASCII if it is all we've seen so far - elif self._input_state == InputState.PURE_ASCII: - self.result = {"encoding": "ascii", "confidence": 1.0, "language": ""} - - # If we have seen non-ASCII, return the best that met MINIMUM_THRESHOLD - elif self._input_state == InputState.HIGH_BYTE: - prober_confidence = None - max_prober_confidence = 0.0 - max_prober = None - for prober in self._charset_probers: - if not prober: - continue - prober_confidence = prober.get_confidence() - if prober_confidence > max_prober_confidence: - max_prober_confidence = prober_confidence - max_prober = prober - if max_prober and (max_prober_confidence > self.MINIMUM_THRESHOLD): - charset_name = max_prober.charset_name - assert charset_name is not None - lower_charset_name = charset_name.lower() - confidence = max_prober.get_confidence() - # Use Windows encoding name instead of ISO-8859 if we saw any - # extra Windows-specific bytes - if lower_charset_name.startswith("iso-8859"): - if self._has_win_bytes: - charset_name = self.ISO_WIN_MAP.get( - lower_charset_name, charset_name - ) - # Rename legacy encodings with superset encodings if asked - if self.should_rename_legacy: - charset_name = self.LEGACY_MAP.get( - (charset_name or "").lower(), charset_name - ) - self.result = { - "encoding": charset_name, - "confidence": confidence, - "language": max_prober.language, - } - - # Log all prober confidences if none met MINIMUM_THRESHOLD - if self.logger.getEffectiveLevel() <= logging.DEBUG: - if self.result["encoding"] is None: - self.logger.debug("no probers hit minimum threshold") - for group_prober in self._charset_probers: - if not group_prober: - continue - if isinstance(group_prober, CharSetGroupProber): - for prober in group_prober.probers: - self.logger.debug( - "%s %s confidence = %s", - prober.charset_name, - prober.language, - prober.get_confidence(), - ) - else: - self.logger.debug( - "%s %s confidence = %s", - group_prober.charset_name, - group_prober.language, - group_prober.get_confidence(), - ) - return self.result diff --git a/spaces/Awesimo/jojogan/e4e/training/ranger.py b/spaces/Awesimo/jojogan/e4e/training/ranger.py deleted file mode 100644 index 3d63264dda6df0ee40cac143440f0b5f8977a9ad..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/training/ranger.py +++ /dev/null @@ -1,164 +0,0 @@ -# Ranger deep learning optimizer - RAdam + Lookahead + Gradient Centralization, combined into one optimizer. - -# https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer -# and/or -# https://github.com/lessw2020/Best-Deep-Learning-Optimizers - -# Ranger has now been used to capture 12 records on the FastAI leaderboard. - -# This version = 20.4.11 - -# Credits: -# Gradient Centralization --> https://arxiv.org/abs/2004.01461v2 (a new optimization technique for DNNs), github: https://github.com/Yonghongwei/Gradient-Centralization -# RAdam --> https://github.com/LiyuanLucasLiu/RAdam -# Lookahead --> rewritten by lessw2020, but big thanks to Github @LonePatient and @RWightman for ideas from their code. -# Lookahead paper --> MZhang,G Hinton https://arxiv.org/abs/1907.08610 - -# summary of changes: -# 4/11/20 - add gradient centralization option. Set new testing benchmark for accuracy with it, toggle with use_gc flag at init. 
-# full code integration with all updates at param level instead of group, moves slow weights into state dict (from generic weights), -# supports group learning rates (thanks @SHolderbach), fixes sporadic load from saved model issues. -# changes 8/31/19 - fix references to *self*.N_sma_threshold; -# changed eps to 1e-5 as better default than 1e-8. - -import math -import torch -from torch.optim.optimizer import Optimizer - - -class Ranger(Optimizer): - - def __init__(self, params, lr=1e-3, # lr - alpha=0.5, k=6, N_sma_threshhold=5, # Ranger options - betas=(.95, 0.999), eps=1e-5, weight_decay=0, # Adam options - use_gc=True, gc_conv_only=False - # Gradient centralization on or off, applied to conv layers only or conv + fc layers - ): - - # parameter checks - if not 0.0 <= alpha <= 1.0: - raise ValueError(f'Invalid slow update rate: {alpha}') - if not 1 <= k: - raise ValueError(f'Invalid lookahead steps: {k}') - if not lr > 0: - raise ValueError(f'Invalid Learning Rate: {lr}') - if not eps > 0: - raise ValueError(f'Invalid eps: {eps}') - - # parameter comments: - # beta1 (momentum) of .95 seems to work better than .90... - # N_sma_threshold of 5 seems better in testing than 4. - # In both cases, worth testing on your dataset (.90 vs .95, 4 vs 5) to make sure which works best for you. - - # prep defaults and init torch.optim base - defaults = dict(lr=lr, alpha=alpha, k=k, step_counter=0, betas=betas, N_sma_threshhold=N_sma_threshhold, - eps=eps, weight_decay=weight_decay) - super().__init__(params, defaults) - - # adjustable threshold - self.N_sma_threshhold = N_sma_threshhold - - # look ahead params - - self.alpha = alpha - self.k = k - - # radam buffer for state - self.radam_buffer = [[None, None, None] for ind in range(10)] - - # gc on or off - self.use_gc = use_gc - - # level of gradient centralization - self.gc_gradient_threshold = 3 if gc_conv_only else 1 - - def __setstate__(self, state): - super(Ranger, self).__setstate__(state) - - def step(self, closure=None): - loss = None - - # Evaluate averages and grad, update param tensors - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - - if grad.is_sparse: - raise RuntimeError('Ranger optimizer does not support sparse gradients') - - p_data_fp32 = p.data.float() - - state = self.state[p] # get state dict for this param - - if len(state) == 0: # if first time to run...init dictionary with our desired entries - # if self.first_run_check==0: - # self.first_run_check=1 - # print("Initializing slow buffer...should not see this at load from saved model!") - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - - # look ahead weight storage now in state dict - state['slow_buffer'] = torch.empty_like(p.data) - state['slow_buffer'].copy_(p.data) - - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32) - - # begin computations - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - # GC operation for Conv layers and FC layers - if grad.dim() > self.gc_gradient_threshold: - grad.add_(-grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)) - - state['step'] += 1 - - # compute variance mov avg - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - # compute mean moving avg - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - buffered = self.radam_buffer[int(state['step'] % 10)] - - if 
state['step'] == buffered[0]: - N_sma, step_size = buffered[1], buffered[2] - else: - buffered[0] = state['step'] - beta2_t = beta2 ** state['step'] - N_sma_max = 2 / (1 - beta2) - 1 - N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t) - buffered[1] = N_sma - if N_sma > self.N_sma_threshhold: - step_size = math.sqrt( - (1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / ( - N_sma_max - 2)) / (1 - beta1 ** state['step']) - else: - step_size = 1.0 / (1 - beta1 ** state['step']) - buffered[2] = step_size - - if group['weight_decay'] != 0: - p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32) - - # apply lr - if N_sma > self.N_sma_threshhold: - denom = exp_avg_sq.sqrt().add_(group['eps']) - p_data_fp32.addcdiv_(-step_size * group['lr'], exp_avg, denom) - else: - p_data_fp32.add_(-step_size * group['lr'], exp_avg) - - p.data.copy_(p_data_fp32) - - # integrated look ahead... - # we do it at the param level instead of group level - if state['step'] % group['k'] == 0: - slow_p = state['slow_buffer'] # get access to slow param tensor - slow_p.add_(self.alpha, p.data - slow_p) # (fast weights - slow weights) * alpha - p.data.copy_(slow_p) # copy interpolated weights to RAdam param tensor - - return loss \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/plain_train_net.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/plain_train_net.py deleted file mode 100644 index 4851a8398e128bdce1986feccf0f1cca4a12f704..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/plain_train_net.py +++ /dev/null @@ -1,223 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Detectron2 training script with a plain training loop. - -This script reads a given config file and runs the training or evaluation. -It is an entry point that is able to train standard models in detectron2. - -In order to let one script support training of many models, -this script contains logic that are specific to these built-in models and therefore -may not be suitable for your own project. -For example, your research project perhaps only needs a single "evaluator". - -Therefore, we recommend you to use detectron2 as a library and take -this file as an example of how to use the library. -You may want to write your own script with your datasets and other customizations. - -Compared to "train_net.py", this script supports fewer default features. -It also includes fewer abstraction, therefore is easier to add custom logic. 
-""" - -import logging -import os -from collections import OrderedDict -import torch -from torch.nn.parallel import DistributedDataParallel - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer, PeriodicCheckpointer -from detectron2.config import get_cfg -from detectron2.data import ( - MetadataCatalog, - build_detection_test_loader, - build_detection_train_loader, -) -from detectron2.engine import default_argument_parser, default_setup, default_writers, launch -from detectron2.evaluation import ( - CityscapesInstanceEvaluator, - CityscapesSemSegEvaluator, - COCOEvaluator, - COCOPanopticEvaluator, - DatasetEvaluators, - LVISEvaluator, - PascalVOCDetectionEvaluator, - SemSegEvaluator, - inference_on_dataset, - print_csv_format, -) -from detectron2.modeling import build_model -from detectron2.solver import build_lr_scheduler, build_optimizer -from detectron2.utils.events import EventStorage - -logger = logging.getLogger("detectron2") - - -def get_evaluator(cfg, dataset_name, output_folder=None): - """ - Create evaluator(s) for a given dataset. - This uses the special metadata "evaluator_type" associated with each builtin dataset. - For your own dataset, you can simply create an evaluator manually in your - script and do not have to worry about the hacky if-else logic here. - """ - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - evaluator_list = [] - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - if evaluator_type in ["sem_seg", "coco_panoptic_seg"]: - evaluator_list.append( - SemSegEvaluator( - dataset_name, - distributed=True, - output_dir=output_folder, - ) - ) - if evaluator_type in ["coco", "coco_panoptic_seg"]: - evaluator_list.append(COCOEvaluator(dataset_name, output_dir=output_folder)) - if evaluator_type == "coco_panoptic_seg": - evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder)) - if evaluator_type == "cityscapes_instance": - assert ( - torch.cuda.device_count() > comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." - return CityscapesInstanceEvaluator(dataset_name) - if evaluator_type == "cityscapes_sem_seg": - assert ( - torch.cuda.device_count() > comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." 
- return CityscapesSemSegEvaluator(dataset_name) - if evaluator_type == "pascal_voc": - return PascalVOCDetectionEvaluator(dataset_name) - if evaluator_type == "lvis": - return LVISEvaluator(dataset_name, cfg, True, output_folder) - if len(evaluator_list) == 0: - raise NotImplementedError( - "no Evaluator for the dataset {} with the type {}".format(dataset_name, evaluator_type) - ) - if len(evaluator_list) == 1: - return evaluator_list[0] - return DatasetEvaluators(evaluator_list) - - -def do_test(cfg, model): - results = OrderedDict() - for dataset_name in cfg.DATASETS.TEST: - data_loader = build_detection_test_loader(cfg, dataset_name) - evaluator = get_evaluator( - cfg, dataset_name, os.path.join(cfg.OUTPUT_DIR, "inference", dataset_name) - ) - results_i = inference_on_dataset(model, data_loader, evaluator) - results[dataset_name] = results_i - if comm.is_main_process(): - logger.info("Evaluation results for {} in csv format:".format(dataset_name)) - print_csv_format(results_i) - if len(results) == 1: - results = list(results.values())[0] - return results - - -def do_train(cfg, model, resume=False): - model.train() - optimizer = build_optimizer(cfg, model) - scheduler = build_lr_scheduler(cfg, optimizer) - - checkpointer = DetectionCheckpointer( - model, cfg.OUTPUT_DIR, optimizer=optimizer, scheduler=scheduler - ) - start_iter = ( - checkpointer.resume_or_load(cfg.MODEL.WEIGHTS, resume=resume).get("iteration", -1) + 1 - ) - max_iter = cfg.SOLVER.MAX_ITER - - periodic_checkpointer = PeriodicCheckpointer( - checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD, max_iter=max_iter - ) - - writers = default_writers(cfg.OUTPUT_DIR, max_iter) if comm.is_main_process() else [] - - # compared to "train_net.py", we do not support accurate timing and - # precise BN here, because they are not trivial to implement in a small training loop - data_loader = build_detection_train_loader(cfg) - logger.info("Starting training from iteration {}".format(start_iter)) - with EventStorage(start_iter) as storage: - for data, iteration in zip(data_loader, range(start_iter, max_iter)): - storage.iter = iteration - - loss_dict = model(data) - losses = sum(loss_dict.values()) - assert torch.isfinite(losses).all(), loss_dict - - loss_dict_reduced = {k: v.item() for k, v in comm.reduce_dict(loss_dict).items()} - losses_reduced = sum(loss for loss in loss_dict_reduced.values()) - if comm.is_main_process(): - storage.put_scalars(total_loss=losses_reduced, **loss_dict_reduced) - - optimizer.zero_grad() - losses.backward() - optimizer.step() - storage.put_scalar("lr", optimizer.param_groups[0]["lr"], smoothing_hint=False) - scheduler.step() - - if ( - cfg.TEST.EVAL_PERIOD > 0 - and (iteration + 1) % cfg.TEST.EVAL_PERIOD == 0 - and iteration != max_iter - 1 - ): - do_test(cfg, model) - # Compared to "train_net.py", the test results are not dumped to EventStorage - comm.synchronize() - - if iteration - start_iter > 5 and ( - (iteration + 1) % 20 == 0 or iteration == max_iter - 1 - ): - for writer in writers: - writer.write() - periodic_checkpointer.step(iteration) - - -def setup(args): - """ - Create configs and perform basic setups. 
- """ - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup( - cfg, args - ) # if you don't like any of the default setup, write your own setup code - return cfg - - -def main(args): - cfg = setup(args) - - model = build_model(cfg) - logger.info("Model:\n{}".format(model)) - if args.eval_only: - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - return do_test(cfg, model) - - distributed = comm.get_world_size() > 1 - if distributed: - model = DistributedDataParallel( - model, device_ids=[comm.get_local_rank()], broadcast_buffers=False - ) - - do_train(cfg, model, resume=args.resume) - return do_test(cfg, model) - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/BaiyuS/Real-CUGAN-YZ/README.md b/spaces/BaiyuS/Real-CUGAN-YZ/README.md deleted file mode 100644 index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000 --- a/spaces/BaiyuS/Real-CUGAN-YZ/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Real CUGAN -emoji: 🐢 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: DianXian/Real-CUGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/utils.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/utils.py deleted file mode 100644 index 0fafe8793b0d539fa58dd024342250b24b6187a9..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/utils.py +++ /dev/null @@ -1,120 +0,0 @@ -import torch -import numpy as np -from tqdm import tqdm -import json - - -def load_data(file_name: str = "./lib/uvr5_pack/name_params.json") -> dict: - with open(file_name, "r") as f: - data = json.load(f) - - return data - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def inference(X_spec, device, model, aggressiveness, data): - """ - data : dic configs - """ - - def _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half=True - ): - model.eval() - with torch.no_grad(): - preds = [] - - iterations = [n_window] - - total_iterations = sum(iterations) - for i in tqdm(range(n_window)): - start = i * roi_size - X_mag_window = X_mag_pad[ - None, :, :, start : start + data["window_size"] - ] - X_mag_window = torch.from_numpy(X_mag_window) - if is_half: - X_mag_window = X_mag_window.half() - X_mag_window = X_mag_window.to(device) - - pred = model.predict(X_mag_window, aggressiveness) - - pred = pred.detach().cpu().numpy() - preds.append(pred[0]) - - pred = np.concatenate(preds, axis=2) - return pred - - def preprocess(X_spec): - X_mag = np.abs(X_spec) - X_phase = np.angle(X_spec) - - return X_mag, X_phase - - X_mag, X_phase = preprocess(X_spec) - - coef = X_mag.max() - X_mag_pre = X_mag / coef - - n_frame = X_mag_pre.shape[2] - pad_l, pad_r, roi_size = make_padding(n_frame, data["window_size"], model.offset) - n_window = int(np.ceil(n_frame / roi_size)) - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - if 
list(model.state_dict().values())[0].dtype == torch.float16: - is_half = True - else: - is_half = False - pred = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred = pred[:, :, :n_frame] - - if data["tta"]: - pad_l += roi_size // 2 - pad_r += roi_size // 2 - n_window += 1 - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - pred_tta = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred_tta = pred_tta[:, :, roi_size // 2 :] - pred_tta = pred_tta[:, :, :n_frame] - - return (pred + pred_tta) * 0.5 * coef, X_mag, np.exp(1.0j * X_phase) - else: - return pred * coef, X_mag, np.exp(1.0j * X_phase) - - -def _get_name_params(model_path, model_hash): - data = load_data() - flag = False - ModelName = model_path - for type in list(data): - for model in list(data[type][0]): - for i in range(len(data[type][0][model])): - if str(data[type][0][model][i]["hash_name"]) == model_hash: - flag = True - elif str(data[type][0][model][i]["hash_name"]) in ModelName: - flag = True - - if flag: - model_params_auto = data[type][0][model][i]["model_params"] - param_name_auto = data[type][0][model][i]["param_name"] - if type == "equivalent": - return param_name_auto, model_params_auto - else: - flag = False - return param_name_auto, model_params_auto diff --git a/spaces/Benson/text-generation/Examples/Adivina La Pelcula.md b/spaces/Benson/text-generation/Examples/Adivina La Pelcula.md deleted file mode 100644 index e5b9d85e9473d2e4a03936217cbb5872c9a90450..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Adivina La Pelcula.md +++ /dev/null @@ -1,50 +0,0 @@ -
    -

Guess the Movie: A Fun and Challenging Game for Movie Lovers

Do you love movies? Do you think you can recognize any film from a single scene, a poster, or an actor? If so, you should try Guess the Movie, a fun and challenging game that tests your film knowledge and memory. Guess the Movie is a game in which you have to guess a film's title from different clues, such as images, sounds, genres, directors, and actors. You can play it online or offline, alone or with friends, and have a great time while learning new facts and trivia about movies.

Guess the Movie

Download Zip ✔✔✔ https://bltlly.com/2v6MXS

How to Play Guess the Movie

The rules of Guess the Movie are simple. You are given a clue about a film, such as an image, a sound clip, a genre, a director, or an actor. You have to guess the film's title as quickly as possible. You can type your answer or choose from several options. Depending on the type of clue and the difficulty level, you earn more or fewer points for each correct guess. You can also skip a clue you don't know, or use hints if you need help. The game ends when you run out of time or clues.

Types of guesses

There are different types of guesses you can make in Guess the Movie. Some of them are:

• Title: guess the film's full title.
• Genre: guess the film's genre or category.
• Actor: guess the name of an actor who starred in the film.
• Director: guess the name of the director who made the film.
• Year: guess the year the film was released.
• Quote: guess which film a famous quote comes from.

Scoring system

The scoring system in Guess the Movie depends on several factors, such as:

• The type of clue: some clues are easier than others and award fewer points.
• The number of skips: the more skips you use, the fewer points you earn.

The game shows your score after each guess and at the end of the round. You can compare your score with other players' and see how well you did.
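As a rough illustration of how a scoring rule like this could be wired up, here is a small Python sketch. The point values, clue weights, and penalties below are invented for the example; they are not taken from any particular version of the game.

```python
# Hypothetical scoring sketch for a single Guess the Movie round.
# All weights and penalties are made up for illustration.

CLUE_BASE_POINTS = {
    "title": 10,   # treated as the hardest clue type here, so it pays the most
    "quote": 8,
    "year": 6,
    "director": 5,
    "actor": 4,
    "genre": 2,    # treated as the easiest clue type, so it pays the least
}

def score_guess(clue_type: str, correct: bool, skips_used: int, hints_used: int) -> int:
    """Return the points earned for one guess."""
    if not correct:
        return 0
    base = CLUE_BASE_POINTS.get(clue_type, 1)
    # Each skip or hint eats into the base score, but a correct guess never drops below 1 point.
    penalty = 2 * skips_used + hints_used
    return max(1, base - penalty)

print(score_guess("quote", correct=True, skips_used=1, hints_used=2))  # prints 4
```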

    -

Tips and tricks

Guessing movies can be tricky, but a few tips and tricks can help you improve your skill and speed. Here are some of them:

• Pay attention to details: sometimes a small detail gives the film away, such as a logo, a prop, a costume, or a location.
• Use your memory: try to recall whether you have seen or heard of the film before, and what you remember about it.
• Use your logic: try to deduce the film from the clues, using common sense and reasoning.
• Use your knowledge: draw on what you know about movies, such as genres, directors, actors, and awards.
• Use your creativity: think outside the box and consider different possibilities.

Benefits of playing Guess the Movie

Playing Guess the Movie is not only fun and challenging, it is also good for your brain and mind. Some of the benefits are:

• It improves your memory: guessing movies engages your long-term and short-term memory and strengthens your ability to recall.
• It expands your knowledge: you learn new facts and trivia about movies, such as titles, genres, directors, and actors.
• It boosts your creativity: it stimulates your imagination and divergent-thinking skills.
• It reduces stress: it relaxes your mind while you have fun.
• It increases social interaction: you can play with friends and family and have a good conversation.

Examples of movie-guessing games

Moviedle

Moviedle is an online game that shows you a one-second version of a movie and asks you to guess the title. You can choose from different genres and difficulty levels. You can also create your own Moviedles and share them with others. Moviedle is a quick and fun way to test your movie-recognition skills.

Framed

Framed is an online game that shows you six frames from a movie and asks you to guess the title. You can choose from different categories and difficulty levels. You can also create your own Framed puzzles and share them with others. Framed is a challenging and addictive way to test your movie-observation skills.

CineNerdle

CineNerdle is an online game that shows you a grid of tiles from a movie poster and asks you to guess the title. You can choose from different genres and difficulty levels. You can also create your own CineNerdles and share them with others. CineNerdle is a clever and fun way to test your movie knowledge.

Charades

Charades is an offline game in which you act out a movie title without speaking. You can play with two or more people, in teams or individually. You can choose from different genres and difficulty levels. You can also make your own Charades cards and use them in the game. Charades is a classic and entertaining way to test your movie-acting skills.

    -

Conclusion

Guess the Movie is a fun and challenging game that tests your film knowledge and memory. It is easy to play online or offline, alone or with friends, and it has many benefits for the brain and mind, such as improving memory, knowledge, and creativity. There are also many games built around movie guessing, such as Moviedle, Framed, CineNerdle, and Charades. If you love movies and want to have a good time while learning new facts and trivia about them, try Guess the Movie today!

Frequently asked questions

1. What is Guess the Movie?
   Guess the Movie is a game where you have to guess a film's title from different clues, such as images, sounds, genres, directors, and actors.
2. How do you play Guess the Movie?
   You can play online or offline, alone or with friends. You are given a clue about a film and have to guess the title as quickly as possible. You can type your answer or choose from several options, and you can skip a clue or use hints if you need help.
3. What are the benefits of playing Guess the Movie?
   It is good for the brain and mind: it improves memory, knowledge, and creativity, reduces stress, and increases social interaction.
4. What are some examples of movie-guessing games?
   Some online examples are Moviedle, Framed, and CineNerdle. Some offline examples are Charades and Pictionary.
5. Where can I find more information about Guess the Movie?
   You can find more information about Guess the Movie on the following websites: [Moviedle], [Framed], [CineNerdle], etc.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Bloons Td 6 Descarga Uptodown.md b/spaces/Benson/text-generation/Examples/Bloons Td 6 Descarga Uptodown.md deleted file mode 100644 index f4a042eb8747a71ba67d10903b9ee0efe58474b6..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bloons Td 6 Descarga Uptodown.md +++ /dev/null @@ -1,90 +0,0 @@ - -

Bloons TD 6: A Fun and Challenging Tower Defense Game

If you are looking for a tower defense game that will keep you entertained for hours, you may want to check out Bloons TD 6. It is the latest installment in the popular Bloons series, which has been around since 2007. In this game, you use a variety of monkey towers to pop balloons (or bloons) that are trying to invade your territory. Sounds simple, right? Well, not quite. Bloons come in different colors, shapes, and sizes, each with its own abilities and resistances. Some bloons can fly, some are camouflaged, some regenerate, some split into smaller bloons, and some even shield other bloons. You will need strategy, skill, and creativity to overcome these challenges.

bloons td 6 download uptodown

Download ✪✪✪ https://bltlly.com/2v6Jiz

Bloons TD 6 is not just a simple tower defense game. It is a massive 3D tower defense game with plenty of features and content to keep you engaged. You can choose from different modes, maps, towers, heroes, upgrades, events, quests, trophies, challenges, odysseys, and more. You can also play with up to three other players in co-op mode, or compete against other teams in Contested Territory. Whether you are a casual player or a hardcore fan, you will find something to enjoy in Bloons TD 6.

    -

What is Bloons TD 6?

Bloons TD 6 is a tower defense game developed and published by Ninja Kiwi, a New Zealand-based studio that specializes in fun and addictive games. The game was released on June 13, 2018 for Android and iOS devices, and was later brought to Steam for Windows and Macintosh. It is part of the Bloons franchise, which includes other games such as Bloons Monkey City, Bloons Adventure Time TD, and Bloons Super Monkey 2.

The gameplay of Bloons TD 6 is similar to other tower defense games. You have a limited amount of money to spend on monkey towers, which you can place in designated spots on the map. Each tower has a range and an attack type, and can target different kinds of bloons. You can also upgrade your towers to make them more effective, but upgrades cost more money. The goal is to stop the bloons from reaching the end of the track, where they reduce your lives. If you lose all your lives, you lose the game.

There are different modes to choose from in Bloons TD 6, each with its own rules and challenges. Some of the modes are:

• Standard: the basic mode, where you choose the difficulty and the map.
• Primary Only: you can only use Primary monkey towers, such as Dart Monkeys, Boomerang Monkeys, Bomb Shooters, Tack Shooters, Ice Monkeys, and Glue Gunners.
• Military Only: you can only use Military monkey towers, such as Sniper Monkeys, Monkey Subs, Monkey Buccaneers, Monkey Aces, Heli Pilots, and Mortar Monkeys.
• Magic Only: you can only use Magic monkey towers, such as Druids, Alchemists, Super Monkeys, Ninja Monkeys, and Wizard Monkeys.
• Deflation: you start with a fixed amount of money and no income, and have to survive as long as possible with what you have.
• Apopalypse: the bloons come in faster and faster waves with no breaks, and you have to survive as long as possible.
• Impoppable: the hardest mode, where the bloons are much tougher and the towers are more expensive. You need your best strategy and skills to win.

What are the main features of Bloons TD 6?

3D graphics and line-of-sight mechanics

23 monkey towers with 3 upgrade paths each

Bloons TD 6 offers a wide variety of monkey towers, each with its own strengths and weaknesses. There are four tower categories: Primary, Military, Magic, and Support. Each tower has three upgrade paths that unlock different abilities and effects; for example, the Dart Monkey can be upgraded into a crossbow monkey, a Spike-o-pult, or a giant monkey. You can mix and match two upgrade paths per tower, but you can only take one fifth-tier upgrade per path. Fifth-tier upgrades are very powerful and expensive, and can change a game dramatically.

14 heroes with unique abilities and personalities

Bloons TD 6 also features heroes, special monkeys with unique abilities and personalities. Heroes level up automatically during a game, unlocking new abilities that help you in various ways. For example, Quincy is an archer hero who can fire multiple arrows at once, Gwendolin is a fire mage who can set bloons alight, and Striker Jones is a bomb expert who can stun MOAB-class bloons. You can only use one hero per game, so choose wisely.

Regular updates with new content and events

Bloons TD 6 is constantly updated by Ninja Kiwi with new maps, modes, towers, heroes, skins, quests, trophies, achievements, challenges, odysseys, and more. It also features seasonal events, such as Halloween, Christmas, and Easter, which offer special rewards and challenges. You can also take part in daily and weekly races, where you compete with other players to complete a map as quickly as possible. There is always something new and exciting to do in Bloons TD 6.

    -

How to download Bloons TD 6 from Uptodown?

1. Go to the Bloons TD 6 page on uptodown.com.
2. Click the green "Download" button and wait for the APK file to download.
3. Once the download is complete, open the APK file and tap "Install". You may need to allow installing apps from unknown sources in your device settings.
4. Wait for the installation to finish, then launch the game from the app drawer or home screen.
5. Enjoy popping bloons!

Why download Bloons TD 6 from Uptodown?

There are several reasons why you might want to download Bloons TD 6 from Uptodown rather than from other sources, such as the Google Play Store or Steam. Some of them are:

• You can get the game for free, without paying money or watching ads.
• You can get the latest version of the game without waiting for updates or patches.
• You can access all of the game's features and content, without restrictions or limitations.
• You can play the game offline, without needing an internet connection or a Google account.

However, there are also some drawbacks to downloading Bloons TD 6 from Uptodown, such as:

• You may not be able to play online or in co-op mode with players who downloaded the game from other sources.
• You may not be able to sync your progress or achievements with other devices or platforms.
• You may not be able to receive official support or feedback from Ninja Kiwi if you run into problems or bugs.
• You risk violating the terms and conditions of Ninja Kiwi or the Google Play Store by downloading an unofficial version of the game.

Tips and tricks for playing Bloons TD 6

Bloons TD 6 demands a lot of strategy and skill, especially on the higher difficulties and modes. Here are some tips and tricks that can help you improve your performance and have more fun:

• Use the right towers for the right bloons. Different bloons have different properties and resistances, so you need appropriate towers to deal with them (see the small sketch after this list). For example, camo bloons can only be targeted by towers with camo detection, lead bloons can only be popped by explosive or similarly armor-breaking attacks, and purple bloons are immune to magic and fire attacks.
• Use the Monkey Knowledge system. Monkey Knowledge lets you unlock permanent upgrades and bonuses for your towers, heroes, and powers. You earn Monkey Knowledge points by leveling up your account, completing achievements, or taking part in events, and you can spend them across the branches of the Monkey Knowledge tree, such as Primary, Military, Magic, Support, Heroes, and Powers.
• Use Sandbox mode. Sandbox mode lets you try any combination of towers, upgrades, heroes, bloons, and settings on any map. Use it to experiment with strategies, learn how towers and bloons interact, or simply have fun with unlimited money and lives.
• Watch videos and guides from other players. Bloons TD 6 has a large and active community that shares tips, tricks, guides, and videos on platforms such as YouTube, Reddit, Discord, and Steam. You can learn a lot from watching how other players approach the game, especially on the more challenging modes and maps.
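As a rough illustration of the first tip, here is a tiny Python sketch of the kind of lookup table a player or a fan-made helper script might keep. The entries only cover the examples mentioned above and are simplified; they are not a complete or authoritative list of Bloons TD 6 interactions.

```python
# Simplified counter table for the bloon properties mentioned above.
# Illustrative only; real BTD6 interactions are more detailed than this.

BLOON_COUNTERS = {
    "camo":   "needs a tower (or support) with camo detection to be targeted",
    "lead":   "needs explosive, fire, or similarly armor-breaking attacks",
    "purple": "resists magic/energy and fire attacks, so use physical damage",
}

def suggest_counter(bloon_property: str) -> str:
    """Look up a counter hint for a bloon property, if this sketch knows one."""
    return BLOON_COUNTERS.get(bloon_property, "no special requirement in this sketch")

if __name__ == "__main__":
    for prop in ("camo", "lead", "purple", "red"):
        print(f"{prop}: {suggest_counter(prop)}")
```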
    • - -
    -

Bloons TD 6 review

Bloons TD 6 is a game I personally enjoy a lot. It combines the classic tower defense genre with colorful graphics, humorous animations, and addictive gameplay. It has plenty of content and replay value thanks to its regular updates and events, and it appeals to both casual players and hardcore fans thanks to its many modes and difficulties.

However, Bloons TD 6 is not a perfect game. It can become repetitive and dull after a while, especially if you play the same maps and modes over and over. It can be frustrating and unfair at times, especially on the higher difficulties and modes where the bloons are extremely tough and fast. And it can get expensive if you want to unlock everything quickly or use premium items and features.

Overall, I would rate Bloons TD 6 an 8/10. It has its flaws, but also its strengths, and I would recommend it to anyone who likes tower defense games or simply wants to have fun popping bloons.

    -

Conclusion

Bloons TD 6 is a fun and challenging tower defense game with plenty of features and content to keep you entertained for hours: different modes, maps, towers, heroes, upgrades, events, quests, trophies, challenges, and odysseys, plus co-op play with up to three other players and competitive Contested Territory. Whether you are a casual player or a hardcore fan, there is something here to enjoy.

However, you should also be aware of the drawbacks of downloading Bloons TD 6 from Uptodown: you may not be able to play online or co-op with players who got the game from other sources, sync your progress or achievements across devices or platforms, or receive official support from Ninja Kiwi, and you risk violating the terms and conditions of Ninja Kiwi or the Google Play Store by using an unofficial version of the game.

Therefore, weigh the pros and cons of downloading Bloons TD 6 from Uptodown before making your decision. You should also respect the rights and efforts of Ninja Kiwi as the game's developer and publisher, and consider supporting them by buying the game through its official channels.

Bloons TD 6 also rewards strategy and skill, especially on the higher difficulties and modes. You can use the Monkey Knowledge system to unlock permanent upgrades, use Sandbox mode to test combinations of towers, upgrades, heroes, and bloons, and watch videos and guides from other players, while having fun and being creative with the game's possibilities.

Personally, I enjoy Bloons TD 6 a lot and would rate it an 8/10: it has its flaws, but its colorful presentation, regular updates, and addictive gameplay make it easy to recommend.

Frequently asked questions

Here are some frequently asked questions about Bloons TD 6:

• Q: How much does Bloons TD 6 cost?
• A: Bloons TD 6 costs $4.99 on the Google Play Store and Steam, but it can be downloaded for free from Uptodown.
• Q: Is Bloons TD 6 compatible with my device?
• A: Bloons TD 6 requires Android 5.0 or higher on Android devices, iOS 11.0 or later on iOS devices, and Windows 7 or higher on PC.
• Q: How do I save my progress in Bloons TD 6?
• A: Bloons TD 6 automatically saves your progress whenever you complete a map or quit the game. You can also save manually by tapping the menu button and then the save button.
• Q: How do I restore my progress in Bloons TD 6?
• A: If you downloaded the game from the Google Play Store or Steam, you can restore your progress by signing in with your Google or Steam account, respectively. If you downloaded it from Uptodown, you can restore your progress by copying your save file from your old device to your new one.
• Q: How can I contact Ninja Kiwi for support or feedback?
• A: If you downloaded the game from the Google Play Store or Steam, you can contact Ninja Kiwi by emailing support@ninjakiwi.com or visiting their website at https://ninjakiwi.com/support. If you downloaded it from Uptodown, you may not be able to receive official support or feedback from Ninja Kiwi.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/universaldetector.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/universaldetector.py deleted file mode 100644 index 30c441dc28ee327076a850b1d3c88a9a2c8f04f0..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/universaldetector.py +++ /dev/null @@ -1,362 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Universal charset detector code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 2001 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# Shy Shalom - original C code -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### -""" -Module containing the UniversalDetector detector class, which is the primary -class a user of ``chardet`` should use. - -:author: Mark Pilgrim (initial port to Python) -:author: Shy Shalom (original C code) -:author: Dan Blanchard (major refactoring for 3.0) -:author: Ian Cordasco -""" - - -import codecs -import logging -import re -from typing import List, Optional, Union - -from .charsetgroupprober import CharSetGroupProber -from .charsetprober import CharSetProber -from .enums import InputState, LanguageFilter, ProbingState -from .escprober import EscCharSetProber -from .latin1prober import Latin1Prober -from .macromanprober import MacRomanProber -from .mbcsgroupprober import MBCSGroupProber -from .resultdict import ResultDict -from .sbcsgroupprober import SBCSGroupProber -from .utf1632prober import UTF1632Prober - - -class UniversalDetector: - """ - The ``UniversalDetector`` class underlies the ``chardet.detect`` function - and coordinates all of the different charset probers. - - To get a ``dict`` containing an encoding and its confidence, you can simply - run: - - .. code:: - - u = UniversalDetector() - u.feed(some_bytes) - u.close() - detected = u.result - - """ - - MINIMUM_THRESHOLD = 0.20 - HIGH_BYTE_DETECTOR = re.compile(b"[\x80-\xFF]") - ESC_DETECTOR = re.compile(b"(\033|~{)") - WIN_BYTE_DETECTOR = re.compile(b"[\x80-\x9F]") - ISO_WIN_MAP = { - "iso-8859-1": "Windows-1252", - "iso-8859-2": "Windows-1250", - "iso-8859-5": "Windows-1251", - "iso-8859-6": "Windows-1256", - "iso-8859-7": "Windows-1253", - "iso-8859-8": "Windows-1255", - "iso-8859-9": "Windows-1254", - "iso-8859-13": "Windows-1257", - } - # Based on https://encoding.spec.whatwg.org/#names-and-labels - # but altered to match Python names for encodings and remove mappings - # that break tests. 
- LEGACY_MAP = { - "ascii": "Windows-1252", - "iso-8859-1": "Windows-1252", - "tis-620": "ISO-8859-11", - "iso-8859-9": "Windows-1254", - "gb2312": "GB18030", - "euc-kr": "CP949", - "utf-16le": "UTF-16", - } - - def __init__( - self, - lang_filter: LanguageFilter = LanguageFilter.ALL, - should_rename_legacy: bool = False, - ) -> None: - self._esc_charset_prober: Optional[EscCharSetProber] = None - self._utf1632_prober: Optional[UTF1632Prober] = None - self._charset_probers: List[CharSetProber] = [] - self.result: ResultDict = { - "encoding": None, - "confidence": 0.0, - "language": None, - } - self.done = False - self._got_data = False - self._input_state = InputState.PURE_ASCII - self._last_char = b"" - self.lang_filter = lang_filter - self.logger = logging.getLogger(__name__) - self._has_win_bytes = False - self.should_rename_legacy = should_rename_legacy - self.reset() - - @property - def input_state(self) -> int: - return self._input_state - - @property - def has_win_bytes(self) -> bool: - return self._has_win_bytes - - @property - def charset_probers(self) -> List[CharSetProber]: - return self._charset_probers - - def reset(self) -> None: - """ - Reset the UniversalDetector and all of its probers back to their - initial states. This is called by ``__init__``, so you only need to - call this directly in between analyses of different documents. - """ - self.result = {"encoding": None, "confidence": 0.0, "language": None} - self.done = False - self._got_data = False - self._has_win_bytes = False - self._input_state = InputState.PURE_ASCII - self._last_char = b"" - if self._esc_charset_prober: - self._esc_charset_prober.reset() - if self._utf1632_prober: - self._utf1632_prober.reset() - for prober in self._charset_probers: - prober.reset() - - def feed(self, byte_str: Union[bytes, bytearray]) -> None: - """ - Takes a chunk of a document and feeds it through all of the relevant - charset probers. - - After calling ``feed``, you can check the value of the ``done`` - attribute to see if you need to continue feeding the - ``UniversalDetector`` more data, or if it has made a prediction - (in the ``result`` attribute). - - .. note:: - You should always call ``close`` when you're done feeding in your - document if ``done`` is not already ``True``. - """ - if self.done: - return - - if not byte_str: - return - - if not isinstance(byte_str, bytearray): - byte_str = bytearray(byte_str) - - # First check for known BOMs, since these are guaranteed to be correct - if not self._got_data: - # If the data starts with BOM, we know it is UTF - if byte_str.startswith(codecs.BOM_UTF8): - # EF BB BF UTF-8 with BOM - self.result = { - "encoding": "UTF-8-SIG", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith((codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE)): - # FF FE 00 00 UTF-32, little-endian BOM - # 00 00 FE FF UTF-32, big-endian BOM - self.result = {"encoding": "UTF-32", "confidence": 1.0, "language": ""} - elif byte_str.startswith(b"\xFE\xFF\x00\x00"): - # FE FF 00 00 UCS-4, unusual octet order BOM (3412) - self.result = { - # TODO: This encoding is not supported by Python. Should remove? - "encoding": "X-ISO-10646-UCS-4-3412", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith(b"\x00\x00\xFF\xFE"): - # 00 00 FF FE UCS-4, unusual octet order BOM (2143) - self.result = { - # TODO: This encoding is not supported by Python. Should remove? 
- "encoding": "X-ISO-10646-UCS-4-2143", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith((codecs.BOM_LE, codecs.BOM_BE)): - # FF FE UTF-16, little endian BOM - # FE FF UTF-16, big endian BOM - self.result = {"encoding": "UTF-16", "confidence": 1.0, "language": ""} - - self._got_data = True - if self.result["encoding"] is not None: - self.done = True - return - - # If none of those matched and we've only see ASCII so far, check - # for high bytes and escape sequences - if self._input_state == InputState.PURE_ASCII: - if self.HIGH_BYTE_DETECTOR.search(byte_str): - self._input_state = InputState.HIGH_BYTE - elif ( - self._input_state == InputState.PURE_ASCII - and self.ESC_DETECTOR.search(self._last_char + byte_str) - ): - self._input_state = InputState.ESC_ASCII - - self._last_char = byte_str[-1:] - - # next we will look to see if it is appears to be either a UTF-16 or - # UTF-32 encoding - if not self._utf1632_prober: - self._utf1632_prober = UTF1632Prober() - - if self._utf1632_prober.state == ProbingState.DETECTING: - if self._utf1632_prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": self._utf1632_prober.charset_name, - "confidence": self._utf1632_prober.get_confidence(), - "language": "", - } - self.done = True - return - - # If we've seen escape sequences, use the EscCharSetProber, which - # uses a simple state machine to check for known escape sequences in - # HZ and ISO-2022 encodings, since those are the only encodings that - # use such sequences. - if self._input_state == InputState.ESC_ASCII: - if not self._esc_charset_prober: - self._esc_charset_prober = EscCharSetProber(self.lang_filter) - if self._esc_charset_prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": self._esc_charset_prober.charset_name, - "confidence": self._esc_charset_prober.get_confidence(), - "language": self._esc_charset_prober.language, - } - self.done = True - # If we've seen high bytes (i.e., those with values greater than 127), - # we need to do more complicated checks using all our multi-byte and - # single-byte probers that are left. The single-byte probers - # use character bigram distributions to determine the encoding, whereas - # the multi-byte probers use a combination of character unigram and - # bigram distributions. - elif self._input_state == InputState.HIGH_BYTE: - if not self._charset_probers: - self._charset_probers = [MBCSGroupProber(self.lang_filter)] - # If we're checking non-CJK encodings, use single-byte prober - if self.lang_filter & LanguageFilter.NON_CJK: - self._charset_probers.append(SBCSGroupProber()) - self._charset_probers.append(Latin1Prober()) - self._charset_probers.append(MacRomanProber()) - for prober in self._charset_probers: - if prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": prober.charset_name, - "confidence": prober.get_confidence(), - "language": prober.language, - } - self.done = True - break - if self.WIN_BYTE_DETECTOR.search(byte_str): - self._has_win_bytes = True - - def close(self) -> ResultDict: - """ - Stop analyzing the current document and come up with a final - prediction. - - :returns: The ``result`` attribute, a ``dict`` with the keys - `encoding`, `confidence`, and `language`. 
- """ - # Don't bother with checks if we're already done - if self.done: - return self.result - self.done = True - - if not self._got_data: - self.logger.debug("no data received!") - - # Default to ASCII if it is all we've seen so far - elif self._input_state == InputState.PURE_ASCII: - self.result = {"encoding": "ascii", "confidence": 1.0, "language": ""} - - # If we have seen non-ASCII, return the best that met MINIMUM_THRESHOLD - elif self._input_state == InputState.HIGH_BYTE: - prober_confidence = None - max_prober_confidence = 0.0 - max_prober = None - for prober in self._charset_probers: - if not prober: - continue - prober_confidence = prober.get_confidence() - if prober_confidence > max_prober_confidence: - max_prober_confidence = prober_confidence - max_prober = prober - if max_prober and (max_prober_confidence > self.MINIMUM_THRESHOLD): - charset_name = max_prober.charset_name - assert charset_name is not None - lower_charset_name = charset_name.lower() - confidence = max_prober.get_confidence() - # Use Windows encoding name instead of ISO-8859 if we saw any - # extra Windows-specific bytes - if lower_charset_name.startswith("iso-8859"): - if self._has_win_bytes: - charset_name = self.ISO_WIN_MAP.get( - lower_charset_name, charset_name - ) - # Rename legacy encodings with superset encodings if asked - if self.should_rename_legacy: - charset_name = self.LEGACY_MAP.get( - (charset_name or "").lower(), charset_name - ) - self.result = { - "encoding": charset_name, - "confidence": confidence, - "language": max_prober.language, - } - - # Log all prober confidences if none met MINIMUM_THRESHOLD - if self.logger.getEffectiveLevel() <= logging.DEBUG: - if self.result["encoding"] is None: - self.logger.debug("no probers hit minimum threshold") - for group_prober in self._charset_probers: - if not group_prober: - continue - if isinstance(group_prober, CharSetGroupProber): - for prober in group_prober.probers: - self.logger.debug( - "%s %s confidence = %s", - prober.charset_name, - prober.language, - prober.get_confidence(), - ) - else: - self.logger.debug( - "%s %s confidence = %s", - group_prober.charset_name, - group_prober.language, - group_prober.get_confidence(), - ) - return self.result diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/cmdline.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/cmdline.py deleted file mode 100644 index de73b06b4cfa3b68a25455148c7e086b32676e95..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/cmdline.py +++ /dev/null @@ -1,668 +0,0 @@ -""" - pygments.cmdline - ~~~~~~~~~~~~~~~~ - - Command line interface. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import os -import sys -import shutil -import argparse -from textwrap import dedent - -from pip._vendor.pygments import __version__, highlight -from pip._vendor.pygments.util import ClassNotFound, OptionError, docstring_headline, \ - guess_decode, guess_decode_from_terminal, terminal_encoding, \ - UnclosingTextIOWrapper -from pip._vendor.pygments.lexers import get_all_lexers, get_lexer_by_name, guess_lexer, \ - load_lexer_from_file, get_lexer_for_filename, find_lexer_class_for_filename -from pip._vendor.pygments.lexers.special import TextLexer -from pip._vendor.pygments.formatters.latex import LatexEmbeddedLexer, LatexFormatter -from pip._vendor.pygments.formatters import get_all_formatters, get_formatter_by_name, \ - load_formatter_from_file, get_formatter_for_filename, find_formatter_class -from pip._vendor.pygments.formatters.terminal import TerminalFormatter -from pip._vendor.pygments.formatters.terminal256 import Terminal256Formatter, TerminalTrueColorFormatter -from pip._vendor.pygments.filters import get_all_filters, find_filter_class -from pip._vendor.pygments.styles import get_all_styles, get_style_by_name - - -def _parse_options(o_strs): - opts = {} - if not o_strs: - return opts - for o_str in o_strs: - if not o_str.strip(): - continue - o_args = o_str.split(',') - for o_arg in o_args: - o_arg = o_arg.strip() - try: - o_key, o_val = o_arg.split('=', 1) - o_key = o_key.strip() - o_val = o_val.strip() - except ValueError: - opts[o_arg] = True - else: - opts[o_key] = o_val - return opts - - -def _parse_filters(f_strs): - filters = [] - if not f_strs: - return filters - for f_str in f_strs: - if ':' in f_str: - fname, fopts = f_str.split(':', 1) - filters.append((fname, _parse_options([fopts]))) - else: - filters.append((f_str, {})) - return filters - - -def _print_help(what, name): - try: - if what == 'lexer': - cls = get_lexer_by_name(name) - print("Help on the %s lexer:" % cls.name) - print(dedent(cls.__doc__)) - elif what == 'formatter': - cls = find_formatter_class(name) - print("Help on the %s formatter:" % cls.name) - print(dedent(cls.__doc__)) - elif what == 'filter': - cls = find_filter_class(name) - print("Help on the %s filter:" % name) - print(dedent(cls.__doc__)) - return 0 - except (AttributeError, ValueError): - print("%s not found!" 
% what, file=sys.stderr) - return 1 - - -def _print_list(what): - if what == 'lexer': - print() - print("Lexers:") - print("~~~~~~~") - - info = [] - for fullname, names, exts, _ in get_all_lexers(): - tup = (', '.join(names)+':', fullname, - exts and '(filenames ' + ', '.join(exts) + ')' or '') - info.append(tup) - info.sort() - for i in info: - print(('* %s\n %s %s') % i) - - elif what == 'formatter': - print() - print("Formatters:") - print("~~~~~~~~~~~") - - info = [] - for cls in get_all_formatters(): - doc = docstring_headline(cls) - tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and - '(filenames ' + ', '.join(cls.filenames) + ')' or '') - info.append(tup) - info.sort() - for i in info: - print(('* %s\n %s %s') % i) - - elif what == 'filter': - print() - print("Filters:") - print("~~~~~~~~") - - for name in get_all_filters(): - cls = find_filter_class(name) - print("* " + name + ':') - print(" %s" % docstring_headline(cls)) - - elif what == 'style': - print() - print("Styles:") - print("~~~~~~~") - - for name in get_all_styles(): - cls = get_style_by_name(name) - print("* " + name + ':') - print(" %s" % docstring_headline(cls)) - - -def _print_list_as_json(requested_items): - import json - result = {} - if 'lexer' in requested_items: - info = {} - for fullname, names, filenames, mimetypes in get_all_lexers(): - info[fullname] = { - 'aliases': names, - 'filenames': filenames, - 'mimetypes': mimetypes - } - result['lexers'] = info - - if 'formatter' in requested_items: - info = {} - for cls in get_all_formatters(): - doc = docstring_headline(cls) - info[cls.name] = { - 'aliases': cls.aliases, - 'filenames': cls.filenames, - 'doc': doc - } - result['formatters'] = info - - if 'filter' in requested_items: - info = {} - for name in get_all_filters(): - cls = find_filter_class(name) - info[name] = { - 'doc': docstring_headline(cls) - } - result['filters'] = info - - if 'style' in requested_items: - info = {} - for name in get_all_styles(): - cls = get_style_by_name(name) - info[name] = { - 'doc': docstring_headline(cls) - } - result['styles'] = info - - json.dump(result, sys.stdout) - -def main_inner(parser, argns): - if argns.help: - parser.print_help() - return 0 - - if argns.V: - print('Pygments version %s, (c) 2006-2022 by Georg Brandl, Matthäus ' - 'Chajdas and contributors.' 
% __version__) - return 0 - - def is_only_option(opt): - return not any(v for (k, v) in vars(argns).items() if k != opt) - - # handle ``pygmentize -L`` - if argns.L is not None: - arg_set = set() - for k, v in vars(argns).items(): - if v: - arg_set.add(k) - - arg_set.discard('L') - arg_set.discard('json') - - if arg_set: - parser.print_help(sys.stderr) - return 2 - - # print version - if not argns.json: - main(['', '-V']) - allowed_types = {'lexer', 'formatter', 'filter', 'style'} - largs = [arg.rstrip('s') for arg in argns.L] - if any(arg not in allowed_types for arg in largs): - parser.print_help(sys.stderr) - return 0 - if not largs: - largs = allowed_types - if not argns.json: - for arg in largs: - _print_list(arg) - else: - _print_list_as_json(largs) - return 0 - - # handle ``pygmentize -H`` - if argns.H: - if not is_only_option('H'): - parser.print_help(sys.stderr) - return 2 - what, name = argns.H - if what not in ('lexer', 'formatter', 'filter'): - parser.print_help(sys.stderr) - return 2 - return _print_help(what, name) - - # parse -O options - parsed_opts = _parse_options(argns.O or []) - - # parse -P options - for p_opt in argns.P or []: - try: - name, value = p_opt.split('=', 1) - except ValueError: - parsed_opts[p_opt] = True - else: - parsed_opts[name] = value - - # encodings - inencoding = parsed_opts.get('inencoding', parsed_opts.get('encoding')) - outencoding = parsed_opts.get('outencoding', parsed_opts.get('encoding')) - - # handle ``pygmentize -N`` - if argns.N: - lexer = find_lexer_class_for_filename(argns.N) - if lexer is None: - lexer = TextLexer - - print(lexer.aliases[0]) - return 0 - - # handle ``pygmentize -C`` - if argns.C: - inp = sys.stdin.buffer.read() - try: - lexer = guess_lexer(inp, inencoding=inencoding) - except ClassNotFound: - lexer = TextLexer - - print(lexer.aliases[0]) - return 0 - - # handle ``pygmentize -S`` - S_opt = argns.S - a_opt = argns.a - if S_opt is not None: - f_opt = argns.f - if not f_opt: - parser.print_help(sys.stderr) - return 2 - if argns.l or argns.INPUTFILE: - parser.print_help(sys.stderr) - return 2 - - try: - parsed_opts['style'] = S_opt - fmter = get_formatter_by_name(f_opt, **parsed_opts) - except ClassNotFound as err: - print(err, file=sys.stderr) - return 1 - - print(fmter.get_style_defs(a_opt or '')) - return 0 - - # if no -S is given, -a is not allowed - if argns.a is not None: - parser.print_help(sys.stderr) - return 2 - - # parse -F options - F_opts = _parse_filters(argns.F or []) - - # -x: allow custom (eXternal) lexers and formatters - allow_custom_lexer_formatter = bool(argns.x) - - # select lexer - lexer = None - - # given by name? 
- lexername = argns.l - if lexername: - # custom lexer, located relative to user's cwd - if allow_custom_lexer_formatter and '.py' in lexername: - try: - filename = None - name = None - if ':' in lexername: - filename, name = lexername.rsplit(':', 1) - - if '.py' in name: - # This can happen on Windows: If the lexername is - # C:\lexer.py -- return to normal load path in that case - name = None - - if filename and name: - lexer = load_lexer_from_file(filename, name, - **parsed_opts) - else: - lexer = load_lexer_from_file(lexername, **parsed_opts) - except ClassNotFound as err: - print('Error:', err, file=sys.stderr) - return 1 - else: - try: - lexer = get_lexer_by_name(lexername, **parsed_opts) - except (OptionError, ClassNotFound) as err: - print('Error:', err, file=sys.stderr) - return 1 - - # read input code - code = None - - if argns.INPUTFILE: - if argns.s: - print('Error: -s option not usable when input file specified', - file=sys.stderr) - return 2 - - infn = argns.INPUTFILE - try: - with open(infn, 'rb') as infp: - code = infp.read() - except Exception as err: - print('Error: cannot read infile:', err, file=sys.stderr) - return 1 - if not inencoding: - code, inencoding = guess_decode(code) - - # do we have to guess the lexer? - if not lexer: - try: - lexer = get_lexer_for_filename(infn, code, **parsed_opts) - except ClassNotFound as err: - if argns.g: - try: - lexer = guess_lexer(code, **parsed_opts) - except ClassNotFound: - lexer = TextLexer(**parsed_opts) - else: - print('Error:', err, file=sys.stderr) - return 1 - except OptionError as err: - print('Error:', err, file=sys.stderr) - return 1 - - elif not argns.s: # treat stdin as full file (-s support is later) - # read code from terminal, always in binary mode since we want to - # decode ourselves and be tolerant with it - code = sys.stdin.buffer.read() # use .buffer to get a binary stream - if not inencoding: - code, inencoding = guess_decode_from_terminal(code, sys.stdin) - # else the lexer will do the decoding - if not lexer: - try: - lexer = guess_lexer(code, **parsed_opts) - except ClassNotFound: - lexer = TextLexer(**parsed_opts) - - else: # -s option needs a lexer with -l - if not lexer: - print('Error: when using -s a lexer has to be selected with -l', - file=sys.stderr) - return 2 - - # process filters - for fname, fopts in F_opts: - try: - lexer.add_filter(fname, **fopts) - except ClassNotFound as err: - print('Error:', err, file=sys.stderr) - return 1 - - # select formatter - outfn = argns.o - fmter = argns.f - if fmter: - # custom formatter, located relative to user's cwd - if allow_custom_lexer_formatter and '.py' in fmter: - try: - filename = None - name = None - if ':' in fmter: - # Same logic as above for custom lexer - filename, name = fmter.rsplit(':', 1) - - if '.py' in name: - name = None - - if filename and name: - fmter = load_formatter_from_file(filename, name, - **parsed_opts) - else: - fmter = load_formatter_from_file(fmter, **parsed_opts) - except ClassNotFound as err: - print('Error:', err, file=sys.stderr) - return 1 - else: - try: - fmter = get_formatter_by_name(fmter, **parsed_opts) - except (OptionError, ClassNotFound) as err: - print('Error:', err, file=sys.stderr) - return 1 - - if outfn: - if not fmter: - try: - fmter = get_formatter_for_filename(outfn, **parsed_opts) - except (OptionError, ClassNotFound) as err: - print('Error:', err, file=sys.stderr) - return 1 - try: - outfile = open(outfn, 'wb') - except Exception as err: - print('Error: cannot open outfile:', err, file=sys.stderr) - return 
1 - else: - if not fmter: - if os.environ.get('COLORTERM','') in ('truecolor', '24bit'): - fmter = TerminalTrueColorFormatter(**parsed_opts) - elif '256' in os.environ.get('TERM', ''): - fmter = Terminal256Formatter(**parsed_opts) - else: - fmter = TerminalFormatter(**parsed_opts) - outfile = sys.stdout.buffer - - # determine output encoding if not explicitly selected - if not outencoding: - if outfn: - # output file? use lexer encoding for now (can still be None) - fmter.encoding = inencoding - else: - # else use terminal encoding - fmter.encoding = terminal_encoding(sys.stdout) - - # provide coloring under Windows, if possible - if not outfn and sys.platform in ('win32', 'cygwin') and \ - fmter.name in ('Terminal', 'Terminal256'): # pragma: no cover - # unfortunately colorama doesn't support binary streams on Py3 - outfile = UnclosingTextIOWrapper(outfile, encoding=fmter.encoding) - fmter.encoding = None - try: - import pip._vendor.colorama.initialise as colorama_initialise - except ImportError: - pass - else: - outfile = colorama_initialise.wrap_stream( - outfile, convert=None, strip=None, autoreset=False, wrap=True) - - # When using the LaTeX formatter and the option `escapeinside` is - # specified, we need a special lexer which collects escaped text - # before running the chosen language lexer. - escapeinside = parsed_opts.get('escapeinside', '') - if len(escapeinside) == 2 and isinstance(fmter, LatexFormatter): - left = escapeinside[0] - right = escapeinside[1] - lexer = LatexEmbeddedLexer(left, right, lexer) - - # ... and do it! - if not argns.s: - # process whole input as per normal... - try: - highlight(code, lexer, fmter, outfile) - finally: - if outfn: - outfile.close() - return 0 - else: - # line by line processing of stdin (eg: for 'tail -f')... - try: - while 1: - line = sys.stdin.buffer.readline() - if not line: - break - if not inencoding: - line = guess_decode_from_terminal(line, sys.stdin)[0] - highlight(line, lexer, fmter, outfile) - if hasattr(outfile, 'flush'): - outfile.flush() - return 0 - except KeyboardInterrupt: # pragma: no cover - return 0 - finally: - if outfn: - outfile.close() - - -class HelpFormatter(argparse.HelpFormatter): - def __init__(self, prog, indent_increment=2, max_help_position=16, width=None): - if width is None: - try: - width = shutil.get_terminal_size().columns - 2 - except Exception: - pass - argparse.HelpFormatter.__init__(self, prog, indent_increment, - max_help_position, width) - - -def main(args=sys.argv): - """ - Main command line entry point. - """ - desc = "Highlight an input file and write the result to an output file." - parser = argparse.ArgumentParser(description=desc, add_help=False, - formatter_class=HelpFormatter) - - operation = parser.add_argument_group('Main operation') - lexersel = operation.add_mutually_exclusive_group() - lexersel.add_argument( - '-l', metavar='LEXER', - help='Specify the lexer to use. (Query names with -L.) If not ' - 'given and -g is not present, the lexer is guessed from the filename.') - lexersel.add_argument( - '-g', action='store_true', - help='Guess the lexer from the file contents, or pass through ' - 'as plain text if nothing can be guessed.') - operation.add_argument( - '-F', metavar='FILTER[:options]', action='append', - help='Add a filter to the token stream. (Query names with -L.) ' - 'Filter options are given after a colon if necessary.') - operation.add_argument( - '-f', metavar='FORMATTER', - help='Specify the formatter to use. (Query names with -L.) 
' - 'If not given, the formatter is guessed from the output filename, ' - 'and defaults to the terminal formatter if the output is to the ' - 'terminal or an unknown file extension.') - operation.add_argument( - '-O', metavar='OPTION=value[,OPTION=value,...]', action='append', - help='Give options to the lexer and formatter as a comma-separated ' - 'list of key-value pairs. ' - 'Example: `-O bg=light,python=cool`.') - operation.add_argument( - '-P', metavar='OPTION=value', action='append', - help='Give a single option to the lexer and formatter - with this ' - 'you can pass options whose value contains commas and equal signs. ' - 'Example: `-P "heading=Pygments, the Python highlighter"`.') - operation.add_argument( - '-o', metavar='OUTPUTFILE', - help='Where to write the output. Defaults to standard output.') - - operation.add_argument( - 'INPUTFILE', nargs='?', - help='Where to read the input. Defaults to standard input.') - - flags = parser.add_argument_group('Operation flags') - flags.add_argument( - '-v', action='store_true', - help='Print a detailed traceback on unhandled exceptions, which ' - 'is useful for debugging and bug reports.') - flags.add_argument( - '-s', action='store_true', - help='Process lines one at a time until EOF, rather than waiting to ' - 'process the entire file. This only works for stdin, only for lexers ' - 'with no line-spanning constructs, and is intended for streaming ' - 'input such as you get from `tail -f`. ' - 'Example usage: `tail -f sql.log | pygmentize -s -l sql`.') - flags.add_argument( - '-x', action='store_true', - help='Allow custom lexers and formatters to be loaded from a .py file ' - 'relative to the current working directory. For example, ' - '`-l ./customlexer.py -x`. By default, this option expects a file ' - 'with a class named CustomLexer or CustomFormatter; you can also ' - 'specify your own class name with a colon (`-l ./lexer.py:MyLexer`). ' - 'Users should be very careful not to use this option with untrusted ' - 'files, because it will import and run them.') - flags.add_argument('--json', help='Output as JSON. This can ' - 'be only used in conjunction with -L.', - default=False, - action='store_true') - - special_modes_group = parser.add_argument_group( - 'Special modes - do not do any highlighting') - special_modes = special_modes_group.add_mutually_exclusive_group() - special_modes.add_argument( - '-S', metavar='STYLE -f formatter', - help='Print style definitions for STYLE for a formatter ' - 'given with -f. The argument given by -a is formatter ' - 'dependent.') - special_modes.add_argument( - '-L', nargs='*', metavar='WHAT', - help='List lexers, formatters, styles or filters -- ' - 'give additional arguments for the thing(s) you want to list ' - '(e.g. "styles"), or omit them to list everything.') - special_modes.add_argument( - '-N', metavar='FILENAME', - help='Guess and print out a lexer name based solely on the given ' - 'filename. Does not take input or highlight anything. 
If no specific ' - 'lexer can be determined, "text" is printed.') - special_modes.add_argument( - '-C', action='store_true', - help='Like -N, but print out a lexer name based solely on ' - 'a given content from standard input.') - special_modes.add_argument( - '-H', action='store', nargs=2, metavar=('NAME', 'TYPE'), - help='Print detailed help for the object <name> of type <type>, ' - 'where <type> is one of "lexer", "formatter" or "filter".') - special_modes.add_argument( - '-V', action='store_true', - help='Print the package version.') - special_modes.add_argument( - '-h', '--help', action='store_true', - help='Print this help.') - special_modes_group.add_argument( - '-a', metavar='ARG', - help='Formatter-specific additional argument for the -S (print ' - 'style sheet) mode.') - - argns = parser.parse_args(args[1:]) - - try: - return main_inner(parser, argns) - except BrokenPipeError: - # someone closed our stdout, e.g. by quitting a pager. - return 0 - except Exception: - if argns.v: - print(file=sys.stderr) - print('*' * 65, file=sys.stderr) - print('An unhandled exception occurred while highlighting.', - file=sys.stderr) - print('Please report the whole traceback to the issue tracker at', - file=sys.stderr) - print('<https://github.com/pygments/pygments/issues>.', - file=sys.stderr) - print('*' * 65, file=sys.stderr) - print(file=sys.stderr) - raise - import traceback - info = traceback.format_exception(*sys.exc_info()) - msg = info[-1].strip() - if len(info) >= 3: - # extract relevant file and position info - msg += '\n (f%s)' % info[-2].split('\n')[0].strip()[1:] - print(file=sys.stderr) - print('*** Error while highlighting:', file=sys.stderr) - print(msg, file=sys.stderr) - print('*** If this is a bug you want to report, please rerun with -v.', - file=sys.stderr) - return 1 diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_legacy.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_legacy.py deleted file mode 100644 index 1d5d3f1fbb1f6c69d0da2a50e1d4492ad3378f17..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_legacy.py +++ /dev/null @@ -1,121 +0,0 @@ -import functools -import os -import pathlib -import types -import warnings - -from typing import Union, Iterable, ContextManager, BinaryIO, TextIO, Any - -from . import _common - -Package = Union[types.ModuleType, str] -Resource = str - - -def deprecated(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - warnings.warn( - f"{func.__name__} is deprecated. Use files() instead. " - "Refer to https://importlib-resources.readthedocs.io" - "/en/latest/using.html#migrating-from-legacy for migration advice.", - DeprecationWarning, - stacklevel=2, - ) - return func(*args, **kwargs) - - return wrapper - - -def normalize_path(path): - # type: (Any) -> str - """Normalize a path by ensuring it is a string. - - If the resulting string contains path separators, an exception is raised. 
- """ - str_path = str(path) - parent, file_name = os.path.split(str_path) - if parent: - raise ValueError(f'{path!r} must be only a file name') - return file_name - - -@deprecated -def open_binary(package: Package, resource: Resource) -> BinaryIO: - """Return a file-like object opened for binary reading of the resource.""" - return (_common.files(package) / normalize_path(resource)).open('rb') - - -@deprecated -def read_binary(package: Package, resource: Resource) -> bytes: - """Return the binary contents of the resource.""" - return (_common.files(package) / normalize_path(resource)).read_bytes() - - -@deprecated -def open_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict', -) -> TextIO: - """Return a file-like object opened for text reading of the resource.""" - return (_common.files(package) / normalize_path(resource)).open( - 'r', encoding=encoding, errors=errors - ) - - -@deprecated -def read_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict', -) -> str: - """Return the decoded string of the resource. - - The decoding-related arguments have the same semantics as those of - bytes.decode(). - """ - with open_text(package, resource, encoding, errors) as fp: - return fp.read() - - -@deprecated -def contents(package: Package) -> Iterable[str]: - """Return an iterable of entries in `package`. - - Note that not all entries are resources. Specifically, directories are - not considered resources. Use `is_resource()` on each entry returned here - to check if it is a resource or not. - """ - return [path.name for path in _common.files(package).iterdir()] - - -@deprecated -def is_resource(package: Package, name: str) -> bool: - """True if `name` is a resource inside `package`. - - Directories are *not* resources. - """ - resource = normalize_path(name) - return any( - traversable.name == resource and traversable.is_file() - for traversable in _common.files(package).iterdir() - ) - - -@deprecated -def path( - package: Package, - resource: Resource, -) -> ContextManager[pathlib.Path]: - """A context manager providing a file path object to the resource. - - If the resource does not already exist on its own on the file system, - a temporary file will be created. If the file was created, the file - will be deleted upon exiting the context manager (no exception is - raised if the file was deleted prior to the context manager - exiting). - """ - return _common.as_file(_common.files(package) / normalize_path(resource)) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_manylinux.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_manylinux.py deleted file mode 100644 index 4c379aa6f69ff56c8f19612002c6e3e939ea6012..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_manylinux.py +++ /dev/null @@ -1,301 +0,0 @@ -import collections -import functools -import os -import re -import struct -import sys -import warnings -from typing import IO, Dict, Iterator, NamedTuple, Optional, Tuple - - -# Python does not provide platform information at sufficient granularity to -# identify the architecture of the running executable in some cases, so we -# determine it dynamically by reading the information from the running -# process. This only applies on Linux, which uses the ELF format. 
-class _ELFFileHeader: - # https://en.wikipedia.org/wiki/Executable_and_Linkable_Format#File_header - class _InvalidELFFileHeader(ValueError): - """ - An invalid ELF file header was found. - """ - - ELF_MAGIC_NUMBER = 0x7F454C46 - ELFCLASS32 = 1 - ELFCLASS64 = 2 - ELFDATA2LSB = 1 - ELFDATA2MSB = 2 - EM_386 = 3 - EM_S390 = 22 - EM_ARM = 40 - EM_X86_64 = 62 - EF_ARM_ABIMASK = 0xFF000000 - EF_ARM_ABI_VER5 = 0x05000000 - EF_ARM_ABI_FLOAT_HARD = 0x00000400 - - def __init__(self, file: IO[bytes]) -> None: - def unpack(fmt: str) -> int: - try: - data = file.read(struct.calcsize(fmt)) - result: Tuple[int, ...] = struct.unpack(fmt, data) - except struct.error: - raise _ELFFileHeader._InvalidELFFileHeader() - return result[0] - - self.e_ident_magic = unpack(">I") - if self.e_ident_magic != self.ELF_MAGIC_NUMBER: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_class = unpack("B") - if self.e_ident_class not in {self.ELFCLASS32, self.ELFCLASS64}: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_data = unpack("B") - if self.e_ident_data not in {self.ELFDATA2LSB, self.ELFDATA2MSB}: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_version = unpack("B") - self.e_ident_osabi = unpack("B") - self.e_ident_abiversion = unpack("B") - self.e_ident_pad = file.read(7) - format_h = "H" - format_i = "I" - format_q = "Q" - format_p = format_i if self.e_ident_class == self.ELFCLASS32 else format_q - self.e_type = unpack(format_h) - self.e_machine = unpack(format_h) - self.e_version = unpack(format_i) - self.e_entry = unpack(format_p) - self.e_phoff = unpack(format_p) - self.e_shoff = unpack(format_p) - self.e_flags = unpack(format_i) - self.e_ehsize = unpack(format_h) - self.e_phentsize = unpack(format_h) - self.e_phnum = unpack(format_h) - self.e_shentsize = unpack(format_h) - self.e_shnum = unpack(format_h) - self.e_shstrndx = unpack(format_h) - - -def _get_elf_header() -> Optional[_ELFFileHeader]: - try: - with open(sys.executable, "rb") as f: - elf_header = _ELFFileHeader(f) - except (OSError, TypeError, _ELFFileHeader._InvalidELFFileHeader): - return None - return elf_header - - -def _is_linux_armhf() -> bool: - # hard-float ABI can be detected from the ELF header of the running - # process - # https://static.docs.arm.com/ihi0044/g/aaelf32.pdf - elf_header = _get_elf_header() - if elf_header is None: - return False - result = elf_header.e_ident_class == elf_header.ELFCLASS32 - result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB - result &= elf_header.e_machine == elf_header.EM_ARM - result &= ( - elf_header.e_flags & elf_header.EF_ARM_ABIMASK - ) == elf_header.EF_ARM_ABI_VER5 - result &= ( - elf_header.e_flags & elf_header.EF_ARM_ABI_FLOAT_HARD - ) == elf_header.EF_ARM_ABI_FLOAT_HARD - return result - - -def _is_linux_i686() -> bool: - elf_header = _get_elf_header() - if elf_header is None: - return False - result = elf_header.e_ident_class == elf_header.ELFCLASS32 - result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB - result &= elf_header.e_machine == elf_header.EM_386 - return result - - -def _have_compatible_abi(arch: str) -> bool: - if arch == "armv7l": - return _is_linux_armhf() - if arch == "i686": - return _is_linux_i686() - return arch in {"x86_64", "aarch64", "ppc64", "ppc64le", "s390x"} - - -# If glibc ever changes its major version, we need to know what the last -# minor version was, so we can build the complete list of all versions. -# For now, guess what the highest minor version might be, assume it will -# be 50 for testing. 
Once this actually happens, update the dictionary -# with the actual value. -_LAST_GLIBC_MINOR: Dict[int, int] = collections.defaultdict(lambda: 50) - - -class _GLibCVersion(NamedTuple): - major: int - minor: int - - -def _glibc_version_string_confstr() -> Optional[str]: - """ - Primary implementation of glibc_version_string using os.confstr. - """ - # os.confstr is quite a bit faster than ctypes.DLL. It's also less likely - # to be broken or missing. This strategy is used in the standard library - # platform module. - # https://github.com/python/cpython/blob/fcf1d003bf4f0100c/Lib/platform.py#L175-L183 - try: - # os.confstr("CS_GNU_LIBC_VERSION") returns a string like "glibc 2.17". - version_string = os.confstr("CS_GNU_LIBC_VERSION") - assert version_string is not None - _, version = version_string.split() - except (AssertionError, AttributeError, OSError, ValueError): - # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)... - return None - return version - - -def _glibc_version_string_ctypes() -> Optional[str]: - """ - Fallback implementation of glibc_version_string using ctypes. - """ - try: - import ctypes - except ImportError: - return None - - # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen - # manpage says, "If filename is NULL, then the returned handle is for the - # main program". This way we can let the linker do the work to figure out - # which libc our process is actually using. - # - # We must also handle the special case where the executable is not a - # dynamically linked executable. This can occur when using musl libc, - # for example. In this situation, dlopen() will error, leading to an - # OSError. Interestingly, at least in the case of musl, there is no - # errno set on the OSError. The single string argument used to construct - # OSError comes from libc itself and is therefore not portable to - # hard code here. In any case, failure to call dlopen() means we - # can proceed, so we bail on our attempt. - try: - process_namespace = ctypes.CDLL(None) - except OSError: - return None - - try: - gnu_get_libc_version = process_namespace.gnu_get_libc_version - except AttributeError: - # Symbol doesn't exist -> therefore, we are not linked to - # glibc. - return None - - # Call gnu_get_libc_version, which returns a string like "2.5" - gnu_get_libc_version.restype = ctypes.c_char_p - version_str: str = gnu_get_libc_version() - # py2 / py3 compatibility: - if not isinstance(version_str, str): - version_str = version_str.decode("ascii") - - return version_str - - -def _glibc_version_string() -> Optional[str]: - """Returns glibc version string, or None if not using glibc.""" - return _glibc_version_string_confstr() or _glibc_version_string_ctypes() - - -def _parse_glibc_version(version_str: str) -> Tuple[int, int]: - """Parse glibc version. - - We use a regexp instead of str.split because we want to discard any - random junk that might come after the minor version -- this might happen - in patched/forked versions of glibc (e.g. Linaro's version of glibc - uses version strings like "2.20-2014.11"). See gh-3588. 
- """ - m = re.match(r"(?P[0-9]+)\.(?P[0-9]+)", version_str) - if not m: - warnings.warn( - "Expected glibc version with 2 components major.minor," - " got: %s" % version_str, - RuntimeWarning, - ) - return -1, -1 - return int(m.group("major")), int(m.group("minor")) - - -@functools.lru_cache() -def _get_glibc_version() -> Tuple[int, int]: - version_str = _glibc_version_string() - if version_str is None: - return (-1, -1) - return _parse_glibc_version(version_str) - - -# From PEP 513, PEP 600 -def _is_compatible(name: str, arch: str, version: _GLibCVersion) -> bool: - sys_glibc = _get_glibc_version() - if sys_glibc < version: - return False - # Check for presence of _manylinux module. - try: - import _manylinux # noqa - except ImportError: - return True - if hasattr(_manylinux, "manylinux_compatible"): - result = _manylinux.manylinux_compatible(version[0], version[1], arch) - if result is not None: - return bool(result) - return True - if version == _GLibCVersion(2, 5): - if hasattr(_manylinux, "manylinux1_compatible"): - return bool(_manylinux.manylinux1_compatible) - if version == _GLibCVersion(2, 12): - if hasattr(_manylinux, "manylinux2010_compatible"): - return bool(_manylinux.manylinux2010_compatible) - if version == _GLibCVersion(2, 17): - if hasattr(_manylinux, "manylinux2014_compatible"): - return bool(_manylinux.manylinux2014_compatible) - return True - - -_LEGACY_MANYLINUX_MAP = { - # CentOS 7 w/ glibc 2.17 (PEP 599) - (2, 17): "manylinux2014", - # CentOS 6 w/ glibc 2.12 (PEP 571) - (2, 12): "manylinux2010", - # CentOS 5 w/ glibc 2.5 (PEP 513) - (2, 5): "manylinux1", -} - - -def platform_tags(linux: str, arch: str) -> Iterator[str]: - if not _have_compatible_abi(arch): - return - # Oldest glibc to be supported regardless of architecture is (2, 17). - too_old_glibc2 = _GLibCVersion(2, 16) - if arch in {"x86_64", "i686"}: - # On x86/i686 also oldest glibc to be supported is (2, 5). - too_old_glibc2 = _GLibCVersion(2, 4) - current_glibc = _GLibCVersion(*_get_glibc_version()) - glibc_max_list = [current_glibc] - # We can assume compatibility across glibc major versions. - # https://sourceware.org/bugzilla/show_bug.cgi?id=24636 - # - # Build a list of maximum glibc versions so that we can - # output the canonical list of all glibc from current_glibc - # down to too_old_glibc2, including all intermediary versions. - for glibc_major in range(current_glibc.major - 1, 1, -1): - glibc_minor = _LAST_GLIBC_MINOR[glibc_major] - glibc_max_list.append(_GLibCVersion(glibc_major, glibc_minor)) - for glibc_max in glibc_max_list: - if glibc_max.major == too_old_glibc2.major: - min_minor = too_old_glibc2.minor - else: - # For other glibc major versions oldest supported is (x, 0). - min_minor = -1 - for glibc_minor in range(glibc_max.minor, min_minor, -1): - glibc_version = _GLibCVersion(glibc_max.major, glibc_minor) - tag = "manylinux_{}_{}".format(*glibc_version) - if _is_compatible(tag, arch, glibc_version): - yield linux.replace("linux", tag) - # Handle the legacy manylinux1, manylinux2010, manylinux2014 tags. 
- if glibc_version in _LEGACY_MANYLINUX_MAP: - legacy_tag = _LEGACY_MANYLINUX_MAP[glibc_version] - if _is_compatible(legacy_tag, arch, glibc_version): - yield linux.replace("linux", legacy_tag) diff --git a/spaces/CVPR/v-doc_abstractive_mac/descrip.md b/spaces/CVPR/v-doc_abstractive_mac/descrip.md deleted file mode 100644 index d965b314baefdecfd61c003e97dcdc68f385782e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/v-doc_abstractive_mac/descrip.md +++ /dev/null @@ -1,35 +0,0 @@ -# V-Doc : Visual questions answers with Documents -This repository contains code for the paper [V-Doc : Visual questions answers with Documents](https://arxiv.org/pdf/2205.13724.pdf). The demo videos can be accessed by this [link](https://drive.google.com/file/d/1Ztp9LBcrEcJA3NlbFWn1RfNyfwt8Y6Qk/view). - -

    - Ding, Y.*, Huang, Z.*, Wang, R., Zhang, Y., Chen, X., Ma, Y., Chung, H., & Han, C. (CVPR 2022)
    V-Doc : Visual questions answers with Documents
    - -### Dataset in Dataset Storage Module - -The dataset we used to trained the model is provided in following links: - - - [PubVQA Dataset](https://drive.google.com/drive/folders/1YMuctGPJbsy45Iz23ygcN1VGHWQp3aaU?ths=true) for training Mac-Network. - -Dataset for training LayoutLMv2([FUNSD-QA](https://drive.google.com/file/d/1Ev_sLTx3U9nAr2TGgUT5BXB1rpfLMlcq/view?usp=sharing)). - -### Dataset Generation -To run the scene based question generation code, we need to fetch the JSON files from the source. - -#### Extract OCR information -```bash -python3 ./document_collection.py -``` -After the step above, a new folder called ./input_ocr will be generated. -#### Generate questions -```bash -python3 ./scene_based/pdf_generate_question.py -``` -To limit the number of generated questions, you can change the code in pdf_generate_question.py line 575 and line 591-596 - -After the steps above, you can see a json file under the ./output_qa_dataset. diff --git a/spaces/Catspindev/monadical-labs-minecraft-skin-generator/README.md b/spaces/Catspindev/monadical-labs-minecraft-skin-generator/README.md deleted file mode 100644 index 55742e18742f322e1bb4673ee07f37168e44c580..0000000000000000000000000000000000000000 --- a/spaces/Catspindev/monadical-labs-minecraft-skin-generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Monadical Labs Minecraft Skin Generator -emoji: 🚀 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Chomkwoy/Nilkessye/__init__.py b/spaces/Chomkwoy/Nilkessye/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Chomkwoy/Nilkessye/app.py b/spaces/Chomkwoy/Nilkessye/app.py deleted file mode 100644 index 791a0bb3b2161b3d38527042a9828081a89df7bb..0000000000000000000000000000000000000000 --- a/spaces/Chomkwoy/Nilkessye/app.py +++ /dev/null @@ -1,253 +0,0 @@ -import gradio as gr -import numpy as np -import torch -import io -from PIL import Image -from transformers import PreTrainedModel, VisionEncoderDecoderModel, VisionEncoderDecoderConfig -import cv2 -from tqdm.auto import tqdm - -import load_book -import utils.hangul -from model import exkp -import syllable_model -import ocr_utils - - -class OcrModel(PreTrainedModel): - config_class = VisionEncoderDecoderConfig - - def __init__(self, config): - super().__init__(config) - self.centernet = exkp( - n=5, - nstack=4, - dims=[256, 256, 384, 384, 384, 512], - modules=[2, 2, 2, 2, 2, 4], - num_classes=4 - ) - self.recog = VisionEncoderDecoderModel(config) - - def forward(self, pixel_values, **kwargs): - outputs = self.centernet(pixel_values, **kwargs) - return outputs - - -def main(): - model = OcrModel.from_pretrained('Chomkwoy/nilkessye') - if torch.cuda.is_available(): - print("Enabling CUDA") - model = model.cuda() - recog = syllable_model.SyllableRecognizer(model.recog) - - def upload_file(file): - yield ( - [], # gallery - "", # output_textbox - gr.Textbox(show_label=False, visible=True), # progress_indicator - ) - - image = Image.open(io.BytesIO(file)) - yield ( - [image], # gallery - "", # output_textbox - "처리중... 
이미지 자르는 중", # progress_indicator - ) - - image_np = np.array(image)[..., :3] - generator = recognize_page( - image_np, - model, recog, - return_line_infos=True, - batch_size=16 - ) - - # Crop image - image = next(generator) - yield ( - [Image.fromarray(image)], # gallery - "", # output_textbox - "처리중... 글자 위치 인식 중", # progress_indicator - ) - - # Get lines - line_infos = next(generator) - image = draw_detections(image, line_infos) - - # Read syllables - num_batches = next(generator) - - yield ( - [Image.fromarray(image)], # gallery - "", # output_textbox - f"처리중... 글자 읽는 중 (0/{num_batches})", # progress_indicator - ) - - # Free memory - i = 0 - while True: - try: - pred_syllables = next(generator) - i += 1 - yield ( - [Image.fromarray(image)], # gallery - gen_html(pred_syllables, line_infos), # output_textbox - f"처리중... 글자 읽는 중 ({i}/{num_batches})", # progress_indicator - ) - except StopIteration: - break - - yield ( - [Image.fromarray(image)], # gallery - gen_html(pred_syllables, line_infos), # output_textbox - gr.Textbox(visible=False), # progress_indicator - ) - - with gr.Blocks() as demo: - gr.Markdown(""" - # 닐거쎠: 옛한글 글자 인식기 - - 이미지 파일을 업로드해보세요. 한자는 인식되지 않습니다. - - 만든사람: ᄎᆞᆷ괴 - """) - - progress_indicator = gr.Textbox(visible=False) - - with gr.Row(): - gallery = gr.Gallery( - columns=1, - allow_preview=False, - object_fit="contain", - label="보기" - ) - - with gr.Column(): - upload_button = gr.UploadButton( - '파일 올리기', - type='binary' - ) - output_textbox = gr.HTML( - label="인식 결과", - value="여기에 결과가 표시됩니다." - ) - - upload_button.upload( - fn=upload_file, - inputs=upload_button, - outputs=[gallery, output_textbox, progress_indicator] - ) - - demo.queue(max_size=20).launch(server_name='0.0.0.0') - - -def gen_html(pred_syllables, line_infos): - output_lines = [] - offset = 0 - for line in line_infos: - if offset >= len(pred_syllables): - break - line_len = len(line['line']) - cur_line = '.'.join(pred_syllables[offset:offset + line_len]) - cur_line_hangul = utils.hangul.convert_yale_to_hangul(cur_line) - output_lines.append({ - 'is_anno': line['is_anno'], - 'text': cur_line_hangul - }) - offset += line_len - - output_html = "" - for line in output_lines: - if line['is_anno']: - output_html += f"{line['text']}" - else: - output_html += f"{line['text']}" - - return output_html - - -def draw_detections(image, line_infos): - image = image.copy() - for line_idx, line_info in enumerate(line_infos): - cv2.rectangle(image, - (int(line_info['bbox'][0][0]), int(line_info['bbox'][0][1])), - (int(line_info['bbox'][1][0]), int(line_info['bbox'][1][1])), - [255, 255, 255], 6) - - for line_idx, line_info in enumerate(line_infos): - for bbox, center, seq, cls in line_info['line']: - color = [[160, 158, 255], [212, 56, 13], [107, 255, 171], [255, 205, 66]][int(cls)] - shapes = image.copy() - cv2.rectangle(shapes, *bbox, color, cv2.FILLED) - alpha = 0.75 - image = cv2.addWeighted(image, alpha, shapes, 1 - alpha, 0) - cv2.rectangle(image, *bbox, color, 2) - - for line_idx, line_info in enumerate(line_infos): - cv2.putText( - image, f"{line_idx}", - (int(line_info['bbox'][0][0]), int(line_info['bbox'][0][1]) + 15), - cv2.FONT_HERSHEY_SIMPLEX, 0.7, [250, 225, 0], 2 - ) - return image - - -def recognize_page(orig_image, centernet, syllable_recognizer, return_line_infos=False, batch_size=32): - orig_image, bbox, orig_size = load_book.process_page(orig_image) - yield orig_image - - orig_size = (orig_image.shape[1], orig_image.shape[0]) - image = cv2.resize(orig_image, dsize=(512, 512), 
interpolation=cv2.INTER_AREA) - - image = image.astype(np.float32) / 255. - .5 # to [-.5, +.5] range - image = image.transpose((2, 0, 1)) # [H, W, C] to [C, H, W] - image = torch.as_tensor(image) - - # Run object detection - centernet.eval() - with torch.no_grad(): - output = centernet(torch.as_tensor(image)[None].to(centernet.device)) - - sw, sh = orig_size[0] * 4 / 512, orig_size[1] * 4 / 512 - - tiles = ocr_utils.get_pred_detections( - output, sw=sw, sh=sh, - threshold=0.3, - ae_threshold=20.0 - ) - - line_infos = ocr_utils.detect_lines(tiles) - yield line_infos - - yield from recognize_lines(line_infos, orig_image, syllable_recognizer, batch_size=batch_size) - - -def recognize_lines(line_infos, orig_image, syllable_recognizer, batch_size=32): - tiles = [] - for line_idx, line_info in enumerate(line_infos): - for bbox, center, seq, cls in line_info['line']: - (tlx, tly), (brx, bry) = bbox - w, h = brx - tlx, bry - tly - pw, ph = w / 5, h / 5 - tile = orig_image[ - max(0, int(tly - ph)):min(orig_image.shape[0], int(bry + ph)), - max(0, int(tlx - pw)):min(orig_image.shape[1], int(brx + pw)), - ] - tiles.append((tile, bbox, center, seq, cls)) - - hangul_tiles = [(i, tile) for i, (tile, _, _, _, cls) in enumerate(tiles) if cls in [0, 2]] - - pred_syllables = ["〓"] * len(tiles) - batches = list(ocr_utils.batched(hangul_tiles, batch_size)) - yield len(batches) - - for batch in tqdm(batches): - indices, images = zip(*batch) - batch_pred_syllables = syllable_recognizer.recognize(images) - for i, pred_syllable in zip(indices, batch_pred_syllables): - pred_syllables[i] = pred_syllable - yield pred_syllables[:i + 1] - - -if __name__ == "__main__": - main() diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/interview/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/interview/__init__.py deleted file mode 100644 index a8d226fad8b20c09cbedc2e5dd0ecab0beb9b904..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/interview/__init__.py +++ /dev/null @@ -1,45 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.exception import TextOverLength - -img_dir = Path(__file__).parent / "images" - - -def interview(images: List[BuildImage], texts: List[str], args): - if len(images) == 2: - self_img = images[0] - user_img = images[1] - else: - self_img = BuildImage.open(img_dir / "huaji.png") - user_img = images[0] - self_img = self_img.convert("RGBA").square().resize((124, 124)) - user_img = user_img.convert("RGBA").square().resize((124, 124)) - - text = texts[0] if texts else "采访大佬经验" - - frame = BuildImage.new("RGBA", (600, 310), "white") - microphone = BuildImage.open(img_dir / "microphone.png") - frame.paste(microphone, (330, 103), alpha=True) - frame.paste(self_img, (419, 40), alpha=True) - frame.paste(user_img, (57, 40), alpha=True) - try: - frame.draw_text((20, 200, 580, 310), text, max_fontsize=50, min_fontsize=20) - except ValueError: - raise TextOverLength(text) - return frame.save_jpg() - - -add_meme( - "interview", - interview, - min_images=1, - max_images=2, - min_texts=0, - max_texts=1, - default_texts=["采访大佬经验"], - keywords=["采访"], -) diff --git a/spaces/CognitiveLabs/GPT-auto-webscraping/ExcecuteFunction.py b/spaces/CognitiveLabs/GPT-auto-webscraping/ExcecuteFunction.py deleted file mode 100644 index edce83a720f72e05fdad7bb6b29464375e7be28c..0000000000000000000000000000000000000000 --- 
a/spaces/CognitiveLabs/GPT-auto-webscraping/ExcecuteFunction.py +++ /dev/null @@ -1,9 +0,0 @@ -import importlib - -def execute_function(): - module = "output" - function = "extract_info" - module = importlib.import_module(module) - function = getattr(module, function) - print("returning function") - return function diff --git a/spaces/CognitiveLabs/Research-Assistant/test/test3.py b/spaces/CognitiveLabs/Research-Assistant/test/test3.py deleted file mode 100644 index b19587e7b045df07ae61eaec8949524850600394..0000000000000000000000000000000000000000 --- a/spaces/CognitiveLabs/Research-Assistant/test/test3.py +++ /dev/null @@ -1,21 +0,0 @@ -import openai - -openai.api_key = "sk-DQ1nFYzAVzGMznofdi0nig7MebfA9PWrTxCHlLIZIqc4X8xu" -openai.api_base = "https://api.chatanywhere.cn/v1" - -def generator(): - messages = [{ - "role": "user", - "content": "What is the meaning of life?", - }] - response = "" - for chunk in openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages, - temperature=0.9, - stream=True, - ): - content = chunk["choices"][0].get("delta", {}).get("content") - if content: - response += content - yield response \ No newline at end of file diff --git a/spaces/DEEMOSTECH/ChatAvatar/static/js/main.2966234b.js b/spaces/DEEMOSTECH/ChatAvatar/static/js/main.2966234b.js deleted file mode 100644 index 6d2087431ea2c3a02a763556395cd41902e4488f..0000000000000000000000000000000000000000 --- a/spaces/DEEMOSTECH/ChatAvatar/static/js/main.2966234b.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! For license information please see main.2966234b.js.LICENSE.txt */ -!function(){var e={498:function(e){e.exports=function(){"use strict";var e=function(t,n){return e=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(e,t){e.__proto__=t}||function(e,t){for(var n in t)Object.prototype.hasOwnProperty.call(t,n)&&(e[n]=t[n])},e(t,n)};function t(t,n){if("function"!==typeof n&&null!==n)throw new TypeError("Class extends value "+String(n)+" is not a constructor or null");function r(){this.constructor=t}e(t,n),t.prototype=null===n?Object.create(n):(r.prototype=n.prototype,new r)}var n=function(){return n=Object.assign||function(e){for(var t,n=1,r=arguments.length;n0&&i[i.length-1])&&(6===A[0]||2===A[0])){a=0;continue}if(3===A[0]&&(!i||A[1]>i[0]&&A[1]=55296&&i<=56319&&n>10),a%1024+56320)),(i+1===n||r.length>16384)&&(A+=String.fromCharCode.apply(String,r),r.length=0)}return A},c="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",d="undefined"===typeof Uint8Array?[]:new Uint8Array(256),h=0;h>4,u[s++]=(15&r)<<4|i>>2,u[s++]=(3&i)<<6|63&A;return l},v=function(e){for(var t=e.length,n=[],r=0;r>w,x=(1<>w)+32,S=65536>>B,E=(1<=0){if(e<55296||e>56319&&e<=65535)return t=((t=this.index[e>>w])<<_)+(e&x),this.data[t];if(e<=65535)return t=((t=this.index[b+(e-55296>>w)])<<_)+(e&x),this.data[t];if(e>B),t=this.index[t],t+=e>>w&E,t=((t=this.index[t])<<_)+(e&x),this.data[t];if(e<=1114111)return this.data[this.highValueIndex]}return this.errorValue},e}(),k="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",Q="undefined"===typeof Uint8Array?[]:new Uint8Array(256),L=0;LD?(i.push(!0),a-=D):i.push(!1),-1!==["normal","auto","loose"].indexOf(t)&&-1!==[8208,8211,12316,12448].indexOf(e))return r.push(A),n.push(Y);if(a===P||a===K){if(0===A)return r.push(A),n.push(ue);var o=n[A-1];return-1===Qe.indexOf(o)?(r.push(r[A-1]),n.push(o)):(r.push(A),n.push(ue))}return 
r.push(A),a===ce?n.push("strict"===t?te:me):a===_e||a===le?n.push(ue):a===be?e>=131072&&e<=196605||e>=196608&&e<=262141?n.push(me):n.push(ue):void n.push(a)})),[r,n,i]},Re=function(e,t,n,r){var i=r[n];if(Array.isArray(e)?-1!==e.indexOf(i):e===i)for(var A=n;A<=r.length;){if((s=r[++A])===t)return!0;if(s!==G)break}if(i===G)for(A=n;A>0;){var a=r[--A];if(Array.isArray(e)?-1!==e.indexOf(a):e===a)for(var o=n;o<=r.length;){var s;if((s=r[++o])===t)return!0;if(s!==G)break}if(a!==G)break}return!1},He=function(e,t){for(var n=e;n>=0;){var r=t[n];if(r!==G)return r;n--}return 0},Pe=function(e,t,n,r,i){if(0===n[r])return Se;var A=r-1;if(Array.isArray(i)&&!0===i[A])return Se;var a=A-1,o=A+1,s=t[A],l=a>=0?t[a]:0,u=t[o];if(s===R&&u===H)return Se;if(-1!==Fe.indexOf(s))return Ce;if(-1!==Fe.indexOf(u))return Se;if(-1!==Te.indexOf(u))return Se;if(He(A,t)===V)return Ee;if(Ue.get(e[A])===K)return Se;if((s===de||s===he)&&Ue.get(e[o])===K)return Se;if(s===O||u===O)return Se;if(s===z)return Se;if(-1===[G,j,q].indexOf(s)&&u===z)return Se;if(-1!==[J,Z,$,ie,se].indexOf(u))return Se;if(He(A,t)===ne)return Se;if(Re(re,ne,A,t))return Se;if(Re([J,Z],te,A,t))return Se;if(Re(W,W,A,t))return Se;if(s===G)return Ee;if(s===re||u===re)return Se;if(u===Y||s===Y)return Ee;if(-1!==[j,q,te].indexOf(u)||s===X)return Se;if(l===ge&&-1!==De.indexOf(s))return Se;if(s===se&&u===ge)return Se;if(u===ee)return Se;if(-1!==Me.indexOf(u)&&s===Ae||-1!==Me.indexOf(s)&&u===Ae)return Se;if(s===oe&&-1!==[me,de,he].indexOf(u)||-1!==[me,de,he].indexOf(s)&&u===ae)return Se;if(-1!==Me.indexOf(s)&&-1!==ke.indexOf(u)||-1!==ke.indexOf(s)&&-1!==Me.indexOf(u))return Se;if(-1!==[oe,ae].indexOf(s)&&(u===Ae||-1!==[ne,q].indexOf(u)&&t[o+1]===Ae)||-1!==[ne,q].indexOf(s)&&u===Ae||s===Ae&&-1!==[Ae,se,ie].indexOf(u))return Se;if(-1!==[Ae,se,ie,J,Z].indexOf(u))for(var c=A;c>=0;){if((d=t[c])===Ae)return Se;if(-1===[se,ie].indexOf(d))break;c--}if(-1!==[oe,ae].indexOf(u))for(c=-1!==[J,Z].indexOf(s)?a:A;c>=0;){var d;if((d=t[c])===Ae)return Se;if(-1===[se,ie].indexOf(d))break;c--}if(ve===s&&-1!==[ve,ye,fe,pe].indexOf(u)||-1!==[ye,fe].indexOf(s)&&-1!==[ye,we].indexOf(u)||-1!==[we,pe].indexOf(s)&&u===we)return Se;if(-1!==Le.indexOf(s)&&-1!==[ee,ae].indexOf(u)||-1!==Le.indexOf(u)&&s===oe)return Se;if(-1!==Me.indexOf(s)&&-1!==Me.indexOf(u))return Se;if(s===ie&&-1!==Me.indexOf(u))return Se;if(-1!==Me.concat(Ae).indexOf(s)&&u===ne&&-1===xe.indexOf(e[o])||-1!==Me.concat(Ae).indexOf(u)&&s===Z)return Se;if(s===Be&&u===Be){for(var h=n[A],f=1;h>0&&t[--h]===Be;)f++;if(f%2!==0)return Se}return s===de&&u===he?Se:Ee},Ne=function(e,t){t||(t={lineBreak:"normal",wordBreak:"normal"});var n=Ie(e,t.lineBreak),r=n[0],i=n[1],A=n[2];"break-all"!==t.wordBreak&&"break-word"!==t.wordBreak||(i=i.map((function(e){return-1!==[Ae,ue,_e].indexOf(e)?me:e})));var a="keep-all"===t.wordBreak?A.map((function(t,n){return t&&e[n]>=19968&&e[n]<=40959})):void 0;return[r,i,a]},Oe=function(){function e(e,t,n,r){this.codePoints=e,this.required=t===Ce,this.start=n,this.end=r}return e.prototype.slice=function(){return u.apply(void 0,this.codePoints.slice(this.start,this.end))},e}(),Ve=function(e,t){var n=l(e),r=Ne(n,t),i=r[0],A=r[1],a=r[2],o=n.length,s=0,u=0;return{next:function(){if(u>=o)return{done:!0,value:null};for(var e=Se;u=Dt&&e<=57},jt=function(e){return e>=55296&&e<=57343},Xt=function(e){return Wt(e)||e>=Ot&&e<=zt||e>=It&&e<=Ht},qt=function(e){return e>=It&&e<=Nt},Yt=function(e){return e>=Ot&&e<=Kt},Jt=function(e){return qt(e)||Yt(e)},Zt=function(e){return e>=wt},$t=function(e){return 
e===je||e===Ye||e===Je},en=function(e){return Jt(e)||Zt(e)||e===at},tn=function(e){return en(e)||Wt(e)||e===ot},nn=function(e){return e>=Ut&&e<=Mt||e===Ft||e>=Tt&&e<=kt||e===Qt},rn=function(e,t){return e===qe&&t!==je},An=function(e,t,n){return e===ot?en(t)||rn(t,n):!!en(e)||!(e!==qe||!rn(e,t))},an=function(e,t,n){return e===bt||e===ot?!!Wt(t)||t===Et&&Wt(n):Wt(e===Et?t:e)},on=function(e){var t=0,n=1;e[t]!==bt&&e[t]!==ot||(e[t]===ot&&(n=-1),t++);for(var r=[];Wt(e[t]);)r.push(e[t++]);var i=r.length?parseInt(u.apply(void 0,r),10):0;e[t]===Et&&t++;for(var A=[];Wt(e[t]);)A.push(e[t++]);var a=A.length,o=a?parseInt(u.apply(void 0,A),10):0;e[t]!==Vt&&e[t]!==Rt||t++;var s=1;e[t]!==bt&&e[t]!==ot||(e[t]===ot&&(s=-1),t++);for(var l=[];Wt(e[t]);)l.push(e[t++]);var c=l.length?parseInt(u.apply(void 0,l),10):0;return n*(i+o*Math.pow(10,-a))*Math.pow(10,s*c)},sn={type:2},ln={type:3},un={type:4},cn={type:13},dn={type:8},hn={type:21},fn={type:9},pn={type:10},gn={type:11},mn={type:12},vn={type:14},yn={type:23},wn={type:1},Bn={type:25},_n={type:24},bn={type:26},xn={type:27},Cn={type:28},Sn={type:29},En={type:31},Un={type:32},Mn=function(){function e(){this._value=[]}return e.prototype.write=function(e){this._value=this._value.concat(l(e))},e.prototype.read=function(){for(var e=[],t=this.consumeToken();t!==Un;)e.push(t),t=this.consumeToken();return e},e.prototype.consumeToken=function(){var e=this.consumeCodePoint();switch(e){case Ze:return this.consumeStringToken(Ze);case et:var t=this.peekCodePoint(0),n=this.peekCodePoint(1),r=this.peekCodePoint(2);if(tn(t)||rn(n,r)){var i=An(t,n,r)?Ge:ze;return{type:5,value:this.consumeName(),flags:i}}break;case tt:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),cn;break;case rt:return this.consumeStringToken(rt);case it:return sn;case At:return ln;case _t:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),vn;break;case bt:if(an(e,this.peekCodePoint(0),this.peekCodePoint(1)))return this.reconsumeCodePoint(e),this.consumeNumericToken();break;case xt:return un;case ot:var A=e,a=this.peekCodePoint(0),o=this.peekCodePoint(1);if(an(A,a,o))return this.reconsumeCodePoint(e),this.consumeNumericToken();if(An(A,a,o))return this.reconsumeCodePoint(e),this.consumeIdentLikeToken();if(a===ot&&o===ut)return this.consumeCodePoint(),this.consumeCodePoint(),_n;break;case Et:if(an(e,this.peekCodePoint(0),this.peekCodePoint(1)))return this.reconsumeCodePoint(e),this.consumeNumericToken();break;case Xe:if(this.peekCodePoint(0)===_t)for(this.consumeCodePoint();;){var s=this.consumeCodePoint();if(s===_t&&(s=this.consumeCodePoint())===Xe)return this.consumeToken();if(s===Lt)return this.consumeToken()}break;case Ct:return bn;case St:return xn;case lt:if(this.peekCodePoint(0)===st&&this.peekCodePoint(1)===ot&&this.peekCodePoint(2)===ot)return this.consumeCodePoint(),this.consumeCodePoint(),Bn;break;case ct:var l=this.peekCodePoint(0),c=this.peekCodePoint(1),d=this.peekCodePoint(2);if(An(l,c,d))return{type:7,value:this.consumeName()};break;case dt:return Cn;case qe:if(rn(e,this.peekCodePoint(0)))return this.reconsumeCodePoint(e),this.consumeIdentLikeToken();break;case ht:return Sn;case ft:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),dn;break;case pt:return gn;case mt:return mn;case Pt:case Gt:var h=this.peekCodePoint(0),f=this.peekCodePoint(1);return h!==bt||!Xt(f)&&f!==gt||(this.consumeCodePoint(),this.consumeUnicodeRangeToken()),this.reconsumeCodePoint(e),this.consumeIdentLikeToken();case vt:if(this.peekCodePoint(0)===$e)return 
this.consumeCodePoint(),fn;if(this.peekCodePoint(0)===vt)return this.consumeCodePoint(),hn;break;case yt:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),pn;break;case Lt:return Un}return $t(e)?(this.consumeWhiteSpace(),En):Wt(e)?(this.reconsumeCodePoint(e),this.consumeNumericToken()):en(e)?(this.reconsumeCodePoint(e),this.consumeIdentLikeToken()):{type:6,value:u(e)}},e.prototype.consumeCodePoint=function(){var e=this._value.shift();return"undefined"===typeof e?-1:e},e.prototype.reconsumeCodePoint=function(e){this._value.unshift(e)},e.prototype.peekCodePoint=function(e){return e>=this._value.length?-1:this._value[e]},e.prototype.consumeUnicodeRangeToken=function(){for(var e=[],t=this.consumeCodePoint();Xt(t)&&e.length<6;)e.push(t),t=this.consumeCodePoint();for(var n=!1;t===gt&&e.length<6;)e.push(t),t=this.consumeCodePoint(),n=!0;if(n)return{type:30,start:parseInt(u.apply(void 0,e.map((function(e){return e===gt?Dt:e}))),16),end:parseInt(u.apply(void 0,e.map((function(e){return e===gt?zt:e}))),16)};var r=parseInt(u.apply(void 0,e),16);if(this.peekCodePoint(0)===ot&&Xt(this.peekCodePoint(1))){this.consumeCodePoint(),t=this.consumeCodePoint();for(var i=[];Xt(t)&&i.length<6;)i.push(t),t=this.consumeCodePoint();return{type:30,start:r,end:parseInt(u.apply(void 0,i),16)}}return{type:30,start:r,end:r}},e.prototype.consumeIdentLikeToken=function(){var e=this.consumeName();return"url"===e.toLowerCase()&&this.peekCodePoint(0)===it?(this.consumeCodePoint(),this.consumeUrlToken()):this.peekCodePoint(0)===it?(this.consumeCodePoint(),{type:19,value:e}):{type:20,value:e}},e.prototype.consumeUrlToken=function(){var e=[];if(this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt)return{type:22,value:""};var t=this.peekCodePoint(0);if(t===rt||t===Ze){var n=this.consumeStringToken(this.consumeCodePoint());return 0===n.type&&(this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt||this.peekCodePoint(0)===At)?(this.consumeCodePoint(),{type:22,value:n.value}):(this.consumeBadUrlRemnants(),yn)}for(;;){var r=this.consumeCodePoint();if(r===Lt||r===At)return{type:22,value:u.apply(void 0,e)};if($t(r))return this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt||this.peekCodePoint(0)===At?(this.consumeCodePoint(),{type:22,value:u.apply(void 0,e)}):(this.consumeBadUrlRemnants(),yn);if(r===Ze||r===rt||r===it||nn(r))return this.consumeBadUrlRemnants(),yn;if(r===qe){if(!rn(r,this.peekCodePoint(0)))return this.consumeBadUrlRemnants(),yn;e.push(this.consumeEscapedCodePoint())}else e.push(r)}},e.prototype.consumeWhiteSpace=function(){for(;$t(this.peekCodePoint(0));)this.consumeCodePoint()},e.prototype.consumeBadUrlRemnants=function(){for(;;){var e=this.consumeCodePoint();if(e===At||e===Lt)return;rn(e,this.peekCodePoint(0))&&this.consumeEscapedCodePoint()}},e.prototype.consumeStringSlice=function(e){for(var t=5e4,n="";e>0;){var r=Math.min(t,e);n+=u.apply(void 0,this._value.splice(0,r)),e-=r}return this._value.shift(),n},e.prototype.consumeStringToken=function(e){for(var t="",n=0;;){var r=this._value[n];if(r===Lt||void 0===r||r===e)return{type:0,value:t+=this.consumeStringSlice(n)};if(r===je)return this._value.splice(0,n),wn;if(r===qe){var i=this._value[n+1];i!==Lt&&void 0!==i&&(i===je?(t+=this.consumeStringSlice(n),n=-1,this._value.shift()):rn(r,i)&&(t+=this.consumeStringSlice(n),t+=u(this.consumeEscapedCodePoint()),n=-1))}n++}},e.prototype.consumeNumber=function(){var 
e=[],t=Ke,n=this.peekCodePoint(0);for(n!==bt&&n!==ot||e.push(this.consumeCodePoint());Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());n=this.peekCodePoint(0);var r=this.peekCodePoint(1);if(n===Et&&Wt(r))for(e.push(this.consumeCodePoint(),this.consumeCodePoint()),t=We;Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());n=this.peekCodePoint(0),r=this.peekCodePoint(1);var i=this.peekCodePoint(2);if((n===Vt||n===Rt)&&((r===bt||r===ot)&&Wt(i)||Wt(r)))for(e.push(this.consumeCodePoint(),this.consumeCodePoint()),t=We;Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());return[on(e),t]},e.prototype.consumeNumericToken=function(){var e=this.consumeNumber(),t=e[0],n=e[1],r=this.peekCodePoint(0),i=this.peekCodePoint(1),A=this.peekCodePoint(2);return An(r,i,A)?{type:15,number:t,flags:n,unit:this.consumeName()}:r===nt?(this.consumeCodePoint(),{type:16,number:t,flags:n}):{type:17,number:t,flags:n}},e.prototype.consumeEscapedCodePoint=function(){var e=this.consumeCodePoint();if(Xt(e)){for(var t=u(e);Xt(this.peekCodePoint(0))&&t.length<6;)t+=u(this.consumeCodePoint());$t(this.peekCodePoint(0))&&this.consumeCodePoint();var n=parseInt(t,16);return 0===n||jt(n)||n>1114111?Bt:n}return e===Lt?Bt:e},e.prototype.consumeName=function(){for(var e="";;){var t=this.consumeCodePoint();if(tn(t))e+=u(t);else{if(!rn(t,this.peekCodePoint(0)))return this.reconsumeCodePoint(t),e;e+=u(this.consumeEscapedCodePoint())}}},e}(),Fn=function(){function e(e){this._tokens=e}return e.create=function(t){var n=new Mn;return n.write(t),new e(n.read())},e.parseValue=function(t){return e.create(t).parseComponentValue()},e.parseValues=function(t){return e.create(t).parseComponentValues()},e.prototype.parseComponentValue=function(){for(var e=this.consumeToken();31===e.type;)e=this.consumeToken();if(32===e.type)throw new SyntaxError("Error parsing CSS component value, unexpected EOF");this.reconsumeToken(e);var t=this.consumeComponentValue();do{e=this.consumeToken()}while(31===e.type);if(32===e.type)return t;throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one")},e.prototype.parseComponentValues=function(){for(var e=[];;){var t=this.consumeComponentValue();if(32===t.type)return e;e.push(t),e.push()}},e.prototype.consumeComponentValue=function(){var e=this.consumeToken();switch(e.type){case 11:case 28:case 2:return this.consumeSimpleBlock(e.type);case 19:return this.consumeFunction(e)}return e},e.prototype.consumeSimpleBlock=function(e){for(var t={type:e,values:[]},n=this.consumeToken();;){if(32===n.type||Pn(n,e))return t;this.reconsumeToken(n),t.values.push(this.consumeComponentValue()),n=this.consumeToken()}},e.prototype.consumeFunction=function(e){for(var t={name:e.value,values:[],type:18};;){var n=this.consumeToken();if(32===n.type||3===n.type)return t;this.reconsumeToken(n),t.values.push(this.consumeComponentValue())}},e.prototype.consumeToken=function(){var e=this._tokens.shift();return"undefined"===typeof e?Un:e},e.prototype.reconsumeToken=function(e){this._tokens.unshift(e)},e}(),Tn=function(e){return 15===e.type},kn=function(e){return 17===e.type},Qn=function(e){return 20===e.type},Ln=function(e){return 0===e.type},Dn=function(e,t){return Qn(e)&&e.value===t},In=function(e){return 31!==e.type},Rn=function(e){return 31!==e.type&&4!==e.type},Hn=function(e){var t=[],n=[];return e.forEach((function(e){if(4===e.type){if(0===n.length)throw new Error("Error parsing function args, zero tokens for arg");return 
t.push(n),void(n=[])}31!==e.type&&n.push(e)})),n.length&&t.push(n),t},Pn=function(e,t){return 11===t&&12===e.type||28===t&&29===e.type||2===t&&3===e.type},Nn=function(e){return 17===e.type||15===e.type},On=function(e){return 16===e.type||Nn(e)},Vn=function(e){return e.length>1?[e[0],e[1]]:[e[0]]},zn={type:17,number:0,flags:Ke},Gn={type:16,number:50,flags:Ke},Kn={type:16,number:100,flags:Ke},Wn=function(e,t,n){var r=e[0],i=e[1];return[jn(r,t),jn("undefined"!==typeof i?i:r,n)]},jn=function(e,t){if(16===e.type)return e.number/100*t;if(Tn(e))switch(e.unit){case"rem":case"em":return 16*e.number;default:return e.number}return e.number},Xn="deg",qn="grad",Yn="rad",Jn="turn",Zn={name:"angle",parse:function(e,t){if(15===t.type)switch(t.unit){case Xn:return Math.PI*t.number/180;case qn:return Math.PI/200*t.number;case Yn:return t.number;case Jn:return 2*Math.PI*t.number}throw new Error("Unsupported angle type")}},$n=function(e){return 15===e.type&&(e.unit===Xn||e.unit===qn||e.unit===Yn||e.unit===Jn)},er=function(e){switch(e.filter(Qn).map((function(e){return e.value})).join(" ")){case"to bottom right":case"to right bottom":case"left top":case"top left":return[zn,zn];case"to top":case"bottom":return tr(0);case"to bottom left":case"to left bottom":case"right top":case"top right":return[zn,Kn];case"to right":case"left":return tr(90);case"to top left":case"to left top":case"right bottom":case"bottom right":return[Kn,Kn];case"to bottom":case"top":return tr(180);case"to top right":case"to right top":case"left bottom":case"bottom left":return[Kn,zn];case"to left":case"right":return tr(270)}return 0},tr=function(e){return Math.PI*e/180},nr={name:"color",parse:function(e,t){if(18===t.type){var n=ur[t.name];if("undefined"===typeof n)throw new Error('Attempting to parse an unsupported color function "'+t.name+'"');return n(e,t.values)}if(5===t.type){if(3===t.value.length){var r=t.value.substring(0,1),i=t.value.substring(1,2),A=t.value.substring(2,3);return Ar(parseInt(r+r,16),parseInt(i+i,16),parseInt(A+A,16),1)}if(4===t.value.length){r=t.value.substring(0,1),i=t.value.substring(1,2),A=t.value.substring(2,3);var a=t.value.substring(3,4);return Ar(parseInt(r+r,16),parseInt(i+i,16),parseInt(A+A,16),parseInt(a+a,16)/255)}if(6===t.value.length)return r=t.value.substring(0,2),i=t.value.substring(2,4),A=t.value.substring(4,6),Ar(parseInt(r,16),parseInt(i,16),parseInt(A,16),1);if(8===t.value.length)return r=t.value.substring(0,2),i=t.value.substring(2,4),A=t.value.substring(4,6),a=t.value.substring(6,8),Ar(parseInt(r,16),parseInt(i,16),parseInt(A,16),parseInt(a,16)/255)}if(20===t.type){var o=dr[t.value.toUpperCase()];if("undefined"!==typeof o)return o}return dr.TRANSPARENT}},rr=function(e){return 0===(255&e)},ir=function(e){var t=255&e,n=255&e>>8,r=255&e>>16,i=255&e>>24;return t<255?"rgba("+i+","+r+","+n+","+t/255+")":"rgb("+i+","+r+","+n+")"},Ar=function(e,t,n,r){return(e<<24|t<<16|n<<8|Math.round(255*r)<<0)>>>0},ar=function(e,t){if(17===e.type)return e.number;if(16===e.type){var n=3===t?1:255;return 3===t?e.number/100*n:Math.round(e.number/100*n)}return 0},or=function(e,t){var n=t.filter(Rn);if(3===n.length){var r=n.map(ar),i=r[0],A=r[1],a=r[2];return Ar(i,A,a,1)}if(4===n.length){var o=n.map(ar),s=(i=o[0],A=o[1],a=o[2],o[3]);return Ar(i,A,a,s)}return 0};function sr(e,t,n){return n<0&&(n+=1),n>=1&&(n-=1),n<1/6?(t-e)*n*6+e:n<.5?t:n<2/3?6*(t-e)*(2/3-n)+e:e}var lr=function(e,t){var 
n=t.filter(Rn),r=n[0],i=n[1],A=n[2],a=n[3],o=(17===r.type?tr(r.number):Zn.parse(e,r))/(2*Math.PI),s=On(i)?i.number/100:0,l=On(A)?A.number/100:0,u="undefined"!==typeof a&&On(a)?jn(a,1):1;if(0===s)return Ar(255*l,255*l,255*l,1);var c=l<=.5?l*(s+1):l+s-l*s,d=2*l-c,h=sr(d,c,o+1/3),f=sr(d,c,o),p=sr(d,c,o-1/3);return Ar(255*h,255*f,255*p,u)},ur={hsl:lr,hsla:lr,rgb:or,rgba:or},cr=function(e,t){return nr.parse(e,Fn.create(t).parseComponentValue())},dr={ALICEBLUE:4042850303,ANTIQUEWHITE:4209760255,AQUA:16777215,AQUAMARINE:2147472639,AZURE:4043309055,BEIGE:4126530815,BISQUE:4293182719,BLACK:255,BLANCHEDALMOND:4293643775,BLUE:65535,BLUEVIOLET:2318131967,BROWN:2771004159,BURLYWOOD:3736635391,CADETBLUE:1604231423,CHARTREUSE:2147418367,CHOCOLATE:3530104575,CORAL:4286533887,CORNFLOWERBLUE:1687547391,CORNSILK:4294499583,CRIMSON:3692313855,CYAN:16777215,DARKBLUE:35839,DARKCYAN:9145343,DARKGOLDENROD:3095837695,DARKGRAY:2846468607,DARKGREEN:6553855,DARKGREY:2846468607,DARKKHAKI:3182914559,DARKMAGENTA:2332068863,DARKOLIVEGREEN:1433087999,DARKORANGE:4287365375,DARKORCHID:2570243327,DARKRED:2332033279,DARKSALMON:3918953215,DARKSEAGREEN:2411499519,DARKSLATEBLUE:1211993087,DARKSLATEGRAY:793726975,DARKSLATEGREY:793726975,DARKTURQUOISE:13554175,DARKVIOLET:2483082239,DEEPPINK:4279538687,DEEPSKYBLUE:12582911,DIMGRAY:1768516095,DIMGREY:1768516095,DODGERBLUE:512819199,FIREBRICK:2988581631,FLORALWHITE:4294635775,FORESTGREEN:579543807,FUCHSIA:4278255615,GAINSBORO:3705462015,GHOSTWHITE:4177068031,GOLD:4292280575,GOLDENROD:3668254975,GRAY:2155905279,GREEN:8388863,GREENYELLOW:2919182335,GREY:2155905279,HONEYDEW:4043305215,HOTPINK:4285117695,INDIANRED:3445382399,INDIGO:1258324735,IVORY:4294963455,KHAKI:4041641215,LAVENDER:3873897215,LAVENDERBLUSH:4293981695,LAWNGREEN:2096890111,LEMONCHIFFON:4294626815,LIGHTBLUE:2916673279,LIGHTCORAL:4034953471,LIGHTCYAN:3774873599,LIGHTGOLDENRODYELLOW:4210742015,LIGHTGRAY:3553874943,LIGHTGREEN:2431553791,LIGHTGREY:3553874943,LIGHTPINK:4290167295,LIGHTSALMON:4288707327,LIGHTSEAGREEN:548580095,LIGHTSKYBLUE:2278488831,LIGHTSLATEGRAY:2005441023,LIGHTSLATEGREY:2005441023,LIGHTSTEELBLUE:2965692159,LIGHTYELLOW:4294959359,LIME:16711935,LIMEGREEN:852308735,LINEN:4210091775,MAGENTA:4278255615,MAROON:2147483903,MEDIUMAQUAMARINE:1724754687,MEDIUMBLUE:52735,MEDIUMORCHID:3126187007,MEDIUMPURPLE:2473647103,MEDIUMSEAGREEN:1018393087,MEDIUMSLATEBLUE:2070474495,MEDIUMSPRINGGREEN:16423679,MEDIUMTURQUOISE:1221709055,MEDIUMVIOLETRED:3340076543,MIDNIGHTBLUE:421097727,MINTCREAM:4127193855,MISTYROSE:4293190143,MOCCASIN:4293178879,NAVAJOWHITE:4292783615,NAVY:33023,OLDLACE:4260751103,OLIVE:2155872511,OLIVEDRAB:1804477439,ORANGE:4289003775,ORANGERED:4282712319,ORCHID:3664828159,PALEGOLDENROD:4008225535,PALEGREEN:2566625535,PALETURQUOISE:2951671551,PALEVIOLETRED:3681588223,PAPAYAWHIP:4293907967,PEACHPUFF:4292524543,PERU:3448061951,PINK:4290825215,PLUM:3718307327,POWDERBLUE:2967529215,PURPLE:2147516671,REBECCAPURPLE:1714657791,RED:4278190335,ROSYBROWN:3163525119,ROYALBLUE:1097458175,SADDLEBROWN:2336560127,SALMON:4202722047,SANDYBROWN:4104413439,SEAGREEN:780883967,SEASHELL:4294307583,SIENNA:2689740287,SILVER:3233857791,SKYBLUE:2278484991,SLATEBLUE:1784335871,SLATEGRAY:1887473919,SLATEGREY:1887473919,SNOW:4294638335,SPRINGGREEN:16744447,STEELBLUE:1182971135,TAN:3535047935,TEAL:8421631,THISTLE:3636451583,TOMATO:4284696575,TRANSPARENT:0,TURQUOISE:1088475391,VIOLET:4001558271,WHEAT:4125012991,WHITE:4294967295,WHITESMOKE:4126537215,YELLOW:4294902015,YELLOWGREEN:2597139199},hr={name:"background-clip",initialValue:"border-box",p
refix:!1,type:1,parse:function(e,t){return t.map((function(e){if(Qn(e))switch(e.value){case"padding-box":return 1;case"content-box":return 2}return 0}))}},fr={name:"background-color",initialValue:"transparent",prefix:!1,type:3,format:"color"},pr=function(e,t){var n=nr.parse(e,t[0]),r=t[1];return r&&On(r)?{color:n,stop:r}:{color:n,stop:null}},gr=function(e,t){var n=e[0],r=e[e.length-1];null===n.stop&&(n.stop=zn),null===r.stop&&(r.stop=Kn);for(var i=[],A=0,a=0;aA?i.push(s):i.push(A),A=s}else i.push(null)}var l=null;for(a=0;ae.optimumDistance)?{optimumCorner:t,optimumDistance:o}:e}),{optimumDistance:i?1/0:-1/0,optimumCorner:null}).optimumCorner},Br=function(e,t,n,r,i){var A=0,a=0;switch(e.size){case 0:0===e.shape?A=a=Math.min(Math.abs(t),Math.abs(t-r),Math.abs(n),Math.abs(n-i)):1===e.shape&&(A=Math.min(Math.abs(t),Math.abs(t-r)),a=Math.min(Math.abs(n),Math.abs(n-i)));break;case 2:if(0===e.shape)A=a=Math.min(yr(t,n),yr(t,n-i),yr(t-r,n),yr(t-r,n-i));else if(1===e.shape){var o=Math.min(Math.abs(n),Math.abs(n-i))/Math.min(Math.abs(t),Math.abs(t-r)),s=wr(r,i,t,n,!0),l=s[0],u=s[1];a=o*(A=yr(l-t,(u-n)/o))}break;case 1:0===e.shape?A=a=Math.max(Math.abs(t),Math.abs(t-r),Math.abs(n),Math.abs(n-i)):1===e.shape&&(A=Math.max(Math.abs(t),Math.abs(t-r)),a=Math.max(Math.abs(n),Math.abs(n-i)));break;case 3:if(0===e.shape)A=a=Math.max(yr(t,n),yr(t,n-i),yr(t-r,n),yr(t-r,n-i));else if(1===e.shape){o=Math.max(Math.abs(n),Math.abs(n-i))/Math.max(Math.abs(t),Math.abs(t-r));var c=wr(r,i,t,n,!1);l=c[0],u=c[1],a=o*(A=yr(l-t,(u-n)/o))}}return Array.isArray(e.size)&&(A=jn(e.size[0],r),a=2===e.size.length?jn(e.size[1],i):A),[A,a]},_r=function(e,t){var n=tr(180),r=[];return Hn(t).forEach((function(t,i){if(0===i){var A=t[0];if(20===A.type&&-1!==["top","left","right","bottom"].indexOf(A.value))return void(n=er(t));if($n(A))return void(n=(Zn.parse(e,A)+tr(270))%tr(360))}var a=pr(e,t);r.push(a)})),{angle:n,stops:r,type:1}},br="closest-side",xr="farthest-side",Cr="closest-corner",Sr="farthest-corner",Er="circle",Ur="ellipse",Mr="cover",Fr="contain",Tr=function(e,t){var n=0,r=3,i=[],A=[];return Hn(t).forEach((function(t,a){var o=!0;if(0===a?o=t.reduce((function(e,t){if(Qn(t))switch(t.value){case"center":return A.push(Gn),!1;case"top":case"left":return A.push(zn),!1;case"right":case"bottom":return A.push(Kn),!1}else if(On(t)||Nn(t))return A.push(t),!1;return e}),o):1===a&&(o=t.reduce((function(e,t){if(Qn(t))switch(t.value){case Er:return n=0,!1;case Ur:return n=1,!1;case Fr:case br:return r=0,!1;case xr:return r=1,!1;case Cr:return r=2,!1;case Mr:case Sr:return r=3,!1}else if(Nn(t)||On(t))return Array.isArray(r)||(r=[]),r.push(t),!1;return e}),o)),o){var s=pr(e,t);i.push(s)}})),{size:r,shape:n,stops:i,position:A,type:2}},kr=function(e){return 1===e.type},Qr=function(e){return 2===e.type},Lr={name:"image",parse:function(e,t){if(22===t.type){var n={url:t.value,type:0};return e.cache.addImage(t.value),n}if(18===t.type){var r=Rr[t.name];if("undefined"===typeof r)throw new Error('Attempting to parse an unsupported image function "'+t.name+'"');return r(e,t.values)}throw new Error("Unsupported image type "+t.type)}};function Dr(e){return!(20===e.type&&"none"===e.value)&&(18!==e.type||!!Rr[e.name])}var Ir,Rr={"linear-gradient":function(e,t){var n=tr(180),r=[];return Hn(t).forEach((function(t,i){if(0===i){var A=t[0];if(20===A.type&&"to"===A.value)return void(n=er(t));if($n(A))return void(n=Zn.parse(e,A))}var 
a=pr(e,t);r.push(a)})),{angle:n,stops:r,type:1}},"-moz-linear-gradient":_r,"-ms-linear-gradient":_r,"-o-linear-gradient":_r,"-webkit-linear-gradient":_r,"radial-gradient":function(e,t){var n=0,r=3,i=[],A=[];return Hn(t).forEach((function(t,a){var o=!0;if(0===a){var s=!1;o=t.reduce((function(e,t){if(s)if(Qn(t))switch(t.value){case"center":return A.push(Gn),e;case"top":case"left":return A.push(zn),e;case"right":case"bottom":return A.push(Kn),e}else(On(t)||Nn(t))&&A.push(t);else if(Qn(t))switch(t.value){case Er:return n=0,!1;case Ur:return n=1,!1;case"at":return s=!0,!1;case br:return r=0,!1;case Mr:case xr:return r=1,!1;case Fr:case Cr:return r=2,!1;case Sr:return r=3,!1}else if(Nn(t)||On(t))return Array.isArray(r)||(r=[]),r.push(t),!1;return e}),o)}if(o){var l=pr(e,t);i.push(l)}})),{size:r,shape:n,stops:i,position:A,type:2}},"-moz-radial-gradient":Tr,"-ms-radial-gradient":Tr,"-o-radial-gradient":Tr,"-webkit-radial-gradient":Tr,"-webkit-gradient":function(e,t){var n=tr(180),r=[],i=1,A=0,a=3,o=[];return Hn(t).forEach((function(t,n){var A=t[0];if(0===n){if(Qn(A)&&"linear"===A.value)return void(i=1);if(Qn(A)&&"radial"===A.value)return void(i=2)}if(18===A.type)if("from"===A.name){var a=nr.parse(e,A.values[0]);r.push({stop:zn,color:a})}else if("to"===A.name)a=nr.parse(e,A.values[0]),r.push({stop:Kn,color:a});else if("color-stop"===A.name){var o=A.values.filter(Rn);if(2===o.length){a=nr.parse(e,o[1]);var s=o[0];kn(s)&&r.push({stop:{type:16,number:100*s.number,flags:s.flags},color:a})}}})),1===i?{angle:(n+tr(180))%tr(360),stops:r,type:i}:{size:a,shape:A,stops:r,position:o,type:i}}},Hr={name:"background-image",initialValue:"none",type:1,prefix:!1,parse:function(e,t){if(0===t.length)return[];var n=t[0];return 20===n.type&&"none"===n.value?[]:t.filter((function(e){return Rn(e)&&Dr(e)})).map((function(t){return Lr.parse(e,t)}))}},Pr={name:"background-origin",initialValue:"border-box",prefix:!1,type:1,parse:function(e,t){return t.map((function(e){if(Qn(e))switch(e.value){case"padding-box":return 1;case"content-box":return 2}return 0}))}},Nr={name:"background-position",initialValue:"0% 0%",type:1,prefix:!1,parse:function(e,t){return Hn(t).map((function(e){return e.filter(On)})).map(Vn)}},Or={name:"background-repeat",initialValue:"repeat",prefix:!1,type:1,parse:function(e,t){return Hn(t).map((function(e){return e.filter(Qn).map((function(e){return e.value})).join(" ")})).map(Vr)}},Vr=function(e){switch(e){case"no-repeat":return 1;case"repeat-x":case"repeat no-repeat":return 2;case"repeat-y":case"no-repeat repeat":return 3;default:return 0}};!function(e){e.AUTO="auto",e.CONTAIN="contain",e.COVER="cover"}(Ir||(Ir={}));var zr,Gr={name:"background-size",initialValue:"0",prefix:!1,type:1,parse:function(e,t){return Hn(t).map((function(e){return e.filter(Kr)}))}},Kr=function(e){return Qn(e)||On(e)},Wr=function(e){return{name:"border-"+e+"-color",initialValue:"transparent",prefix:!1,type:3,format:"color"}},jr=Wr("top"),Xr=Wr("right"),qr=Wr("bottom"),Yr=Wr("left"),Jr=function(e){return{name:"border-radius-"+e,initialValue:"0 0",prefix:!1,type:1,parse:function(e,t){return Vn(t.filter(On))}}},Zr=Jr("top-left"),$r=Jr("top-right"),ei=Jr("bottom-right"),ti=Jr("bottom-left"),ni=function(e){return{name:"border-"+e+"-style",initialValue:"solid",prefix:!1,type:2,parse:function(e,t){switch(t){case"none":return 0;case"dashed":return 2;case"dotted":return 3;case"double":return 4}return 
1}}},ri=ni("top"),ii=ni("right"),Ai=ni("bottom"),ai=ni("left"),oi=function(e){return{name:"border-"+e+"-width",initialValue:"0",type:0,prefix:!1,parse:function(e,t){return Tn(t)?t.number:0}}},si=oi("top"),li=oi("right"),ui=oi("bottom"),ci=oi("left"),di={name:"color",initialValue:"transparent",prefix:!1,type:3,format:"color"},hi={name:"direction",initialValue:"ltr",prefix:!1,type:2,parse:function(e,t){return"rtl"===t?1:0}},fi={name:"display",initialValue:"inline-block",prefix:!1,type:1,parse:function(e,t){return t.filter(Qn).reduce((function(e,t){return e|pi(t.value)}),0)}},pi=function(e){switch(e){case"block":case"-webkit-box":return 2;case"inline":return 4;case"run-in":return 8;case"flow":return 16;case"flow-root":return 32;case"table":return 64;case"flex":case"-webkit-flex":return 128;case"grid":case"-ms-grid":return 256;case"ruby":return 512;case"subgrid":return 1024;case"list-item":return 2048;case"table-row-group":return 4096;case"table-header-group":return 8192;case"table-footer-group":return 16384;case"table-row":return 32768;case"table-cell":return 65536;case"table-column-group":return 131072;case"table-column":return 262144;case"table-caption":return 524288;case"ruby-base":return 1048576;case"ruby-text":return 2097152;case"ruby-base-container":return 4194304;case"ruby-text-container":return 8388608;case"contents":return 16777216;case"inline-block":return 33554432;case"inline-list-item":return 67108864;case"inline-table":return 134217728;case"inline-flex":return 268435456;case"inline-grid":return 536870912}return 0},gi={name:"float",initialValue:"none",prefix:!1,type:2,parse:function(e,t){switch(t){case"left":return 1;case"right":return 2;case"inline-start":return 3;case"inline-end":return 4}return 0}},mi={name:"letter-spacing",initialValue:"0",prefix:!1,type:0,parse:function(e,t){return 20===t.type&&"normal"===t.value?0:17===t.type||15===t.type?t.number:0}};!function(e){e.NORMAL="normal",e.STRICT="strict"}(zr||(zr={}));var vi,yi={name:"line-break",initialValue:"normal",prefix:!1,type:2,parse:function(e,t){return"strict"===t?zr.STRICT:zr.NORMAL}},wi={name:"line-height",initialValue:"normal",prefix:!1,type:4},Bi=function(e,t){return Qn(e)&&"normal"===e.value?1.2*t:17===e.type?t*e.number:On(e)?jn(e,t):t},_i={name:"list-style-image",initialValue:"none",type:0,prefix:!1,parse:function(e,t){return 20===t.type&&"none"===t.value?null:Lr.parse(e,t)}},bi={name:"list-style-position",initialValue:"outside",prefix:!1,type:2,parse:function(e,t){return"inside"===t?0:1}},xi={name:"list-style-type",initialValue:"none",prefix:!1,type:2,parse:function(e,t){switch(t){case"disc":return 0;case"circle":return 1;case"square":return 2;case"decimal":return 3;case"cjk-decimal":return 4;case"decimal-leading-zero":return 5;case"lower-roman":return 6;case"upper-roman":return 7;case"lower-greek":return 8;case"lower-alpha":return 9;case"upper-alpha":return 10;case"arabic-indic":return 11;case"armenian":return 12;case"bengali":return 13;case"cambodian":return 14;case"cjk-earthly-branch":return 15;case"cjk-heavenly-stem":return 16;case"cjk-ideographic":return 17;case"devanagari":return 18;case"ethiopic-numeric":return 19;case"georgian":return 20;case"gujarati":return 21;case"gurmukhi":case"hebrew":return 22;case"hiragana":return 23;case"hiragana-iroha":return 24;case"japanese-formal":return 25;case"japanese-informal":return 26;case"kannada":return 27;case"katakana":return 28;case"katakana-iroha":return 29;case"khmer":return 30;case"korean-hangul-formal":return 31;case"korean-hanja-formal":return 
32;case"korean-hanja-informal":return 33;case"lao":return 34;case"lower-armenian":return 35;case"malayalam":return 36;case"mongolian":return 37;case"myanmar":return 38;case"oriya":return 39;case"persian":return 40;case"simp-chinese-formal":return 41;case"simp-chinese-informal":return 42;case"tamil":return 43;case"telugu":return 44;case"thai":return 45;case"tibetan":return 46;case"trad-chinese-formal":return 47;case"trad-chinese-informal":return 48;case"upper-armenian":return 49;case"disclosure-open":return 50;case"disclosure-closed":return 51;default:return-1}}},Ci=function(e){return{name:"margin-"+e,initialValue:"0",prefix:!1,type:4}},Si=Ci("top"),Ei=Ci("right"),Ui=Ci("bottom"),Mi=Ci("left"),Fi={name:"overflow",initialValue:"visible",prefix:!1,type:1,parse:function(e,t){return t.filter(Qn).map((function(e){switch(e.value){case"hidden":return 1;case"scroll":return 2;case"clip":return 3;case"auto":return 4;default:return 0}}))}},Ti={name:"overflow-wrap",initialValue:"normal",prefix:!1,type:2,parse:function(e,t){return"break-word"===t?"break-word":"normal"}},ki=function(e){return{name:"padding-"+e,initialValue:"0",prefix:!1,type:3,format:"length-percentage"}},Qi=ki("top"),Li=ki("right"),Di=ki("bottom"),Ii=ki("left"),Ri={name:"text-align",initialValue:"left",prefix:!1,type:2,parse:function(e,t){switch(t){case"right":return 2;case"center":case"justify":return 1;default:return 0}}},Hi={name:"position",initialValue:"static",prefix:!1,type:2,parse:function(e,t){switch(t){case"relative":return 1;case"absolute":return 2;case"fixed":return 3;case"sticky":return 4}return 0}},Pi={name:"text-shadow",initialValue:"none",type:1,prefix:!1,parse:function(e,t){return 1===t.length&&Dn(t[0],"none")?[]:Hn(t).map((function(t){for(var n={color:dr.TRANSPARENT,offsetX:zn,offsetY:zn,blur:zn},r=0,i=0;i1?1:0],this.overflowWrap=vA(e,Ti,t.overflowWrap),this.paddingTop=vA(e,Qi,t.paddingTop),this.paddingRight=vA(e,Li,t.paddingRight),this.paddingBottom=vA(e,Di,t.paddingBottom),this.paddingLeft=vA(e,Ii,t.paddingLeft),this.paintOrder=vA(e,dA,t.paintOrder),this.position=vA(e,Hi,t.position),this.textAlign=vA(e,Ri,t.textAlign),this.textDecorationColor=vA(e,Ji,null!==(n=t.textDecorationColor)&&void 0!==n?n:t.color),this.textDecorationLine=vA(e,Zi,null!==(r=t.textDecorationLine)&&void 0!==r?r:t.textDecoration),this.textShadow=vA(e,Pi,t.textShadow),this.textTransform=vA(e,Ni,t.textTransform),this.transform=vA(e,Oi,t.transform),this.transformOrigin=vA(e,Ki,t.transformOrigin),this.visibility=vA(e,Wi,t.visibility),this.webkitTextStrokeColor=vA(e,hA,t.webkitTextStrokeColor),this.webkitTextStrokeWidth=vA(e,fA,t.webkitTextStrokeWidth),this.wordBreak=vA(e,ji,t.wordBreak),this.zIndex=vA(e,Xi,t.zIndex)}return e.prototype.isVisible=function(){return this.display>0&&this.opacity>0&&0===this.visibility},e.prototype.isTransparent=function(){return rr(this.backgroundColor)},e.prototype.isTransformed=function(){return null!==this.transform},e.prototype.isPositioned=function(){return 0!==this.position},e.prototype.isPositionedWithZIndex=function(){return this.isPositioned()&&!this.zIndex.auto},e.prototype.isFloating=function(){return 0!==this.float},e.prototype.isInlineLevel=function(){return iA(this.display,4)||iA(this.display,33554432)||iA(this.display,268435456)||iA(this.display,536870912)||iA(this.display,67108864)||iA(this.display,134217728)},e}(),gA=function(){function e(e,t){this.content=vA(e,AA,t.content),this.quotes=vA(e,lA,t.quotes)}return e}(),mA=function(){function 
e(e,t){this.counterIncrement=vA(e,aA,t.counterIncrement),this.counterReset=vA(e,oA,t.counterReset)}return e}(),vA=function(e,t,n){var r=new Mn,i=null!==n&&"undefined"!==typeof n?n.toString():t.initialValue;r.write(i);var A=new Fn(r.read());switch(t.type){case 2:var a=A.parseComponentValue();return t.parse(e,Qn(a)?a.value:t.initialValue);case 0:return t.parse(e,A.parseComponentValue());case 1:return t.parse(e,A.parseComponentValues());case 4:return A.parseComponentValue();case 3:switch(t.format){case"angle":return Zn.parse(e,A.parseComponentValue());case"color":return nr.parse(e,A.parseComponentValue());case"image":return Lr.parse(e,A.parseComponentValue());case"length":var o=A.parseComponentValue();return Nn(o)?o:zn;case"length-percentage":var s=A.parseComponentValue();return On(s)?s:zn;case"time":return qi.parse(e,A.parseComponentValue())}}},yA="data-html2canvas-debug",wA=function(e){switch(e.getAttribute(yA)){case"all":return 1;case"clone":return 2;case"parse":return 3;case"render":return 4;default:return 0}},BA=function(e,t){var n=wA(e);return 1===n||t===n},_A=function(){function e(e,t){this.context=e,this.textNodes=[],this.elements=[],this.flags=0,BA(t,3),this.styles=new pA(e,window.getComputedStyle(t,null)),lo(t)&&(this.styles.animationDuration.some((function(e){return e>0}))&&(t.style.animationDuration="0s"),null!==this.styles.transform&&(t.style.transform="none")),this.bounds=o(this.context,t),BA(t,4)&&(this.flags|=16)}return e}(),bA="AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAC
AAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEA
AjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHA
AcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA=",xA="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",CA="undefined"===typeof Uint8Array?[]:new Uint8Array(256),SA=0;SA>4,u[s++]=(15&r)<<4|i>>2,u[s++]=(3&i)<<6|63&A;return l},UA=function(e){for(var t=e.length,n=[],r=0;r>FA,LA=(1<>FA)+32,IA=65536>>TA,RA=(1<=0){if(e<55296||e>56319&&e<=65535)return t=((t=this.index[e>>FA])<>FA)])<>TA),t=this.index[t],t+=e>>FA&RA,t=((t=this.index[t])<=55296&&i<=56319&&n>10),a%1024+56320)),(i+1===n||r.length>16384)&&(A+=String.fromCharCode.apply(String,r),r.length=0)}return A},sa=NA(bA),la="\xd7",ua="\xf7",ca=function(e){return sa.get(e)},da=function(e,t,n){var r=n-2,i=t[r],A=t[n-1],a=t[n];if(A===jA&&a===XA)return la;if(A===jA||A===XA||A===qA)return ua;if(a===jA||a===XA||a===qA)return ua;if(A===ZA&&-1!==[ZA,$A,ta,na].indexOf(a))return 
la;if((A===ta||A===$A)&&(a===$A||a===ea))return la;if((A===na||A===ea)&&a===ea)return la;if(a===ra||a===YA)return la;if(a===JA)return la;if(A===WA)return la;if(A===ra&&a===ia){for(;i===YA;)i=t[--r];if(i===ia)return la}if(A===Aa&&a===Aa){for(var o=0;i===Aa;)o++,i=t[--r];if(o%2===0)return la}return ua},ha=function(e){var t=aa(e),n=t.length,r=0,i=0,A=t.map(ca);return{next:function(){if(r>=n)return{done:!0,value:null};for(var e=la;ra.x||i.y>a.y;return a=i,0===t||o}));return e.body.removeChild(t),o},ma=function(){return"undefined"!==typeof(new Image).crossOrigin},va=function(){return"string"===typeof(new XMLHttpRequest).responseType},ya=function(e){var t=new Image,n=e.createElement("canvas"),r=n.getContext("2d");if(!r)return!1;t.src="data:image/svg+xml,";try{r.drawImage(t,0,0),n.toDataURL()}catch(Rt){return!1}return!0},wa=function(e){return 0===e[0]&&255===e[1]&&0===e[2]&&255===e[3]},Ba=function(e){var t=e.createElement("canvas"),n=100;t.width=n,t.height=n;var r=t.getContext("2d");if(!r)return Promise.reject(!1);r.fillStyle="rgb(0, 255, 0)",r.fillRect(0,0,n,n);var i=new Image,A=t.toDataURL();i.src=A;var a=_a(n,n,0,0,i);return r.fillStyle="red",r.fillRect(0,0,n,n),ba(a).then((function(t){r.drawImage(t,0,0);var i=r.getImageData(0,0,n,n).data;r.fillStyle="red",r.fillRect(0,0,n,n);var a=e.createElement("div");return a.style.backgroundImage="url("+A+")",a.style.height=n+"px",wa(i)?ba(_a(n,n,0,0,a)):Promise.reject(!1)})).then((function(e){return r.drawImage(e,0,0),wa(r.getImageData(0,0,n,n).data)})).catch((function(){return!1}))},_a=function(e,t,n,r,i){var A="http://www.w3.org/2000/svg",a=document.createElementNS(A,"svg"),o=document.createElementNS(A,"foreignObject");return a.setAttributeNS(null,"width",e.toString()),a.setAttributeNS(null,"height",t.toString()),o.setAttributeNS(null,"width","100%"),o.setAttributeNS(null,"height","100%"),o.setAttributeNS(null,"x",n.toString()),o.setAttributeNS(null,"y",r.toString()),o.setAttributeNS(null,"externalResourcesRequired","true"),a.appendChild(o),o.appendChild(i),a},ba=function(e){return new Promise((function(t,n){var r=new Image;r.onload=function(){return t(r)},r.onerror=n,r.src="data:image/svg+xml;charset=utf-8,"+encodeURIComponent((new XMLSerializer).serializeToString(e))}))},xa={get SUPPORT_RANGE_BOUNDS(){var e=pa(document);return Object.defineProperty(xa,"SUPPORT_RANGE_BOUNDS",{value:e}),e},get SUPPORT_WORD_BREAKING(){var e=xa.SUPPORT_RANGE_BOUNDS&&ga(document);return Object.defineProperty(xa,"SUPPORT_WORD_BREAKING",{value:e}),e},get SUPPORT_SVG_DRAWING(){var e=ya(document);return Object.defineProperty(xa,"SUPPORT_SVG_DRAWING",{value:e}),e},get SUPPORT_FOREIGNOBJECT_DRAWING(){var e="function"===typeof Array.from&&"function"===typeof window.fetch?Ba(document):Promise.resolve(!1);return Object.defineProperty(xa,"SUPPORT_FOREIGNOBJECT_DRAWING",{value:e}),e},get SUPPORT_CORS_IMAGES(){var e=ma();return Object.defineProperty(xa,"SUPPORT_CORS_IMAGES",{value:e}),e},get SUPPORT_RESPONSE_TYPE(){var e=va();return Object.defineProperty(xa,"SUPPORT_RESPONSE_TYPE",{value:e}),e},get SUPPORT_CORS_XHR(){var e="withCredentials"in new XMLHttpRequest;return Object.defineProperty(xa,"SUPPORT_CORS_XHR",{value:e}),e},get SUPPORT_NATIVE_TEXT_SEGMENTATION(){var e=!("undefined"===typeof Intl||!Intl.Segmenter);return Object.defineProperty(xa,"SUPPORT_NATIVE_TEXT_SEGMENTATION",{value:e}),e}},Ca=function(){function e(e,t){this.text=e,this.bounds=t}return e}(),Sa=function(e,t,n,r){var i=Ta(t,n),A=[],o=0;return 
i.forEach((function(t){if(n.textDecorationLine.length||t.trim().length>0)if(xa.SUPPORT_RANGE_BOUNDS){var i=Ua(r,o,t.length).getClientRects();if(i.length>1){var s=Ma(t),l=0;s.forEach((function(t){A.push(new Ca(t,a.fromDOMRectList(e,Ua(r,l+o,t.length).getClientRects()))),l+=t.length}))}else A.push(new Ca(t,a.fromDOMRectList(e,i)))}else{var u=r.splitText(t.length);A.push(new Ca(t,Ea(e,r))),r=u}else xa.SUPPORT_RANGE_BOUNDS||(r=r.splitText(t.length));o+=t.length})),A},Ea=function(e,t){var n=t.ownerDocument;if(n){var r=n.createElement("html2canvaswrapper");r.appendChild(t.cloneNode(!0));var i=t.parentNode;if(i){i.replaceChild(r,t);var A=o(e,r);return r.firstChild&&i.replaceChild(r.firstChild,r),A}}return a.EMPTY},Ua=function(e,t,n){var r=e.ownerDocument;if(!r)throw new Error("Node has no owner document");var i=r.createRange();return i.setStart(e,t),i.setEnd(e,t+n),i},Ma=function(e){if(xa.SUPPORT_NATIVE_TEXT_SEGMENTATION){var t=new Intl.Segmenter(void 0,{granularity:"grapheme"});return Array.from(t.segment(e)).map((function(e){return e.segment}))}return fa(e)},Fa=function(e,t){if(xa.SUPPORT_NATIVE_TEXT_SEGMENTATION){var n=new Intl.Segmenter(void 0,{granularity:"word"});return Array.from(n.segment(e)).map((function(e){return e.segment}))}return Qa(e,t)},Ta=function(e,t){return 0!==t.letterSpacing?Ma(e):Fa(e,t)},ka=[32,160,4961,65792,65793,4153,4241],Qa=function(e,t){for(var n,r=Ve(e,{lineBreak:t.lineBreak,wordBreak:"break-word"===t.overflowWrap?"break-word":t.wordBreak}),i=[],A=function(){if(n.value){var e=n.value.slice(),t=l(e),r="";t.forEach((function(e){-1===ka.indexOf(e)?r+=u(e):(r.length&&i.push(r),i.push(u(e)),r="")})),r.length&&i.push(r)}};!(n=r.next()).done;)A();return i},La=function(){function e(e,t,n){this.text=Da(t.data,n.textTransform),this.textBounds=Sa(e,this.text,n,t)}return e}(),Da=function(e,t){switch(t){case 1:return e.toLowerCase();case 3:return e.replace(Ia,Ra);case 2:return e.toUpperCase();default:return e}},Ia=/(^|\s|:|-|\(|\))([a-z])/g,Ra=function(e,t,n){return e.length>0?t+n.toUpperCase():e},Ha=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.src=n.currentSrc||n.src,r.intrinsicWidth=n.naturalWidth,r.intrinsicHeight=n.naturalHeight,r.context.cache.addImage(r.src),r}return t(n,e),n}(_A),Pa=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.canvas=n,r.intrinsicWidth=n.width,r.intrinsicHeight=n.height,r}return t(n,e),n}(_A),Na=function(e){function n(t,n){var r=e.call(this,t,n)||this,i=new XMLSerializer,A=o(t,n);return n.setAttribute("width",A.width+"px"),n.setAttribute("height",A.height+"px"),r.svg="data:image/svg+xml,"+encodeURIComponent(i.serializeToString(n)),r.intrinsicWidth=n.width.baseVal.value,r.intrinsicHeight=n.height.baseVal.value,r.context.cache.addImage(r.svg),r}return t(n,e),n}(_A),Oa=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.value=n.value,r}return t(n,e),n}(_A),Va=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.start=n.start,r.reversed="boolean"===typeof n.reversed&&!0===n.reversed,r}return t(n,e),n}(_A),za=[{type:15,flags:0,unit:"px",number:3}],Ga=[{type:16,flags:0,number:50}],Ka=function(e){return e.width>e.height?new a(e.left+(e.width-e.height)/2,e.top,e.height,e.height):e.width0)r.textNodes.push(new La(t,A,r.styles));else if(so(A))if(So(A)&&A.assignedNodes)A.assignedNodes().forEach((function(n){return e(t,n,r,i)}));else{var 
o=ro(t,A);o.styles.isVisible()&&(Ao(A,o,i)?o.flags|=4:ao(o.styles)&&(o.flags|=2),-1!==to.indexOf(A.tagName)&&(o.flags|=8),r.elements.push(o),A.slot,A.shadowRoot?e(t,A.shadowRoot,o,i):xo(A)||go(A)||Co(A)||e(t,A,o,i))}},ro=function(e,t){return wo(t)?new Ha(e,t):vo(t)?new Pa(e,t):go(t)?new Na(e,t):co(t)?new Oa(e,t):ho(t)?new Va(e,t):fo(t)?new Ja(e,t):Co(t)?new Za(e,t):xo(t)?new $a(e,t):Bo(t)?new eo(e,t):new _A(e,t)},io=function(e,t){var n=ro(e,t);return n.flags|=4,no(e,t,n,n),n},Ao=function(e,t,n){return t.styles.isPositionedWithZIndex()||t.styles.opacity<1||t.styles.isTransformed()||mo(e)&&n.styles.isTransparent()},ao=function(e){return e.isPositioned()||e.isFloating()},oo=function(e){return e.nodeType===Node.TEXT_NODE},so=function(e){return e.nodeType===Node.ELEMENT_NODE},lo=function(e){return so(e)&&"undefined"!==typeof e.style&&!uo(e)},uo=function(e){return"object"===typeof e.className},co=function(e){return"LI"===e.tagName},ho=function(e){return"OL"===e.tagName},fo=function(e){return"INPUT"===e.tagName},po=function(e){return"HTML"===e.tagName},go=function(e){return"svg"===e.tagName},mo=function(e){return"BODY"===e.tagName},vo=function(e){return"CANVAS"===e.tagName},yo=function(e){return"VIDEO"===e.tagName},wo=function(e){return"IMG"===e.tagName},Bo=function(e){return"IFRAME"===e.tagName},_o=function(e){return"STYLE"===e.tagName},bo=function(e){return"SCRIPT"===e.tagName},xo=function(e){return"TEXTAREA"===e.tagName},Co=function(e){return"SELECT"===e.tagName},So=function(e){return"SLOT"===e.tagName},Eo=function(e){return e.tagName.indexOf("-")>0},Uo=function(){function e(){this.counters={}}return e.prototype.getCounterValue=function(e){var t=this.counters[e];return t&&t.length?t[t.length-1]:1},e.prototype.getCounterValues=function(e){var t=this.counters[e];return t||[]},e.prototype.pop=function(e){var t=this;e.forEach((function(e){return t.counters[e].pop()}))},e.prototype.parse=function(e){var t=this,n=e.counterIncrement,r=e.counterReset,i=!0;null!==n&&n.forEach((function(e){var n=t.counters[e.counter];n&&0!==e.increment&&(i=!1,n.length||n.push(1),n[Math.max(0,n.length-1)]+=e.increment)}));var A=[];return i&&r.forEach((function(e){var 
n=t.counters[e.counter];A.push(e.counter),n||(n=t.counters[e.counter]=[]),n.push(e.reset)})),A},e}(),Mo={integers:[1e3,900,500,400,100,90,50,40,10,9,5,4,1],values:["M","CM","D","CD","C","XC","L","XL","X","IX","V","IV","I"]},Fo={integers:[9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,900,800,700,600,500,400,300,200,100,90,80,70,60,50,40,30,20,10,9,8,7,6,5,4,3,2,1],values:["\u0554","\u0553","\u0552","\u0551","\u0550","\u054f","\u054e","\u054d","\u054c","\u054b","\u054a","\u0549","\u0548","\u0547","\u0546","\u0545","\u0544","\u0543","\u0542","\u0541","\u0540","\u053f","\u053e","\u053d","\u053c","\u053b","\u053a","\u0539","\u0538","\u0537","\u0536","\u0535","\u0534","\u0533","\u0532","\u0531"]},To={integers:[1e4,9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,400,300,200,100,90,80,70,60,50,40,30,20,19,18,17,16,15,10,9,8,7,6,5,4,3,2,1],values:["\u05d9\u05f3","\u05d8\u05f3","\u05d7\u05f3","\u05d6\u05f3","\u05d5\u05f3","\u05d4\u05f3","\u05d3\u05f3","\u05d2\u05f3","\u05d1\u05f3","\u05d0\u05f3","\u05ea","\u05e9","\u05e8","\u05e7","\u05e6","\u05e4","\u05e2","\u05e1","\u05e0","\u05de","\u05dc","\u05db","\u05d9\u05d8","\u05d9\u05d7","\u05d9\u05d6","\u05d8\u05d6","\u05d8\u05d5","\u05d9","\u05d8","\u05d7","\u05d6","\u05d5","\u05d4","\u05d3","\u05d2","\u05d1","\u05d0"]},ko={integers:[1e4,9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,900,800,700,600,500,400,300,200,100,90,80,70,60,50,40,30,20,10,9,8,7,6,5,4,3,2,1],values:["\u10f5","\u10f0","\u10ef","\u10f4","\u10ee","\u10ed","\u10ec","\u10eb","\u10ea","\u10e9","\u10e8","\u10e7","\u10e6","\u10e5","\u10e4","\u10f3","\u10e2","\u10e1","\u10e0","\u10df","\u10de","\u10dd","\u10f2","\u10dc","\u10db","\u10da","\u10d9","\u10d8","\u10d7","\u10f1","\u10d6","\u10d5","\u10d4","\u10d3","\u10d2","\u10d1","\u10d0"]},Qo=function(e,t,n,r,i,A){return en?Wo(e,i,A.length>0):r.integers.reduce((function(t,n,i){for(;e>=n;)e-=n,t+=r.values[i];return t}),"")+A},Lo=function(e,t,n,r){var i="";do{n||e--,i=r(e)+i,e/=t}while(e*t>=t);return i},Do=function(e,t,n,r,i){var A=n-t+1;return(e<0?"-":"")+(Lo(Math.abs(e),A,r,(function(e){return u(Math.floor(e%A)+t)}))+i)},Io=function(e,t,n){void 0===n&&(n=". ");var r=t.length;return Lo(Math.abs(e),r,!1,(function(e){return t[Math.floor(e%r)]}))+n},Ro=1,Ho=2,Po=4,No=8,Oo=function(e,t,n,r,i,A){if(e<-9999||e>9999)return Wo(e,4,i.length>0);var a=Math.abs(e),o=i;if(0===a)return t[0]+o;for(var s=0;a>0&&s<=4;s++){var l=a%10;0===l&&iA(A,Ro)&&""!==o?o=t[l]+o:l>1||1===l&&0===s||1===l&&1===s&&iA(A,Ho)||1===l&&1===s&&iA(A,Po)&&e>100||1===l&&s>1&&iA(A,No)?o=t[l]+(s>0?n[s-1]:"")+o:1===l&&s>0&&(o=n[s-1]+o),a=Math.floor(a/10)}return(e<0?r:"")+o},Vo="\u5341\u767e\u5343\u842c",zo="\u62fe\u4f70\u4edf\u842c",Go="\u30de\u30a4\u30ca\u30b9",Ko="\ub9c8\uc774\ub108\uc2a4",Wo=function(e,t,n){var r=n?". ":"",i=n?"\u3001":"",A=n?", ":"",a=n?" 
":"";switch(t){case 0:return"\u2022"+a;case 1:return"\u25e6"+a;case 2:return"\u25fe"+a;case 5:var o=Do(e,48,57,!0,r);return o.length<4?"0"+o:o;case 4:return Io(e,"\u3007\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",i);case 6:return Qo(e,1,3999,Mo,3,r).toLowerCase();case 7:return Qo(e,1,3999,Mo,3,r);case 8:return Do(e,945,969,!1,r);case 9:return Do(e,97,122,!1,r);case 10:return Do(e,65,90,!1,r);case 11:return Do(e,1632,1641,!0,r);case 12:case 49:return Qo(e,1,9999,Fo,3,r);case 35:return Qo(e,1,9999,Fo,3,r).toLowerCase();case 13:return Do(e,2534,2543,!0,r);case 14:case 30:return Do(e,6112,6121,!0,r);case 15:return Io(e,"\u5b50\u4e11\u5bc5\u536f\u8fb0\u5df3\u5348\u672a\u7533\u9149\u620c\u4ea5",i);case 16:return Io(e,"\u7532\u4e59\u4e19\u4e01\u620a\u5df1\u5e9a\u8f9b\u58ec\u7678",i);case 17:case 48:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",Vo,"\u8ca0",i,Ho|Po|No);case 47:return Oo(e,"\u96f6\u58f9\u8cb3\u53c3\u8086\u4f0d\u9678\u67d2\u634c\u7396",zo,"\u8ca0",i,Ro|Ho|Po|No);case 42:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",Vo,"\u8d1f",i,Ho|Po|No);case 41:return Oo(e,"\u96f6\u58f9\u8d30\u53c1\u8086\u4f0d\u9646\u67d2\u634c\u7396",zo,"\u8d1f",i,Ro|Ho|Po|No);case 26:return Oo(e,"\u3007\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u5341\u767e\u5343\u4e07",Go,i,0);case 25:return Oo(e,"\u96f6\u58f1\u5f10\u53c2\u56db\u4f0d\u516d\u4e03\u516b\u4e5d","\u62fe\u767e\u5343\u4e07",Go,i,Ro|Ho|Po);case 31:return Oo(e,"\uc601\uc77c\uc774\uc0bc\uc0ac\uc624\uc721\uce60\ud314\uad6c","\uc2ed\ubc31\ucc9c\ub9cc",Ko,A,Ro|Ho|Po);case 33:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u5341\u767e\u5343\u842c",Ko,A,0);case 32:return Oo(e,"\u96f6\u58f9\u8cb3\u53c3\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u62fe\u767e\u5343",Ko,A,Ro|Ho|Po);case 18:return Do(e,2406,2415,!0,r);case 20:return Qo(e,1,19999,ko,3,r);case 21:return Do(e,2790,2799,!0,r);case 22:return Do(e,2662,2671,!0,r);case 22:return Qo(e,1,10999,To,3,r);case 23:return Io(e,"\u3042\u3044\u3046\u3048\u304a\u304b\u304d\u304f\u3051\u3053\u3055\u3057\u3059\u305b\u305d\u305f\u3061\u3064\u3066\u3068\u306a\u306b\u306c\u306d\u306e\u306f\u3072\u3075\u3078\u307b\u307e\u307f\u3080\u3081\u3082\u3084\u3086\u3088\u3089\u308a\u308b\u308c\u308d\u308f\u3090\u3091\u3092\u3093");case 24:return Io(e,"\u3044\u308d\u306f\u306b\u307b\u3078\u3068\u3061\u308a\u306c\u308b\u3092\u308f\u304b\u3088\u305f\u308c\u305d\u3064\u306d\u306a\u3089\u3080\u3046\u3090\u306e\u304a\u304f\u3084\u307e\u3051\u3075\u3053\u3048\u3066\u3042\u3055\u304d\u3086\u3081\u307f\u3057\u3091\u3072\u3082\u305b\u3059");case 27:return Do(e,3302,3311,!0,r);case 28:return Io(e,"\u30a2\u30a4\u30a6\u30a8\u30aa\u30ab\u30ad\u30af\u30b1\u30b3\u30b5\u30b7\u30b9\u30bb\u30bd\u30bf\u30c1\u30c4\u30c6\u30c8\u30ca\u30cb\u30cc\u30cd\u30ce\u30cf\u30d2\u30d5\u30d8\u30db\u30de\u30df\u30e0\u30e1\u30e2\u30e4\u30e6\u30e8\u30e9\u30ea\u30eb\u30ec\u30ed\u30ef\u30f0\u30f1\u30f2\u30f3",i);case 29:return Io(e,"\u30a4\u30ed\u30cf\u30cb\u30db\u30d8\u30c8\u30c1\u30ea\u30cc\u30eb\u30f2\u30ef\u30ab\u30e8\u30bf\u30ec\u30bd\u30c4\u30cd\u30ca\u30e9\u30e0\u30a6\u30f0\u30ce\u30aa\u30af\u30e4\u30de\u30b1\u30d5\u30b3\u30a8\u30c6\u30a2\u30b5\u30ad\u30e6\u30e1\u30df\u30b7\u30f1\u30d2\u30e2\u30bb\u30b9",i);case 34:return Do(e,3792,3801,!0,r);case 37:return Do(e,6160,6169,!0,r);case 38:return Do(e,4160,4169,!0,r);case 39:return Do(e,2918,2927,!0,r);case 40:return Do(e,1776,1785,!0,r);case 43:return Do(e,3046,3055,!0,r);case 44:return 
Do(e,3174,3183,!0,r);case 45:return Do(e,3664,3673,!0,r);case 46:return Do(e,3872,3881,!0,r);default:return Do(e,48,57,!0,r)}},jo="data-html2canvas-ignore",Xo=function(){function e(e,t,n){if(this.context=e,this.options=n,this.scrolledElements=[],this.referenceElement=t,this.counters=new Uo,this.quoteDepth=0,!t.ownerDocument)throw new Error("Cloned element does not have an owner document");this.documentElement=this.cloneNode(t.ownerDocument.documentElement,!1)}return e.prototype.toIFrame=function(e,t){var n=this,A=Yo(e,t);if(!A.contentWindow)return Promise.reject("Unable to find iframe window");var a=e.defaultView.pageXOffset,o=e.defaultView.pageYOffset,s=A.contentWindow,l=s.document,u=$o(A).then((function(){return r(n,void 0,void 0,(function(){var e,n;return i(this,(function(r){switch(r.label){case 0:return this.scrolledElements.forEach(is),s&&(s.scrollTo(t.left,t.top),!/(iPad|iPhone|iPod)/g.test(navigator.userAgent)||s.scrollY===t.top&&s.scrollX===t.left||(this.context.logger.warn("Unable to restore scroll position for cloned document"),this.context.windowBounds=this.context.windowBounds.add(s.scrollX-t.left,s.scrollY-t.top,0,0))),e=this.options.onclone,"undefined"===typeof(n=this.clonedReferenceElement)?[2,Promise.reject("Error finding the "+this.referenceElement.nodeName+" in the cloned document")]:l.fonts&&l.fonts.ready?[4,l.fonts.ready]:[3,2];case 1:r.sent(),r.label=2;case 2:return/(AppleWebKit)/g.test(navigator.userAgent)?[4,Zo(l)]:[3,4];case 3:r.sent(),r.label=4;case 4:return"function"===typeof e?[2,Promise.resolve().then((function(){return e(l,n)})).then((function(){return A}))]:[2,A]}}))}))}));return l.open(),l.write(ns(document.doctype)+""),rs(this.referenceElement.ownerDocument,a,o),l.replaceChild(l.adoptNode(this.documentElement),l.documentElement),l.close(),u},e.prototype.createElementClone=function(e){if(BA(e,2),vo(e))return this.createCanvasClone(e);if(yo(e))return this.createVideoClone(e);if(_o(e))return this.createStyleClone(e);var t=e.cloneNode(!1);return wo(t)&&(wo(e)&&e.currentSrc&&e.currentSrc!==e.src&&(t.src=e.currentSrc,t.srcset=""),"lazy"===t.loading&&(t.loading="eager")),Eo(t)?this.createCustomElementClone(t):t},e.prototype.createCustomElementClone=function(e){var t=document.createElement("html2canvascustomelement");return ts(e.style,t),t},e.prototype.createStyleClone=function(e){try{var t=e.sheet;if(t&&t.cssRules){var n=[].slice.call(t.cssRules,0).reduce((function(e,t){return t&&"string"===typeof t.cssText?e+t.cssText:e}),""),r=e.cloneNode(!1);return r.textContent=n,r}}catch(Rt){if(this.context.logger.error("Unable to access cssRules property",Rt),"SecurityError"!==Rt.name)throw Rt}return e.cloneNode(!1)},e.prototype.createCanvasClone=function(e){var t;if(this.options.inlineImages&&e.ownerDocument){var n=e.ownerDocument.createElement("img");try{return n.src=e.toDataURL(),n}catch(Rt){this.context.logger.info("Unable to inline canvas contents, canvas is tainted",e)}}var r=e.cloneNode(!1);try{r.width=e.width,r.height=e.height;var i=e.getContext("2d"),A=r.getContext("2d");if(A)if(!this.options.allowTaint&&i)A.putImageData(i.getImageData(0,0,e.width,e.height),0,0);else{var a=null!==(t=e.getContext("webgl2"))&&void 0!==t?t:e.getContext("webgl");if(a){var o=a.getContextAttributes();!1===(null===o||void 0===o?void 0:o.preserveDrawingBuffer)&&this.context.logger.warn("Unable to clone WebGL context as it has preserveDrawingBuffer=false",e)}A.drawImage(e,0,0)}return r}catch(Rt){this.context.logger.info("Unable to clone canvas as it is tainted",e)}return 
r},e.prototype.createVideoClone=function(e){var t=e.ownerDocument.createElement("canvas");t.width=e.offsetWidth,t.height=e.offsetHeight;var n=t.getContext("2d");try{return n&&(n.drawImage(e,0,0,t.width,t.height),this.options.allowTaint||n.getImageData(0,0,t.width,t.height)),t}catch(Rt){this.context.logger.info("Unable to clone video as it is tainted",e)}var r=e.ownerDocument.createElement("canvas");return r.width=e.offsetWidth,r.height=e.offsetHeight,r},e.prototype.appendChildNode=function(e,t,n){so(t)&&(bo(t)||t.hasAttribute(jo)||"function"===typeof this.options.ignoreElements&&this.options.ignoreElements(t))||this.options.copyStyles&&so(t)&&_o(t)||e.appendChild(this.cloneNode(t,n))},e.prototype.cloneChildNodes=function(e,t,n){for(var r=this,i=e.shadowRoot?e.shadowRoot.firstChild:e.firstChild;i;i=i.nextSibling)if(so(i)&&So(i)&&"function"===typeof i.assignedNodes){var A=i.assignedNodes();A.length&&A.forEach((function(e){return r.appendChildNode(t,e,n)}))}else this.appendChildNode(t,i,n)},e.prototype.cloneNode=function(e,t){if(oo(e))return document.createTextNode(e.data);if(!e.ownerDocument)return e.cloneNode(!1);var n=e.ownerDocument.defaultView;if(n&&so(e)&&(lo(e)||uo(e))){var r=this.createElementClone(e);r.style.transitionProperty="none";var i=n.getComputedStyle(e),A=n.getComputedStyle(e,":before"),a=n.getComputedStyle(e,":after");this.referenceElement===e&&lo(r)&&(this.clonedReferenceElement=r),mo(r)&&us(r);var o=this.counters.parse(new mA(this.context,i)),s=this.resolvePseudoContent(e,r,A,KA.BEFORE);Eo(e)&&(t=!0),yo(e)||this.cloneChildNodes(e,r,t),s&&r.insertBefore(s,r.firstChild);var l=this.resolvePseudoContent(e,r,a,KA.AFTER);return l&&r.appendChild(l),this.counters.pop(o),(i&&(this.options.copyStyles||uo(e))&&!Bo(e)||t)&&ts(i,r),0===e.scrollTop&&0===e.scrollLeft||this.scrolledElements.push([r,e.scrollLeft,e.scrollTop]),(xo(e)||Co(e))&&(xo(r)||Co(r))&&(r.value=e.value),r}return e.cloneNode(!1)},e.prototype.resolvePseudoContent=function(e,t,n,r){var i=this;if(n){var A=n.content,a=t.ownerDocument;if(a&&A&&"none"!==A&&"-moz-alt-content"!==A&&"none"!==n.display){this.counters.parse(new mA(this.context,n));var o=new gA(this.context,n),s=a.createElement("html2canvaspseudoelement");ts(n,s),o.content.forEach((function(t){if(0===t.type)s.appendChild(a.createTextNode(t.value));else if(22===t.type){var n=a.createElement("img");n.src=t.value,n.style.opacity="1",s.appendChild(n)}else if(18===t.type){if("attr"===t.name){var r=t.values.filter(Qn);r.length&&s.appendChild(a.createTextNode(e.getAttribute(r[0].value)||""))}else if("counter"===t.name){var A=t.values.filter(Rn),l=A[0],u=A[1];if(l&&Qn(l)){var c=i.counters.getCounterValue(l.value),d=u&&Qn(u)?xi.parse(i.context,u.value):3;s.appendChild(a.createTextNode(Wo(c,d,!1)))}}else if("counters"===t.name){var h=t.values.filter(Rn),f=(l=h[0],h[1]);if(u=h[2],l&&Qn(l)){var p=i.counters.getCounterValues(l.value),g=u&&Qn(u)?xi.parse(i.context,u.value):3,m=f&&0===f.type?f.value:"",v=p.map((function(e){return Wo(e,g,!1)})).join(m);s.appendChild(a.createTextNode(v))}}}else if(20===t.type)switch(t.value){case"open-quote":s.appendChild(a.createTextNode(uA(o.quotes,i.quoteDepth++,!0)));break;case"close-quote":s.appendChild(a.createTextNode(uA(o.quotes,--i.quoteDepth,!1)));break;default:s.appendChild(a.createTextNode(t.value))}})),s.className=os+" "+ss;var l=r===KA.BEFORE?" 
"+os:" "+ss;return uo(t)?t.className.baseValue+=l:t.className+=l,s}}},e.destroy=function(e){return!!e.parentNode&&(e.parentNode.removeChild(e),!0)},e}();!function(e){e[e.BEFORE=0]="BEFORE",e[e.AFTER=1]="AFTER"}(KA||(KA={}));var qo,Yo=function(e,t){var n=e.createElement("iframe");return n.className="html2canvas-container",n.style.visibility="hidden",n.style.position="fixed",n.style.left="-10000px",n.style.top="0px",n.style.border="0",n.width=t.width.toString(),n.height=t.height.toString(),n.scrolling="no",n.setAttribute(jo,"true"),e.body.appendChild(n),n},Jo=function(e){return new Promise((function(t){e.complete?t():e.src?(e.onload=t,e.onerror=t):t()}))},Zo=function(e){return Promise.all([].slice.call(e.images,0).map(Jo))},$o=function(e){return new Promise((function(t,n){var r=e.contentWindow;if(!r)return n("No window assigned for iframe");var i=r.document;r.onload=e.onload=function(){r.onload=e.onload=null;var n=setInterval((function(){i.body.childNodes.length>0&&"complete"===i.readyState&&(clearInterval(n),t(e))}),50)}}))},es=["all","d","content"],ts=function(e,t){for(var n=e.length-1;n>=0;n--){var r=e.item(n);-1===es.indexOf(r)&&t.style.setProperty(r,e.getPropertyValue(r))}return t},ns=function(e){var t="";return e&&(t+=""),t},rs=function(e,t,n){e&&e.defaultView&&(t!==e.defaultView.pageXOffset||n!==e.defaultView.pageYOffset)&&e.defaultView.scrollTo(t,n)},is=function(e){var t=e[0],n=e[1],r=e[2];t.scrollLeft=n,t.scrollTop=r},As=":before",as=":after",os="___html2canvas___pseudoelement_before",ss="___html2canvas___pseudoelement_after",ls='{\n content: "" !important;\n display: none !important;\n}',us=function(e){cs(e,"."+os+As+ls+"\n ."+ss+as+ls)},cs=function(e,t){var n=e.ownerDocument;if(n){var r=n.createElement("style");r.textContent=t,e.appendChild(r)}},ds=function(){function e(){}return e.getOrigin=function(t){var n=e._link;return n?(n.href=t,n.href=n.href,n.protocol+n.hostname+n.port):"about:blank"},e.isSameOrigin=function(t){return e.getOrigin(t)===e._origin},e.setContext=function(t){e._link=t.document.createElement("a"),e._origin=e.getOrigin(t.location.href)},e._origin="about:blank",e}(),hs=function(){function e(e,t){this.context=e,this._options=t,this._cache={}}return e.prototype.addImage=function(e){var t=Promise.resolve();return this.has(e)?t:ws(e)||ms(e)?((this._cache[e]=this.loadImage(e)).catch((function(){})),t):t},e.prototype.match=function(e){return this._cache[e]},e.prototype.loadImage=function(e){return r(this,void 0,void 0,(function(){var t,n,r,A,a=this;return i(this,(function(i){switch(i.label){case 0:return t=ds.isSameOrigin(e),n=!vs(e)&&!0===this._options.useCORS&&xa.SUPPORT_CORS_IMAGES&&!t,r=!vs(e)&&!t&&!ws(e)&&"string"===typeof this._options.proxy&&xa.SUPPORT_CORS_XHR&&!n,t||!1!==this._options.allowTaint||vs(e)||ws(e)||r||n?(A=e,r?[4,this.proxy(A)]:[3,2]):[2];case 1:A=i.sent(),i.label=2;case 2:return this.context.logger.debug("Added image "+e.substring(0,256)),[4,new Promise((function(e,t){var r=new Image;r.onload=function(){return e(r)},r.onerror=t,(ys(A)||n)&&(r.crossOrigin="anonymous"),r.src=A,!0===r.complete&&setTimeout((function(){return e(r)}),500),a._options.imageTimeout>0&&setTimeout((function(){return t("Timed out ("+a._options.imageTimeout+"ms) loading image")}),a._options.imageTimeout)}))];case 3:return[2,i.sent()]}}))}))},e.prototype.has=function(e){return"undefined"!==typeof this._cache[e]},e.prototype.keys=function(){return Promise.resolve(Object.keys(this._cache))},e.prototype.proxy=function(e){var t=this,n=this._options.proxy;if(!n)throw new Error("No 
proxy defined");var r=e.substring(0,256);return new Promise((function(i,A){var a=xa.SUPPORT_RESPONSE_TYPE?"blob":"text",o=new XMLHttpRequest;o.onload=function(){if(200===o.status)if("text"===a)i(o.response);else{var e=new FileReader;e.addEventListener("load",(function(){return i(e.result)}),!1),e.addEventListener("error",(function(e){return A(e)}),!1),e.readAsDataURL(o.response)}else A("Failed to proxy resource "+r+" with status code "+o.status)},o.onerror=A;var s=n.indexOf("?")>-1?"&":"?";if(o.open("GET",""+n+s+"url="+encodeURIComponent(e)+"&responseType="+a),"text"!==a&&o instanceof XMLHttpRequest&&(o.responseType=a),t._options.imageTimeout){var l=t._options.imageTimeout;o.timeout=l,o.ontimeout=function(){return A("Timed out ("+l+"ms) proxying "+r)}}o.send()}))},e}(),fs=/^data:image\/svg\+xml/i,ps=/^data:image\/.*;base64,/i,gs=/^data:image\/.*/i,ms=function(e){return xa.SUPPORT_SVG_DRAWING||!Bs(e)},vs=function(e){return gs.test(e)},ys=function(e){return ps.test(e)},ws=function(e){return"blob"===e.substr(0,4)},Bs=function(e){return"svg"===e.substr(-3).toLowerCase()||fs.test(e)},_s=function(){function e(e,t){this.type=0,this.x=e,this.y=t}return e.prototype.add=function(t,n){return new e(this.x+t,this.y+n)},e}(),bs=function(e,t,n){return new _s(e.x+(t.x-e.x)*n,e.y+(t.y-e.y)*n)},xs=function(){function e(e,t,n,r){this.type=1,this.start=e,this.startControl=t,this.endControl=n,this.end=r}return e.prototype.subdivide=function(t,n){var r=bs(this.start,this.startControl,t),i=bs(this.startControl,this.endControl,t),A=bs(this.endControl,this.end,t),a=bs(r,i,t),o=bs(i,A,t),s=bs(a,o,t);return n?new e(this.start,r,a,s):new e(s,o,A,this.end)},e.prototype.add=function(t,n){return new e(this.start.add(t,n),this.startControl.add(t,n),this.endControl.add(t,n),this.end.add(t,n))},e.prototype.reverse=function(){return new e(this.end,this.endControl,this.startControl,this.start)},e}(),Cs=function(e){return 1===e.type},Ss=function(){function e(e){var t=e.styles,n=e.bounds,r=Wn(t.borderTopLeftRadius,n.width,n.height),i=r[0],A=r[1],a=Wn(t.borderTopRightRadius,n.width,n.height),o=a[0],s=a[1],l=Wn(t.borderBottomRightRadius,n.width,n.height),u=l[0],c=l[1],d=Wn(t.borderBottomLeftRadius,n.width,n.height),h=d[0],f=d[1],p=[];p.push((i+o)/n.width),p.push((h+u)/n.width),p.push((A+f)/n.height),p.push((s+c)/n.height);var g=Math.max.apply(Math,p);g>1&&(i/=g,A/=g,o/=g,s/=g,u/=g,c/=g,h/=g,f/=g);var m=n.width-o,v=n.height-c,y=n.width-u,w=n.height-f,B=t.borderTopWidth,_=t.borderRightWidth,b=t.borderBottomWidth,x=t.borderLeftWidth,C=jn(t.paddingTop,e.bounds.width),S=jn(t.paddingRight,e.bounds.width),E=jn(t.paddingBottom,e.bounds.width),U=jn(t.paddingLeft,e.bounds.width);this.topLeftBorderDoubleOuterBox=i>0||A>0?Es(n.left+x/3,n.top+B/3,i-x/3,A-B/3,qo.TOP_LEFT):new _s(n.left+x/3,n.top+B/3),this.topRightBorderDoubleOuterBox=i>0||A>0?Es(n.left+m,n.top+B/3,o-_/3,s-B/3,qo.TOP_RIGHT):new _s(n.left+n.width-_/3,n.top+B/3),this.bottomRightBorderDoubleOuterBox=u>0||c>0?Es(n.left+y,n.top+v,u-_/3,c-b/3,qo.BOTTOM_RIGHT):new _s(n.left+n.width-_/3,n.top+n.height-b/3),this.bottomLeftBorderDoubleOuterBox=h>0||f>0?Es(n.left+x/3,n.top+w,h-x/3,f-b/3,qo.BOTTOM_LEFT):new _s(n.left+x/3,n.top+n.height-b/3),this.topLeftBorderDoubleInnerBox=i>0||A>0?Es(n.left+2*x/3,n.top+2*B/3,i-2*x/3,A-2*B/3,qo.TOP_LEFT):new _s(n.left+2*x/3,n.top+2*B/3),this.topRightBorderDoubleInnerBox=i>0||A>0?Es(n.left+m,n.top+2*B/3,o-2*_/3,s-2*B/3,qo.TOP_RIGHT):new 
_s(n.left+n.width-2*_/3,n.top+2*B/3),this.bottomRightBorderDoubleInnerBox=u>0||c>0?Es(n.left+y,n.top+v,u-2*_/3,c-2*b/3,qo.BOTTOM_RIGHT):new _s(n.left+n.width-2*_/3,n.top+n.height-2*b/3),this.bottomLeftBorderDoubleInnerBox=h>0||f>0?Es(n.left+2*x/3,n.top+w,h-2*x/3,f-2*b/3,qo.BOTTOM_LEFT):new _s(n.left+2*x/3,n.top+n.height-2*b/3),this.topLeftBorderStroke=i>0||A>0?Es(n.left+x/2,n.top+B/2,i-x/2,A-B/2,qo.TOP_LEFT):new _s(n.left+x/2,n.top+B/2),this.topRightBorderStroke=i>0||A>0?Es(n.left+m,n.top+B/2,o-_/2,s-B/2,qo.TOP_RIGHT):new _s(n.left+n.width-_/2,n.top+B/2),this.bottomRightBorderStroke=u>0||c>0?Es(n.left+y,n.top+v,u-_/2,c-b/2,qo.BOTTOM_RIGHT):new _s(n.left+n.width-_/2,n.top+n.height-b/2),this.bottomLeftBorderStroke=h>0||f>0?Es(n.left+x/2,n.top+w,h-x/2,f-b/2,qo.BOTTOM_LEFT):new _s(n.left+x/2,n.top+n.height-b/2),this.topLeftBorderBox=i>0||A>0?Es(n.left,n.top,i,A,qo.TOP_LEFT):new _s(n.left,n.top),this.topRightBorderBox=o>0||s>0?Es(n.left+m,n.top,o,s,qo.TOP_RIGHT):new _s(n.left+n.width,n.top),this.bottomRightBorderBox=u>0||c>0?Es(n.left+y,n.top+v,u,c,qo.BOTTOM_RIGHT):new _s(n.left+n.width,n.top+n.height),this.bottomLeftBorderBox=h>0||f>0?Es(n.left,n.top+w,h,f,qo.BOTTOM_LEFT):new _s(n.left,n.top+n.height),this.topLeftPaddingBox=i>0||A>0?Es(n.left+x,n.top+B,Math.max(0,i-x),Math.max(0,A-B),qo.TOP_LEFT):new _s(n.left+x,n.top+B),this.topRightPaddingBox=o>0||s>0?Es(n.left+Math.min(m,n.width-_),n.top+B,m>n.width+_?0:Math.max(0,o-_),Math.max(0,s-B),qo.TOP_RIGHT):new _s(n.left+n.width-_,n.top+B),this.bottomRightPaddingBox=u>0||c>0?Es(n.left+Math.min(y,n.width-x),n.top+Math.min(v,n.height-b),Math.max(0,u-_),Math.max(0,c-b),qo.BOTTOM_RIGHT):new _s(n.left+n.width-_,n.top+n.height-b),this.bottomLeftPaddingBox=h>0||f>0?Es(n.left+x,n.top+Math.min(w,n.height-b),Math.max(0,h-x),Math.max(0,f-b),qo.BOTTOM_LEFT):new _s(n.left+x,n.top+n.height-b),this.topLeftContentBox=i>0||A>0?Es(n.left+x+U,n.top+B+C,Math.max(0,i-(x+U)),Math.max(0,A-(B+C)),qo.TOP_LEFT):new _s(n.left+x+U,n.top+B+C),this.topRightContentBox=o>0||s>0?Es(n.left+Math.min(m,n.width+x+U),n.top+B+C,m>n.width+x+U?0:o-x+U,s-(B+C),qo.TOP_RIGHT):new _s(n.left+n.width-(_+S),n.top+B+C),this.bottomRightContentBox=u>0||c>0?Es(n.left+Math.min(y,n.width-(x+U)),n.top+Math.min(v,n.height+B+C),Math.max(0,u-(_+S)),c-(b+E),qo.BOTTOM_RIGHT):new _s(n.left+n.width-(_+S),n.top+n.height-(b+E)),this.bottomLeftContentBox=h>0||f>0?Es(n.left+x+U,n.top+w,Math.max(0,h-(x+U)),f-(b+E),qo.BOTTOM_LEFT):new _s(n.left+x+U,n.top+n.height-(b+E))}return e}();!function(e){e[e.TOP_LEFT=0]="TOP_LEFT",e[e.TOP_RIGHT=1]="TOP_RIGHT",e[e.BOTTOM_RIGHT=2]="BOTTOM_RIGHT",e[e.BOTTOM_LEFT=3]="BOTTOM_LEFT"}(qo||(qo={}));var Es=function(e,t,n,r,i){var A=(Math.sqrt(2)-1)/3*4,a=n*A,o=r*A,s=e+n,l=t+r;switch(i){case qo.TOP_LEFT:return new xs(new _s(e,l),new _s(e,l-o),new _s(s-a,t),new _s(s,t));case qo.TOP_RIGHT:return new xs(new _s(e,t),new _s(e+a,t),new _s(s,l-o),new _s(s,l));case qo.BOTTOM_RIGHT:return new xs(new _s(s,t),new _s(s,t+o),new _s(e+a,l),new _s(e,l));case qo.BOTTOM_LEFT:default:return new xs(new _s(s,l),new _s(s-a,l),new _s(e,t+o),new _s(e,t))}},Us=function(e){return[e.topLeftBorderBox,e.topRightBorderBox,e.bottomRightBorderBox,e.bottomLeftBorderBox]},Ms=function(e){return[e.topLeftContentBox,e.topRightContentBox,e.bottomRightContentBox,e.bottomLeftContentBox]},Fs=function(e){return[e.topLeftPaddingBox,e.topRightPaddingBox,e.bottomRightPaddingBox,e.bottomLeftPaddingBox]},Ts=function(){function e(e,t,n){this.offsetX=e,this.offsetY=t,this.matrix=n,this.type=0,this.target=6}return 
e}(),ks=function(){function e(e,t){this.path=e,this.target=t,this.type=1}return e}(),Qs=function(){function e(e){this.opacity=e,this.type=2,this.target=6}return e}(),Ls=function(e){return 0===e.type},Ds=function(e){return 1===e.type},Is=function(e){return 2===e.type},Rs=function(e,t){return e.length===t.length&&e.some((function(e,n){return e===t[n]}))},Hs=function(e,t,n,r,i){return e.map((function(e,A){switch(A){case 0:return e.add(t,n);case 1:return e.add(t+r,n);case 2:return e.add(t+r,n+i);case 3:return e.add(t,n+i)}return e}))},Ps=function(){function e(e){this.element=e,this.inlineLevel=[],this.nonInlineLevel=[],this.negativeZIndex=[],this.zeroOrAutoZIndexOrTransformedOrOpacity=[],this.positiveZIndex=[],this.nonPositionedFloats=[],this.nonPositionedInlineLevel=[]}return e}(),Ns=function(){function e(e,t){if(this.container=e,this.parent=t,this.effects=[],this.curves=new Ss(this.container),this.container.styles.opacity<1&&this.effects.push(new Qs(this.container.styles.opacity)),null!==this.container.styles.transform){var n=this.container.bounds.left+this.container.styles.transformOrigin[0].number,r=this.container.bounds.top+this.container.styles.transformOrigin[1].number,i=this.container.styles.transform;this.effects.push(new Ts(n,r,i))}if(0!==this.container.styles.overflowX){var A=Us(this.curves),a=Fs(this.curves);Rs(A,a)?this.effects.push(new ks(A,6)):(this.effects.push(new ks(A,2)),this.effects.push(new ks(a,4)))}}return e.prototype.getEffects=function(e){for(var t=-1===[2,3].indexOf(this.container.styles.position),n=this.parent,r=this.effects.slice(0);n;){var i=n.effects.filter((function(e){return!Ds(e)}));if(t||0!==n.container.styles.position||!n.parent){if(r.unshift.apply(r,i),t=-1===[2,3].indexOf(n.container.styles.position),0!==n.container.styles.overflowX){var A=Us(n.curves),a=Fs(n.curves);Rs(A,a)||r.unshift(new ks(a,6))}}else r.unshift.apply(r,i);n=n.parent}return r.filter((function(t){return iA(t.target,e)}))},e}(),Os=function e(t,n,r,i){t.container.elements.forEach((function(A){var a=iA(A.flags,4),o=iA(A.flags,2),s=new Ns(A,t);iA(A.styles.display,2048)&&i.push(s);var l=iA(A.flags,8)?[]:i;if(a||o){var u=a||A.styles.isPositioned()?r:n,c=new Ps(s);if(A.styles.isPositioned()||A.styles.opacity<1||A.styles.isTransformed()){var d=A.styles.zIndex.order;if(d<0){var h=0;u.negativeZIndex.some((function(e,t){return d>e.element.container.styles.zIndex.order?(h=t,!1):h>0})),u.negativeZIndex.splice(h,0,c)}else if(d>0){var f=0;u.positiveZIndex.some((function(e,t){return d>=e.element.container.styles.zIndex.order?(f=t+1,!1):f>0})),u.positiveZIndex.splice(f,0,c)}else u.zeroOrAutoZIndexOrTransformedOrOpacity.push(c)}else A.styles.isFloating()?u.nonPositionedFloats.push(c):u.nonPositionedInlineLevel.push(c);e(s,c,a?c:r,l)}else A.styles.isInlineLevel()?n.inlineLevel.push(s):n.nonInlineLevel.push(s),e(s,n,r,l);iA(A.flags,8)&&Vs(A,l)}))},Vs=function(e,t){for(var n=e instanceof Va?e.start:1,r=e instanceof Va&&e.reversed,i=0;i0&&e.intrinsicHeight>0){var r=Js(e),i=Fs(t);this.path(i),this.ctx.save(),this.ctx.clip(),this.ctx.drawImage(n,0,0,e.intrinsicWidth,e.intrinsicHeight,r.left,r.top,r.width,r.height),this.ctx.restore()}},n.prototype.renderNodeContent=function(e){return r(this,void 0,void 0,(function(){var t,r,A,o,s,l,u,c,d,h,f,p,g,m,v,y,w,B;return i(this,(function(i){switch(i.label){case 0:this.applyEffects(e.getEffects(4)),t=e.container,r=e.curves,A=t.styles,o=0,s=t.textNodes,i.label=1;case 1:return 
o0&&x>0&&(v=r.ctx.createPattern(p,"repeat"),r.renderRepeat(w,v,S,E))):Qr(n)&&(y=el(e,t,[null,null,null]),w=y[0],B=y[1],_=y[2],b=y[3],x=y[4],C=0===n.position.length?[Gn]:n.position,S=jn(C[0],b),E=jn(C[C.length-1],x),U=Br(n,S,E,b,x),M=U[0],F=U[1],M>0&&F>0&&(T=r.ctx.createRadialGradient(B+S,_+E,0,B+S,_+E,M),gr(n.stops,2*M).forEach((function(e){return T.addColorStop(e.stop,ir(e.color))})),r.path(w),r.ctx.fillStyle=T,M!==F?(k=e.bounds.left+.5*e.bounds.width,Q=e.bounds.top+.5*e.bounds.height,D=1/(L=F/M),r.ctx.save(),r.ctx.translate(k,Q),r.ctx.transform(1,0,0,L,0,0),r.ctx.translate(-k,-Q),r.ctx.fillRect(B,D*(_-Q)+Q,b,x*D),r.ctx.restore()):r.ctx.fill())),i.label=6;case 6:return t--,[2]}}))},r=this,A=0,a=e.styles.backgroundImage.slice(0).reverse(),s.label=1;case 1:return A0?2!==l.style?[3,5]:[4,this.renderDashedDottedBorder(l.color,l.width,a,e.curves,2)]:[3,11]:[3,13];case 4:return i.sent(),[3,11];case 5:return 3!==l.style?[3,7]:[4,this.renderDashedDottedBorder(l.color,l.width,a,e.curves,3)];case 6:return i.sent(),[3,11];case 7:return 4!==l.style?[3,9]:[4,this.renderDoubleBorder(l.color,l.width,a,e.curves)];case 8:return i.sent(),[3,11];case 9:return[4,this.renderSolidBorder(l.color,a,e.curves)];case 10:i.sent(),i.label=11;case 11:a++,i.label=12;case 12:return o++,[3,3];case 13:return[2]}}))}))},n.prototype.renderDashedDottedBorder=function(e,t,n,A,a){return r(this,void 0,void 0,(function(){var r,o,s,l,u,c,d,h,f,p,g,m,v,y,w,B;return i(this,(function(i){return this.ctx.save(),r=js(A,n),o=Gs(A,n),2===a&&(this.path(o),this.ctx.clip()),Cs(o[0])?(s=o[0].start.x,l=o[0].start.y):(s=o[0].x,l=o[0].y),Cs(o[1])?(u=o[1].end.x,c=o[1].end.y):(u=o[1].x,c=o[1].y),d=0===n||2===n?Math.abs(s-u):Math.abs(l-c),this.ctx.beginPath(),3===a?this.formatPath(r):this.formatPath(o.slice(0,2)),h=t<3?3*t:2*t,f=t<3?2*t:t,3===a&&(h=t,f=t),p=!0,d<=2*h?p=!1:d<=2*h+f?(h*=g=d/(2*h+f),f*=g):(m=Math.floor((d+f)/(h+f)),v=(d-m*h)/(m-1),f=(y=(d-(m+1)*h)/m)<=0||Math.abs(f-v)
Random Cheats
Note: you can type in all of these codes again after they are in effect to turn them off. If it is one of the /players cheats, you have to type in the code "/players 1" to set it back to 1.
soundchaosdebug: Every sound at once.
/fps: Show frames per second.
/nopickup: Won't let you pick up items unless you press ALT. This helps in cluttered areas with a lot of lag.
Contributed By: HawkeyeCH
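For example, assuming the default key bindings, toggling one of these cheats looks like this: press Enter to open the chat box, type the code once to turn it on, and type the exact same code again later to turn it off.

soundchaosdebug (typed once: every sound starts playing)
soundchaosdebug (typed again: the effect stops)
/players 1 (sets a /players cheat back to its default of 1)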

Target line codes
Right click on your Diablo II desktop icon. Left click on the Shortcut tab. Enter the code in the target line after the quotes. Make sure there is a space between each code, and a space between the last quote and the first code. An example target line is shown after the list below.

    -

These codes currently work as of the 1.11b patch.
-nofixaspect: Allows users to not fix the aspect ratio to 4:3 when maximizing windowed mode ("For Widescreen Users").
-act1: Create a new level 1 character in Act 1.
-act2: Create a new level 16 character in Act 2.
-act3: Create a new level 21 character in Act 3.
-act4: Create a new level 27 character in Act 4.
-act5: Create a new level 33 character in Act 5.
-npl: Doesn't preload anything.
-nosave: Never saves the game.
-ns: No sound driver is loaded.
-w: Opens the game in a window.
-skiptobnet: Skip directly to BattleNet login.
-sndbkg: Enables sound in the background.
-opengl: Uses the OpenGL graphic renderer in game.
Contributed By: Red_Bird_of_Cha, jctaber86
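For example (the install path below is only illustrative and may differ on your machine), a shortcut target combining a few of the switches above might look like this:

"C:\Program Files\Diablo II\Diablo II.exe" -w -ns -skiptobnet

This would open the game in a window, load no sound driver, and skip directly to the BattleNet login screen.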


Easter Eggs
Diablo 2 Commercial
With the D2X disk in your computer, click on the "My Computer" icon. From there you should see the D2X shortcut picture. Right-click on it and select Open. Inside there is a file called D2COM_01. Double-click it to watch the semi-funny 30-second D2 commercial.
Contributed By: D2_Balk

Every Sound
To hear every sound recorded for Diablo 2, press ''Enter'' in a game to open the chat box and type in ''soundchaosdebug''.
Contributed By: Jeff_Andonuts

Secret Cow Level
Just like the Secret Cow Level in the original Diablo II, the Secret Cow Level is in LoD as well. You must have beaten Baal on the difficulty you want to make the portal for, though.

1. Go to Rogue Encampment
2. Insert a Wirt's Leg and a Tome of Town Portal into the Cube and Transmute.
3. Enter Portal.
4. Fight Cows.

The cows CAN be difficult to kill with melee characters, so be careful. Paladins and Sorceresses are always helpful here.

The cows do not give good experience, but the chances of finding good runes/gems/equipment are better than in most places. Also, most of the ''stashes'' and ''chests'' here are good drops as well.

WARNING: If you want to make the portal again, DO NOT KILL THE COW KING. If the Cow King is killed in a game where you made the portal, you WILL NOT be able to make the portal for that difficulty anymore.
Contributed By: AnnDreeUhh


Secrets
Cow Level MAC
In order to access the secret cow level, you must defeat Baal. Then, make a game for the difficulty you just beat him on (e.g., if you beat Baal on Normal, make a game with Normal difficulty). Go to the Rogue Encampment, put Wirt's Leg and a Tome of Town Portal in a Horadric Cube and transmute them, and a portal leading to the secret cow level will appear.

Note: If you want to continue to make these portals to the cow level, do NOT kill the Cow King. If you do, you can no longer access the secret cow level from that difficulty.
Contributed By: silent_hillian

Every Sound MAC
While in game, type ''soundchaosdebug'' (without the quotes).
It plays every sound and NPC conversation in the game.
To turn it off, just type it in again.
Contributed By: rpglord999

Frames per second MAC
Go into a game. When everything loads, press Enter to open the message box. Then type in "/fps" without the quotation marks. In the top left-hand corner it should show: frames per second, ping, and skip.
Contributed By: Teh_Reel_Won



-

Diablo 2 With Lord Of Destruction (v1.13c) (Direct Play) (Latest Cheat Codes)


Download File ->>> https://gohhs.com/2uz2SS



-
-
\ No newline at end of file diff --git a/spaces/innnky/soft-vits-singingvc/text/symbols.py b/spaces/innnky/soft-vits-singingvc/text/symbols.py deleted file mode 100644 index 869a53e763ae825bc02921842280ac9efe7f85dd..0000000000000000000000000000000000000000 --- a/spaces/innnky/soft-vits-singingvc/text/symbols.py +++ /dev/null @@ -1,16 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Defines the set of symbols used in text input to the model. -''' -_pad = '_' -_punctuation = ';:,.!?¡¿—…"«»“” ' -_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' -_letters_ipa = "ɑɐɒæɓʙβɔɕçɗɖðʤəɘɚɛɜɝɞɟʄɡɠɢʛɦɧħɥʜɨɪʝɭɬɫɮʟɱɯɰŋɳɲɴøɵɸθœɶʘɹɺɾɻʀʁɽʂʃʈʧʉʊʋⱱʌɣɤʍχʎʏʑʐʒʔʡʕʢǀǁǂǃˈˌːˑʼʴʰʱʲʷˠˤ˞↓↑→↗↘'̩'ᵻ" - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) + list(_letters_ipa) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/3planesoft Lake Tree 3d Screensaver 1 1 Patch S0m.md b/spaces/inplisQlawa/anything-midjourney-v4-1/3planesoft Lake Tree 3d Screensaver 1 1 Patch S0m.md deleted file mode 100644 index 1cb9d6a2d780256ced1f8c431bcb8ccd2b219d85..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/3planesoft Lake Tree 3d Screensaver 1 1 Patch S0m.md +++ /dev/null @@ -1,11 +0,0 @@ - -

This is the perfect screensaver for Christmas Eve: it looks pleasant and warm, and all it needs is a snowman, Santa Claus, and a sleigh with flying reindeer to complete the scene. The Christmas atmosphere fills the room. These Christmas-themed screensavers are always nice to have on your PC's monitor.

-

3planesoft lake tree 3d screensaver 1 1 patch s0m


DOWNLOADhttps://urlin.us/2uEwcd



-

If you want something special for your home, there are various holiday screensavers to choose from, like the 3Planesoft Screen Scena s1 and the 3PlaneScre 4ssaver. Both screensavers feature a Christmas theme: a Christmas tree, snow, angels, icicles, and a sleigh, and both are free for your entertainment.

-

You can also add a holiday ringtone to your calendar: a Christmas ringtone that is played once a day. The Christmas tree, the sleigh, and the words "Merry Christmas" all help make this ringtone a good one for all to hear.

-

With its Christmas mood, the 3Planesoft 3D Screen Scena s1 screensaver features bright Christmas lights as well as a Christmas tree, snow, and a fireplace. You can use it as a screensaver for your Windows PC.

-

In this screensaver, you will see snow-covered houses, reindeer, a Christmas tree, icicles, and Santa Claus. You can even hear a sleigh, bells, and merry music. It's a Christmas scene that you can enjoy watching whenever you want.

-

-

The Diamond Christmas screensaver features a Christmas-style tree with a star shining on top, two angels standing behind it, and a shining ball in the middle. The tree is covered with ornaments and snow. The ornaments are piled in a corner, while the snow falls onto them.

-
-
\ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Authentic Love Presets ACR.md b/spaces/inreVtussa/clothingai/Examples/Authentic Love Presets ACR.md deleted file mode 100644 index 69d5f3a563ac175fe0f9830120d3485b836655b8..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Authentic Love Presets ACR.md +++ /dev/null @@ -1,71 +0,0 @@ -
-

What are Authentic Love Presets ACR and Why You Need Them

- -

If you are a photographer who loves capturing authentic moments of love and emotion, you might be interested in Authentic Love Presets ACR. This is a collection of presets designed for Adobe Camera Raw (ACR) and Lightroom that can help you achieve your desired image look.

- -

Authentic Love Presets ACR are created by Authentic Love Magazine, a platform that showcases real stories of love and adventure from around the world. The presets are inspired by the magazine's aesthetic and vision, and they aim to enhance the natural beauty and mood of your photos.

-

Authentic Love Presets ACR


Downloadhttps://tiurll.com/2uClJ4



- -

The Benefits of Authentic Love Presets ACR

- -

There are many reasons why you might want to use Authentic Love Presets ACR for your photos. Here are some of them:

- -
    -
  • They save you time and effort. You don't have to spend hours tweaking your photos in post-processing. With just one click, you can apply a preset that suits your style and scene.
  • -
  • They give you consistency and quality. You can create a cohesive look for your portfolio and social media with presets that match your brand and vision. You can also ensure that your photos have a professional and polished finish.
  • -
  • They help you express your creativity and personality. You can choose from a variety of presets that range from colorful and vibrant to moody and dramatic. You can also customize them to fit your preferences and needs.
  • -
  • They make your photos stand out and attract attention. You can impress your clients and followers with photos that have a unique and captivating look. You can also increase your chances of getting featured on platforms like Authentic Love Magazine.
  • -
- -

The Features of Authentic Love Presets ACR

- -

Authentic Love Presets ACR offer a lot of features that make them versatile and easy to use. Here are some of them:

- -
    -
  • They work with both RAW and JPEG files. You can use them with any type of photo format without compromising the quality.
  • -
  • They are compatible with different camera platforms. You can use them with any camera brand or model, as they use the Adobe Standard Calibration (2012).
  • -
  • They include 12 presets with 9 color and 3 black and white options. You can choose from a wide range of presets that suit different lighting conditions, seasons, and moods.
  • -
  • They also include 5 preset tools to control grain, noise, and lens correction. You can fine-tune your photos with these additional tools that help you achieve a more realistic and natural look.
  • -
  • They are easy to install and use. You can find PDF instructions in your Authentic Presets file that guide you through the installation process. You can also access them from your ACR or Lightroom panel with just a few clicks.
  • -
- -

How to Get Authentic Love Presets ACR

- -

If you are interested in getting Authentic Love Presets ACR, you can visit the Authentic Love Magazine website and shop for them online. They are available for $89 for the desktop version, $79 for the mobile version, or $149 for the bundle that includes both versions.

- -

You can also check out some examples of how the presets look on different photos on the website or on their Instagram page. You can see how they transform photos from ordinary to extraordinary, and how they enhance the authentic love stories behind them.

- -

Authentic Love Presets ACR are a great way to improve your photography skills and style, and to showcase your passion for capturing real moments of love and emotion. Whether you are shooting adventurous elopements, genuine tears of joy, cuddles and hugs, or evening kisses, these presets will work for you!

-

What People Say About Authentic Love Presets ACR

- -

Authentic Love Presets ACR have received a lot of positive feedback from photographers who have used them for their photos. Here are some of the testimonials from their website and social media:

- -
-

"I absolutely love these presets! They are so easy to use and they give my photos a beautiful and natural look. They are perfect for capturing the emotions and stories of my couples." - Dani Purington, @danipurington

-
- -
-

"These presets are amazing! They have transformed my photos and made them look more professional and consistent. They also save me a lot of time in editing, which is a huge plus. I highly recommend them to anyone who loves authentic photography." - Janelle Elise, @janelle.elise.photo

-

-
- -
-

"I'm so happy with these presets! They are exactly what I was looking for. They have a nice contrast and depth, and they work well with different lighting situations. They also enhance the colors and tones of my photos without making them look unnatural or overdone." - Sami Strong, @samistrong

-
- -

How to Use Authentic Love Presets ACR

- -

Using Authentic Love Presets ACR is very simple and straightforward. Here are the steps you need to follow:

- -
    -
  1. Download the presets from the Authentic Love Magazine website after making your purchase. You will receive an email with your receipt and download links.
  2. -
  3. Unzip the file and find the PDF instructions for installing the presets on your ACR or Lightroom.
  4. -
  5. Follow the instructions and import the presets into your software.
  6. -
  7. Select a photo you want to edit and apply a preset from the Authentic Love Presets ACR panel.
  8. -
  9. Adjust the exposure, white balance, and other settings as needed to suit your photo.
  10. -
  11. Enjoy your edited photo and share it with your clients or followers!
  12. -
- -

Authentic Love Presets ACR are a great investment for any photographer who wants to create stunning photos that showcase authentic love stories. They are easy to use, versatile, and affordable. You can get them today from the Authentic Love Magazine website and start creating your own amazing photos!

-
-
\ No newline at end of file diff --git a/spaces/j-min/IterInpaint-CLEVR/README.md b/spaces/j-min/IterInpaint-CLEVR/README.md deleted file mode 100644 index 70ba4d3f4d6e3ccb62208f8637b7ed4fcd323d27..0000000000000000000000000000000000000000 --- a/spaces/j-min/IterInpaint-CLEVR/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: IterInpaint CLEVR -emoji: 🌍 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jaiteja7849/MyGenAIChatBot/README.md b/spaces/jaiteja7849/MyGenAIChatBot/README.md deleted file mode 100644 index d7626df4c8e90f574b890c62067e1188febc55db..0000000000000000000000000000000000000000 --- a/spaces/jaiteja7849/MyGenAIChatBot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MyGenAIChatBot -emoji: 🦀 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jbilcke-hf/LifeSim/README.md b/spaces/jbilcke-hf/LifeSim/README.md deleted file mode 100644 index 6f0e7212fa381c729f2d6639013a45f37d92f588..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/LifeSim/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: LifeSim -emoji: 🐠🪸 -colorFrom: blue -colorTo: yellow -sdk: docker -pinned: false -app_port: 3000 ---- - -LifeSim uses a text-to-video model to render artificially simulated agents. \ No newline at end of file diff --git a/spaces/jbilcke-hf/LifeSim/src/app/globals.css b/spaces/jbilcke-hf/LifeSim/src/app/globals.css deleted file mode 100644 index fd81e885836d815b8019694a910a93d86a43cb66..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/LifeSim/src/app/globals.css +++ /dev/null @@ -1,27 +0,0 @@ -@tailwind base; -@tailwind components; -@tailwind utilities; - -:root { - --foreground-rgb: 0, 0, 0; - --background-start-rgb: 214, 219, 220; - --background-end-rgb: 255, 255, 255; -} - -@media (prefers-color-scheme: dark) { - :root { - --foreground-rgb: 255, 255, 255; - --background-start-rgb: 0, 0, 0; - --background-end-rgb: 0, 0, 0; - } -} - -body { - color: rgb(var(--foreground-rgb)); - background: linear-gradient( - to bottom, - transparent, - rgb(var(--background-end-rgb)) - ) - rgb(var(--background-start-rgb)); -} diff --git a/spaces/jbilcke-hf/VideoQuest/src/app/games/vernian.ts b/spaces/jbilcke-hf/VideoQuest/src/app/games/vernian.ts deleted file mode 100644 index 2023fb900e183aa1e2722dfcf3edac352422a177..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/app/games/vernian.ts +++ /dev/null @@ -1,96 +0,0 @@ -import { moondance } from "@/lib/fonts" -import { Game } from "./types" -import { InventoryItem } from "../../types" - -const initialSituation = [ - `inside a secret workshop inspired by Jules Verne`, - `with mysterious machines, keys, boxes, blueprints, gears` -].join(", ") - -const initialActionnables = [ - "key", - "box", - "door", - "table", - "chair", - "sun", - "gear", - "machine", - "window", - "ground" -] - -const inventory: InventoryItem[] = [ - { - name: "apparatus", - title: "Apparatus", - caption: "", - description: "What is this strange device?" - }, - { - name: "book", - title: "Book", - caption: "", - description: "It is talking about a mysterious island, I think.." 
- }, - { - name: "cog", - title: "Cog", - caption: "", - description: "From some kind of mysterious machine." - }, - { - name: "coil", - title: "Coil", - caption: "", - description: "Nice, but where does it fit?" - }, - { - name: "copper-wire", - title: "Copper wire", - caption: "", - description: "Mmh, copper. I wonder how I could use that." - }, - { - name: "pocket-watch", - title: "Pocket watch", - caption: "", - description: "My my.. time passes quickly." - }, - { - name: "top-hat", - title: "Top Hat", - caption: "", - description: "For a gentleman or magician. The craft is exquisite." - }, -] - -export const game: Game = { - title: "Vernian", - type: "vernian", - description: [ - "The game is a role playing adventure set in the world of Jules Verne adventures, with heavy steampunk inspirations.", - "The player try to find a treasure on a mysterious island, and they search in Jules Verne's secret cabinet and atelier.", - "The player can click around to move to new scenes, find or activate artifacts.", - "They can also use objects from their inventory.", - ], - engines: [ - "cartesian_image", - "cartesian_video", - "spherical_image", - ], - className: moondance.className, - initialSituation, - initialActionnables, - inventory, - getScenePrompt: (situation?: string) => [ - `Screenshot from a videogame`, - `steam punk decor`, - `jules verne architecture and design`, - `mysterious machines and mechanisms`, - `first person`, - situation || initialSituation, - `unreal engine`, - ] -} - diff --git a/spaces/jbilcke-hf/VideoQuest/src/app/interface/progress/progress-bar.tsx b/spaces/jbilcke-hf/VideoQuest/src/app/interface/progress/progress-bar.tsx deleted file mode 100644 index 0e926d05419cecc6d4a4964d53a8dad6e07a4102..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/app/interface/progress/progress-bar.tsx +++ /dev/null @@ -1,57 +0,0 @@ -"use client" - -import { CircularProgressbar, buildStyles } from "react-circular-progressbar" -import "react-circular-progressbar/dist/styles.css" - -export function ProgressBar ({ - className, - progressPercentage, - text -}: { - className?: string - progressPercentage?: number - text?: string -}) { - return ( -
- -
- ) -} \ No newline at end of file diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/test_time_augmentation.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/test_time_augmentation.py deleted file mode 100644 index bb7a51f28419c59775013c74fdee49e5166bde51..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/test_time_augmentation.py +++ /dev/null @@ -1,217 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -import copy -from itertools import count -import math -import numpy as np -import torch -from fvcore.transforms import HFlipTransform -from torch import nn -from torch.nn.parallel import DistributedDataParallel - -from detectron2.data.detection_utils import read_image -from detectron2.modeling import DatasetMapperTTA -from detectron2.modeling.postprocessing import sem_seg_postprocess -import logging -from detectron2.utils.logger import log_every_n, log_first_n - -__all__ = [ - "SemanticSegmentorWithTTA", -] - - -class SemanticSegmentorWithTTA(nn.Module): - """ - A SemanticSegmentor with test-time augmentation enabled. - Its :meth:`__call__` method has the same interface as :meth:`SemanticSegmentor.forward`. - """ - - def __init__(self, cfg, model, tta_mapper=None, batch_size=1): - """ - Args: - cfg (CfgNode): - model (SemanticSegmentor): a SemanticSegmentor to apply TTA on. - tta_mapper (callable): takes a dataset dict and returns a list of - augmented versions of the dataset dict. Defaults to - `DatasetMapperTTA(cfg)`. - batch_size (int): batch the augmented images into this batch size for inference. - """ - super().__init__() - if isinstance(model, DistributedDataParallel): - model = model.module - self.cfg = cfg.clone() - - self.model = model - - if tta_mapper is None: - tta_mapper = DatasetMapperTTA(cfg) - self.tta_mapper = tta_mapper - self.batch_size = batch_size - - def _inference_with_model(self, inputs): - if self.cfg.TEST.SLIDING_WINDOW: - log_first_n(logging.INFO, "Using sliding window to test") - - outputs = [] - - for input in inputs: - image_size = input["image"].shape[1:] # h,w - if self.cfg.TEST.SLIDING_TILE_SIZE > 0: - tile_size = ( - self.cfg.TEST.SLIDING_TILE_SIZE, - self.cfg.TEST.SLIDING_TILE_SIZE, - ) - else: - selected_mapping = {256: 224, 512: 256, 768: 512, 896: 512} - tile_size = min(image_size) - tile_size = selected_mapping[tile_size] - tile_size = (tile_size, tile_size) - extra_info = { - k: v - for k, v in input.items() - if k not in ["image", "height", "width"] - } - log_every_n( - logging.INFO, "split {} to {}".format(image_size, tile_size) - ) - overlap = self.cfg.TEST.SLIDING_OVERLAP - stride = math.ceil(tile_size[0] * (1 - overlap)) - tile_rows = int( - math.ceil((image_size[0] - tile_size[0]) / stride) + 1 - ) # strided convolution formula - tile_cols = int(math.ceil((image_size[1] - tile_size[1]) / stride) + 1) - full_probs = None - count_predictions = None - tile_counter = 0 - - for row in range(tile_rows): - for col in range(tile_cols): - x1 = int(col * stride) - y1 = int(row * stride) - x2 = min(x1 + tile_size[1], image_size[1]) - y2 = min(y1 + tile_size[0], image_size[0]) - x1 = max( - int(x2 - tile_size[1]), 0 - ) # for portrait images the x1 underflows sometimes - y1 = max( - int(y2 - tile_size[0]), 0 - ) # for very few rows y1 underflows - - img = input["image"][:, y1:y2, x1:x2] - padded_img = nn.functional.pad( - img, - ( - 0, - tile_size[1] - img.shape[-1], - 0, - tile_size[0] - img.shape[-2], - ), - ) - tile_counter += 1 - 
padded_input = {"image": padded_img} - padded_input.update(extra_info) - padded_prediction = self.model([padded_input])[0]["sem_seg"] - prediction = padded_prediction[ - :, 0 : img.shape[1], 0 : img.shape[2] - ] - if full_probs is None: - full_probs = prediction.new_zeros( - prediction.shape[0], image_size[0], image_size[1] - ) - if count_predictions is None: - count_predictions = prediction.new_zeros( - prediction.shape[0], image_size[0], image_size[1] - ) - count_predictions[:, y1:y2, x1:x2] += 1 - full_probs[ - :, y1:y2, x1:x2 - ] += prediction # accumulate the predictions also in the overlapping regions - - full_probs /= count_predictions - full_probs = sem_seg_postprocess( - full_probs, - image_size, - input.get("height", image_size[0]), - input.get("width", image_size[1]), - ) - outputs.append({"sem_seg": full_probs}) - - return outputs - else: - log_first_n(logging.INFO, "Using whole image to test") - return self.model(inputs) - - def _batch_inference(self, batched_inputs): - """ - Execute inference on a list of inputs, - using batch size = self.batch_size, instead of the length of the list. - Inputs & outputs have the same format as :meth:`SemanticSegmentor.forward` - """ - outputs = [] - inputs = [] - for idx, input in zip(count(), batched_inputs): - inputs.append(input) - if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1: - with torch.no_grad(): - outputs.extend(self._inference_with_model(inputs)) - inputs = [] - return outputs - - def __call__(self, batched_inputs): - """ - Same input/output format as :meth:`SemanticSegmentor.forward` - """ - - def _maybe_read_image(dataset_dict): - ret = copy.copy(dataset_dict) - if "image" not in ret: - image = read_image(ret.pop("file_name"), self.model.input_format) - image = torch.from_numpy( - np.ascontiguousarray(image.transpose(2, 0, 1)) - ) # CHW - ret["image"] = image - if "height" not in ret and "width" not in ret: - ret["height"] = image.shape[1] - ret["width"] = image.shape[2] - return ret - - return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs] - - def _inference_one_image(self, input): - """ - Args: - input (dict): one dataset dict with "image" field being a CHW tensor - Returns: - dict: one output dict - """ - augmented_inputs, tfms = self._get_augmented_inputs(input) - # 1: forward with all augmented images - outputs = self._batch_inference(augmented_inputs) - # Delete now useless variables to avoid being out of memory - del augmented_inputs - # 2: merge the results - # handle flip specially - # outputs = [output.detach() for output in outputs] - return self._merge_auged_output(outputs, tfms) - - def _merge_auged_output(self, outputs, tfms): - new_outputs = [] - for output, tfm in zip(outputs, tfms): - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - new_outputs.append(output["sem_seg"].flip(dims=[2])) - else: - new_outputs.append(output["sem_seg"]) - del outputs - # to avoid OOM with torch.stack - final_predictions = new_outputs[0] - for i in range(1, len(new_outputs)): - final_predictions += new_outputs[i] - final_predictions = final_predictions / len(new_outputs) - del new_outputs - return {"sem_seg": final_predictions} - - def _get_augmented_inputs(self, input): - augmented_inputs = self.tta_mapper(input) - tfms = [x.pop("transforms") for x in augmented_inputs] - return augmented_inputs, tfms diff --git a/spaces/jigo/jobposting/README.md b/spaces/jigo/jobposting/README.md deleted file mode 100644 index 
4a0c9bf4f8a63b012256637602f4c9e6a17bb6fc..0000000000000000000000000000000000000000 --- a/spaces/jigo/jobposting/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Jobposting -emoji: 🏆 -colorFrom: purple -colorTo: pink -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jjw0126/Multi-ORGPT/app.py b/spaces/jjw0126/Multi-ORGPT/app.py deleted file mode 100644 index b0a3f37c001828a2de005ed7657770f6d2a0339b..0000000000000000000000000000000000000000 --- a/spaces/jjw0126/Multi-ORGPT/app.py +++ /dev/null @@ -1,216 +0,0 @@ -import gradio as gr -import os -import json -import requests -openai_gpt4_key = "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" -# Streaming endpoint -# os.getenv("API_URL") + "/generate_stream" -API_URL = "https://api.openai.com/v1/chat/completions" -# Inferenec function - - -def predict(openai_gpt4_key, system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]): - - headers = { - "Content-Type": "application/json", - # Users will provide their own OPENAI_API_KEY - "Authorization": f"Bearer {openai_gpt4_key}" - } - print(f"system message is ^^ {system_msg}") - if system_msg.strip() == '': - initial_message = [{"role": "user", "content": f"{inputs}"},] - multi_turn_message = [] - else: - initial_message = [{"role": "system", "content": system_msg}, - {"role": "user", "content": f"{inputs}"},] - multi_turn_message = [{"role": "system", "content": system_msg},] - - if chat_counter == 0: - payload = { - "model": "gpt-3.5-turbo", - "messages": initial_message, - "temperature": 1.0, - "top_p": 1.0, - "n": 1, - "stream": True, - "presence_penalty": 0, - "frequency_penalty": 0, - } - print(f"chat_counter - {chat_counter}") - else: # if chat_counter != 0 : - # Of the type of - [{"role": "system", "content": system_msg},] - messages = multi_turn_message - for data in chatbot: - user = {} - user["role"] = "user" - user["content"] = data[0] - assistant = {} - assistant["role"] = "assistant" - assistant["content"] = data[1] - messages.append(user) - messages.append(assistant) - temp = {} - temp["role"] = "user" - temp["content"] = inputs - messages.append(temp) - # messages - payload = { - "model": "gpt-3.5-turbo", - # Of the type of [{"role": "user", "content": f"{inputs}"}], - "messages": messages, - "temperature": temperature, # 1.0, - "top_p": top_p, # 1.0, - "n": 1, - "stream": True, - "presence_penalty": 0, - "frequency_penalty": 0, } - - chat_counter += 1 - - history.append(inputs) - print(f"Logging : payload is - {payload}") - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, - json=payload, stream=True) - print(f"Logging : response code - {response}") - token_counter = 0 - partial_words = "" - - counter = 0 - for chunk in response.iter_lines(): - # Skipping first chunk - if counter == 0: - counter += 1 - continue - # check whether each line is non-empty - if chunk.decode(): - chunk = chunk.decode() - # decode each line as response data is in bytes - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - partial_words = partial_words + \ - json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, - len(history) - 1, 2)] # convert to 
tuples of list - token_counter += 1 - # resembles {chatbot: chat, state: history} - yield chat, history, chat_counter, response - -# Resetting to blank - - -def reset_textbox(): - return gr.update(value='') - -# to set a component as visible=False - - -def set_visible_false(): - return gr.update(visible=False) - -# to set a component as visible=True - - -def set_visible_true(): - return gr.update(visible=True) - - -title = """

🔥 Large Language Models as Tools for Modeling and Coding in Operations Research

""" -# display message for themes feature -theme_addon_msg = """
🌟 This demo also introduces you to Gradio Themes. Discover more on the Gradio website using the Theming Guide 🎨! You can develop a theme from scratch, modify an existing Gradio theme, and share your themes with the community by uploading them to the Hugging Face Hub easily using theme.push_to_hub().
-""" - -# Using info to add additional information about System message in GPT4 -system_msg_info = """A conversation could begin with a system message to gently instruct the assistant. -System message helps set the behavior of the AI Assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'""" - -# Modifying existing Gradio Theme -theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green", - text_size=gr.themes.sizes.text_lg) - -with gr.Blocks(css="""#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""", - theme=theme) as demo: - gr.HTML(title) - gr.HTML("""

🔥 This Hugging Face Gradio demo gives you access to different optimization scenarios. Please note that you will need an OpenAI API key for GPT-4 access 🙌

""") - # gr.HTML(theme_addon_msg) - # gr.HTML('''
Duplicate SpaceDuplicate the Space and run securely with your OpenAI API Key
''') - - with gr.Column(elem_id="col_container"): - # Users need to provide their own GPT4 API key, it is no longer provided by Huggingface - with gr.Row(): - openai_gpt4_key = gr.Textbox(label="OpenAI GPT4 Key", value="", type="password", placeholder="sk..", - info="You have to provide your own GPT4 keys for this app to function properly",) - with gr.Accordion(label="System message:", open=False): - system_msg = gr.Textbox(label="Instruct the AI Assistant to set its beaviour", - info=system_msg_info, value="", placeholder="Type here..") - accordion_msg = gr.HTML( - value="🚧 To set System message you will have to refresh the app", visible=False) - - chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot") - inputs = gr.Textbox(placeholder="Hi there!", - label="Type an optimization problem as input and press Enter") - state = gr.State([]) - with gr.Row(): - with gr.Column(scale=7): - b1 = gr.Button().style(full_width=True) - with gr.Column(scale=3): - server_status_code = gr.Textbox( - label="Status code from OpenAI server", ) - - #top_p, temperature - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( - minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - chat_counter = gr.Number(value=0, visible=False, precision=0) - - # Event handling - inputs.submit(predict, [openai_gpt4_key, system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [ - chatbot, state, chat_counter, server_status_code],) # openai_api_key - b1.click(predict, [openai_gpt4_key, system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [ - chatbot, state, chat_counter, server_status_code],) # openai_api_key - - inputs.submit(set_visible_false, [], [system_msg]) - b1.click(set_visible_false, [], [system_msg]) - inputs.submit(set_visible_true, [], [accordion_msg]) - b1.click(set_visible_true, [], [accordion_msg]) - - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - - # Examples - with gr.Accordion(label="Examples for System message:", open=False): - gr.Examples( - examples=[["""You are an AI programming assistant. - - - Follow the user's requirements carefully and to the letter. - - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail. - - Then output the code in a single code block. - - Minimize any other prose."""], ["""You are ComedianGPT who is a helpful assistant. 
You answer everything with a joke and witty replies."""], - ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."], - ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."], - ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."], - ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."], - ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."], - ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."], - ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."], - ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."], - ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."], - ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."], - ["You are a helpful assistant that provides detailed and accurate information."], - ["You are an assistant that speaks like Shakespeare."], - ["You are a friendly assistant who uses casual language and humor."], - ["You are a financial advisor who gives expert advice on investments and budgeting."], - ["You are a health and fitness expert who provides advice on nutrition and exercise."], - ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."], - ["You are a movie critic who shares insightful opinions on films and their themes."], - ["You are a history enthusiast who loves to discuss historical events and figures."], - ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."], - ["You are an AI poet who can compose creative and evocative poems on any given topic."],], - inputs=system_msg,) - -demo.queue(max_size=99, concurrency_count=20).launch(debug=True) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_MD2.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_MD2.py deleted file mode 100644 index 93751687f21b7999613353381fa8b036c9ecc3bf..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_MD2.py +++ /dev/null @@ -1,62 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Hash/MD2.py: Self-test for the MD2 hash function -# -# Written in 2008 by Dwayne C. Litzenberger -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -"""Self-test suite for Crypto.Hash.MD2""" - -from Crypto.Util.py3compat import * - -# This is a list of (expected_result, input[, description]) tuples. -test_data = [ - # Test vectors from RFC 1319 - ('8350e5a3e24c153df2275c9f80692773', '', "'' (empty string)"), - ('32ec01ec4a6dac72c0ab96fb34c0b5d1', 'a'), - ('da853b0d3f88d99b30283a69e6ded6bb', 'abc'), - ('ab4f496bfb2a530b219ff33031fe06b0', 'message digest'), - - ('4e8ddff3650292ab5a4108c3aa47940b', 'abcdefghijklmnopqrstuvwxyz', - 'a-z'), - - ('da33def2a42df13975352846c30338cd', - 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', - 'A-Z, a-z, 0-9'), - - ('d5976f79d83d3a0dc9806c3c66f3efd8', - '1234567890123456789012345678901234567890123456' - + '7890123456789012345678901234567890', - "'1234567890' * 8"), -] - -def get_tests(config={}): - from Crypto.Hash import MD2 - from .common import make_hash_tests - return make_hash_tests(MD2, "MD2", test_data, - digest_size=16, - oid="1.2.840.113549.2.2") - -if __name__ == '__main__': - import unittest - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/save.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/save.py deleted file mode 100644 index 797210b3f01f2df271f04b09f0919e8d884cdce5..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/save.py +++ /dev/null @@ -1,189 +0,0 @@ -import json -import pathlib -import warnings - -from .mimebundle import spec_to_mimebundle -from ..vegalite.v5.data import data_transformers -from altair.utils._vegafusion_data import using_vegafusion - - -def write_file_or_filename(fp, content, mode="w", encoding=None): - """Write content to fp, whether fp is a string, a pathlib Path or a - file-like object""" - if isinstance(fp, str) or isinstance(fp, pathlib.PurePath): - with open(file=fp, mode=mode, encoding=encoding) as f: - f.write(content) - else: - fp.write(content) - - -def set_inspect_format_argument(format, fp, inline): - """Inspect the format argument in the save function""" - if format is None: - if isinstance(fp, str): - format = fp.split(".")[-1] - elif isinstance(fp, pathlib.PurePath): - format = fp.suffix.lstrip(".") - else: - raise ValueError( - "must specify file format: " - "['png', 'svg', 'pdf', 'html', 'json', 'vega']" - ) - - if format != "html" and inline: - warnings.warn("inline argument ignored for non HTML formats.", stacklevel=1) - - return format - - -def set_inspect_mode_argument(mode, embed_options, spec, vegalite_version): - """Inspect the mode argument in the save function""" - if mode is None: - if "mode" in embed_options: - mode = embed_options["mode"] - elif "$schema" in spec: - mode = spec["$schema"].split("/")[-2] - else: - mode = "vega-lite" - - if mode != "vega-lite": - raise ValueError("mode must be 'vega-lite', " "not '{}'".format(mode)) - - if mode == "vega-lite" and vegalite_version is None: - raise ValueError("must specify vega-lite version") - - return mode - - -def save( - chart, - fp, - vega_version, 
- vegaembed_version, - format=None, - mode=None, - vegalite_version=None, - embed_options=None, - json_kwds=None, - webdriver=None, - scale_factor=1, - engine=None, - inline=False, - **kwargs, -): - """Save a chart to file in a variety of formats - - Supported formats are [json, html, png, svg, pdf] - - Parameters - ---------- - chart : alt.Chart - the chart instance to save - fp : string filename, pathlib.Path or file-like object - file to which to write the chart. - format : string (optional) - the format to write: one of ['json', 'html', 'png', 'svg', 'pdf']. - If not specified, the format will be determined from the filename. - mode : string (optional) - Must be 'vega-lite'. If not specified, then infer the mode from - the '$schema' property of the spec, or the ``opt`` dictionary. - If it's not specified in either of those places, then use 'vega-lite'. - vega_version : string (optional) - For html output, the version of vega.js to use - vegalite_version : string (optional) - For html output, the version of vegalite.js to use - vegaembed_version : string (optional) - For html output, the version of vegaembed.js to use - embed_options : dict (optional) - The vegaEmbed options dictionary. Default is {} - (See https://github.com/vega/vega-embed for details) - json_kwds : dict (optional) - Additional keyword arguments are passed to the output method - associated with the specified format. - webdriver : string {'chrome' | 'firefox'} (optional) - Webdriver to use for png or svg output - scale_factor : float (optional) - scale_factor to use to change size/resolution of png or svg output - engine: string {'vl-convert', 'altair_saver'} - the conversion engine to use for 'png', 'svg', and 'pdf' formats - inline: bool (optional) - If False (default), the required JavaScript libraries are loaded - from a CDN location in the resulting html file. - If True, the required JavaScript libraries are inlined into the resulting - html file so that it will work without an internet connection. - The altair_viewer package is required if True. - **kwargs : - additional kwargs passed to spec_to_mimebundle. 
- """ - if json_kwds is None: - json_kwds = {} - - if embed_options is None: - embed_options = {} - - format = set_inspect_format_argument(format, fp, inline) - - def perform_save(): - spec = chart.to_dict(context={"pre_transform": False}) - - inner_mode = set_inspect_mode_argument( - mode, embed_options, spec, vegalite_version - ) - - if format == "json": - json_spec = json.dumps(spec, **json_kwds) - write_file_or_filename(fp, json_spec, mode="w") - elif format == "html": - if inline: - kwargs["template"] = "inline" - mimebundle = spec_to_mimebundle( - spec=spec, - format=format, - mode=inner_mode, - vega_version=vega_version, - vegalite_version=vegalite_version, - vegaembed_version=vegaembed_version, - embed_options=embed_options, - json_kwds=json_kwds, - **kwargs, - ) - write_file_or_filename(fp, mimebundle["text/html"], mode="w") - elif format in ["png", "svg", "pdf", "vega"]: - mimebundle = spec_to_mimebundle( - spec=spec, - format=format, - mode=inner_mode, - vega_version=vega_version, - vegalite_version=vegalite_version, - vegaembed_version=vegaembed_version, - webdriver=webdriver, - scale_factor=scale_factor, - engine=engine, - **kwargs, - ) - if format == "png": - write_file_or_filename(fp, mimebundle[0]["image/png"], mode="wb") - elif format == "pdf": - write_file_or_filename(fp, mimebundle["application/pdf"], mode="wb") - else: - encoding = kwargs.get("encoding", "utf-8") - write_file_or_filename( - fp, mimebundle["image/svg+xml"], mode="w", encoding=encoding - ) - else: - raise ValueError("Unsupported format: '{}'".format(format)) - - if using_vegafusion(): - # When the vegafusion data transformer is enabled, transforms will be - # evaluated during save and the resulting data will be included in the - # vega specification that is saved. - with data_transformers.disable_max_rows(): - perform_save() - else: - # Temporarily turn off any data transformers so that all data is inlined - # when calling chart.to_dict. This is relevant for vl-convert which cannot access - # local json files which could be created by a json data transformer. 
Furthermore, - # we don't exit the with statement until this function completed due to the issue - # described at https://github.com/vega/vl-convert/issues/31 - with data_transformers.enable("default"), data_transformers.disable_max_rows(): - perform_save() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/help.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/help.py deleted file mode 100644 index 2a238de3d6d5d69c70c0130d98f8272be7efabf5..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/help.py +++ /dev/null @@ -1,35 +0,0 @@ -import pkgutil -import sys -import fontTools -import importlib -import os -from pathlib import Path - - -def main(): - """Show this help""" - path = fontTools.__path__ - descriptions = {} - for pkg in sorted( - mod.name - for mod in pkgutil.walk_packages([fontTools.__path__[0]], prefix="fontTools.") - ): - try: - imports = __import__(pkg, globals(), locals(), ["main"]) - except ImportError as e: - continue - try: - description = imports.main.__doc__ - if description: - pkg = pkg.replace("fontTools.", "").replace(".__main__", "") - # show the docstring's first line only - descriptions[pkg] = description.splitlines()[0] - except AttributeError as e: - pass - for pkg, description in descriptions.items(): - print("fonttools %-25s %s" % (pkg, description), file=sys.stderr) - - -if __name__ == "__main__": - print("fonttools v%s\n" % fontTools.__version__, file=sys.stderr) - main() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/visitor.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/visitor.py deleted file mode 100644 index 3d28135fad3a951c447d03b7f2b08403cb24a12e..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/visitor.py +++ /dev/null @@ -1,143 +0,0 @@ -"""Generic visitor pattern implementation for Python objects.""" - -import enum - - -class Visitor(object): - - defaultStop = False - - @classmethod - def _register(celf, clazzes_attrs): - assert celf != Visitor, "Subclass Visitor instead." - if "_visitors" not in celf.__dict__: - celf._visitors = {} - - def wrapper(method): - assert method.__name__ == "visit" - for clazzes, attrs in clazzes_attrs: - if type(clazzes) != tuple: - clazzes = (clazzes,) - if type(attrs) == str: - attrs = (attrs,) - for clazz in clazzes: - _visitors = celf._visitors.setdefault(clazz, {}) - for attr in attrs: - assert attr not in _visitors, ( - "Oops, class '%s' has visitor function for '%s' defined already." 
- % (clazz.__name__, attr) - ) - _visitors[attr] = method - return None - - return wrapper - - @classmethod - def register(celf, clazzes): - if type(clazzes) != tuple: - clazzes = (clazzes,) - return celf._register([(clazzes, (None,))]) - - @classmethod - def register_attr(celf, clazzes, attrs): - clazzes_attrs = [] - if type(clazzes) != tuple: - clazzes = (clazzes,) - if type(attrs) == str: - attrs = (attrs,) - for clazz in clazzes: - clazzes_attrs.append((clazz, attrs)) - return celf._register(clazzes_attrs) - - @classmethod - def register_attrs(celf, clazzes_attrs): - return celf._register(clazzes_attrs) - - @classmethod - def _visitorsFor(celf, thing, _default={}): - typ = type(thing) - - for celf in celf.mro(): - - _visitors = getattr(celf, "_visitors", None) - if _visitors is None: - break - - m = celf._visitors.get(typ, None) - if m is not None: - return m - - return _default - - def visitObject(self, obj, *args, **kwargs): - """Called to visit an object. This function loops over all non-private - attributes of the objects and calls any user-registered (via - @register_attr() or @register_attrs()) visit() functions. - - If there is no user-registered visit function, of if there is and it - returns True, or it returns None (or doesn't return anything) and - visitor.defaultStop is False (default), then the visitor will proceed - to call self.visitAttr()""" - - keys = sorted(vars(obj).keys()) - _visitors = self._visitorsFor(obj) - defaultVisitor = _visitors.get("*", None) - for key in keys: - if key[0] == "_": - continue - value = getattr(obj, key) - visitorFunc = _visitors.get(key, defaultVisitor) - if visitorFunc is not None: - ret = visitorFunc(self, obj, key, value, *args, **kwargs) - if ret == False or (ret is None and self.defaultStop): - continue - self.visitAttr(obj, key, value, *args, **kwargs) - - def visitAttr(self, obj, attr, value, *args, **kwargs): - """Called to visit an attribute of an object.""" - self.visit(value, *args, **kwargs) - - def visitList(self, obj, *args, **kwargs): - """Called to visit any value that is a list.""" - for value in obj: - self.visit(value, *args, **kwargs) - - def visitDict(self, obj, *args, **kwargs): - """Called to visit any value that is a dictionary.""" - for value in obj.values(): - self.visit(value, *args, **kwargs) - - def visitLeaf(self, obj, *args, **kwargs): - """Called to visit any value that is not an object, list, - or dictionary.""" - pass - - def visit(self, obj, *args, **kwargs): - """This is the main entry to the visitor. The visitor will visit object - obj. - - The visitor will first determine if there is a registered (via - @register()) visit function for the type of object. If there is, it - will be called, and (visitor, obj, *args, **kwargs) will be passed to - the user visit function. 
- - If there is no user-registered visit function, of if there is and it - returns True, or it returns None (or doesn't return anything) and - visitor.defaultStop is False (default), then the visitor will proceed - to dispatch to one of self.visitObject(), self.visitList(), - self.visitDict(), or self.visitLeaf() (any of which can be overriden in - a subclass).""" - - visitorFunc = self._visitorsFor(obj).get(None, None) - if visitorFunc is not None: - ret = visitorFunc(self, obj, *args, **kwargs) - if ret == False or (ret is None and self.defaultStop): - return - if hasattr(obj, "__dict__") and not isinstance(obj, enum.Enum): - self.visitObject(obj, *args, **kwargs) - elif isinstance(obj, list): - self.visitList(obj, *args, **kwargs) - elif isinstance(obj, dict): - self.visitDict(obj, *args, **kwargs) - else: - self.visitLeaf(obj, *args, **kwargs) diff --git a/spaces/jonigata/PoseTweak/external/hrnet_w48_coco_256x192.py b/spaces/jonigata/PoseTweak/external/hrnet_w48_coco_256x192.py deleted file mode 100644 index ee33c03d79f94fb04e2fda222114c14e99307b45..0000000000000000000000000000000000000000 --- a/spaces/jonigata/PoseTweak/external/hrnet_w48_coco_256x192.py +++ /dev/null @@ -1,169 +0,0 @@ -_base_ = [ - 'default_runtime.py', - 'coco.py' -] -evaluation = dict(interval=10, metric='mAP', save_best='AP') - -optimizer = dict( - type='Adam', - lr=5e-4, -) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[170, 200]) -total_epochs = 210 -channel_cfg = dict( - num_output_channels=17, - dataset_joints=17, - dataset_channel=[ - [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], - ], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 - ]) - -# model settings -model = dict( - type='TopDown', - pretrained='https://download.openmmlab.com/mmpose/' - 'pretrain_models/hrnet_w48-8ef0771d.pth', - backbone=dict( - type='HRNet', - in_channels=3, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(48, 96)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(48, 96, 192)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(48, 96, 192, 384))), - ), - keypoint_head=dict( - type='TopdownHeatmapSimpleHead', - in_channels=48, - out_channels=channel_cfg['num_output_channels'], - num_deconv_layers=0, - extra=dict(final_conv_kernel=1, ), - loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)), - train_cfg=dict(), - test_cfg=dict( - flip_test=True, - post_process='default', - shift_heatmap=True, - modulate_kernel=11)) - -data_cfg = dict( - image_size=[192, 256], - heatmap_size=[48, 64], - num_output_channels=channel_cfg['num_output_channels'], - num_joints=channel_cfg['dataset_joints'], - dataset_channel=channel_cfg['dataset_channel'], - inference_channel=channel_cfg['inference_channel'], - soft_nms=False, - nms_thr=1.0, - oks_thr=0.9, - vis_thr=0.2, - use_gt_bbox=False, - det_bbox_thr=0.0, - bbox_file='data/coco/person_detection_results/' - 'COCO_val2017_detections_AP_H_56_person.json', -) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='TopDownGetBboxCenterScale', padding=1.25), - dict(type='TopDownRandomShiftBboxCenter', shift_factor=0.16, 
prob=0.3), - dict(type='TopDownRandomFlip', flip_prob=0.5), - dict( - type='TopDownHalfBodyTransform', - num_joints_half_body=8, - prob_half_body=0.3), - dict( - type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5), - dict(type='TopDownAffine'), - dict(type='ToTensor'), - dict( - type='NormalizeTensor', - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), - dict(type='TopDownGenerateTarget', sigma=2), - dict( - type='Collect', - keys=['img', 'target', 'target_weight'], - meta_keys=[ - 'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale', - 'rotation', 'bbox_score', 'flip_pairs' - ]), -] - -val_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='TopDownGetBboxCenterScale', padding=1.25), - dict(type='TopDownAffine'), - dict(type='ToTensor'), - dict( - type='NormalizeTensor', - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'image_file', 'center', 'scale', 'rotation', 'bbox_score', - 'flip_pairs' - ]), -] - -test_pipeline = val_pipeline - -data_root = 'data/coco' -data = dict( - samples_per_gpu=32, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=32), - test_dataloader=dict(samples_per_gpu=32), - train=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_train2017.json', - img_prefix=f'{data_root}/train2017/', - data_cfg=data_cfg, - pipeline=train_pipeline, - dataset_info={{_base_.dataset_info}}), - val=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_val2017.json', - img_prefix=f'{data_root}/val2017/', - data_cfg=data_cfg, - pipeline=val_pipeline, - dataset_info={{_base_.dataset_info}}), - test=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_val2017.json', - img_prefix=f'{data_root}/val2017/', - data_cfg=data_cfg, - pipeline=test_pipeline, - dataset_info={{_base_.dataset_info}}), -) diff --git a/spaces/juancopi81/whisper-demo-es-medium/videocreator.py b/spaces/juancopi81/whisper-demo-es-medium/videocreator.py deleted file mode 100644 index 62672bc0c42a55efcc00a74650099731964641e4..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/whisper-demo-es-medium/videocreator.py +++ /dev/null @@ -1,64 +0,0 @@ -import gradio as gr -from typing import Dict - -from moviepy.editor import VideoFileClip, concatenate_videoclips - -class VideoCreator: - def __init__(self, - tts_pipeline, - image_pipeline) -> None: - - self.tts_pipeline = tts_pipeline - self.image_pipeline = image_pipeline - - def create_video(self, - scenes: Dict, - video_styles: str) -> str: - videos_dict = {} - for index, scene in enumerate(scenes): - video_scene = self._create_video_from_scene(scenes[scene], - video_styles) - videos_dict[index] = video_scene - merged_video = self._merge_videos(videos_dict) - return merged_video - - def _create_video_from_scene(self, scene: Dict, video_styles: str) -> str: - audio_file = self._get_audio_from_text(scene["Summary"]) - bg_image = self._get_bg_image_from_description(scene["Illustration"], video_styles) - video = gr.make_waveform(audio=audio_file, - bg_image=bg_image) - return video - - def _get_audio_from_text(self, voice_over: str) -> str: - self.tts_pipeline.tts_to_file(text=voice_over, - file_path="output.wav") - return "output.wav" - - def _get_bg_image_from_description(self, img_desc: str, video_styles: str): - images = self.image_pipeline(img_desc + ", " + video_styles) - print("Image generated!") - image_output = images.images[0] - 
image_output.save("img.png") - return "img.png" - - def _merge_videos(self, videos_dict: Dict) -> str: - videos_to_concatenate = [] - for video in range(len(videos_dict)): - video_clip = VideoFileClip(videos_dict[video]) - videos_to_concatenate.append(video_clip) - final_video = concatenate_videoclips(videos_to_concatenate) - try: - final_video.write_videofile("final_video.mp4", - threads=4) - print("Saved .mp4 without Exception at final_video.mp4") - return "final_video.mp4" - except IndexError: - # Short by one frame, so get rid on the last frame: - final_video = final_video.subclip(t_end=(video_clip.duration - 1.0/final_video.fps)) - final_video.write_videofile("final_video.mp4", - threads=4) - print("Saved .mp4 after Exception at final_video.mp4") - return "final_video.mp4" - except Exception as e: - print("Exception {} was raised!!".format(e)) - return "final_video.mp4" \ No newline at end of file diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/network.py b/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/network.py deleted file mode 100644 index 28bcbf17b55912b43f19443712a7aa83cccaaf11..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/network.py +++ /dev/null @@ -1,424 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""T5.1.1 Transformer model.""" - -from typing import Any, Sequence - -from flax import linen as nn -from flax import struct -import jax.numpy as jnp -from t5x.examples.t5 import layers - - -@struct.dataclass -class T5Config: - """Global hyperparameters used to minimize obnoxious kwarg plumbing.""" - vocab_size: int - # Activation dtypes. - dtype: Any = jnp.float32 - emb_dim: int = 512 - num_heads: int = 8 - num_encoder_layers: int = 6 - num_decoder_layers: int = 6 - head_dim: int = 64 - mlp_dim: int = 2048 - # Activation functions are retrieved from Flax. - mlp_activations: Sequence[str] = ('relu',) - dropout_rate: float = 0.1 - # If `True`, the embedding weights are used in the decoder output layer. - logits_via_embedding: bool = False - # Whether to accumulate attention logits in float32 regardless of dtype. - float32_attention_logits: bool = False - - -class EncoderLayer(nn.Module): - """Transformer encoder layer.""" - config: T5Config - relative_embedding: nn.Module - - @nn.compact - def __call__(self, inputs, encoder_mask=None, deterministic=False): - cfg = self.config - - # Relative position embedding as attention biases. - encoder_bias = self.relative_embedding(inputs.shape[-2], inputs.shape[-2], - True) - - # Attention block. 
- assert inputs.ndim == 3 - x = layers.LayerNorm( - dtype=cfg.dtype, name='pre_attention_layer_norm')( - inputs) - # [batch, length, emb_dim] -> [batch, length, emb_dim] - x = layers.MultiHeadDotProductAttention( - num_heads=cfg.num_heads, - dtype=cfg.dtype, - head_dim=cfg.head_dim, - dropout_rate=cfg.dropout_rate, - float32_logits=cfg.float32_attention_logits, - name='attention')( - x, x, encoder_mask, encoder_bias, deterministic=deterministic) - x = nn.Dropout( - rate=cfg.dropout_rate, broadcast_dims=(-2,))( - x, deterministic=deterministic) - x = x + inputs - - # MLP block. - y = layers.LayerNorm(dtype=cfg.dtype, name='pre_mlp_layer_norm')(x) - # [batch, length, emb_dim] -> [batch, length, emb_dim] - y = layers.MlpBlock( - intermediate_dim=cfg.mlp_dim, - activations=cfg.mlp_activations, - intermediate_dropout_rate=cfg.dropout_rate, - dtype=cfg.dtype, - name='mlp', - )(y, deterministic=deterministic) - y = nn.Dropout( - rate=cfg.dropout_rate, broadcast_dims=(-2,))( - y, deterministic=deterministic) - y = y + x - - return y - - -class DecoderLayer(nn.Module): - """Transformer decoder layer that attends to the encoder.""" - config: T5Config - relative_embedding: nn.Module - - @nn.compact - def __call__(self, - inputs, - encoded, - decoder_mask=None, - encoder_decoder_mask=None, - deterministic=False, - decode=False, - max_decode_length=None): - cfg = self.config - - # Relative position embedding as attention biases. - l = max_decode_length if decode and max_decode_length else inputs.shape[-2] - decoder_bias = self.relative_embedding(l, l, False) - - # inputs: embedded inputs to the decoder with shape [batch, length, emb_dim] - x = layers.LayerNorm( - dtype=cfg.dtype, name='pre_self_attention_layer_norm')( - inputs) - - # Self-attention block - x = layers.MultiHeadDotProductAttention( - num_heads=cfg.num_heads, - dtype=cfg.dtype, - head_dim=cfg.head_dim, - dropout_rate=cfg.dropout_rate, - float32_logits=cfg.float32_attention_logits, - name='self_attention')( - x, - x, - decoder_mask, - decoder_bias, - deterministic=deterministic, - decode=decode) - x = nn.Dropout( - rate=cfg.dropout_rate, broadcast_dims=(-2,))( - x, deterministic=deterministic) - x = x + inputs - - # Encoder-Decoder block. - y = layers.LayerNorm( - dtype=cfg.dtype, name='pre_cross_attention_layer_norm')( - x) - y = layers.MultiHeadDotProductAttention( - num_heads=cfg.num_heads, - dtype=cfg.dtype, - head_dim=cfg.head_dim, - dropout_rate=cfg.dropout_rate, - float32_logits=cfg.float32_attention_logits, - name='encoder_decoder_attention')( - y, encoded, encoder_decoder_mask, deterministic=deterministic) - y = nn.Dropout( - rate=cfg.dropout_rate, broadcast_dims=(-2,))( - y, deterministic=deterministic) - y = y + x - - # MLP block. 
- z = layers.LayerNorm(dtype=cfg.dtype, name='pre_mlp_layer_norm')(y) - z = layers.MlpBlock( - intermediate_dim=cfg.mlp_dim, - activations=cfg.mlp_activations, - intermediate_dropout_rate=cfg.dropout_rate, - dtype=cfg.dtype, - name='mlp', - )(z, deterministic=deterministic) - z = nn.Dropout( - rate=cfg.dropout_rate, broadcast_dims=(-2,))( - z, deterministic=deterministic) - z = z + y - - return z - - -class Encoder(nn.Module): - """A stack of encoder layers.""" - config: T5Config - shared_embedding: nn.Module - - @nn.compact - def __call__(self, - encoder_input_tokens, - encoder_mask=None, - deterministic=False): - cfg = self.config - assert encoder_input_tokens.ndim == 2 # [batch, length] - rel_emb = layers.RelativePositionBiases( - num_buckets=32, - max_distance=128, - num_heads=cfg.num_heads, - dtype=cfg.dtype, - embedding_init=nn.initializers.variance_scaling(1.0, 'fan_avg', - 'uniform'), - name='relpos_bias') - - # [batch, length] -> [batch, length, emb_dim] - x = self.shared_embedding(encoder_input_tokens.astype('int32')) - x = nn.Dropout( - rate=cfg.dropout_rate, broadcast_dims=(-2,))( - x, deterministic=deterministic) - x = x.astype(cfg.dtype) - - for lyr in range(cfg.num_encoder_layers): - # [batch, length, emb_dim] -> [batch, length, emb_dim] - x = EncoderLayer( - config=cfg, relative_embedding=rel_emb, - name=f'layers_{lyr}')(x, encoder_mask, deterministic) - - x = layers.LayerNorm(dtype=cfg.dtype, name='encoder_norm')(x) - return nn.Dropout(rate=cfg.dropout_rate)(x, deterministic=deterministic) - - -class Decoder(nn.Module): - """A stack of decoder layers as a part of an encoder-decoder architecture.""" - config: T5Config - shared_embedding: nn.Module - - @nn.compact - def __call__(self, - encoded, - decoder_input_tokens, - decoder_positions=None, - decoder_mask=None, - encoder_decoder_mask=None, - deterministic=False, - decode=False, - max_decode_length=None): - cfg = self.config - assert decoder_input_tokens.ndim == 2 # [batch, len] - rel_emb = layers.RelativePositionBiases( - num_buckets=32, - max_distance=128, - num_heads=cfg.num_heads, - dtype=cfg.dtype, - embedding_init=nn.initializers.variance_scaling(1.0, 'fan_avg', - 'uniform'), - name='relpos_bias') - - # [batch, length] -> [batch, length, emb_dim] - y = self.shared_embedding(decoder_input_tokens.astype('int32')) - y = nn.Dropout( - rate=cfg.dropout_rate, broadcast_dims=(-2,))( - y, deterministic=deterministic) - y = y.astype(cfg.dtype) - - for lyr in range(cfg.num_decoder_layers): - # [batch, length, emb_dim] -> [batch, length, emb_dim] - y = DecoderLayer( - config=cfg, relative_embedding=rel_emb, name=f'layers_{lyr}')( - y, - encoded, - decoder_mask=decoder_mask, - encoder_decoder_mask=encoder_decoder_mask, - deterministic=deterministic, - decode=decode, - max_decode_length=max_decode_length) - - y = layers.LayerNorm(dtype=cfg.dtype, name='decoder_norm')(y) - y = nn.Dropout( - rate=cfg.dropout_rate, broadcast_dims=(-2,))( - y, deterministic=deterministic) - - # [batch, length, emb_dim] -> [batch, length, vocab_size] - if cfg.logits_via_embedding: - # Use the transpose of embedding matrix for logit transform. - logits = self.shared_embedding.attend(y) - # Correctly normalize pre-softmax logits for this shared case. - logits = logits / jnp.sqrt(y.shape[-1]) - else: - logits = layers.DenseGeneral( - cfg.vocab_size, - dtype=jnp.float32, # Use float32 for stabiliity. 
- kernel_axes=('embed', 'vocab'), - name='logits_dense')( - y) - return logits - - -class Transformer(nn.Module): - """An encoder-decoder Transformer model.""" - config: T5Config - - def setup(self): - cfg = self.config - self.shared_embedding = layers.Embed( - num_embeddings=cfg.vocab_size, - features=cfg.emb_dim, - dtype=cfg.dtype, - attend_dtype=jnp.float32, # for logit training stability - embedding_init=nn.initializers.normal(stddev=1.0), - one_hot=True, - name='token_embedder') - - self.encoder = Encoder(config=cfg, shared_embedding=self.shared_embedding) - self.decoder = Decoder(config=cfg, shared_embedding=self.shared_embedding) - - def encode(self, - encoder_input_tokens, - encoder_segment_ids=None, - enable_dropout=True): - """Applies Transformer encoder-branch on the inputs.""" - cfg = self.config - assert encoder_input_tokens.ndim == 2 # (batch, len) - - # Make padding attention mask. - encoder_mask = layers.make_attention_mask( - encoder_input_tokens > 0, encoder_input_tokens > 0, dtype=cfg.dtype) - # Add segmentation block-diagonal attention mask if using segmented data. - if encoder_segment_ids is not None: - encoder_mask = layers.combine_masks( - encoder_mask, - layers.make_attention_mask( - encoder_segment_ids, - encoder_segment_ids, - jnp.equal, - dtype=cfg.dtype)) - - return self.encoder( - encoder_input_tokens, encoder_mask, deterministic=not enable_dropout) - - def decode( - self, - encoded, - encoder_input_tokens, # only needed for masks - decoder_input_tokens, - decoder_target_tokens, - encoder_segment_ids=None, - decoder_segment_ids=None, - decoder_positions=None, - enable_dropout=True, - decode=False, - max_decode_length=None): - """Applies Transformer decoder-branch on encoded-input and target.""" - cfg = self.config - - # Make padding attention masks. - if decode: - # Do not mask decoder attention based on targets padding at - # decoding/inference time. - decoder_mask = None - encoder_decoder_mask = layers.make_attention_mask( - jnp.ones_like(decoder_target_tokens), - encoder_input_tokens > 0, - dtype=cfg.dtype) - else: - decoder_mask = layers.make_decoder_mask( - decoder_target_tokens=decoder_target_tokens, - dtype=cfg.dtype, - decoder_segment_ids=decoder_segment_ids) - encoder_decoder_mask = layers.make_attention_mask( - decoder_target_tokens > 0, encoder_input_tokens > 0, dtype=cfg.dtype) - - # Add segmentation block-diagonal attention masks if using segmented data. - if encoder_segment_ids is not None: - if decode: - raise ValueError( - 'During decoding, packing should not be used but ' - '`encoder_segment_ids` was passed to `Transformer.decode`.') - - encoder_decoder_mask = layers.combine_masks( - encoder_decoder_mask, - layers.make_attention_mask( - decoder_segment_ids, - encoder_segment_ids, - jnp.equal, - dtype=cfg.dtype)) - - logits = self.decoder( - encoded, - decoder_input_tokens=decoder_input_tokens, - decoder_positions=decoder_positions, - decoder_mask=decoder_mask, - encoder_decoder_mask=encoder_decoder_mask, - deterministic=not enable_dropout, - decode=decode, - max_decode_length=max_decode_length) - return logits - - def __call__(self, - encoder_input_tokens, - decoder_input_tokens, - decoder_target_tokens, - encoder_segment_ids=None, - decoder_segment_ids=None, - encoder_positions=None, - decoder_positions=None, - *, - enable_dropout: bool = True, - decode: bool = False): - """Applies Transformer model on the inputs. - - This method requires both decoder_target_tokens and decoder_input_tokens, - which is a shifted version of the former. 
For a packed dataset, it usually - has additional processing applied. For example, the first element of each - sequence has id 0 instead of the shifted EOS id from the previous sequence. - - Args: - encoder_input_tokens: input data to the encoder. - decoder_input_tokens: input token to the decoder. - decoder_target_tokens: target token to the decoder. - encoder_segment_ids: encoder segmentation info for packed examples. - decoder_segment_ids: decoder segmentation info for packed examples. - encoder_positions: encoder subsequence positions for packed examples. - decoder_positions: decoder subsequence positions for packed examples. - enable_dropout: Ensables dropout if set to True. - decode: Whether to prepare and use an autoregressive cache. - - Returns: - logits array from full transformer. - """ - encoded = self.encode( - encoder_input_tokens, - encoder_segment_ids=encoder_segment_ids, - enable_dropout=enable_dropout) - - return self.decode( - encoded, - encoder_input_tokens, # only used for masks - decoder_input_tokens, - decoder_target_tokens, - encoder_segment_ids=encoder_segment_ids, - decoder_segment_ids=decoder_segment_ids, - decoder_positions=decoder_positions, - enable_dropout=enable_dropout, - decode=decode) diff --git a/spaces/kTonpa/Text2Cryptopunks/text2punks/transformer.py b/spaces/kTonpa/Text2Cryptopunks/text2punks/transformer.py deleted file mode 100644 index 52a2fc51c1b65f3990f3364aec6bb8f014785df7..0000000000000000000000000000000000000000 --- a/spaces/kTonpa/Text2Cryptopunks/text2punks/transformer.py +++ /dev/null @@ -1,115 +0,0 @@ -from functools import partial -from itertools import islice, cycle - -from torch import nn - -from text2punks.attention import Attention, SparseAxialCausalAttention - -# helpers - -def exists(val): - return val is not None - -def default(val, d): - return val if exists(val) else d - -def cast_tuple(val, depth = 1): - if isinstance(val, list): - val = tuple(val) - return val if isinstance(val, tuple) else (val,) * depth - -# classes - -class SequentialSequence(nn.Module): - def __init__(self, layers): - super().__init__() - self.layers = layers - - def forward(self, x): - for (f, g) in list(self.layers): - x = x + f(x) - x = x + g(x) - return x - -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.fn = fn - - def forward(self, x, **kwargs): - return self.fn(self.norm(x), **kwargs) - -class FeedForward(nn.Module): - def __init__(self, dim, dropout = 0.): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, dim * 4), - nn.GELU(), - nn.Dropout(dropout), - nn.Linear(dim * 4, dim) - ) - # the order of dropout nn.Linear(4 * n_embd, n_embd) vs nn.Dropout(resid_pdrop) - - def forward(self, x): - return self.net(x) - - -class Transformer(nn.Module): - def __init__( - self, - *, - dim, - depth, - seq_len, - causal = True, - heads = 8, - dim_head = 64, - attn_dropout = 0., - resid_dropout = 0., - embd_dropout = 0., - ff_dropout = 0., - image_size = 24, - attn_types = None, - ): - super().__init__() - layers = nn.ModuleList([]) - - attn_types = default(attn_types, ('full',)) - attn_types = cast_tuple(attn_types) - attn_type_layer = islice(cycle(attn_types), depth) - - for attn_type in attn_type_layer: - if attn_type == 'full': - attn_class = partial(Attention, causal = causal) - elif attn_type == 'axial_row': - attn_class = partial(SparseAxialCausalAttention, seq_len = seq_len, axis = 0, image_size = image_size) - elif attn_type == 'axial_col': - attn_class = 
partial(SparseAxialCausalAttention, seq_len = seq_len, axis = 1, image_size = image_size) - else: - raise ValueError(f'attention type "{attn_type}" is not valid') - - attn = attn_class(dim, seq_len = seq_len, heads = heads, dim_head = dim_head, attn_dropout = attn_dropout, resid_dropout = resid_dropout) - - layers.append(nn.ModuleList([ - PreNorm(dim, attn), - PreNorm(dim, FeedForward(dim, dropout = ff_dropout)) - ])) - - # full attention in the last layer - - attn_class = partial(Attention, causal = causal) - attn = attn_class(dim, seq_len = seq_len, heads = heads, dim_head = dim_head, attn_dropout = attn_dropout, resid_dropout = resid_dropout) - - layers.append(nn.ModuleList([ - PreNorm(dim, attn), - PreNorm(dim, FeedForward(dim, dropout = ff_dropout)) - ])) - - self.layers = SequentialSequence(layers) - self.embd_drop = nn.Dropout(embd_dropout) - - def forward(self, x): - x = self.embd_drop(x) - return self.layers(x) - \ No newline at end of file diff --git a/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/multitask_transformer/learner.py b/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/multitask_transformer/learner.py deleted file mode 100644 index 667c73c4045a3a24267e6ec9e73543d423abd25b..0000000000000000000000000000000000000000 --- a/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/multitask_transformer/learner.py +++ /dev/null @@ -1,340 +0,0 @@ -from fastai.basics import * -from ..vocab import * -from ..utils.top_k_top_p import top_k_top_p -from ..utils.midifile import is_empty_midi -from ..music_transformer.transform import * -from ..music_transformer.learner import filter_invalid_indexes -from .model import get_multitask_model -from .dataloader import * - -def multitask_model_learner(data:DataBunch, config:dict=None, drop_mult:float=1., - pretrained_path:PathOrStr=None, **learn_kwargs) -> 'LanguageLearner': - "Create a `Learner` with a language model from `data` and `arch`." - vocab = data.vocab - vocab_size = len(vocab) - - if pretrained_path: - state = torch.load(pretrained_path, map_location='cpu') - if config is None: config = state['config'] - - model = get_multitask_model(vocab_size, config=config, drop_mult=drop_mult, pad_idx=vocab.pad_idx) - metrics = [AverageMultiMetric(partial(m, pad_idx=vocab.pad_idx)) for m in [mask_acc, lm_acc, c2m_acc, m2c_acc]] - loss_func = MultiLoss(ignore_index=data.vocab.pad_idx) - learn = MultitaskLearner(data, model, loss_func=loss_func, metrics=metrics, **learn_kwargs) - - if pretrained_path: - get_model(model).load_state_dict(state['model'], strict=False) - if not hasattr(learn, 'opt'): learn.create_opt(defaults.lr, learn.wd) - try: learn.opt.load_state_dict(state['opt']) - except: pass - del state - gc.collect() - - return learn - -class MultitaskLearner(Learner): - def save(self, file:PathLikeOrBinaryStream=None, with_opt:bool=True, config=None): - "Save model and optimizer state (if `with_opt`) with `file` to `self.model_dir`. `file` can be file-like (file or buffer)" - out_path = super().save(file, return_path=True, with_opt=with_opt) - if config and out_path: - state = torch.load(out_path) - state['config'] = config - torch.save(state, out_path) - del state - gc.collect() - return out_path - - def predict_nw(self, item:MusicItem, n_words:int=128, - temperatures:float=(1.0,1.0), min_bars=4, - top_k=30, top_p=0.6): - "Return the `n_words` that come after `text`." 
- self.model.reset() - new_idx = [] - vocab = self.data.vocab - x, pos = item.to_tensor(), item.get_pos_tensor() - last_pos = pos[-1] if len(pos) else 0 - y = torch.tensor([0]) - - start_pos = last_pos - - sep_count = 0 - bar_len = SAMPLE_FREQ * 4 # assuming 4/4 time - vocab = self.data.vocab - - repeat_count = 0 - - for i in progress_bar(range(n_words), leave=True): - batch = { 'lm': { 'x': x[None], 'pos': pos[None] } }, y - logits = self.pred_batch(batch=batch)['lm'][-1][-1] - - prev_idx = new_idx[-1] if len(new_idx) else vocab.pad_idx - - # Temperature - # Use first temperatures value if last prediction was duration - temperature = temperatures[0] if vocab.is_duration_or_pad(prev_idx) else temperatures[1] - repeat_penalty = max(0, np.log((repeat_count+1)/4)/5) * temperature - temperature += repeat_penalty - if temperature != 1.: logits = logits / temperature - - - # Filter - # bar = 16 beats - filter_value = -float('Inf') - if ((last_pos - start_pos) // 16) <= min_bars: logits[vocab.bos_idx] = filter_value - - logits = filter_invalid_indexes(logits, prev_idx, vocab, filter_value=filter_value) - logits = top_k_top_p(logits, top_k=top_k, top_p=top_p, filter_value=filter_value) - - # Sample - probs = F.softmax(logits, dim=-1) - idx = torch.multinomial(probs, 1).item() - - # Update repeat count - num_choices = len(probs.nonzero().view(-1)) - if num_choices <= 2: repeat_count += 1 - else: repeat_count = repeat_count // 2 - - if prev_idx==vocab.sep_idx: - duration = idx - vocab.dur_range[0] - last_pos = last_pos + duration - - bars_pred = (last_pos - start_pos) // 16 - abs_bar = last_pos // 16 - # if (bars % 8 == 0) and (bars_pred > min_bars): break - if (i / n_words > 0.80) and (abs_bar % 4 == 0): break - - - if idx==vocab.bos_idx: - print('Predicted BOS token. 
Returning prediction...') - break - - new_idx.append(idx) - x = x.new_tensor([idx]) - pos = pos.new_tensor([last_pos]) - - pred = vocab.to_music_item(np.array(new_idx)) - full = item.append(pred) - return pred, full - - def predict_mask(self, masked_item:MusicItem, - temperatures:float=(1.0,1.0), - top_k=20, top_p=0.8): - x = masked_item.to_tensor() - pos = masked_item.get_pos_tensor() - y = torch.tensor([0]) - vocab = self.data.vocab - self.model.reset() - mask_idxs = (x == vocab.mask_idx).nonzero().view(-1) - - repeat_count = 0 - - for midx in progress_bar(mask_idxs, leave=True): - prev_idx = x[midx-1] - - # Using original positions, otherwise model gets too off track - # pos = torch.tensor(-position_enc(xb[0].cpu().numpy()), device=xb.device)[None] - - # Next Word - logits = self.pred_batch(batch=({ 'msk': { 'x': x[None], 'pos': pos[None] } }, y) )['msk'][0][midx] - - # Temperature - # Use first temperatures value if last prediction was duration - temperature = temperatures[0] if vocab.is_duration_or_pad(prev_idx) else temperatures[1] - repeat_penalty = max(0, np.log((repeat_count+1)/4)/5) * temperature - temperature += repeat_penalty - if temperature != 1.: logits = logits / temperature - - # Filter - filter_value = -float('Inf') - special_idxs = [vocab.bos_idx, vocab.sep_idx, vocab.stoi[EOS]] - logits[special_idxs] = filter_value # Don't allow any special tokens (as we are only removing notes and durations) - logits = filter_invalid_indexes(logits, prev_idx, vocab, filter_value=filter_value) - logits = top_k_top_p(logits, top_k=top_k, top_p=top_p, filter_value=filter_value) - - # Sampling - probs = F.softmax(logits, dim=-1) - idx = torch.multinomial(probs, 1).item() - - # Update repeat count - num_choices = len(probs.nonzero().view(-1)) - if num_choices <= 2: repeat_count += 1 - else: repeat_count = repeat_count // 2 - - x[midx] = idx - - return vocab.to_music_item(x.cpu().numpy()) - - def predict_s2s(self, input_item:MusicItem, target_item:MusicItem, n_words:int=256, - temperatures:float=(1.0,1.0), top_k=30, top_p=0.8, - use_memory=True): - vocab = self.data.vocab - - # Input doesn't change. 
We can reuse the encoder output on each prediction - with torch.no_grad(): - inp, inp_pos = input_item.to_tensor(), input_item.get_pos_tensor() - x_enc = self.model.encoder(inp[None], inp_pos[None]) - - # target - targ = target_item.data.tolist() - targ_pos = target_item.position.tolist() - last_pos = targ_pos[-1] - self.model.reset() - - repeat_count = 0 - - max_pos = input_item.position[-1] + SAMPLE_FREQ * 4 # Only predict until both tracks/parts have the same length - x, pos = inp.new_tensor(targ), inp_pos.new_tensor(targ_pos) - - for i in progress_bar(range(n_words), leave=True): - # Predict - with torch.no_grad(): - dec = self.model.decoder(x[None], pos[None], x_enc) - logits = self.model.head(dec)[-1, -1] - - # Temperature - # Use first temperatures value if last prediction was duration - prev_idx = targ[-1] if len(targ) else vocab.pad_idx - temperature = temperatures[0] if vocab.is_duration_or_pad(prev_idx) else temperatures[1] - repeat_penalty = max(0, np.log((repeat_count+1)/4)/5) * temperature - temperature += repeat_penalty - if temperature != 1.: logits = logits / temperature - - # Filter - filter_value = -float('Inf') - logits = filter_invalid_indexes(logits, prev_idx, vocab, filter_value=filter_value) - logits = top_k_top_p(logits, top_k=top_k, top_p=top_p, filter_value=filter_value) - - # Sample - probs = F.softmax(logits, dim=-1) - idx = torch.multinomial(probs, 1).item() - - # Update repeat count - num_choices = len(probs.nonzero().view(-1)) - if num_choices <= 2: repeat_count += 1 - else: repeat_count = repeat_count // 2 - - if idx == vocab.bos_idx | idx == vocab.stoi[EOS]: - print('Predicting BOS/EOS') - break - - if prev_idx == vocab.sep_idx: - duration = idx - vocab.dur_range[0] - last_pos = last_pos + duration - if last_pos > max_pos: - print('Predicted past counter-part length. Returning early') - break - - targ_pos.append(last_pos) - targ.append(idx) - - if use_memory: - # Relying on memory for kv. 
Only need last prediction index - x, pos = inp.new_tensor([targ[-1]]), inp_pos.new_tensor([targ_pos[-1]]) - else: - # Reset memory after each prediction, since we feeding the whole sequence every time - self.model.reset() - x, pos = inp.new_tensor(targ), inp_pos.new_tensor(targ_pos) - - return vocab.to_music_item(np.array(targ)) - -# High level prediction functions from midi file -def nw_predict_from_midi(learn, midi=None, n_words=400, - temperatures=(1.0,1.0), top_k=30, top_p=0.6, seed_len=None, **kwargs): - vocab = learn.data.vocab - seed = MusicItem.from_file(midi, vocab) if not is_empty_midi(midi) else MusicItem.empty(vocab) - if seed_len is not None: seed = seed.trim_to_beat(seed_len) - - pred, full = learn.predict_nw(seed, n_words=n_words, temperatures=temperatures, top_k=top_k, top_p=top_p, **kwargs) - return full - -def s2s_predict_from_midi(learn, midi=None, n_words=200, - temperatures=(1.0,1.0), top_k=24, top_p=0.7, seed_len=None, pred_melody=True, **kwargs): - multitrack_item = MultitrackItem.from_file(midi, learn.data.vocab) - melody, chords = multitrack_item.melody, multitrack_item.chords - inp, targ = (chords, melody) if pred_melody else (melody, chords) - - # if seed_len is passed, cutoff sequence so we can predict the rest - if seed_len is not None: targ = targ.trim_to_beat(seed_len) - targ = targ.remove_eos() - - pred = learn.predict_s2s(inp, targ, n_words=n_words, temperatures=temperatures, top_k=top_k, top_p=top_p, **kwargs) - - part_order = (pred, inp) if pred_melody else (inp, pred) - return MultitrackItem(*part_order) - -def mask_predict_from_midi(learn, midi=None, predict_notes=True, - temperatures=(1.0,1.0), top_k=30, top_p=0.7, section=None, **kwargs): - item = MusicItem.from_file(midi, learn.data.vocab) - masked_item = item.mask_pitch(section) if predict_notes else item.mask_duration(section) - pred = learn.predict_mask(masked_item, temperatures=temperatures, top_k=top_k, top_p=top_p, **kwargs) - return pred - -# LOSS AND METRICS - -class MultiLoss(): - def __init__(self, ignore_index=None): - "Loss mult - Mask, NextWord, Seq2Seq" - self.loss = CrossEntropyFlat(ignore_index=ignore_index) - - def __call__(self, inputs:Dict[str,Tensor], targets:Dict[str,Tensor])->Rank0Tensor: - losses = [self.loss(inputs[key], target) for key,target in targets.items()] - return sum(losses) - -def acc_ignore_pad(input:Tensor, targ:Tensor, pad_idx)->Rank0Tensor: - if input is None or targ is None: return None - n = targ.shape[0] - input = input.argmax(dim=-1).view(n,-1) - targ = targ.view(n,-1) - mask = targ != pad_idx - return (input[mask]==targ[mask]).float().mean() - -def acc_index(inputs, targets, key, pad_idx): - return acc_ignore_pad(inputs.get(key), targets.get(key), pad_idx) - -def mask_acc(inputs, targets, pad_idx): return acc_index(inputs, targets, 'msk', pad_idx) -def lm_acc(inputs, targets, pad_idx): return acc_index(inputs, targets, 'lm', pad_idx) -def c2m_acc(inputs, targets, pad_idx): return acc_index(inputs, targets, 'c2m', pad_idx) -def m2c_acc(inputs, targets, pad_idx): return acc_index(inputs, targets, 'm2c', pad_idx) - - -class AverageMultiMetric(AverageMetric): - "Updated fastai.AverageMetric to support multi task metrics." - def on_batch_end(self, last_output, last_target, **kwargs): - "Update metric computation with `last_output` and `last_target`." 
- if not is_listy(last_target): last_target=[last_target] - val = self.func(last_output, *last_target) - if val is None: return - self.count += first_el(last_target).size(0) - if self.world: - val = val.clone() - dist.all_reduce(val, op=dist.ReduceOp.SUM) - val /= self.world - self.val += first_el(last_target).size(0) * val.detach().cpu() - - def on_epoch_end(self, last_metrics, **kwargs): - "Set the final result in `last_metrics`." - if self.count == 0: return add_metrics(last_metrics, 0) - return add_metrics(last_metrics, self.val/self.count) - - -# MODEL LOADING -class MTTrainer(LearnerCallback): - "`Callback` that regroups lr adjustment to seq_len, AR and TAR." - def __init__(self, learn:Learner, dataloaders=None, starting_mask_window=1): - super().__init__(learn) - self.count = 1 - self.mw_start = starting_mask_window - self.dataloaders = dataloaders - - def on_epoch_begin(self, **kwargs): - "Reset the hidden state of the model." - model = get_model(self.learn.model) - model.reset() - model.encoder.mask_steps = max(self.count+self.mw_start, 100) - - def on_epoch_end(self, last_metrics, **kwargs): - "Finish the computation and sends the result to the Recorder." - if self.dataloaders is not None: - self.learn.data = self.dataloaders[self.count % len(self.dataloaders)] - self.count += 1 - diff --git a/spaces/kdrkdrkdr/ProsekaTTS/export_model.py b/spaces/kdrkdrkdr/ProsekaTTS/export_model.py deleted file mode 100644 index 98a49835df5a7a2486e76ddf94fbbb4444b52203..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/ProsekaTTS/export_model.py +++ /dev/null @@ -1,13 +0,0 @@ -import torch - -if __name__ == '__main__': - model_path = "saved_model/11/model.pth" - output_path = "saved_model/11/model1.pth" - checkpoint_dict = torch.load(model_path, map_location='cpu') - checkpoint_dict_new = {} - for k, v in checkpoint_dict.items(): - if k == "optimizer": - print("remove optimizer") - continue - checkpoint_dict_new[k] = v - torch.save(checkpoint_dict_new, output_path) diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/encoder_preprocess.py b/spaces/keithhon/Real-Time-Voice-Cloning/encoder_preprocess.py deleted file mode 100644 index 11502013c8d75d4652fb0ffdcdc49d55e8fb8bc9..0000000000000000000000000000000000000000 --- a/spaces/keithhon/Real-Time-Voice-Cloning/encoder_preprocess.py +++ /dev/null @@ -1,70 +0,0 @@ -from encoder.preprocess import preprocess_librispeech, preprocess_voxceleb1, preprocess_voxceleb2 -from utils.argutils import print_args -from pathlib import Path -import argparse - -if __name__ == "__main__": - class MyFormatter(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter): - pass - - parser = argparse.ArgumentParser( - description="Preprocesses audio files from datasets, encodes them as mel spectrograms and " - "writes them to the disk. This will allow you to train the encoder. The " - "datasets required are at least one of VoxCeleb1, VoxCeleb2 and LibriSpeech. " - "Ideally, you should have all three. 
You should extract them as they are " - "after having downloaded them and put them in a same directory, e.g.:\n" - "-[datasets_root]\n" - " -LibriSpeech\n" - " -train-other-500\n" - " -VoxCeleb1\n" - " -wav\n" - " -vox1_meta.csv\n" - " -VoxCeleb2\n" - " -dev", - formatter_class=MyFormatter - ) - parser.add_argument("datasets_root", type=Path, help=\ - "Path to the directory containing your LibriSpeech/TTS and VoxCeleb datasets.") - parser.add_argument("-o", "--out_dir", type=Path, default=argparse.SUPPRESS, help=\ - "Path to the output directory that will contain the mel spectrograms. If left out, " - "defaults to /SV2TTS/encoder/") - parser.add_argument("-d", "--datasets", type=str, - default="librispeech_other,voxceleb1,voxceleb2", help=\ - "Comma-separated list of the name of the datasets you want to preprocess. Only the train " - "set of these datasets will be used. Possible names: librispeech_other, voxceleb1, " - "voxceleb2.") - parser.add_argument("-s", "--skip_existing", action="store_true", help=\ - "Whether to skip existing output files with the same name. Useful if this script was " - "interrupted.") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - args = parser.parse_args() - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again. If installation fails, " - "use --no_trim to disable this error message.") - del args.no_trim - - # Process the arguments - args.datasets = args.datasets.split(",") - if not hasattr(args, "out_dir"): - args.out_dir = args.datasets_root.joinpath("SV2TTS", "encoder") - assert args.datasets_root.exists() - args.out_dir.mkdir(exist_ok=True, parents=True) - - # Preprocess the datasets - print_args(args, parser) - preprocess_func = { - "librispeech_other": preprocess_librispeech, - "voxceleb1": preprocess_voxceleb1, - "voxceleb2": preprocess_voxceleb2, - } - args = vars(args) - for dataset in args.pop("datasets"): - print("Preprocessing %s" % dataset) - preprocess_func[dataset](**args) diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/configs/__init__.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/generate_facerender_batch.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/generate_facerender_batch.py deleted file mode 100644 index a821a6ece2fcff83c288a0989097d863cfec3dd1..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/generate_facerender_batch.py +++ /dev/null @@ -1,135 +0,0 @@ -import os -import numpy as np -from PIL import Image -from skimage import io, img_as_float32, transform -import torch -import scipy.io as scio - -def get_facerender_data(coeff_path, pic_path, first_coeff_path, audio_path, - batch_size, input_yaw_list=None, input_pitch_list=None, input_roll_list=None, - expression_scale=1.0, still_mode = False, preprocess='crop', size = 256, facemodel='facevid2vid'): - - semantic_radius = 13 - video_name = os.path.splitext(os.path.split(coeff_path)[-1])[0] - txt_path = os.path.splitext(coeff_path)[0] - - data={} - - img1 = Image.open(pic_path) - source_image = np.array(img1) - 
source_image = img_as_float32(source_image) - source_image = transform.resize(source_image, (size, size, 3)) - source_image = source_image.transpose((2, 0, 1)) - source_image_ts = torch.FloatTensor(source_image).unsqueeze(0) - source_image_ts = source_image_ts.repeat(batch_size, 1, 1, 1) - data['source_image'] = source_image_ts - - source_semantics_dict = scio.loadmat(first_coeff_path) - generated_dict = scio.loadmat(coeff_path) - - if 'full' not in preprocess.lower() and facemodel != 'pirender': - source_semantics = source_semantics_dict['coeff_3dmm'][:1,:70] #1 70 - generated_3dmm = generated_dict['coeff_3dmm'][:,:70] - else: - source_semantics = source_semantics_dict['coeff_3dmm'][:1,:73] #1 70 - generated_3dmm = generated_dict['coeff_3dmm'][:,:70] - - source_semantics_new = transform_semantic_1(source_semantics, semantic_radius) - source_semantics_ts = torch.FloatTensor(source_semantics_new).unsqueeze(0) - source_semantics_ts = source_semantics_ts.repeat(batch_size, 1, 1) - data['source_semantics'] = source_semantics_ts - - # target - generated_3dmm[:, :64] = generated_3dmm[:, :64] * expression_scale - - if 'full' in preprocess.lower() or facemodel == 'pirender': - generated_3dmm = np.concatenate([generated_3dmm, np.repeat(source_semantics[:,70:], generated_3dmm.shape[0], axis=0)], axis=1) - - if still_mode: - generated_3dmm[:, 64:] = np.repeat(source_semantics[:, 64:], generated_3dmm.shape[0], axis=0) - - with open(txt_path+'.txt', 'w') as f: - for coeff in generated_3dmm: - for i in coeff: - f.write(str(i)[:7] + ' '+'\t') - f.write('\n') - - target_semantics_list = [] - frame_num = generated_3dmm.shape[0] - data['frame_num'] = frame_num - for frame_idx in range(frame_num): - target_semantics = transform_semantic_target(generated_3dmm, frame_idx, semantic_radius) - target_semantics_list.append(target_semantics) - - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - target_semantics_list.append(target_semantics) - - target_semantics_np = np.array(target_semantics_list) #frame_num 70 semantic_radius*2+1 - target_semantics_np = target_semantics_np.reshape(batch_size, -1, target_semantics_np.shape[-2], target_semantics_np.shape[-1]) - data['target_semantics_list'] = torch.FloatTensor(target_semantics_np) - data['video_name'] = video_name - data['audio_path'] = audio_path - - if input_yaw_list is not None: - yaw_c_seq = gen_camera_pose(input_yaw_list, frame_num, batch_size) - data['yaw_c_seq'] = torch.FloatTensor(yaw_c_seq) - if input_pitch_list is not None: - pitch_c_seq = gen_camera_pose(input_pitch_list, frame_num, batch_size) - data['pitch_c_seq'] = torch.FloatTensor(pitch_c_seq) - if input_roll_list is not None: - roll_c_seq = gen_camera_pose(input_roll_list, frame_num, batch_size) - data['roll_c_seq'] = torch.FloatTensor(roll_c_seq) - - return data - -def transform_semantic_1(semantic, semantic_radius): - semantic_list = [semantic for i in range(0, semantic_radius*2+1)] - coeff_3dmm = np.concatenate(semantic_list, 0) - return coeff_3dmm.transpose(1,0) - -def transform_semantic_target(coeff_3dmm, frame_index, semantic_radius): - num_frames = coeff_3dmm.shape[0] - seq = list(range(frame_index- semantic_radius, frame_index + semantic_radius+1)) - index = [ min(max(item, 0), num_frames-1) for item in seq ] - coeff_3dmm_g = coeff_3dmm[index, :] - return coeff_3dmm_g.transpose(1,0) - -def gen_camera_pose(camera_degree_list, frame_num, batch_size): - - new_degree_list = [] - if len(camera_degree_list) == 1: - for _ in range(frame_num): - 
new_degree_list.append(camera_degree_list[0]) - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - new_degree_list.append(new_degree_list[-1]) - new_degree_np = np.array(new_degree_list).reshape(batch_size, -1) - return new_degree_np - - degree_sum = 0. - for i, degree in enumerate(camera_degree_list[1:]): - degree_sum += abs(degree-camera_degree_list[i]) - - degree_per_frame = degree_sum/(frame_num-1) - for i, degree in enumerate(camera_degree_list[1:]): - degree_last = camera_degree_list[i] - degree_step = degree_per_frame * abs(degree-degree_last)/(degree-degree_last) - new_degree_list = new_degree_list + list(np.arange(degree_last, degree, degree_step)) - if len(new_degree_list) > frame_num: - new_degree_list = new_degree_list[:frame_num] - elif len(new_degree_list) < frame_num: - for _ in range(frame_num-len(new_degree_list)): - new_degree_list.append(new_degree_list[-1]) - print(len(new_degree_list)) - print(frame_num) - - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - new_degree_list.append(new_degree_list[-1]) - new_degree_np = np.array(new_degree_list).reshape(batch_size, -1) - return new_degree_np - diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/scripts/test.sh b/spaces/kevinwang676/ChatGLM2-SadTalker/scripts/test.sh deleted file mode 100644 index bcfecfde94951c8feec231c14c30a685674a284a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/scripts/test.sh +++ /dev/null @@ -1,21 +0,0 @@ -# ### some test command before commit. -# python inference.py --preprocess crop --size 256 -# python inference.py --preprocess crop --size 512 - -# python inference.py --preprocess extcrop --size 256 -# python inference.py --preprocess extcrop --size 512 - -# python inference.py --preprocess resize --size 256 -# python inference.py --preprocess resize --size 512 - -# python inference.py --preprocess full --size 256 -# python inference.py --preprocess full --size 512 - -# python inference.py --preprocess extfull --size 256 -# python inference.py --preprocess extfull --size 512 - -python inference.py --preprocess full --size 256 --enhancer gfpgan -python inference.py --preprocess full --size 512 --enhancer gfpgan - -python inference.py --preprocess full --size 256 --enhancer gfpgan --still -python inference.py --preprocess full --size 512 --enhancer gfpgan --still diff --git a/spaces/kevinwang676/VoiceChanger/src/utils/videoio.py b/spaces/kevinwang676/VoiceChanger/src/utils/videoio.py deleted file mode 100644 index 08bfbdd7d4be97dc17fea4ad7b2733e9eb0ef975..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/src/utils/videoio.py +++ /dev/null @@ -1,41 +0,0 @@ -import shutil -import uuid - -import os - -import cv2 - -def load_video_to_cv2(input_path): - video_stream = cv2.VideoCapture(input_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - full_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - full_frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)) - return full_frames - -def save_video_with_watermark(video, audio, save_path, watermark=False): - temp_file = str(uuid.uuid4())+'.mp4' - cmd = r'ffmpeg -y -hide_banner -loglevel error -i "%s" -i "%s" -vcodec copy "%s"' % (video, audio, temp_file) - os.system(cmd) - - if watermark is False: - shutil.move(temp_file, save_path) - else: - # watermark - try: - ##### check if stable-diffusion-webui - import webui - from modules 
import paths - watarmark_path = paths.script_path+"/extensions/SadTalker/docs/sadtalker_logo.png" - except: - # get the root path of sadtalker. - dir_path = os.path.dirname(os.path.realpath(__file__)) - watarmark_path = dir_path+"/../../docs/sadtalker_logo.png" - - cmd = r'ffmpeg -y -hide_banner -loglevel error -i "%s" -i "%s" -filter_complex "[1]scale=100:-1[wm];[0][wm]overlay=(main_w-overlay_w)-10:10" "%s"' % (temp_file, watarmark_path, save_path) - os.system(cmd) - os.remove(temp_file) \ No newline at end of file diff --git a/spaces/kevinwang676/vits-fast-finetuning-pcr/transforms.py b/spaces/kevinwang676/vits-fast-finetuning-pcr/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/vits-fast-finetuning-pcr/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - 
return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = 
input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_mlsd.py b/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_mlsd.py deleted file mode 100644 index 6c5a33557e1792f7b977818460e112cc7acc03cc..0000000000000000000000000000000000000000 --- a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_mlsd.py +++ /dev/null @@ -1,173 +0,0 @@ -import gradio as gr -import torch -from controlnet_aux import MLSDdetector -from diffusers import ControlNetModel, StableDiffusionControlNetPipeline -from PIL import Image - -from diffusion_webui.utils.model_list import stable_model_list -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_LIST, - get_scheduler_list, -) - - -class StableDiffusionControlNetMLSDGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path, controlnet_model_path, scheduler): - if self.pipe is None: - controlnet = ControlNetModel.from_pretrained( - controlnet_model_path, torch_dtype=torch.float16 - ) - - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - pretrained_model_name_or_path=stable_model_path, - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - - self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def controlnet_mlsd(self, image_path: str): - mlsd = MLSDdetector.from_pretrained("lllyasviel/ControlNet") - - image = Image.open(image_path) - image = mlsd(image) - - return image - - def generate_image( - self, - image_path: str, - model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - scheduler: str, - seed_generator: int, - ): - image = self.controlnet_mlsd(image_path=image_path) - - pipe = self.load_model( - stable_model_path=model_path, - controlnet_model_path="lllyasviel/sd-controlnet-mlsd", - scheduler=scheduler, - ) - - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - output = pipe( - prompt=prompt, - image=image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - controlnet_mlsd_image_file = gr.Image( - type="filepath", label="Image" - ) - - controlnet_mlsd_prompt = gr.Textbox( - lines=1, - show_label=False, - placeholder="Prompt", - ) - - controlnet_mlsd_negative_prompt = gr.Textbox( - lines=1, - show_label=False, - placeholder="Negative Prompt", - ) - - with gr.Row(): - with gr.Column(): - controlnet_mlsd_model_id = gr.Dropdown( - choices=stable_model_list, - value=stable_model_list[0], - label="Stable Model Id", - ) - controlnet_mlsd_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", 
- ) - controlnet_mlsd_num_inference_step = gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - - with gr.Row(): - with gr.Column(): - controlnet_mlsd_scheduler = gr.Dropdown( - choices=SCHEDULER_LIST, - value=SCHEDULER_LIST[0], - label="Scheduler", - ) - - controlnet_mlsd_seed_generator = gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed Generator", - ) - controlnet_mlsd_num_images_per_prompt = ( - gr.Slider( - minimum=1, - maximum=10, - step=1, - value=1, - label="Number Of Images", - ) - ) - - controlnet_mlsd_predict = gr.Button(value="Generator") - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - controlnet_mlsd_predict.click( - fn=StableDiffusionControlNetMLSDGenerator().generate_image, - inputs=[ - controlnet_mlsd_image_file, - controlnet_mlsd_model_id, - controlnet_mlsd_prompt, - controlnet_mlsd_negative_prompt, - controlnet_mlsd_num_images_per_prompt, - controlnet_mlsd_guidance_scale, - controlnet_mlsd_num_inference_step, - controlnet_mlsd_scheduler, - controlnet_mlsd_seed_generator, - ], - outputs=output_image, - ) diff --git a/spaces/kira4424/VITS-fast-fine-tuning/text/japanese.py b/spaces/kira4424/VITS-fast-fine-tuning/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/kira4424/VITS-fast-fine-tuning/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - 
marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/weight_init.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/weight_init.py deleted file mode 100644 index 287a1d0bffe26e023029d48634d9b761deda7ba4..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/weight_init.py +++ /dev/null @@ -1,684 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -import warnings - -import numpy as np -import torch -import torch.nn as nn -from torch import Tensor - -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg, get_logger, print_log - -INITIALIZERS = Registry('initializer') - - -def update_init_info(module, init_info): - """Update the `_params_init_info` in the module if the value of parameters - are changed. - - Args: - module (obj:`nn.Module`): The module of PyTorch with a user-defined - attribute `_params_init_info` which records the initialization - information. - init_info (str): The string that describes the initialization. 
- """ - assert hasattr( - module, - '_params_init_info'), f'Can not find `_params_init_info` in {module}' - for name, param in module.named_parameters(): - - assert param in module._params_init_info, ( - f'Find a new :obj:`Parameter` ' - f'named `{name}` during executing the ' - f'`init_weights` of ' - f'`{module.__class__.__name__}`. ' - f'Please do not add or ' - f'replace parameters during executing ' - f'the `init_weights`. ') - - # The parameter has been changed during executing the - # `init_weights` of module - mean_value = param.data.mean() - if module._params_init_info[param]['tmp_mean_value'] != mean_value: - module._params_init_info[param]['init_info'] = init_info - module._params_init_info[param]['tmp_mean_value'] = mean_value - - -def constant_init(module, val, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.constant_(module.weight, val) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def xavier_init(module, gain=1, bias=0, distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.xavier_uniform_(module.weight, gain=gain) - else: - nn.init.xavier_normal_(module.weight, gain=gain) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def normal_init(module, mean=0, std=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.normal_(module.weight, mean, std) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def trunc_normal_init(module: nn.Module, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - trunc_normal_(module.weight, mean, std, a, b) # type: ignore - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) # type: ignore - - -def uniform_init(module, a=0, b=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.uniform_(module.weight, a, b) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def kaiming_init(module, - a=0, - mode='fan_out', - nonlinearity='relu', - bias=0, - distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.kaiming_uniform_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - else: - nn.init.kaiming_normal_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def caffe2_xavier_init(module, bias=0): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - kaiming_init( - module, - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - bias=bias, - distribution='uniform') - - -def bias_init_with_prob(prior_prob): - """initialize conv/fc bias value according to a given probability value.""" - bias_init = float(-np.log((1 - prior_prob) / prior_prob)) - return bias_init - - -def _get_bases_name(m): - return [b.__name__ for b in m.__class__.__bases__] - - -class BaseInit(object): - - def __init__(self, *, bias=0, bias_prob=None, layer=None): - self.wholemodule = False - if not isinstance(bias, (int, float)): - 
raise TypeError(f'bias must be a number, but got a {type(bias)}') - - if bias_prob is not None: - if not isinstance(bias_prob, float): - raise TypeError(f'bias_prob type must be float, \ - but got {type(bias_prob)}') - - if layer is not None: - if not isinstance(layer, (str, list)): - raise TypeError(f'layer must be a str or a list of str, \ - but got a {type(layer)}') - else: - layer = [] - - if bias_prob is not None: - self.bias = bias_init_with_prob(bias_prob) - else: - self.bias = bias - self.layer = [layer] if isinstance(layer, str) else layer - - def _get_init_info(self): - info = f'{self.__class__.__name__}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Constant') -class ConstantInit(BaseInit): - """Initialize module parameters with constant values. - - Args: - val (int | float): the value to fill the weights in the module with - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, val, **kwargs): - super().__init__(**kwargs) - self.val = val - - def __call__(self, module): - - def init(m): - if self.wholemodule: - constant_init(m, self.val, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - constant_init(m, self.val, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: val={self.val}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Xavier') -class XavierInit(BaseInit): - r"""Initialize module parameters with values according to the method - described in `Understanding the difficulty of training deep feedforward - neural networks - Glorot, X. & Bengio, Y. (2010). - `_ - - Args: - gain (int | float): an optional scaling factor. Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` - or ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, gain=1, distribution='normal', **kwargs): - super().__init__(**kwargs) - self.gain = gain - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - xavier_init(m, self.gain, self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - xavier_init(m, self.gain, self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: gain={self.gain}, ' \ - f'distribution={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Normal') -class NormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`. - - Args: - mean (int | float):the mean of the normal distribution. Defaults to 0. 
- std (int | float): the standard deviation of the normal distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - - """ - - def __init__(self, mean=0, std=1, **kwargs): - super().__init__(**kwargs) - self.mean = mean - self.std = std - - def __call__(self, module): - - def init(m): - if self.wholemodule: - normal_init(m, self.mean, self.std, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - normal_init(m, self.mean, self.std, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: mean={self.mean},' \ - f' std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='TruncNormal') -class TruncNormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` with values - outside :math:`[a, b]`. - - Args: - mean (float): the mean of the normal distribution. Defaults to 0. - std (float): the standard deviation of the normal distribution. - Defaults to 1. - a (float): The minimum cutoff value. - b ( float): The maximum cutoff value. - bias (float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - - """ - - def __init__(self, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - **kwargs) -> None: - super().__init__(**kwargs) - self.mean = mean - self.std = std - self.a = a - self.b = b - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, b={self.b},' \ - f' mean={self.mean}, std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Uniform') -class UniformInit(BaseInit): - r"""Initialize module parameters with values drawn from the uniform - distribution :math:`\mathcal{U}(a, b)`. - - Args: - a (int | float): the lower bound of the uniform distribution. - Defaults to 0. - b (int | float): the upper bound of the uniform distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- """ - - def __init__(self, a=0, b=1, **kwargs): - super().__init__(**kwargs) - self.a = a - self.b = b - - def __call__(self, module): - - def init(m): - if self.wholemodule: - uniform_init(m, self.a, self.b, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - uniform_init(m, self.a, self.b, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a},' \ - f' b={self.b}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Kaiming') -class KaimingInit(BaseInit): - r"""Initialize module parameters with the values according to the method - described in `Delving deep into rectifiers: Surpassing human-level - performance on ImageNet classification - He, K. et al. (2015). - `_ - - Args: - a (int | float): the negative slope of the rectifier used after this - layer (only used with ``'leaky_relu'``). Defaults to 0. - mode (str): either ``'fan_in'`` or ``'fan_out'``. Choosing - ``'fan_in'`` preserves the magnitude of the variance of the weights - in the forward pass. Choosing ``'fan_out'`` preserves the - magnitudes in the backwards pass. Defaults to ``'fan_out'``. - nonlinearity (str): the non-linear function (`nn.functional` name), - recommended to use only with ``'relu'`` or ``'leaky_relu'`` . - Defaults to 'relu'. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` or - ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, - a=0, - mode='fan_out', - nonlinearity='relu', - distribution='normal', - **kwargs): - super().__init__(**kwargs) - self.a = a - self.mode = mode - self.nonlinearity = nonlinearity - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, mode={self.mode}, ' \ - f'nonlinearity={self.nonlinearity}, ' \ - f'distribution ={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Caffe2Xavier') -class Caffe2XavierInit(KaimingInit): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - def __init__(self, **kwargs): - super().__init__( - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - distribution='uniform', - **kwargs) - - def __call__(self, module): - super().__call__(module) - - -@INITIALIZERS.register_module(name='Pretrained') -class PretrainedInit(object): - """Initialize module by loading a pretrained model. - - Args: - checkpoint (str): the checkpoint file of the pretrained model should - be load. - prefix (str, optional): the prefix of a sub-module in the pretrained - model. 
it is for loading a part of the pretrained model to - initialize. For example, if we would like to only load the - backbone of a detector model, we can set ``prefix='backbone.'``. - Defaults to None. - map_location (str): map tensors into proper locations. - """ - - def __init__(self, checkpoint, prefix=None, map_location=None): - self.checkpoint = checkpoint - self.prefix = prefix - self.map_location = map_location - - def __call__(self, module): - from annotator.uniformer.mmcv.runner import (_load_checkpoint_with_prefix, load_checkpoint, - load_state_dict) - logger = get_logger('mmcv') - if self.prefix is None: - print_log(f'load model from: {self.checkpoint}', logger=logger) - load_checkpoint( - module, - self.checkpoint, - map_location=self.map_location, - strict=False, - logger=logger) - else: - print_log( - f'load {self.prefix} in model from: {self.checkpoint}', - logger=logger) - state_dict = _load_checkpoint_with_prefix( - self.prefix, self.checkpoint, map_location=self.map_location) - load_state_dict(module, state_dict, strict=False, logger=logger) - - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: load from {self.checkpoint}' - return info - - -def _initialize(module, cfg, wholemodule=False): - func = build_from_cfg(cfg, INITIALIZERS) - # wholemodule flag is for override mode, there is no layer key in override - # and initializer will give init values for the whole module with the name - # in override. - func.wholemodule = wholemodule - func(module) - - -def _initialize_override(module, override, cfg): - if not isinstance(override, (dict, list)): - raise TypeError(f'override must be a dict or a list of dict, \ - but got {type(override)}') - - override = [override] if isinstance(override, dict) else override - - for override_ in override: - - cp_override = copy.deepcopy(override_) - name = cp_override.pop('name', None) - if name is None: - raise ValueError('`override` must contain the key "name",' - f'but got {cp_override}') - # if override only has name key, it means use args in init_cfg - if not cp_override: - cp_override.update(cfg) - # if override has name key and other args except type key, it will - # raise error - elif 'type' not in cp_override.keys(): - raise ValueError( - f'`override` need "type" key, but got {cp_override}') - - if hasattr(module, name): - _initialize(getattr(module, name), cp_override, wholemodule=True) - else: - raise RuntimeError(f'module did not have attribute {name}, ' - f'but init_cfg is {cp_override}.') - - -def initialize(module, init_cfg): - """Initialize a module. - - Args: - module (``torch.nn.Module``): the module will be initialized. - init_cfg (dict | list[dict]): initialization configuration dict to - define initializer. OpenMMLab has implemented 6 initializers - including ``Constant``, ``Xavier``, ``Normal``, ``Uniform``, - ``Kaiming``, and ``Pretrained``. 
- Example: - >>> module = nn.Linear(2, 3, bias=True) - >>> init_cfg = dict(type='Constant', layer='Linear', val =1 , bias =2) - >>> initialize(module, init_cfg) - - >>> module = nn.Sequential(nn.Conv1d(3, 1, 3), nn.Linear(1,2)) - >>> # define key ``'layer'`` for initializing layer with different - >>> # configuration - >>> init_cfg = [dict(type='Constant', layer='Conv1d', val=1), - dict(type='Constant', layer='Linear', val=2)] - >>> initialize(module, init_cfg) - - >>> # define key``'override'`` to initialize some specific part in - >>> # module - >>> class FooNet(nn.Module): - >>> def __init__(self): - >>> super().__init__() - >>> self.feat = nn.Conv2d(3, 16, 3) - >>> self.reg = nn.Conv2d(16, 10, 3) - >>> self.cls = nn.Conv2d(16, 5, 3) - >>> model = FooNet() - >>> init_cfg = dict(type='Constant', val=1, bias=2, layer='Conv2d', - >>> override=dict(type='Constant', name='reg', val=3, bias=4)) - >>> initialize(model, init_cfg) - - >>> model = ResNet(depth=50) - >>> # Initialize weights with the pretrained model. - >>> init_cfg = dict(type='Pretrained', - checkpoint='torchvision://resnet50') - >>> initialize(model, init_cfg) - - >>> # Initialize weights of a sub-module with the specific part of - >>> # a pretrained model by using "prefix". - >>> url = 'http://download.openmmlab.com/mmdetection/v2.0/retinanet/'\ - >>> 'retinanet_r50_fpn_1x_coco/'\ - >>> 'retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth' - >>> init_cfg = dict(type='Pretrained', - checkpoint=url, prefix='backbone.') - """ - if not isinstance(init_cfg, (dict, list)): - raise TypeError(f'init_cfg must be a dict or a list of dict, \ - but got {type(init_cfg)}') - - if isinstance(init_cfg, dict): - init_cfg = [init_cfg] - - for cfg in init_cfg: - # should deeply copy the original config because cfg may be used by - # other modules, e.g., one init_cfg shared by multiple bottleneck - # blocks, the expected cfg will be changed after pop and will change - # the initialization behavior of other modules - cp_cfg = copy.deepcopy(cfg) - override = cp_cfg.pop('override', None) - _initialize(module, cp_cfg) - - if override is not None: - cp_cfg.pop('layer', None) - _initialize_override(module, override, cp_cfg) - else: - # All attributes in module have same initialization. - pass - - -def _no_grad_trunc_normal_(tensor: Tensor, mean: float, std: float, a: float, - b: float) -> Tensor: - # Method based on - # https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - # Modified from - # https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - lower = norm_cdf((a - mean) / std) - upper = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [lower, upper], then translate - # to [2lower-1, 2upper-1]. 
- tensor.uniform_(2 * lower - 1, 2 * upper - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor: Tensor, - mean: float = 0., - std: float = 1., - a: float = -2., - b: float = 2.) -> Tensor: - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - - Modified from - https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - - Args: - tensor (``torch.Tensor``): an n-dimensional `torch.Tensor`. - mean (float): the mean of the normal distribution. - std (float): the standard deviation of the normal distribution. - a (float): the minimum cutoff value. - b (float): the maximum cutoff value. - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv_custom/checkpoint.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv_custom/checkpoint.py deleted file mode 100644 index 19b87fef0a52d31babcdb3edb8f3089b6420173f..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv_custom/checkpoint.py +++ /dev/null @@ -1,500 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. -import io -import os -import os.path as osp -import pkgutil -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory - -import torch -import torchvision -from torch.optim import Optimizer -from torch.utils import model_zoo -from torch.nn import functional as F - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.fileio import FileClient -from annotator.uniformer.mmcv.fileio import load as load_file -from annotator.uniformer.mmcv.parallel import is_module_wrapper -from annotator.uniformer.mmcv.utils import mkdir_or_exist -from annotator.uniformer.mmcv.runner import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def _get_mmcv_home(): - mmcv_home = os.path.expanduser( - os.getenv( - ENV_MMCV_HOME, - os.path.join( - os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv'))) - - mkdir_or_exist(mmcv_home) - return mmcv_home - - -def load_state_dict(module, state_dict, strict=False, logger=None): - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. - Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. - strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. 
- """ - unexpected_keys = [] - all_missing_keys = [] - err_msg = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - load = None # break load->load reference cycle - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - -def load_url_dist(url, model_dir=None): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - checkpoint = model_zoo.load_url(url, model_dir=model_dir) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - checkpoint = model_zoo.load_url(url, model_dir=model_dir) - return checkpoint - - -def load_pavimodel_dist(model_path, map_location=None): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - try: - from pavi import modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load(downloaded_file, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load( - downloaded_file, map_location=map_location) - return checkpoint - - -def load_fileclient_dist(filename, backend, map_location): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - allowed_backends = ['ceph'] - if backend not in allowed_backends: - raise ValueError(f'Load from Backend {backend} is not supported.') - if rank == 0: - fileclient = FileClient(backend=backend) - buffer = io.BytesIO(fileclient.get(filename)) - checkpoint = torch.load(buffer, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - fileclient = 
FileClient(backend=backend) - buffer = io.BytesIO(fileclient.get(filename)) - checkpoint = torch.load(buffer, map_location=map_location) - return checkpoint - - -def get_torchvision_models(): - model_urls = dict() - for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__): - if ispkg: - continue - _zoo = import_module(f'torchvision.models.{name}') - if hasattr(_zoo, 'model_urls'): - _urls = getattr(_zoo, 'model_urls') - model_urls.update(_urls) - return model_urls - - -def get_external_models(): - mmcv_home = _get_mmcv_home() - default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json') - default_urls = load_file(default_json_path) - assert isinstance(default_urls, dict) - external_json_path = osp.join(mmcv_home, 'open_mmlab.json') - if osp.exists(external_json_path): - external_urls = load_file(external_json_path) - assert isinstance(external_urls, dict) - default_urls.update(external_urls) - - return default_urls - - -def get_mmcls_models(): - mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json') - mmcls_urls = load_file(mmcls_json_path) - - return mmcls_urls - - -def get_deprecated_model_names(): - deprecate_json_path = osp.join(mmcv.__path__[0], - 'model_zoo/deprecated.json') - deprecate_urls = load_file(deprecate_json_path) - assert isinstance(deprecate_urls, dict) - - return deprecate_urls - - -def _process_mmcls_checkpoint(checkpoint): - state_dict = checkpoint['state_dict'] - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k.startswith('backbone.'): - new_state_dict[k[9:]] = v - new_checkpoint = dict(state_dict=new_state_dict) - - return new_checkpoint - - -def _load_checkpoint(filename, map_location=None): - """Load checkpoint from somewhere (modelzoo, file, url). - - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str | None): Same as :func:`torch.load`. Default: None. - - Returns: - dict | OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. 
- """ - if filename.startswith('modelzoo://'): - warnings.warn('The URL scheme of "modelzoo://" is deprecated, please ' - 'use "torchvision://" instead') - model_urls = get_torchvision_models() - model_name = filename[11:] - checkpoint = load_url_dist(model_urls[model_name]) - elif filename.startswith('torchvision://'): - model_urls = get_torchvision_models() - model_name = filename[14:] - checkpoint = load_url_dist(model_urls[model_name]) - elif filename.startswith('open-mmlab://'): - model_urls = get_external_models() - model_name = filename[13:] - deprecated_urls = get_deprecated_model_names() - if model_name in deprecated_urls: - warnings.warn(f'open-mmlab://{model_name} is deprecated in favor ' - f'of open-mmlab://{deprecated_urls[model_name]}') - model_name = deprecated_urls[model_name] - model_url = model_urls[model_name] - # check if is url - if model_url.startswith(('http://', 'https://')): - checkpoint = load_url_dist(model_url) - else: - filename = osp.join(_get_mmcv_home(), model_url) - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - elif filename.startswith('mmcls://'): - model_urls = get_mmcls_models() - model_name = filename[8:] - checkpoint = load_url_dist(model_urls[model_name]) - checkpoint = _process_mmcls_checkpoint(checkpoint) - elif filename.startswith(('http://', 'https://')): - checkpoint = load_url_dist(filename) - elif filename.startswith('pavi://'): - model_path = filename[7:] - checkpoint = load_pavimodel_dist(model_path, map_location=map_location) - elif filename.startswith('s3://'): - checkpoint = load_fileclient_dist( - filename, backend='ceph', map_location=map_location) - else: - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -def load_checkpoint(model, - filename, - map_location='cpu', - strict=False, - logger=None): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - checkpoint = _load_checkpoint(filename, map_location) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - # strip prefix of state_dict - if list(state_dict.keys())[0].startswith('module.'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - - # for MoBY, load model of online branch - if sorted(list(state_dict.keys()))[0].startswith('encoder'): - state_dict = {k.replace('encoder.', ''): v for k, v in state_dict.items() if k.startswith('encoder.')} - - # reshape absolute position embedding - if state_dict.get('absolute_pos_embed') is not None: - absolute_pos_embed = state_dict['absolute_pos_embed'] - N1, L, C1 = absolute_pos_embed.size() - N2, C2, H, W = model.absolute_pos_embed.size() - if N1 != N2 or C1 != C2 or L != H*W: - logger.warning("Error in loading absolute_pos_embed, pass") - else: - state_dict['absolute_pos_embed'] = absolute_pos_embed.view(N2, H, W, C2).permute(0, 3, 1, 2) - - # interpolate position bias table if needed - relative_position_bias_table_keys = [k for k in state_dict.keys() if "relative_position_bias_table" in k] - for table_key in relative_position_bias_table_keys: - table_pretrained = state_dict[table_key] - table_current = model.state_dict()[table_key] - L1, nH1 = table_pretrained.size() - L2, nH2 = table_current.size() - if nH1 != nH2: - logger.warning(f"Error in loading {table_key}, pass") - else: - if L1 != L2: - S1 = int(L1 ** 0.5) - S2 = int(L2 ** 0.5) - table_pretrained_resized = F.interpolate( - table_pretrained.permute(1, 0).view(1, nH1, S1, S1), - size=(S2, S2), mode='bicubic') - state_dict[table_key] = table_pretrained_resized.view(nH2, L2).permute(1, 0) - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - - -def weights_to_cpu(state_dict): - """Copy a model state_dict to cpu. - - Args: - state_dict (OrderedDict): Model weights on GPU. - - Returns: - OrderedDict: Model weights on GPU. - """ - state_dict_cpu = OrderedDict() - for key, val in state_dict.items(): - state_dict_cpu[key] = val.cpu() - return state_dict_cpu - - -def _save_to_state_dict(module, destination, prefix, keep_vars): - """Saves module state to `destination` dictionary. - - This method is modified from :meth:`torch.nn.Module._save_to_state_dict`. - - Args: - module (nn.Module): The module to generate state_dict. - destination (dict): A dict where state will be stored. - prefix (str): The prefix for parameters and buffers used in this - module. - """ - for name, param in module._parameters.items(): - if param is not None: - destination[prefix + name] = param if keep_vars else param.detach() - for name, buf in module._buffers.items(): - # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d - if buf is not None: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def get_state_dict(module, destination=None, prefix='', keep_vars=False): - """Returns a dictionary containing a whole state of the module. - - Both parameters and persistent buffers (e.g. running averages) are - included. Keys are corresponding parameter and buffer names. 
- - This method is modified from :meth:`torch.nn.Module.state_dict` to - recursively check parallel module in case that the model has a complicated - structure, e.g., nn.Module(nn.Module(DDP)). - - Args: - module (nn.Module): The module to generate state_dict. - destination (OrderedDict): Returned dict for the state of the - module. - prefix (str): Prefix of the key. - keep_vars (bool): Whether to keep the variable property of the - parameters. Default: False. - - Returns: - dict: A dictionary containing a whole state of the module. - """ - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - - # below is the same as torch.nn.Module.state_dict() - if destination is None: - destination = OrderedDict() - destination._metadata = OrderedDict() - destination._metadata[prefix[:-1]] = local_metadata = dict( - version=module._version) - _save_to_state_dict(module, destination, prefix, keep_vars) - for name, child in module._modules.items(): - if child is not None: - get_state_dict( - child, destination, prefix + name + '.', keep_vars=keep_vars) - for hook in module._state_dict_hooks.values(): - hook_result = hook(module, destination, prefix, local_metadata) - if hook_result is not None: - destination = hook_result - return destination - - -def save_checkpoint(model, filename, optimizer=None, meta=None): - """Save checkpoint to file. - - The checkpoint will have 3 fields: ``meta``, ``state_dict`` and - ``optimizer``. By default ``meta`` will contain version and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. 
- """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - if filename.startswith('pavi://'): - try: - from pavi import modelcloud - from pavi.exception import NodeNotFoundError - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - mmcv.mkdir_or_exist(osp.dirname(filename)) - # immediately flush buffer - with open(filename, 'wb') as f: - torch.save(checkpoint, f) - f.flush() \ No newline at end of file diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/encoder/align_all_parallel.py b/spaces/kukuhtw/VToonify/vtoonify/model/encoder/align_all_parallel.py deleted file mode 100644 index 05b520cd6590dc02ee533d3f0d69e6a364447d9f..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/encoder/align_all_parallel.py +++ /dev/null @@ -1,217 +0,0 @@ -""" -brief: face alignment with FFHQ method (https://github.com/NVlabs/ffhq-dataset) -author: lzhbrian (https://lzhbrian.me) -date: 2020.1.5 -note: code is heavily borrowed from - https://github.com/NVlabs/ffhq-dataset - http://dlib.net/face_landmark_detection.py.html - -requirements: - apt install cmake - conda install Pillow numpy scipy - pip install dlib - # download face landmark model from: - # http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 -""" -from argparse import ArgumentParser -import time -import numpy as np -import PIL -import PIL.Image -import os -import scipy -import scipy.ndimage -import dlib -import multiprocessing as mp -import math - -#from configs.paths_config import model_paths -SHAPE_PREDICTOR_PATH = 'shape_predictor_68_face_landmarks.dat'#model_paths["shape_predictor"] - - -def get_landmark(filepath, predictor): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - detector = dlib.get_frontal_face_detector() - if type(filepath) == str: - img = dlib.load_rgb_image(filepath) - else: - img = filepath - dets = detector(img, 1) - - if len(dets) == 0: - print('Error: no face detected!') - return None - - shape = None - for k, d in enumerate(dets): - shape = predictor(img, d) - - if shape is None: - print('Error: No face detected! If you are sure there are faces in your input, you may rerun the code several times until the face is detected. 
Sometimes the detector is unstable.') - t = list(shape.parts()) - a = [] - for tt in t: - a.append([tt.x, tt.y]) - lm = np.array(a) - return lm - - -def align_face(filepath, predictor): - """ - :param filepath: str - :return: PIL Image - """ - - lm = get_landmark(filepath, predictor) - if lm is None: - return None - - lm_chin = lm[0: 17] # left-right - lm_eyebrow_left = lm[17: 22] # left-right - lm_eyebrow_right = lm[22: 27] # left-right - lm_nose = lm[27: 31] # top-down - lm_nostrils = lm[31: 36] # top-down - lm_eye_left = lm[36: 42] # left-clockwise - lm_eye_right = lm[42: 48] # left-clockwise - lm_mouth_outer = lm[48: 60] # left-clockwise - lm_mouth_inner = lm[60: 68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - qsize = np.hypot(*x) * 2 - - # read image - if type(filepath) == str: - img = PIL.Image.open(filepath) - else: - img = PIL.Image.fromarray(filepath) - - output_size = 256 - transform_size = 256 - enable_padding = True - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, PIL.Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), - min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - blur = qsize * 0.02 - img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - quad += pad[:2] - - # Transform. - img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR) - if output_size < transform_size: - img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS) - - # Save aligned image. 
- return img - - -def chunks(lst, n): - """Yield successive n-sized chunks from lst.""" - for i in range(0, len(lst), n): - yield lst[i:i + n] - - -def extract_on_paths(file_paths): - predictor = dlib.shape_predictor(SHAPE_PREDICTOR_PATH) - pid = mp.current_process().name - print('\t{} is starting to extract on #{} images'.format(pid, len(file_paths))) - tot_count = len(file_paths) - count = 0 - for file_path, res_path in file_paths: - count += 1 - if count % 100 == 0: - print('{} done with {}/{}'.format(pid, count, tot_count)) - try: - res = align_face(file_path, predictor) - res = res.convert('RGB') - os.makedirs(os.path.dirname(res_path), exist_ok=True) - res.save(res_path) - except Exception: - continue - print('\tDone!') - - -def parse_args(): - parser = ArgumentParser(add_help=False) - parser.add_argument('--num_threads', type=int, default=1) - parser.add_argument('--root_path', type=str, default='') - args = parser.parse_args() - return args - - -def run(args): - root_path = args.root_path - out_crops_path = root_path + '_crops' - if not os.path.exists(out_crops_path): - os.makedirs(out_crops_path, exist_ok=True) - - file_paths = [] - for root, dirs, files in os.walk(root_path): - for file in files: - file_path = os.path.join(root, file) - fname = os.path.join(out_crops_path, os.path.relpath(file_path, root_path)) - res_path = '{}.jpg'.format(os.path.splitext(fname)[0]) - if os.path.splitext(file_path)[1] == '.txt' or os.path.exists(res_path): - continue - file_paths.append((file_path, res_path)) - - file_chunks = list(chunks(file_paths, int(math.ceil(len(file_paths) / args.num_threads)))) - print(len(file_chunks)) - pool = mp.Pool(args.num_threads) - print('Running on {} paths\nHere we goooo'.format(len(file_paths))) - tic = time.time() - pool.map(extract_on_paths, file_chunks) - toc = time.time() - print('Mischief managed in {}s'.format(toc - tic)) - - -if __name__ == '__main__': - args = parse_args() - run(args) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py deleted file mode 100644 index 29c802bcc83b3ca35bbd0e6521f47a368b5f9092..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py +++ /dev/null @@ -1,5 +0,0 @@ -import sys -from fontTools.mtiLib import main - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_tkcairo.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_tkcairo.py deleted file mode 100644 index a6951c03c65a328bc9c46a369aed65ad7df259bb..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_tkcairo.py +++ /dev/null @@ -1,26 +0,0 @@ -import sys - -import numpy as np - -from . 
import _backend_tk -from .backend_cairo import cairo, FigureCanvasCairo -from ._backend_tk import _BackendTk, FigureCanvasTk - - -class FigureCanvasTkCairo(FigureCanvasCairo, FigureCanvasTk): - def draw(self): - width = int(self.figure.bbox.width) - height = int(self.figure.bbox.height) - surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height) - self._renderer.set_context(cairo.Context(surface)) - self._renderer.dpi = self.figure.dpi - self.figure.draw(self._renderer) - buf = np.reshape(surface.get_data(), (height, width, 4)) - _backend_tk.blit( - self._tkphoto, buf, - (2, 1, 0, 3) if sys.byteorder == "little" else (1, 2, 3, 0)) - - -@_BackendTk.export -class _BackendTkCairo(_BackendTk): - FigureCanvas = FigureCanvasTkCairo diff --git a/spaces/lafi23333/aikomori/inference/slicer.py b/spaces/lafi23333/aikomori/inference/slicer.py deleted file mode 100644 index b05840bcf6bdced0b6e2adbecb1a1dd5b3dee462..0000000000000000000000000000000000000000 --- a/spaces/lafi23333/aikomori/inference/slicer.py +++ /dev/null @@ -1,142 +0,0 @@ -import librosa -import torch -import torchaudio - - -class Slicer: - def __init__(self, - sr: int, - threshold: float = -40., - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000): - if not min_length >= min_interval >= hop_size: - raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size') - if not max_sil_kept >= hop_size: - raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size') - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.) - self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)] - else: - return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = librosa.to_mono(waveform) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. 
- if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start: i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin() - pos += i - self.max_sil_kept - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if silence_start is not None and total_frames - silence_start >= self.min_interval: - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. - if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - else: - chunks = [] - # 第一段静音并非从头开始,补上有声片段 - if sil_tags[0][0]: - chunks.append( - {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, - "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"}) - # 标识所有静音片段 - chunks.append({"slice": True, - "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] * self.hop_size < len(waveform): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000): - audio, sr = librosa.load(audio_path, sr=None) - slicer = Slicer( - sr=sr, - threshold=db_thresh, - min_length=min_len - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - if tag[0] != tag[1]: - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr diff --git a/spaces/lain-iwakura/lainchan-proxy/Dockerfile b/spaces/lain-iwakura/lainchan-proxy/Dockerfile deleted file mode 100644 index 3b1a3220c963a8c44a720d3890327f04ab88322b..0000000000000000000000000000000000000000 --- a/spaces/lain-iwakura/lainchan-proxy/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -from python:3.10 - -workdir /app - -copy requirements.txt . -run pip install -r requirements.txt - -run mkdir /.cache && chmod 777 /.cache -run mkdir .chroma && chmod 777 .chroma -copy . . 
-expose 7860 - -cmd ["python", "fallback.py"] \ No newline at end of file diff --git a/spaces/lakshmi324/complaintBox/app.py b/spaces/lakshmi324/complaintBox/app.py deleted file mode 100644 index fb7d0b845d16360d2337380d27b4a8470857dc45..0000000000000000000000000000000000000000 --- a/spaces/lakshmi324/complaintBox/app.py +++ /dev/null @@ -1,110 +0,0 @@ -import tweepy -import time -import pandas as pd -from transformers import pipeline -import matplotlib.pyplot as plt -import gradio as gr -import os - -def twitter_auth(consumerkey,consumersecret): - consumer_key = consumerkey - consumer_secret = consumersecret - - auth = tweepy.AppAuthHandler(consumer_key,consumer_secret) - - api = tweepy.API(auth,wait_on_rate_limit= True) - return api - -"""## Helper function for handling ratelimit and pagination""" - -def limit_handled(cursor): - """ - Function takes the cursor and returns tweets - """ - while True: - try: - yield cursor.next() - except tweepy.errors.TweepyException: - print('reached rate limit, sleeping for > 15 mins') - time.sleep(15*61) - except StopIteration: - break - -def tweets_collector(query,count): - consumerkey = os.environ.get('consumerkey') - consumersecret = os.environ.get('consumersecret') - api = twitter_auth(consumerkey,consumersecret) - query = query +' -filter:retweets' - search = limit_handled(tweepy.Cursor(api.search_tweets,q = query,tweet_mode = 'extended',lang ='en',result_type ='recent').items(count)) - sentiment_analysis = pipeline(model = "finiteautomata/bertweet-base-sentiment-analysis") - tweets = [] - - for tweet in search: - try: - content = tweet.full_text - sentiment = sentiment_analysis(content) - tweets.append({'tweet' : content ,'sentiment': sentiment[0]['label']}) - except: - pass - return tweets - -"""## Run sentiment Analysis""" - -#tweets = tweets_collector(query,count) -#df = pd.DataFrame(tweets) - -import pandas as pd - -pd.set_option('max_colwidth',None) -pd.set_option('display.width',3000) - -#import matplotlib.pyplot as plt - -#sentiment_counts = df.groupby(['sentiment']).size() - -#fig = plt.figure(figsize = (6,6),dpi = 100) -#ax = plt.subplot(111) -#sentiment_counts.plot.pie(ax = ax,autopct = '%1.f%%',startangle = 270,fontsize = 12,label = "") - -def complaint_analysis(query,count): - tweets = tweets_collector(query,count) - df = pd.DataFrame(tweets) - from wordcloud import WordCloud - from wordcloud import STOPWORDS - sentiment_counts = df.groupby(['sentiment']).size() - fig = plt.figure(figsize = (6,6),dpi = 100) - ax = plt.subplot(111) - sentiment_counts.plot.pie(ax = ax,autopct = '%1.f%%',startangle = 270,fontsize = 12,label = "") - plt.savefig('Overall_satisfaction.png') - - positive_tweets = df['tweet'][df['sentiment'] == 'POS'] - stop_words = ["https","co","RT","ola_supports","ola_cabs","customer"] + list(STOPWORDS) - positive_wordcloud = WordCloud(max_font_size=50,max_words = 30,background_color="white",stopwords=stop_words).generate(str(positive_tweets)) - plt.figure() - plt.title("Positive Tweets - Wordcloud") - plt.imshow(positive_wordcloud,interpolation="bilinear") - plt.axis("off") - #plt.show() - plt.savefig('positive_tweet.png') - negative_tweets = df['tweet'][df['sentiment'] == 'NEG'] - stop_words = ["https","co","RT","ola_supports","ola_cabs","customer"] + list(STOPWORDS) - negative_wordcloud = WordCloud(max_font_size=50,max_words = 30,background_color="white",stopwords=stop_words).generate(str(negative_tweets)) - plt.figure() - plt.title("Negative Tweets - Wordcloud") - plt.imshow(negative_wordcloud,interpolation="bilinear") 
- plt.axis("off") - #plt.show() - plt.savefig('negative_tweet.png') - return ['Overall_satisfaction.png','positive_tweet.png','negative_tweet.png'] - -gr.Interface(fn=complaint_analysis, - inputs=[ - gr.inputs.Textbox( - placeholder="Tweet handle please", label="Company support Twitter Handle", lines=5), gr.Slider(100, 1000) ], - outputs= [gr.outputs.Image(type="pil"),gr.outputs.Image(type="pil"),gr.outputs.Image(type="pil")], - examples=[]).launch(debug= True) - - - - - diff --git a/spaces/lanbogao/ytdlp-whisper/app.py b/spaces/lanbogao/ytdlp-whisper/app.py deleted file mode 100644 index 0193f07da658242ec851796e78bbd14f12ea73e8..0000000000000000000000000000000000000000 --- a/spaces/lanbogao/ytdlp-whisper/app.py +++ /dev/null @@ -1,163 +0,0 @@ -import gradio as gr -import whisper -from pytube import YouTube -from fastapi import FastAPI, Response, Request -import yt_dlp -import uvicorn -import re -import os -import json -from typing import Optional - -CUSTOM_PATH = "/gradio" - -app = FastAPI() - -langs = ["None"] + sorted(list(whisper.tokenizer.LANGUAGES.values())) -model_size = list(whisper._MODELS.keys()) - -@app.get("/test") -def read_main(): - return {"message": "This is your main app"} - -#async def get_subtitle(url: str): - # Download the subtitle with download_subtitle() - #subtitle_url = download_subtitle(url) - # Stream the subtitle as a response - #return StreamingResponse(requests.get(subtitle_url, stream=True).iter_content(chunk_size=1024)) - -def download_subtitle(url: str, lang: Optional[str] = None) -> Optional[str]: - ydl_opts = { - "writesubtitles": True, - "allsubtitles": True, - "subtitleslangs": [lang] if lang else [], - "skip_download": True, - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - info_dict = ydl.extract_info(url, download=False) - print(json.dumps(info_dict)) - if info_dict.get("subtitles"): - # get first available subtitle - subtitle_url = info_dict["subtitles"][0]["url"] - with ydl.urlopen(subtitle_url) as subtitle: - return subtitle.read().decode() - - return None - -def get_subtitle(url, lang='en'): - if lang is None: - lang = 'en' - # Download subtitles if available - ydl_opts = { - 'writesubtitles': True, - 'outtmpl': '%(id)s.%(ext)s', - 'subtitleslangs': [lang], - 'skip_download': True, - } - try: - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - info_dict = ydl.extract_info(url, download=True) - video_id = info_dict.get("id", None) - if video_id is None: - return None - - subtitle_file = f"{video_id}.{lang}.vtt" - with open(subtitle_file, 'r') as f: - subtitle_content = f.read() - subtitle_content = re.sub(r"<[^>]+>", "", subtitle_content) - return subtitle_content - except error: - print(error) - return None - - return None - -def download_audio(video_url, quality: str = '128', speed: float = None): - ydl_opts = { - 'format': 'bestaudio/best', - 'outtmpl': '%(title)s.%(ext)s', - 'quiet': True, - } - - if speed: - ydl_opts["postprocessors"] = [{ - "key": "FFmpegExtractAudio", - "preferredcodec": "mp3", - "preferredquality": quality, - "addopts": f"-filter:a \"atempo={speed}\"", - }] - - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([video_url]) - audio_file = ydl.prepare_filename(ydl.extract_info(video_url, download=False)) - print('audio_file', audio_file) - return audio_file - -def get_audio(url): - yt = YouTube(url) - return yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4") - -def get_transcript(url, model_size, lang, format): - if lang == "None": - lang = None - - subtitle = download_subtitle(url, lang) - print(subtitle) - if 
subtitle: - return subtitle - - model = whisper.load_model(model_size) - - result = model.transcribe(download_audio(url), fp16=False, language=lang) - - if format == "None": - return result["text"] - elif format == ".srt": - return format_to_srt(result["segments"]) - -def format_to_srt(segments): - output = "" - for i, segment in enumerate(segments): - output += f"{i + 1}\n" - output += f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - output += f"{segment['text']}\n\n" - return output - -def format_timestamp(t): - hh = t//3600 - mm = (t - hh*3600)//60 - ss = t - hh*3600 - mm*60 - mi = (t - int(t))*1000 - return f"{int(hh):02d}:{int(mm):02d}:{int(ss):02d},{int(mi):03d}" - - -with gr.Blocks() as demo: - - with gr.Row(): - - with gr.Column(): - - with gr.Row(): - url = gr.Textbox(placeholder='Youtube video URL', label='URL') - - with gr.Row(): - - model_size = gr.Dropdown(choices=model_size, value='tiny', label="Model") - lang = gr.Dropdown(choices=langs, value="None", label="Language (Optional)") - format = gr.Dropdown(choices=["None", ".srt"], value="None", label="Timestamps? (Optional)") - - with gr.Row(): - gr.Markdown("Larger models are more accurate, but slower. For 1min video, it'll take ~30s (tiny), ~1min (base), ~3min (small), ~5min (medium), etc.") - transcribe_btn = gr.Button('Transcribe') - - with gr.Column(): - outputs = gr.Textbox(placeholder='Transcription of the video', label='Transcription') - - transcribe_btn.click(get_transcript, inputs=[url, model_size, lang, format], outputs=outputs) - -demo.launch(debug=True) - -#io = gr.Interface(gradio_interface) - -#app = gr.mount_gradio_app(app, io, path=CUSTOM_PATH) -uvicorn.run(app, host="0.0.0.0", port=7860) \ No newline at end of file diff --git a/spaces/ledetele/KrystalPDF/README.md b/spaces/ledetele/KrystalPDF/README.md deleted file mode 100644 index 0b0224c4fd6192fd09208e7105eba88cc79183c6..0000000000000000000000000000000000000000 --- a/spaces/ledetele/KrystalPDF/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: KrystalPDF -emoji: 💻 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/leilevy/bingo/src/components/chat-list.tsx b/spaces/leilevy/bingo/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
-      {messages.map((message, index) => (
-        <div key={index}>
-          <ChatMessage message={message} />
-          {index < messages.length - 1 && (
-            <Separator />
-          )}
-        </div>
-      ))}
-    </div>
- ) -} diff --git a/spaces/lewiswu1209/MockingBird/app.py b/spaces/lewiswu1209/MockingBird/app.py deleted file mode 100644 index 9d1a41430fac74af8193b0d42728291b319c5011..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/app.py +++ /dev/null @@ -1,80 +0,0 @@ - -import gradio as gr - -import re -import random -import string -import librosa -import numpy as np - -from pathlib import Path -from scipy.io.wavfile import write - -from encoder import inference as encoder -from vocoder.hifigan import inference as gan_vocoder -from synthesizer.inference import Synthesizer - -class Mandarin: - def __init__(self): - self.encoder_path = "encoder/saved_models/pretrained.pt" - self.vocoder_path = "vocoder/saved_models/pretrained/g_hifigan.pt" - self.config_fpath = "vocoder/hifigan/config_16k_.json" - self.accent = "synthesizer/saved_models/普通话.pt" - - synthesizers_cache = {} - if synthesizers_cache.get(self.accent) is None: - self.current_synt = Synthesizer(Path(self.accent)) - synthesizers_cache[self.accent] = self.current_synt - else: - self.current_synt = synthesizers_cache[self.accent] - - encoder.load_model(Path(self.encoder_path)) - gan_vocoder.load_model(Path(self.vocoder_path), self.config_fpath) - - def setVoice(self, timbre): - self.timbre = timbre - wav, sample_rate, = librosa.load(self.timbre) - - encoder_wav = encoder.preprocess_wav(wav, sample_rate) - self.embed, _, _ = encoder.embed_utterance(encoder_wav, return_partials=True) - - def say(self, text): - texts = filter(None, text.split("\n")) - punctuation = "!,。、?!,.?::" # punctuate and split/clean text - processed_texts = [] - for text in texts: - for processed_text in re.sub(r'[{}]+'.format(punctuation), '\n', text).split('\n'): - if processed_text: - processed_texts.append(processed_text.strip()) - texts = processed_texts - embeds = [self.embed] * len(texts) - - specs = self.current_synt.synthesize_spectrograms(texts, embeds) - spec = np.concatenate(specs, axis=1) - wav, sample_rate = gan_vocoder.infer_waveform(spec) - - return wav, sample_rate - -def greet(audio, text, voice=None): - - if voice is None: - voice = Mandarin() - voice.setVoice(audio.name) - voice.say("加载成功") - wav, sample_rate = voice.say(text) - - output_file = "".join( random.sample(string.ascii_lowercase + string.digits, 11) ) + ".wav" - - write(output_file, sample_rate, wav.astype(np.float32)) - - return output_file, voice - -def main(): - gr.Interface( - fn=greet, - inputs=[gr.inputs.Audio(type="file"),"text", "state"], - outputs=[gr.outputs.Audio(type="file"), "state"] - ).launch() - -if __name__=="__main__": - main() diff --git a/spaces/liuyang3/bingo-gpt4-2/Dockerfile b/spaces/liuyang3/bingo-gpt4-2/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/liuyang3/bingo-gpt4-2/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/lora-library/LoRA-DreamBooth-Training-UI/inference.py b/spaces/lora-library/LoRA-DreamBooth-Training-UI/inference.py deleted file mode 100644 index ce0f2b08df75e6d62f06c4119f1dc859930de032..0000000000000000000000000000000000000000 --- a/spaces/lora-library/LoRA-DreamBooth-Training-UI/inference.py +++ /dev/null @@ -1,94 +0,0 @@ -from __future__ import annotations - -import gc -import pathlib - -import gradio as gr -import PIL.Image -import torch -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler -from 
huggingface_hub import ModelCard - - -class InferencePipeline: - def __init__(self, hf_token: str | None = None): - self.hf_token = hf_token - self.pipe = None - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.lora_model_id = None - self.base_model_id = None - - def clear(self) -> None: - self.lora_model_id = None - self.base_model_id = None - del self.pipe - self.pipe = None - torch.cuda.empty_cache() - gc.collect() - - @staticmethod - def check_if_model_is_local(lora_model_id: str) -> bool: - return pathlib.Path(lora_model_id).exists() - - @staticmethod - def get_model_card(model_id: str, - hf_token: str | None = None) -> ModelCard: - if InferencePipeline.check_if_model_is_local(model_id): - card_path = (pathlib.Path(model_id) / 'README.md').as_posix() - else: - card_path = model_id - return ModelCard.load(card_path, token=hf_token) - - @staticmethod - def get_base_model_info(lora_model_id: str, - hf_token: str | None = None) -> str: - card = InferencePipeline.get_model_card(lora_model_id, hf_token) - return card.data.base_model - - def load_pipe(self, lora_model_id: str) -> None: - if lora_model_id == self.lora_model_id: - return - base_model_id = self.get_base_model_info(lora_model_id, self.hf_token) - if base_model_id != self.base_model_id: - if self.device.type == 'cpu': - pipe = DiffusionPipeline.from_pretrained( - base_model_id, use_auth_token=self.hf_token) - else: - pipe = DiffusionPipeline.from_pretrained( - base_model_id, - torch_dtype=torch.float16, - use_auth_token=self.hf_token) - pipe = pipe.to(self.device) - pipe.scheduler = DPMSolverMultistepScheduler.from_config( - pipe.scheduler.config) - self.pipe = pipe - self.pipe.unet.load_attn_procs( # type: ignore - lora_model_id, use_auth_token=self.hf_token) - - self.lora_model_id = lora_model_id # type: ignore - self.base_model_id = base_model_id # type: ignore - - def run( - self, - lora_model_id: str, - prompt: str, - lora_scale: float, - seed: int, - n_steps: int, - guidance_scale: float, - ) -> PIL.Image.Image: - if not torch.cuda.is_available(): - raise gr.Error('CUDA is not available.') - - self.load_pipe(lora_model_id) - - generator = torch.Generator(device=self.device).manual_seed(seed) - out = self.pipe( - prompt, - num_inference_steps=n_steps, - guidance_scale=guidance_scale, - generator=generator, - cross_attention_kwargs={'scale': lora_scale}, - ) # type: ignore - return out.images[0] diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/core/agent_launcher.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/core/agent_launcher.h deleted file mode 100644 index 7788481c7b85124d0873be11b8563372e457e724..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/core/agent_launcher.h +++ /dev/null @@ -1,1184 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. 
- * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include -#include -#include -#include -#include - -#if 0 -#define __THRUST__TEMPLATE_DEBUG -#endif - -#if __THRUST__TEMPLATE_DEBUG -template class ID_impl; -template class Foo { ID_impl t;}; -#endif - -namespace thrust -{ -namespace cuda_cub { -namespace core { - - -#if defined(__CUDA_ARCH__) || defined(__NVCOMPILER_CUDA__) -#if 0 - template - void __global__ - __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(Args... args) - { - extern __shared__ char shmem[]; - Agent::entry(args..., shmem); - } -#else - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0) - { - extern __shared__ char shmem[]; - Agent::entry(x0, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) - { - extern __shared__ char shmem[]; - 
Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, shmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD, _xE xE) - { - extern __shared__ char shmem[]; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE, shmem); - } -#endif - - //////////////////////////////////////////////////////////// - - -#if 0 - template - void __global__ - __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, Args... args) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(args..., vshmem); - } -#else - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? 
shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? 
shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, vshmem); - } - template - void __global__ __launch_bounds__(Agent::ptx_plan::BLOCK_THREADS) - _kernel_agent_vshmem(char* vshmem, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD, _xE xE) - { - extern __shared__ char shmem[]; - vshmem = vshmem == NULL ? shmem : vshmem + blockIdx.x * temp_storage_size::value; - Agent::entry(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE, vshmem); - } -#endif -#else -#if 0 - template - void __global__ _kernel_agent(Args... args) {} - template - void __global__ _kernel_agent_vshmem(char*, Args... args) {} -#else - template - void __global__ _kernel_agent(_0) {} - template - void __global__ _kernel_agent(_0,_1) {} - template - void __global__ _kernel_agent(_0,_1,_2) {} - template - void __global__ _kernel_agent(_0,_1,_2,_3) {} - template - void __global__ _kernel_agent(_0,_1,_2,_3, _4) {} - template - void __global__ _kernel_agent(_0,_1,_2,_3, _4, _5) {} - template - void __global__ _kernel_agent(_0,_1,_2,_3, _4, _5, _6) {} - template - void __global__ _kernel_agent(_0,_1,_2,_3, _4, _5, _6, _7) {} - template - void __global__ _kernel_agent(_0,_1,_2,_3, _4, _5, _6, _7, _8) {} - template - void __global__ _kernel_agent(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9) {} - template - void __global__ _kernel_agent(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA) {} - template - void __global__ _kernel_agent(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB) {} - template - void __global__ _kernel_agent(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB,_xC) {} - template - void __global__ _kernel_agent(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB,_xC, _xD) {} - template - void __global__ _kernel_agent(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB,_xC, _xD, _xE) {} - //////////////////////////////////////////////////////////// - template - void __global__ _kernel_agent_vshmem(char*,_0) {} - template - void __global__ _kernel_agent_vshmem(char*,_0,_1) {} - template - void __global__ _kernel_agent_vshmem(char*,_0,_1,_2) {} - template - void __global__ _kernel_agent_vshmem(char*,_0,_1,_2,_3) {} - template - void __global__ _kernel_agent_vshmem(char*,_0,_1,_2,_3, _4) {} - template - void __global__ _kernel_agent_vshmem(char*,_0,_1,_2,_3, _4, _5) {} - template - void __global__ _kernel_agent_vshmem(char*,_0,_1,_2,_3, _4, _5, _6) {} - template - void __global__ _kernel_agent_vshmem(char*,_0,_1,_2,_3, _4, _5, _6, _7) {} - template - void __global__ _kernel_agent_vshmem(char*,_0,_1,_2,_3, _4, _5, _6, _7, _8) {} - template - void __global__ _kernel_agent_vshmem(char*,_0, _1, _2, _3, _4, _5, _6, _7, _8, _9) {} - template - void __global__ _kernel_agent_vshmem(char*,_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA) {} - template - void __global__ _kernel_agent_vshmem(char*,_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB) {} - template - void __global__ _kernel_agent_vshmem(char*,_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB, _xC) {} - template - void __global__ 
_kernel_agent_vshmem(char*,_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB, _xC, _xD) {} - template - void __global__ _kernel_agent_vshmem(char*,_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB, _xC, _xD, _xE) {} -#endif -#endif - - - template - struct AgentLauncher : Agent - { - core::AgentPlan plan; - size_t count; - cudaStream_t stream; - char const* name; - bool debug_sync; - unsigned int grid; - char* vshmem; - bool has_shmem; - size_t shmem_size; - - enum - { - MAX_SHMEM_PER_BLOCK = 48 * 1024, - }; - typedef - typename has_enough_shmem::type has_enough_shmem_t; - typedef - has_enough_shmem shm1; - - template - THRUST_RUNTIME_FUNCTION - AgentLauncher(AgentPlan plan_, - Size count_, - cudaStream_t stream_, - char const* name_, - bool debug_sync_) - : plan(plan_), - count((size_t)count_), - stream(stream_), - name(name_), - debug_sync(debug_sync_), - grid(static_cast((count + plan.items_per_tile - 1) / plan.items_per_tile)), - vshmem(NULL), - has_shmem((size_t)core::get_max_shared_memory_per_block() >= (size_t)plan.shared_memory_size), - shmem_size(has_shmem ? plan.shared_memory_size : 0) - { - assert(count > 0); - } - - template - THRUST_RUNTIME_FUNCTION - AgentLauncher(AgentPlan plan_, - Size count_, - cudaStream_t stream_, - char* vshmem, - char const* name_, - bool debug_sync_) - : plan(plan_), - count((size_t)count_), - stream(stream_), - name(name_), - debug_sync(debug_sync_), - grid(static_cast((count + plan.items_per_tile - 1) / plan.items_per_tile)), - vshmem(vshmem), - has_shmem((size_t)core::get_max_shared_memory_per_block() >= (size_t)plan.shared_memory_size), - shmem_size(has_shmem ? plan.shared_memory_size : 0) - { - assert(count > 0); - } - - THRUST_RUNTIME_FUNCTION - AgentLauncher(AgentPlan plan_, - cudaStream_t stream_, - char const* name_, - bool debug_sync_) - : plan(plan_), - count(0), - stream(stream_), - name(name_), - debug_sync(debug_sync_), - grid(plan.grid_size), - vshmem(NULL), - has_shmem((size_t)core::get_max_shared_memory_per_block() >= (size_t)plan.shared_memory_size), - shmem_size(has_shmem ? plan.shared_memory_size : 0) - { - assert(plan.grid_size > 0); - } - - THRUST_RUNTIME_FUNCTION - AgentLauncher(AgentPlan plan_, - cudaStream_t stream_, - char* vshmem, - char const* name_, - bool debug_sync_) - : plan(plan_), - count(0), - stream(stream_), - name(name_), - debug_sync(debug_sync_), - grid(plan.grid_size), - vshmem(vshmem), - has_shmem((size_t)core::get_max_shared_memory_per_block() >= (size_t)plan.shared_memory_size), - shmem_size(has_shmem ? 
plan.shared_memory_size : 0) - { - assert(plan.grid_size > 0); - } - -#if 0 - THRUST_RUNTIME_FUNCTION - AgentPlan static get_plan(cudaStream_t s, void* d_ptr = 0) - { - // in separable compilation mode, we have no choice - // but to call kernel to get agent_plan - // otherwise the risk is something may fail - // if user mix & match ptx versions in a separably compiled function - // http://nvbugs/1772071 - // XXX may be it is too string of a requirements, consider relaxing it in - // the future -#ifdef __CUDACC_RDC__ - return core::get_agent_plan(s, d_ptr); -#else - core::cuda_optional ptx_version = core::get_ptx_version(); - //CUDA_CUB_RET_IF_FAIL(ptx_version.status()); - return get_agent_plan(ptx_version); -#endif - } - THRUST_RUNTIME_FUNCTION - AgentPlan static get_plan_default() - { - return get_agent_plan(sm_arch<0>::type::ver); - } -#endif - - THRUST_RUNTIME_FUNCTION - typename core::get_plan::type static get_plan(cudaStream_t , void* d_ptr = 0) - { - THRUST_UNUSED_VAR(d_ptr); - core::cuda_optional ptx_version = core::get_ptx_version(); - return get_agent_plan(ptx_version); - } - - THRUST_RUNTIME_FUNCTION - typename core::get_plan::type static get_plan() - { - return get_agent_plan(lowest_supported_sm_arch::ver); - } - - THRUST_RUNTIME_FUNCTION void sync() const - { - if (debug_sync) - { - if (THRUST_IS_DEVICE_CODE) { - #if THRUST_INCLUDE_DEVICE_CODE - cudaDeviceSynchronize(); - #endif - } else { - #if THRUST_INCLUDE_HOST_CODE - cudaStreamSynchronize(stream); - #endif - } - } - } - - template - static cuda_optional THRUST_RUNTIME_FUNCTION - max_blocks_per_sm_impl(K k, int block_threads) - { - int occ; - cudaError_t status = cub::MaxSmOccupancy(occ, k, block_threads); - return cuda_optional(status == cudaSuccess ? occ : -1, status); - } - - template - cuda_optional THRUST_RUNTIME_FUNCTION - max_sm_occupancy(K k) const - { - return max_blocks_per_sm_impl(k, plan.block_threads); - } - - - - template - THRUST_RUNTIME_FUNCTION - void print_info(K k) const - { - if (debug_sync) - { - cuda_optional occ = max_sm_occupancy(k); - core::cuda_optional ptx_version = core::get_ptx_version(); - if (count > 0) - { - _CubLog("Invoking %s<<<%u, %d, %d, %lld>>>(), %llu items total, %d items per thread, %d SM occupancy, %d vshmem size, %d ptx_version \n", - name, - grid, - plan.block_threads, - (has_shmem ? (int)plan.shared_memory_size : 0), - (long long)stream, - (long long)count, - plan.items_per_thread, - (int)occ, - (!has_shmem ? (int)plan.shared_memory_size : 0), - (int)ptx_version); - } - else - { - _CubLog("Invoking %s<<<%u, %d, %d, %lld>>>(), %d items per thread, %d SM occupancy, %d vshmem size, %d ptx_version\n", - name, - grid, - plan.block_threads, - (has_shmem ? (int)plan.shared_memory_size : 0), - (long long)stream, - plan.items_per_thread, - (int)occ, - (!has_shmem ? 
(int)plan.shared_memory_size : 0), - (int)ptx_version); - } - } - } - - //////////////////// - // Variadic code - //////////////////// - -#if 0 - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - return max_blocks_per_sm_impl(_kernel_agent, plan.block_threads); - } -#else - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0, _1) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5,_6) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA,_xB) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA,_xB,_xC) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA,_xB,_xC,_xD) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } - template - static cuda_optional THRUST_RUNTIME_FUNCTION - get_max_blocks_per_sm(AgentPlan plan) - { - void 
(*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA,_xB,_xC,_xD,_xE) = _kernel_agent; - return max_blocks_per_sm_impl(ptr, plan.block_threads); - } -#endif - - - -#if 0 - - // If we are guaranteed to have enough shared memory - // don't compile other kernel which accepts pointer - // and save on compilations - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, Args... args) const - { - assert(has_shmem && vshmem == NULL); - print_info(_kernel_agent); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(_kernel_agent, args...); - } - - // If there is a risk of not having enough shared memory - // we compile generic kernel instead. - // This kernel is likely to be somewhat slower, but it can accomodate - // both shared and virtualized shared memories. - // Alternative option is to compile two kernels, one using shared and one - // using virtualized shared memory. While this can be slightly faster if we - // do actually have enough shared memory, the compilation time will double. - // - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, Args... args) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - print_info(_kernel_agent_vshmem); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(_kernel_agent_vshmem, vshmem, args...); - } - - template - void THRUST_RUNTIME_FUNCTION - launch(Args... args) const - { -#if __THRUST__TEMPLATE_DEBUG -#ifdef __CUDA_ARCH__ - typedef typename Foo< - shm1::v1, - shm1::v2, - shm1::v3, - shm1::v4, - shm1::v5>::t tt; -#endif -#endif - launch_impl(has_enough_shmem_t(),args...); - sync(); - } -#else - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4) = _kernel_agent_vshmem; - 
print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6, _7) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6, x7); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6, _7, _8) = _kernel_agent_vshmem; - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6, x7, x8); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6, _7, _8, _9) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9,_xA xA) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9,_xA xA,_xB xB) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB); - } - template - void 
THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9,_xA xA,_xB xB,_xC xC) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB, _xC) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9,_xA xA,_xB xB,_xC xC,_xD xD) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB, _xC, _xD) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::false_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9,_xA xA,_xB xB,_xC xC,_xD xD,_xE xE) const - { - assert((has_shmem && vshmem == NULL) || (!has_shmem && vshmem != NULL && shmem_size == 0)); - void (*ptr)(char*, _0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _xA, _xB, _xC, _xD, _xE) = _kernel_agent_vshmem; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, shmem_size, stream) - .doit(ptr, vshmem, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE); - } - - //////////////////////////////////////////////////////// - //////////////////////////////////////////////////////// - //////////////////////////////////////////////////////// - - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0, _1) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4); - } - template - void THRUST_RUNTIME_FUNCTION - 
launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5, x6); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5, x6, x7); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5, x6, x7, x8); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA,_xB) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA,_xB,_xC) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD) const - { - assert(has_shmem && vshmem == 
NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA,_xB,_xC,_xD) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD); - } - template - void THRUST_RUNTIME_FUNCTION - launch_impl(thrust::detail::true_type, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD, _xE xE) const - { - assert(has_shmem && vshmem == NULL); - void (*ptr)(_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,_xA,_xB,_xC,_xD,_xE) = _kernel_agent; - print_info(ptr); - launcher::triple_chevron(grid, plan.block_threads, plan.shared_memory_size, stream) - .doit(ptr,x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE); - } - - //////////////////////////////////////////////////////// - //////////////////////////////////////////////////////// - //////////////////////////////////////////////////////// - - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0) const - { - launch_impl(has_enough_shmem_t(), x0); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1) const - { - launch_impl(has_enough_shmem_t(), x0, x1); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6, x7); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6, x7, x8); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC 
xC, _xD xD) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD); - sync(); - } - template - void THRUST_RUNTIME_FUNCTION - launch(_0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD, _xE xE) const - { - launch_impl(has_enough_shmem_t(), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE); - sync(); - } -#endif - - - }; - -} // namespace core -} -} // end namespace thrust -#endif diff --git a/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md b/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md deleted file mode 100644 index 779983436c9727dd0d6301a1c857f2360245b51d..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md +++ /dev/null @@ -1,118 +0,0 @@ -# Synchronized-BatchNorm-PyTorch - -**IMPORTANT: Please read the "Implementation details and highlights" section before use.** - -Synchronized Batch Normalization implementation in PyTorch. - -This module differs from the built-in PyTorch BatchNorm as the mean and -standard-deviation are reduced across all devices during training. - -For example, when one uses `nn.DataParallel` to wrap the network during -training, PyTorch's implementation normalize the tensor on each device using -the statistics only on that device, which accelerated the computation and -is also easy to implement, but the statistics might be inaccurate. -Instead, in this synchronized version, the statistics will be computed -over all training samples distributed on multiple devices. - -Note that, for one-GPU or CPU-only case, this module behaves exactly same -as the built-in PyTorch implementation. - -This module is currently only a prototype version for research usages. As mentioned below, -it has its limitations and may even suffer from some design problems. If you have any -questions or suggestions, please feel free to -[open an issue](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues) or -[submit a pull request](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues). - -## Why Synchronized BatchNorm? - -Although the typical implementation of BatchNorm working on multiple devices (GPUs) -is fast (with no communication overhead), it inevitably reduces the size of batch size, -which potentially degenerates the performance. This is not a significant issue in some -standard vision tasks such as ImageNet classification (as the batch size per device -is usually large enough to obtain good statistics). However, it will hurt the performance -in some tasks that the batch size is usually very small (e.g., 1 per GPU). - -For example, the importance of synchronized batch normalization in object detection has been recently proved with a -an extensive analysis in the paper [MegDet: A Large Mini-Batch Object Detector](https://arxiv.org/abs/1711.07240). - -## Usage - -To use the Synchronized Batch Normalization, we add a data parallel replication callback. This introduces a slight -difference with typical usage of the `nn.DataParallel`. 
- -Use it with a provided, customized data parallel wrapper: - -```python -from sync_batchnorm import SynchronizedBatchNorm1d, DataParallelWithCallback - -sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) -sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) -``` - -Or, if you are using a customized data parallel module, you can use this library as a monkey patching. - -```python -from torch.nn import DataParallel # or your customized DataParallel module -from sync_batchnorm import SynchronizedBatchNorm1d, patch_replication_callback - -sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) -sync_bn = DataParallel(sync_bn, device_ids=[0, 1]) -patch_replication_callback(sync_bn) # monkey-patching -``` - -You can use `convert_model` to convert your model to use Synchronized BatchNorm easily. - -```python -import torch.nn as nn -from torchvision import models -from sync_batchnorm import convert_model -# m is a standard pytorch model -m = models.resnet18(True) -m = nn.DataParallel(m) -# after convert, m is using SyncBN -m = convert_model(m) -``` - -See also `tests/test_sync_batchnorm.py` for numeric result comparison. - -## Implementation details and highlights - -If you are interested in how batch statistics are reduced and broadcasted among multiple devices, please take a look -at the code with detailed comments. Here we only emphasize some highlights of the implementation: - -- This implementation is in pure-python. No C++ extra extension libs. -- Easy to use as demonstrated above. -- It uses unbiased variance to update the moving average, and use `sqrt(max(var, eps))` instead of `sqrt(var + eps)`. -- The implementation requires that each module on different devices should invoke the `batchnorm` for exactly SAME -amount of times in each forward pass. For example, you can not only call `batchnorm` on GPU0 but not on GPU1. The `#i -(i = 1, 2, 3, ...)` calls of the `batchnorm` on each device will be viewed as a whole and the statistics will be reduced. -This is tricky but is a good way to handle PyTorch's dynamic computation graph. Although sounds complicated, this -will usually not be the issue for most of the models. - -## Known issues - -#### Runtime error on backward pass. - -Due to a [PyTorch Bug](https://github.com/pytorch/pytorch/issues/3883), using old PyTorch libraries will trigger an `RuntimeError` with messages like: - -``` -Assertion `pos >= 0 && pos < buffer.size()` failed. -``` - -This has already been solved in the newest PyTorch repo, which, unfortunately, has not been pushed to the official and anaconda binary release. Thus, you are required to build the PyTorch package from the source according to the - instructions [here](https://github.com/pytorch/pytorch#from-source). - -#### Numeric error. - -Because this library does not fuse the normalization and statistics operations in C++ (nor CUDA), it is less -numerically stable compared to the original PyTorch implementation. Detailed analysis can be found in -`tests/test_sync_batchnorm.py`. - -## Authors and License: - -Copyright (c) 2018-, [Jiayuan Mao](https://vccy.xyz). - -**Contributors**: [Tete Xiao](https://tetexiao.com), [DTennant](https://github.com/DTennant). 
- -Distributed under **MIT License** (See LICENSE) - diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/utils.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/utils.py deleted file mode 100644 index 154b8d723b86f63244c15dc9575b60bb7ebcf128..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/utils.py +++ /dev/null @@ -1,390 +0,0 @@ -""" - # Copyright 2020 Adobe - # All Rights Reserved. - - # NOTICE: Adobe permits you to use, modify, and distribute this file in - # accordance with the terms of the Adobe license agreement accompanying - # it. - -""" - -import torch.nn as nn -import torch.nn.init as init -import os -import cv2 -import matplotlib.pyplot as plt -import numpy as np - -class Point: - def __init__(self, x, y): - self.x = x - self.y = y - -class ShapeParts: - def __init__(self, np_pts): - self.data = np_pts - - def part(self, idx): - return Point(self.data[idx, 0], self.data[idx, 1]) - - -class Record(): - def __init__(self, type_list): - self.data, self.count = {}, {} - self.type_list = type_list - self.max_min_data = None - for t in type_list: - self.data[t] = 0.0 - self.count[t] = 0.0 - - def add(self, new_data, c=1.0): - for t in self.type_list: - self.data[t] += new_data - self.count[t] += c - - def per(self, t): - return self.data[t] / (self.count[t] + 1e-32) - - def clean(self, t): - self.data[t], self.count[t] = 0.0, 0.0 - - def is_better(self, t, greater): - if(self.max_min_data == None): - self.max_min_data = self.data[t] - return True - else: - if(greater): - if(self.data[t] > self.max_min_data): - self.max_min_data = self.data[t] - return True - else: - if (self.data[t] < self.max_min_data): - self.max_min_data = self.data[t] - return True - return False - -def weight_init(m): - ''' - Usage: - model = Model() - model.apply(weight_init) - ''' - if isinstance(m, nn.Conv1d): - init.normal_(m.weight.data) - if m.bias is not None: - init.normal_(m.bias.data) - elif isinstance(m, nn.Conv2d): - init.xavier_normal_(m.weight.data) - if m.bias is not None: - init.normal_(m.bias.data) - elif isinstance(m, nn.Conv3d): - init.xavier_normal_(m.weight.data) - if m.bias is not None: - init.normal_(m.bias.data) - elif isinstance(m, nn.ConvTranspose1d): - init.normal_(m.weight.data) - if m.bias is not None: - init.normal_(m.bias.data) - elif isinstance(m, nn.ConvTranspose2d): - init.xavier_normal_(m.weight.data) - if m.bias is not None: - init.normal_(m.bias.data) - elif isinstance(m, nn.ConvTranspose3d): - init.xavier_normal_(m.weight.data) - if m.bias is not None: - init.normal_(m.bias.data) - elif isinstance(m, nn.BatchNorm1d): - init.normal_(m.weight.data, mean=1, std=0.02) - init.constant_(m.bias.data, 0) - elif isinstance(m, nn.BatchNorm2d): - init.normal_(m.weight.data, mean=1, std=0.02) - init.constant_(m.bias.data, 0) - elif isinstance(m, nn.BatchNorm3d): - init.normal_(m.weight.data, mean=1, std=0.02) - init.constant_(m.bias.data, 0) - elif isinstance(m, nn.Linear): - init.xavier_normal_(m.weight.data) - init.normal_(m.bias.data) - elif isinstance(m, nn.LSTM): - for param in m.parameters(): - if len(param.shape) >= 2: - init.orthogonal_(param.data) - else: - init.normal_(param.data) - elif isinstance(m, nn.LSTMCell): - for param in m.parameters(): - if len(param.shape) >= 2: - init.orthogonal_(param.data) - else: - init.normal_(param.data) - elif isinstance(m, nn.GRU): - for param in m.parameters(): - if len(param.shape) >= 2: - init.orthogonal_(param.data) - else: - init.normal_(param.data) - 
elif isinstance(m, nn.GRUCell): - for param in m.parameters(): - if len(param.shape) >= 2: - init.orthogonal_(param.data) - else: - init.normal_(param.data) - -def get_n_params(model): - pp=0 - for p in list(model.parameters()): - nn=1 - for s in list(p.size()): - nn = nn*s - pp += nn - return pp - - -def vis_landmark_on_img(img, shape, linewidth=2): - ''' - Visualize landmark on images. - ''' - if (type(shape) == ShapeParts): - def draw_curve(idx_list, color=(0, 255, 0), loop=False, lineWidth=linewidth): - for i in idx_list: - cv2.line(img, (shape.part(i).x, shape.part(i).y), (shape.part(i + 1).x, shape.part(i + 1).y), - color, lineWidth) - if (loop): - cv2.line(img, (shape.part(idx_list[0]).x, shape.part(idx_list[0]).y), - (shape.part(idx_list[-1] + 1).x, shape.part(idx_list[-1] + 1).y), color, lineWidth) - - draw_curve(list(range(0, 16))) # jaw - draw_curve(list(range(17, 21)), color=(0, 0, 255)) # eye brow - draw_curve(list(range(22, 26)), color=(0, 0, 255)) - draw_curve(list(range(27, 35))) # nose - draw_curve(list(range(36, 41)), loop=True) # eyes - draw_curve(list(range(42, 47)), loop=True) - draw_curve(list(range(48, 59)), loop=True, color=(0, 255, 255)) # mouth - draw_curve(list(range(60, 67)), loop=True, color=(255, 255, 0)) - - else: - def draw_curve(idx_list, color=(0, 255, 0), loop=False, lineWidth=linewidth): - for i in idx_list: - cv2.line(img, (shape[i, 0], shape[i, 1]), (shape[i + 1, 0], shape[i + 1, 1]), color, lineWidth) - if (loop): - cv2.line(img, (shape[idx_list[0], 0], shape[idx_list[0], 1]), - (shape[idx_list[-1] + 1, 0], shape[idx_list[-1] + 1, 1]), color, lineWidth) - - draw_curve(list(range(0, 16))) # jaw - draw_curve(list(range(17, 21)), color=(0, 0, 255)) # eye brow - draw_curve(list(range(22, 26)), color=(0, 0, 255)) - draw_curve(list(range(27, 35))) # nose - draw_curve(list(range(36, 41)), loop=True) # eyes - draw_curve(list(range(42, 47)), loop=True) - draw_curve(list(range(48, 59)), loop=True, color=(0, 255, 255)) # mouth - draw_curve(list(range(60, 67)), loop=True, color=(255, 255, 0)) - - return img - - -def vis_landmark_on_plt(fl, x_offset=0.0, show_now=True, c='r'): - def draw_curve(shape, idx_list, loop=False, x_offset=0.0, c=None): - for i in idx_list: - plt.plot((shape[i, 0] + x_offset, shape[i + 1, 0] + x_offset), (-shape[i, 1], -shape[i + 1, 1]), c=c, lineWidth=1) - if (loop): - plt.plot((shape[idx_list[0], 0] + x_offset, shape[idx_list[-1] + 1, 0] + x_offset), - (-shape[idx_list[0], 1], -shape[idx_list[-1] + 1, 1]), c=c, lineWidth=1) - - draw_curve(fl, list(range(0, 16)), x_offset=x_offset, c=c) # jaw - draw_curve(fl, list(range(17, 21)), x_offset=x_offset, c=c) # eye brow - draw_curve(fl, list(range(22, 26)), x_offset=x_offset, c=c) - draw_curve(fl, list(range(27, 35)), x_offset=x_offset, c=c) # nose - draw_curve(fl, list(range(36, 41)), loop=True, x_offset=x_offset, c=c) # eyes - draw_curve(fl, list(range(42, 47)), loop=True, x_offset=x_offset, c=c) - draw_curve(fl, list(range(48, 59)), loop=True, x_offset=x_offset, c=c) # mouth - draw_curve(fl, list(range(60, 67)), loop=True, x_offset=x_offset, c=c) - - if(show_now): - plt.show() - - -def try_mkdir(dir): - try: - os.mkdir(dir) - except: - pass - -import numpy -def smooth(x, window_len=11, window='hanning'): - """smooth the data using a window with requested size. - - This method is based on the convolution of a scaled window with the signal. 
- The signal is prepared by introducing reflected copies of the signal - (with the window size) in both ends so that transient parts are minimized - in the begining and end part of the output signal. - - input: - x: the input signal - window_len: the dimension of the smoothing window; should be an odd integer - window: the type of window from 'flat', 'hanning', 'hamming', 'bartlett', 'blackman' - flat window will produce a moving average smoothing. - - output: - the smoothed signal - - example: - - t=linspace(-2,2,0.1) - x=sin(t)+randn(len(t))*0.1 - y=smooth(x) - - see also: - - numpy.hanning, numpy.hamming, numpy.bartlett, numpy.blackman, numpy.convolve - scipy.signal.lfilter - - the window parameter could be the window itself if an array instead of a string - NOTE: length(output) != length(input), to correct this: return y[(window_len/2-1):-(window_len/2)] instead of just y. - """ - - if x.ndim != 1: - raise(ValueError, "smooth only accepts 1 dimension arrays.") - - if x.size < window_len: - raise(ValueError, "Input vector needs to be bigger than window size.") - - if window_len < 3: - return x - - if not window in ['flat', 'hanning', 'hamming', 'bartlett', 'blackman']: - raise(ValueError, "Window is on of 'flat', 'hanning', 'hamming', 'bartlett', 'blackman'") - - s = numpy.r_[x[window_len - 1:0:-1], x, x[-2:-window_len - 1:-1]] - # print(len(s)) - if window == 'flat': # moving average - w = numpy.ones(window_len, 'd') - else: - w = eval('numpy.' + window + '(window_len)') - - y = numpy.convolve(w / w.sum(), s, mode='valid') - return y - - -def get_puppet_info(DEMO_CH, ROOT_DIR): - import numpy as np - B = 5000 - # for wilk example - if (DEMO_CH == 'wilk_old'): - bound = np.array([-B, -B, -B, 459, -B, B+918, 419, B+918, B+838, B+918, B+838, 459, B+838, -B, 419, -B]).reshape(1, -1) - # bound = np.array([0, 0, 0, 459, 0, 918, 419, 918, 838, 918, 838, 459, 838, 0, 419, 0]).reshape(1, -1) - scale, shift = -0.005276414887140783, np.array([-475.4316, -193.53225]) - elif (DEMO_CH == 'sketch'): - bound = np.array([-10000, -10000, -10000, 221, -10000, 10443, 232, 10443, 10465, 10443, 10465, 221, 10465, -10000, 232, -10000]).reshape(1, -1) - scale, shift = -0.006393177201290783, np.array([-226.8411, -176.5216]) - elif (DEMO_CH == 'onepunch'): - bound = np.array([0, 0, 0, 168, 0, 337, 282, 337, 565, 337, 565, 168, 565, 0, 282, 0]).reshape(1, -1) - scale, shift = -0.007558707536598317, np.array([-301.4903, -120.05265]) - elif (DEMO_CH == 'cat'): - bound = np.array([0, 0, 0, 315, 0, 631, 299, 631, 599, 631, 599, 315, 599, 0, 299, 0]).reshape(1, -1) - scale, shift = -0.009099476040795225, np.array([-297.17085, -259.2363]) - elif (DEMO_CH == 'paint'): - bound = np.array([0, 0, 0, 249, 0, 499, 212, 499, 424, 499, 424, 249, 424, 0, 212, 0]).reshape(1, -1) - scale, shift = -0.007409177996872789, np.array([-161.92345878, -249.40250103]) - elif (DEMO_CH == 'mulaney'): - bound = np.array([0, 0, 0, 255, 0, 511, 341, 511, 682, 511, 682, 255, 682, 0, 341, 0]).reshape(1, -1) - scale, shift = -0.010651548568731444, np.array([-333.54245, -189.081]) - elif (DEMO_CH == 'cartoonM_old'): - bound = np.array([0, 0, 0, 299, 0, 599, 399, 599, 799, 599, 799, 299, 799, 0, 399, 0]).reshape(1, -1) - scale, shift = -0.0055312373170456845, np.array([-398.6125, -240.45235]) - elif (DEMO_CH == 'beer'): - bound = np.array([0, 0, 0, 309, 0, 618, 260, 618, 520, 618, 520, 309, 520, 0, 260, 0]).reshape(1, -1) - scale, shift = -0.0054102709937112374, np.array([-254.1478, -156.6971]) - elif (DEMO_CH == 'color'): - bound = np.array([0, 
0, 0, 140, 0, 280, 249, 280, 499, 280, 499, 140, 499, 0, 249, 0]).reshape(1, -1) - scale, shift = -0.012986159189209149, np.array([-237.27065, -79.2465]) - else: - if (os.path.exists(os.path.join(ROOT_DIR, DEMO_CH + '.jpg'))): - img = cv2.imread(os.path.join(ROOT_DIR, DEMO_CH + ".jpg")) - elif (os.path.exists(os.path.join(ROOT_DIR, DEMO_CH + '.png'))): - img = cv2.imread(os.path.join(ROOT_DIR, DEMO_CH + ".png")) - else: - print('not file founded.') - exit(0) - size = img.shape - h = size[1] - 1 - w = size[0] - 1 - bound = np.array([-B, -B, - -B, w//4, - -B, w // 2, - -B, w//4*3, - -B, B + w, - h // 2, B+w, - B+h, B+w, - B+h, w // 2, - B+h, -B, - h//4, -B, - h // 2, -B, - h//4*3, -B]).reshape(1, -1) - ss = np.loadtxt(os.path.join(ROOT_DIR, DEMO_CH + '_scale_shift.txt')) - scale, shift = ss[0], np.array([ss[1], ss[2]]) - - return bound, scale, shift - - -def close_input_face_mouth(shape_3d, p1=0.7, p2=0.5): - shape_3d = shape_3d.reshape((1, 68, 3)) - index1 = list(range(60 - 1, 55 - 1, -1)) - index2 = list(range(68 - 1, 65 - 1, -1)) - mean_out = 0.5 * (shape_3d[:, 49:54] + shape_3d[:, index1]) - mean_in = 0.5 * (shape_3d[:, 61:64] + shape_3d[:, index2]) - shape_3d[:, 50:53] -= (shape_3d[:, 61:64] - mean_in) * p1 - shape_3d[:, list(range(59 - 1, 56 - 1, -1))] -= (shape_3d[:, index2] - mean_in) * p1 - shape_3d[:, 49] -= (shape_3d[:, 61] - mean_in[:, 0]) * p2 - shape_3d[:, 53] -= (shape_3d[:, 63] - mean_in[:, -1]) * p2 - shape_3d[:, 59] -= (shape_3d[:, 67] - mean_in[:, 0]) * p2 - shape_3d[:, 55] -= (shape_3d[:, 65] - mean_in[:, -1]) * p2 - # shape_3d[:, 61:64] = shape_3d[:, index2] = mean_in - shape_3d[:, 61:64] -= (shape_3d[:, 61:64] - mean_in) * p1 - shape_3d[:, index2] -= (shape_3d[:, index2] - mean_in) * p1 - shape_3d = shape_3d.reshape((68, 3)) - - return shape_3d - -def norm_input_face(shape_3d): - scale = 1.6 / (shape_3d[0, 0] - shape_3d[16, 0]) - shift = - 0.5 * (shape_3d[0, 0:2] + shape_3d[16, 0:2]) - shape_3d[:, 0:2] = (shape_3d[:, 0:2] + shift) * scale - face_std = np.loadtxt('MakeItTalk/src/dataset/utils/STD_FACE_LANDMARKS.txt').reshape(68, 3) - shape_3d[:, -1] = face_std[:, -1] * 0.1 - shape_3d[:, 0:2] = -shape_3d[:, 0:2] - - return shape_3d, scale, shift - -def add_naive_eye(fl): - for t in range(fl.shape[0]): - r = 0.95 - fl[t, 37], fl[t, 41] = r * fl[t, 37] + (1 - r) * fl[t, 41], (1 - r) * fl[t, 37] + r * fl[t, 41] - fl[t, 38], fl[t, 40] = r * fl[t, 38] + (1 - r) * fl[t, 40], (1 - r) * fl[t, 38] + r * fl[t, 40] - fl[t, 43], fl[t, 47] = r * fl[t, 43] + (1 - r) * fl[t, 47], (1 - r) * fl[t, 43] + r * fl[t, 47] - fl[t, 44], fl[t, 46] = r * fl[t, 44] + (1 - r) * fl[t, 46], (1 - r) * fl[t, 44] + r * fl[t, 46] - - K1, K2 = 10, 15 - length = fl.shape[0] - close_time_stamp = [30] - t = 30 - while (t < length - 1 - K2): - t += 60 - t += np.random.randint(30, 90) - if (t < length - 1 - K2): - close_time_stamp.append(t) - for t in close_time_stamp: - fl[t, 37], fl[t, 41] = 0.25 * fl[t, 37] + 0.75 * fl[t, 41], 0.25 * fl[t, 37] + 0.75 * fl[t, 41] - fl[t, 38], fl[t, 40] = 0.25 * fl[t, 38] + 0.75 * fl[t, 40], 0.25 * fl[t, 38] + 0.75 * fl[t, 40] - fl[t, 43], fl[t, 47] = 0.25 * fl[t, 43] + 0.75 * fl[t, 47], 0.25 * fl[t, 43] + 0.75 * fl[t, 47] - fl[t, 44], fl[t, 46] = 0.25 * fl[t, 44] + 0.75 * fl[t, 46], 0.25 * fl[t, 44] + 0.75 * fl[t, 46] - - def interp_fl(t0, t1, t2, r): - for index in [37, 38, 40, 41, 43, 44, 46, 47]: - fl[t0, index] = r * fl[t1, index] + (1 - r) * fl[t2, index] - - for t0 in range(t - K1 + 1, t): - interp_fl(t0, t - K1, t, r=(t - t0) / 1. 
/ K1) - for t0 in range(t + 1, t + K2): - interp_fl(t0, t, t + K2, r=(t + K2 - 1 - t0) / 1. / K2) - - return fl \ No newline at end of file diff --git a/spaces/merve/anonymization/source/uncertainty-calibration/style.css b/spaces/merve/anonymization/source/uncertainty-calibration/style.css deleted file mode 100644 index 8073cf0a59eac0be0e293b35af5255c40c063e21..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/uncertainty-calibration/style.css +++ /dev/null @@ -1,89 +0,0 @@ -svg{ - overflow: visible; -} - -text{ - fill: #202124; - user-select: none; -} - -.domain{ - display: none; -} - -.thresholds, .threshold > g{ - cursor: pointer; -} - -svg{ - user-select: none; -} - -text.axis-label .legend-text{ - font-family: 'Roboto'; - font-style: normal; - font-size: 16px; - line-height: 20px; - /* identical to box height, or 125% */ - - fill: #000; -} - -.axis text{ - font-size: 10px; -} - -text{ - text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff; -} - - - - -.bucket text{ - /*text-shadow: 0 1px 0 #000, 1px 0 0 #000, 0 -1px 0 #000, -1px 0 0 #000;*/ - /*fill: #fff;*/ - font-size: 11px; -} - - -.big-text{ - font-variant-numeric: tabular-nums; - font-size: 16px; -} - -#card{ - display: flex; - flex-direction: column; - align-items: flex-start; - padding: 24px 24px; - gap: 6px; - - background: #EDF4EC; - border: 1px solid #34A853; - box-sizing: border-box; - border-radius: 4px; -} - -text.val-text{ - background: #DFE9E1; - border: 1px solid #476C63; - box-sizing: border-box; - border-radius: 4px; - fill: #2A4C4A; - text-shadow: none; -} - -.val-box{ - fill: #DFE9E1; - stroke: #476C63; - opacity: 1; -} - -.legend-title{ - fill: #002622; -} - -h3 { - color: #00695C; -} \ No newline at end of file diff --git a/spaces/merve/dataset-worldviews/public/fill-in-the-blank/README.md b/spaces/merve/dataset-worldviews/public/fill-in-the-blank/README.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/merve/dataset-worldviews/public/measuring-fairness/students.js b/spaces/merve/dataset-worldviews/public/measuring-fairness/students.js deleted file mode 100644 index 4af55cba8cc763d96aa478be96a785048d9edc42..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/measuring-fairness/students.js +++ /dev/null @@ -1,90 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - - -window.makeStudents = function(){ - var seed = new Math.seedrandom('he4a15') - var rand = d3.randomUniform.source(seed)(0, 1) - var letters = 'abcdefgijlmnopqrsuvwxyz' - letters = (letters + letters.toUpperCase()).split('') - - var nSickCols = 6 - var mSickCols = 8 - var fSickCols = nSickCols*2 - mSickCols - - var students = d3.range(nCols*nCols).map(i => { - var letter = letters[~~d3.randomUniform.source(seed)(0, letters.length)()] - - var isMale = i % 2 == 0 - var isSick = i < (isMale ? mSickCols : fSickCols)*nCols - var grade = isSick*.5 + rand() - var pos = {} - - return {letter, isSick, isMale, grade, pos} - }) - - students = _.sortBy(students, d => -d.grade) - d3.nestBy(students, d => d.isSick).forEach(group => { - var isSick = group[0].isSick - - var sickCols = nSickCols - var cols = isSick ? sickCols : nCols - sickCols - var xOffset = isSick ? 0 : sickCols - - group.forEach((d, i) => { - d.pos.allIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols)] - var spreadIJ = d.pos.allIJ.slice() - if (!d.isSick) spreadIJ[0] += .1 - d.pos.all = spreadIJ.map(d => d*c.width/10) - }) - }) - - d3.nestBy(students, d => d.isSick + '-' + d.isMale).forEach(group => { - var isSick = group[0].isSick - var isMale = group[0].isMale - - var sickCols = isMale ? mSickCols : fSickCols - var cols = isSick ? sickCols : nCols - sickCols - var xOffset = isSick ? 0 : sickCols - var yOffset = isMale ? nCols/2 + 2 : 0 - - group.forEach((d, i) => { - d.pos.sexIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols) + yOffset] - d.pos.sexGroupIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols)] - var spreadIJ = d.pos.sexIJ.slice() - if (!d.isSick) spreadIJ[0] += .1 - d.pos.sex = spreadIJ.map(d => d*c.width/10) - }) - }) - - students.maleOffsetJ = nCols/2 + 2 - students.maleOffsetPx= students.maleOffsetJ*c.width/10 - - students.fSickCols = fSickCols - students.mSickCols = mSickCols - - students.colWidth = c.width/10 - - students.rand = rand - return students -} - - - - - - -if (window.init) window.init() diff --git a/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/scatter-plot-colab/two-sentences/init.js b/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/scatter-plot-colab/two-sentences/init.js deleted file mode 100644 index 71d258e5027dc3c924b2b263e6bb8ea370189b1d..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/scatter-plot-colab/two-sentences/init.js +++ /dev/null @@ -1,38 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - -window.init = function(){ - var initFns = [window.initUtil, window.initScatter, window.initPair] - if (!initFns.every(d => d)) return - - window.util = initUtil() - - var pair = window.python_settings - pair.s0 = python_data.s0 - pair.s1 = python_data.s1 - pair.e0 = python_data.e0 - pair.e1 = python_data.e1 - pair.label0 = 'Sentence 0' - pair.label1 = 'Sentence 1' - pair.vocab = python_data.vocab - - var sel = d3.select('.container').html('') - .st({width: 500}) - - initPair(pair, sel) -} - - -window.init() diff --git a/spaces/mfkeles/Track-Anything/tracker/model/resnet.py b/spaces/mfkeles/Track-Anything/tracker/model/resnet.py deleted file mode 100644 index 984ea3cbfac047537e7de6cfc47108e637e9dde7..0000000000000000000000000000000000000000 --- a/spaces/mfkeles/Track-Anything/tracker/model/resnet.py +++ /dev/null @@ -1,165 +0,0 @@ -""" -resnet.py - A modified ResNet structure -We append extra channels to the first conv by some network surgery -""" - -from collections import OrderedDict -import math - -import torch -import torch.nn as nn -from torch.utils import model_zoo - - -def load_weights_add_extra_dim(target, source_state, extra_dim=1): - new_dict = OrderedDict() - - for k1, v1 in target.state_dict().items(): - if not 'num_batches_tracked' in k1: - if k1 in source_state: - tar_v = source_state[k1] - - if v1.shape != tar_v.shape: - # Init the new segmentation channel with zeros - # print(v1.shape, tar_v.shape) - c, _, w, h = v1.shape - pads = torch.zeros((c,extra_dim,w,h), device=tar_v.device) - nn.init.orthogonal_(pads) - tar_v = torch.cat([tar_v, pads], 1) - - new_dict[k1] = tar_v - - target.load_state_dict(new_dict) - - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth', -} - - -def conv3x3(in_planes, out_planes, stride=1, dilation=1): - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, dilation=dilation, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride=stride, dilation=dilation) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes, stride=1, dilation=dilation) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, dilation=dilation, - padding=dilation, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - 
out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - def __init__(self, block, layers=(3, 4, 23, 3), extra_dim=0): - self.inplanes = 64 - super(ResNet, self).__init__() - self.conv1 = nn.Conv2d(3+extra_dim, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. / n)) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1, dilation=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [block(self.inplanes, planes, stride, downsample)] - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, dilation=dilation)) - - return nn.Sequential(*layers) - -def resnet18(pretrained=True, extra_dim=0): - model = ResNet(BasicBlock, [2, 2, 2, 2], extra_dim) - if pretrained: - load_weights_add_extra_dim(model, model_zoo.load_url(model_urls['resnet18']), extra_dim) - return model - -def resnet50(pretrained=True, extra_dim=0): - model = ResNet(Bottleneck, [3, 4, 6, 3], extra_dim) - if pretrained: - load_weights_add_extra_dim(model, model_zoo.load_url(model_urls['resnet50']), extra_dim) - return model - diff --git a/spaces/mfrashad/CharacterGAN/app.py b/spaces/mfrashad/CharacterGAN/app.py deleted file mode 100644 index 974bc9eb61740af13012c310184df70af4eed44f..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/app.py +++ /dev/null @@ -1,146 +0,0 @@ -import nltk; nltk.download('wordnet') - -#@title Load Model -selected_model = 'character' - -# Load model -import torch -import PIL -import numpy as np -from PIL import Image -from models import get_instrumented_model -from decomposition import get_or_compute -from config import Config -import gradio as gr -import numpy as np - -# Speed up computation -torch.autograd.set_grad_enabled(False) -torch.backends.cudnn.benchmark = True - -# Specify model to use -config = Config( - model='StyleGAN2', - layer='style', - output_class=selected_model, - components=80, - use_w=True, - batch_size=5_000, # style layer quite small -) -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - -inst = get_instrumented_model(config.model, config.output_class, - config.layer, torch.device(device), use_w=config.use_w) - -path_to_components = get_or_compute(config, inst) - -model = inst.model - -comps = np.load(path_to_components) -lst = comps.files -latent_dirs = [] -latent_stdevs = [] - -load_activations = False - -for item in lst: - if load_activations: - if item == 'act_comp': - for i 
in range(comps[item].shape[0]): - latent_dirs.append(comps[item][i]) - if item == 'act_stdev': - for i in range(comps[item].shape[0]): - latent_stdevs.append(comps[item][i]) - else: - if item == 'lat_comp': - for i in range(comps[item].shape[0]): - latent_dirs.append(comps[item][i]) - if item == 'lat_stdev': - for i in range(comps[item].shape[0]): - latent_stdevs.append(comps[item][i]) - - -def display_sample_pytorch(seed, truncation, directions, distances, scale, start, end, w=None, disp=True, save=None, noise_spec=None): - # blockPrint() - model.truncation = truncation - if w is None: - w = model.sample_latent(1, seed=seed).detach().cpu().numpy() - w = [w]*model.get_max_latents() # one per layer - else: - w = [np.expand_dims(x, 0) for x in w] - - for l in range(start, end): - for i in range(len(directions)): - w[l] = w[l] + directions[i] * distances[i] * scale - - torch.cuda.empty_cache() - #save image and display - out = model.sample_np(w) - final_im = Image.fromarray((out * 255).astype(np.uint8)).resize((500,500),Image.LANCZOS) - - - if save is not None: - if disp == False: - print(save) - final_im.save(f'out/{seed}_{save:05}.png') - - return final_im - - -#@title Demo UI - - -def generate_image(seed, truncation, - monster, female, skimpy, light, bodysuit, bulky, human_head, - start_layer, end_layer): - seed = hash(seed) % 1000000000 - scale = 1 - params = {'monster': monster, - 'female': female, - 'skimpy': skimpy, - 'light': light, - 'bodysuit': bodysuit, - 'bulky': bulky, - 'human_head': human_head} - - param_indexes = {'monster': 0, - 'female': 1, - 'skimpy': 2, - 'light': 4, - 'bodysuit': 5, - 'bulky': 6, - 'human_head': 8} - - directions = [] - distances = [] - for k, v in params.items(): - directions.append(latent_dirs[param_indexes[k]]) - distances.append(v) - - style = {'description_width': 'initial'} - return display_sample_pytorch(int(seed), truncation, directions, distances, scale, int(start_layer), int(end_layer), disp=False) - -truncation = gr.inputs.Slider(minimum=0, maximum=1, default=0.5, label="Truncation") -start_layer = gr.inputs.Number(default=0, label="Start Layer") -end_layer = gr.inputs.Number(default=14, label="End Layer") -seed = gr.inputs.Textbox(default="0", label="Seed") - -slider_max_val = 20 -slider_min_val = -20 -slider_step = 1 - -monster = gr.inputs.Slider(label="Monsterfication", minimum=slider_min_val, maximum=slider_max_val, default=0) -female = gr.inputs.Slider(label="Gender", minimum=slider_min_val, maximum=slider_max_val, default=0) -skimpy = gr.inputs.Slider(label="Amount of Clothing", minimum=slider_min_val, maximum=slider_max_val, default=0) -light = gr.inputs.Slider(label="Brightness", minimum=slider_min_val, maximum=slider_max_val, default=0) -bodysuit = gr.inputs.Slider(label="Bodysuit", minimum=slider_min_val, maximum=slider_max_val, default=0) -bulky = gr.inputs.Slider(label="Bulkiness", minimum=slider_min_val, maximum=slider_max_val, default=0) -human_head = gr.inputs.Slider(label="Head", minimum=slider_min_val, maximum=slider_max_val, default=0) - - -scale = 1 - -inputs = [seed, truncation, monster, female, skimpy, light, bodysuit, bulky, human_head, start_layer, end_layer] -description = "Change the seed number to generate different character design. Made by @mfrashad. For more details on how to build this, visit the repo. 
Please give a star if you find it useful :)" - -gr.Interface(generate_image, inputs, ["image"], description=description, live=True, title="CharacterGAN").launch() \ No newline at end of file diff --git a/spaces/mikeee/radiobee-aligner/docs/build/html/_static/underscore.js b/spaces/mikeee/radiobee-aligner/docs/build/html/_static/underscore.js deleted file mode 100644 index cf177d4285ab55fbc16406a5ec827b80e7eecd53..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/docs/build/html/_static/underscore.js +++ /dev/null @@ -1,6 +0,0 @@ -!function(n,r){"object"==typeof exports&&"undefined"!=typeof module?module.exports=r():"function"==typeof define&&define.amd?define("underscore",r):(n="undefined"!=typeof globalThis?globalThis:n||self,function(){var t=n._,e=n._=r();e.noConflict=function(){return n._=t,e}}())}(this,(function(){ -// Underscore.js 1.13.1 -// https://underscorejs.org -// (c) 2009-2021 Jeremy Ashkenas, Julian Gonggrijp, and DocumentCloud and Investigative Reporters & Editors -// Underscore may be freely distributed under the MIT license. -var n="1.13.1",r="object"==typeof self&&self.self===self&&self||"object"==typeof global&&global.global===global&&global||Function("return this")()||{},t=Array.prototype,e=Object.prototype,u="undefined"!=typeof Symbol?Symbol.prototype:null,o=t.push,i=t.slice,a=e.toString,f=e.hasOwnProperty,c="undefined"!=typeof ArrayBuffer,l="undefined"!=typeof DataView,s=Array.isArray,p=Object.keys,v=Object.create,h=c&&ArrayBuffer.isView,y=isNaN,d=isFinite,g=!{toString:null}.propertyIsEnumerable("toString"),b=["valueOf","isPrototypeOf","toString","propertyIsEnumerable","hasOwnProperty","toLocaleString"],m=Math.pow(2,53)-1;function j(n,r){return r=null==r?n.length-1:+r,function(){for(var t=Math.max(arguments.length-r,0),e=Array(t),u=0;u=0&&t<=m}}function J(n){return function(r){return null==r?void 0:r[n]}}var G=J("byteLength"),H=K(G),Q=/\[object ((I|Ui)nt(8|16|32)|Float(32|64)|Uint8Clamped|Big(I|Ui)nt64)Array\]/;var X=c?function(n){return h?h(n)&&!q(n):H(n)&&Q.test(a.call(n))}:C(!1),Y=J("length");function Z(n,r){r=function(n){for(var r={},t=n.length,e=0;e":">",'"':""","'":"'","`":"`"},Cn=Ln($n),Kn=Ln(_n($n)),Jn=tn.templateSettings={evaluate:/<%([\s\S]+?)%>/g,interpolate:/<%=([\s\S]+?)%>/g,escape:/<%-([\s\S]+?)%>/g},Gn=/(.)^/,Hn={"'":"'","\\":"\\","\r":"r","\n":"n","\u2028":"u2028","\u2029":"u2029"},Qn=/\\|'|\r|\n|\u2028|\u2029/g;function Xn(n){return"\\"+Hn[n]}var Yn=/^\s*(\w|\$)+\s*$/;var Zn=0;function nr(n,r,t,e,u){if(!(e instanceof r))return n.apply(t,u);var o=Mn(n.prototype),i=n.apply(o,u);return _(i)?i:o}var rr=j((function(n,r){var t=rr.placeholder,e=function(){for(var u=0,o=r.length,i=Array(o),a=0;a1)ur(a,r-1,t,e),u=e.length;else for(var f=0,c=a.length;f0&&(t=r.apply(this,arguments)),n<=1&&(r=null),t}}var lr=rr(cr,2);function sr(n,r,t){r=qn(r,t);for(var e,u=nn(n),o=0,i=u.length;o0?0:u-1;o>=0&&o0?a=o>=0?o:Math.max(o+f,a):f=o>=0?Math.min(o+1,f):o+f+1;else if(t&&o&&f)return e[o=t(e,u)]===u?o:-1;if(u!=u)return(o=r(i.call(e,a,f),$))>=0?o+a:-1;for(o=n>0?a:f-1;o>=0&&o0?0:i-1;for(u||(e=r[o?o[a]:a],a+=n);a>=0&&a=3;return r(n,Fn(t,u,4),e,o)}}var Ar=wr(1),xr=wr(-1);function Sr(n,r,t){var e=[];return r=qn(r,t),jr(n,(function(n,t,u){r(n,t,u)&&e.push(n)})),e}function Or(n,r,t){r=qn(r,t);for(var e=!er(n)&&nn(n),u=(e||n).length,o=0;o=0}var Br=j((function(n,r,t){var e,u;return D(r)?u=r:(r=Nn(r),e=r.slice(0,-1),r=r[r.length-1]),_r(n,(function(n){var o=u;if(!o){if(e&&e.length&&(n=In(n,e)),null==n)return;o=n[r]}return null==o?o:o.apply(n,t)}))}));function 
Nr(n,r){return _r(n,Rn(r))}function Ir(n,r,t){var e,u,o=-1/0,i=-1/0;if(null==r||"number"==typeof r&&"object"!=typeof n[0]&&null!=n)for(var a=0,f=(n=er(n)?n:jn(n)).length;ao&&(o=e);else r=qn(r,t),jr(n,(function(n,t,e){((u=r(n,t,e))>i||u===-1/0&&o===-1/0)&&(o=n,i=u)}));return o}function Tr(n,r,t){if(null==r||t)return er(n)||(n=jn(n)),n[Wn(n.length-1)];var e=er(n)?En(n):jn(n),u=Y(e);r=Math.max(Math.min(r,u),0);for(var o=u-1,i=0;i1&&(e=Fn(e,r[1])),r=an(n)):(e=qr,r=ur(r,!1,!1),n=Object(n));for(var u=0,o=r.length;u1&&(t=r[1])):(r=_r(ur(r,!1,!1),String),e=function(n,t){return!Er(r,t)}),Ur(n,e,t)}));function zr(n,r,t){return i.call(n,0,Math.max(0,n.length-(null==r||t?1:r)))}function Lr(n,r,t){return null==n||n.length<1?null==r||t?void 0:[]:null==r||t?n[0]:zr(n,n.length-r)}function $r(n,r,t){return i.call(n,null==r||t?1:r)}var Cr=j((function(n,r){return r=ur(r,!0,!0),Sr(n,(function(n){return!Er(r,n)}))})),Kr=j((function(n,r){return Cr(n,r)}));function Jr(n,r,t,e){A(r)||(e=t,t=r,r=!1),null!=t&&(t=qn(t,e));for(var u=[],o=[],i=0,a=Y(n);ir?(e&&(clearTimeout(e),e=null),a=c,i=n.apply(u,o),e||(u=o=null)):e||!1===t.trailing||(e=setTimeout(f,l)),i};return c.cancel=function(){clearTimeout(e),a=0,e=u=o=null},c},debounce:function(n,r,t){var e,u,o,i,a,f=function(){var c=zn()-u;r>c?e=setTimeout(f,r-c):(e=null,t||(i=n.apply(a,o)),e||(o=a=null))},c=j((function(c){return a=this,o=c,u=zn(),e||(e=setTimeout(f,r),t&&(i=n.apply(a,o))),i}));return c.cancel=function(){clearTimeout(e),e=o=a=null},c},wrap:function(n,r){return rr(r,n)},negate:fr,compose:function(){var n=arguments,r=n.length-1;return function(){for(var t=r,e=n[r].apply(this,arguments);t--;)e=n[t].call(this,e);return e}},after:function(n,r){return function(){if(--n<1)return r.apply(this,arguments)}},before:cr,once:lr,findKey:sr,findIndex:vr,findLastIndex:hr,sortedIndex:yr,indexOf:gr,lastIndexOf:br,find:mr,detect:mr,findWhere:function(n,r){return mr(n,Dn(r))},each:jr,forEach:jr,map:_r,collect:_r,reduce:Ar,foldl:Ar,inject:Ar,reduceRight:xr,foldr:xr,filter:Sr,select:Sr,reject:function(n,r,t){return Sr(n,fr(qn(r)),t)},every:Or,all:Or,some:Mr,any:Mr,contains:Er,includes:Er,include:Er,invoke:Br,pluck:Nr,where:function(n,r){return Sr(n,Dn(r))},max:Ir,min:function(n,r,t){var e,u,o=1/0,i=1/0;if(null==r||"number"==typeof r&&"object"!=typeof n[0]&&null!=n)for(var a=0,f=(n=er(n)?n:jn(n)).length;ae||void 0===t)return 1;if(t2: #skip spcial chars such as "?" - result[index]['score']+=float(sum(cosine_scores[index]))*HISTORY_WEIGHT - if r['token_str'].lower().strip() in history_keyword_text.lower().strip() and len(r['token_str'].lower().strip())>1: - #found from history, then increase the score of tokens - result[index]['score']*=HISTORY_WEIGHT - data_load_state.text('Score updated...') - - #sort the results - df=pd.DataFrame(result).sort_values(by='score', ascending=False) - return df - - -if __name__ == '__main__': - #if st._is_running_with_streamlit: - if runtime.exists(): - st.markdown(""" -# Auto-Complete -This is an example of an auto-complete approach where the next token suggested based on users's history -Keyword match & Semantic similarity of users's history (log). -The next token is predicted per probability and a weight if it is appeared in keyword user's history or -there is a similarity to semantic user's history. - -## Source -Forked from **[mbahrami/Auto-Complete_Semantic](https://huggingface.co/spaces/mbahrami/Auto-Complete_Semantic)** with *[osanseviero/fork_a_repo](https://huggingface.co/spaces/osanseviero/fork_a_repo)*. 
- -## Disclaimer -The behind idea is to compare our models that included Guarani during pre-training vs. the models that do not -have saw it. That is, the multilingual ones: XLM-RoBERTa, mBERT and Spanish BERTs (BETO and PLAN-TL-RoBERTa). -Additionally, we include facebook/xlm-v-base model (it includes Guarani during pre-training), -for comparison reasons. -""") - history_keyword_text = st.text_input("Enter users's history (optional, i.e., 'Premio Cervantes')", value="") - - semantic_text = st.text_input("Enter users's history (optional, i.e., 'hai')", value="hai") - - text = st.text_input("Enter a text for auto completion...", value="Augusto Roa Bastos ha'e kuimba'e arandu") - model = st.selectbox("Choose a model", - ["mmaguero/gn-bert-tiny-cased", "mmaguero/gn-bert-small-cased", - "mmaguero/gn-bert-base-cased", "mmaguero/gn-bert-large-cased", - "mmaguero/multilingual-bert-gn-base-cased", "mmaguero/beto-gn-base-cased", - "facebook/xlm-v-base", - - "bert-base-multilingual-cased", "xlm-roberta-base", - - "dccuchile/bert-base-spanish-wwm-cased", "PlanTL-GOB-ES/roberta-base-bne"]) - - data_load_state = st.text('1.Loading model ...') - - nlp, semantic_model = loading_models(model) - - df=main(text,semantic_text,history_keyword_text) - #show the results as a table - st.table(df) - data_load_state.text('') - else: - sys.argv = ['streamlit', 'run', sys.argv[0]] - sys.exit(stcli.main()) \ No newline at end of file diff --git a/spaces/modelscope/FaceChain/Dockerfile b/spaces/modelscope/FaceChain/Dockerfile deleted file mode 100644 index b15220a78a4249c4358cd9191acbd2d63f3451c1..0000000000000000000000000000000000000000 --- a/spaces/modelscope/FaceChain/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -FROM registry.us-west-1.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.1 -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH -WORKDIR $HOME -RUN chmod 777 $HOME -RUN mkdir $HOME/modelscope_cache -ENV MODELSCOPE_CACHE=$HOME/modelscope_cache -ENV GRADIO_SERVER_NAME=0.0.0.0 -EXPOSE 7860 -RUN pip install gradio -RUN echo 'cloning facechain:hf_space' -RUN git clone -b feat/hf_space https://github.com/modelscope/facechain.git -WORKDIR $HOME/facechain -RUN pip install -r requirements.txt -ENV PYTHONPATH=. -CMD ["python", "app.py"] diff --git a/spaces/mrm8488/PromptSource/app.py b/spaces/mrm8488/PromptSource/app.py deleted file mode 100644 index 1c972348a73ad8bd82f0ed80c001c6096260e105..0000000000000000000000000000000000000000 --- a/spaces/mrm8488/PromptSource/app.py +++ /dev/null @@ -1,585 +0,0 @@ -import argparse -import textwrap -from multiprocessing import Manager, Pool - -import pandas as pd -import plotly.express as px -import streamlit as st -from datasets import get_dataset_infos -from pygments import highlight -from pygments.formatters import HtmlFormatter -from pygments.lexers import DjangoLexer - -from session import _get_state -from templates import Template, TemplateCollection -from utils import ( - get_dataset, - get_dataset_confs, - list_datasets, - removeHyphen, - renameDatasetColumn, - render_features, -) - - -# add an argument for read-only -# At the moment, streamlit does not handle python script arguments gracefully. -# Thus, for read-only mode, you have to type one of the below two: -# streamlit run promptsource/app.py -- -r -# streamlit run promptsource/app.py -- --read-only -# Check https://github.com/streamlit/streamlit/issues/337 for more information. 
-parser = argparse.ArgumentParser(description="run app.py with args") -parser.add_argument("-r", "--read-only", action="store_true", help="whether to run it as read-only mode") - -args = parser.parse_args() -if args.read_only: - select_options = ["Helicopter view", "Prompted dataset viewer"] - side_bar_title_prefix = "Promptsource (Read only)" -else: - select_options = ["Helicopter view", "Prompted dataset viewer", "Sourcing"] - side_bar_title_prefix = "Promptsource" - -# -# Helper functions for datasets library -# -get_dataset = st.cache(allow_output_mutation=True)(get_dataset) -get_dataset_confs = st.cache(get_dataset_confs) - - -def reset_template_state(): - state.template_name = None - state.jinja = None - state.reference = None - - -# -# Loads session state -# -state = _get_state() - -# -# Initial page setup -# -st.set_page_config(page_title="Promptsource", layout="wide") -st.sidebar.markdown( - "
💻Github - Promptsource\n\n
", - unsafe_allow_html=True, -) -mode = st.sidebar.selectbox( - label="Choose a mode", - options=select_options, - index=0, - key="mode_select", -) -st.sidebar.title(f"{side_bar_title_prefix} 🌸 - {mode}") - -# -# Adds pygments styles to the page. -# -st.markdown( - "", unsafe_allow_html=True -) - -WIDTH = 80 - - -def show_jinja(t, width=WIDTH): - wrap = textwrap.fill(t, width=width, replace_whitespace=False) - out = highlight(wrap, DjangoLexer(), HtmlFormatter()) - st.write(out, unsafe_allow_html=True) - - -def show_text(t, width=WIDTH, with_markdown=False): - wrap = [textwrap.fill(subt, width=width, replace_whitespace=False) for subt in t.split("\n")] - wrap = "\n".join(wrap) - if with_markdown: - st.write(wrap, unsafe_allow_html=True) - else: - st.text(wrap) - - -# -# Loads template data -# -try: - template_collection = TemplateCollection() -except FileNotFoundError: - st.error( - "Unable to find the prompt folder!\n\n" - "We expect the folder to be in the working directory. " - "You might need to restart the app in the root directory of the repo." - ) - st.stop() - - -if mode == "Helicopter view": - st.title("High level metrics") - st.write( - "If you want to contribute, please refer to the instructions in " - + "[Contributing](https://github.com/bigscience-workshop/promptsource/blob/main/CONTRIBUTING.md)." - ) - - # - # Global metrics - # - counts = template_collection.get_templates_count() - nb_prompted_datasets = len(counts) - st.write(f"## Number of *prompted datasets*: `{nb_prompted_datasets}`") - nb_prompts = sum(counts.values()) - st.write(f"## Number of *prompts*: `{nb_prompts}`") - - # - # Metrics per dataset/subset - # - # Download dataset infos (multiprocessing download) - manager = Manager() - all_infos = manager.dict() - all_datasets = list(set([t[0] for t in template_collection.keys])) - - def get_infos(d_name): - all_infos[d_name] = get_dataset_infos(d_name) - - pool = Pool(processes=len(all_datasets)) - pool.map(get_infos, all_datasets) - pool.close() - pool.join() - - results = [] - for (dataset_name, subset_name) in template_collection.keys: - # Collect split sizes (train, validation and test) - if dataset_name not in all_infos: - infos = get_dataset_infos(dataset_name) - all_infos[dataset_name] = infos - else: - infos = all_infos[dataset_name] - if infos: - if subset_name is None: - subset_infos = infos[list(infos.keys())[0]] - else: - subset_infos = infos[subset_name] - - split_sizes = {k: v.num_examples for k, v in subset_infos.splits.items()} - else: - # Zaid/coqa_expanded and Zaid/quac_expanded don't have dataset_infos.json - # so infos is an empty dic, and `infos[list(infos.keys())[0]]` raises an error - # For simplicity, just filling `split_sizes` with nothing, so the displayed split sizes will be 0. 
- split_sizes = {} - - # Collect template counts, original task counts and names - dataset_templates = template_collection.get_dataset(dataset_name, subset_name) - results.append( - { - "Dataset name": dataset_name, - "Subset name": "∅" if subset_name is None else subset_name, - "Train size": split_sizes["train"] if "train" in split_sizes else 0, - "Validation size": split_sizes["validation"] if "validation" in split_sizes else 0, - "Test size": split_sizes["test"] if "test" in split_sizes else 0, - "Number of prompts": len(dataset_templates), - "Number of original task prompts": sum( - [bool(t.metadata.original_task) for t in dataset_templates.templates.values()] - ), - "Prompt names": [t.name for t in dataset_templates.templates.values()], - } - ) - results_df = pd.DataFrame(results) - results_df.sort_values(["Number of prompts"], inplace=True, ascending=False) - results_df.reset_index(drop=True, inplace=True) - - nb_training_instances = results_df["Train size"].sum() - st.write(f"## Number of *training instances*: `{nb_training_instances}`") - - plot_df = results_df[["Dataset name", "Subset name", "Train size", "Number of prompts"]].copy() - plot_df["Name"] = plot_df["Dataset name"] + " - " + plot_df["Subset name"] - plot_df.sort_values(["Train size"], inplace=True, ascending=False) - fig = px.bar( - plot_df, - x="Name", - y="Train size", - hover_data=["Dataset name", "Subset name", "Number of prompts"], - log_y=True, - title="Number of training instances per data(sub)set - y-axis is in logscale", - ) - fig.update_xaxes(visible=False, showticklabels=False) - st.plotly_chart(fig, use_container_width=True) - st.write( - f"- Top 3 training subsets account for `{100*plot_df[:3]['Train size'].sum()/nb_training_instances:.2f}%` of the training instances." - ) - biggest_training_subset = plot_df.iloc[0] - st.write( - f"- Biggest training subset is *{biggest_training_subset['Name']}* with `{biggest_training_subset['Train size']}` instances" - ) - smallest_training_subset = plot_df[plot_df["Train size"] > 0].iloc[-1] - st.write( - f"- Smallest training subset is *{smallest_training_subset['Name']}* with `{smallest_training_subset['Train size']}` instances" - ) - - st.markdown("***") - st.write("Details per dataset") - st.table(results_df) - -else: - # Combining mode `Prompted dataset viewer` and `Sourcing` since the - # backbone of the interfaces is the same - assert mode in ["Prompted dataset viewer", "Sourcing"], ValueError( - f"`mode` ({mode}) should be in `[Helicopter view, Prompted dataset viewer, Sourcing]`" - ) - - # - # Loads dataset information - # - - dataset_list = list_datasets( - template_collection, - state, - ) - ag_news_index = dataset_list.index("ag_news") - - # - # Select a dataset - starts with ag_news - # - dataset_key = st.sidebar.selectbox( - "Dataset", - dataset_list, - key="dataset_select", - index=ag_news_index, - help="Select the dataset to work on.", - ) - - # - # If a particular dataset is selected, loads dataset and template information - # - if dataset_key is not None: - - # - # Check for subconfigurations (i.e. 
subsets) - # - configs = get_dataset_confs(dataset_key) - conf_option = None - if len(configs) > 0: - conf_option = st.sidebar.selectbox("Subset", configs, index=0, format_func=lambda a: a.name) - - dataset = get_dataset(dataset_key, str(conf_option.name) if conf_option else None) - splits = list(dataset.keys()) - index = 0 - if "train" in splits: - index = splits.index("train") - split = st.sidebar.selectbox("Split", splits, key="split_select", index=index) - dataset = dataset[split] - dataset = renameDatasetColumn(dataset) - - dataset_templates = template_collection.get_dataset(dataset_key, conf_option.name if conf_option else None) - - template_list = dataset_templates.all_template_names - num_templates = len(template_list) - st.sidebar.write( - "No of prompts created for " - + f"`{dataset_key + (('/' + conf_option.name) if conf_option else '')}`" - + f": **{str(num_templates)}**" - ) - - if mode == "Prompted dataset viewer": - if num_templates > 0: - template_name = st.sidebar.selectbox( - "Prompt name", - template_list, - key="template_select", - index=0, - help="Select the prompt to visualize.", - ) - - step = 50 - example_index = st.sidebar.number_input( - f"Select the example index (Size = {len(dataset)})", - min_value=0, - max_value=len(dataset) - step, - value=0, - step=step, - key="example_index_number_input", - help="Offset = 50.", - ) - else: # mode = Sourcing - st.sidebar.subheader("Select Example") - example_index = st.sidebar.slider("Select the example index", 0, len(dataset) - 1) - - example = dataset[example_index] - example = removeHyphen(example) - - st.sidebar.write(example) - - st.sidebar.subheader("Dataset Schema") - rendered_features = render_features(dataset.features) - st.sidebar.write(rendered_features) - - # - # Display dataset information - # - st.header("Dataset: " + dataset_key + " " + (("/ " + conf_option.name) if conf_option else "")) - - st.markdown( - "*Homepage*: " - + dataset.info.homepage - + "\n\n*Dataset*: https://github.com/huggingface/datasets/blob/master/datasets/%s/%s.py" - % (dataset_key, dataset_key) - ) - - md = """ - %s - """ % ( - dataset.info.description.replace("\\", "") if dataset_key else "" - ) - st.markdown(md) - - # - # Body of the app: display prompted examples in mode `Prompted dataset viewer` - # or text boxes to create new prompts in mode `Sourcing` - # - if mode == "Prompted dataset viewer": - # - # Display template information - # - if num_templates > 0: - template = dataset_templates[template_name] - st.subheader("Prompt") - st.markdown("##### Name") - st.text(template.name) - st.markdown("##### Reference") - st.text(template.reference) - st.markdown("##### Original Task? ") - st.text(template.metadata.original_task) - st.markdown("##### Choices in template? 
") - st.text(template.metadata.choices_in_prompt) - st.markdown("##### Metrics") - st.text(", ".join(template.metadata.metrics) if template.metadata.metrics else None) - st.markdown("##### Answer Choices") - if template.get_answer_choices_expr() is not None: - show_jinja(template.get_answer_choices_expr()) - else: - st.text(None) - st.markdown("##### Jinja template") - splitted_template = template.jinja.split("|||") - st.markdown("###### Input template") - show_jinja(splitted_template[0].strip()) - if len(splitted_template) > 1: - st.markdown("###### Target template") - show_jinja(splitted_template[1].strip()) - st.markdown("***") - - # - # Display a couple (steps) examples - # - for ex_idx in range(example_index, example_index + step): - if ex_idx >= len(dataset): - continue - example = dataset[ex_idx] - example = removeHyphen(example) - col1, _, col2 = st.beta_columns([12, 1, 12]) - with col1: - st.write(example) - if num_templates > 0: - with col2: - prompt = template.apply(example, highlight_variables=False) - if prompt == [""]: - st.write("∅∅∅ *Blank result*") - else: - st.write("Input") - show_text(prompt[0]) - if len(prompt) > 1: - st.write("Target") - show_text(prompt[1]) - st.markdown("***") - else: # mode = Sourcing - st.markdown("## Prompt Creator") - - # - # Create a new template or select an existing one - # - col1a, col1b, _, col2 = st.beta_columns([9, 9, 1, 6]) - - # current_templates_key and state.templates_key are keys for the templates object - current_templates_key = (dataset_key, conf_option.name if conf_option else None) - - # Resets state if there has been a change in templates_key - if state.templates_key != current_templates_key: - state.templates_key = current_templates_key - reset_template_state() - - with col1a, st.form("new_template_form"): - new_template_name = st.text_input( - "Create a New Prompt", - key="new_template", - value="", - help="Enter name and hit enter to create a new prompt.", - ) - new_template_submitted = st.form_submit_button("Create") - if new_template_submitted: - if new_template_name in dataset_templates.all_template_names: - st.error( - f"A prompt with the name {new_template_name} already exists " - f"for dataset {state.templates_key}." - ) - elif new_template_name == "": - st.error("Need to provide a prompt name.") - else: - template = Template(new_template_name, "", "") - dataset_templates.add_template(template) - reset_template_state() - state.template_name = new_template_name - else: - state.new_template_name = None - - with col1b, st.beta_expander("or Select Prompt", expanded=True): - dataset_templates = template_collection.get_dataset(*state.templates_key) - template_list = dataset_templates.all_template_names - if state.template_name: - index = template_list.index(state.template_name) - else: - index = 0 - state.template_name = st.selectbox( - "", template_list, key="template_select", index=index, help="Select the prompt to work on." - ) - - if st.button("Delete Prompt", key="delete_prompt"): - dataset_templates.remove_template(state.template_name) - reset_template_state() - - variety_guideline = """ - :heavy_exclamation_mark::question:Creating a diverse set of prompts whose differences go beyond surface wordings (i.e. marginally changing 2 or 3 words) is highly encouraged. - Ultimately, the hope is that exposing the model to such a diversity will have a non-trivial impact on the model's robustness to the prompt formulation. 
- \r**To get various prompts, you can try moving the cursor along theses axes**: - \n- **Interrogative vs affirmative form**: Ask a question about an attribute of the inputs or tell the model to decide something about the input. - \n- **Task description localization**: where is the task description blended with the inputs? In the beginning, in the middle, at the end? - \n- **Implicit situation or contextualization**: how explicit is the query? For instance, *Given this review, would you buy this product?* is an indirect way to ask whether the review is positive. - """ - - col1, _, _ = st.beta_columns([18, 1, 6]) - with col1: - if state.template_name is not None: - show_text(variety_guideline, with_markdown=True) - - # - # Edit the created or selected template - # - col1, _, col2 = st.beta_columns([18, 1, 6]) - with col1: - if state.template_name is not None: - template = dataset_templates[state.template_name] - # - # If template is selected, displays template editor - # - with st.form("edit_template_form"): - updated_template_name = st.text_input("Name", value=template.name) - state.reference = st.text_input( - "Prompt Reference", - help="Short description of the prompt and/or paper reference for the prompt.", - value=template.reference, - ) - - # Metadata - state.metadata = template.metadata - state.metadata.original_task = st.checkbox( - "Original Task?", - value=template.metadata.original_task, - help="Prompt asks model to perform the original task designed for this dataset.", - ) - state.metadata.choices_in_prompt = st.checkbox( - "Choices in Template?", - value=template.metadata.choices_in_prompt, - help="Prompt explicitly lists choices in the template for the output.", - ) - - # Metrics from here: - # https://github.com/google-research/text-to-text-transfer-transformer/blob/4b580f23968c2139be7fb1cd53b22c7a7f686cdf/t5/evaluation/metrics.py - metrics_choices = [ - "BLEU", - "ROUGE", - "Squad", - "Trivia QA", - "Accuracy", - "Pearson Correlation", - "Spearman Correlation", - "MultiRC", - "AUC", - "COQA F1", - "Edit Distance", - ] - # Add mean reciprocal rank - metrics_choices.append("Mean Reciprocal Rank") - # Add generic other - metrics_choices.append("Other") - # Sort alphabetically - metrics_choices = sorted(metrics_choices) - state.metadata.metrics = st.multiselect( - "Metrics", - metrics_choices, - default=template.metadata.metrics, - help="Select all metrics that are commonly used (or should " - "be used if a new task) to evaluate this prompt.", - ) - - # Answer choices - if template.get_answer_choices_expr() is not None: - answer_choices = template.get_answer_choices_expr() - else: - answer_choices = "" - state.answer_choices = st.text_input( - "Answer Choices", - value=answer_choices, - help="A Jinja expression for computing answer choices. " - "Separate choices with a triple bar (|||).", - ) - - # Jinja - state.jinja = st.text_area("Template", height=40, value=template.jinja) - - # Submit form - if st.form_submit_button("Save"): - if ( - updated_template_name in dataset_templates.all_template_names - and updated_template_name != state.template_name - ): - st.error( - f"A prompt with the name {updated_template_name} already exists " - f"for dataset {state.templates_key}." 
- ) - elif updated_template_name == "": - st.error("Need to provide a prompt name.") - else: - # Parses state.answer_choices - if state.answer_choices == "": - updated_answer_choices = None - else: - updated_answer_choices = state.answer_choices - - dataset_templates.update_template( - state.template_name, - updated_template_name, - state.jinja, - state.reference, - state.metadata, - updated_answer_choices, - ) - # Update the state as well - state.template_name = updated_template_name - # - # Displays template output on current example if a template is selected - # (in second column) - # - with col2: - if state.template_name is not None: - st.empty() - template = dataset_templates[state.template_name] - prompt = template.apply(example) - if prompt == [""]: - st.write("∅∅∅ *Blank result*") - else: - st.write("Input") - show_text(prompt[0], width=40) - if len(prompt) > 1: - st.write("Target") - show_text(prompt[1], width=40) - - -# -# Must sync state at end -# -state.sync() diff --git a/spaces/mshukor/UnIVAL/preprocess/average_save_models.py b/spaces/mshukor/UnIVAL/preprocess/average_save_models.py deleted file mode 100644 index c73c235e50051283eea5d6da952404d1a0749f1e..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/preprocess/average_save_models.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch -import numpy as np -import os -import re - -def average(checkpoints, lambdas=[0.5, 0.5], num_models=6, output_dir=None, filename=None, skip_keys=None, ema=False): - - ckpt = torch.load(checkpoints[0], map_location='cpu') - - if ema: - key = 'extra_state' - state = ckpt['extra_state']['ema'] - else: - key = 'model' - state = ckpt['model'] - - print(lambdas) - - - - if num_models == 1: - average_state = {k : v.clone() * lambdas[0] for k, v in state.items()} - for i in range(1, len(checkpoints)): - skip_keys_list = set() - print(checkpoints[i], lambdas[i]) - if ema: - statei = torch.load(checkpoints[i], map_location='cpu')['extra_state']['ema'] - else: - statei = torch.load(checkpoints[i], map_location='cpu')['model'] - for k, v in average_state.items(): - if k in statei and (skip_keys is None or ((not any([re.match(sk, k) for sk in skip_keys])) and (not any([sk in k for sk in skip_keys])))): - try: - average_state[k] += (lambdas[i])*statei[k].clone() - except: - print(k, average_state[k].shape, statei[k].shape) - average_state[k] += (lambdas[i])*average_state[k].clone() - else: - average_state[k] += (lambdas[i])*average_state[k].clone() - skip_keys_list.add(k) - - - state_dict = average_state - print(skip_keys_list) - if ema: - save_obj = {key:{'ema': state_dict, 'epoch': 0}} - for k, v in ckpt['extra_state'].items(): - if k != 'ema': - save_obj['extra_state']=v - print(k) - for k, v in ckpt.items(): - if k != key: - save_obj[k]=v - print(k) - else: - save_obj = {key: state_dict,} - for k, v in ckpt.items(): - if k != key: - save_obj[k]=v - print(k) - output_path = os.path.join(output_dir, '{}.pt'.format(filename)) - print('saving', output_path) - torch.save(save_obj, output_path) - - else: - if ema: - state_dict1 = ckpt['extra_state']['ema'] - state_dict2 = torch.load(checkpoints[1], map_location='cpu')['extra_state']['ema'] - else: - state_dict1 = ckpt['model'] - state_dict2 = torch.load(checkpoints[1], map_location='cpu')['model'] - for l in lambdas: - average_state = {k : v * l for k, v in state_dict1.items()} #{k : v * (1./NUM_MODELS) for k, v in state_dict1.items()} - for k, v in average_state.items(): - if k in state_dict2: - average_state[k] += (1-l)*state_dict2[k] - else: 
- average_state[k] += (1-l)*state_dict1[k] - - state_dict = average_state - - if ema: - save_obj = {key:{'ema': state_dict,}} - for k, v in ckpt['extra_state'].items(): - if k != 'ema': - save_obj['extra_state'][k]=v - print(k) - for k, v in ckpt.items(): - if k != key: - save_obj[k]=v - print(k) - else: - save_obj = {key: state_dict,} - for k, v in ckpt.items(): - if k != key: - save_obj[k]=v - print(k) - output_path = os.path.join(output_dir, '{}_l{:.2f}.pt'.format(filename, l)) - print('saving', output_path) - torch.save(save_obj, output_path) - - - - - - -# average of several models - -# lambdas = [1/4, 1/4, 1/4, 1/4] - -# num_models=1 -# output_dir='/lus/scratch/NAT/gda2204/SHARED/logs/ofa/pretrained_models/average_models/' -# filename='avg_caprefsnlivqa' - -# checkpoints = [ -# '/lus/scratch/NAT/gda2204/SHARED/logs/ofa/checkpoints/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf/10_0.06_6000/checkpoint_best.pt', -# '/lus/scratch/NAT/gda2204/SHARED/logs/ofa/checkpoints/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf/10_5e-5_512/checkpoint_best.pt', -# '/lus/scratch/NAT/gda2204/SHARED/logs/ofa/checkpoints/snli_ve/snli_ve_ofaplus_base_pretrain_s2_hsep1/10_5e-5/checkpoint_best.pt', -# '/lus/scratch/NAT/gda2204/SHARED/logs/ofa/checkpoints/vqa/vqa_ofaplus_base_pretrain_s2_bs16_lr1e4_shuf_hsep1/20_0.04_1e-4_480/checkpoint_best.pt', -# ] - -# for weight interpolation -num_models=6 -output_dir='/lus/scratch/NAT/gda2204/SHARED/logs/ofa/pretrained_models/average_models/' -filename='avg_capvqa' -lambdas = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] - -checkpoints = ['/lus/scratch/NAT/gda2204/SHARED/logs/ofa/checkpoints/vqa/vqa_ofaplus_base_pretrain_s2_bs16_lr1e4_shuf_hsep1/20_0.04_1e-4_480/checkpoint_best.pt', - '/lus/scratch/NAT/gda2204/SHARED/logs/ofa/checkpoints/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf/10_0.06_6000/checkpoint_best.pt', - ] - - - -average(checkpoints, lambdas=lambdas, num_models=num_models, output_dir=output_dir, filename=filename) diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/json_utils/json_fix_llm.py b/spaces/msmilauer/AutoGPT-duplicated2/autogpt/json_utils/json_fix_llm.py deleted file mode 100644 index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000 --- a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/json_utils/json_fix_llm.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance -of the ChatGPT API or LLM models.""" -from __future__ import annotations - -import contextlib -import json -from typing import Any, Dict - -from colorama import Fore -from regex import regex - -from autogpt.config import Config -from autogpt.json_utils.json_fix_general import correct_json -from autogpt.llm_utils import call_ai_function -from autogpt.logs import logger -from autogpt.speech import say_text - -JSON_SCHEMA = """ -{ - "command": { - "name": "command name", - "args": { - "arg name": "value" - } - }, - "thoughts": - { - "text": "thought", - "reasoning": "reasoning", - "plan": "- short bulleted\n- list that conveys\n- long-term plan", - "criticism": "constructive self-criticism", - "speak": "thoughts summary to say to user" - } -} -""" - -CFG = Config() - - -def auto_fix_json(json_string: str, schema: str) -> str: - """Fix the given JSON string to make it parseable and fully compliant with - the provided schema using GPT-3. - - Args: - json_string (str): The JSON string to fix. 
- schema (str): The schema to use to fix the JSON. - Returns: - str: The fixed JSON string. - """ - # Try to fix the JSON using GPT: - function_string = "def fix_json(json_string: str, schema:str=None) -> str:" - args = [f"'''{json_string}'''", f"'''{schema}'''"] - description_string = ( - "This function takes a JSON string and ensures that it" - " is parseable and fully compliant with the provided schema. If an object" - " or field specified in the schema isn't contained within the correct JSON," - " it is omitted. The function also escapes any double quotes within JSON" - " string values to ensure that they are valid. If the JSON string contains" - " any None or NaN values, they are replaced with null before being parsed." - ) - - # If it doesn't already start with a "`", add one: - if not json_string.startswith("`"): - json_string = "```json\n" + json_string + "\n```" - result_string = call_ai_function( - function_string, args, description_string, model=CFG.fast_llm_model - ) - logger.debug("------------ JSON FIX ATTEMPT ---------------") - logger.debug(f"Original JSON: {json_string}") - logger.debug("-----------") - logger.debug(f"Fixed JSON: {result_string}") - logger.debug("----------- END OF FIX ATTEMPT ----------------") - - try: - json.loads(result_string) # just check the validity - return result_string - except json.JSONDecodeError: # noqa: E722 - # Get the call stack: - # import traceback - # call_stack = traceback.format_exc() - # print(f"Failed to fix JSON: '{json_string}' "+call_stack) - return "failed" - - -def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]: - """Fix the given JSON string to make it parseable and fully compliant with two techniques. - - Args: - json_string (str): The JSON string to fix. - - Returns: - str: The fixed JSON string. - """ - - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - if assistant_reply_json == {}: - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - - if assistant_reply_json != {}: - return assistant_reply_json - - logger.error( - "Error: The following AI output couldn't be converted to a JSON:\n", - assistant_reply, - ) - if CFG.speak_mode: - say_text("I have received an invalid JSON response from the OpenAI API.") - - return {} - - -def fix_and_parse_json( - json_to_load: str, try_to_fix_with_gpt: bool = True -) -> Dict[Any, Any]: - """Fix and parse JSON string - - Args: - json_to_load (str): The JSON string. - try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT. - Defaults to True. - - Returns: - str or dict[Any, Any]: The parsed JSON. - """ - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = json_to_load.replace("\t", "") - return json.loads(json_to_load) - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = correct_json(json_to_load) - return json.loads(json_to_load) - # Let's do something manually: - # sometimes GPT responds with something BEFORE the braces: - # "I'm sorry, I don't understand. Please try again." - # {"text": "I'm sorry, I don't understand. 
Please try again.", - # "confidence": 0.0} - # So let's try to find the first brace and then parse the rest - # of the string - try: - brace_index = json_to_load.index("{") - maybe_fixed_json = json_to_load[brace_index:] - last_brace_index = maybe_fixed_json.rindex("}") - maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1] - return json.loads(maybe_fixed_json) - except (json.JSONDecodeError, ValueError) as e: - return try_ai_fix(try_to_fix_with_gpt, e, json_to_load) - - -def try_ai_fix( - try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str -) -> Dict[Any, Any]: - """Try to fix the JSON with the AI - - Args: - try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI. - exception (Exception): The exception that was raised. - json_to_load (str): The JSON string to load. - - Raises: - exception: If try_to_fix_with_gpt is False. - - Returns: - str or dict[Any, Any]: The JSON string or dictionary. - """ - if not try_to_fix_with_gpt: - raise exception - if CFG.debug_mode: - logger.warn( - "Warning: Failed to parse AI output, attempting to fix." - "\n If you see this warning frequently, it's likely that" - " your prompt is confusing the AI. Try changing it up" - " slightly." - ) - # Now try to fix this up using the ai_functions - ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA) - - if ai_fixed_json != "failed": - return json.loads(ai_fixed_json) - # This allows the AI to react to the error message, - # which usually results in it correcting its ways. - # logger.error("Failed to fix AI output, telling the AI.") - return {} - - -def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): - if CFG.speak_mode and CFG.debug_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API. " - "Trying to fix it now." - ) - logger.error("Attempting to fix JSON by finding outermost brackets\n") - - try: - json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}") - json_match = json_pattern.search(json_string) - - if json_match: - # Extract the valid JSON object from the string - json_string = json_match.group(0) - logger.typewriter_log( - title="Apparently json was fixed.", title_color=Fore.GREEN - ) - if CFG.speak_mode and CFG.debug_mode: - say_text("Apparently json was fixed.") - else: - return {} - - except (json.JSONDecodeError, ValueError): - if CFG.debug_mode: - logger.error(f"Error: Invalid JSON: {json_string}\n") - if CFG.speak_mode: - say_text("Didn't work. 
I will have to ignore this response then.") - logger.error("Error: Invalid JSON, setting it to empty JSON now.\n") - json_string = {} - - return fix_and_parse_json(json_string) diff --git a/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_training.py b/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_training.py deleted file mode 100644 index cb6c0cc45cd563819f568103d46ee79dd72c103a..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_training.py +++ /dev/null @@ -1,140 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os - -import gradio as gr - -from constants import MODEL_LIBRARY_ORG_NAME, SAMPLE_MODEL_REPO, UploadTarget -from inference import InferencePipeline -from trainer import Trainer - - -def create_training_demo(trainer: Trainer, - pipe: InferencePipeline | None = None) -> gr.Blocks: - hf_token = os.getenv('HF_TOKEN') - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Box(): - gr.Markdown('Training Data') - training_video = gr.File(label='Training video') - training_prompt = gr.Textbox( - label='Training prompt', - max_lines=1, - placeholder='A man is surfing') - gr.Markdown(''' - - Upload a video and write a `Training Prompt` that describes the video. - ''') - - with gr.Column(): - with gr.Box(): - gr.Markdown('Training Parameters') - with gr.Row(): - base_model = gr.Text(label='Base Model', - value='CompVis/stable-diffusion-v1-4', - max_lines=1) - resolution = gr.Dropdown(choices=['512', '768'], - value='512', - label='Resolution', - visible=False) - - token = gr.Text(label="Hugging Face Write Token", placeholder="", visible=False if hf_token else True) - with gr.Accordion("Advanced settings", open=False): - num_training_steps = gr.Number( - label='Number of Training Steps', value=300, precision=0) - learning_rate = gr.Number(label='Learning Rate', - value=0.000035) - gradient_accumulation = gr.Number( - label='Number of Gradient Accumulation', - value=1, - precision=0) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - randomize=True, - value=0) - fp16 = gr.Checkbox(label='FP16', value=True) - use_8bit_adam = gr.Checkbox(label='Use 8bit Adam', value=False) - checkpointing_steps = gr.Number(label='Checkpointing Steps', - value=1000, - precision=0) - validation_epochs = gr.Number(label='Validation Epochs', - value=100, - precision=0) - gr.Markdown(''' - - The base model must be a Stable Diffusion model compatible with [diffusers](https://github.com/huggingface/diffusers) library. - - Expected time to train a model for 300 steps: ~20 minutes with T4 - - You can check the training status by pressing the "Open logs" button if you are running this on your Space. 
- ''') - - with gr.Row(): - with gr.Column(): - gr.Markdown('Output Model') - output_model_name = gr.Text(label='Name of your model', - placeholder='The surfer man', - max_lines=1) - validation_prompt = gr.Text(label='Validation Prompt', placeholder='prompt to test the model, e.g: a dog is surfing') - with gr.Column(): - gr.Markdown('Upload Settings') - with gr.Row(): - upload_to_hub = gr.Checkbox( - label='Upload model to Hub', value=True) - use_private_repo = gr.Checkbox(label='Private', - value=True) - delete_existing_repo = gr.Checkbox( - label='Delete existing repo of the same name', - value=False) - upload_to = gr.Radio( - label='Upload to', - choices=[_.value for _ in UploadTarget], - value=UploadTarget.MODEL_LIBRARY.value) - - remove_gpu_after_training = gr.Checkbox( - label='Remove GPU after training', - value=False, - interactive=bool(os.getenv('SPACE_ID')), - visible=False) - run_button = gr.Button('Start Training') - - with gr.Box(): - gr.Markdown('Output message') - output_message = gr.Markdown() - - if pipe is not None: - run_button.click(fn=pipe.clear) - run_button.click(fn=trainer.run, - inputs=[ - training_video, - training_prompt, - output_model_name, - delete_existing_repo, - validation_prompt, - base_model, - resolution, - num_training_steps, - learning_rate, - gradient_accumulation, - seed, - fp16, - use_8bit_adam, - checkpointing_steps, - validation_epochs, - upload_to_hub, - use_private_repo, - delete_existing_repo, - upload_to, - remove_gpu_after_training, - token - ], - outputs=output_message) - return demo - - -if __name__ == '__main__': - hf_token = os.getenv('HF_TOKEN') - trainer = Trainer(hf_token) - demo = create_training_demo(trainer) - demo.queue(max_size=1).launch(share=False) diff --git a/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/modules/image_degradation/utils_image.py b/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/modules/image_degradation/utils_image.py deleted file mode 100644 index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/modules/image_degradation/utils_image.py +++ /dev/null @@ -1,916 +0,0 @@ -import os -import math -import random -import numpy as np -import torch -import cv2 -from torchvision.utils import make_grid -from datetime import datetime -#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py - - -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - - -''' -# -------------------------------------------- -# Kai Zhang (github: https://github.com/cszn) -# 03/Mar/2019 -# -------------------------------------------- -# https://github.com/twhui/SRGAN-pyTorch -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif'] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def get_timestamp(): - return datetime.now().strftime('%y%m%d-%H%M%S') - - -def imshow(x, title=None, cbar=False, figsize=None): - plt.figure(figsize=figsize) - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - - -def surf(Z, cmap='rainbow', figsize=None): - plt.figure(figsize=figsize) - ax3 = plt.axes(projection='3d') - - w, h = Z.shape[:2] - xx = np.arange(0,w,1) - yy = np.arange(0,h,1) - X, Y = np.meshgrid(xx, yy) - 
ax3.plot_surface(X,Y,Z,cmap=cmap) - #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap) - plt.show() - - -''' -# -------------------------------------------- -# get image pathes -# -------------------------------------------- -''' - - -def get_image_paths(dataroot): - paths = None # return None if dataroot is None - if dataroot is not None: - paths = sorted(_get_paths_from_images(dataroot)) - return paths - - -def _get_paths_from_images(path): - assert os.path.isdir(path), '{:s} is not a valid directory'.format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, '{:s} has no valid image file'.format(path) - return images - - -''' -# -------------------------------------------- -# split large images into small images -# -------------------------------------------- -''' - - -def patches_from_image(img, p_size=512, p_overlap=64, p_max=800): - w, h = img.shape[:2] - patches = [] - if w > p_max and h > p_max: - w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int)) - h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int)) - w1.append(w-p_size) - h1.append(h-p_size) -# print(w1) -# print(h1) - for i in w1: - for j in h1: - patches.append(img[i:i+p_size, j:j+p_size,:]) - else: - patches.append(img) - - return patches - - -def imssave(imgs, img_path): - """ - imgs: list, N images of size WxHxC - """ - img_name, ext = os.path.splitext(os.path.basename(img_path)) - - for i, img in enumerate(imgs): - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png') - cv2.imwrite(new_path, img) - - -def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000): - """ - split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size), - and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max) - will be splitted. - Args: - original_dataroot: - taget_dataroot: - p_size: size of small images - p_overlap: patch size in training is a good choice - p_max: images with smaller size than (p_max)x(p_max) keep unchanged. - """ - paths = get_image_paths(original_dataroot) - for img_path in paths: - # img_name, ext = os.path.splitext(os.path.basename(img_path)) - img = imread_uint(img_path, n_channels=n_channels) - patches = patches_from_image(img, p_size, p_overlap, p_max) - imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path))) - #if original_dataroot == taget_dataroot: - #del img_path - -''' -# -------------------------------------------- -# makedir -# -------------------------------------------- -''' - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + '_archived_' + get_timestamp() - print('Path already exists. 
Rename it to [{:s}]'.format(new_name)) - os.rename(path, new_name) - os.makedirs(path) - - -''' -# -------------------------------------------- -# read image from path -# opencv is fast, but read BGR numpy image -# -------------------------------------------- -''' - - -# -------------------------------------------- -# get uint8 image of size HxWxn_channles (RGB) -# -------------------------------------------- -def imread_uint(path, n_channels=3): - # input: path - # output: HxWx3(RGB or GGG), or HxWx1 (G) - if n_channels == 1: - img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE - img = np.expand_dims(img, axis=2) # HxWx1 - elif n_channels == 3: - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG - else: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB - return img - - -# -------------------------------------------- -# matlab's imwrite -# -------------------------------------------- -def imsave(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - -def imwrite(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - - - -# -------------------------------------------- -# get single image of size HxWxn_channles (BGR) -# -------------------------------------------- -def read_img(path): - # read image by cv2 - # return: Numpy float32, HWC, BGR, [0,1] - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE - img = img.astype(np.float32) / 255. - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) - - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) 
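# Illustrative sketch (added here, not part of the deleted file): a minimal
# round-trip usage example for the uint <-> tensor helpers defined in this
# module. It assumes the package layout shown in the diff path
# (ldm/modules/image_degradation/utils_image.py) and uses a dummy HxWx3
# uint8 image; variable names are hypothetical.
import numpy as np
from ldm.modules.image_degradation.utils_image import uint2tensor4, tensor2uint

demo_img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # HxWx3, uint8 in [0, 255]
demo_tensor = uint2tensor4(demo_img)      # 1x3x64x64 float tensor scaled to [0, 1]
demo_restored = tensor2uint(demo_tensor)  # back to HxWx3 uint8
assert demo_restored.shape == demo_img.shape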
- - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. 
-# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r, :] - else: - raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim)) - return img - - -def shave(img_in, border=0): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - h, w = img.shape[:2] - img = img[border:h-border, border:w-border] - return img - - -''' -# -------------------------------------------- -# image processing process on numpy image -# channel_convert(in_c, tar_type, img_list): -# rgb2ycbcr(img, only_y=True): -# bgr2ycbcr(img, only_y=True): -# 
ycbcr2rgb(img): -# -------------------------------------------- -''' - - -def rgb2ycbcr(img, only_y=True): - '''same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - '''same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - '''bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = 
img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. 
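    -    # P = ceil(kernel_width) + 2 deliberately over-allocates taps, so an edge column of `weights`
    -    # may contain only zeros. weights_zero_tmp[i] counts the zero entries in column i; when the
    -    # first or last column has zeros, narrow() drops that column from both `weights` and `indices`.
    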
- weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def 
imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/multimodalart/upload_your_model/README.md b/spaces/multimodalart/upload_your_model/README.md deleted file mode 100644 index 8fc9d0192d7d9df44406835af3170ae442307082..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/upload_your_model/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Upload To Hub Multiple At Once -emoji: 👁 -colorFrom: indigo -colorTo: gray -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/losses/fid/__init__.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/losses/fid/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/default.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/default.py deleted file mode 100644 index 86c7f0fab42924bfc93a031e851117634c70f593..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/default.py +++ /dev/null @@ -1,175 +0,0 @@ -import logging - -import torch -import torch.nn.functional as F -from omegaconf import OmegaConf - -from saicinpainting.training.data.datasets import make_constant_area_crop_params -from saicinpainting.training.losses.distance_weighting import make_mask_distance_weighter -from saicinpainting.training.losses.feature_matching import feature_matching_loss, masked_l1_loss -from saicinpainting.training.modules.fake_fakes import FakeFakesGenerator -from saicinpainting.training.trainers.base import BaseInpaintingTrainingModule, make_multiscale_noise -from saicinpainting.utils import add_prefix_to_keys, get_ramp - -LOGGER = logging.getLogger(__name__) - - -def make_constant_area_crop_batch(batch, **kwargs): - crop_y, crop_x, crop_height, crop_width = make_constant_area_crop_params(img_height=batch['image'].shape[2], - img_width=batch['image'].shape[3], - **kwargs) - batch['image'] = batch['image'][:, :, crop_y : crop_y + crop_height, crop_x : crop_x + crop_width] - batch['mask'] = batch['mask'][:, :, crop_y: crop_y + crop_height, crop_x: crop_x + crop_width] - return batch - - -class DefaultInpaintingTrainingModule(BaseInpaintingTrainingModule): - def __init__(self, *args, concat_mask=True, rescale_scheduler_kwargs=None, image_to_discriminator='predicted_image', - add_noise_kwargs=None, noise_fill_hole=False, const_area_crop_kwargs=None, - distance_weighter_kwargs=None, distance_weighted_mask_for_discr=False, - fake_fakes_proba=0, fake_fakes_generator_kwargs=None, - **kwargs): - super().__init__(*args, **kwargs) - self.concat_mask = concat_mask - self.rescale_size_getter = get_ramp(**rescale_scheduler_kwargs) if rescale_scheduler_kwargs is not None else None - self.image_to_discriminator = image_to_discriminator - self.add_noise_kwargs = add_noise_kwargs - self.noise_fill_hole = noise_fill_hole - self.const_area_crop_kwargs = const_area_crop_kwargs - self.refine_mask_for_losses = make_mask_distance_weighter(**distance_weighter_kwargs) \ - if distance_weighter_kwargs is not None else None - self.distance_weighted_mask_for_discr = distance_weighted_mask_for_discr - - self.fake_fakes_proba = fake_fakes_proba - if self.fake_fakes_proba > 1e-3: - self.fake_fakes_gen = FakeFakesGenerator(**(fake_fakes_generator_kwargs or {})) - - def forward(self, batch): - if self.training and self.rescale_size_getter is not None: - cur_size = self.rescale_size_getter(self.global_step) - batch['image'] = F.interpolate(batch['image'], size=cur_size, mode='bilinear', align_corners=False) - batch['mask'] = F.interpolate(batch['mask'], size=cur_size, mode='nearest') - - if self.training and self.const_area_crop_kwargs is not None: - batch = make_constant_area_crop_batch(batch, **self.const_area_crop_kwargs) - - img = batch['image'] - mask = batch['mask'] - - masked_img = img * 
(1 - mask) - - if self.add_noise_kwargs is not None: - noise = make_multiscale_noise(masked_img, **self.add_noise_kwargs) - if self.noise_fill_hole: - masked_img = masked_img + mask * noise[:, :masked_img.shape[1]] - masked_img = torch.cat([masked_img, noise], dim=1) - - if self.concat_mask: - masked_img = torch.cat([masked_img, mask], dim=1) - - batch['predicted_image'] = self.generator(masked_img) - batch['inpainted'] = mask * batch['predicted_image'] + (1 - mask) * batch['image'] - - if self.fake_fakes_proba > 1e-3: - if self.training and torch.rand(1).item() < self.fake_fakes_proba: - batch['fake_fakes'], batch['fake_fakes_masks'] = self.fake_fakes_gen(img, mask) - batch['use_fake_fakes'] = True - else: - batch['fake_fakes'] = torch.zeros_like(img) - batch['fake_fakes_masks'] = torch.zeros_like(mask) - batch['use_fake_fakes'] = False - - batch['mask_for_losses'] = self.refine_mask_for_losses(img, batch['predicted_image'], mask) \ - if self.refine_mask_for_losses is not None and self.training \ - else mask - - return batch - - def generator_loss(self, batch): - img = batch['image'] - predicted_img = batch[self.image_to_discriminator] - original_mask = batch['mask'] - supervised_mask = batch['mask_for_losses'] - - # L1 - l1_value = masked_l1_loss(predicted_img, img, supervised_mask, - self.config.losses.l1.weight_known, - self.config.losses.l1.weight_missing) - - total_loss = l1_value - metrics = dict(gen_l1=l1_value) - - # vgg-based perceptual loss - if self.config.losses.perceptual.weight > 0: - pl_value = self.loss_pl(predicted_img, img, mask=supervised_mask).sum() * self.config.losses.perceptual.weight - total_loss = total_loss + pl_value - metrics['gen_pl'] = pl_value - - # discriminator - # adversarial_loss calls backward by itself - mask_for_discr = supervised_mask if self.distance_weighted_mask_for_discr else original_mask - self.adversarial_loss.pre_generator_step(real_batch=img, fake_batch=predicted_img, - generator=self.generator, discriminator=self.discriminator) - discr_real_pred, discr_real_features = self.discriminator(img) - discr_fake_pred, discr_fake_features = self.discriminator(predicted_img) - adv_gen_loss, adv_metrics = self.adversarial_loss.generator_loss(real_batch=img, - fake_batch=predicted_img, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_pred, - mask=mask_for_discr) - total_loss = total_loss + adv_gen_loss - metrics['gen_adv'] = adv_gen_loss - metrics.update(add_prefix_to_keys(adv_metrics, 'adv_')) - - # feature matching - if self.config.losses.feature_matching.weight > 0: - need_mask_in_fm = OmegaConf.to_container(self.config.losses.feature_matching).get('pass_mask', False) - mask_for_fm = supervised_mask if need_mask_in_fm else None - fm_value = feature_matching_loss(discr_fake_features, discr_real_features, - mask=mask_for_fm) * self.config.losses.feature_matching.weight - total_loss = total_loss + fm_value - metrics['gen_fm'] = fm_value - - if self.loss_resnet_pl is not None: - resnet_pl_value = self.loss_resnet_pl(predicted_img, img) - total_loss = total_loss + resnet_pl_value - metrics['gen_resnet_pl'] = resnet_pl_value - - return total_loss, metrics - - def discriminator_loss(self, batch): - total_loss = 0 - metrics = {} - - predicted_img = batch[self.image_to_discriminator].detach() - self.adversarial_loss.pre_discriminator_step(real_batch=batch['image'], fake_batch=predicted_img, - generator=self.generator, discriminator=self.discriminator) - discr_real_pred, discr_real_features = self.discriminator(batch['image']) - 
discr_fake_pred, discr_fake_features = self.discriminator(predicted_img) - adv_discr_loss, adv_metrics = self.adversarial_loss.discriminator_loss(real_batch=batch['image'], - fake_batch=predicted_img, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_pred, - mask=batch['mask']) - total_loss = total_loss + adv_discr_loss - metrics['discr_adv'] = adv_discr_loss - metrics.update(add_prefix_to_keys(adv_metrics, 'adv_')) - - - if batch.get('use_fake_fakes', False): - fake_fakes = batch['fake_fakes'] - self.adversarial_loss.pre_discriminator_step(real_batch=batch['image'], fake_batch=fake_fakes, - generator=self.generator, discriminator=self.discriminator) - discr_fake_fakes_pred, _ = self.discriminator(fake_fakes) - fake_fakes_adv_discr_loss, fake_fakes_adv_metrics = self.adversarial_loss.discriminator_loss( - real_batch=batch['image'], - fake_batch=fake_fakes, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_fakes_pred, - mask=batch['mask'] - ) - total_loss = total_loss + fake_fakes_adv_discr_loss - metrics['discr_adv_fake_fakes'] = fake_fakes_adv_discr_loss - metrics.update(add_prefix_to_keys(fake_fakes_adv_metrics, 'adv_')) - - return total_loss, metrics diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/model/HGFilters.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/model/HGFilters.py deleted file mode 100644 index 870b3c43c82d66df001eb1bc24af9ce21ec60c83..0000000000000000000000000000000000000000 --- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/model/HGFilters.py +++ /dev/null @@ -1,146 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from ..net_util import * - - -class HourGlass(nn.Module): - def __init__(self, num_modules, depth, num_features, norm='batch'): - super(HourGlass, self).__init__() - self.num_modules = num_modules - self.depth = depth - self.features = num_features - self.norm = norm - - self._generate_network(self.depth) - - def _generate_network(self, level): - self.add_module('b1_' + str(level), ConvBlock(self.features, self.features, norm=self.norm)) - - self.add_module('b2_' + str(level), ConvBlock(self.features, self.features, norm=self.norm)) - - if level > 1: - self._generate_network(level - 1) - else: - self.add_module('b2_plus_' + str(level), ConvBlock(self.features, self.features, norm=self.norm)) - - self.add_module('b3_' + str(level), ConvBlock(self.features, self.features, norm=self.norm)) - - def _forward(self, level, inp): - # Upper branch - up1 = inp - up1 = self._modules['b1_' + str(level)](up1) - - # Lower branch - low1 = F.avg_pool2d(inp, 2, stride=2) - low1 = self._modules['b2_' + str(level)](low1) - - if level > 1: - low2 = self._forward(level - 1, low1) - else: - low2 = low1 - low2 = self._modules['b2_plus_' + str(level)](low2) - - low3 = low2 - low3 = self._modules['b3_' + str(level)](low3) - - # NOTE: for newer PyTorch (1.3~), it seems that training results are degraded due to implementation diff in F.grid_sample - # if the pretrained model behaves weirdly, switch with the commented line. - # NOTE: I also found that "bicubic" works better. 
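    -        # low3 sits at half the spatial resolution of up1 (inp was avg-pooled with stride 2),
    -        # so it is upsampled by a factor of 2 here before the residual sum `up1 + up2` below.
    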
- up2 = F.interpolate(low3, scale_factor=2, mode='bicubic', align_corners=True) - # up2 = F.interpolate(low3, scale_factor=2, mode='nearest) - - return up1 + up2 - - def forward(self, x): - return self._forward(self.depth, x) - - -class HGFilter(nn.Module): - def __init__(self, opt): - super(HGFilter, self).__init__() - self.num_modules = opt.num_stack - - self.opt = opt - - # Base part - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) - - if self.opt.norm == 'batch': - self.bn1 = nn.BatchNorm2d(64) - elif self.opt.norm == 'group': - self.bn1 = nn.GroupNorm(32, 64) - - if self.opt.hg_down == 'conv64': - self.conv2 = ConvBlock(64, 64, self.opt.norm) - self.down_conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1) - elif self.opt.hg_down == 'conv128': - self.conv2 = ConvBlock(64, 128, self.opt.norm) - self.down_conv2 = nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1) - elif self.opt.hg_down == 'ave_pool': - self.conv2 = ConvBlock(64, 128, self.opt.norm) - else: - raise NameError('Unknown Fan Filter setting!') - - self.conv3 = ConvBlock(128, 128, self.opt.norm) - self.conv4 = ConvBlock(128, 256, self.opt.norm) - - # Stacking part - for hg_module in range(self.num_modules): - self.add_module('m' + str(hg_module), HourGlass(1, opt.num_hourglass, 256, self.opt.norm)) - - self.add_module('top_m_' + str(hg_module), ConvBlock(256, 256, self.opt.norm)) - self.add_module('conv_last' + str(hg_module), - nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - if self.opt.norm == 'batch': - self.add_module('bn_end' + str(hg_module), nn.BatchNorm2d(256)) - elif self.opt.norm == 'group': - self.add_module('bn_end' + str(hg_module), nn.GroupNorm(32, 256)) - - self.add_module('l' + str(hg_module), nn.Conv2d(256, - opt.hourglass_dim, kernel_size=1, stride=1, padding=0)) - - if hg_module < self.num_modules - 1: - self.add_module( - 'bl' + str(hg_module), nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - self.add_module('al' + str(hg_module), nn.Conv2d(opt.hourglass_dim, - 256, kernel_size=1, stride=1, padding=0)) - - def forward(self, x): - x = F.relu(self.bn1(self.conv1(x)), True) - tmpx = x - if self.opt.hg_down == 'ave_pool': - x = F.avg_pool2d(self.conv2(x), 2, stride=2) - elif self.opt.hg_down in ['conv64', 'conv128']: - x = self.conv2(x) - x = self.down_conv2(x) - else: - raise NameError('Unknown Fan Filter setting!') - - normx = x - - x = self.conv3(x) - x = self.conv4(x) - - previous = x - - outputs = [] - for i in range(self.num_modules): - hg = self._modules['m' + str(i)](previous) - - ll = hg - ll = self._modules['top_m_' + str(i)](ll) - - ll = F.relu(self._modules['bn_end' + str(i)] - (self._modules['conv_last' + str(i)](ll)), True) - - # Predict heatmaps - tmp_out = self._modules['l' + str(i)](ll) - outputs.append(tmp_out) - - if i < self.num_modules - 1: - ll = self._modules['bl' + str(i)](ll) - tmp_out_ = self._modules['al' + str(i)](tmp_out) - previous = previous + ll + tmp_out_ - - return outputs, tmpx.detach(), normx diff --git a/spaces/nateraw/deepafx-st/deepafx_st/processors/dsp/compressor.py b/spaces/nateraw/deepafx-st/deepafx_st/processors/dsp/compressor.py deleted file mode 100644 index ab515f9ec9a36f43de4a08f3069119b9d73ff1ed..0000000000000000000000000000000000000000 --- a/spaces/nateraw/deepafx-st/deepafx_st/processors/dsp/compressor.py +++ /dev/null @@ -1,177 +0,0 @@ -import sys -import torch -import numpy as np -import scipy.signal -from numba import jit - -from deepafx_st.processors.processor import Processor - - -# Adapted from: 
https://github.com/drscotthawley/signaltrain/blob/master/signaltrain/audio.py -@jit(nopython=True) -def my_clip_min( - x: np.ndarray, - clip_min: float, -): # does the work of np.clip(), which numba doesn't support yet - # TODO: keep an eye on Numba PR https://github.com/numba/numba/pull/3468 that fixes this - inds = np.where(x < clip_min) - x[inds] = clip_min - return x - - -@jit(nopython=True) -def compressor( - x: np.ndarray, - sample_rate: float, - threshold: float = -24.0, - ratio: float = 2.0, - attack_time: float = 0.01, - release_time: float = 0.01, - knee_dB: float = 0.0, - makeup_gain_dB: float = 0.0, - dtype=np.float32, -): - """ - - Args: - x (np.ndarray): Input signal. - sample_rate (float): Sample rate in Hz. - threshold (float): Threhold in dB. - ratio (float): Ratio (should be >=1 , i.e. ratio:1). - attack_time (float): Attack time in seconds. - release_time (float): Release time in seconds. - knee_dB (float): Knee. - makeup_gain_dB (float): Makeup Gain. - dtype (type): Output type. Default: np.float32 - - Returns: - y (np.ndarray): Output signal. - - """ - # print(f"dsp comp fs = {sample_rate}") - - N = len(x) - dtype = x.dtype - y = np.zeros(N, dtype=dtype) - - # Initialize separate attack and release times - # Where do these numbers come from - alpha_A = np.exp(-np.log(9) / (sample_rate * attack_time)) - alpha_R = np.exp(-np.log(9) / (sample_rate * release_time)) - - # Turn the input signal into a uni-polar signal on the dB scale - x_G = 20 * np.log10(np.abs(x) + 1e-8) # x_uni casts type - - # Ensure there are no values of negative infinity - x_G = my_clip_min(x_G, -96) - - # Static characteristics with knee - y_G = np.zeros(N, dtype=dtype) - - # Below knee - idx = np.where((2 * (x_G - threshold)) < -knee_dB) - y_G[idx] = x_G[idx] - - # At knee - idx = np.where((2 * np.abs(x_G - threshold)) <= knee_dB) - y_G[idx] = x_G[idx] + ( - (1 / ratio) * (((x_G[idx] - threshold + knee_dB) / 2) ** 2) - ) / (2 * knee_dB) - - # Above knee threshold - idx = np.where((2 * (x_G - threshold)) > knee_dB) - y_G[idx] = threshold + ((x_G[idx] - threshold) / ratio) - - x_L = x_G - y_G - - # this loop is slow but not vectorizable due to its cumulative, sequential nature. @autojit makes it fast(er). - y_L = np.zeros(N, dtype=dtype) - for n in range(1, N): - # smooth over the gainChange - if x_L[n] > y_L[n - 1]: # attack mode - y_L[n] = (alpha_A * y_L[n - 1]) + ((1 - alpha_A) * x_L[n]) - else: # release - y_L[n] = (alpha_R * y_L[n - 1]) + ((1 - alpha_R) * x_L[n]) - - # Convert to linear amplitude scalar; i.e. 
map from dB to amplitude - lin_y_L = np.power(10.0, (-y_L / 20.0)) - y = lin_y_L * x # Apply linear amplitude to input sample - - y *= np.power(10.0, makeup_gain_dB / 20.0) # apply makeup gain - - return y.astype(dtype) - - -class Compressor(Processor): - def __init__( - self, - sample_rate, - max_threshold=0.0, - min_threshold=-80, - max_ratio=20.0, - min_ratio=1.0, - max_attack=0.1, - min_attack=0.0001, - max_release=1.0, - min_release=0.005, - max_knee=12.0, - min_knee=0.0, - max_mkgain=48.0, - min_mkgain=-48.0, - eps=1e-8, - ): - """ """ - super().__init__() - self.sample_rate = sample_rate - self.eps = eps - self.ports = [ - { - "name": "Threshold", - "min": min_threshold, - "max": max_threshold, - "default": -12.0, - "units": "", - }, - { - "name": "Ratio", - "min": min_ratio, - "max": max_ratio, - "default": 2.0, - "units": "", - }, - { - "name": "Attack Time", - "min": min_attack, - "max": max_attack, - "default": 0.001, - "units": "s", - }, - { - "name": "Release Time", - "min": min_release, - "max": max_release, - "default": 0.045, - "units": "s", - }, - { - "name": "Knee", - "min": min_knee, - "max": max_knee, - "default": 6.0, - "units": "dB", - }, - { - "name": "Makeup Gain", - "min": min_mkgain, - "max": max_mkgain, - "default": 0.0, - "units": "dB", - }, - ] - - self.num_control_params = len(self.ports) - self.process_fn = compressor - - def forward(self, x, p, sample_rate=24000, **kwargs): - "All processing in the forward is in numpy." - return self.run_series(x, p, sample_rate) diff --git a/spaces/nateraw/yolov6/yolov6/core/evaler.py b/spaces/nateraw/yolov6/yolov6/core/evaler.py deleted file mode 100644 index 569e4e3b037224c77e3a77da561adfd6bfb98d4e..0000000000000000000000000000000000000000 --- a/spaces/nateraw/yolov6/yolov6/core/evaler.py +++ /dev/null @@ -1,256 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -import os -from tqdm import tqdm -import numpy as np -import json -import torch -import yaml -from pathlib import Path - -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval - -from yolov6.data.data_load import create_dataloader -from yolov6.utils.events import LOGGER, NCOLS -from yolov6.utils.nms import non_max_suppression -from yolov6.utils.checkpoint import load_checkpoint -from yolov6.utils.torch_utils import time_sync, get_model_info - -''' -python tools/eval.py --task 'train'/'val'/'speed' -''' - - -class Evaler: - def __init__(self, - data, - batch_size=32, - img_size=640, - conf_thres=0.001, - iou_thres=0.65, - device='', - half=True, - save_dir=''): - self.data = data - self.batch_size = batch_size - self.img_size = img_size - self.conf_thres = conf_thres - self.iou_thres = iou_thres - self.device = device - self.half = half - self.save_dir = save_dir - - def init_model(self, model, weights, task): - if task != 'train': - model = load_checkpoint(weights, map_location=self.device) - self.stride = int(model.stride.max()) - if self.device.type != 'cpu': - model(torch.zeros(1, 3, self.img_size, self.img_size).to(self.device).type_as(next(model.parameters()))) - # switch to deploy - from yolov6.layers.common import RepVGGBlock - for layer in model.modules(): - if isinstance(layer, RepVGGBlock): - layer.switch_to_deploy() - LOGGER.info("Switch model to deploy modality.") - LOGGER.info("Model Summary: {}".format(get_model_info(model, self.img_size))) - model.half() if self.half else model.float() - return model - - def init_data(self, dataloader, task): - '''Initialize dataloader. - Returns a dataloader for task val or speed. 
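    -        For task 'train', the dataloader passed in is returned unchanged.
    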
- ''' - self.is_coco = self.data.get("is_coco", False) - self.ids = self.coco80_to_coco91_class() if self.is_coco else list(range(1000)) - if task != 'train': - pad = 0.0 if task == 'speed' else 0.5 - dataloader = create_dataloader(self.data[task if task in ('train', 'val', 'test') else 'val'], - self.img_size, self.batch_size, self.stride, check_labels=True, pad=pad, rect=True, - data_dict=self.data, task=task)[0] - return dataloader - - def predict_model(self, model, dataloader, task): - '''Model prediction - Predicts the whole dataset and gets the prediced results and inference time. - ''' - self.speed_result = torch.zeros(4, device=self.device) - pred_results = [] - pbar = tqdm(dataloader, desc="Inferencing model in val datasets.", ncols=NCOLS) - for imgs, targets, paths, shapes in pbar: - # pre-process - t1 = time_sync() - imgs = imgs.to(self.device, non_blocking=True) - imgs = imgs.half() if self.half else imgs.float() - imgs /= 255 - self.speed_result[1] += time_sync() - t1 # pre-process time - - # Inference - t2 = time_sync() - outputs = model(imgs) - self.speed_result[2] += time_sync() - t2 # inference time - - # post-process - t3 = time_sync() - outputs = non_max_suppression(outputs, self.conf_thres, self.iou_thres, multi_label=True) - self.speed_result[3] += time_sync() - t3 # post-process time - self.speed_result[0] += len(outputs) - - # save result - pred_results.extend(self.convert_to_coco_format(outputs, imgs, paths, shapes, self.ids)) - return pred_results - - def eval_model(self, pred_results, model, dataloader, task): - '''Evaluate models - For task speed, this function only evaluates the speed of model and outputs inference time. - For task val, this function evaluates the speed and mAP by pycocotools, and returns - inference time and mAP value. 
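    -        Returns (0.0, 0.0) when the task is 'speed' or there are no prediction results.
    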
- ''' - LOGGER.info(f'\nEvaluating speed.') - self.eval_speed(task) - - LOGGER.info(f'\nEvaluating mAP by pycocotools.') - if task != 'speed' and len(pred_results): - if 'anno_path' in self.data: - anno_json = self.data['anno_path'] - else: - # generated coco format labels in dataset initialization - dataset_root = os.path.dirname(os.path.dirname(self.data['val'])) - base_name = os.path.basename(self.data['val']) - anno_json = os.path.join(dataset_root, 'annotations', f'instances_{base_name}.json') - pred_json = os.path.join(self.save_dir, "predictions.json") - LOGGER.info(f'Saving {pred_json}...') - with open(pred_json, 'w') as f: - json.dump(pred_results, f) - - anno = COCO(anno_json) - pred = anno.loadRes(pred_json) - cocoEval = COCOeval(anno, pred, 'bbox') - if self.is_coco: - imgIds = [int(os.path.basename(x).split(".")[0]) - for x in dataloader.dataset.img_paths] - cocoEval.params.imgIds = imgIds - cocoEval.evaluate() - cocoEval.accumulate() - cocoEval.summarize() - map, map50 = cocoEval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5) - # Return results - model.float() # for training - if task != 'train': - LOGGER.info(f"Results saved to {self.save_dir}") - return (map50, map) - return (0.0, 0.0) - - def eval_speed(self, task): - '''Evaluate model inference speed.''' - if task != 'train': - n_samples = self.speed_result[0].item() - pre_time, inf_time, nms_time = 1000 * self.speed_result[1:].cpu().numpy() / n_samples - for n, v in zip(["pre-process", "inference", "NMS"],[pre_time, inf_time, nms_time]): - LOGGER.info("Average {} time: {:.2f} ms".format(n, v)) - - def box_convert(self, x): - # Convert boxes with shape [n, 4] from [x1, y1, x2, y2] to [x, y, w, h] where x1y1=top-left, x2y2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - def scale_coords(self, img1_shape, coords, img0_shape, ratio_pad=None): - # Rescale coords (xyxy) from img1_shape to img0_shape - if ratio_pad is None: # calculate from img0_shape - gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new - pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding - else: - gain = ratio_pad[0][0] - pad = ratio_pad[1] - - coords[:, [0, 2]] -= pad[0] # x padding - coords[:, [1, 3]] -= pad[1] # y padding - coords[:, :4] /= gain - if isinstance(coords, torch.Tensor): # faster individually - coords[:, 0].clamp_(0, img0_shape[1]) # x1 - coords[:, 1].clamp_(0, img0_shape[0]) # y1 - coords[:, 2].clamp_(0, img0_shape[1]) # x2 - coords[:, 3].clamp_(0, img0_shape[0]) # y2 - else: # np.array (faster grouped) - coords[:, [0, 2]] = coords[:, [0, 2]].clip(0, img0_shape[1]) # x1, x2 - coords[:, [1, 3]] = coords[:, [1, 3]].clip(0, img0_shape[0]) # y1, y2 - return coords - - def convert_to_coco_format(self, outputs, imgs, paths, shapes, ids): - pred_results = [] - for i, pred in enumerate(outputs): - if len(pred) == 0: - continue - path, shape = Path(paths[i]), shapes[i][0] - self.scale_coords(imgs[i].shape[1:], pred[:, :4], shape, shapes[i][1]) - image_id = int(path.stem) if path.stem.isnumeric() else path.stem - bboxes = self.box_convert(pred[:, 0:4]) - bboxes[:, :2] -= bboxes[:, 2:] / 2 - cls = pred[:, 5] - scores = pred[:, 4] - for ind in range(pred.shape[0]): - category_id = ids[int(cls[ind])] - bbox = [round(x, 3) for x in 
bboxes[ind].tolist()] - score = round(scores[ind].item(), 5) - pred_data = { - "image_id": image_id, - "category_id": category_id, - "bbox": bbox, - "score": score - } - pred_results.append(pred_data) - return pred_results - - @staticmethod - def check_task(task): - if task not in ['train','val','speed']: - raise Exception("task argument error: only support 'train' / 'val' / 'speed' task.") - - @staticmethod - def reload_thres(conf_thres, iou_thres, task): - '''Sets conf and iou threshold for task val/speed''' - if task != 'train': - if task == 'val': - conf_thres = 0.001 - if task == 'speed': - conf_thres = 0.25 - iou_thres = 0.45 - return conf_thres, iou_thres - - @staticmethod - def reload_device(device, model, task): - # device = 'cpu' or '0' or '0,1,2,3' - if task == 'train': - device = next(model.parameters()).device - else: - if device == 'cpu': - os.environ['CUDA_VISIBLE_DEVICES'] = '-1' - elif device: - os.environ['CUDA_VISIBLE_DEVICES'] = device - assert torch.cuda.is_available() - cuda = device != 'cpu' and torch.cuda.is_available() - device = torch.device('cuda:0' if cuda else 'cpu') - return device - - @staticmethod - def reload_dataset(data): - with open(data, errors='ignore') as yaml_file: - data = yaml.safe_load(yaml_file) - val = data.get('val') - if not os.path.exists(val): - raise Exception('Dataset not found.') - return data - - @staticmethod - def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) - # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ - x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, - 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, - 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, - 59, 60, 61, 62, 63, 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, - 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] - return x diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sniper Elite V2 - AviaRa - Fitgirl Repack.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sniper Elite V2 - AviaRa - Fitgirl Repack.md deleted file mode 100644 index 5c2d2bbdbf6b2d5c302133fb4aff86088d818ecd..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sniper Elite V2 - AviaRa - Fitgirl Repack.md +++ /dev/null @@ -1,25 +0,0 @@ -
-

How to Download and Install Sniper Elite V2 -=AviaRa=- Fitgirl Repack

-

If you are looking for a thrilling and realistic sniper game, you might want to check out Sniper Elite V2 -=AviaRa=- Fitgirl Repack. This is a repack of the original Sniper Elite V2 game, which was released in 2012 and received positive reviews from critics and players alike. Sniper Elite V2 is a third-person shooter that puts you in the role of Karl Fairburne, an elite sniper who is sent to Berlin during the final days of World War II. Your mission is to prevent the Nazi V2 rocket technology from falling into the hands of the Soviet army, by assassinating key scientists and officers, sabotaging facilities, and infiltrating enemy lines.

-

Sniper Elite V2 -=AviaRa=- Fitgirl Repack is a compressed version of the game that includes all the DLCs and updates, as well as improved graphics and performance. The repack size is only 2.9 GB, compared to the original 7 GB, which means you can download and install it faster and easier. The repack also features a multilingual interface and subtitles, so you can enjoy the game in your preferred language. In this article, we will show you how to download and install Sniper Elite V2 -=AviaRa=- Fitgirl Repack on your PC.

-

Sniper Elite V2 - AviaRa - Fitgirl Repack


    DOWNLOAD: https://urlcod.com/2uIbWX
    



-

Step 1: Download Sniper Elite V2 -=AviaRa=- Fitgirl Repack

-

The first step is to download Sniper Elite V2 -=AviaRa=- Fitgirl Repack from a reliable source. You can use a torrent client or a direct link to get the repack file. Here are some of the options you can choose from:

-
    -
  • Reddit: This is a post on r/CrackWatch that provides a magnet link and a torrent file for the repack. You can also find comments from other users who have downloaded and installed the repack successfully.
  • -
  • Fitgirl Repacks Site: This is the official website of Fitgirl, the creator of the repack. You can find detailed information about the repack features, installation instructions, screenshots, and download mirrors. You can use 1337x, KAT, RuTor, Tapochek.net, MultiUpload, Upera, or Google Drive to get the repack file.
  • -
  • YouTube: This is a video tutorial that shows you how to download and install Sniper Elite V2 -=AviaRa=- Fitgirl Repack using Utorrent. You can also watch gameplay footage and see how the game runs on different settings.
  • -
-

Once you have downloaded the repack file, you need to extract it using WinRAR or 7-Zip. You will get a folder named "Sniper.Elite.V2.Complete-PLAZA" that contains all the game files.
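
    Before extracting, it is worth checking that the download is complete and untampered. As a minimal sketch (the archive name and the reference hash below are placeholders, not values published by any of the sources above), you can compare the file's SHA-256 against the hash listed on the release page:

    ```python
    # Sketch: compare a downloaded archive against a published SHA-256 hash.
    # "repack.rar" and EXPECTED_SHA256 are hypothetical placeholders.
    import hashlib

    EXPECTED_SHA256 = "paste-the-hash-from-the-release-page-here"

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    actual = sha256_of("repack.rar")
    print("OK" if actual == EXPECTED_SHA256 else f"Hash mismatch: {actual}")
    ```

    If the hashes do not match, re-download the archive before running anything from it.
    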

-

Step 2: Install Sniper Elite V2 -=AviaRa=- Fitgirl Repack

-

The next step is to install Sniper Elite V2 -=AviaRa=- Fitgirl Repack on your PC. To do this, you need to follow these steps:

-
    -
  1. Open the folder "Sniper.Elite.V2.Complete-PLAZA" and run "setup.exe" as administrator.
  2. -
  3. Select your language and destination folder for the game installation.
  4. -
  5. Wait for the installation process to finish. It may take 7-15 minutes depending on your system.
  6. -
  7. After the installation is done, run "Language Selector.exe" in the game root folder to change the game language if needed.
  8. -
  9. Run "SniperEliteV2.exe" in the game root folder to launch the game.
  10. -
-

Congratulations!

    
-
-
\ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tools/README.md b/spaces/nikitaPDL2023/assignment4/detectron2/tools/README.md deleted file mode 100644 index 0b40d5319c0838fdaa22bc6a10ef0d88bc6578ed..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tools/README.md +++ /dev/null @@ -1,49 +0,0 @@ - -This directory contains a few example scripts that demonstrate features of detectron2. - - -* `train_net.py` - -An example training script that's made to train builtin models of detectron2. - -For usage, see [GETTING_STARTED.md](../GETTING_STARTED.md). - -* `plain_train_net.py` - -Similar to `train_net.py`, but implements a training loop instead of using `Trainer`. -This script includes fewer features but it may be more friendly to hackers. - -* `benchmark.py` - -Benchmark the training speed, inference speed or data loading speed of a given config. - -Usage: -``` -python benchmark.py --config-file config.yaml --task train/eval/data [optional DDP flags] -``` - -* `analyze_model.py` - -Analyze FLOPs, parameters, activations of a detectron2 model. See its `--help` for usage. - -* `visualize_json_results.py` - -Visualize the json instance detection/segmentation results dumped by `COCOEvalutor` or `LVISEvaluator` - -Usage: -``` -python visualize_json_results.py --input x.json --output dir/ --dataset coco_2017_val -``` -If not using a builtin dataset, you'll need your own script or modify this script. - -* `visualize_data.py` - -Visualize ground truth raw annotations or training data (after preprocessing/augmentations). - -Usage: -``` -python visualize_data.py --config-file config.yaml --source annotation/dataloader --output-dir dir/ [--show] -``` - -NOTE: the script does not stop by itself when using `--source dataloader` because a training -dataloader is usually infinite. diff --git a/spaces/nikitalokhmachev-ai/corner-detection/README.md b/spaces/nikitalokhmachev-ai/corner-detection/README.md deleted file mode 100644 index 8d32fe99069cce974697d5d1a7e8e3ef03b8876a..0000000000000000000000000000000000000000 --- a/spaces/nikitalokhmachev-ai/corner-detection/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Corner Detection -emoji: 🐌 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nomic-ai/samsum/index.html b/spaces/nomic-ai/samsum/index.html deleted file mode 100644 index 83469e3a65a44bfb007e636984d69d6e77bc6bde..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/samsum/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - samsum - - - - -
- -
- - - \ No newline at end of file diff --git a/spaces/nota-ai/compressed-wav2lip/Dockerfile b/spaces/nota-ai/compressed-wav2lip/Dockerfile deleted file mode 100644 index 3cc3c5eaa469b73ebe3b80a4647aefe29aa74831..0000000000000000000000000000000000000000 --- a/spaces/nota-ai/compressed-wav2lip/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM nvcr.io/nvidia/pytorch:22.03-py3 - -ARG DEBIAN_FRONTEND=noninteractive -RUN apt-get update -RUN apt-get install ffmpeg libsm6 libxext6 tmux git -y - -WORKDIR /workspace diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/tacotron.py b/spaces/ntt123/WaveGRU-Text-To-Speech/tacotron.py deleted file mode 100644 index 2671bcf9a219ad2007200278347df43e0cddde17..0000000000000000000000000000000000000000 --- a/spaces/ntt123/WaveGRU-Text-To-Speech/tacotron.py +++ /dev/null @@ -1,451 +0,0 @@ -""" -Tacotron + stepwise monotonic attention -""" - -import jax -import jax.numpy as jnp -import pax - - -def conv_block(in_ft, out_ft, kernel_size, activation_fn, use_dropout): - """ - Conv >> LayerNorm >> activation >> Dropout - """ - f = pax.Sequential( - pax.Conv1D(in_ft, out_ft, kernel_size, with_bias=False), - pax.LayerNorm(out_ft, -1, True, True), - ) - if activation_fn is not None: - f >>= activation_fn - if use_dropout: - f >>= pax.Dropout(0.5) - return f - - -class HighwayBlock(pax.Module): - """ - Highway block - """ - - def __init__(self, dim: int) -> None: - super().__init__() - self.dim = dim - self.fc = pax.Linear(dim, 2 * dim) - - def __call__(self, x: jnp.ndarray) -> jnp.ndarray: - t, h = jnp.split(self.fc(x), 2, axis=-1) - t = jax.nn.sigmoid(t - 1.0) # bias toward keeping x - h = jax.nn.relu(h) - x = x * (1.0 - t) + h * t - return x - - -class BiGRU(pax.Module): - """ - Bidirectional GRU - """ - - def __init__(self, dim): - super().__init__() - - self.rnn_fwd = pax.GRU(dim, dim) - self.rnn_bwd = pax.GRU(dim, dim) - - def __call__(self, x, reset_masks): - N = x.shape[0] - x_fwd = x - x_bwd = jnp.flip(x, axis=1) - x_fwd_states = self.rnn_fwd.initial_state(N) - x_bwd_states = self.rnn_bwd.initial_state(N) - x_fwd_states, x_fwd = pax.scan( - self.rnn_fwd, x_fwd_states, x_fwd, time_major=False - ) - - reset_masks = jnp.flip(reset_masks, axis=1) - x_bwd_states0 = x_bwd_states - - def rnn_reset_core(prev, inputs): - x, reset_mask = inputs - - def reset_state(x0, xt): - return jnp.where(reset_mask, x0, xt) - - state, _ = self.rnn_bwd(prev, x) - state = jax.tree_map(reset_state, x_bwd_states0, state) - return state, state.hidden - - x_bwd_states, x_bwd = pax.scan( - rnn_reset_core, x_bwd_states, (x_bwd, reset_masks), time_major=False - ) - x_bwd = jnp.flip(x_bwd, axis=1) - x = jnp.concatenate((x_fwd, x_bwd), axis=-1) - return x - - -class CBHG(pax.Module): - """ - Conv Bank >> Highway net >> GRU - """ - - def __init__(self, dim): - super().__init__() - self.convs = [conv_block(dim, dim, i, jax.nn.relu, False) for i in range(1, 17)] - self.conv_projection_1 = conv_block(16 * dim, dim, 3, jax.nn.relu, False) - self.conv_projection_2 = conv_block(dim, dim, 3, None, False) - - self.highway = pax.Sequential( - HighwayBlock(dim), HighwayBlock(dim), HighwayBlock(dim), HighwayBlock(dim) - ) - self.rnn = BiGRU(dim) - - def __call__(self, x, x_mask): - conv_input = x * x_mask - fts = [f(conv_input) for f in self.convs] - residual = jnp.concatenate(fts, axis=-1) - residual = pax.max_pool(residual, 2, 1, "SAME", -1) - residual = self.conv_projection_1(residual * x_mask) - residual = self.conv_projection_2(residual * x_mask) - x = x + residual - x = self.highway(x) - x = self.rnn(x * 
x_mask, reset_masks=1 - x_mask) - return x * x_mask - - -class PreNet(pax.Module): - """ - Linear >> relu >> dropout >> Linear >> relu >> dropout - """ - - def __init__(self, input_dim, hidden_dim, output_dim, always_dropout=True): - super().__init__() - self.fc1 = pax.Linear(input_dim, hidden_dim) - self.fc2 = pax.Linear(hidden_dim, output_dim) - self.rng_seq = pax.RngSeq() - self.always_dropout = always_dropout - - def __call__(self, x, k1=None, k2=None): - x = self.fc1(x) - x = jax.nn.relu(x) - if self.always_dropout or self.training: - if k1 is None: - k1 = self.rng_seq.next_rng_key() - x = pax.dropout(k1, 0.5, x) - x = self.fc2(x) - x = jax.nn.relu(x) - if self.always_dropout or self.training: - if k2 is None: - k2 = self.rng_seq.next_rng_key() - x = pax.dropout(k2, 0.5, x) - return x - - -class Tacotron(pax.Module): - """ - Tacotron TTS model. - - It uses stepwise monotonic attention for robust attention. - """ - - def __init__( - self, - mel_dim: int, - attn_bias, - rr, - max_rr, - mel_min, - sigmoid_noise, - pad_token, - prenet_dim, - attn_hidden_dim, - attn_rnn_dim, - rnn_dim, - postnet_dim, - text_dim, - ): - """ - New Tacotron model - - Args: - mel_dim (int): dimension of log mel-spectrogram features. - attn_bias (float): control how "slow" the attention will - move forward at initialization. - rr (int): the reduction factor. - Number of predicted frame at each time step. Default is 2. - max_rr (int): max value of rr. - mel_min (float): the minimum value of mel features. - The frame is filled by `log(mel_min)` values. - sigmoid_noise (float): the variance of gaussian noise added - to attention scores in training. - pad_token (int): the pad value at the end of text sequences. - prenet_dim (int): dimension of prenet output. - attn_hidden_dim (int): dimension of attention hidden vectors. - attn_rnn_dim (int): number of cells in the attention RNN. - rnn_dim (int): number of cells in the decoder RNNs. - postnet_dim (int): number of features in the postnet convolutions. - text_dim (int): dimension of text embedding vectors. 
- """ - super().__init__() - self.text_dim = text_dim - assert rr <= max_rr - self.rr = rr - self.max_rr = max_rr - self.mel_dim = mel_dim - self.mel_min = mel_min - self.sigmoid_noise = sigmoid_noise - self.pad_token = pad_token - self.prenet_dim = prenet_dim - - # encoder submodules - self.encoder_embed = pax.Embed(256, text_dim) - self.encoder_pre_net = PreNet(text_dim, 256, prenet_dim, always_dropout=True) - self.encoder_cbhg = CBHG(prenet_dim) - - # random key generator - self.rng_seq = pax.RngSeq() - - # pre-net - self.decoder_pre_net = PreNet(mel_dim, 256, prenet_dim, always_dropout=True) - - # decoder submodules - self.attn_rnn = pax.LSTM(prenet_dim + prenet_dim * 2, attn_rnn_dim) - self.text_key_fc = pax.Linear(prenet_dim * 2, attn_hidden_dim, with_bias=True) - self.attn_query_fc = pax.Linear(attn_rnn_dim, attn_hidden_dim, with_bias=False) - - self.attn_V = pax.Linear(attn_hidden_dim, 1, with_bias=False) - self.attn_V_weight_norm = jnp.array(1.0 / jnp.sqrt(attn_hidden_dim)) - self.attn_V_bias = jnp.array(attn_bias) - self.attn_log = jnp.zeros((1,)) - self.decoder_input = pax.Linear(attn_rnn_dim + 2 * prenet_dim, rnn_dim) - self.decoder_rnn1 = pax.LSTM(rnn_dim, rnn_dim) - self.decoder_rnn2 = pax.LSTM(rnn_dim, rnn_dim) - # mel + end-of-sequence token - self.output_fc = pax.Linear(rnn_dim, (mel_dim + 1) * max_rr, with_bias=True) - - # post-net - self.post_net = pax.Sequential( - conv_block(mel_dim, postnet_dim, 5, jax.nn.tanh, True), - conv_block(postnet_dim, postnet_dim, 5, jax.nn.tanh, True), - conv_block(postnet_dim, postnet_dim, 5, jax.nn.tanh, True), - conv_block(postnet_dim, postnet_dim, 5, jax.nn.tanh, True), - conv_block(postnet_dim, mel_dim, 5, None, True), - ) - - parameters = pax.parameters_method("attn_V_weight_norm", "attn_V_bias") - - def encode_text(self, text: jnp.ndarray) -> jnp.ndarray: - """ - Encode text to a sequence of real vectors - """ - N, L = text.shape - text_mask = (text != self.pad_token)[..., None] - x = self.encoder_embed(text) - x = self.encoder_pre_net(x) - x = self.encoder_cbhg(x, text_mask) - return x - - def go_frame(self, batch_size: int) -> jnp.ndarray: - """ - return the go frame - """ - return jnp.ones((batch_size, self.mel_dim)) * jnp.log(self.mel_min) - - def decoder_initial_state(self, N: int, L: int): - """ - setup decoder initial state - """ - attn_context = jnp.zeros((N, self.prenet_dim * 2)) - attn_pr = jax.nn.one_hot( - jnp.zeros((N,), dtype=jnp.int32), num_classes=L, axis=-1 - ) - - attn_state = (self.attn_rnn.initial_state(N), attn_context, attn_pr) - decoder_rnn_states = ( - self.decoder_rnn1.initial_state(N), - self.decoder_rnn2.initial_state(N), - ) - return attn_state, decoder_rnn_states - - def monotonic_attention(self, prev_state, inputs, envs): - """ - Stepwise monotonic attention - """ - attn_rnn_state, attn_context, prev_attn_pr = prev_state - x, attn_rng_key = inputs - text, text_key = envs - attn_rnn_input = jnp.concatenate((x, attn_context), axis=-1) - attn_rnn_state, attn_rnn_output = self.attn_rnn(attn_rnn_state, attn_rnn_input) - attn_query_input = attn_rnn_output - attn_query = self.attn_query_fc(attn_query_input) - attn_hidden = jnp.tanh(attn_query[:, None, :] + text_key) - score = self.attn_V(attn_hidden) - score = jnp.squeeze(score, axis=-1) - weight_norm = jnp.linalg.norm(self.attn_V.weight) - score = score * (self.attn_V_weight_norm / weight_norm) - score = score + self.attn_V_bias - noise = jax.random.normal(attn_rng_key, score.shape) * self.sigmoid_noise - pr_stay = jax.nn.sigmoid(score + noise) - pr_move = 1.0 
- pr_stay - pr_new_location = pr_move * prev_attn_pr - pr_new_location = jnp.pad( - pr_new_location[:, :-1], ((0, 0), (1, 0)), constant_values=0 - ) - attn_pr = pr_stay * prev_attn_pr + pr_new_location - attn_context = jnp.einsum("NL,NLD->ND", attn_pr, text) - new_state = (attn_rnn_state, attn_context, attn_pr) - return new_state, attn_rnn_output - - def zoneout_lstm(self, lstm_core, rng_key, zoneout_pr=0.1): - """ - Return a zoneout lstm core. - - It will zoneout the new hidden states and keep the new cell states unchanged. - """ - - def core(state, x): - new_state, _ = lstm_core(state, x) - h_old = state.hidden - h_new = new_state.hidden - mask = jax.random.bernoulli(rng_key, zoneout_pr, h_old.shape) - h_new = h_old * mask + h_new * (1.0 - mask) - return pax.LSTMState(h_new, new_state.cell), h_new - - return core - - def decoder_step( - self, - attn_state, - decoder_rnn_states, - rng_key, - mel, - text, - text_key, - call_pre_net=False, - ): - """ - One decoder step - """ - if call_pre_net: - k1, k2, zk1, zk2, rng_key, rng_key_next = jax.random.split(rng_key, 6) - mel = self.decoder_pre_net(mel, k1, k2) - else: - zk1, zk2, rng_key, rng_key_next = jax.random.split(rng_key, 4) - attn_inputs = (mel, rng_key) - attn_envs = (text, text_key) - attn_state, attn_rnn_output = self.monotonic_attention( - attn_state, attn_inputs, attn_envs - ) - (_, attn_context, attn_pr) = attn_state - (decoder_rnn_state1, decoder_rnn_state2) = decoder_rnn_states - decoder_rnn1_input = jnp.concatenate((attn_rnn_output, attn_context), axis=-1) - decoder_rnn1_input = self.decoder_input(decoder_rnn1_input) - decoder_rnn1 = self.zoneout_lstm(self.decoder_rnn1, zk1) - decoder_rnn_state1, decoder_rnn_output1 = decoder_rnn1( - decoder_rnn_state1, decoder_rnn1_input - ) - decoder_rnn2_input = decoder_rnn1_input + decoder_rnn_output1 - decoder_rnn2 = self.zoneout_lstm(self.decoder_rnn2, zk2) - decoder_rnn_state2, decoder_rnn_output2 = decoder_rnn2( - decoder_rnn_state2, decoder_rnn2_input - ) - x = decoder_rnn1_input + decoder_rnn_output1 + decoder_rnn_output2 - decoder_rnn_states = (decoder_rnn_state1, decoder_rnn_state2) - return attn_state, decoder_rnn_states, rng_key_next, x, attn_pr[0] - - @jax.jit - def inference_step( - self, attn_state, decoder_rnn_states, rng_key, mel, text, text_key - ): - """one inference step""" - attn_state, decoder_rnn_states, rng_key, x, _ = self.decoder_step( - attn_state, - decoder_rnn_states, - rng_key, - mel, - text, - text_key, - call_pre_net=True, - ) - x = self.output_fc(x) - N, D2 = x.shape - x = jnp.reshape(x, (N, self.max_rr, D2 // self.max_rr)) - x = x[:, : self.rr, :] - x = jnp.reshape(x, (N, self.rr, -1)) - mel = x[..., :-1] - eos_logit = x[..., -1] - eos_pr = jax.nn.sigmoid(eos_logit[0, -1]) - eos_pr = jnp.where(eos_pr < 0.1, 0.0, eos_pr) - rng_key, eos_rng_key = jax.random.split(rng_key) - eos = jax.random.bernoulli(eos_rng_key, p=eos_pr) - return attn_state, decoder_rnn_states, rng_key, (mel, eos) - - def inference(self, text, seed=42, max_len=1000): - """ - text to mel - """ - text = self.encode_text(text) - text_key = self.text_key_fc(text) - N, L, D = text.shape - assert N == 1 - mel = self.go_frame(N) - - attn_state, decoder_rnn_states = self.decoder_initial_state(N, L) - rng_key = jax.random.PRNGKey(seed) - mels = [] - count = 0 - while True: - count = count + 1 - attn_state, decoder_rnn_states, rng_key, (mel, eos) = self.inference_step( - attn_state, decoder_rnn_states, rng_key, mel, text, text_key - ) - mels.append(mel) - if eos.item() or count > max_len: - break - - 
mel = mel[:, -1, :] - - mels = jnp.concatenate(mels, axis=1) - mel = mel + self.post_net(mel) - return mels - - def decode(self, mel, text): - """ - Attention mechanism + Decoder - """ - text_key = self.text_key_fc(text) - - def scan_fn(prev_states, inputs): - attn_state, decoder_rnn_states = prev_states - x, rng_key = inputs - attn_state, decoder_rnn_states, _, output, attn_pr = self.decoder_step( - attn_state, decoder_rnn_states, rng_key, x, text, text_key - ) - states = (attn_state, decoder_rnn_states) - return states, (output, attn_pr) - - N, L, D = text.shape - decoder_states = self.decoder_initial_state(N, L) - rng_keys = self.rng_seq.next_rng_key(mel.shape[1]) - rng_keys = jnp.stack(rng_keys, axis=1) - decoder_states, (x, attn_log) = pax.scan( - scan_fn, - decoder_states, - (mel, rng_keys), - time_major=False, - ) - self.attn_log = attn_log - del decoder_states - x = self.output_fc(x) - - N, T2, D2 = x.shape - x = jnp.reshape(x, (N, T2, self.max_rr, D2 // self.max_rr)) - x = x[:, :, : self.rr, :] - x = jnp.reshape(x, (N, T2 * self.rr, -1)) - mel = x[..., :-1] - eos = x[..., -1] - return mel, eos - - def __call__(self, mel: jnp.ndarray, text: jnp.ndarray): - text = self.encode_text(text) - mel = self.decoder_pre_net(mel) - mel, eos = self.decode(mel, text) - return mel, mel + self.post_net(mel), eos diff --git a/spaces/oguzakif/video-object-remover/SiamMask/experiments/siammask_sharp/custom.py b/spaces/oguzakif/video-object-remover/SiamMask/experiments/siammask_sharp/custom.py deleted file mode 100644 index c42fbb47a72302cb00cd56de293f0efa6e6d353a..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/experiments/siammask_sharp/custom.py +++ /dev/null @@ -1,191 +0,0 @@ -from SiamMask.models.siammask_sharp import SiamMask -from SiamMask.models.features import MultiStageFeature -from SiamMask.models.rpn import RPN, DepthCorr -from SiamMask.models.mask import Mask -import torch -import torch.nn as nn -import torch.nn.functional as F -from SiamMask.utils.load_helper import load_pretrain -from SiamMask.experiments.siammask_sharp.resnet import resnet50 - - -class ResDownS(nn.Module): - def __init__(self, inplane, outplane): - super(ResDownS, self).__init__() - self.downsample = nn.Sequential( - nn.Conv2d(inplane, outplane, kernel_size=1, bias=False), - nn.BatchNorm2d(outplane)) - - def forward(self, x): - x = self.downsample(x) - if x.size(3) < 20: - l = 4 - r = -4 - x = x[:, :, l:r, l:r] - return x - - -class ResDown(MultiStageFeature): - def __init__(self, pretrain=False): - super(ResDown, self).__init__() - self.features = resnet50(layer3=True, layer4=False) - if pretrain: - load_pretrain(self.features, 'resnet.model') - - self.downsample = ResDownS(1024, 256) - - self.layers = [self.downsample, self.features.layer2, self.features.layer3] - self.train_nums = [1, 3] - self.change_point = [0, 0.5] - - self.unfix(0.0) - - def param_groups(self, start_lr, feature_mult=1): - lr = start_lr * feature_mult - - def _params(module, mult=1): - params = list(filter(lambda x:x.requires_grad, module.parameters())) - if len(params): - return [{'params': params, 'lr': lr * mult}] - else: - return [] - - groups = [] - groups += _params(self.downsample) - groups += _params(self.features, 0.1) - return groups - - def forward(self, x): - output = self.features(x) - p3 = self.downsample(output[-1]) - return p3 - - def forward_all(self, x): - output = self.features(x) - p3 = self.downsample(output[-1]) - return output, p3 - - -class UP(RPN): - def __init__(self, 
anchor_num=5, feature_in=256, feature_out=256): - super(UP, self).__init__() - - self.anchor_num = anchor_num - self.feature_in = feature_in - self.feature_out = feature_out - - self.cls_output = 2 * self.anchor_num - self.loc_output = 4 * self.anchor_num - - self.cls = DepthCorr(feature_in, feature_out, self.cls_output) - self.loc = DepthCorr(feature_in, feature_out, self.loc_output) - - def forward(self, z_f, x_f): - cls = self.cls(z_f, x_f) - loc = self.loc(z_f, x_f) - return cls, loc - - -class MaskCorr(Mask): - def __init__(self, oSz=63): - super(MaskCorr, self).__init__() - self.oSz = oSz - self.mask = DepthCorr(256, 256, self.oSz**2) - - def forward(self, z, x): - return self.mask(z, x) - - -class Refine(nn.Module): - def __init__(self): - super(Refine, self).__init__() - self.v0 = nn.Sequential(nn.Conv2d(64, 16, 3, padding=1), nn.ReLU(), - nn.Conv2d(16, 4, 3, padding=1),nn.ReLU()) - - self.v1 = nn.Sequential(nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(), - nn.Conv2d(64, 16, 3, padding=1), nn.ReLU()) - - self.v2 = nn.Sequential(nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(), - nn.Conv2d(128, 32, 3, padding=1), nn.ReLU()) - - self.h2 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), - nn.Conv2d(32, 32, 3, padding=1), nn.ReLU()) - - self.h1 = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(), - nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()) - - self.h0 = nn.Sequential(nn.Conv2d(4, 4, 3, padding=1), nn.ReLU(), - nn.Conv2d(4, 4, 3, padding=1), nn.ReLU()) - - self.deconv = nn.ConvTranspose2d(256, 32, 15, 15) - - self.post0 = nn.Conv2d(32, 16, 3, padding=1) - self.post1 = nn.Conv2d(16, 4, 3, padding=1) - self.post2 = nn.Conv2d(4, 1, 3, padding=1) - - for modules in [self.v0, self.v1, self.v2, self.h2, self.h1, self.h0, self.deconv, self.post0, self.post1, self.post2,]: - for l in modules.modules(): - if isinstance(l, nn.Conv2d): - nn.init.kaiming_uniform_(l.weight, a=1) - - def forward(self, f, corr_feature, pos=None, test=False): - if test: - p0 = torch.nn.functional.pad(f[0], [16, 16, 16, 16])[:, :, 4*pos[0]:4*pos[0]+61, 4*pos[1]:4*pos[1]+61] - p1 = torch.nn.functional.pad(f[1], [8, 8, 8, 8])[:, :, 2 * pos[0]:2 * pos[0] + 31, 2 * pos[1]:2 * pos[1] + 31] - p2 = torch.nn.functional.pad(f[2], [4, 4, 4, 4])[:, :, pos[0]:pos[0] + 15, pos[1]:pos[1] + 15] - else: - p0 = F.unfold(f[0], (61, 61), padding=0, stride=4).permute(0, 2, 1).contiguous().view(-1, 64, 61, 61) - if not (pos is None): p0 = torch.index_select(p0, 0, pos) - p1 = F.unfold(f[1], (31, 31), padding=0, stride=2).permute(0, 2, 1).contiguous().view(-1, 256, 31, 31) - if not (pos is None): p1 = torch.index_select(p1, 0, pos) - p2 = F.unfold(f[2], (15, 15), padding=0, stride=1).permute(0, 2, 1).contiguous().view(-1, 512, 15, 15) - if not (pos is None): p2 = torch.index_select(p2, 0, pos) - - if not(pos is None): - p3 = corr_feature[:, :, pos[0], pos[1]].view(-1, 256, 1, 1) - else: - p3 = corr_feature.permute(0, 2, 3, 1).contiguous().view(-1, 256, 1, 1) - - out = self.deconv(p3) - out = self.post0(F.upsample(self.h2(out) + self.v2(p2), size=(31, 31))) - out = self.post1(F.upsample(self.h1(out) + self.v1(p1), size=(61, 61))) - out = self.post2(F.upsample(self.h0(out) + self.v0(p0), size=(127, 127))) - out = out.view(-1, 127*127) - return out - - def param_groups(self, start_lr, feature_mult=1): - params = filter(lambda x:x.requires_grad, self.parameters()) - params = [{'params': params, 'lr': start_lr * feature_mult}] - return params - - -class Custom(SiamMask): - def __init__(self, pretrain=False, **kwargs): - super(Custom, 
self).__init__(**kwargs) - self.features = ResDown(pretrain=pretrain) - self.rpn_model = UP(anchor_num=self.anchor_num, feature_in=256, feature_out=256) - self.mask_model = MaskCorr() - self.refine_model = Refine() - - def refine(self, f, pos=None): - return self.refine_model(f, pos) - - def template(self, template): - self.zf = self.features(template) - - def track(self, search): - search = self.features(search) - rpn_pred_cls, rpn_pred_loc = self.rpn(self.zf, search) - return rpn_pred_cls, rpn_pred_loc - - def track_mask(self, search): - self.feature, self.search = self.features.forward_all(search) - rpn_pred_cls, rpn_pred_loc = self.rpn(self.zf, self.search) - self.corr_feature = self.mask_model.mask.forward_corr(self.zf, self.search) - pred_mask = self.mask_model.mask.head(self.corr_feature) - return rpn_pred_cls, rpn_pred_loc, pred_mask - - def track_refine(self, pos): - pred_mask = self.refine_model(self.feature, self.corr_feature, pos=pos, test=True) - return pred_mask - diff --git a/spaces/orpatashnik/local-prompt-mixing/src/diffusion_model_wrapper.py b/spaces/orpatashnik/local-prompt-mixing/src/diffusion_model_wrapper.py deleted file mode 100644 index 0081cf6fbb6b768efc0f16ef2e4c36b6321abee4..0000000000000000000000000000000000000000 --- a/spaces/orpatashnik/local-prompt-mixing/src/diffusion_model_wrapper.py +++ /dev/null @@ -1,252 +0,0 @@ -from typing import Optional, List - -import numpy as np -import torch -from cv2 import dilate -from diffusers import DDIMScheduler, StableDiffusionPipeline -from tqdm import tqdm - -from src.attention_based_segmentation import Segmentor -from src.attention_utils import show_cross_attention -from src.prompt_to_prompt_controllers import DummyController, AttentionStore - - -def get_stable_diffusion_model(args): - device = torch.device(f'cuda:{args.gpu_id}') if torch.cuda.is_available() else torch.device('cpu') - if args.real_image_path != "": - scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False) - ldm_stable = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=args.auth_token, scheduler=scheduler).to(device) - else: - ldm_stable = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=args.auth_token).to(device) - - return ldm_stable - -def get_stable_diffusion_config(args): - return { - "low_resource": args.low_resource, - "num_diffusion_steps": args.num_diffusion_steps, - "guidance_scale": args.guidance_scale, - "max_num_words": args.max_num_words - } - - -def generate_original_image(args, ldm_stable, ldm_stable_config, prompts, latent, uncond_embeddings): - g_cpu = torch.Generator(device=ldm_stable.device).manual_seed(args.seed) - controller = AttentionStore(ldm_stable_config["low_resource"]) - diffusion_model_wrapper = DiffusionModelWrapper(args, ldm_stable, ldm_stable_config, controller, generator=g_cpu) - image, x_t, orig_all_latents, _ = diffusion_model_wrapper.forward(prompts, - latent=latent, - uncond_embeddings=uncond_embeddings) - orig_mask = Segmentor(controller, prompts, args.num_segments, args.background_segment_threshold, background_nouns=args.background_nouns)\ - .get_background_mask(args.prompt.split(' ').index("{word}") + 1) - average_attention = controller.get_average_attention() - return image, x_t, orig_all_latents, orig_mask, average_attention - - -class DiffusionModelWrapper: - def __init__(self, args, model, model_config, controller=None, prompt_mixing=None, 
generator=None): - self.args = args - self.model = model - self.model_config = model_config - self.controller = controller - if self.controller is None: - self.controller = DummyController() - self.prompt_mixing = prompt_mixing - self.device = model.device - self.generator = generator - - self.height = 512 - self.width = 512 - - self.diff_step = 0 - self.register_attention_control() - - - def diffusion_step(self, latents, context, t, other_context=None): - if self.model_config["low_resource"]: - self.uncond_pred = True - noise_pred_uncond = self.model.unet(latents, t, encoder_hidden_states=(context[0], None))["sample"] - self.uncond_pred = False - noise_prediction_text = self.model.unet(latents, t, encoder_hidden_states=(context[1], other_context))["sample"] - else: - latents_input = torch.cat([latents] * 2) - noise_pred = self.model.unet(latents_input, t, encoder_hidden_states=(context, other_context))["sample"] - noise_pred_uncond, noise_prediction_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + self.model_config["guidance_scale"] * (noise_prediction_text - noise_pred_uncond) - latents = self.model.scheduler.step(noise_pred, t, latents)["prev_sample"] - latents = self.controller.step_callback(latents) - return latents - - - def latent2image(self, latents): - latents = 1 / 0.18215 * latents - image = self.model.vae.decode(latents)['sample'] - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - image = (image * 255).astype(np.uint8) - return image - - - def init_latent(self, latent, batch_size): - if latent is None: - latent = torch.randn( - (1, self.model.unet.in_channels, self.height // 8, self.width // 8), - generator=self.generator, device=self.model.device - ) - latents = latent.expand(batch_size, self.model.unet.in_channels, self.height // 8, self.width // 8).to(self.device) - return latent, latents - - - def register_attention_control(self): - def ca_forward(model_self, place_in_unet): - to_out = model_self.to_out - if type(to_out) is torch.nn.modules.container.ModuleList: - to_out = model_self.to_out[0] - else: - to_out = model_self.to_out - - def forward(x, context=None, mask=None): - batch_size, sequence_length, dim = x.shape - h = model_self.heads - q = model_self.to_q(x) - is_cross = context is not None - context = context if is_cross else (x, None) - - k = model_self.to_k(context[0]) - if is_cross and self.prompt_mixing is not None: - v_context = self.prompt_mixing.get_context_for_v(self.diff_step, context[0], context[1]) - v = model_self.to_v(v_context) - else: - v = model_self.to_v(context[0]) - - q = model_self.reshape_heads_to_batch_dim(q) - k = model_self.reshape_heads_to_batch_dim(k) - v = model_self.reshape_heads_to_batch_dim(v) - - sim = torch.einsum("b i d, b j d -> b i j", q, k) * model_self.scale - - if mask is not None: - mask = mask.reshape(batch_size, -1) - max_neg_value = -torch.finfo(sim.dtype).max - mask = mask[:, None, :].repeat(h, 1, 1) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - attn = sim.softmax(dim=-1) - if self.enbale_attn_controller_changes: - attn = self.controller(attn, is_cross, place_in_unet) - - if is_cross and self.prompt_mixing is not None and context[1] is not None: - attn = self.prompt_mixing.get_cross_attn(self, self.diff_step, attn, place_in_unet, batch_size) - - if not is_cross and (not self.model_config["low_resource"] or not self.uncond_pred) and self.prompt_mixing is not None: - attn = self.prompt_mixing.get_self_attn(self, self.diff_step, 
attn, place_in_unet, batch_size) - - out = torch.einsum("b i j, b j d -> b i d", attn, v) - out = model_self.reshape_batch_dim_to_heads(out) - return to_out(out) - - return forward - - def register_recr(net_, count, place_in_unet): - if net_.__class__.__name__ == 'CrossAttention': - net_.forward = ca_forward(net_, place_in_unet) - return count + 1 - elif hasattr(net_, 'children'): - for net__ in net_.children(): - count = register_recr(net__, count, place_in_unet) - return count - - cross_att_count = 0 - sub_nets = self.model.unet.named_children() - for net in sub_nets: - if "down" in net[0]: - cross_att_count += register_recr(net[1], 0, "down") - elif "up" in net[0]: - cross_att_count += register_recr(net[1], 0, "up") - elif "mid" in net[0]: - cross_att_count += register_recr(net[1], 0, "mid") - self.controller.num_att_layers = cross_att_count - - - def get_text_embedding(self, prompt: List[str], max_length=None, truncation=True): - text_input = self.model.tokenizer( - prompt, - padding="max_length", - max_length=self.model.tokenizer.model_max_length if max_length is None else max_length, - truncation=truncation, - return_tensors="pt", - ) - text_embeddings = self.model.text_encoder(text_input.input_ids.to(self.device))[0] - max_length = text_input.input_ids.shape[-1] - return text_embeddings, max_length - - - @torch.no_grad() - def forward(self, prompt: List[str], latent: Optional[torch.FloatTensor] = None, - other_prompt: List[str] = None, post_background = False, orig_all_latents = None, orig_mask = None, - uncond_embeddings=None, start_time=51, return_type='image'): - self.enbale_attn_controller_changes = True - batch_size = len(prompt) - - text_embeddings, max_length = self.get_text_embedding(prompt) - if uncond_embeddings is None: - uncond_embeddings_, _ = self.get_text_embedding([""] * batch_size, max_length=max_length, truncation=False) - else: - uncond_embeddings_ = None - - other_context = None - if other_prompt is not None: - other_text_embeddings, _ = self.get_text_embedding(other_prompt) - other_context = other_text_embeddings - - latent, latents = self.init_latent(latent, batch_size) - - # set timesteps - self.model.scheduler.set_timesteps(self.model_config["num_diffusion_steps"]) - all_latents = [] - - object_mask = None - self.diff_step = 0 - for i, t in enumerate(tqdm(self.model.scheduler.timesteps[-start_time:])): - if uncond_embeddings_ is None: - context = [uncond_embeddings[i].expand(*text_embeddings.shape), text_embeddings] - else: - context = [uncond_embeddings_, text_embeddings] - if not self.model_config["low_resource"]: - context = torch.cat(context) - - self.down_cross_index = 0 - self.mid_cross_index = 0 - self.up_cross_index = 0 - latents = self.diffusion_step(latents, context, t, other_context) - - if post_background and self.diff_step == self.args.background_blend_timestep: - object_mask = Segmentor(self.controller, - prompt, - self.args.num_segments, - self.args.background_segment_threshold, - background_nouns=self.args.background_nouns)\ - .get_background_mask(self.args.prompt.split(' ').index("{word}") + 1) - self.enbale_attn_controller_changes = False - mask = object_mask.astype(np.bool8) + orig_mask.astype(np.bool8) - mask = torch.from_numpy(mask).float().cuda() - shape = (1, 1, mask.shape[0], mask.shape[1]) - mask = torch.nn.Upsample(size=(64, 64), mode='nearest')(mask.view(shape)) - mask_eroded = dilate(mask.cpu().numpy()[0, 0], np.ones((3, 3), np.uint8), iterations=1) - mask = torch.from_numpy(mask_eroded).float().cuda().view(1, 1, 64, 64) - latents 
= mask * latents + (1 - mask) * orig_all_latents[self.diff_step] - - all_latents.append(latents) - self.diff_step += 1 - - if return_type == 'image': - image = self.latent2image(latents) - else: - image = latents - - return image, latent, all_latents, object_mask - - - def show_last_cross_attention(self, res: int, from_where: List[str], prompts, select: int = 0): - show_cross_attention(self.controller, res, from_where, prompts, tokenizer=self.model.tokenizer, select=select) \ No newline at end of file diff --git a/spaces/ozgur34/qb-Engine2/README.md b/spaces/ozgur34/qb-Engine2/README.md deleted file mode 100644 index 6dd09d33e340a8b16d3cdf72ff32e1020f157979..0000000000000000000000000000000000000000 --- a/spaces/ozgur34/qb-Engine2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Qb Engine2 -emoji: 📊 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/zh/index.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/zh/index.md deleted file mode 100644 index e1a2a3971d87ce823e4668662d65c2b55602b87f..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/zh/index.md +++ /dev/null @@ -1,101 +0,0 @@ - - -
- -# 🧨 Diffusers - -🤗 Diffusers 是一个值得首选用于生成图像、音频甚至 3D 分子结构的,最先进的预训练扩散模型库。 -无论您是在寻找简单的推理解决方案,还是想训练自己的扩散模型,🤗 Diffusers 这一模块化工具箱都能对其提供支持。 -本库的设计更偏重于[可用而非高性能](conceptual/philosophy#usability-over-performance)、[简明而非简单](conceptual/philosophy#simple-over-easy)以及[易用而非抽象](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction)。 - - -本库包含三个主要组件: - -- 最先进的扩散管道 [diffusion pipelines](api/pipelines/overview),只需几行代码即可进行推理。 -- 可交替使用的各种噪声调度器 [noise schedulers](api/schedulers/overview),用于平衡生成速度和质量。 -- 预训练模型 [models](api/models),可作为构建模块,并与调度程序结合使用,来创建您自己的端到端扩散系统。 - - - -## 🧨 Diffusers pipelines - -下表汇总了当前所有官方支持的pipelines及其对应的论文. - -| 管道 | 论文/仓库 | 任务 | -|---|---|:---:| -| [alt_diffusion](./api/pipelines/alt_diffusion) | [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation | -| [audio_diffusion](./api/pipelines/audio_diffusion) | [Audio Diffusion](https://github.com/teticio/audio-diffusion.git) | Unconditional Audio Generation | -| [controlnet](./api/pipelines/stable_diffusion/controlnet) | [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) | Image-to-Image Text-Guided Generation | -| [cycle_diffusion](./api/pipelines/cycle_diffusion) | [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation | -| [dance_diffusion](./api/pipelines/dance_diffusion) | [Dance Diffusion](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation | -| [ddpm](./api/pipelines/ddpm) | [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation | -| [ddim](./api/pipelines/ddim) | [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation | -| [if](./if) | [**IF**](./api/pipelines/if) | Image Generation | -| [if_img2img](./if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation | -| [if_inpainting](./if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation | -| [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation | -| [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image | -| [latent_diffusion_uncond](./api/pipelines/latent_diffusion_uncond) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation | -| [paint_by_example](./api/pipelines/paint_by_example) | [Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting | -| [pndm](./api/pipelines/pndm) | [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation | -| [score_sde_ve](./api/pipelines/score_sde_ve) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation | -| [score_sde_vp](./api/pipelines/score_sde_vp) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation | -| 
[semantic_stable_diffusion](./api/pipelines/semantic_stable_diffusion) | [Semantic Guidance](https://arxiv.org/abs/2301.12247) | Text-Guided Generation | -| [stable_diffusion_text2img](./api/pipelines/stable_diffusion/text2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | -| [stable_diffusion_img2img](./api/pipelines/stable_diffusion/img2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | -| [stable_diffusion_inpaint](./api/pipelines/stable_diffusion/inpaint) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | -| [stable_diffusion_panorama](./api/pipelines/stable_diffusion/panorama) | [MultiDiffusion](https://multidiffusion.github.io/) | Text-to-Panorama Generation | -| [stable_diffusion_pix2pix](./api/pipelines/stable_diffusion/pix2pix) | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) | Text-Guided Image Editing| -| [stable_diffusion_pix2pix_zero](./api/pipelines/stable_diffusion/pix2pix_zero) | [Zero-shot Image-to-Image Translation](https://pix2pixzero.github.io/) | Text-Guided Image Editing | -| [stable_diffusion_attend_and_excite](./api/pipelines/stable_diffusion/attend_and_excite) | [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://arxiv.org/abs/2301.13826) | Text-to-Image Generation | -| [stable_diffusion_self_attention_guidance](./api/pipelines/stable_diffusion/self_attention_guidance) | [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://arxiv.org/abs/2210.00939) | Text-to-Image Generation Unconditional Image Generation | -| [stable_diffusion_image_variation](./stable_diffusion/image_variation) | [Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation | -| [stable_diffusion_latent_upscale](./stable_diffusion/latent_upscale) | [Stable Diffusion Latent Upscaler](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image | -| [stable_diffusion_model_editing](./api/pipelines/stable_diffusion/model_editing) | [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://time-diffusion.github.io/) | Text-to-Image Model Editing | -| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation | -| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting | -| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Depth-Conditional Stable Diffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion) | Depth-to-Image Generation | -| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image | -| [stable_diffusion_safe](./api/pipelines/stable_diffusion_safe) | [Safe Stable Diffusion](https://arxiv.org/abs/2211.05105) | Text-Guided Generation | -| [stable_unclip](./stable_unclip) | Stable unCLIP | Text-to-Image Generation | -| [stable_unclip](./stable_unclip) | Stable unCLIP | Image-to-Image Text-Guided Generation | -| 
[stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation | -| [text_to_video_sd](./api/pipelines/text_to_video) | [Modelscope's Text-to-video-synthesis Model in Open Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) | Text-to-Video Generation | -| [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125)(implementation by [kakaobrain](https://github.com/kakaobrain/karlo)) | Text-to-Image Generation | -| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation | -| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation | -| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation | -| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation | diff --git a/spaces/pierreguillou/Inference-APP-Document-Understanding-at-paragraphlevel-v1/files/README.md b/spaces/pierreguillou/Inference-APP-Document-Understanding-at-paragraphlevel-v1/files/README.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pikto/Elite-freegpt-webui/client/html/index.html b/spaces/pikto/Elite-freegpt-webui/client/html/index.html deleted file mode 100644 index 37c4575b9bf22f7227bfc170551eaeffa3565d2e..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/client/html/index.html +++ /dev/null @@ -1,120 +0,0 @@ - - - - - - - - - - - - - - - - - - FreeGPT - - - -
- - - Web Access -
- - - - - - - - - - - diff --git a/spaces/pix2pix-zero-library/pix2pix-zero-demo/README.md b/spaces/pix2pix-zero-library/pix2pix-zero-demo/README.md deleted file mode 100644 index 1b0526f0611b98ce01aae8164da4f15dcab4aada..0000000000000000000000000000000000000000 --- a/spaces/pix2pix-zero-library/pix2pix-zero-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: pix2pix-zero -emoji: 📈 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/traceback.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/traceback.py deleted file mode 100644 index c4ffe1f99e6dc9c0509459196cb68fa95e79048d..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/traceback.py +++ /dev/null @@ -1,756 +0,0 @@ -from __future__ import absolute_import - -import linecache -import os -import platform -import sys -from dataclasses import dataclass, field -from traceback import walk_tb -from types import ModuleType, TracebackType -from typing import ( - Any, - Callable, - Dict, - Iterable, - List, - Optional, - Sequence, - Tuple, - Type, - Union, -) - -from pip._vendor.pygments.lexers import guess_lexer_for_filename -from pip._vendor.pygments.token import Comment, Keyword, Name, Number, Operator, String -from pip._vendor.pygments.token import Text as TextToken -from pip._vendor.pygments.token import Token -from pip._vendor.pygments.util import ClassNotFound - -from . import pretty -from ._loop import loop_last -from .columns import Columns -from .console import Console, ConsoleOptions, ConsoleRenderable, RenderResult, group -from .constrain import Constrain -from .highlighter import RegexHighlighter, ReprHighlighter -from .panel import Panel -from .scope import render_scope -from .style import Style -from .syntax import Syntax -from .text import Text -from .theme import Theme - -WINDOWS = platform.system() == "Windows" - -LOCALS_MAX_LENGTH = 10 -LOCALS_MAX_STRING = 80 - - -def install( - *, - console: Optional[Console] = None, - width: Optional[int] = 100, - extra_lines: int = 3, - theme: Optional[str] = None, - word_wrap: bool = False, - show_locals: bool = False, - locals_max_length: int = LOCALS_MAX_LENGTH, - locals_max_string: int = LOCALS_MAX_STRING, - locals_hide_dunder: bool = True, - locals_hide_sunder: Optional[bool] = None, - indent_guides: bool = True, - suppress: Iterable[Union[str, ModuleType]] = (), - max_frames: int = 100, -) -> Callable[[Type[BaseException], BaseException, Optional[TracebackType]], Any]: - """Install a rich traceback handler. - - Once installed, any tracebacks will be printed with syntax highlighting and rich formatting. - - - Args: - console (Optional[Console], optional): Console to write exception to. Default uses internal Console instance. - width (Optional[int], optional): Width (in characters) of traceback. Defaults to 100. - extra_lines (int, optional): Extra lines of code. Defaults to 3. - theme (Optional[str], optional): Pygments theme to use in traceback. Defaults to ``None`` which will pick - a theme appropriate for the platform. - word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False. - show_locals (bool, optional): Enable display of local variables. Defaults to False. 
- locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to 10. - locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80. - locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True. - locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False. - indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True. - suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback. - - Returns: - Callable: The previous exception handler that was replaced. - - """ - traceback_console = Console(stderr=True) if console is None else console - - locals_hide_sunder = ( - True - if (traceback_console.is_jupyter and locals_hide_sunder is None) - else locals_hide_sunder - ) - - def excepthook( - type_: Type[BaseException], - value: BaseException, - traceback: Optional[TracebackType], - ) -> None: - traceback_console.print( - Traceback.from_exception( - type_, - value, - traceback, - width=width, - extra_lines=extra_lines, - theme=theme, - word_wrap=word_wrap, - show_locals=show_locals, - locals_max_length=locals_max_length, - locals_max_string=locals_max_string, - locals_hide_dunder=locals_hide_dunder, - locals_hide_sunder=bool(locals_hide_sunder), - indent_guides=indent_guides, - suppress=suppress, - max_frames=max_frames, - ) - ) - - def ipy_excepthook_closure(ip: Any) -> None: # pragma: no cover - tb_data = {} # store information about showtraceback call - default_showtraceback = ip.showtraceback # keep reference of default traceback - - def ipy_show_traceback(*args: Any, **kwargs: Any) -> None: - """wrap the default ip.showtraceback to store info for ip._showtraceback""" - nonlocal tb_data - tb_data = kwargs - default_showtraceback(*args, **kwargs) - - def ipy_display_traceback( - *args: Any, is_syntax: bool = False, **kwargs: Any - ) -> None: - """Internally called traceback from ip._showtraceback""" - nonlocal tb_data - exc_tuple = ip._get_exc_info() - - # do not display trace on syntax error - tb: Optional[TracebackType] = None if is_syntax else exc_tuple[2] - - # determine correct tb_offset - compiled = tb_data.get("running_compiled_code", False) - tb_offset = tb_data.get("tb_offset", 1 if compiled else 0) - # remove ipython internal frames from trace with tb_offset - for _ in range(tb_offset): - if tb is None: - break - tb = tb.tb_next - - excepthook(exc_tuple[0], exc_tuple[1], tb) - tb_data = {} # clear data upon usage - - # replace _showtraceback instead of showtraceback to allow ipython features such as debugging to work - # this is also what the ipython docs recommends to modify when subclassing InteractiveShell - ip._showtraceback = ipy_display_traceback - # add wrapper to capture tb_data - ip.showtraceback = ipy_show_traceback - ip.showsyntaxerror = lambda *args, **kwargs: ipy_display_traceback( - *args, is_syntax=True, **kwargs - ) - - try: # pragma: no cover - # if within ipython, use customized traceback - ip = get_ipython() # type: ignore[name-defined] - ipy_excepthook_closure(ip) - return sys.excepthook - except Exception: - # otherwise use default system hook - old_excepthook = sys.excepthook - sys.excepthook = excepthook - return old_excepthook - - -@dataclass -class Frame: - filename: str - lineno: int - name: str - line: str = "" - locals: Optional[Dict[str, pretty.Node]] = None - - -@dataclass -class 
_SyntaxError: - offset: int - filename: str - line: str - lineno: int - msg: str - - -@dataclass -class Stack: - exc_type: str - exc_value: str - syntax_error: Optional[_SyntaxError] = None - is_cause: bool = False - frames: List[Frame] = field(default_factory=list) - - -@dataclass -class Trace: - stacks: List[Stack] - - -class PathHighlighter(RegexHighlighter): - highlights = [r"(?P.*/)(?P.+)"] - - -class Traceback: - """A Console renderable that renders a traceback. - - Args: - trace (Trace, optional): A `Trace` object produced from `extract`. Defaults to None, which uses - the last exception. - width (Optional[int], optional): Number of characters used to traceback. Defaults to 100. - extra_lines (int, optional): Additional lines of code to render. Defaults to 3. - theme (str, optional): Override pygments theme used in traceback. - word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False. - show_locals (bool, optional): Enable display of local variables. Defaults to False. - indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True. - locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to 10. - locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80. - locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True. - locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False. - suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback. - max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100. 
- - """ - - LEXERS = { - "": "text", - ".py": "python", - ".pxd": "cython", - ".pyx": "cython", - ".pxi": "pyrex", - } - - def __init__( - self, - trace: Optional[Trace] = None, - *, - width: Optional[int] = 100, - extra_lines: int = 3, - theme: Optional[str] = None, - word_wrap: bool = False, - show_locals: bool = False, - locals_max_length: int = LOCALS_MAX_LENGTH, - locals_max_string: int = LOCALS_MAX_STRING, - locals_hide_dunder: bool = True, - locals_hide_sunder: bool = False, - indent_guides: bool = True, - suppress: Iterable[Union[str, ModuleType]] = (), - max_frames: int = 100, - ): - if trace is None: - exc_type, exc_value, traceback = sys.exc_info() - if exc_type is None or exc_value is None or traceback is None: - raise ValueError( - "Value for 'trace' required if not called in except: block" - ) - trace = self.extract( - exc_type, exc_value, traceback, show_locals=show_locals - ) - self.trace = trace - self.width = width - self.extra_lines = extra_lines - self.theme = Syntax.get_theme(theme or "ansi_dark") - self.word_wrap = word_wrap - self.show_locals = show_locals - self.indent_guides = indent_guides - self.locals_max_length = locals_max_length - self.locals_max_string = locals_max_string - self.locals_hide_dunder = locals_hide_dunder - self.locals_hide_sunder = locals_hide_sunder - - self.suppress: Sequence[str] = [] - for suppress_entity in suppress: - if not isinstance(suppress_entity, str): - assert ( - suppress_entity.__file__ is not None - ), f"{suppress_entity!r} must be a module with '__file__' attribute" - path = os.path.dirname(suppress_entity.__file__) - else: - path = suppress_entity - path = os.path.normpath(os.path.abspath(path)) - self.suppress.append(path) - self.max_frames = max(4, max_frames) if max_frames > 0 else 0 - - @classmethod - def from_exception( - cls, - exc_type: Type[Any], - exc_value: BaseException, - traceback: Optional[TracebackType], - *, - width: Optional[int] = 100, - extra_lines: int = 3, - theme: Optional[str] = None, - word_wrap: bool = False, - show_locals: bool = False, - locals_max_length: int = LOCALS_MAX_LENGTH, - locals_max_string: int = LOCALS_MAX_STRING, - locals_hide_dunder: bool = True, - locals_hide_sunder: bool = False, - indent_guides: bool = True, - suppress: Iterable[Union[str, ModuleType]] = (), - max_frames: int = 100, - ) -> "Traceback": - """Create a traceback from exception info - - Args: - exc_type (Type[BaseException]): Exception type. - exc_value (BaseException): Exception value. - traceback (TracebackType): Python Traceback object. - width (Optional[int], optional): Number of characters used to traceback. Defaults to 100. - extra_lines (int, optional): Additional lines of code to render. Defaults to 3. - theme (str, optional): Override pygments theme used in traceback. - word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False. - show_locals (bool, optional): Enable display of local variables. Defaults to False. - indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True. - locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to 10. - locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80. - locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True. - locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False. 
- suppress (Iterable[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback. - max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100. - - Returns: - Traceback: A Traceback instance that may be printed. - """ - rich_traceback = cls.extract( - exc_type, - exc_value, - traceback, - show_locals=show_locals, - locals_max_length=locals_max_length, - locals_max_string=locals_max_string, - locals_hide_dunder=locals_hide_dunder, - locals_hide_sunder=locals_hide_sunder, - ) - - return cls( - rich_traceback, - width=width, - extra_lines=extra_lines, - theme=theme, - word_wrap=word_wrap, - show_locals=show_locals, - indent_guides=indent_guides, - locals_max_length=locals_max_length, - locals_max_string=locals_max_string, - locals_hide_dunder=locals_hide_dunder, - locals_hide_sunder=locals_hide_sunder, - suppress=suppress, - max_frames=max_frames, - ) - - @classmethod - def extract( - cls, - exc_type: Type[BaseException], - exc_value: BaseException, - traceback: Optional[TracebackType], - *, - show_locals: bool = False, - locals_max_length: int = LOCALS_MAX_LENGTH, - locals_max_string: int = LOCALS_MAX_STRING, - locals_hide_dunder: bool = True, - locals_hide_sunder: bool = False, - ) -> Trace: - """Extract traceback information. - - Args: - exc_type (Type[BaseException]): Exception type. - exc_value (BaseException): Exception value. - traceback (TracebackType): Python Traceback object. - show_locals (bool, optional): Enable display of local variables. Defaults to False. - locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to 10. - locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80. - locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True. - locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False. - - Returns: - Trace: A Trace instance which you can use to construct a `Traceback`. 
- """ - - stacks: List[Stack] = [] - is_cause = False - - from pip._vendor.rich import _IMPORT_CWD - - def safe_str(_object: Any) -> str: - """Don't allow exceptions from __str__ to propagate.""" - try: - return str(_object) - except Exception: - return "" - - while True: - stack = Stack( - exc_type=safe_str(exc_type.__name__), - exc_value=safe_str(exc_value), - is_cause=is_cause, - ) - - if isinstance(exc_value, SyntaxError): - stack.syntax_error = _SyntaxError( - offset=exc_value.offset or 0, - filename=exc_value.filename or "?", - lineno=exc_value.lineno or 0, - line=exc_value.text or "", - msg=exc_value.msg, - ) - - stacks.append(stack) - append = stack.frames.append - - def get_locals( - iter_locals: Iterable[Tuple[str, object]] - ) -> Iterable[Tuple[str, object]]: - """Extract locals from an iterator of key pairs.""" - if not (locals_hide_dunder or locals_hide_sunder): - yield from iter_locals - return - for key, value in iter_locals: - if locals_hide_dunder and key.startswith("__"): - continue - if locals_hide_sunder and key.startswith("_"): - continue - yield key, value - - for frame_summary, line_no in walk_tb(traceback): - filename = frame_summary.f_code.co_filename - if filename and not filename.startswith("<"): - if not os.path.isabs(filename): - filename = os.path.join(_IMPORT_CWD, filename) - if frame_summary.f_locals.get("_rich_traceback_omit", False): - continue - - frame = Frame( - filename=filename or "?", - lineno=line_no, - name=frame_summary.f_code.co_name, - locals={ - key: pretty.traverse( - value, - max_length=locals_max_length, - max_string=locals_max_string, - ) - for key, value in get_locals(frame_summary.f_locals.items()) - } - if show_locals - else None, - ) - append(frame) - if frame_summary.f_locals.get("_rich_traceback_guard", False): - del stack.frames[:] - - cause = getattr(exc_value, "__cause__", None) - if cause: - exc_type = cause.__class__ - exc_value = cause - # __traceback__ can be None, e.g. for exceptions raised by the - # 'multiprocessing' module - traceback = cause.__traceback__ - is_cause = True - continue - - cause = exc_value.__context__ - if cause and not getattr(exc_value, "__suppress_context__", False): - exc_type = cause.__class__ - exc_value = cause - traceback = cause.__traceback__ - is_cause = False - continue - # No cover, code is reached but coverage doesn't recognize it. 
- break # pragma: no cover - - trace = Trace(stacks=stacks) - return trace - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - theme = self.theme - background_style = theme.get_background_style() - token_style = theme.get_style_for_token - - traceback_theme = Theme( - { - "pretty": token_style(TextToken), - "pygments.text": token_style(Token), - "pygments.string": token_style(String), - "pygments.function": token_style(Name.Function), - "pygments.number": token_style(Number), - "repr.indent": token_style(Comment) + Style(dim=True), - "repr.str": token_style(String), - "repr.brace": token_style(TextToken) + Style(bold=True), - "repr.number": token_style(Number), - "repr.bool_true": token_style(Keyword.Constant), - "repr.bool_false": token_style(Keyword.Constant), - "repr.none": token_style(Keyword.Constant), - "scope.border": token_style(String.Delimiter), - "scope.equals": token_style(Operator), - "scope.key": token_style(Name), - "scope.key.special": token_style(Name.Constant) + Style(dim=True), - }, - inherit=False, - ) - - highlighter = ReprHighlighter() - for last, stack in loop_last(reversed(self.trace.stacks)): - if stack.frames: - stack_renderable: ConsoleRenderable = Panel( - self._render_stack(stack), - title="[traceback.title]Traceback [dim](most recent call last)", - style=background_style, - border_style="traceback.border", - expand=True, - padding=(0, 1), - ) - stack_renderable = Constrain(stack_renderable, self.width) - with console.use_theme(traceback_theme): - yield stack_renderable - if stack.syntax_error is not None: - with console.use_theme(traceback_theme): - yield Constrain( - Panel( - self._render_syntax_error(stack.syntax_error), - style=background_style, - border_style="traceback.border.syntax_error", - expand=True, - padding=(0, 1), - width=self.width, - ), - self.width, - ) - yield Text.assemble( - (f"{stack.exc_type}: ", "traceback.exc_type"), - highlighter(stack.syntax_error.msg), - ) - elif stack.exc_value: - yield Text.assemble( - (f"{stack.exc_type}: ", "traceback.exc_type"), - highlighter(stack.exc_value), - ) - else: - yield Text.assemble((f"{stack.exc_type}", "traceback.exc_type")) - - if not last: - if stack.is_cause: - yield Text.from_markup( - "\n[i]The above exception was the direct cause of the following exception:\n", - ) - else: - yield Text.from_markup( - "\n[i]During handling of the above exception, another exception occurred:\n", - ) - - @group() - def _render_syntax_error(self, syntax_error: _SyntaxError) -> RenderResult: - highlighter = ReprHighlighter() - path_highlighter = PathHighlighter() - if syntax_error.filename != "": - if os.path.exists(syntax_error.filename): - text = Text.assemble( - (f" {syntax_error.filename}", "pygments.string"), - (":", "pygments.text"), - (str(syntax_error.lineno), "pygments.number"), - style="pygments.text", - ) - yield path_highlighter(text) - syntax_error_text = highlighter(syntax_error.line.rstrip()) - syntax_error_text.no_wrap = True - offset = min(syntax_error.offset - 1, len(syntax_error_text)) - syntax_error_text.stylize("bold underline", offset, offset) - syntax_error_text += Text.from_markup( - "\n" + " " * offset + "[traceback.offset]▲[/]", - style="pygments.text", - ) - yield syntax_error_text - - @classmethod - def _guess_lexer(cls, filename: str, code: str) -> str: - ext = os.path.splitext(filename)[-1] - if not ext: - # No extension, look at first line to see if it is a hashbang - # Note, this is an educated guess and not a guarantee - # If it fails, 
the only downside is that the code is highlighted strangely - new_line_index = code.index("\n") - first_line = code[:new_line_index] if new_line_index != -1 else code - if first_line.startswith("#!") and "python" in first_line.lower(): - return "python" - try: - return cls.LEXERS.get(ext) or guess_lexer_for_filename(filename, code).name - except ClassNotFound: - return "text" - - @group() - def _render_stack(self, stack: Stack) -> RenderResult: - path_highlighter = PathHighlighter() - theme = self.theme - - def read_code(filename: str) -> str: - """Read files, and cache results on filename. - - Args: - filename (str): Filename to read - - Returns: - str: Contents of file - """ - return "".join(linecache.getlines(filename)) - - def render_locals(frame: Frame) -> Iterable[ConsoleRenderable]: - if frame.locals: - yield render_scope( - frame.locals, - title="locals", - indent_guides=self.indent_guides, - max_length=self.locals_max_length, - max_string=self.locals_max_string, - ) - - exclude_frames: Optional[range] = None - if self.max_frames != 0: - exclude_frames = range( - self.max_frames // 2, - len(stack.frames) - self.max_frames // 2, - ) - - excluded = False - for frame_index, frame in enumerate(stack.frames): - - if exclude_frames and frame_index in exclude_frames: - excluded = True - continue - - if excluded: - assert exclude_frames is not None - yield Text( - f"\n... {len(exclude_frames)} frames hidden ...", - justify="center", - style="traceback.error", - ) - excluded = False - - first = frame_index == 0 - frame_filename = frame.filename - suppressed = any(frame_filename.startswith(path) for path in self.suppress) - - if os.path.exists(frame.filename): - text = Text.assemble( - path_highlighter(Text(frame.filename, style="pygments.string")), - (":", "pygments.text"), - (str(frame.lineno), "pygments.number"), - " in ", - (frame.name, "pygments.function"), - style="pygments.text", - ) - else: - text = Text.assemble( - "in ", - (frame.name, "pygments.function"), - (":", "pygments.text"), - (str(frame.lineno), "pygments.number"), - style="pygments.text", - ) - if not frame.filename.startswith("<") and not first: - yield "" - yield text - if frame.filename.startswith("<"): - yield from render_locals(frame) - continue - if not suppressed: - try: - code = read_code(frame.filename) - if not code: - # code may be an empty string if the file doesn't exist, OR - # if the traceback filename is generated dynamically - continue - lexer_name = self._guess_lexer(frame.filename, code) - syntax = Syntax( - code, - lexer_name, - theme=theme, - line_numbers=True, - line_range=( - frame.lineno - self.extra_lines, - frame.lineno + self.extra_lines, - ), - highlight_lines={frame.lineno}, - word_wrap=self.word_wrap, - code_width=88, - indent_guides=self.indent_guides, - dedent=False, - ) - yield "" - except Exception as error: - yield Text.assemble( - (f"\n{error}", "traceback.error"), - ) - else: - yield ( - Columns( - [ - syntax, - *render_locals(frame), - ], - padding=1, - ) - if frame.locals - else syntax - ) - - -if __name__ == "__main__": # pragma: no cover - - from .console import Console - - console = Console() - import sys - - def bar(a: Any) -> None: # 这是对亚洲语言支持的测试。面对模棱两可的想法,拒绝猜测的诱惑 - one = 1 - print(one / a) - - def foo(a: Any) -> None: - _rich_traceback_guard = True - zed = { - "characters": { - "Paul Atreides", - "Vladimir Harkonnen", - "Thufir Hawat", - "Duncan Idaho", - }, - "atomic_types": (None, False, True), - } - bar(a) - - def error() -> None: - - try: - try: - foo(0) - except: - 
slfkjsldkfj # type: ignore[name-defined] - except: - console.print_exception(show_locals=True) - - error() diff --git a/spaces/prerna9811/Chord/portaudio/src/common/pa_process.h b/spaces/prerna9811/Chord/portaudio/src/common/pa_process.h deleted file mode 100644 index 444bdf54513758e849cbda67f6c2e3e190438dc4..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/common/pa_process.h +++ /dev/null @@ -1,754 +0,0 @@ -#ifndef PA_PROCESS_H -#define PA_PROCESS_H -/* - * $Id$ - * Portable Audio I/O Library callback buffer processing adapters - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2002 Phil Burk, Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup common_src - - @brief Buffer Processor prototypes. A Buffer Processor performs buffer length - adaption, coordinates sample format conversion, and interleaves/deinterleaves - channels. - -

Overview

- - The "Buffer Processor" (PaUtilBufferProcessor) manages conversion of audio - data from host buffers to user buffers and back again. Where required, the - buffer processor takes care of converting between host and user sample formats, - interleaving and deinterleaving multichannel buffers, and adapting between host - and user buffers with different lengths. The buffer processor may be used with - full and half duplex streams, for both callback streams and blocking read/write - streams. - - One of the important capabilities provided by the buffer processor is - the ability to adapt between user and host buffer sizes of different lengths - with minimum latency. Although this task is relatively easy to perform when - the host buffer size is an integer multiple of the user buffer size, the - problem is more complicated when this is not the case - especially for - full-duplex callback streams. Where necessary the adaption is implemented by - internally buffering some input and/or output data. The buffer adation - algorithm used by the buffer processor was originally implemented by - Stephan Letz for the ASIO version of PortAudio, and is described in his - Callback_adaption_.pdf which is included in the distribution. - - The buffer processor performs sample conversion using the functions provided - by pa_converters.c. - - The following sections provide an overview of how to use the buffer processor. - Interested readers are advised to consult the host API implementations for - examples of buffer processor usage. - - -

Initialization, resetting and termination

- - When a stream is opened, the buffer processor should be initialized using - PaUtil_InitializeBufferProcessor. This function initializes internal state - and allocates temporary buffers as necessary according to the supplied - configuration parameters. Some of the parameters correspond to those requested - by the user in their call to Pa_OpenStream(), others reflect the requirements - of the host API implementation - they indicate host buffer sizes, formats, - and the type of buffering which the Host API uses. The buffer processor should - be initialized for callback streams and blocking read/write streams. - - Call PaUtil_ResetBufferProcessor to clear any sample data which is present - in the buffer processor before starting to use it (for example when - Pa_StartStream is called). - - When the buffer processor is no longer used call - PaUtil_TerminateBufferProcessor. - - -
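 As a concrete illustration, a host API implementation might drive these three calls roughly as follows. This is a minimal sketch rather than part of PortAudio itself: the MyHostStream record and its open/start/close helpers are hypothetical, and the stereo paFloat32 configuration with fixed-size host buffers is only one possible choice of arguments (it assumes pa_process.h and portaudio.h are included).

 @code
 typedef struct MyHostStream               /* hypothetical host-API stream record */
 {
     PaUtilBufferProcessor bufferProcessor;
     /* ... other host-specific state ... */
 } MyHostStream;

 static PaError MyHost_OpenStream( MyHostStream *stream, double sampleRate,
                                   unsigned long framesPerUserBuffer,
                                   unsigned long framesPerHostBuffer,
                                   PaStreamCallback *streamCallback, void *userData )
 {
     /* stereo float32 on both the user and host side, fixed-size host buffers */
     return PaUtil_InitializeBufferProcessor( &stream->bufferProcessor,
             2, paFloat32, paFloat32,      /* input: channels, user format, host format */
             2, paFloat32, paFloat32,      /* output: channels, user format, host format */
             sampleRate, paNoFlag,
             framesPerUserBuffer, framesPerHostBuffer,
             paUtilFixedHostBufferSize,
             streamCallback, userData );
 }

 static void MyHost_StartStream( MyHostStream *stream )
 {
     PaUtil_ResetBufferProcessor( &stream->bufferProcessor );  /* drop stale buffered samples */
     /* ... start the host audio device ... */
 }

 static void MyHost_CloseStream( MyHostStream *stream )
 {
     PaUtil_TerminateBufferProcessor( &stream->bufferProcessor );
     /* ... release remaining host resources ... */
 }
 @endcode
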

Using the buffer processor for a callback stream

- - The buffer processor's role in a callback stream is to take host input buffers - process them with the stream callback, and fill host output buffers. For a - full duplex stream, the buffer processor handles input and output simultaneously - due to the requirements of the minimum-latency buffer adation algorithm. - - When a host buffer becomes available, the implementation should call - the buffer processor to process the buffer. The buffer processor calls the - stream callback to consume and/or produce audio data as necessary. The buffer - processor will convert sample formats, interleave/deinterleave channels, - and slice or chunk the data to the appropriate buffer lengths according to - the requirements of the stream callback and the host API. - - To process a host buffer (or a pair of host buffers for a full-duplex stream) - use the following calling sequence: - - -# Call PaUtil_BeginBufferProcessing - -# For a stream which takes input: - - Call PaUtil_SetInputFrameCount with the number of frames in the host input - buffer. - - Call one of the following functions one or more times to tell the - buffer processor about the host input buffer(s): PaUtil_SetInputChannel, - PaUtil_SetInterleavedInputChannels, PaUtil_SetNonInterleavedInputChannel. - Which function you call will depend on whether the host buffer(s) are - interleaved or not. - - If the available host data is split across two buffers (for example a - data range at the end of a circular buffer and another range at the - beginning of the circular buffer), also call - PaUtil_Set2ndInputFrameCount, PaUtil_Set2ndInputChannel, - PaUtil_Set2ndInterleavedInputChannels, - PaUtil_Set2ndNonInterleavedInputChannel as necessary to tell the buffer - processor about the second buffer. - -# For a stream which generates output: - - Call PaUtil_SetOutputFrameCount with the number of frames in the host - output buffer. - - Call one of the following functions one or more times to tell the - buffer processor about the host output buffer(s): PaUtil_SetOutputChannel, - PaUtil_SetInterleavedOutputChannels, PaUtil_SetNonInterleavedOutputChannel. - Which function you call will depend on whether the host buffer(s) are - interleaved or not. - - If the available host output buffer space is split across two buffers - (for example a data range at the end of a circular buffer and another - range at the beginning of the circular buffer), call - PaUtil_Set2ndOutputFrameCount, PaUtil_Set2ndOutputChannel, - PaUtil_Set2ndInterleavedOutputChannels, - PaUtil_Set2ndNonInterleavedOutputChannel as necessary to tell the buffer - processor about the second buffer. - -# Call PaUtil_EndBufferProcessing, this function performs the actual data - conversion and processing. - - -
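 For a simple host API with a single pair of interleaved stereo host buffers, the calling sequence above reduces to something like the sketch below. The exact prototypes of PaUtil_BeginBufferProcessing, PaUtil_SetInputFrameCount, PaUtil_SetInterleavedInputChannels, PaUtil_SetOutputFrameCount, PaUtil_SetInterleavedOutputChannels and PaUtil_EndBufferProcessing are assumed from the declarations elsewhere in this header; the hostInput/hostOutput pointers and framesPerHostBuffer count are hypothetical values supplied by the host API.

 @code
 static unsigned long MyHost_ProcessHostBuffers( PaUtilBufferProcessor *bp,
                                                 PaStreamCallbackTimeInfo *timeInfo,
                                                 float *hostInput, float *hostOutput,
                                                 unsigned long framesPerHostBuffer,
                                                 int *callbackResult )
 {
     PaUtil_BeginBufferProcessing( bp, timeInfo, 0 /* no status flags this period */ );

     /* describe the single interleaved host input buffer */
     PaUtil_SetInputFrameCount( bp, framesPerHostBuffer );
     PaUtil_SetInterleavedInputChannels( bp, 0, hostInput, 2 );

     /* describe the single interleaved host output buffer */
     PaUtil_SetOutputFrameCount( bp, framesPerHostBuffer );
     PaUtil_SetInterleavedOutputChannels( bp, 0, hostOutput, 2 );

     /* run the stream callback, performing conversion, interleaving and adaption */
     return PaUtil_EndBufferProcessing( bp, callbackResult );
 }
 @endcode
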

Using the buffer processor for a blocking read/write stream

- - Blocking read/write streams use the buffer processor to convert and copy user - output data to a host buffer, and to convert and copy host input data to - the user's buffer. The buffer processor does not perform any buffer adaption. - When using the buffer processor in a blocking read/write stream the input and - output conversion are performed separately by the PaUtil_CopyInput and - PaUtil_CopyOutput functions. - - To copy data from a host input buffer to the buffer(s) which the user supplies - to Pa_ReadStream, use the following calling sequence. - - - Repeat the following three steps until the user buffer(s) have been filled - with samples from the host input buffers: - -# Call PaUtil_SetInputFrameCount with the number of frames in the host - input buffer. - -# Call one of the following functions one or more times to tell the - buffer processor about the host input buffer(s): PaUtil_SetInputChannel, - PaUtil_SetInterleavedInputChannels, PaUtil_SetNonInterleavedInputChannel. - Which function you call will depend on whether the host buffer(s) are - interleaved or not. - -# Call PaUtil_CopyInput with the user buffer pointer (or a copy of the - array of buffer pointers for a non-interleaved stream) passed to - Pa_ReadStream, along with the number of frames in the user buffer(s). - Be careful to pass a copy of the user buffer pointers to - PaUtil_CopyInput because PaUtil_CopyInput advances the pointers to - the start of the next region to copy. - - PaUtil_CopyInput will not copy more data than is available in the - host buffer(s), so the above steps need to be repeated until the user - buffer(s) are full. - - - To copy data to the host output buffer from the user buffers(s) supplied - to Pa_WriteStream use the following calling sequence. - - - Repeat the following three steps until all frames from the user buffer(s) - have been copied to the host API: - -# Call PaUtil_SetOutputFrameCount with the number of frames in the host - output buffer. - -# Call one of the following functions one or more times to tell the - buffer processor about the host output buffer(s): PaUtil_SetOutputChannel, - PaUtil_SetInterleavedOutputChannels, PaUtil_SetNonInterleavedOutputChannel. - Which function you call will depend on whether the host buffer(s) are - interleaved or not. - -# Call PaUtil_CopyOutput with the user buffer pointer (or a copy of the - array of buffer pointers for a non-interleaved stream) passed to - Pa_WriteStream, along with the number of frames in the user buffer(s). - Be careful to pass a copy of the user buffer pointers to - PaUtil_CopyOutput because PaUtil_CopyOutput advances the pointers to - the start of the next region to copy. - - PaUtil_CopyOutput will not copy more data than fits in the host buffer(s), - so the above steps need to be repeated until all user data is copied. -*/ - - -#include "portaudio.h" -#include "pa_converters.h" -#include "pa_dither.h" - -#ifdef __cplusplus -extern "C" -{ -#endif /* __cplusplus */ - - -/** @brief Mode flag passed to PaUtil_InitializeBufferProcessor indicating the type - of buffering that the host API uses. - - The mode used depends on whether the host API or the implementation manages - the buffers, and how these buffers are used (scatter gather, circular buffer). -*/ -typedef enum { -/** The host buffer size is a fixed known size. */ - paUtilFixedHostBufferSize, - -/** The host buffer size may vary, but has a known maximum size. */ - paUtilBoundedHostBufferSize, - -/** Nothing is known about the host buffer size. 
*/ - paUtilUnknownHostBufferSize, - -/** The host buffer size varies, and the client does not require the buffer - processor to consume all of the input and fill all of the output buffer. This - is useful when the implementation has access to the host API's circular buffer - and only needs to consume/fill some of it, not necessarily all of it, with each - call to the buffer processor. This is the only mode where - PaUtil_EndBufferProcessing() may not consume the whole buffer. -*/ - paUtilVariableHostBufferSizePartialUsageAllowed -}PaUtilHostBufferSizeMode; - - -/** @brief An auxiliary data structure used internally by the buffer processor - to represent host input and output buffers. */ -typedef struct PaUtilChannelDescriptor{ - void *data; - unsigned int stride; /**< stride in samples, not bytes */ -}PaUtilChannelDescriptor; - - -/** @brief The main buffer processor data structure. - - Allocate one of these, initialize it with PaUtil_InitializeBufferProcessor - and terminate it with PaUtil_TerminateBufferProcessor. -*/ -typedef struct { - unsigned long framesPerUserBuffer; - unsigned long framesPerHostBuffer; - - PaUtilHostBufferSizeMode hostBufferSizeMode; - int useNonAdaptingProcess; - int userOutputSampleFormatIsEqualToHost; - int userInputSampleFormatIsEqualToHost; - unsigned long framesPerTempBuffer; - - unsigned int inputChannelCount; - unsigned int bytesPerHostInputSample; - unsigned int bytesPerUserInputSample; - int userInputIsInterleaved; - PaUtilConverter *inputConverter; - PaUtilZeroer *inputZeroer; - - unsigned int outputChannelCount; - unsigned int bytesPerHostOutputSample; - unsigned int bytesPerUserOutputSample; - int userOutputIsInterleaved; - PaUtilConverter *outputConverter; - PaUtilZeroer *outputZeroer; - - unsigned long initialFramesInTempInputBuffer; - unsigned long initialFramesInTempOutputBuffer; - - void *tempInputBuffer; /**< used for slips, block adaption, and conversion. */ - void **tempInputBufferPtrs; /**< storage for non-interleaved buffer pointers, NULL for interleaved user input */ - unsigned long framesInTempInputBuffer; /**< frames remaining in input buffer from previous adaption iteration */ - - void *tempOutputBuffer; /**< used for slips, block adaption, and conversion. */ - void **tempOutputBufferPtrs; /**< storage for non-interleaved buffer pointers, NULL for interleaved user output */ - unsigned long framesInTempOutputBuffer; /**< frames remaining in input buffer from previous adaption iteration */ - - PaStreamCallbackTimeInfo *timeInfo; - - PaStreamCallbackFlags callbackStatusFlags; - - int hostInputIsInterleaved; - unsigned long hostInputFrameCount[2]; - PaUtilChannelDescriptor *hostInputChannels[2]; /**< pointers to arrays of channel descriptors. - pointers are NULL for half-duplex output processing. - hostInputChannels[i].data is NULL when the caller - calls PaUtil_SetNoInput() - */ - int hostOutputIsInterleaved; - unsigned long hostOutputFrameCount[2]; - PaUtilChannelDescriptor *hostOutputChannels[2]; /**< pointers to arrays of channel descriptors. - pointers are NULL for half-duplex input processing. - hostOutputChannels[i].data is NULL when the caller - calls PaUtil_SetNoOutput() - */ - - PaUtilTriangularDitherGenerator ditherGenerator; - - double samplePeriod; - - PaStreamCallback *streamCallback; - void *userData; -} PaUtilBufferProcessor; - - -/** @name Initialization, termination, resetting and info */ -/*@{*/ - -/** Initialize a buffer processor's representation stored in a - PaUtilBufferProcessor structure. 
Be sure to call - PaUtil_TerminateBufferProcessor after finishing with a buffer processor. - - @param bufferProcessor The buffer processor structure to initialize. - - @param inputChannelCount The number of input channels as passed to - Pa_OpenStream or 0 for an output-only stream. - - @param userInputSampleFormat Format of user input samples, as passed to - Pa_OpenStream. This parameter is ignored for ouput-only streams. - - @param hostInputSampleFormat Format of host input samples. This parameter is - ignored for output-only streams. See note about host buffer interleave below. - - @param outputChannelCount The number of output channels as passed to - Pa_OpenStream or 0 for an input-only stream. - - @param userOutputSampleFormat Format of user output samples, as passed to - Pa_OpenStream. This parameter is ignored for input-only streams. - - @param hostOutputSampleFormat Format of host output samples. This parameter is - ignored for input-only streams. See note about host buffer interleave below. - - @param sampleRate Sample rate of the stream. The more accurate this is the - better - it is used for updating time stamps when adapting buffers. - - @param streamFlags Stream flags as passed to Pa_OpenStream, this parameter is - used for selecting special sample conversion options such as clipping and - dithering. - - @param framesPerUserBuffer Number of frames per user buffer, as requested - by the framesPerBuffer parameter to Pa_OpenStream. This parameter may be - zero to indicate that the user will accept any (and varying) buffer sizes. - - @param framesPerHostBuffer Specifies the number of frames per host buffer - for the fixed buffer size mode, and the maximum number of frames - per host buffer for the bounded host buffer size mode. It is ignored for - the other modes. - - @param hostBufferSizeMode A mode flag indicating the size variability of - host buffers that will be passed to the buffer processor. See - PaUtilHostBufferSizeMode for further details. - - @param streamCallback The user stream callback passed to Pa_OpenStream. - - @param userData The user data field passed to Pa_OpenStream. - - @note The interleave flag is ignored for host buffer formats. Host - interleave is determined by the use of different SetInput and SetOutput - functions. - - @return An error code indicating whether the initialization was successful. - If the error code is not PaNoError, the buffer processor was not initialized - and should not be used. - - @see Pa_OpenStream, PaUtilHostBufferSizeMode, PaUtil_TerminateBufferProcessor -*/ -PaError PaUtil_InitializeBufferProcessor( PaUtilBufferProcessor* bufferProcessor, - int inputChannelCount, PaSampleFormat userInputSampleFormat, - PaSampleFormat hostInputSampleFormat, - int outputChannelCount, PaSampleFormat userOutputSampleFormat, - PaSampleFormat hostOutputSampleFormat, - double sampleRate, - PaStreamFlags streamFlags, - unsigned long framesPerUserBuffer, /* 0 indicates don't care */ - unsigned long framesPerHostBuffer, - PaUtilHostBufferSizeMode hostBufferSizeMode, - PaStreamCallback *streamCallback, void *userData ); - - -/** Terminate a buffer processor's representation. Deallocates any temporary - buffers allocated by PaUtil_InitializeBufferProcessor. - - @param bufferProcessor The buffer processor structure to terminate. - - @see PaUtil_InitializeBufferProcessor. -*/ -void PaUtil_TerminateBufferProcessor( PaUtilBufferProcessor* bufferProcessor ); - - -/** Clear any internally buffered data. 
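To make the parameter list above concrete, here is a hedged sketch of how a host API's OpenStream() might call this function. It is an illustrative fragment only; all lowercase identifiers are assumed to be locals of that routine, derived from the host API and the user's stream parameters, and are not defined by this header.

/* Illustrative fragment only: a typical call from a host API's OpenStream().
   All lowercase identifiers are assumed to be locals of that routine. */
PaError result = PaUtil_InitializeBufferProcessor(
        &stream->bufferProcessor,
        inputChannelCount,  userInputSampleFormat,  hostInputSampleFormat,
        outputChannelCount, userOutputSampleFormat, hostOutputSampleFormat,
        sampleRate, streamFlags,
        framesPerUserBuffer,          /* 0 means the user accepts any buffer size */
        framesPerHostBuffer,          /* used by the fixed and bounded modes */
        paUtilFixedHostBufferSize,    /* or another PaUtilHostBufferSizeMode value */
        streamCallback, userData );
if( result != paNoError )
    goto error;   /* the buffer processor was not initialized and must not be used */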
If you call - PaUtil_InitializeBufferProcessor in your OpenStream routine, make sure you - call PaUtil_ResetBufferProcessor in your StartStream call. - - @param bufferProcessor The buffer processor to reset. -*/ -void PaUtil_ResetBufferProcessor( PaUtilBufferProcessor* bufferProcessor ); - - -/** Retrieve the input latency of a buffer processor, in frames. - - @param bufferProcessor The buffer processor to examine. - - @return The input latency introduced by the buffer processor, in frames. - - @see PaUtil_GetBufferProcessorOutputLatencyFrames -*/ -unsigned long PaUtil_GetBufferProcessorInputLatencyFrames( PaUtilBufferProcessor* bufferProcessor ); - -/** Retrieve the output latency of a buffer processor, in frames. - - @param bufferProcessor The buffer processor to examine. - - @return The output latency introduced by the buffer processor, in frames. - - @see PaUtil_GetBufferProcessorInputLatencyFrames -*/ -unsigned long PaUtil_GetBufferProcessorOutputLatencyFrames( PaUtilBufferProcessor* bufferProcessor ); - -/*@}*/ - - -/** @name Host buffer pointer configuration - - Functions to set host input and output buffers, used by both callback streams - and blocking read/write streams. -*/ -/*@{*/ - - -/** Set the number of frames in the input host buffer(s) specified by the - PaUtil_Set*InputChannel functions. - - @param bufferProcessor The buffer processor. - - @param frameCount The number of host input frames. A 0 frameCount indicates to - use the framesPerHostBuffer value passed to PaUtil_InitializeBufferProcessor. - - @see PaUtil_SetNoInput, PaUtil_SetInputChannel, - PaUtil_SetInterleavedInputChannels, PaUtil_SetNonInterleavedInputChannel -*/ -void PaUtil_SetInputFrameCount( PaUtilBufferProcessor* bufferProcessor, - unsigned long frameCount ); - - -/** Indicate that no input is available. This function should be used when - priming the output of a full-duplex stream opened with the - paPrimeOutputBuffersUsingStreamCallback flag. Note that it is not necessary - to call this or any other PaUtil_Set*Input* functions for output-only streams. - - @param bufferProcessor The buffer processor. -*/ -void PaUtil_SetNoInput( PaUtilBufferProcessor* bufferProcessor ); - - -/** Provide the buffer processor with a pointer to a host input channel. - - @param bufferProcessor The buffer processor. - @param channel The channel number. - @param data The buffer. - @param stride The stride from one sample to the next, in samples. For - interleaved host buffers, the stride will usually be the same as the number of - channels in the buffer. -*/ -void PaUtil_SetInputChannel( PaUtilBufferProcessor* bufferProcessor, - unsigned int channel, void *data, unsigned int stride ); - - -/** Provide the buffer processor with a pointer to a number of interleaved - host input channels. - - @param bufferProcessor The buffer processor. - @param firstChannel The first channel number. - @param data The buffer. - @param channelCount The number of interleaved channels in the buffer. If - channelCount is zero, the number of channels specified to - PaUtil_InitializeBufferProcessor will be used. -*/ -void PaUtil_SetInterleavedInputChannels( PaUtilBufferProcessor* bufferProcessor, - unsigned int firstChannel, void *data, unsigned int channelCount ); - - -/** Provide the buffer processor with a pointer to one non-interleaved host - input channel. - - @param bufferProcessor The buffer processor. - @param channel The channel number. - @param data The buffer.
-*/ -void PaUtil_SetNonInterleavedInputChannel( PaUtilBufferProcessor* bufferProcessor, - unsigned int channel, void *data ); - - -/** Use for the second buffer half when the input buffer is split in two halves. - @see PaUtil_SetInputFrameCount -*/ -void PaUtil_Set2ndInputFrameCount( PaUtilBufferProcessor* bufferProcessor, - unsigned long frameCount ); - -/** Use for the second buffer half when the input buffer is split in two halves. - @see PaUtil_SetInputChannel -*/ -void PaUtil_Set2ndInputChannel( PaUtilBufferProcessor* bufferProcessor, - unsigned int channel, void *data, unsigned int stride ); - -/** Use for the second buffer half when the input buffer is split in two halves. - @see PaUtil_SetInterleavedInputChannels -*/ -void PaUtil_Set2ndInterleavedInputChannels( PaUtilBufferProcessor* bufferProcessor, - unsigned int firstChannel, void *data, unsigned int channelCount ); - -/** Use for the second buffer half when the input buffer is split in two halves. - @see PaUtil_SetNonInterleavedInputChannel -*/ -void PaUtil_Set2ndNonInterleavedInputChannel( PaUtilBufferProcessor* bufferProcessor, - unsigned int channel, void *data ); - - -/** Set the number of frames in the output host buffer(s) specified by the - PaUtil_Set*OutputChannel functions. - - @param bufferProcessor The buffer processor. - - @param frameCount The number of host output frames. A 0 frameCount indicates to - use the framesPerHostBuffer value passed to PaUtil_InitializeBufferProcessor. - - @see PaUtil_SetOutputChannel, PaUtil_SetInterleavedOutputChannels, - PaUtil_SetNonInterleavedOutputChannel -*/ -void PaUtil_SetOutputFrameCount( PaUtilBufferProcessor* bufferProcessor, - unsigned long frameCount ); - - -/** Indicate that the output will be discarded. This function should be used - when implementing the paNeverDropInput mode for full duplex streams. - - @param bufferProcessor The buffer processor. -*/ -void PaUtil_SetNoOutput( PaUtilBufferProcessor* bufferProcessor ); - - -/** Provide the buffer processor with a pointer to a host output channel. - - @param bufferProcessor The buffer processor. - @param channel The channel number. - @param data The buffer. - @param stride The stride from one sample to the next, in samples. For - interleaved host buffers, the stride will usually be the same as the number of - channels in the buffer. -*/ -void PaUtil_SetOutputChannel( PaUtilBufferProcessor* bufferProcessor, - unsigned int channel, void *data, unsigned int stride ); - - -/** Provide the buffer processor with a pointer to a number of interleaved - host output channels. - - @param bufferProcessor The buffer processor. - @param firstChannel The first channel number. - @param data The buffer. - @param channelCount The number of interleaved channels in the buffer. If - channelCount is zero, the number of channels specified to - PaUtil_InitializeBufferProcessor will be used. -*/ -void PaUtil_SetInterleavedOutputChannels( PaUtilBufferProcessor* bufferProcessor, - unsigned int firstChannel, void *data, unsigned int channelCount ); - - -/** Provide the buffer processor with a pointer to one non-interleaved host - output channel. - - @param bufferProcessor The buffer processor. - @param channel The channel number. - @param data The buffer. -*/ -void PaUtil_SetNonInterleavedOutputChannel( PaUtilBufferProcessor* bufferProcessor, - unsigned int channel, void *data ); - - -/** Use for the second buffer half when the output buffer is split in two halves. 
- @see PaUtil_SetOutputFrameCount -*/ -void PaUtil_Set2ndOutputFrameCount( PaUtilBufferProcessor* bufferProcessor, - unsigned long frameCount ); - -/** Use for the second buffer half when the output buffer is split in two halves. - @see PaUtil_SetOutputChannel -*/ -void PaUtil_Set2ndOutputChannel( PaUtilBufferProcessor* bufferProcessor, - unsigned int channel, void *data, unsigned int stride ); - -/** Use for the second buffer half when the output buffer is split in two halves. - @see PaUtil_SetInterleavedOutputChannels -*/ -void PaUtil_Set2ndInterleavedOutputChannels( PaUtilBufferProcessor* bufferProcessor, - unsigned int firstChannel, void *data, unsigned int channelCount ); - -/** Use for the second buffer half when the output buffer is split in two halves. - @see PaUtil_SetNonInterleavedOutputChannel -*/ -void PaUtil_Set2ndNonInterleavedOutputChannel( PaUtilBufferProcessor* bufferProcessor, - unsigned int channel, void *data ); - -/*@}*/ - - -/** @name Buffer processing functions for callback streams -*/ -/*@{*/ - -/** Commence processing a host buffer (or a pair of host buffers in the - full-duplex case) for a callback stream. - - @param bufferProcessor The buffer processor. - - @param timeInfo Timing information for the first sample of the host - buffer(s). This information may be adjusted when buffer adaption is being - performed. - - @param callbackStatusFlags Flags indicating whether underruns and overruns - have occurred since the last time the buffer processor was called. -*/ -void PaUtil_BeginBufferProcessing( PaUtilBufferProcessor* bufferProcessor, - PaStreamCallbackTimeInfo* timeInfo, PaStreamCallbackFlags callbackStatusFlags ); - - -/** Finish processing a host buffer (or a pair of host buffers in the - full-duplex case) for a callback stream. - - @param bufferProcessor The buffer processor. - - @param callbackResult On input, indicates a previous callback result, and on - exit, the result of the user stream callback, if it is called. - On entry callbackResult should contain one of { paContinue, paComplete, or - paAbort}. If paComplete is passed, the stream callback will not be called - but any audio that was generated by previous stream callbacks will be copied - to the output buffer(s). You can check whether the buffer processor's internal - buffer is empty by calling PaUtil_IsBufferProcessorOutputEmpty. - - If the stream callback is called its result is stored in *callbackResult. If - the stream callback returns paComplete or paAbort, all output buffers will be - full of valid data - some of which may be zeros to account for data that - wasn't generated by the terminating callback. - - @return The number of frames processed. This usually corresponds to the - number of frames specified by the PaUtil_Set*FrameCount functions, except in - the paUtilVariableHostBufferSizePartialUsageAllowed buffer size mode when a - smaller value may be returned. -*/ -unsigned long PaUtil_EndBufferProcessing( PaUtilBufferProcessor* bufferProcessor, - int *callbackResult ); - - -/** Determine whether any callback generated output remains in the buffer - processor's internal buffers. This method may be used to determine when to - continue calling PaUtil_EndBufferProcessing() after the callback has returned - a callbackResult of paComplete. - - @param bufferProcessor The buffer processor. - - @return Returns non-zero when callback generated output remains in the internal - buffer and zero (0) when there internal buffer contains no callback generated - data. 
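For callback streams, the two functions above combine with the PaUtil_Set* host buffer functions roughly as follows. This is an illustrative sketch only (full duplex, interleaved host buffers, no split/2nd-half buffers); hostInput, hostOutput and framesPerHostBuffer stand in for whatever the host API delivers for one period.

/* Illustrative sketch: processing one host period in a full-duplex callback
   stream with interleaved host buffers. Not part of this header. */
static int ExampleProcessOneHostPeriod( PaUtilBufferProcessor *bp,
                                        PaStreamCallbackTimeInfo *timeInfo,
                                        void *hostInput, void *hostOutput,
                                        unsigned long framesPerHostBuffer )
{
    int callbackResult = paContinue;

    PaUtil_BeginBufferProcessing( bp, timeInfo, 0 /* no under/overrun flags */ );

    PaUtil_SetInputFrameCount( bp, framesPerHostBuffer );
    PaUtil_SetInterleavedInputChannels( bp, 0, hostInput, 0 );

    PaUtil_SetOutputFrameCount( bp, framesPerHostBuffer );
    PaUtil_SetInterleavedOutputChannels( bp, 0, hostOutput, 0 );

    /* invokes the user callback as needed and performs any conversion/adaption */
    PaUtil_EndBufferProcessing( bp, &callbackResult );

    /* after the callback returns paComplete, the host implementation keeps
       calling PaUtil_EndBufferProcessing() until
       PaUtil_IsBufferProcessorOutputEmpty() indicates the internal buffer
       has drained, then stops the stream */
    return callbackResult;
}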
-*/ -int PaUtil_IsBufferProcessorOutputEmpty( PaUtilBufferProcessor* bufferProcessor ); - -/*@}*/ - - -/** @name Buffer processing functions for blocking read/write streams -*/ -/*@{*/ - -/** Copy samples from host input channels set up by the PaUtil_Set*InputChannels - functions to a user supplied buffer. This function is intended for use with - blocking read/write streams. Copies the minimum of the number of - user frames (specified by the frameCount parameter) and the number of available - host frames (specified in a previous call to SetInputFrameCount()). - - @param bufferProcessor The buffer processor. - - @param buffer A pointer to the user buffer pointer, or a pointer to a pointer - to an array of user buffer pointers for a non-interleaved stream. It is - important that this parameter points to a copy of the user buffer pointers, - not to the actual user buffer pointers, because this function updates the - pointers before returning. - - @param frameCount The number of frames of data in the buffer(s) pointed to by - the buffer parameter. - - @return The number of frames copied. The buffer pointer(s) pointed to by the - buffer parameter are advanced to point to the frame(s) following the last one - filled. -*/ -unsigned long PaUtil_CopyInput( PaUtilBufferProcessor* bufferProcessor, - void **buffer, unsigned long frameCount ); - - -/* Copy samples from a user supplied buffer to host output channels set up by - the PaUtil_Set*OutputChannels functions. This function is intended for use with - blocking read/write streams. Copies the minimum of the number of - user frames (specified by the frameCount parameter) and the number of - host frames (specified in a previous call to SetOutputFrameCount()). - - @param bufferProcessor The buffer processor. - - @param buffer A pointer to the user buffer pointer, or a pointer to a pointer - to an array of user buffer pointers for a non-interleaved stream. It is - important that this parameter points to a copy of the user buffer pointers, - not to the actual user buffer pointers, because this function updates the - pointers before returning. - - @param frameCount The number of frames of data in the buffer(s) pointed to by - the buffer parameter. - - @return The number of frames copied. The buffer pointer(s) pointed to by the - buffer parameter are advanced to point to the frame(s) following the last one - copied. -*/ -unsigned long PaUtil_CopyOutput( PaUtilBufferProcessor* bufferProcessor, - const void ** buffer, unsigned long frameCount ); - - -/* Zero samples in host output channels set up by the PaUtil_Set*OutputChannels - functions. This function is useful for flushing streams. - Zeros the minimum of frameCount and the number of host frames specified in a - previous call to SetOutputFrameCount(). - - @param bufferProcessor The buffer processor. - - @param frameCount The maximum number of frames to zero. - - @return The number of frames zeroed. 
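The write-side (Pa_WriteStream) sequence mirrors the read-side sketch given earlier. The fragment below is illustrative only; GetHostOutputBuffer()/CommitHostOutputBuffer() are hypothetical placeholders for the host API, and interleaved buffers are assumed.

/* Illustrative write-side counterpart to the read sketch earlier in this file.
   GetHostOutputBuffer()/CommitHostOutputBuffer() are hypothetical. */
extern void GetHostOutputBuffer( void **buffer, unsigned long *frames );
extern void CommitHostOutputBuffer( void *buffer );

static void ExampleWriteStream( PaUtilBufferProcessor *bp,
                                const void *userBuffer, unsigned long frameCount )
{
    const void *userBufferPtr = userBuffer;  /* copy: PaUtil_CopyOutput advances it */

    while( frameCount > 0 )
    {
        void *hostBuffer;
        unsigned long hostFrames;

        GetHostOutputBuffer( &hostBuffer, &hostFrames );

        PaUtil_SetOutputFrameCount( bp, hostFrames );
        PaUtil_SetInterleavedOutputChannels( bp, 0, hostBuffer, 0 );

        frameCount -= PaUtil_CopyOutput( bp, &userBufferPtr, frameCount );

        CommitHostOutputBuffer( hostBuffer );
    }
}

PaUtil_ZeroOutput() can be driven with the same loop shape to pad the host buffer with silence, for example when flushing or draining a stream.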
-*/ -unsigned long PaUtil_ZeroOutput( PaUtilBufferProcessor* bufferProcessor, - unsigned long frameCount ); - - -/*@}*/ - - -#ifdef __cplusplus -} -#endif /* __cplusplus */ -#endif /* PA_PROCESS_H */ diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/pa_win_wasapi.c b/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/pa_win_wasapi.c deleted file mode 100644 index c76f302e68c8b35c5f3fa598a438e4c5a28497f1..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/pa_win_wasapi.c +++ /dev/null @@ -1,6534 +0,0 @@ -/* - * Portable Audio I/O Library WASAPI implementation - * Copyright (c) 2006-2010 David Viens - * Copyright (c) 2010-2019 Dmitry Kostjuchenko - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2019 Ross Bencina, Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup hostapi_src - @brief WASAPI implementation of support for a host API. - @note pa_wasapi currently requires minimum VC 2005, and the latest Vista SDK -*/ - -#include -#include -#include -#include - -// Max device count (if defined) causes max constant device count in the device list that -// enables PaWasapi_UpdateDeviceList() API and makes it possible to update WASAPI list dynamically -#ifndef PA_WASAPI_MAX_CONST_DEVICE_COUNT - #define PA_WASAPI_MAX_CONST_DEVICE_COUNT 0 // Force basic behavior by defining 0 if not defined by user -#endif - -// Fallback from Event to the Polling method in case if latency is higher than 21.33ms, as it allows to use -// 100% of CPU inside the PA's callback. -// Note: Some USB DAC drivers are buggy when Polling method is forced in Exclusive mode, audio output becomes -// unstable with a lot of interruptions, therefore this define is optional. The default behavior is to -// not change the Event mode to Polling and use the mode which user provided. -//#define PA_WASAPI_FORCE_POLL_IF_LARGE_BUFFER - -//! Poll mode time slots logging. 
-//#define PA_WASAPI_LOG_TIME_SLOTS - -// WinRT -#if defined(WINAPI_FAMILY) && (WINAPI_FAMILY == WINAPI_FAMILY_APP) - #define PA_WINRT - #define INITGUID -#endif - -// WASAPI -// using adjustments for MinGW build from @mgeier/MXE -// https://github.com/mxe/mxe/commit/f4bbc45682f021948bdaefd9fd476e2a04c4740f -#include // must be before other Wasapi headers -#if defined(_MSC_VER) && (_MSC_VER >= 1400) || defined(__MINGW64_VERSION_MAJOR) - #include - #define COBJMACROS - #include - #include - #define INITGUID // Avoid additional linkage of static libs, excessive code will be optimized out by the compiler -#ifndef _MSC_VER - #include -#endif - #include - #include - #include // Used to get IKsJackDescription interface - #undef INITGUID -// Visual Studio 2010 does not support the inline keyword -#if (_MSC_VER <= 1600) - #define inline _inline -#endif -#endif -#ifndef __MWERKS__ - #include - #include -#endif -#ifndef PA_WINRT - #include -#endif - -#include "pa_util.h" -#include "pa_allocation.h" -#include "pa_hostapi.h" -#include "pa_stream.h" -#include "pa_cpuload.h" -#include "pa_process.h" -#include "pa_win_wasapi.h" -#include "pa_debugprint.h" -#include "pa_ringbuffer.h" -#include "pa_win_coinitialize.h" - -#if !defined(NTDDI_VERSION) || (defined(__GNUC__) && (__GNUC__ <= 6) && !defined(__MINGW64__)) - - #undef WINVER - #undef _WIN32_WINNT - #define WINVER 0x0600 // VISTA - #define _WIN32_WINNT WINVER - - #ifndef WINAPI - #define WINAPI __stdcall - #endif - - #ifndef __unaligned - #define __unaligned - #endif - - #ifndef __C89_NAMELESS - #define __C89_NAMELESS - #endif - - #ifndef _AVRT_ //<< fix MinGW dummy compile by defining missing type: AVRT_PRIORITY - typedef enum _AVRT_PRIORITY - { - AVRT_PRIORITY_LOW = -1, - AVRT_PRIORITY_NORMAL, - AVRT_PRIORITY_HIGH, - AVRT_PRIORITY_CRITICAL - } AVRT_PRIORITY, *PAVRT_PRIORITY; - #endif - - #include // << for IID/CLSID - #include - #include - - #ifndef __LPCGUID_DEFINED__ - #define __LPCGUID_DEFINED__ - typedef const GUID *LPCGUID; - #endif - typedef GUID IID; - typedef GUID CLSID; - - #ifndef PROPERTYKEY_DEFINED - #define PROPERTYKEY_DEFINED - typedef struct _tagpropertykey - { - GUID fmtid; - DWORD pid; - } PROPERTYKEY; - #endif - - #ifdef __midl_proxy - #define __MIDL_CONST - #else - #define __MIDL_CONST const - #endif - - #ifdef WIN64 - #include - #define FASTCALL - #include - #include - #else - typedef struct _BYTE_BLOB - { - unsigned long clSize; - unsigned char abData[ 1 ]; - } BYTE_BLOB; - typedef /* [unique] */ __RPC_unique_pointer BYTE_BLOB *UP_BYTE_BLOB; - typedef LONGLONG REFERENCE_TIME; - #define NONAMELESSUNION - #endif - - #ifndef NT_SUCCESS - typedef LONG NTSTATUS; - #endif - - #ifndef WAVE_FORMAT_IEEE_FLOAT - #define WAVE_FORMAT_IEEE_FLOAT 0x0003 // 32-bit floating-point - #endif - - #ifndef __MINGW_EXTENSION - #if defined(__GNUC__) || defined(__GNUG__) - #define __MINGW_EXTENSION __extension__ - #else - #define __MINGW_EXTENSION - #endif - #endif - - #include - #include - #define COBJMACROS - #define INITGUID // Avoid additional linkage of static libs, excessive code will be optimized out by the compiler - #include - #include - #include - #include - #include // Used to get IKsJackDescription interface - #undef INITGUID - -#endif // NTDDI_VERSION - -// Missing declarations for WinRT -#ifdef PA_WINRT - - #define DEVICE_STATE_ACTIVE 0x00000001 - - typedef enum _EDataFlow - { - eRender = 0, - eCapture = ( eRender + 1 ) , - eAll = ( eCapture + 1 ) , - EDataFlow_enum_count = ( eAll + 1 ) - } - EDataFlow; - - typedef enum 
_EndpointFormFactor - { - RemoteNetworkDevice = 0, - Speakers = ( RemoteNetworkDevice + 1 ) , - LineLevel = ( Speakers + 1 ) , - Headphones = ( LineLevel + 1 ) , - Microphone = ( Headphones + 1 ) , - Headset = ( Microphone + 1 ) , - Handset = ( Headset + 1 ) , - UnknownDigitalPassthrough = ( Handset + 1 ) , - SPDIF = ( UnknownDigitalPassthrough + 1 ) , - HDMI = ( SPDIF + 1 ) , - UnknownFormFactor = ( HDMI + 1 ) - } - EndpointFormFactor; - -#endif - -#ifndef GUID_SECT - #define GUID_SECT -#endif - -#define __DEFINE_GUID(n,l,w1,w2,b1,b2,b3,b4,b5,b6,b7,b8) static const GUID n GUID_SECT = {l,w1,w2,{b1,b2,b3,b4,b5,b6,b7,b8}} -#define __DEFINE_IID(n,l,w1,w2,b1,b2,b3,b4,b5,b6,b7,b8) static const IID n GUID_SECT = {l,w1,w2,{b1,b2,b3,b4,b5,b6,b7,b8}} -#define __DEFINE_CLSID(n,l,w1,w2,b1,b2,b3,b4,b5,b6,b7,b8) static const CLSID n GUID_SECT = {l,w1,w2,{b1,b2,b3,b4,b5,b6,b7,b8}} -#define PA_DEFINE_CLSID(className, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8) \ - __DEFINE_CLSID(pa_CLSID_##className, 0x##l, 0x##w1, 0x##w2, 0x##b1, 0x##b2, 0x##b3, 0x##b4, 0x##b5, 0x##b6, 0x##b7, 0x##b8) -#define PA_DEFINE_IID(interfaceName, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8) \ - __DEFINE_IID(pa_IID_##interfaceName, 0x##l, 0x##w1, 0x##w2, 0x##b1, 0x##b2, 0x##b3, 0x##b4, 0x##b5, 0x##b6, 0x##b7, 0x##b8) - -// "1CB9AD4C-DBFA-4c32-B178-C2F568A703B2" -PA_DEFINE_IID(IAudioClient, 1cb9ad4c, dbfa, 4c32, b1, 78, c2, f5, 68, a7, 03, b2); -// "726778CD-F60A-4EDA-82DE-E47610CD78AA" -PA_DEFINE_IID(IAudioClient2, 726778cd, f60a, 4eda, 82, de, e4, 76, 10, cd, 78, aa); -// "7ED4EE07-8E67-4CD4-8C1A-2B7A5987AD42" -PA_DEFINE_IID(IAudioClient3, 7ed4ee07, 8e67, 4cd4, 8c, 1a, 2b, 7a, 59, 87, ad, 42); -// "1BE09788-6894-4089-8586-9A2A6C265AC5" -PA_DEFINE_IID(IMMEndpoint, 1be09788, 6894, 4089, 85, 86, 9a, 2a, 6c, 26, 5a, c5); -// "A95664D2-9614-4F35-A746-DE8DB63617E6" -PA_DEFINE_IID(IMMDeviceEnumerator, a95664d2, 9614, 4f35, a7, 46, de, 8d, b6, 36, 17, e6); -// "BCDE0395-E52F-467C-8E3D-C4579291692E" -PA_DEFINE_CLSID(IMMDeviceEnumerator,bcde0395, e52f, 467c, 8e, 3d, c4, 57, 92, 91, 69, 2e); -// "F294ACFC-3146-4483-A7BF-ADDCA7C260E2" -PA_DEFINE_IID(IAudioRenderClient, f294acfc, 3146, 4483, a7, bf, ad, dc, a7, c2, 60, e2); -// "C8ADBD64-E71E-48a0-A4DE-185C395CD317" -PA_DEFINE_IID(IAudioCaptureClient, c8adbd64, e71e, 48a0, a4, de, 18, 5c, 39, 5c, d3, 17); -// *2A07407E-6497-4A18-9787-32F79BD0D98F* Or this?? 
-PA_DEFINE_IID(IDeviceTopology, 2A07407E, 6497, 4A18, 97, 87, 32, f7, 9b, d0, d9, 8f); -// *AE2DE0E4-5BCA-4F2D-AA46-5D13F8FDB3A9* -PA_DEFINE_IID(IPart, AE2DE0E4, 5BCA, 4F2D, aa, 46, 5d, 13, f8, fd, b3, a9); -// *4509F757-2D46-4637-8E62-CE7DB944F57B* -PA_DEFINE_IID(IKsJackDescription, 4509F757, 2D46, 4637, 8e, 62, ce, 7d, b9, 44, f5, 7b); - -// Media formats: -__DEFINE_GUID(pa_KSDATAFORMAT_SUBTYPE_PCM, 0x00000001, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71 ); -__DEFINE_GUID(pa_KSDATAFORMAT_SUBTYPE_ADPCM, 0x00000002, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71 ); -__DEFINE_GUID(pa_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT, 0x00000003, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71 ); - -#ifdef __IAudioClient2_INTERFACE_DEFINED__ -typedef enum _pa_AUDCLNT_STREAMOPTIONS { - pa_AUDCLNT_STREAMOPTIONS_NONE = 0x00, - pa_AUDCLNT_STREAMOPTIONS_RAW = 0x01, - pa_AUDCLNT_STREAMOPTIONS_MATCH_FORMAT = 0x02 -} pa_AUDCLNT_STREAMOPTIONS; -typedef struct _pa_AudioClientProperties { - UINT32 cbSize; - BOOL bIsOffload; - AUDIO_STREAM_CATEGORY eCategory; - pa_AUDCLNT_STREAMOPTIONS Options; -} pa_AudioClientProperties; -#define PA_AUDIOCLIENTPROPERTIES_SIZE_CATEGORY (sizeof(pa_AudioClientProperties) - sizeof(pa_AUDCLNT_STREAMOPTIONS)) -#define PA_AUDIOCLIENTPROPERTIES_SIZE_OPTIONS sizeof(pa_AudioClientProperties) -#endif // __IAudioClient2_INTERFACE_DEFINED__ - -/* use CreateThread for CYGWIN/Windows Mobile, _beginthreadex for all others */ -#if !defined(__CYGWIN__) && !defined(_WIN32_WCE) - #define CREATE_THREAD(PROC) (HANDLE)_beginthreadex( NULL, 0, (PROC), stream, 0, &stream->dwThreadId ) - #define PA_THREAD_FUNC static unsigned WINAPI - #define PA_THREAD_ID unsigned -#else - #define CREATE_THREAD(PROC) CreateThread( NULL, 0, (PROC), stream, 0, &stream->dwThreadId ) - #define PA_THREAD_FUNC static DWORD WINAPI - #define PA_THREAD_ID DWORD -#endif - -// Thread function forward decl. -PA_THREAD_FUNC ProcThreadEvent(void *param); -PA_THREAD_FUNC ProcThreadPoll(void *param); - -// Error codes (available since Windows 7) -#ifndef AUDCLNT_E_BUFFER_ERROR - #define AUDCLNT_E_BUFFER_ERROR AUDCLNT_ERR(0x018) -#endif -#ifndef AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED - #define AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED AUDCLNT_ERR(0x019) -#endif -#ifndef AUDCLNT_E_INVALID_DEVICE_PERIOD - #define AUDCLNT_E_INVALID_DEVICE_PERIOD AUDCLNT_ERR(0x020) -#endif - -// Stream flags (available since Windows 7) -#ifndef AUDCLNT_STREAMFLAGS_SRC_DEFAULT_QUALITY - #define AUDCLNT_STREAMFLAGS_SRC_DEFAULT_QUALITY 0x08000000 -#endif -#ifndef AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM - #define AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM 0x80000000 -#endif - -#define PA_WASAPI_DEVICE_ID_LEN 256 -#define PA_WASAPI_DEVICE_NAME_LEN 128 -#ifdef PA_WINRT - #define PA_WASAPI_DEVICE_MAX_COUNT 16 -#endif - -enum { S_INPUT = 0, S_OUTPUT = 1, S_COUNT = 2, S_FULLDUPLEX = 0 }; - -// Number of packets which compose single contignous buffer. With trial and error it was calculated -// that WASAPI Input sub-system uses 6 packets per whole buffer. Please provide more information -// or corrections if available. 
-enum { WASAPI_PACKETS_PER_INPUT_BUFFER = 6 }; - -#define STATIC_ARRAY_SIZE(array) (sizeof(array)/sizeof(array[0])) - -#define PRINT(x) PA_DEBUG(x); - -#define PA_SKELETON_SET_LAST_HOST_ERROR( errorCode, errorText ) \ - PaUtil_SetLastHostErrorInfo( paWASAPI, errorCode, errorText ) - -#define PA_WASAPI__IS_FULLDUPLEX(STREAM) ((STREAM)->in.clientProc && (STREAM)->out.clientProc) - -#ifndef IF_FAILED_JUMP -#define IF_FAILED_JUMP(hr, label) if(FAILED(hr)) goto label; -#endif - -#ifndef IF_FAILED_INTERNAL_ERROR_JUMP -#define IF_FAILED_INTERNAL_ERROR_JUMP(hr, error, label) if(FAILED(hr)) { error = paInternalError; goto label; } -#endif - -#define SAFE_CLOSE(h) if ((h) != NULL) { CloseHandle((h)); (h) = NULL; } -#define SAFE_RELEASE(punk) if ((punk) != NULL) { (punk)->lpVtbl->Release((punk)); (punk) = NULL; } - -// Mixer function -typedef void (*MixMonoToStereoF) (void *__to, const void *__from, UINT32 count); - -// AVRT is the new "multimedia scheduling stuff" -#ifndef PA_WINRT -typedef BOOL (WINAPI *FAvRtCreateThreadOrderingGroup) (PHANDLE,PLARGE_INTEGER,GUID*,PLARGE_INTEGER); -typedef BOOL (WINAPI *FAvRtDeleteThreadOrderingGroup) (HANDLE); -typedef BOOL (WINAPI *FAvRtWaitOnThreadOrderingGroup) (HANDLE); -typedef HANDLE (WINAPI *FAvSetMmThreadCharacteristics) (LPCSTR,LPDWORD); -typedef BOOL (WINAPI *FAvRevertMmThreadCharacteristics)(HANDLE); -typedef BOOL (WINAPI *FAvSetMmThreadPriority) (HANDLE,AVRT_PRIORITY); -static HMODULE hDInputDLL = 0; -FAvRtCreateThreadOrderingGroup pAvRtCreateThreadOrderingGroup = NULL; -FAvRtDeleteThreadOrderingGroup pAvRtDeleteThreadOrderingGroup = NULL; -FAvRtWaitOnThreadOrderingGroup pAvRtWaitOnThreadOrderingGroup = NULL; -FAvSetMmThreadCharacteristics pAvSetMmThreadCharacteristics = NULL; -FAvRevertMmThreadCharacteristics pAvRevertMmThreadCharacteristics = NULL; -FAvSetMmThreadPriority pAvSetMmThreadPriority = NULL; -#endif - -#define _GetProc(fun, type, name) { \ - fun = (type) GetProcAddress(hDInputDLL,name); \ - if (fun == NULL) { \ - PRINT(("GetProcAddr failed for %s" ,name)); \ - return FALSE; \ - } \ - } \ - -// ------------------------------------------------------------------------------------------ -/* prototypes for functions declared in this file */ -#ifdef __cplusplus -extern "C" -{ -#endif /* __cplusplus */ -PaError PaWasapi_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex index ); -#ifdef __cplusplus -} -#endif /* __cplusplus */ -// dummy entry point for other compilers and sdks -// currently built using RC1 SDK (5600) -//#if _MSC_VER < 1400 -//PaError PaWasapi_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex hostApiIndex ) -//{ - //return paNoError; -//} -//#else - -// ------------------------------------------------------------------------------------------ -static void Terminate( struct PaUtilHostApiRepresentation *hostApi ); -static PaError IsFormatSupported( struct PaUtilHostApiRepresentation *hostApi, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate ); -static PaError OpenStream( struct PaUtilHostApiRepresentation *hostApi, - PaStream** s, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate, - unsigned long framesPerBuffer, - PaStreamFlags streamFlags, - PaStreamCallback *streamCallback, - void *userData ); -static PaError CloseStream( PaStream* stream ); -static PaError StartStream( PaStream *stream ); -static PaError StopStream( PaStream *stream ); -static PaError AbortStream( PaStream 
*stream ); -static PaError IsStreamStopped( PaStream *s ); -static PaError IsStreamActive( PaStream *stream ); -static PaTime GetStreamTime( PaStream *stream ); -static double GetStreamCpuLoad( PaStream* stream ); -static PaError ReadStream( PaStream* stream, void *buffer, unsigned long frames ); -static PaError WriteStream( PaStream* stream, const void *buffer, unsigned long frames ); -static signed long GetStreamReadAvailable( PaStream* stream ); -static signed long GetStreamWriteAvailable( PaStream* stream ); - -// ------------------------------------------------------------------------------------------ -/* - These are fields that can be gathered from IDevice and IAudioDevice PRIOR to Initialize, and - done in first pass i assume that neither of these will cause the Driver to "load", but again, - who knows how they implement their stuff - */ -typedef struct PaWasapiDeviceInfo -{ - // Device -#ifndef PA_WINRT - IMMDevice *device; -#endif - - // device Id - WCHAR deviceId[PA_WASAPI_DEVICE_ID_LEN]; - - // from GetState - DWORD state; - - // Fields filled from IAudioDevice (_prior_ to Initialize) - // from GetDevicePeriod( - REFERENCE_TIME DefaultDevicePeriod; - REFERENCE_TIME MinimumDevicePeriod; - - // Default format (setup through Control Panel by user) - WAVEFORMATEXTENSIBLE DefaultFormat; - - // Mix format (internal format used by WASAPI audio engine) - WAVEFORMATEXTENSIBLE MixFormat; - - // Fields filled from IMMEndpoint'sGetDataFlow - EDataFlow flow; - - // Form-factor - EndpointFormFactor formFactor; -} -PaWasapiDeviceInfo; - -// ------------------------------------------------------------------------------------------ -/* PaWasapiHostApiRepresentation - host api datastructure specific to this implementation */ -typedef struct -{ - PaUtilHostApiRepresentation inheritedHostApiRep; - PaUtilStreamInterface callbackStreamInterface; - PaUtilStreamInterface blockingStreamInterface; - - PaUtilAllocationGroup *allocations; - - /* implementation specific data goes here */ - - PaWinUtilComInitializationResult comInitializationResult; - - // this is the REAL number of devices, whether they are useful to PA or not! - UINT32 deviceCount; - - PaWasapiDeviceInfo *devInfo; - - // is TRUE when WOW64 Vista/7 Workaround is needed - BOOL useWOW64Workaround; -} -PaWasapiHostApiRepresentation; - -// ------------------------------------------------------------------------------------------ -/* PaWasapiAudioClientParams - audio client parameters */ -typedef struct PaWasapiAudioClientParams -{ - PaWasapiDeviceInfo *device_info; - PaStreamParameters stream_params; - PaWasapiStreamInfo wasapi_params; - UINT32 frames_per_buffer; - double sample_rate; - BOOL blocking; - BOOL full_duplex; - BOOL wow64_workaround; -} -PaWasapiAudioClientParams; - -// ------------------------------------------------------------------------------------------ -/* PaWasapiStream - a stream data structure specifically for this implementation */ -typedef struct PaWasapiSubStream -{ - IAudioClient *clientParent; -#ifndef PA_WINRT - IStream *clientStream; -#endif - IAudioClient *clientProc; - - WAVEFORMATEXTENSIBLE wavex; - UINT32 bufferSize; - REFERENCE_TIME deviceLatency; - REFERENCE_TIME period; - double latencySeconds; - UINT32 framesPerHostCallback; - AUDCLNT_SHAREMODE shareMode; - UINT32 streamFlags; // AUDCLNT_STREAMFLAGS_EVENTCALLBACK, ... 
- UINT32 flags; - PaWasapiAudioClientParams params; //!< parameters - - // Buffers - UINT32 buffers; //!< number of buffers used (from host side) - UINT32 framesPerBuffer; //!< number of frames per 1 buffer - BOOL userBufferAndHostMatch; - - // Used for Mono >> Stereo workaround, if driver does not support it - // (in Exclusive mode WASAPI usually refuses to operate with Mono (1-ch) - void *monoBuffer; //!< pointer to buffer - UINT32 monoBufferSize; //!< buffer size in bytes - MixMonoToStereoF monoMixer; //!< pointer to mixer function - - PaUtilRingBuffer *tailBuffer; //!< buffer with trailing sample for blocking mode operations (only for Input) - void *tailBufferMemory; //!< tail buffer memory region -} -PaWasapiSubStream; - -// ------------------------------------------------------------------------------------------ -/* PaWasapiHostProcessor - redirects processing data */ -typedef struct PaWasapiHostProcessor -{ - PaWasapiHostProcessorCallback processor; - void *userData; -} -PaWasapiHostProcessor; - -// ------------------------------------------------------------------------------------------ -typedef struct PaWasapiStream -{ - /* IMPLEMENT ME: rename this */ - PaUtilStreamRepresentation streamRepresentation; - PaUtilCpuLoadMeasurer cpuLoadMeasurer; - PaUtilBufferProcessor bufferProcessor; - - // input - PaWasapiSubStream in; - IAudioCaptureClient *captureClientParent; -#ifndef PA_WINRT - IStream *captureClientStream; -#endif - IAudioCaptureClient *captureClient; - IAudioEndpointVolume *inVol; - - // output - PaWasapiSubStream out; - IAudioRenderClient *renderClientParent; -#ifndef PA_WINRT - IStream *renderClientStream; -#endif - IAudioRenderClient *renderClient; - IAudioEndpointVolume *outVol; - - // event handles for event-driven processing mode - HANDLE event[S_COUNT]; - - // buffer mode - PaUtilHostBufferSizeMode bufferMode; - - // must be volatile to avoid race condition on user query while - // thread is being started - volatile BOOL running; - - PA_THREAD_ID dwThreadId; - HANDLE hThread; - HANDLE hCloseRequest; - HANDLE hThreadStart; //!< signalled by thread on start - HANDLE hThreadExit; //!< signalled by thread on exit - HANDLE hBlockingOpStreamRD; - HANDLE hBlockingOpStreamWR; - - // Host callback Output overrider - PaWasapiHostProcessor hostProcessOverrideOutput; - - // Host callback Input overrider - PaWasapiHostProcessor hostProcessOverrideInput; - - // Defines blocking/callback interface used - BOOL bBlocking; - - // Av Task (MM thread management) - HANDLE hAvTask; - - // Thread priority level - PaWasapiThreadPriority nThreadPriority; - - // State handler - PaWasapiStreamStateCallback fnStateHandler; - void *pStateHandlerUserData; -} -PaWasapiStream; - -// COM marshaling -static HRESULT MarshalSubStreamComPointers(PaWasapiSubStream *substream); -static HRESULT MarshalStreamComPointers(PaWasapiStream *stream); -static HRESULT UnmarshalSubStreamComPointers(PaWasapiSubStream *substream); -static HRESULT UnmarshalStreamComPointers(PaWasapiStream *stream); -static void ReleaseUnmarshaledSubComPointers(PaWasapiSubStream *substream); -static void ReleaseUnmarshaledComPointers(PaWasapiStream *stream); - -// Local methods -static void _StreamOnStop(PaWasapiStream *stream); -static void _StreamFinish(PaWasapiStream *stream); -static void _StreamCleanup(PaWasapiStream *stream); -static HRESULT _PollGetOutputFramesAvailable(PaWasapiStream *stream, UINT32 *available); -static HRESULT _PollGetInputFramesAvailable(PaWasapiStream *stream, UINT32 *available); -static void 
*PaWasapi_ReallocateMemory(void *prev, size_t size); -static void PaWasapi_FreeMemory(void *ptr); -static PaSampleFormat WaveToPaFormat(const WAVEFORMATEXTENSIBLE *fmtext); - -// WinRT (UWP) device list -#ifdef PA_WINRT -typedef struct PaWasapiWinrtDeviceInfo -{ - WCHAR id[PA_WASAPI_DEVICE_ID_LEN]; - WCHAR name[PA_WASAPI_DEVICE_NAME_LEN]; - EndpointFormFactor formFactor; -} -PaWasapiWinrtDeviceInfo; -typedef struct PaWasapiWinrtDeviceListRole -{ - WCHAR defaultId[PA_WASAPI_DEVICE_ID_LEN]; - PaWasapiWinrtDeviceInfo devices[PA_WASAPI_DEVICE_MAX_COUNT]; - UINT32 deviceCount; -} -PaWasapiWinrtDeviceListRole; -typedef struct PaWasapiWinrtDeviceList -{ - PaWasapiWinrtDeviceListRole render; - PaWasapiWinrtDeviceListRole capture; -} -PaWasapiWinrtDeviceList; -static PaWasapiWinrtDeviceList g_DeviceListInfo = { 0 }; -#endif - -// WinRT (UWP) device list context -#ifdef PA_WINRT -typedef struct PaWasapiWinrtDeviceListContextEntry -{ - PaWasapiWinrtDeviceInfo *info; - EDataFlow flow; -} -PaWasapiWinrtDeviceListContextEntry; -typedef struct PaWasapiWinrtDeviceListContext -{ - PaWasapiWinrtDeviceListContextEntry devices[PA_WASAPI_DEVICE_MAX_COUNT * 2]; -} -PaWasapiWinrtDeviceListContext; -#endif - -// ------------------------------------------------------------------------------------------ -#define LogHostError(HRES) __LogHostError(HRES, __FUNCTION__, __FILE__, __LINE__) -static HRESULT __LogHostError(HRESULT res, const char *func, const char *file, int line) -{ - const char *text = NULL; - switch (res) - { - case S_OK: return res; - case E_POINTER :text ="E_POINTER"; break; - case E_INVALIDARG :text ="E_INVALIDARG"; break; - - case AUDCLNT_E_NOT_INITIALIZED :text ="AUDCLNT_E_NOT_INITIALIZED"; break; - case AUDCLNT_E_ALREADY_INITIALIZED :text ="AUDCLNT_E_ALREADY_INITIALIZED"; break; - case AUDCLNT_E_WRONG_ENDPOINT_TYPE :text ="AUDCLNT_E_WRONG_ENDPOINT_TYPE"; break; - case AUDCLNT_E_DEVICE_INVALIDATED :text ="AUDCLNT_E_DEVICE_INVALIDATED"; break; - case AUDCLNT_E_NOT_STOPPED :text ="AUDCLNT_E_NOT_STOPPED"; break; - case AUDCLNT_E_BUFFER_TOO_LARGE :text ="AUDCLNT_E_BUFFER_TOO_LARGE"; break; - case AUDCLNT_E_OUT_OF_ORDER :text ="AUDCLNT_E_OUT_OF_ORDER"; break; - case AUDCLNT_E_UNSUPPORTED_FORMAT :text ="AUDCLNT_E_UNSUPPORTED_FORMAT"; break; - case AUDCLNT_E_INVALID_SIZE :text ="AUDCLNT_E_INVALID_SIZE"; break; - case AUDCLNT_E_DEVICE_IN_USE :text ="AUDCLNT_E_DEVICE_IN_USE"; break; - case AUDCLNT_E_BUFFER_OPERATION_PENDING :text ="AUDCLNT_E_BUFFER_OPERATION_PENDING"; break; - case AUDCLNT_E_THREAD_NOT_REGISTERED :text ="AUDCLNT_E_THREAD_NOT_REGISTERED"; break; - case AUDCLNT_E_EXCLUSIVE_MODE_NOT_ALLOWED :text ="AUDCLNT_E_EXCLUSIVE_MODE_NOT_ALLOWED"; break; - case AUDCLNT_E_ENDPOINT_CREATE_FAILED :text ="AUDCLNT_E_ENDPOINT_CREATE_FAILED"; break; - case AUDCLNT_E_SERVICE_NOT_RUNNING :text ="AUDCLNT_E_SERVICE_NOT_RUNNING"; break; - case AUDCLNT_E_EVENTHANDLE_NOT_EXPECTED :text ="AUDCLNT_E_EVENTHANDLE_NOT_EXPECTED"; break; - case AUDCLNT_E_EXCLUSIVE_MODE_ONLY :text ="AUDCLNT_E_EXCLUSIVE_MODE_ONLY"; break; - case AUDCLNT_E_BUFDURATION_PERIOD_NOT_EQUAL :text ="AUDCLNT_E_BUFDURATION_PERIOD_NOT_EQUAL"; break; - case AUDCLNT_E_EVENTHANDLE_NOT_SET :text ="AUDCLNT_E_EVENTHANDLE_NOT_SET"; break; - case AUDCLNT_E_INCORRECT_BUFFER_SIZE :text ="AUDCLNT_E_INCORRECT_BUFFER_SIZE"; break; - case AUDCLNT_E_BUFFER_SIZE_ERROR :text ="AUDCLNT_E_BUFFER_SIZE_ERROR"; break; - case AUDCLNT_E_CPUUSAGE_EXCEEDED :text ="AUDCLNT_E_CPUUSAGE_EXCEEDED"; break; - case AUDCLNT_E_BUFFER_ERROR :text ="AUDCLNT_E_BUFFER_ERROR"; break; - case 
AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED :text ="AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED"; break; - case AUDCLNT_E_INVALID_DEVICE_PERIOD :text ="AUDCLNT_E_INVALID_DEVICE_PERIOD"; break; - -#ifdef AUDCLNT_E_INVALID_STREAM_FLAG - case AUDCLNT_E_INVALID_STREAM_FLAG :text ="AUDCLNT_E_INVALID_STREAM_FLAG"; break; -#endif -#ifdef AUDCLNT_E_ENDPOINT_OFFLOAD_NOT_CAPABLE - case AUDCLNT_E_ENDPOINT_OFFLOAD_NOT_CAPABLE :text ="AUDCLNT_E_ENDPOINT_OFFLOAD_NOT_CAPABLE"; break; -#endif -#ifdef AUDCLNT_E_OUT_OF_OFFLOAD_RESOURCES - case AUDCLNT_E_OUT_OF_OFFLOAD_RESOURCES :text ="AUDCLNT_E_OUT_OF_OFFLOAD_RESOURCES"; break; -#endif -#ifdef AUDCLNT_E_OFFLOAD_MODE_ONLY - case AUDCLNT_E_OFFLOAD_MODE_ONLY :text ="AUDCLNT_E_OFFLOAD_MODE_ONLY"; break; -#endif -#ifdef AUDCLNT_E_NONOFFLOAD_MODE_ONLY - case AUDCLNT_E_NONOFFLOAD_MODE_ONLY :text ="AUDCLNT_E_NONOFFLOAD_MODE_ONLY"; break; -#endif -#ifdef AUDCLNT_E_RESOURCES_INVALIDATED - case AUDCLNT_E_RESOURCES_INVALIDATED :text ="AUDCLNT_E_RESOURCES_INVALIDATED"; break; -#endif -#ifdef AUDCLNT_E_RAW_MODE_UNSUPPORTED - case AUDCLNT_E_RAW_MODE_UNSUPPORTED :text ="AUDCLNT_E_RAW_MODE_UNSUPPORTED"; break; -#endif -#ifdef AUDCLNT_E_ENGINE_PERIODICITY_LOCKED - case AUDCLNT_E_ENGINE_PERIODICITY_LOCKED :text ="AUDCLNT_E_ENGINE_PERIODICITY_LOCKED"; break; -#endif -#ifdef AUDCLNT_E_ENGINE_FORMAT_LOCKED - case AUDCLNT_E_ENGINE_FORMAT_LOCKED :text ="AUDCLNT_E_ENGINE_FORMAT_LOCKED"; break; -#endif - - case AUDCLNT_S_BUFFER_EMPTY :text ="AUDCLNT_S_BUFFER_EMPTY"; break; - case AUDCLNT_S_THREAD_ALREADY_REGISTERED :text ="AUDCLNT_S_THREAD_ALREADY_REGISTERED"; break; - case AUDCLNT_S_POSITION_STALLED :text ="AUDCLNT_S_POSITION_STALLED"; break; - - // other windows common errors: - case CO_E_NOTINITIALIZED :text ="CO_E_NOTINITIALIZED: you must call CoInitialize() before Pa_OpenStream()"; break; - - default: - text = "UNKNOWN ERROR"; - } - PRINT(("WASAPI ERROR HRESULT: 0x%X : %s\n [FUNCTION: %s FILE: %s {LINE: %d}]\n", res, text, func, file, line)); -#ifndef PA_ENABLE_DEBUG_OUTPUT - (void)func; (void)file; (void)line; -#endif - PA_SKELETON_SET_LAST_HOST_ERROR(res, text); - return res; -} - -// ------------------------------------------------------------------------------------------ -#define LogPaError(PAERR) __LogPaError(PAERR, __FUNCTION__, __FILE__, __LINE__) -static PaError __LogPaError(PaError err, const char *func, const char *file, int line) -{ - if (err == paNoError) - return err; - - PRINT(("WASAPI ERROR PAERROR: %i : %s\n [FUNCTION: %s FILE: %s {LINE: %d}]\n", err, Pa_GetErrorText(err), func, file, line)); -#ifndef PA_ENABLE_DEBUG_OUTPUT - (void)func; (void)file; (void)line; -#endif - return err; -} - -// ------------------------------------------------------------------------------------------ -/*! \class ThreadSleepScheduler - Allows to emulate thread sleep of less than 1 millisecond under Windows. Scheduler - calculates number of times the thread must run until next sleep of 1 millisecond. - It does not make thread sleeping for real number of microseconds but rather controls - how many of imaginary microseconds the thread task can allow thread to sleep. -*/ -typedef struct ThreadIdleScheduler -{ - UINT32 m_idle_microseconds; //!< number of microseconds to sleep - UINT32 m_next_sleep; //!< next sleep round - UINT32 m_i; //!< current round iterator position - UINT32 m_resolution; //!< resolution in number of milliseconds -} -ThreadIdleScheduler; - -//! Setup scheduler. 
-static void ThreadIdleScheduler_Setup(ThreadIdleScheduler *sched, UINT32 resolution, UINT32 microseconds) -{ - assert(microseconds != 0); - assert(resolution != 0); - assert((resolution * 1000) >= microseconds); - - memset(sched, 0, sizeof(*sched)); - - sched->m_idle_microseconds = microseconds; - sched->m_resolution = resolution; - sched->m_next_sleep = (resolution * 1000) / microseconds; -} - -//! Iterate and check if can sleep. -static inline UINT32 ThreadIdleScheduler_NextSleep(ThreadIdleScheduler *sched) -{ - // advance and check if thread can sleep - if (++sched->m_i == sched->m_next_sleep) - { - sched->m_i = 0; - return sched->m_resolution; - } - return 0; -} - -// ------------------------------------------------------------------------------------------ -typedef struct _SystemTimer -{ - INT32 granularity; - -} SystemTimer; -static LARGE_INTEGER g_SystemTimerFrequency; -static BOOL g_SystemTimerUseQpc = FALSE; - -//! Set granularity of the system timer. -static BOOL SystemTimer_SetGranularity(SystemTimer *timer, UINT32 granularity) -{ -#ifndef PA_WINRT - TIMECAPS caps; - - timer->granularity = granularity; - - if (timeGetDevCaps(&caps, sizeof(caps)) == MMSYSERR_NOERROR) - { - if (timer->granularity < (INT32)caps.wPeriodMin) - timer->granularity = (INT32)caps.wPeriodMin; - } - - if (timeBeginPeriod(timer->granularity) != TIMERR_NOERROR) - { - PRINT(("SetSystemTimer: timeBeginPeriod(1) failed!\n")); - - timer->granularity = 10; - return FALSE; - } -#else - (void)granularity; - - // UWP does not support increase of the timer precision change and thus calling WaitForSingleObject with anything - // below 10 milliseconds will cause underruns for input and output stream. - timer->granularity = 10; -#endif - - return TRUE; -} - -//! Restore granularity of the system timer. -static void SystemTimer_RestoreGranularity(SystemTimer *timer) -{ -#ifndef PA_WINRT - if (timer->granularity != 0) - { - if (timeEndPeriod(timer->granularity) != TIMERR_NOERROR) - { - PRINT(("RestoreSystemTimer: timeEndPeriod(1) failed!\n")); - } - } -#else - (void)timer; -#endif -} - -//! Initialize high-resolution time getter. -static void SystemTimer_InitializeTimeGetter() -{ - g_SystemTimerUseQpc = QueryPerformanceFrequency(&g_SystemTimerFrequency); -} - -//! Get high-resolution time in milliseconds (using QPC by default). 
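As a usage illustration for the ThreadIdleScheduler above (a sketch, not code from this file): with a 1 ms timer resolution and a requested 500 microsecond idle, the scheduler makes a polling loop sleep for 1 ms on every second iteration, so the average idle per iteration approximates the requested value.

/* Illustrative sketch only: a polling loop driven by ThreadIdleScheduler.
   With resolution = 1 ms and microseconds = 500, Sleep(1) runs every 2nd pass. */
static void ExamplePollingLoop( volatile int *keepRunning )
{
    ThreadIdleScheduler sched;
    ThreadIdleScheduler_Setup( &sched, 1 /* ms resolution */, 500 /* microseconds */ );

    while( *keepRunning )
    {
        UINT32 sleepMs = ThreadIdleScheduler_NextSleep( &sched );
        if( sleepMs != 0 )
            Sleep( sleepMs );

        /* ... poll the audio client and transfer data here ... */
    }
}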
-static inline LONGLONG SystemTimer_GetTime(SystemTimer *timer) -{ - (void)timer; - - // QPC: https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps - if (g_SystemTimerUseQpc) - { - LARGE_INTEGER now; - QueryPerformanceCounter(&now); - return (now.QuadPart * 1000LL) / g_SystemTimerFrequency.QuadPart; - } - else - { - #ifdef PA_WINRT - return GetTickCount64(); - #else - return timeGetTime(); - #endif - } -} - -// ------------------------------------------------------------------------------------------ -/*static double nano100ToMillis(REFERENCE_TIME ref) -{ - // 1 nano = 0.000000001 seconds - //100 nano = 0.0000001 seconds - //100 nano = 0.0001 milliseconds - return ((double)ref) * 0.0001; -}*/ - -// ------------------------------------------------------------------------------------------ -static double nano100ToSeconds(REFERENCE_TIME ref) -{ - // 1 nano = 0.000000001 seconds - //100 nano = 0.0000001 seconds - //100 nano = 0.0001 milliseconds - return ((double)ref) * 0.0000001; -} - -// ------------------------------------------------------------------------------------------ -/*static REFERENCE_TIME MillisTonano100(double ref) -{ - // 1 nano = 0.000000001 seconds - //100 nano = 0.0000001 seconds - //100 nano = 0.0001 milliseconds - return (REFERENCE_TIME)(ref / 0.0001); -}*/ - -// ------------------------------------------------------------------------------------------ -static REFERENCE_TIME SecondsTonano100(double ref) -{ - // 1 nano = 0.000000001 seconds - //100 nano = 0.0000001 seconds - //100 nano = 0.0001 milliseconds - return (REFERENCE_TIME)(ref / 0.0000001); -} - -// ------------------------------------------------------------------------------------------ -// Makes Hns period from frames and sample rate -static REFERENCE_TIME MakeHnsPeriod(UINT32 nFrames, DWORD nSamplesPerSec) -{ - return (REFERENCE_TIME)((10000.0 * 1000 / nSamplesPerSec * nFrames) + 0.5); -} - -// ------------------------------------------------------------------------------------------ -// Converts PaSampleFormat to bits per sample value -// Note: paCustomFormat stands for 8.24 format (24-bits inside 32-bit containers) -static WORD PaSampleFormatToBitsPerSample(PaSampleFormat format_id) -{ - switch (format_id & ~paNonInterleaved) - { - case paFloat32: - case paInt32: return 32; - case paCustomFormat: - case paInt24: return 24; - case paInt16: return 16; - case paInt8: - case paUInt8: return 8; - } - return 0; -} - -// ------------------------------------------------------------------------------------------ -// Convert PaSampleFormat to valid sample format for I/O, e.g. if paCustomFormat is specified -// it will be converted to paInt32, other formats pass through -// Note: paCustomFormat stands for 8.24 format (24-bits inside 32-bit containers) -static PaSampleFormat GetSampleFormatForIO(PaSampleFormat format_id) -{ - return ((format_id & ~paNonInterleaved) == paCustomFormat ? - (paInt32 | (format_id & paNonInterleaved ? 
paNonInterleaved : 0)) : format_id); -} - -// ------------------------------------------------------------------------------------------ -// Converts Hns period into number of frames -static UINT32 MakeFramesFromHns(REFERENCE_TIME hnsPeriod, UINT32 nSamplesPerSec) -{ - UINT32 nFrames = (UINT32)( // frames = - 1.0 * hnsPeriod * // hns * - nSamplesPerSec / // (frames / s) / - 1000 / // (ms / s) / - 10000 // (hns / s) / - + 0.5 // rounding - ); - return nFrames; -} - -// Aligning function type -typedef UINT32 (*ALIGN_FUNC) (UINT32 v, UINT32 align); - -// ------------------------------------------------------------------------------------------ -// Aligns 'v' backwards -static UINT32 ALIGN_BWD(UINT32 v, UINT32 align) -{ - return ((v - (align ? v % align : 0))); -} - -// ------------------------------------------------------------------------------------------ -// Aligns 'v' forward -static UINT32 ALIGN_FWD(UINT32 v, UINT32 align) -{ - UINT32 remainder = (align ? (v % align) : 0); - if (remainder == 0) - return v; - return v + (align - remainder); -} - -// ------------------------------------------------------------------------------------------ -// Get next value power of 2 -static UINT32 ALIGN_NEXT_POW2(UINT32 v) -{ - UINT32 v2 = 1; - while (v > (v2 <<= 1)) { } - v = v2; - return v; -} - -// ------------------------------------------------------------------------------------------ -// Aligns WASAPI buffer to 128 byte packet boundary. HD Audio will fail to play if buffer -// is misaligned. This problem was solved in Windows 7 were AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED -// is thrown although we must align for Vista anyway. -static UINT32 AlignFramesPerBuffer(UINT32 nFrames, UINT32 nBlockAlign, ALIGN_FUNC pAlignFunc) -{ -#define HDA_PACKET_SIZE (128) - - UINT32 bytes = nFrames * nBlockAlign; - UINT32 packets; - - // align to a HD Audio packet size - bytes = pAlignFunc(bytes, HDA_PACKET_SIZE); - - // atlest 1 frame must be available - if (bytes < HDA_PACKET_SIZE) - bytes = HDA_PACKET_SIZE; - - packets = bytes / HDA_PACKET_SIZE; - bytes = packets * HDA_PACKET_SIZE; - nFrames = bytes / nBlockAlign; - - // WASAPI frames are always aligned to at least 8 - nFrames = ALIGN_FWD(nFrames, 8); - - return nFrames; - -#undef HDA_PACKET_SIZE -} - -// ------------------------------------------------------------------------------------------ -static UINT32 GetFramesSleepTime(REFERENCE_TIME nFrames, REFERENCE_TIME nSamplesPerSec) -{ - REFERENCE_TIME nDuration; - if (nSamplesPerSec == 0) - return 0; - -#define REFTIMES_PER_SEC 10000000LL -#define REFTIMES_PER_MILLISEC 10000LL - - // Calculate the actual duration of the allocated buffer. - nDuration = (REFTIMES_PER_SEC * nFrames) / nSamplesPerSec; - return (UINT32)(nDuration / REFTIMES_PER_MILLISEC); - -#undef REFTIMES_PER_SEC -#undef REFTIMES_PER_MILLISEC -} - -// ------------------------------------------------------------------------------------------ -static UINT32 GetFramesSleepTimeMicroseconds(REFERENCE_TIME nFrames, REFERENCE_TIME nSamplesPerSec) -{ - REFERENCE_TIME nDuration; - if (nSamplesPerSec == 0) - return 0; - -#define REFTIMES_PER_SEC 10000000LL -#define REFTIMES_PER_MILLISEC 10000LL - - // Calculate the actual duration of the allocated buffer. 
- nDuration = (REFTIMES_PER_SEC * nFrames) / nSamplesPerSec; - return (UINT32)(nDuration / 10); - -#undef REFTIMES_PER_SEC -#undef REFTIMES_PER_MILLISEC -} - -// ------------------------------------------------------------------------------------------ -#ifndef PA_WINRT -static BOOL SetupAVRT() -{ - hDInputDLL = LoadLibraryA("avrt.dll"); - if (hDInputDLL == NULL) - return FALSE; - - _GetProc(pAvRtCreateThreadOrderingGroup, FAvRtCreateThreadOrderingGroup, "AvRtCreateThreadOrderingGroup"); - _GetProc(pAvRtDeleteThreadOrderingGroup, FAvRtDeleteThreadOrderingGroup, "AvRtDeleteThreadOrderingGroup"); - _GetProc(pAvRtWaitOnThreadOrderingGroup, FAvRtWaitOnThreadOrderingGroup, "AvRtWaitOnThreadOrderingGroup"); - _GetProc(pAvSetMmThreadCharacteristics, FAvSetMmThreadCharacteristics, "AvSetMmThreadCharacteristicsA"); - _GetProc(pAvRevertMmThreadCharacteristics,FAvRevertMmThreadCharacteristics,"AvRevertMmThreadCharacteristics"); - _GetProc(pAvSetMmThreadPriority, FAvSetMmThreadPriority, "AvSetMmThreadPriority"); - - return pAvRtCreateThreadOrderingGroup && - pAvRtDeleteThreadOrderingGroup && - pAvRtWaitOnThreadOrderingGroup && - pAvSetMmThreadCharacteristics && - pAvRevertMmThreadCharacteristics && - pAvSetMmThreadPriority; -} -#endif - -// ------------------------------------------------------------------------------------------ -static void CloseAVRT() -{ -#ifndef PA_WINRT - if (hDInputDLL != NULL) - FreeLibrary(hDInputDLL); - hDInputDLL = NULL; -#endif -} - -// ------------------------------------------------------------------------------------------ -static BOOL IsWow64() -{ -#ifndef PA_WINRT - - // http://msdn.microsoft.com/en-us/library/ms684139(VS.85).aspx - - typedef BOOL (WINAPI *LPFN_ISWOW64PROCESS) (HANDLE, PBOOL); - LPFN_ISWOW64PROCESS fnIsWow64Process; - - BOOL bIsWow64 = FALSE; - - // IsWow64Process is not available on all supported versions of Windows. - // Use GetModuleHandle to get a handle to the DLL that contains the function - // and GetProcAddress to get a pointer to the function if available. 
-
-    fnIsWow64Process = (LPFN_ISWOW64PROCESS) GetProcAddress(
-        GetModuleHandleA("kernel32"), "IsWow64Process");
-
-    if (fnIsWow64Process == NULL)
-        return FALSE;
-
-    if (!fnIsWow64Process(GetCurrentProcess(), &bIsWow64))
-        return FALSE;
-
-    return bIsWow64;
-
-#else
-
-    return FALSE;
-
-#endif
-}
-
-// ------------------------------------------------------------------------------------------
-typedef enum EWindowsVersion
-{
-    WINDOWS_UNKNOWN = 0,
-    WINDOWS_VISTA_SERVER2008,
-    WINDOWS_7_SERVER2008R2,
-    WINDOWS_8_SERVER2012,
-    WINDOWS_8_1_SERVER2012R2,
-    WINDOWS_10_SERVER2016,
-    WINDOWS_FUTURE
-}
-EWindowsVersion;
-// Alternative way of checking the Windows version (allows checking the version on Windows 8.1 and up)
-#ifndef PA_WINRT
-static BOOL IsWindowsVersionOrGreater(WORD wMajorVersion, WORD wMinorVersion, WORD wServicePackMajor)
-{
-    typedef ULONGLONG (NTAPI *LPFN_VERSETCONDITIONMASK)(ULONGLONG ConditionMask, DWORD TypeMask, BYTE Condition);
-    typedef BOOL (WINAPI *LPFN_VERIFYVERSIONINFO)(LPOSVERSIONINFOEXA lpVersionInformation, DWORD dwTypeMask, DWORDLONG dwlConditionMask);
-
-    LPFN_VERSETCONDITIONMASK fnVerSetConditionMask;
-    LPFN_VERIFYVERSIONINFO fnVerifyVersionInfo;
-    OSVERSIONINFOEXA osvi = { sizeof(osvi), 0, 0, 0, 0, {0}, 0, 0 };
-    DWORDLONG dwlConditionMask;
-
-    fnVerSetConditionMask = (LPFN_VERSETCONDITIONMASK)GetProcAddress(GetModuleHandleA("kernel32"), "VerSetConditionMask");
-    fnVerifyVersionInfo = (LPFN_VERIFYVERSIONINFO)GetProcAddress(GetModuleHandleA("kernel32"), "VerifyVersionInfoA");
-
-    if ((fnVerSetConditionMask == NULL) || (fnVerifyVersionInfo == NULL))
-        return FALSE;
-
-    dwlConditionMask = fnVerSetConditionMask(
-        fnVerSetConditionMask(
-            fnVerSetConditionMask(
-                0, VER_MAJORVERSION, VER_GREATER_EQUAL),
-            VER_MINORVERSION, VER_GREATER_EQUAL),
-        VER_SERVICEPACKMAJOR, VER_GREATER_EQUAL);
-
-    osvi.dwMajorVersion = wMajorVersion;
-    osvi.dwMinorVersion = wMinorVersion;
-    osvi.wServicePackMajor = wServicePackMajor;
-
-    return (fnVerifyVersionInfo(&osvi, VER_MAJORVERSION | VER_MINORVERSION | VER_SERVICEPACKMAJOR, dwlConditionMask) != FALSE);
-}
-#endif
-// Get Windows version
-static EWindowsVersion GetWindowsVersion()
-{
-#ifndef PA_WINRT
-    static EWindowsVersion version = WINDOWS_UNKNOWN;
-
-    if (version == WINDOWS_UNKNOWN)
-    {
-        DWORD dwMajorVersion = 0xFFFFFFFFU, dwMinorVersion = 0, dwBuild = 0;
-
-        // RTL_OSVERSIONINFOW equals OSVERSIONINFOW but it is missing in the MinGW winnt.h header,
-        // thus use OSVERSIONINFOW for greater portability
-        typedef NTSTATUS (WINAPI *LPFN_RTLGETVERSION)(POSVERSIONINFOW lpVersionInformation);
-        LPFN_RTLGETVERSION fnRtlGetVersion;
-
-        #define NTSTATUS_SUCCESS ((NTSTATUS)0x00000000L)
-
-        // RtlGetVersion must be able to provide the true Windows version (Windows 10 may be reported as Windows 8
-        // by the GetVersion API)
-        if ((fnRtlGetVersion = (LPFN_RTLGETVERSION)GetProcAddress(GetModuleHandleA("ntdll"), "RtlGetVersion")) != NULL)
-        {
-            OSVERSIONINFOW ver = { sizeof(OSVERSIONINFOW), 0, 0, 0, 0, {0} };
-
-            PRINT(("WASAPI: getting Windows version with RtlGetVersion()\n"));
-
-            if (fnRtlGetVersion(&ver) == NTSTATUS_SUCCESS)
-            {
-                dwMajorVersion = ver.dwMajorVersion;
-                dwMinorVersion = ver.dwMinorVersion;
-                dwBuild = ver.dwBuildNumber;
-            }
-        }
-
-        #undef NTSTATUS_SUCCESS
-
-        // fallback to GetVersion if RtlGetVersion is missing
-        if (dwMajorVersion == 0xFFFFFFFFU)
-        {
-            typedef DWORD (WINAPI *LPFN_GETVERSION)(VOID);
-            LPFN_GETVERSION fnGetVersion;
-
-            if ((fnGetVersion = (LPFN_GETVERSION)GetProcAddress(GetModuleHandleA("kernel32"), "GetVersion")) != NULL)
- { - DWORD dwVersion; - - PRINT(("WASAPI: getting Windows version with GetVersion()\n")); - - dwVersion = fnGetVersion(); - - dwMajorVersion = (DWORD)(LOBYTE(LOWORD(dwVersion))); - dwMinorVersion = (DWORD)(HIBYTE(LOWORD(dwVersion))); - - if (dwVersion < 0x80000000) - dwBuild = (DWORD)(HIWORD(dwVersion)); - } - } - - if (dwMajorVersion != 0xFFFFFFFFU) - { - switch (dwMajorVersion) - { - case 0: - case 1: - case 2: - case 3: - case 4: - case 5: - break; // skip lower - case 6: - switch (dwMinorVersion) - { - case 0: version = WINDOWS_VISTA_SERVER2008; break; - case 1: version = WINDOWS_7_SERVER2008R2; break; - case 2: version = WINDOWS_8_SERVER2012; break; - case 3: version = WINDOWS_8_1_SERVER2012R2; break; - default: version = WINDOWS_FUTURE; break; - } - break; - case 10: - switch (dwMinorVersion) - { - case 0: version = WINDOWS_10_SERVER2016; break; - default: version = WINDOWS_FUTURE; break; - } - break; - default: - version = WINDOWS_FUTURE; - break; - } - } - // fallback to VerifyVersionInfo if RtlGetVersion and GetVersion are missing - else - { - PRINT(("WASAPI: getting Windows version with VerifyVersionInfo()\n")); - - if (IsWindowsVersionOrGreater(10, 0, 0)) - version = WINDOWS_10_SERVER2016; - else - if (IsWindowsVersionOrGreater(6, 3, 0)) - version = WINDOWS_8_1_SERVER2012R2; - else - if (IsWindowsVersionOrGreater(6, 2, 0)) - version = WINDOWS_8_SERVER2012; - else - if (IsWindowsVersionOrGreater(6, 1, 0)) - version = WINDOWS_7_SERVER2008R2; - else - if (IsWindowsVersionOrGreater(6, 0, 0)) - version = WINDOWS_VISTA_SERVER2008; - else - version = WINDOWS_FUTURE; - } - - PRINT(("WASAPI: Windows version = %d\n", version)); - } - - return version; -#else - #if (_WIN32_WINNT >= _WIN32_WINNT_WIN10) - return WINDOWS_10_SERVER2016; - #else - return WINDOWS_8_SERVER2012; - #endif -#endif -} - -// ------------------------------------------------------------------------------------------ -static BOOL UseWOW64Workaround() -{ - // note: WOW64 bug is common to Windows Vista x64, thus we fall back to safe Poll-driven - // method. Windows 7 x64 seems has WOW64 bug fixed. 
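-    // i.e. fall back to the Poll-driven method only when this 32-bit process runs under
-    // WOW64 on Windows Vista / Server 2008 (see the return expression below)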
- - return (IsWow64() && (GetWindowsVersion() == WINDOWS_VISTA_SERVER2008)); -} - -// ------------------------------------------------------------------------------------------ -static UINT32 GetAudioClientVersion() -{ - if (GetWindowsVersion() >= WINDOWS_10_SERVER2016) - return 3; - else - if (GetWindowsVersion() >= WINDOWS_8_SERVER2012) - return 2; - - return 1; -} - -// ------------------------------------------------------------------------------------------ -static const IID *GetAudioClientIID() -{ - static const IID *cli_iid = NULL; - if (cli_iid == NULL) - { - UINT32 cli_version = GetAudioClientVersion(); - switch (cli_version) - { - case 3: cli_iid = &pa_IID_IAudioClient3; break; - case 2: cli_iid = &pa_IID_IAudioClient2; break; - default: cli_iid = &pa_IID_IAudioClient; break; - } - - PRINT(("WASAPI: IAudioClient version = %d\n", cli_version)); - } - - return cli_iid; -} - -// ------------------------------------------------------------------------------------------ -typedef enum EMixDirection -{ - MIX_DIR__1TO2, //!< mix one channel to L and R - MIX_DIR__2TO1, //!< mix L and R channels to one channel - MIX_DIR__2TO1_L //!< mix only L channel (of total 2 channels) to one channel -} -EMixDirection; - -// ------------------------------------------------------------------------------------------ -#define _WASAPI_MONO_TO_STEREO_MIXER_1_TO_2(TYPE)\ - TYPE * __restrict to = (TYPE *)__to;\ - const TYPE * __restrict from = (const TYPE *)__from;\ - const TYPE * __restrict end = from + count;\ - while (from != end)\ - {\ - to[0] = to[1] = *from ++;\ - to += 2;\ - } - -// ------------------------------------------------------------------------------------------ -#define _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_FLT32(TYPE)\ - TYPE * __restrict to = (TYPE *)__to;\ - const TYPE * __restrict from = (const TYPE *)__from;\ - const TYPE * __restrict end = to + count;\ - while (to != end)\ - {\ - *to ++ = (TYPE)((float)(from[0] + from[1]) * 0.5f);\ - from += 2;\ - } - -// ------------------------------------------------------------------------------------------ -#define _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_INT32(TYPE)\ - TYPE * __restrict to = (TYPE *)__to;\ - const TYPE * __restrict from = (const TYPE *)__from;\ - const TYPE * __restrict end = to + count;\ - while (to != end)\ - {\ - *to ++ = (TYPE)(((INT32)from[0] + (INT32)from[1]) >> 1);\ - from += 2;\ - } - -// ------------------------------------------------------------------------------------------ -#define _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_INT64(TYPE)\ - TYPE * __restrict to = (TYPE *)__to;\ - const TYPE * __restrict from = (const TYPE *)__from;\ - const TYPE * __restrict end = to + count;\ - while (to != end)\ - {\ - *to ++ = (TYPE)(((INT64)from[0] + (INT64)from[1]) >> 1);\ - from += 2;\ - } - -// ------------------------------------------------------------------------------------------ -#define _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_L(TYPE)\ - TYPE * __restrict to = (TYPE *)__to;\ - const TYPE * __restrict from = (const TYPE *)__from;\ - const TYPE * __restrict end = to + count;\ - while (to != end)\ - {\ - *to ++ = from[0];\ - from += 2;\ - } - -// ------------------------------------------------------------------------------------------ -static void _MixMonoToStereo_1TO2_8(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_1_TO_2(BYTE); } -static void _MixMonoToStereo_1TO2_16(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_1_TO_2(short); } -static void _MixMonoToStereo_1TO2_8_24(void *__to, 
const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_1_TO_2(int); /* !!! int24 data is contained in 32-bit containers*/ } -static void _MixMonoToStereo_1TO2_32(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_1_TO_2(int); } -static void _MixMonoToStereo_1TO2_32f(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_1_TO_2(float); } -static void _MixMonoToStereo_1TO2_24(void *__to, const void *__from, UINT32 count) -{ - const UCHAR * __restrict from = (const UCHAR *)__from; - UCHAR * __restrict to = (UCHAR *)__to; - const UCHAR * __restrict end = to + (count * (2 * 3)); - - while (to != end) - { - to[0] = to[3] = from[0]; - to[1] = to[4] = from[1]; - to[2] = to[5] = from[2]; - - from += 3; - to += (2 * 3); - } -} - -// ------------------------------------------------------------------------------------------ -static void _MixMonoToStereo_2TO1_8(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_INT32(BYTE); } -static void _MixMonoToStereo_2TO1_16(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_INT32(short); } -static void _MixMonoToStereo_2TO1_8_24(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_INT32(int); /* !!! int24 data is contained in 32-bit containers*/ } -static void _MixMonoToStereo_2TO1_32(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_INT64(int); } -static void _MixMonoToStereo_2TO1_32f(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_FLT32(float); } -static void _MixMonoToStereo_2TO1_24(void *__to, const void *__from, UINT32 count) -{ - const UCHAR * __restrict from = (const UCHAR *)__from; - UCHAR * __restrict to = (UCHAR *)__to; - const UCHAR * __restrict end = to + (count * 3); - PaInt32 tempL, tempR, tempM; - - while (to != end) - { - tempL = (((PaInt32)from[0]) << 8); - tempL = tempL | (((PaInt32)from[1]) << 16); - tempL = tempL | (((PaInt32)from[2]) << 24); - - tempR = (((PaInt32)from[3]) << 8); - tempR = tempR | (((PaInt32)from[4]) << 16); - tempR = tempR | (((PaInt32)from[5]) << 24); - - tempM = (tempL + tempR) >> 1; - - to[0] = (UCHAR)(tempM >> 8); - to[1] = (UCHAR)(tempM >> 16); - to[2] = (UCHAR)(tempM >> 24); - - from += (2 * 3); - to += 3; - } -} - -// ------------------------------------------------------------------------------------------ -static void _MixMonoToStereo_2TO1_8_L(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_L(BYTE); } -static void _MixMonoToStereo_2TO1_16_L(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_L(short); } -static void _MixMonoToStereo_2TO1_8_24_L(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_L(int); /* !!! 
int24 data is contained in 32-bit containers*/ } -static void _MixMonoToStereo_2TO1_32_L(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_L(int); } -static void _MixMonoToStereo_2TO1_32f_L(void *__to, const void *__from, UINT32 count) { _WASAPI_MONO_TO_STEREO_MIXER_2_TO_1_L(float); } -static void _MixMonoToStereo_2TO1_24_L(void *__to, const void *__from, UINT32 count) -{ - const UCHAR * __restrict from = (const UCHAR *)__from; - UCHAR * __restrict to = (UCHAR *)__to; - const UCHAR * __restrict end = to + (count * 3); - - while (to != end) - { - to[0] = from[0]; - to[1] = from[1]; - to[2] = from[2]; - - from += (2 * 3); - to += 3; - } -} - -// ------------------------------------------------------------------------------------------ -static MixMonoToStereoF GetMonoToStereoMixer(const WAVEFORMATEXTENSIBLE *fmtext, EMixDirection dir) -{ - PaSampleFormat format = WaveToPaFormat(fmtext); - - switch (dir) - { - case MIX_DIR__1TO2: - switch (format & ~paNonInterleaved) - { - case paUInt8: return _MixMonoToStereo_1TO2_8; - case paInt16: return _MixMonoToStereo_1TO2_16; - case paInt24: return (fmtext->Format.wBitsPerSample == 32 ? _MixMonoToStereo_1TO2_8_24 : _MixMonoToStereo_1TO2_24); - case paInt32: return _MixMonoToStereo_1TO2_32; - case paFloat32: return _MixMonoToStereo_1TO2_32f; - } - break; - - case MIX_DIR__2TO1: - switch (format & ~paNonInterleaved) - { - case paUInt8: return _MixMonoToStereo_2TO1_8; - case paInt16: return _MixMonoToStereo_2TO1_16; - case paInt24: return (fmtext->Format.wBitsPerSample == 32 ? _MixMonoToStereo_2TO1_8_24 : _MixMonoToStereo_2TO1_24); - case paInt32: return _MixMonoToStereo_2TO1_32; - case paFloat32: return _MixMonoToStereo_2TO1_32f; - } - break; - - case MIX_DIR__2TO1_L: - switch (format & ~paNonInterleaved) - { - case paUInt8: return _MixMonoToStereo_2TO1_8_L; - case paInt16: return _MixMonoToStereo_2TO1_16_L; - case paInt24: return (fmtext->Format.wBitsPerSample == 32 ? _MixMonoToStereo_2TO1_8_24_L : _MixMonoToStereo_2TO1_24_L); - case paInt32: return _MixMonoToStereo_2TO1_32_L; - case paFloat32: return _MixMonoToStereo_2TO1_32f_L; - } - break; - } - - return NULL; -} - -// ------------------------------------------------------------------------------------------ -#ifdef PA_WINRT -typedef struct PaActivateAudioInterfaceCompletionHandler -{ - IActivateAudioInterfaceCompletionHandler parent; - volatile LONG refs; - volatile LONG done; - struct - { - const IID *iid; - void **obj; - } - in; - struct - { - HRESULT hr; - } - out; -} -PaActivateAudioInterfaceCompletionHandler; - -static HRESULT (STDMETHODCALLTYPE PaActivateAudioInterfaceCompletionHandler_QueryInterface)( - IActivateAudioInterfaceCompletionHandler *This, REFIID riid, void **ppvObject) -{ - PaActivateAudioInterfaceCompletionHandler *handler = (PaActivateAudioInterfaceCompletionHandler *)This; - - // From MSDN: - // "The IAgileObject interface is a marker interface that indicates that an object - // is free threaded and can be called from any apartment." 
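-    // Hence, besides IUnknown, also answer queries for IAgileObject so that the
-    // activation callback may be delivered from any apartment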
- if (IsEqualIID(riid, &IID_IUnknown) || - IsEqualIID(riid, &IID_IAgileObject)) - { - IActivateAudioInterfaceCompletionHandler_AddRef((IActivateAudioInterfaceCompletionHandler *)handler); - (*ppvObject) = handler; - return S_OK; - } - - return E_NOINTERFACE; -} - -static ULONG (STDMETHODCALLTYPE PaActivateAudioInterfaceCompletionHandler_AddRef)( - IActivateAudioInterfaceCompletionHandler *This) -{ - PaActivateAudioInterfaceCompletionHandler *handler = (PaActivateAudioInterfaceCompletionHandler *)This; - - return InterlockedIncrement(&handler->refs); -} - -static ULONG (STDMETHODCALLTYPE PaActivateAudioInterfaceCompletionHandler_Release)( - IActivateAudioInterfaceCompletionHandler *This) -{ - PaActivateAudioInterfaceCompletionHandler *handler = (PaActivateAudioInterfaceCompletionHandler *)This; - ULONG refs; - - if ((refs = InterlockedDecrement(&handler->refs)) == 0) - { - PaUtil_FreeMemory(handler->parent.lpVtbl); - PaUtil_FreeMemory(handler); - } - - return refs; -} - -static HRESULT (STDMETHODCALLTYPE PaActivateAudioInterfaceCompletionHandler_ActivateCompleted)( - IActivateAudioInterfaceCompletionHandler *This, IActivateAudioInterfaceAsyncOperation *activateOperation) -{ - PaActivateAudioInterfaceCompletionHandler *handler = (PaActivateAudioInterfaceCompletionHandler *)This; - - HRESULT hr = S_OK; - HRESULT hrActivateResult = S_OK; - IUnknown *punkAudioInterface = NULL; - - // Check for a successful activation result - hr = IActivateAudioInterfaceAsyncOperation_GetActivateResult(activateOperation, &hrActivateResult, &punkAudioInterface); - if (SUCCEEDED(hr) && SUCCEEDED(hrActivateResult)) - { - // Get pointer to the requested audio interface - IUnknown_QueryInterface(punkAudioInterface, handler->in.iid, handler->in.obj); - if ((*handler->in.obj) == NULL) - hrActivateResult = E_FAIL; - } - SAFE_RELEASE(punkAudioInterface); - - if (SUCCEEDED(hr)) - handler->out.hr = hrActivateResult; - else - handler->out.hr = hr; - - // Got client object, stop busy waiting in ActivateAudioInterface - InterlockedExchange(&handler->done, TRUE); - - return hr; -} - -static IActivateAudioInterfaceCompletionHandler *CreateActivateAudioInterfaceCompletionHandler(const IID *iid, void **client) -{ - PaActivateAudioInterfaceCompletionHandler *handler = PaUtil_AllocateMemory(sizeof(PaActivateAudioInterfaceCompletionHandler)); - - memset(handler, 0, sizeof(*handler)); - - handler->parent.lpVtbl = PaUtil_AllocateMemory(sizeof(*handler->parent.lpVtbl)); - handler->parent.lpVtbl->QueryInterface = &PaActivateAudioInterfaceCompletionHandler_QueryInterface; - handler->parent.lpVtbl->AddRef = &PaActivateAudioInterfaceCompletionHandler_AddRef; - handler->parent.lpVtbl->Release = &PaActivateAudioInterfaceCompletionHandler_Release; - handler->parent.lpVtbl->ActivateCompleted = &PaActivateAudioInterfaceCompletionHandler_ActivateCompleted; - handler->refs = 1; - handler->in.iid = iid; - handler->in.obj = client; - - return (IActivateAudioInterfaceCompletionHandler *)handler; -} -#endif - -// ------------------------------------------------------------------------------------------ -#ifdef PA_WINRT -static HRESULT WinRT_GetDefaultDeviceId(WCHAR *deviceId, UINT32 deviceIdMax, EDataFlow flow) -{ - switch (flow) - { - case eRender: - if (g_DeviceListInfo.render.defaultId[0] != 0) - wcsncpy_s(deviceId, deviceIdMax, g_DeviceListInfo.render.defaultId, wcslen(g_DeviceListInfo.render.defaultId)); - else - StringFromGUID2(&DEVINTERFACE_AUDIO_RENDER, deviceId, deviceIdMax); - break; - case eCapture: - if 
(g_DeviceListInfo.capture.defaultId[0] != 0) - wcsncpy_s(deviceId, deviceIdMax, g_DeviceListInfo.capture.defaultId, wcslen(g_DeviceListInfo.capture.defaultId)); - else - StringFromGUID2(&DEVINTERFACE_AUDIO_CAPTURE, deviceId, deviceIdMax); - break; - default: - return S_FALSE; - } - - return S_OK; -} -#endif - -// ------------------------------------------------------------------------------------------ -#ifdef PA_WINRT -static HRESULT WinRT_ActivateAudioInterface(const WCHAR *deviceId, const IID *iid, void **client) -{ - PaError result = paNoError; - HRESULT hr = S_OK; - IActivateAudioInterfaceAsyncOperation *asyncOp = NULL; - IActivateAudioInterfaceCompletionHandler *handler = CreateActivateAudioInterfaceCompletionHandler(iid, client); - PaActivateAudioInterfaceCompletionHandler *handlerImpl = (PaActivateAudioInterfaceCompletionHandler *)handler; - UINT32 sleepToggle = 0; - - // Async operation will call back to IActivateAudioInterfaceCompletionHandler::ActivateCompleted - // which must be an agile interface implementation - hr = ActivateAudioInterfaceAsync(deviceId, iid, NULL, handler, &asyncOp); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - // Wait in busy loop for async operation to complete - // Use Interlocked API here to ensure that ->done variable is read every time through the loop - while (SUCCEEDED(hr) && !InterlockedOr(&handlerImpl->done, 0)) - { - Sleep(sleepToggle ^= 1); - } - - hr = handlerImpl->out.hr; - -error: - - SAFE_RELEASE(asyncOp); - SAFE_RELEASE(handler); - - return hr; -} -#endif - -// ------------------------------------------------------------------------------------------ -static HRESULT ActivateAudioInterface(const PaWasapiDeviceInfo *deviceInfo, const PaWasapiStreamInfo *streamInfo, - IAudioClient **client) -{ - HRESULT hr; - -#ifndef PA_WINRT - if (FAILED(hr = IMMDevice_Activate(deviceInfo->device, GetAudioClientIID(), CLSCTX_ALL, NULL, (void **)client))) - return hr; -#else - if (FAILED(hr = WinRT_ActivateAudioInterface(deviceInfo->deviceId, GetAudioClientIID(), (void **)client))) - return hr; -#endif - - // Set audio client options (applicable only to IAudioClient2+): options may affect the audio format - // support by IAudioClient implementation and therefore we should set them before GetClosestFormat() - // in order to correctly match the requested format -#ifdef __IAudioClient2_INTERFACE_DEFINED__ - if ((streamInfo != NULL) && (GetAudioClientVersion() >= 2)) - { - pa_AudioClientProperties audioProps = { 0 }; - audioProps.cbSize = sizeof(pa_AudioClientProperties); - audioProps.bIsOffload = FALSE; - audioProps.eCategory = (AUDIO_STREAM_CATEGORY)streamInfo->streamCategory; - switch (streamInfo->streamOption) - { - case eStreamOptionRaw: - if (GetWindowsVersion() >= WINDOWS_8_1_SERVER2012R2) - audioProps.Options = pa_AUDCLNT_STREAMOPTIONS_RAW; - break; - case eStreamOptionMatchFormat: - if (GetWindowsVersion() >= WINDOWS_10_SERVER2016) - audioProps.Options = pa_AUDCLNT_STREAMOPTIONS_MATCH_FORMAT; - break; - } - - if (FAILED(hr = IAudioClient2_SetClientProperties((IAudioClient2 *)(*client), (AudioClientProperties *)&audioProps))) - { - PRINT(("WASAPI: IAudioClient2_SetClientProperties(IsOffload = %d, Category = %d, Options = %d) failed\n", audioProps.bIsOffload, audioProps.eCategory, audioProps.Options)); - LogHostError(hr); - } - else - { - PRINT(("WASAPI: IAudioClient2 set properties: IsOffload = %d, Category = %d, Options = %d\n", audioProps.bIsOffload, audioProps.eCategory, audioProps.Options)); - } - } -#endif - - return S_OK; -} - -// 
------------------------------------------------------------------------------------------ -#ifdef PA_WINRT -// Windows 10 SDK 10.0.15063.0 has SignalObjectAndWait defined again (unlike in 10.0.14393.0 and lower) -#if !defined(WDK_NTDDI_VERSION) || (WDK_NTDDI_VERSION < NTDDI_WIN10_RS2) -static DWORD SignalObjectAndWait(HANDLE hObjectToSignal, HANDLE hObjectToWaitOn, DWORD dwMilliseconds, BOOL bAlertable) -{ - SetEvent(hObjectToSignal); - return WaitForSingleObjectEx(hObjectToWaitOn, dwMilliseconds, bAlertable); -} -#endif -#endif - -// ------------------------------------------------------------------------------------------ -static void NotifyStateChanged(PaWasapiStream *stream, UINT32 flags, HRESULT hr) -{ - if (stream->fnStateHandler == NULL) - return; - - if (FAILED(hr)) - flags |= paWasapiStreamStateError; - - stream->fnStateHandler((PaStream *)stream, flags, hr, stream->pStateHandlerUserData); -} - -// ------------------------------------------------------------------------------------------ -static void FillBaseDeviceInfo(PaDeviceInfo *deviceInfo, PaHostApiIndex hostApiIndex) -{ - deviceInfo->structVersion = 2; - deviceInfo->hostApi = hostApiIndex; -} - -// ------------------------------------------------------------------------------------------ -static PaError FillInactiveDeviceInfo(PaWasapiHostApiRepresentation *paWasapi, PaDeviceInfo *deviceInfo) -{ - if (deviceInfo->name == NULL) - deviceInfo->name = (char *)PaUtil_GroupAllocateMemory(paWasapi->allocations, 1); - - if (deviceInfo->name != NULL) - { - ((char *)deviceInfo->name)[0] = 0; - } - else - return paInsufficientMemory; - - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -static PaError FillDeviceInfo(PaWasapiHostApiRepresentation *paWasapi, void *pEndPoints, INT32 index, const WCHAR *defaultRenderId, - const WCHAR *defaultCaptureId, PaDeviceInfo *deviceInfo, PaWasapiDeviceInfo *wasapiDeviceInfo -#ifdef PA_WINRT - , PaWasapiWinrtDeviceListContext *deviceListContext -#endif -) -{ - HRESULT hr; - PaError result; - PaUtilHostApiRepresentation *hostApi = (PaUtilHostApiRepresentation *)paWasapi; -#ifdef PA_WINRT - PaWasapiWinrtDeviceListContextEntry *listEntry = &deviceListContext->devices[index]; - (void)pEndPoints; - (void)defaultRenderId; - (void)defaultCaptureId; -#endif - -#ifndef PA_WINRT - hr = IMMDeviceCollection_Item((IMMDeviceCollection *)pEndPoints, index, &wasapiDeviceInfo->device); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - // Get device Id - { - WCHAR *deviceId; - - hr = IMMDevice_GetId(wasapiDeviceInfo->device, &deviceId); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - wcsncpy(wasapiDeviceInfo->deviceId, deviceId, PA_WASAPI_DEVICE_ID_LEN - 1); - CoTaskMemFree(deviceId); - } - - // Get state of the device - hr = IMMDevice_GetState(wasapiDeviceInfo->device, &wasapiDeviceInfo->state); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - if (wasapiDeviceInfo->state != DEVICE_STATE_ACTIVE) - { - PRINT(("WASAPI device: %d is not currently available (state:%d)\n", index, wasapiDeviceInfo->state)); - } - - // Get basic device info - { - IPropertyStore *pProperty; - IMMEndpoint *endpoint; - PROPVARIANT value; - - hr = IMMDevice_OpenPropertyStore(wasapiDeviceInfo->device, STGM_READ, &pProperty); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - // "Friendly" Name - { - PropVariantInit(&value); - - hr = IPropertyStore_GetValue(pProperty, &PKEY_Device_FriendlyName, &value); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - 
- if ((deviceInfo->name = (char *)PaUtil_GroupAllocateMemory(paWasapi->allocations, PA_WASAPI_DEVICE_NAME_LEN)) == NULL) - { - result = paInsufficientMemory; - PropVariantClear(&value); - goto error; - } - if (value.pwszVal) - WideCharToMultiByte(CP_UTF8, 0, value.pwszVal, (INT32)wcslen(value.pwszVal), (char *)deviceInfo->name, PA_WASAPI_DEVICE_NAME_LEN - 1, 0, 0); - else - _snprintf((char *)deviceInfo->name, PA_WASAPI_DEVICE_NAME_LEN - 1, "baddev%d", index); - - PropVariantClear(&value); - - PA_DEBUG(("WASAPI:%d| name[%s]\n", index, deviceInfo->name)); - } - - // Default format - { - PropVariantInit(&value); - - hr = IPropertyStore_GetValue(pProperty, &PKEY_AudioEngine_DeviceFormat, &value); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - memcpy(&wasapiDeviceInfo->DefaultFormat, value.blob.pBlobData, min(sizeof(wasapiDeviceInfo->DefaultFormat), value.blob.cbSize)); - - PropVariantClear(&value); - } - - // Form factor - { - PropVariantInit(&value); - - hr = IPropertyStore_GetValue(pProperty, &PKEY_AudioEndpoint_FormFactor, &value); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - // set - #if defined(DUMMYUNIONNAME) && defined(NONAMELESSUNION) - // avoid breaking strict-aliasing rules in such line: (EndpointFormFactor)(*((UINT *)(((WORD *)&value.wReserved3)+1))); - UINT v; - memcpy(&v, (((WORD *)&value.wReserved3) + 1), sizeof(v)); - wasapiDeviceInfo->formFactor = (EndpointFormFactor)v; - #else - wasapiDeviceInfo->formFactor = (EndpointFormFactor)value.uintVal; - #endif - - PA_DEBUG(("WASAPI:%d| form-factor[%d]\n", index, wasapiDeviceInfo->formFactor)); - - PropVariantClear(&value); - } - - // Data flow (Renderer or Capture) - hr = IMMDevice_QueryInterface(wasapiDeviceInfo->device, &pa_IID_IMMEndpoint, (void **)&endpoint); - if (SUCCEEDED(hr)) - { - hr = IMMEndpoint_GetDataFlow(endpoint, &wasapiDeviceInfo->flow); - SAFE_RELEASE(endpoint); - } - - SAFE_RELEASE(pProperty); - } -#else - // Set device Id - wcsncpy(wasapiDeviceInfo->deviceId, listEntry->info->id, PA_WASAPI_DEVICE_ID_LEN - 1); - - // Set device name - if ((deviceInfo->name = (char *)PaUtil_GroupAllocateMemory(paWasapi->allocations, PA_WASAPI_DEVICE_NAME_LEN)) == NULL) - { - result = paInsufficientMemory; - goto error; - } - ((char *)deviceInfo->name)[0] = 0; - if (listEntry->info->name[0] != 0) - WideCharToMultiByte(CP_UTF8, 0, listEntry->info->name, (INT32)wcslen(listEntry->info->name), (char *)deviceInfo->name, PA_WASAPI_DEVICE_NAME_LEN - 1, 0, 0); - if (deviceInfo->name[0] == 0) // fallback if WideCharToMultiByte is failed, or listEntry is nameless - _snprintf((char *)deviceInfo->name, PA_WASAPI_DEVICE_NAME_LEN - 1, "WASAPI_%s:%d", (listEntry->flow == eRender ? 
"Output" : "Input"), index); - - // Form-factor - wasapiDeviceInfo->formFactor = listEntry->info->formFactor; - - // Set data flow - wasapiDeviceInfo->flow = listEntry->flow; -#endif - - // Set default Output/Input devices - if ((defaultRenderId != NULL) && (wcsncmp(wasapiDeviceInfo->deviceId, defaultRenderId, PA_WASAPI_DEVICE_NAME_LEN - 1) == 0)) - hostApi->info.defaultOutputDevice = hostApi->info.deviceCount; - if ((defaultCaptureId != NULL) && (wcsncmp(wasapiDeviceInfo->deviceId, defaultCaptureId, PA_WASAPI_DEVICE_NAME_LEN - 1) == 0)) - hostApi->info.defaultInputDevice = hostApi->info.deviceCount; - - // Get a temporary IAudioClient for more details - { - IAudioClient *tmpClient; - WAVEFORMATEX *mixFormat; - - hr = ActivateAudioInterface(wasapiDeviceInfo, NULL, &tmpClient); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - // Get latency - hr = IAudioClient_GetDevicePeriod(tmpClient, &wasapiDeviceInfo->DefaultDevicePeriod, &wasapiDeviceInfo->MinimumDevicePeriod); - if (FAILED(hr)) - { - PA_DEBUG(("WASAPI:%d| failed getting min/default periods by IAudioClient::GetDevicePeriod() with error[%08X], will use 30000/100000 hns\n", index, (UINT32)hr)); - - // assign WASAPI common values - wasapiDeviceInfo->DefaultDevicePeriod = 100000; - wasapiDeviceInfo->MinimumDevicePeriod = 30000; - - // ignore error, let continue further without failing with paInternalError - hr = S_OK; - } - - // Get mix format - hr = IAudioClient_GetMixFormat(tmpClient, &mixFormat); - if (SUCCEEDED(hr)) - { - memcpy(&wasapiDeviceInfo->MixFormat, mixFormat, min(sizeof(wasapiDeviceInfo->MixFormat), (sizeof(*mixFormat) + mixFormat->cbSize))); - CoTaskMemFree(mixFormat); - } - - // Register WINRT device - #ifdef PA_WINRT - if (SUCCEEDED(hr)) - { - // Set state - wasapiDeviceInfo->state = DEVICE_STATE_ACTIVE; - - // Default format (Shared mode) is always a mix format - wasapiDeviceInfo->DefaultFormat = wasapiDeviceInfo->MixFormat; - } - #endif - - // Release tmp client - SAFE_RELEASE(tmpClient); - - if (hr != S_OK) - { - //davidv: this happened with my hardware, previously for that same device in DirectSound: - //Digital Output (Realtek AC'97 Audio)'s GUID: {0x38f2cf50,0x7b4c,0x4740,0x86,0xeb,0xd4,0x38,0x66,0xd8,0xc8, 0x9f} - //so something must be _really_ wrong with this device, TODO handle this better. 
We kind of need GetMixFormat - LogHostError(hr); - result = paInternalError; - goto error; - } - } - - // Fill basic device data - deviceInfo->maxInputChannels = 0; - deviceInfo->maxOutputChannels = 0; - deviceInfo->defaultSampleRate = wasapiDeviceInfo->MixFormat.Format.nSamplesPerSec; - switch (wasapiDeviceInfo->flow) - { - case eRender: { - deviceInfo->maxOutputChannels = wasapiDeviceInfo->MixFormat.Format.nChannels; - deviceInfo->defaultHighOutputLatency = nano100ToSeconds(wasapiDeviceInfo->DefaultDevicePeriod); - deviceInfo->defaultLowOutputLatency = nano100ToSeconds(wasapiDeviceInfo->MinimumDevicePeriod); - PA_DEBUG(("WASAPI:%d| def.SR[%d] max.CH[%d] latency{hi[%f] lo[%f]}\n", index, (UINT32)deviceInfo->defaultSampleRate, - deviceInfo->maxOutputChannels, (float)deviceInfo->defaultHighOutputLatency, (float)deviceInfo->defaultLowOutputLatency)); - break;} - case eCapture: { - deviceInfo->maxInputChannels = wasapiDeviceInfo->MixFormat.Format.nChannels; - deviceInfo->defaultHighInputLatency = nano100ToSeconds(wasapiDeviceInfo->DefaultDevicePeriod); - deviceInfo->defaultLowInputLatency = nano100ToSeconds(wasapiDeviceInfo->MinimumDevicePeriod); - PA_DEBUG(("WASAPI:%d| def.SR[%d] max.CH[%d] latency{hi[%f] lo[%f]}\n", index, (UINT32)deviceInfo->defaultSampleRate, - deviceInfo->maxInputChannels, (float)deviceInfo->defaultHighInputLatency, (float)deviceInfo->defaultLowInputLatency)); - break; } - default: - PRINT(("WASAPI:%d| bad Data Flow!\n", index)); - result = paInternalError; - goto error; - } - - return paNoError; - -error: - - PRINT(("WASAPI: failed filling device info for device index[%d] - error[%d|%s]\n", index, result, Pa_GetErrorText(result))); - - return result; -} - -// ------------------------------------------------------------------------------------------ -static PaDeviceInfo *AllocateDeviceListMemory(PaWasapiHostApiRepresentation *paWasapi) -{ - PaUtilHostApiRepresentation *hostApi = (PaUtilHostApiRepresentation *)paWasapi; - PaDeviceInfo *deviceInfoArray = NULL; - - if ((paWasapi->devInfo = (PaWasapiDeviceInfo *)PaUtil_GroupAllocateMemory(paWasapi->allocations, - sizeof(PaWasapiDeviceInfo) * paWasapi->deviceCount)) == NULL) - { - return NULL; - } - memset(paWasapi->devInfo, 0, sizeof(PaWasapiDeviceInfo) * paWasapi->deviceCount); - - if (paWasapi->deviceCount != 0) - { - UINT32 i; - UINT32 deviceCount = paWasapi->deviceCount; - #if defined(PA_WASAPI_MAX_CONST_DEVICE_COUNT) && (PA_WASAPI_MAX_CONST_DEVICE_COUNT > 0) - if (deviceCount < PA_WASAPI_MAX_CONST_DEVICE_COUNT) - deviceCount = PA_WASAPI_MAX_CONST_DEVICE_COUNT; - #endif - - if ((hostApi->deviceInfos = (PaDeviceInfo **)PaUtil_GroupAllocateMemory(paWasapi->allocations, - sizeof(PaDeviceInfo *) * deviceCount)) == NULL) - { - return NULL; - } - for (i = 0; i < deviceCount; ++i) - hostApi->deviceInfos[i] = NULL; - - // Allocate all device info structs in a contiguous block - if ((deviceInfoArray = (PaDeviceInfo *)PaUtil_GroupAllocateMemory(paWasapi->allocations, - sizeof(PaDeviceInfo) * deviceCount)) == NULL) - { - return NULL; - } - memset(deviceInfoArray, 0, sizeof(PaDeviceInfo) * deviceCount); - } - - return deviceInfoArray; -} - -// ------------------------------------------------------------------------------------------ -static PaError CreateDeviceList(PaWasapiHostApiRepresentation *paWasapi, PaHostApiIndex hostApiIndex) -{ - PaUtilHostApiRepresentation *hostApi = (PaUtilHostApiRepresentation *)paWasapi; - PaError result = paNoError; - PaDeviceInfo *deviceInfoArray = NULL; - UINT32 i; - WCHAR *defaultRenderId = NULL; - 
WCHAR *defaultCaptureId = NULL; -#ifndef PA_WINRT - HRESULT hr; - IMMDeviceCollection *pEndPoints = NULL; - IMMDeviceEnumerator *pEnumerator = NULL; -#else - void *pEndPoints = NULL; - IAudioClient *tmpClient; - PaWasapiWinrtDeviceListContext deviceListContext = { 0 }; - PaWasapiWinrtDeviceInfo defaultRender = { 0 }; - PaWasapiWinrtDeviceInfo defaultCapture = { 0 }; -#endif - - // Make sure device list empty - if ((paWasapi->deviceCount != 0) || (hostApi->info.deviceCount != 0)) - return paInternalError; - -#ifndef PA_WINRT - hr = CoCreateInstance(&pa_CLSID_IMMDeviceEnumerator, NULL, CLSCTX_INPROC_SERVER, - &pa_IID_IMMDeviceEnumerator, (void **)&pEnumerator); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - // Get default render and capture devices - { - IMMDevice *device; - - hr = IMMDeviceEnumerator_GetDefaultAudioEndpoint(pEnumerator, eRender, eMultimedia, &device); - if (hr != S_OK) - { - if (hr != E_NOTFOUND) - { - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - } - } - else - { - hr = IMMDevice_GetId(device, &defaultRenderId); - IMMDevice_Release(device); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - } - - hr = IMMDeviceEnumerator_GetDefaultAudioEndpoint(pEnumerator, eCapture, eMultimedia, &device); - if (hr != S_OK) - { - if (hr != E_NOTFOUND) - { - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - } - } - else - { - hr = IMMDevice_GetId(device, &defaultCaptureId); - IMMDevice_Release(device); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - } - } - - // Get all currently active devices - hr = IMMDeviceEnumerator_EnumAudioEndpoints(pEnumerator, eAll, DEVICE_STATE_ACTIVE, &pEndPoints); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); - - // Get device count - hr = IMMDeviceCollection_GetCount(pEndPoints, &paWasapi->deviceCount); - IF_FAILED_INTERNAL_ERROR_JUMP(hr, result, error); -#else - WinRT_GetDefaultDeviceId(defaultRender.id, STATIC_ARRAY_SIZE(defaultRender.id) - 1, eRender); - defaultRenderId = defaultRender.id; - - WinRT_GetDefaultDeviceId(defaultCapture.id, STATIC_ARRAY_SIZE(defaultCapture.id) - 1, eCapture); - defaultCaptureId = defaultCapture.id; - - if (g_DeviceListInfo.render.deviceCount == 0) - { - if (SUCCEEDED(WinRT_ActivateAudioInterface(defaultRenderId, GetAudioClientIID(), &tmpClient))) - { - deviceListContext.devices[paWasapi->deviceCount].info = &defaultRender; - deviceListContext.devices[paWasapi->deviceCount].flow = eRender; - paWasapi->deviceCount++; - - SAFE_RELEASE(tmpClient); - } - } - else - { - for (i = 0; i < g_DeviceListInfo.render.deviceCount; ++i) - { - deviceListContext.devices[paWasapi->deviceCount].info = &g_DeviceListInfo.render.devices[i]; - deviceListContext.devices[paWasapi->deviceCount].flow = eRender; - paWasapi->deviceCount++; - } - } - - if (g_DeviceListInfo.capture.deviceCount == 0) - { - if (SUCCEEDED(WinRT_ActivateAudioInterface(defaultCaptureId, GetAudioClientIID(), &tmpClient))) - { - deviceListContext.devices[paWasapi->deviceCount].info = &defaultCapture; - deviceListContext.devices[paWasapi->deviceCount].flow = eCapture; - paWasapi->deviceCount++; - - SAFE_RELEASE(tmpClient); - } - } - else - { - for (i = 0; i < g_DeviceListInfo.capture.deviceCount; ++i) - { - deviceListContext.devices[paWasapi->deviceCount].info = &g_DeviceListInfo.capture.devices[i]; - deviceListContext.devices[paWasapi->deviceCount].flow = eCapture; - paWasapi->deviceCount++; - } - } -#endif - - // Allocate memory for the device list - if ((paWasapi->deviceCount != 0) && ((deviceInfoArray = AllocateDeviceListMemory(paWasapi)) == NULL)) 
- { - result = paInsufficientMemory; - goto error; - } - - // Fill WASAPI device info - for (i = 0; i < paWasapi->deviceCount; ++i) - { - PaDeviceInfo *deviceInfo = &deviceInfoArray[i]; - - PA_DEBUG(("WASAPI: device idx: %02d\n", i)); - PA_DEBUG(("WASAPI: ---------------\n")); - - FillBaseDeviceInfo(deviceInfo, hostApiIndex); - - if ((result = FillDeviceInfo(paWasapi, pEndPoints, i, defaultRenderId, defaultCaptureId, - deviceInfo, &paWasapi->devInfo[i] - #ifdef PA_WINRT - , &deviceListContext - #endif - )) != paNoError) - { - // Faulty device is made inactive - if ((result = FillInactiveDeviceInfo(paWasapi, deviceInfo)) != paNoError) - goto error; - } - - hostApi->deviceInfos[i] = deviceInfo; - ++hostApi->info.deviceCount; - } - - // Fill the remaining slots with inactive device info -#if defined(PA_WASAPI_MAX_CONST_DEVICE_COUNT) && (PA_WASAPI_MAX_CONST_DEVICE_COUNT > 0) - if ((hostApi->info.deviceCount != 0) && (hostApi->info.deviceCount < PA_WASAPI_MAX_CONST_DEVICE_COUNT)) - { - for (i = hostApi->info.deviceCount; i < PA_WASAPI_MAX_CONST_DEVICE_COUNT; ++i) - { - PaDeviceInfo *deviceInfo = &deviceInfoArray[i]; - - FillBaseDeviceInfo(deviceInfo, hostApiIndex); - - if ((result = FillInactiveDeviceInfo(paWasapi, deviceInfo)) != paNoError) - goto error; - - hostApi->deviceInfos[i] = deviceInfo; - ++hostApi->info.deviceCount; - } - } -#endif - - // Clear any non-fatal errors - result = paNoError; - - PRINT(("WASAPI: device list ok - found %d devices\n", paWasapi->deviceCount)); - -done: - -#ifndef PA_WINRT - CoTaskMemFree(defaultRenderId); - CoTaskMemFree(defaultCaptureId); - SAFE_RELEASE(pEndPoints); - SAFE_RELEASE(pEnumerator); -#endif - - return result; - -error: - - // Safety if error was not set so that we do not think initialize was a success - if (result == paNoError) - result = paInternalError; - - PRINT(("WASAPI: failed to create device list - error[%d|%s]\n", result, Pa_GetErrorText(result))); - - goto done; -} - -// ------------------------------------------------------------------------------------------ -PaError PaWasapi_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex hostApiIndex ) -{ - PaError result; - PaWasapiHostApiRepresentation *paWasapi; - -#ifndef PA_WINRT - if (!SetupAVRT()) - { - PRINT(("WASAPI: No AVRT! (not VISTA?)\n")); - return paNoError; - } -#endif - - paWasapi = (PaWasapiHostApiRepresentation *)PaUtil_AllocateMemory(sizeof(PaWasapiHostApiRepresentation)); - if (paWasapi == NULL) - { - result = paInsufficientMemory; - goto error; - } - memset(paWasapi, 0, sizeof(PaWasapiHostApiRepresentation)); /* ensure all fields are zeroed. 
especially paWasapi->allocations */ - - // Initialize COM subsystem - result = PaWinUtil_CoInitialize(paWASAPI, &paWasapi->comInitializationResult); - if (result != paNoError) - goto error; - - // Create memory group - paWasapi->allocations = PaUtil_CreateAllocationGroup(); - if (paWasapi->allocations == NULL) - { - result = paInsufficientMemory; - goto error; - } - - // Fill basic interface info - *hostApi = &paWasapi->inheritedHostApiRep; - (*hostApi)->info.structVersion = 1; - (*hostApi)->info.type = paWASAPI; - (*hostApi)->info.name = "Windows WASAPI"; - (*hostApi)->info.deviceCount = 0; - (*hostApi)->info.defaultInputDevice = paNoDevice; - (*hostApi)->info.defaultOutputDevice = paNoDevice; - (*hostApi)->Terminate = Terminate; - (*hostApi)->OpenStream = OpenStream; - (*hostApi)->IsFormatSupported = IsFormatSupported; - - // Fill the device list - if ((result = CreateDeviceList(paWasapi, hostApiIndex)) != paNoError) - goto error; - - // Detect if platform workaround is required - paWasapi->useWOW64Workaround = UseWOW64Workaround(); - - // Initialize time getter - SystemTimer_InitializeTimeGetter(); - - PaUtil_InitializeStreamInterface( &paWasapi->callbackStreamInterface, CloseStream, StartStream, - StopStream, AbortStream, IsStreamStopped, IsStreamActive, - GetStreamTime, GetStreamCpuLoad, - PaUtil_DummyRead, PaUtil_DummyWrite, - PaUtil_DummyGetReadAvailable, PaUtil_DummyGetWriteAvailable ); - - PaUtil_InitializeStreamInterface( &paWasapi->blockingStreamInterface, CloseStream, StartStream, - StopStream, AbortStream, IsStreamStopped, IsStreamActive, - GetStreamTime, PaUtil_DummyGetCpuLoad, - ReadStream, WriteStream, GetStreamReadAvailable, GetStreamWriteAvailable ); - - PRINT(("WASAPI: initialized ok\n")); - - return paNoError; - -error: - - PRINT(("WASAPI: failed %s error[%d|%s]\n", __FUNCTION__, result, Pa_GetErrorText(result))); - - Terminate((PaUtilHostApiRepresentation *)paWasapi); - - return result; -} - -// ------------------------------------------------------------------------------------------ -static void ReleaseWasapiDeviceInfoList( PaWasapiHostApiRepresentation *paWasapi ) -{ - UINT32 i; - - // Release device info bound objects - for (i = 0; i < paWasapi->deviceCount; ++i) - { - #ifndef PA_WINRT - SAFE_RELEASE(paWasapi->devInfo[i].device); - #endif - } - - // Free device info - if (paWasapi->allocations != NULL) - PaUtil_GroupFreeMemory(paWasapi->allocations, paWasapi->devInfo); - - // Be ready for a device list reinitialization and if its creation is failed pointers must not be dangling - paWasapi->devInfo = NULL; - paWasapi->deviceCount = 0; -} - -// ------------------------------------------------------------------------------------------ -static void Terminate( PaUtilHostApiRepresentation *hostApi ) -{ - PaWasapiHostApiRepresentation *paWasapi = (PaWasapiHostApiRepresentation*)hostApi; - if (paWasapi == NULL) - return; - - // Release device list - ReleaseWasapiDeviceInfoList(paWasapi); - - // Free allocations and memory group itself - if (paWasapi->allocations != NULL) - { - PaUtil_FreeAllAllocations(paWasapi->allocations); - PaUtil_DestroyAllocationGroup(paWasapi->allocations); - } - - // Release COM subsystem - PaWinUtil_CoUninitialize(paWASAPI, &paWasapi->comInitializationResult); - - // Free API representation - PaUtil_FreeMemory(paWasapi); - - // Close AVRT - CloseAVRT(); -} - -// ------------------------------------------------------------------------------------------ -static PaWasapiHostApiRepresentation *_GetHostApi(PaError *ret) -{ - PaError error; - 
PaUtilHostApiRepresentation *pApi; - - if ((error = PaUtil_GetHostApiRepresentation(&pApi, paWASAPI)) != paNoError) - { - if (ret != NULL) - (*ret) = error; - - return NULL; - } - - return (PaWasapiHostApiRepresentation *)pApi; -} - -// ------------------------------------------------------------------------------------------ -static PaError UpdateDeviceList() -{ - int i; - PaError ret; - PaWasapiHostApiRepresentation *paWasapi; - PaUtilHostApiRepresentation *hostApi; - - // Get API - hostApi = (PaUtilHostApiRepresentation *)(paWasapi = _GetHostApi(&ret)); - if (paWasapi == NULL) - return paNotInitialized; - - // Make sure initialized properly - if (paWasapi->allocations == NULL) - return paNotInitialized; - - // Release WASAPI internal device info list - ReleaseWasapiDeviceInfoList(paWasapi); - - // Release external device info list - if (hostApi->deviceInfos != NULL) - { - for (i = 0; i < hostApi->info.deviceCount; ++i) - { - PaUtil_GroupFreeMemory(paWasapi->allocations, (void *)hostApi->deviceInfos[i]->name); - } - PaUtil_GroupFreeMemory(paWasapi->allocations, hostApi->deviceInfos[0]); - PaUtil_GroupFreeMemory(paWasapi->allocations, hostApi->deviceInfos); - - // Be ready for a device list reinitialization and if its creation is failed pointers must not be dangling - hostApi->deviceInfos = NULL; - hostApi->info.deviceCount = 0; - hostApi->info.defaultInputDevice = paNoDevice; - hostApi->info.defaultOutputDevice = paNoDevice; - } - - // Fill possibly updated device list - if ((ret = CreateDeviceList(paWasapi, Pa_HostApiTypeIdToHostApiIndex(paWASAPI))) != paNoError) - return ret; - - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -PaError PaWasapi_UpdateDeviceList() -{ -#if defined(PA_WASAPI_MAX_CONST_DEVICE_COUNT) && (PA_WASAPI_MAX_CONST_DEVICE_COUNT > 0) - return UpdateDeviceList(); -#else - return paInternalError; -#endif -} - -// ------------------------------------------------------------------------------------------ -int PaWasapi_GetDeviceCurrentFormat( PaStream *pStream, void *pFormat, unsigned int formatSize, int bOutput ) -{ - UINT32 size; - WAVEFORMATEXTENSIBLE *format; - - PaWasapiStream *stream = (PaWasapiStream *)pStream; - if (stream == NULL) - return paBadStreamPtr; - - format = (bOutput == TRUE ? 
&stream->out.wavex : &stream->in.wavex); - - size = min(formatSize, (UINT32)sizeof(*format)); - memcpy(pFormat, format, size); - - return size; -} - -// ------------------------------------------------------------------------------------------ -static PaError _GetWasapiDeviceInfoByDeviceIndex( PaWasapiDeviceInfo **info, PaDeviceIndex device ) -{ - PaError ret; - PaDeviceIndex index; - - // Get API - PaWasapiHostApiRepresentation *paWasapi = _GetHostApi(&ret); - if (paWasapi == NULL) - return paNotInitialized; - - // Get device index - if ((ret = PaUtil_DeviceIndexToHostApiDeviceIndex(&index, device, &paWasapi->inheritedHostApiRep)) != paNoError) - return ret; - - // Validate index - if ((UINT32)index >= paWasapi->deviceCount) - return paInvalidDevice; - - (*info) = &paWasapi->devInfo[ index ]; - - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -int PaWasapi_GetDeviceDefaultFormat( void *pFormat, unsigned int formatSize, PaDeviceIndex device ) -{ - PaError ret; - PaWasapiDeviceInfo *deviceInfo; - UINT32 size; - - if (pFormat == NULL) - return paBadBufferPtr; - if (formatSize <= 0) - return paBufferTooSmall; - - if ((ret = _GetWasapiDeviceInfoByDeviceIndex(&deviceInfo, device)) != paNoError) - return ret; - - size = min(formatSize, (UINT32)sizeof(deviceInfo->DefaultFormat)); - memcpy(pFormat, &deviceInfo->DefaultFormat, size); - - return size; -} - -// ------------------------------------------------------------------------------------------ -int PaWasapi_GetDeviceMixFormat( void *pFormat, unsigned int formatSize, PaDeviceIndex device ) -{ - PaError ret; - PaWasapiDeviceInfo *deviceInfo; - UINT32 size; - - if (pFormat == NULL) - return paBadBufferPtr; - if (formatSize <= 0) - return paBufferTooSmall; - - if ((ret = _GetWasapiDeviceInfoByDeviceIndex(&deviceInfo, device)) != paNoError) - return ret; - - size = min(formatSize, (UINT32)sizeof(deviceInfo->MixFormat)); - memcpy(pFormat, &deviceInfo->MixFormat, size); - - return size; -} - -// ------------------------------------------------------------------------------------------ -int PaWasapi_GetDeviceRole( PaDeviceIndex device ) -{ - PaError ret; - PaWasapiDeviceInfo *deviceInfo; - - if ((ret = _GetWasapiDeviceInfoByDeviceIndex(&deviceInfo, device)) != paNoError) - return ret; - - return deviceInfo->formFactor; -} - -// ------------------------------------------------------------------------------------------ -PaError PaWasapi_GetIMMDevice( PaDeviceIndex device, void **pIMMDevice ) -{ -#ifndef PA_WINRT - PaError ret; - PaWasapiDeviceInfo *deviceInfo; - - if (pIMMDevice == NULL) - return paBadBufferPtr; - - if ((ret = _GetWasapiDeviceInfoByDeviceIndex(&deviceInfo, device)) != paNoError) - return ret; - - (*pIMMDevice) = deviceInfo->device; - - return paNoError; -#else - (void)device; - (void)pIMMDevice; - return paIncompatibleStreamHostApi; -#endif -} - -// ------------------------------------------------------------------------------------------ -PaError PaWasapi_GetFramesPerHostBuffer( PaStream *pStream, unsigned int *pInput, unsigned int *pOutput ) -{ - PaWasapiStream *stream = (PaWasapiStream *)pStream; - if (stream == NULL) - return paBadStreamPtr; - - if (pInput != NULL) - (*pInput) = stream->in.framesPerHostCallback; - - if (pOutput != NULL) - (*pOutput) = stream->out.framesPerHostCallback; - - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -static void LogWAVEFORMATEXTENSIBLE(const 
WAVEFORMATEXTENSIBLE *in) -{ - const WAVEFORMATEX *old = (WAVEFORMATEX *)in; - switch (old->wFormatTag) - { - case WAVE_FORMAT_EXTENSIBLE: { - - PRINT(("wFormatTag =WAVE_FORMAT_EXTENSIBLE\n")); - - if (IsEqualGUID(&in->SubFormat, &pa_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT)) - { - PRINT(("SubFormat =KSDATAFORMAT_SUBTYPE_IEEE_FLOAT\n")); - } - else - if (IsEqualGUID(&in->SubFormat, &pa_KSDATAFORMAT_SUBTYPE_PCM)) - { - PRINT(("SubFormat =KSDATAFORMAT_SUBTYPE_PCM\n")); - } - else - { - PRINT(("SubFormat =CUSTOM GUID{%d:%d:%d:%d%d%d%d%d%d%d%d}\n", - in->SubFormat.Data1, - in->SubFormat.Data2, - in->SubFormat.Data3, - (int)in->SubFormat.Data4[0], - (int)in->SubFormat.Data4[1], - (int)in->SubFormat.Data4[2], - (int)in->SubFormat.Data4[3], - (int)in->SubFormat.Data4[4], - (int)in->SubFormat.Data4[5], - (int)in->SubFormat.Data4[6], - (int)in->SubFormat.Data4[7])); - } - PRINT(("Samples.wValidBitsPerSample =%d\n", in->Samples.wValidBitsPerSample)); - PRINT(("dwChannelMask =0x%X\n",in->dwChannelMask)); - - break; } - - case WAVE_FORMAT_PCM: PRINT(("wFormatTag =WAVE_FORMAT_PCM\n")); break; - case WAVE_FORMAT_IEEE_FLOAT: PRINT(("wFormatTag =WAVE_FORMAT_IEEE_FLOAT\n")); break; - default: - PRINT(("wFormatTag =UNKNOWN(%d)\n",old->wFormatTag)); break; - } - - PRINT(("nChannels =%d\n",old->nChannels)); - PRINT(("nSamplesPerSec =%d\n",old->nSamplesPerSec)); - PRINT(("nAvgBytesPerSec=%d\n",old->nAvgBytesPerSec)); - PRINT(("nBlockAlign =%d\n",old->nBlockAlign)); - PRINT(("wBitsPerSample =%d\n",old->wBitsPerSample)); - PRINT(("cbSize =%d\n",old->cbSize)); -} - -// ------------------------------------------------------------------------------------------ -PaSampleFormat WaveToPaFormat(const WAVEFORMATEXTENSIBLE *fmtext) -{ - const WAVEFORMATEX *fmt = (WAVEFORMATEX *)fmtext; - - switch (fmt->wFormatTag) - { - case WAVE_FORMAT_EXTENSIBLE: { - if (IsEqualGUID(&fmtext->SubFormat, &pa_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT)) - { - if (fmtext->Samples.wValidBitsPerSample == 32) - return paFloat32; - } - else - if (IsEqualGUID(&fmtext->SubFormat, &pa_KSDATAFORMAT_SUBTYPE_PCM)) - { - switch (fmt->wBitsPerSample) - { - case 32: return paInt32; - case 24: return paInt24; - case 16: return paInt16; - case 8: return paUInt8; - } - } - break; } - - case WAVE_FORMAT_IEEE_FLOAT: - return paFloat32; - - case WAVE_FORMAT_PCM: { - switch (fmt->wBitsPerSample) - { - case 32: return paInt32; - case 24: return paInt24; - case 16: return paInt16; - case 8: return paUInt8; - } - break; } - } - - return paCustomFormat; -} - -// ------------------------------------------------------------------------------------------ -static PaError MakeWaveFormatFromParams(WAVEFORMATEXTENSIBLE *wavex, const PaStreamParameters *params, - double sampleRate, BOOL packedOnly) -{ - WORD bitsPerSample; - WAVEFORMATEX *old; - DWORD channelMask = 0; - BOOL useExtensible = (params->channelCount > 2); // format is always forced for >2 channels format - PaWasapiStreamInfo *streamInfo = (PaWasapiStreamInfo *)params->hostApiSpecificStreamInfo; - - // Convert PaSampleFormat to valid data bits - if ((bitsPerSample = PaSampleFormatToBitsPerSample(params->sampleFormat)) == 0) - return paSampleFormatNotSupported; - - // Use user assigned channel mask - if ((streamInfo != NULL) && (streamInfo->flags & paWinWasapiUseChannelMask)) - { - channelMask = streamInfo->channelMask; - useExtensible = TRUE; - } - - memset(wavex, 0, sizeof(*wavex)); - - old = (WAVEFORMATEX *)wavex; - old->nChannels = (WORD)params->channelCount; - old->nSamplesPerSec = (DWORD)sampleRate; - old->wBitsPerSample = 
bitsPerSample; - - // according to MSDN for WAVEFORMATEX structure for WAVE_FORMAT_PCM: - // "If wFormatTag is WAVE_FORMAT_PCM, then wBitsPerSample should be equal to 8 or 16." - if ((bitsPerSample != 8) && (bitsPerSample != 16)) - { - // Normally 20 or 24 bits must go in 32 bit containers (ints) but in Exclusive mode some devices require - // packed version of the format, e.g. for example 24-bit in 3-bytes - old->wBitsPerSample = (packedOnly ? bitsPerSample : 32); - useExtensible = TRUE; - } - - // WAVEFORMATEX - if (!useExtensible) - { - old->wFormatTag = WAVE_FORMAT_PCM; - } - // WAVEFORMATEXTENSIBLE - else - { - old->wFormatTag = WAVE_FORMAT_EXTENSIBLE; - old->cbSize = sizeof(WAVEFORMATEXTENSIBLE) - sizeof(WAVEFORMATEX); - - if ((params->sampleFormat & ~paNonInterleaved) == paFloat32) - wavex->SubFormat = pa_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT; - else - wavex->SubFormat = pa_KSDATAFORMAT_SUBTYPE_PCM; - - wavex->Samples.wValidBitsPerSample = bitsPerSample; - - // Set channel mask - if (channelMask != 0) - { - wavex->dwChannelMask = channelMask; - } - else - { - switch (params->channelCount) - { - case 1: wavex->dwChannelMask = PAWIN_SPEAKER_MONO; break; - case 2: wavex->dwChannelMask = PAWIN_SPEAKER_STEREO; break; - case 3: wavex->dwChannelMask = PAWIN_SPEAKER_STEREO|SPEAKER_LOW_FREQUENCY; break; - case 4: wavex->dwChannelMask = PAWIN_SPEAKER_QUAD; break; - case 5: wavex->dwChannelMask = PAWIN_SPEAKER_QUAD|SPEAKER_LOW_FREQUENCY; break; -#ifdef PAWIN_SPEAKER_5POINT1_SURROUND - case 6: wavex->dwChannelMask = PAWIN_SPEAKER_5POINT1_SURROUND; break; -#else - case 6: wavex->dwChannelMask = PAWIN_SPEAKER_5POINT1; break; -#endif -#ifdef PAWIN_SPEAKER_5POINT1_SURROUND - case 7: wavex->dwChannelMask = PAWIN_SPEAKER_5POINT1_SURROUND|SPEAKER_BACK_CENTER; break; -#else - case 7: wavex->dwChannelMask = PAWIN_SPEAKER_5POINT1|SPEAKER_BACK_CENTER; break; -#endif -#ifdef PAWIN_SPEAKER_7POINT1_SURROUND - case 8: wavex->dwChannelMask = PAWIN_SPEAKER_7POINT1_SURROUND; break; -#else - case 8: wavex->dwChannelMask = PAWIN_SPEAKER_7POINT1; break; -#endif - - default: wavex->dwChannelMask = 0; - } - } - } - - old->nBlockAlign = old->nChannels * (old->wBitsPerSample / 8); - old->nAvgBytesPerSec = old->nSamplesPerSec * old->nBlockAlign; - - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -static HRESULT GetAlternativeSampleFormatExclusive(IAudioClient *client, double sampleRate, - const PaStreamParameters *params, WAVEFORMATEXTENSIBLE *outWavex, BOOL packedSampleFormatOnly) -{ - HRESULT hr = !S_OK; - AUDCLNT_SHAREMODE shareMode = AUDCLNT_SHAREMODE_EXCLUSIVE; - WAVEFORMATEXTENSIBLE testFormat; - PaStreamParameters testParams; - int i; - static const PaSampleFormat bestToWorst[] = { paInt32, paInt24, paFloat32, paInt16 }; - - // Try combination Stereo (2 channels) and then we will use our custom mono-stereo mixer - if (params->channelCount == 1) - { - testParams = (*params); - testParams.channelCount = 2; - - if (MakeWaveFormatFromParams(&testFormat, &testParams, sampleRate, packedSampleFormatOnly) == paNoError) - { - if ((hr = IAudioClient_IsFormatSupported(client, shareMode, &testFormat.Format, NULL)) == S_OK) - { - (*outWavex) = testFormat; - return hr; - } - } - - // Try selecting suitable sample type - for (i = 0; i < STATIC_ARRAY_SIZE(bestToWorst); ++i) - { - testParams.sampleFormat = bestToWorst[i]; - - if (MakeWaveFormatFromParams(&testFormat, &testParams, sampleRate, packedSampleFormatOnly) == paNoError) - { - if ((hr = 
IAudioClient_IsFormatSupported(client, shareMode, &testFormat.Format, NULL)) == S_OK) - { - (*outWavex) = testFormat; - return hr; - } - } - } - } - - // Try selecting suitable sample type - testParams = (*params); - for (i = 0; i < STATIC_ARRAY_SIZE(bestToWorst); ++i) - { - testParams.sampleFormat = bestToWorst[i]; - - if (MakeWaveFormatFromParams(&testFormat, &testParams, sampleRate, packedSampleFormatOnly) == paNoError) - { - if ((hr = IAudioClient_IsFormatSupported(client, shareMode, &testFormat.Format, NULL)) == S_OK) - { - (*outWavex) = testFormat; - return hr; - } - } - } - - return hr; -} - -// ------------------------------------------------------------------------------------------ -static PaError GetClosestFormat(IAudioClient *client, double sampleRate, const PaStreamParameters *_params, - AUDCLNT_SHAREMODE shareMode, WAVEFORMATEXTENSIBLE *outWavex, BOOL output) -{ - PaWasapiStreamInfo *streamInfo = (PaWasapiStreamInfo *)_params->hostApiSpecificStreamInfo; - WAVEFORMATEX *sharedClosestMatch = NULL; - HRESULT hr = !S_OK; - PaStreamParameters params = (*_params); - const BOOL explicitFormat = (streamInfo != NULL) && ((streamInfo->flags & paWinWasapiExplicitSampleFormat) == paWinWasapiExplicitSampleFormat); - (void)output; - - /* It was not noticed that 24-bit Input producing no output while device accepts this format. - To fix this issue let's ask for 32-bits and let PA converters convert host 32-bit data - to 24-bit for user-space. The bug concerns Vista, if Windows 7 supports 24-bits for Input - please report to PortAudio developers to exclude Windows 7. - */ - /*if ((params.sampleFormat == paInt24) && (output == FALSE)) - params.sampleFormat = paFloat32;*/ // <<< The silence was due to missing Int32_To_Int24_Dither implementation - - // Try standard approach, e.g. if data is > 16 bits it will be packed into 32-bit containers - MakeWaveFormatFromParams(outWavex, ¶ms, sampleRate, FALSE); - - // If built-in PCM converter requested then shared mode format will always succeed - if ((GetWindowsVersion() >= WINDOWS_7_SERVER2008R2) && - (shareMode == AUDCLNT_SHAREMODE_SHARED) && - ((streamInfo != NULL) && (streamInfo->flags & paWinWasapiAutoConvert))) - return paFormatIsSupported; - - hr = IAudioClient_IsFormatSupported(client, shareMode, &outWavex->Format, (shareMode == AUDCLNT_SHAREMODE_SHARED ? &sharedClosestMatch : NULL)); - - // Exclusive mode can require packed format for some devices - if ((hr != S_OK) && (shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE)) - { - // Enforce packed only format, e.g. data bits will not be packed into 32-bit containers in any case - MakeWaveFormatFromParams(outWavex, ¶ms, sampleRate, TRUE); - hr = IAudioClient_IsFormatSupported(client, shareMode, &outWavex->Format, NULL); - } - - if (hr == S_OK) - { - return paFormatIsSupported; - } - else - if (sharedClosestMatch != NULL) - { - WORD bitsPerSample; - - if (sharedClosestMatch->wFormatTag == WAVE_FORMAT_EXTENSIBLE) - memcpy(outWavex, sharedClosestMatch, sizeof(WAVEFORMATEXTENSIBLE)); - else - memcpy(outWavex, sharedClosestMatch, sizeof(WAVEFORMATEX)); - - CoTaskMemFree(sharedClosestMatch); - sharedClosestMatch = NULL; - - // Validate SampleRate - if ((DWORD)sampleRate != outWavex->Format.nSamplesPerSec) - return paInvalidSampleRate; - - // Validate Channel count - if ((WORD)params.channelCount != outWavex->Format.nChannels) - { - // If mono, then driver does not support 1 channel, we use internal workaround - // of tiny software mixing functionality, e.g. 
we provide to user buffer 1 channel - // but then mix into 2 for device buffer - if ((params.channelCount == 1) && (outWavex->Format.nChannels == 2)) - return paFormatIsSupported; - else - return paInvalidChannelCount; - } - - // Validate Sample format - if ((bitsPerSample = PaSampleFormatToBitsPerSample(params.sampleFormat)) == 0) - return paSampleFormatNotSupported; - - // Accepted format - return paFormatIsSupported; - } - else - if ((shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE) && !explicitFormat) - { - // Try standard approach, e.g. if data is > 16 bits it will be packed into 32-bit containers - if ((hr = GetAlternativeSampleFormatExclusive(client, sampleRate, ¶ms, outWavex, FALSE)) == S_OK) - return paFormatIsSupported; - - // Enforce packed only format, e.g. data bits will not be packed into 32-bit containers in any case - if ((hr = GetAlternativeSampleFormatExclusive(client, sampleRate, ¶ms, outWavex, TRUE)) == S_OK) - return paFormatIsSupported; - - // Log failure - LogHostError(hr); - } - else - { - // Exclusive mode and requested strict format, WASAPI did not accept this sample format - LogHostError(hr); - } - - return paInvalidSampleRate; -} - -// ------------------------------------------------------------------------------------------ -static PaError IsStreamParamsValid(struct PaUtilHostApiRepresentation *hostApi, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate) -{ - if (hostApi == NULL) - return paHostApiNotFound; - if ((UINT32)sampleRate == 0) - return paInvalidSampleRate; - - if (inputParameters != NULL) - { - /* all standard sample formats are supported by the buffer adapter, - this implementation doesn't support any custom sample formats */ - // Note: paCustomFormat is now 8.24 (24-bits in 32-bit containers) - //if (inputParameters->sampleFormat & paCustomFormat) - // return paSampleFormatNotSupported; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - if (inputParameters->device == paUseHostApiSpecificDeviceSpecification) - return paInvalidDevice; - - /* check that input device can support inputChannelCount */ - if (inputParameters->channelCount > hostApi->deviceInfos[ inputParameters->device ]->maxInputChannels) - return paInvalidChannelCount; - - /* validate inputStreamInfo */ - if (inputParameters->hostApiSpecificStreamInfo) - { - PaWasapiStreamInfo *inputStreamInfo = (PaWasapiStreamInfo *)inputParameters->hostApiSpecificStreamInfo; - if ((inputStreamInfo->size != sizeof(PaWasapiStreamInfo)) || - (inputStreamInfo->version != 1) || - (inputStreamInfo->hostApiType != paWASAPI)) - { - return paIncompatibleHostApiSpecificStreamInfo; - } - } - - return paNoError; - } - - if (outputParameters != NULL) - { - /* all standard sample formats are supported by the buffer adapter, - this implementation doesn't support any custom sample formats */ - // Note: paCustomFormat is now 8.24 (24-bits in 32-bit containers) - //if (outputParameters->sampleFormat & paCustomFormat) - // return paSampleFormatNotSupported; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - if (outputParameters->device == paUseHostApiSpecificDeviceSpecification) - return paInvalidDevice; - - /* check that output device can support outputChannelCount */ - if (outputParameters->channelCount > hostApi->deviceInfos[ outputParameters->device ]->maxOutputChannels) - return paInvalidChannelCount; - - /* validate 
outputStreamInfo */ - if(outputParameters->hostApiSpecificStreamInfo) - { - PaWasapiStreamInfo *outputStreamInfo = (PaWasapiStreamInfo *)outputParameters->hostApiSpecificStreamInfo; - if ((outputStreamInfo->size != sizeof(PaWasapiStreamInfo)) || - (outputStreamInfo->version != 1) || - (outputStreamInfo->hostApiType != paWASAPI)) - { - return paIncompatibleHostApiSpecificStreamInfo; - } - } - - return paNoError; - } - - return (inputParameters || outputParameters ? paNoError : paInternalError); -} - -// ------------------------------------------------------------------------------------------ -static PaError IsFormatSupported( struct PaUtilHostApiRepresentation *hostApi, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate ) -{ - IAudioClient *tmpClient = NULL; - PaWasapiHostApiRepresentation *paWasapi = (PaWasapiHostApiRepresentation*)hostApi; - PaWasapiStreamInfo *inputStreamInfo = NULL, *outputStreamInfo = NULL; - - // Validate PaStreamParameters - PaError error; - if ((error = IsStreamParamsValid(hostApi, inputParameters, outputParameters, sampleRate)) != paNoError) - return error; - - if (inputParameters != NULL) - { - WAVEFORMATEXTENSIBLE wavex; - HRESULT hr; - PaError answer; - AUDCLNT_SHAREMODE shareMode = AUDCLNT_SHAREMODE_SHARED; - inputStreamInfo = (PaWasapiStreamInfo *)inputParameters->hostApiSpecificStreamInfo; - - if (inputStreamInfo && (inputStreamInfo->flags & paWinWasapiExclusive)) - shareMode = AUDCLNT_SHAREMODE_EXCLUSIVE; - - hr = ActivateAudioInterface(&paWasapi->devInfo[inputParameters->device], inputStreamInfo, &tmpClient); - if (hr != S_OK) - { - LogHostError(hr); - return paInvalidDevice; - } - - answer = GetClosestFormat(tmpClient, sampleRate, inputParameters, shareMode, &wavex, FALSE); - SAFE_RELEASE(tmpClient); - - if (answer != paFormatIsSupported) - return answer; - } - - if (outputParameters != NULL) - { - HRESULT hr; - WAVEFORMATEXTENSIBLE wavex; - PaError answer; - AUDCLNT_SHAREMODE shareMode = AUDCLNT_SHAREMODE_SHARED; - outputStreamInfo = (PaWasapiStreamInfo *)outputParameters->hostApiSpecificStreamInfo; - - if (outputStreamInfo && (outputStreamInfo->flags & paWinWasapiExclusive)) - shareMode = AUDCLNT_SHAREMODE_EXCLUSIVE; - - hr = ActivateAudioInterface(&paWasapi->devInfo[outputParameters->device], outputStreamInfo, &tmpClient); - if (hr != S_OK) - { - LogHostError(hr); - return paInvalidDevice; - } - - answer = GetClosestFormat(tmpClient, sampleRate, outputParameters, shareMode, &wavex, TRUE); - SAFE_RELEASE(tmpClient); - - if (answer != paFormatIsSupported) - return answer; - } - - return paFormatIsSupported; -} - -// ------------------------------------------------------------------------------------------ -static PaUint32 _GetFramesPerHostBuffer(PaUint32 userFramesPerBuffer, PaTime suggestedLatency, double sampleRate, PaUint32 TimerJitterMs) -{ - PaUint32 frames = userFramesPerBuffer + max( userFramesPerBuffer, (PaUint32)(suggestedLatency * sampleRate) ); - frames += (PaUint32)((sampleRate * 0.001) * TimerJitterMs); - return frames; -} - -// ------------------------------------------------------------------------------------------ -static void _RecalculateBuffersCount(PaWasapiSubStream *sub, UINT32 userFramesPerBuffer, UINT32 framesPerLatency, - BOOL fullDuplex, BOOL output) -{ - // Count buffers (must be at least 1) - sub->buffers = (userFramesPerBuffer != 0 ? 
framesPerLatency / userFramesPerBuffer : 1); - if (sub->buffers == 0) - sub->buffers = 1; - - // Determine number of buffers used: - // - Full-duplex mode will lead to period difference, thus only 1 - // - Input mode, only 1, as WASAPI allows extraction of only 1 packet - // - For Shared mode we use double buffering - if ((sub->shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE) || fullDuplex) - { - BOOL eventMode = ((sub->streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK) == AUDCLNT_STREAMFLAGS_EVENTCALLBACK); - - // Exclusive mode does not allow >1 buffers be used for Event interface, e.g. GetBuffer - // call must acquire max buffer size and it all must be processed. - if (eventMode) - sub->userBufferAndHostMatch = 1; - - // Full-duplex or Event mode: prefer paUtilBoundedHostBufferSize because exclusive mode will starve - // and produce glitchy audio - // Output Polling mode: prefer paUtilFixedHostBufferSize (buffers != 1) for polling mode is it allows - // to consume user data by fixed size data chunks and thus lowers memory movement (less CPU usage) - if (fullDuplex || eventMode || !output) - sub->buffers = 1; - } -} - -// ------------------------------------------------------------------------------------------ -static void _CalculateAlignedPeriod(PaWasapiSubStream *pSub, UINT32 *nFramesPerLatency, ALIGN_FUNC pAlignFunc) -{ - // Align frames to HD Audio packet size of 128 bytes for Exclusive mode only. - // Not aligning on Windows Vista will cause Event timeout, although Windows 7 will - // return AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED error to realign buffer. Aligning is necessary - // for Exclusive mode only! when audio data is fed directly to hardware. - if (pSub->shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE) - { - (*nFramesPerLatency) = AlignFramesPerBuffer((*nFramesPerLatency), - pSub->wavex.Format.nBlockAlign, pAlignFunc); - } - - // Calculate period - pSub->period = MakeHnsPeriod((*nFramesPerLatency), pSub->wavex.Format.nSamplesPerSec); -} - -// ------------------------------------------------------------------------------------------ -static void _CalculatePeriodicity(PaWasapiSubStream *pSub, BOOL output, REFERENCE_TIME *periodicity) -{ - // Note: according to Microsoft docs for IAudioClient::Initialize we can set periodicity of the buffer - // only for Exclusive mode. By setting periodicity almost equal to the user buffer frames we can - // achieve high quality (less glitchy) low-latency audio. - if (pSub->shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE) - { - const PaWasapiDeviceInfo *pInfo = pSub->params.device_info; - - // By default periodicity equals to the full buffer (legacy PA WASAPI's behavior) - (*periodicity) = pSub->period; - - // Try make buffer ready for I/O once we request the buffer readiness for it. Only Polling mode - // because for Event mode buffer size and periodicity must be equal according to Microsoft - // documentation for IAudioClient::Initialize. 
- // - // TO-DO: try spread to capture and full-duplex cases (not tested and therefore disabled) - // - if (((pSub->streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK) == 0) && - (output && !pSub->params.full_duplex)) - { - UINT32 alignedFrames; - REFERENCE_TIME userPeriodicity; - - // Align frames backwards, so device will likely make buffer read ready when we are ready - // to read it (our scheduling will wait for amount of millisoconds of frames_per_buffer) - alignedFrames = AlignFramesPerBuffer(pSub->params.frames_per_buffer, - pSub->wavex.Format.nBlockAlign, ALIGN_BWD); - - userPeriodicity = MakeHnsPeriod(alignedFrames, pSub->wavex.Format.nSamplesPerSec); - - // Must not be larger than buffer size - if (userPeriodicity > pSub->period) - userPeriodicity = pSub->period; - - // Must not be smaller than minimum supported by the device - if (userPeriodicity < pInfo->MinimumDevicePeriod) - userPeriodicity = pInfo->MinimumDevicePeriod; - - (*periodicity) = userPeriodicity; - } - } - else - (*periodicity) = 0; -} - -// ------------------------------------------------------------------------------------------ -static HRESULT CreateAudioClient(PaWasapiStream *pStream, PaWasapiSubStream *pSub, BOOL output, PaError *pa_error) -{ - PaError error; - HRESULT hr; - const PaWasapiDeviceInfo *pInfo = pSub->params.device_info; - const PaStreamParameters *params = &pSub->params.stream_params; - const double sampleRate = pSub->params.sample_rate; - const BOOL fullDuplex = pSub->params.full_duplex; - const UINT32 userFramesPerBuffer = pSub->params.frames_per_buffer; - UINT32 framesPerLatency = userFramesPerBuffer; - IAudioClient *audioClient = NULL; - REFERENCE_TIME eventPeriodicity = 0; - - // Assume default failure due to some reason - (*pa_error) = paInvalidDevice; - - // Validate parameters - if (!pSub || !pInfo || !params) - { - (*pa_error) = paBadStreamPtr; - return E_POINTER; - } - if ((UINT32)sampleRate == 0) - { - (*pa_error) = paInvalidSampleRate; - return E_INVALIDARG; - } - - // Get the audio client - if (FAILED(hr = ActivateAudioInterface(pInfo, &pSub->params.wasapi_params, &audioClient))) - { - (*pa_error) = paInsufficientMemory; - LogHostError(hr); - goto done; - } - - // Get closest format - if ((error = GetClosestFormat(audioClient, sampleRate, params, pSub->shareMode, &pSub->wavex, output)) != paFormatIsSupported) - { - (*pa_error) = error; - LogHostError(hr = AUDCLNT_E_UNSUPPORTED_FORMAT); - goto done; // fail, format not supported - } - - // Check for Mono <<>> Stereo workaround - if ((params->channelCount == 1) && (pSub->wavex.Format.nChannels == 2)) - { - // select mixer - pSub->monoMixer = GetMonoToStereoMixer(&pSub->wavex, (pInfo->flow == eRender ? MIX_DIR__1TO2 : MIX_DIR__2TO1_L)); - if (pSub->monoMixer == NULL) - { - (*pa_error) = paInvalidChannelCount; - LogHostError(hr = AUDCLNT_E_UNSUPPORTED_FORMAT); - goto done; // fail, no mixer for format - } - } - - // Calculate host buffer size - if ((pSub->shareMode != AUDCLNT_SHAREMODE_EXCLUSIVE) && - (!pSub->streamFlags || ((pSub->streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK) == 0))) - { - framesPerLatency = _GetFramesPerHostBuffer(userFramesPerBuffer, - params->suggestedLatency, pSub->wavex.Format.nSamplesPerSec, 0/*, - (pSub->streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK ? 
0 : 1)*/); - } - else - { - #ifdef PA_WASAPI_FORCE_POLL_IF_LARGE_BUFFER - REFERENCE_TIME overall; - #endif - - // Work 1:1 with user buffer (only polling allows to use >1) - framesPerLatency += MakeFramesFromHns(SecondsTonano100(params->suggestedLatency), pSub->wavex.Format.nSamplesPerSec); - - // Force Polling if overall latency is >= 21.33ms as it allows to use 100% CPU in a callback, - // or user specified latency parameter. - #ifdef PA_WASAPI_FORCE_POLL_IF_LARGE_BUFFER - overall = MakeHnsPeriod(framesPerLatency, pSub->wavex.Format.nSamplesPerSec); - if (overall >= (106667 * 2)/*21.33ms*/) - { - framesPerLatency = _GetFramesPerHostBuffer(userFramesPerBuffer, - params->suggestedLatency, pSub->wavex.Format.nSamplesPerSec, 0/*, - (streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK ? 0 : 1)*/); - - // Use Polling interface - pSub->streamFlags &= ~AUDCLNT_STREAMFLAGS_EVENTCALLBACK; - PRINT(("WASAPI: CreateAudioClient: forcing POLL mode\n")); - } - #endif - } - - // For full-duplex output resize buffer to be the same as for input - if (output && fullDuplex) - framesPerLatency = pStream->in.framesPerHostCallback; - - // Avoid 0 frames - if (framesPerLatency == 0) - framesPerLatency = MakeFramesFromHns(pInfo->DefaultDevicePeriod, pSub->wavex.Format.nSamplesPerSec); - - // Exclusive Input stream renders data in 6 packets, we must set then the size of - // single packet, total buffer size, e.g. required latency will be PacketSize * 6 - if (!output && (pSub->shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE)) - { - // Do it only for Polling mode - if ((pSub->streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK) == 0) - framesPerLatency /= WASAPI_PACKETS_PER_INPUT_BUFFER; - } - - // Calculate aligned period - _CalculateAlignedPeriod(pSub, &framesPerLatency, ALIGN_BWD); - - /*! Enforce min/max period for device in Shared mode to avoid bad audio quality. - Avoid doing so for Exclusive mode as alignment will suffer. - */ - if (pSub->shareMode == AUDCLNT_SHAREMODE_SHARED) - { - if (pSub->period < pInfo->DefaultDevicePeriod) - { - pSub->period = pInfo->DefaultDevicePeriod; - - // Recalculate aligned period - framesPerLatency = MakeFramesFromHns(pSub->period, pSub->wavex.Format.nSamplesPerSec); - _CalculateAlignedPeriod(pSub, &framesPerLatency, ALIGN_BWD); - } - } - else - { - if (pSub->period < pInfo->MinimumDevicePeriod) - { - pSub->period = pInfo->MinimumDevicePeriod; - - // Recalculate aligned period - framesPerLatency = MakeFramesFromHns(pSub->period, pSub->wavex.Format.nSamplesPerSec); - _CalculateAlignedPeriod(pSub, &framesPerLatency, ALIGN_FWD); - } - } - - /*! Windows 7 does not allow to set latency lower than minimal device period and will - return error: AUDCLNT_E_INVALID_DEVICE_PERIOD. Under Vista we enforce the same behavior - manually for unified behavior on all platforms. - */ - { - /*! AUDCLNT_E_BUFFER_SIZE_ERROR: Applies to Windows 7 and later. - Indicates that the buffer duration value requested by an exclusive-mode client is - out of range. The requested duration value for pull mode must not be greater than - 500 milliseconds; for push mode the duration value must not be greater than 2 seconds. 
- */ - if (pSub->shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE) - { - static const REFERENCE_TIME MAX_BUFFER_EVENT_DURATION = 500 * 10000; - static const REFERENCE_TIME MAX_BUFFER_POLL_DURATION = 2000 * 10000; - - // Pull mode, max 500ms - if (pSub->streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK) - { - if (pSub->period > MAX_BUFFER_EVENT_DURATION) - { - pSub->period = MAX_BUFFER_EVENT_DURATION; - - // Recalculate aligned period - framesPerLatency = MakeFramesFromHns(pSub->period, pSub->wavex.Format.nSamplesPerSec); - _CalculateAlignedPeriod(pSub, &framesPerLatency, ALIGN_BWD); - } - } - // Push mode, max 2000ms - else - { - if (pSub->period > MAX_BUFFER_POLL_DURATION) - { - pSub->period = MAX_BUFFER_POLL_DURATION; - - // Recalculate aligned period - framesPerLatency = MakeFramesFromHns(pSub->period, pSub->wavex.Format.nSamplesPerSec); - _CalculateAlignedPeriod(pSub, &framesPerLatency, ALIGN_BWD); - } - } - } - } - - // Set device scheduling period (always 0 in Shared mode according to Microsoft docs) - _CalculatePeriodicity(pSub, output, &eventPeriodicity); - - // Open the stream and associate it with an audio session - hr = IAudioClient_Initialize(audioClient, - pSub->shareMode, - pSub->streamFlags, - pSub->period, - eventPeriodicity, - &pSub->wavex.Format, - NULL); - - // [Output only] Check if buffer size is the one we requested in Exclusive mode, for UAC1 USB DACs WASAPI - // can allocate internal buffer equal to 8 times of pSub->period that has to be corrected in order to match - // the requested latency - if (output && SUCCEEDED(hr) && (pSub->shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE)) - { - UINT32 maxBufferFrames; - - if (FAILED(hr = IAudioClient_GetBufferSize(audioClient, &maxBufferFrames))) - { - (*pa_error) = paInvalidDevice; - LogHostError(hr); - goto done; - } - - // For Exclusive mode for UAC1 devices maxBufferFrames may be framesPerLatency * 8 but check any difference - // to be able to guarantee the latency user requested and also resulted framesPerLatency may be bigger than - // 2 seconds that will cause audio client not operational (GetCurrentPadding() will return always 0) - if (maxBufferFrames >= (framesPerLatency * 2)) - { - UINT32 ratio = maxBufferFrames / framesPerLatency; - - PRINT(("WASAPI: CreateAudioClient: detected %d times larger buffer than requested, correct to match user latency\n", ratio)); - - // Get new aligned frames lowered by calculated ratio - framesPerLatency = MakeFramesFromHns(pSub->period / ratio, pSub->wavex.Format.nSamplesPerSec); - _CalculateAlignedPeriod(pSub, &framesPerLatency, ALIGN_BWD); - - // Make sure we are not below the minimum period - if (pSub->period < pInfo->MinimumDevicePeriod) - pSub->period = pInfo->MinimumDevicePeriod; - - // Release previous client - SAFE_RELEASE(audioClient); - - // Create a new audio client - if (FAILED(hr = ActivateAudioInterface(pInfo, &pSub->params.wasapi_params, &audioClient))) - { - (*pa_error) = paInsufficientMemory; - LogHostError(hr); - goto done; - } - - // Set device scheduling period (always 0 in Shared mode according to Microsoft docs) - _CalculatePeriodicity(pSub, output, &eventPeriodicity); - - // Open the stream and associate it with an audio session - hr = IAudioClient_Initialize(audioClient, - pSub->shareMode, - pSub->streamFlags, - pSub->period, - eventPeriodicity, - &pSub->wavex.Format, - NULL); - } - } - - /*! WASAPI is tricky on large device buffer, sometimes 2000ms can be allocated sometimes - less. 
There is no known guaranteed level thus we make subsequent tries by decreasing - buffer by 100ms per try. - */ - while ((hr == E_OUTOFMEMORY) && (pSub->period > (100 * 10000))) - { - PRINT(("WASAPI: CreateAudioClient: decreasing buffer size to %d milliseconds\n", (pSub->period / 10000))); - - // Decrease by 100ms and try again - pSub->period -= (100 * 10000); - - // Recalculate aligned period - framesPerLatency = MakeFramesFromHns(pSub->period, pSub->wavex.Format.nSamplesPerSec); - _CalculateAlignedPeriod(pSub, &framesPerLatency, ALIGN_BWD); - - // Release the previous allocations - SAFE_RELEASE(audioClient); - - // Create a new audio client - if (FAILED(hr = ActivateAudioInterface(pInfo, &pSub->params.wasapi_params, &audioClient))) - { - (*pa_error) = paInsufficientMemory; - LogHostError(hr); - goto done; - } - - // Set device scheduling period (always 0 in Shared mode according to Microsoft docs) - _CalculatePeriodicity(pSub, output, &eventPeriodicity); - - // Open the stream and associate it with an audio session - hr = IAudioClient_Initialize(audioClient, - pSub->shareMode, - pSub->streamFlags, - pSub->period, - eventPeriodicity, - &pSub->wavex.Format, - NULL); - } - - /*! WASAPI buffer size or alignment failure. Fallback to using default size and alignment. - */ - if ((hr == AUDCLNT_E_BUFFER_SIZE_ERROR) || (hr == AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED)) - { - // Use default - pSub->period = pInfo->DefaultDevicePeriod; - - PRINT(("WASAPI: CreateAudioClient: correcting buffer size/alignment to device default\n")); - - // Release the previous allocations - SAFE_RELEASE(audioClient); - - // Create a new audio client - if (FAILED(hr = ActivateAudioInterface(pInfo, &pSub->params.wasapi_params, &audioClient))) - { - (*pa_error) = paInsufficientMemory; - LogHostError(hr); - goto done; - } - - // Set device scheduling period (always 0 in Shared mode according to Microsoft docs) - _CalculatePeriodicity(pSub, output, &eventPeriodicity); - - // Open the stream and associate it with an audio session - hr = IAudioClient_Initialize(audioClient, - pSub->shareMode, - pSub->streamFlags, - pSub->period, - eventPeriodicity, - &pSub->wavex.Format, - NULL); - } - - // Error has no workaround, fail completely - if (FAILED(hr)) - { - (*pa_error) = paInvalidDevice; - LogHostError(hr); - goto done; - } - - // Set client - pSub->clientParent = audioClient; - IAudioClient_AddRef(pSub->clientParent); - - // Recalculate buffers count - _RecalculateBuffersCount(pSub, userFramesPerBuffer, MakeFramesFromHns(pSub->period, pSub->wavex.Format.nSamplesPerSec), - fullDuplex, output); - - // No error, client is successfully created - (*pa_error) = paNoError; - -done: - - // Clean up - SAFE_RELEASE(audioClient); - return hr; -} - -// ------------------------------------------------------------------------------------------ -static PaError ActivateAudioClientOutput(PaWasapiStream *stream) -{ - HRESULT hr; - PaError result; - UINT32 maxBufferSize; - PaTime bufferLatency; - const UINT32 framesPerBuffer = stream->out.params.frames_per_buffer; - - // Create Audio client - if (FAILED(hr = CreateAudioClient(stream, &stream->out, TRUE, &result))) - { - LogPaError(result); - goto error; - } - LogWAVEFORMATEXTENSIBLE(&stream->out.wavex); - - // Activate volume - stream->outVol = NULL; - /*hr = info->device->Activate( - __uuidof(IAudioEndpointVolume), CLSCTX_INPROC_SERVER, NULL, - (void**)&stream->outVol); - if (hr != S_OK) - return paInvalidDevice;*/ - - // Get max possible buffer size to check if it is not less than that we request - if 
(FAILED(hr = IAudioClient_GetBufferSize(stream->out.clientParent, &maxBufferSize))) - { - LogHostError(hr); - LogPaError(result = paInvalidDevice); - goto error; - } - - // Correct buffer to max size if it maxed out result of GetBufferSize - stream->out.bufferSize = maxBufferSize; - - // Number of frames that are required at each period - stream->out.framesPerHostCallback = maxBufferSize; - - // Calculate frames per single buffer, if buffers > 1 then always framesPerBuffer - stream->out.framesPerBuffer = - (stream->out.userBufferAndHostMatch ? stream->out.framesPerHostCallback : framesPerBuffer); - - // Calculate buffer latency - bufferLatency = (PaTime)maxBufferSize / stream->out.wavex.Format.nSamplesPerSec; - - // Append buffer latency to interface latency in shared mode (see GetStreamLatency notes) - stream->out.latencySeconds = bufferLatency; - - PRINT(("WASAPI::OpenStream(output): framesPerUser[ %d ] framesPerHost[ %d ] latency[ %.02fms ] exclusive[ %s ] wow64_fix[ %s ] mode[ %s ]\n", (UINT32)framesPerBuffer, (UINT32)stream->out.framesPerHostCallback, (float)(stream->out.latencySeconds*1000.0f), (stream->out.shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE ? "YES" : "NO"), (stream->out.params.wow64_workaround ? "YES" : "NO"), (stream->out.streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK ? "EVENT" : "POLL"))); - - return paNoError; - -error: - - return result; -} - -// ------------------------------------------------------------------------------------------ -static PaError ActivateAudioClientInput(PaWasapiStream *stream) -{ - HRESULT hr; - PaError result; - UINT32 maxBufferSize; - PaTime bufferLatency; - const UINT32 framesPerBuffer = stream->in.params.frames_per_buffer; - - // Create Audio client - if (FAILED(hr = CreateAudioClient(stream, &stream->in, FALSE, &result))) - { - LogPaError(result); - goto error; - } - LogWAVEFORMATEXTENSIBLE(&stream->in.wavex); - - // Create volume mgr - stream->inVol = NULL; - /*hr = info->device->Activate( - __uuidof(IAudioEndpointVolume), CLSCTX_INPROC_SERVER, NULL, - (void**)&stream->inVol); - if (hr != S_OK) - return paInvalidDevice;*/ - - // Get max possible buffer size to check if it is not less than that we request - if (FAILED(hr = IAudioClient_GetBufferSize(stream->in.clientParent, &maxBufferSize))) - { - LogHostError(hr); - LogPaError(result = paInvalidDevice); - goto error; - } - - // Correct buffer to max size if it maxed out result of GetBufferSize - stream->in.bufferSize = maxBufferSize; - - // Get interface latency (actually unneeded as we calculate latency from the size - // of maxBufferSize). - if (FAILED(hr = IAudioClient_GetStreamLatency(stream->in.clientParent, &stream->in.deviceLatency))) - { - LogHostError(hr); - LogPaError(result = paInvalidDevice); - goto error; - } - //stream->in.latencySeconds = nano100ToSeconds(stream->in.deviceLatency); - - // Number of frames that are required at each period - stream->in.framesPerHostCallback = maxBufferSize; - - // Calculate frames per single buffer, if buffers > 1 then always framesPerBuffer - stream->in.framesPerBuffer = - (stream->in.userBufferAndHostMatch ? 
stream->in.framesPerHostCallback : framesPerBuffer); - - // Calculate buffer latency - bufferLatency = (PaTime)maxBufferSize / stream->in.wavex.Format.nSamplesPerSec; - - // Append buffer latency to interface latency in shared mode (see GetStreamLatency notes) - stream->in.latencySeconds = bufferLatency; - - PRINT(("WASAPI::OpenStream(input): framesPerUser[ %d ] framesPerHost[ %d ] latency[ %.02fms ] exclusive[ %s ] wow64_fix[ %s ] mode[ %s ]\n", (UINT32)framesPerBuffer, (UINT32)stream->in.framesPerHostCallback, (float)(stream->in.latencySeconds*1000.0f), (stream->in.shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE ? "YES" : "NO"), (stream->in.params.wow64_workaround ? "YES" : "NO"), (stream->in.streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK ? "EVENT" : "POLL"))); - - return paNoError; - -error: - - return result; -} - -// ------------------------------------------------------------------------------------------ -static PaError OpenStream( struct PaUtilHostApiRepresentation *hostApi, - PaStream** s, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate, - unsigned long framesPerBuffer, - PaStreamFlags streamFlags, - PaStreamCallback *streamCallback, - void *userData ) -{ - PaError result = paNoError; - HRESULT hr; - PaWasapiHostApiRepresentation *paWasapi = (PaWasapiHostApiRepresentation*)hostApi; - PaWasapiStream *stream = NULL; - int inputChannelCount, outputChannelCount; - PaSampleFormat inputSampleFormat, outputSampleFormat; - PaSampleFormat hostInputSampleFormat, hostOutputSampleFormat; - PaWasapiStreamInfo *inputStreamInfo = NULL, *outputStreamInfo = NULL; - PaWasapiDeviceInfo *info = NULL; - ULONG framesPerHostCallback; - PaUtilHostBufferSizeMode bufferMode; - const BOOL fullDuplex = ((inputParameters != NULL) && (outputParameters != NULL)); - BOOL useInputBufferProcessor = (inputParameters != NULL), useOutputBufferProcessor = (outputParameters != NULL); - - // validate PaStreamParameters - if ((result = IsStreamParamsValid(hostApi, inputParameters, outputParameters, sampleRate)) != paNoError) - return LogPaError(result); - - // Validate platform specific flags - if ((streamFlags & paPlatformSpecificFlags) != 0) - { - LogPaError(result = paInvalidFlag); /* unexpected platform specific flag */ - goto error; - } - - // Allocate memory for PaWasapiStream - if ((stream = (PaWasapiStream *)PaUtil_AllocateMemory(sizeof(PaWasapiStream))) == NULL) - { - LogPaError(result = paInsufficientMemory); - goto error; - } - - // Default thread priority is Audio: for exclusive mode we will use Pro Audio. 
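- // (eThreadPriorityAudio and eThreadPriorityProAudio name the priority classes the processing thread
- // later registers itself with; the exclusive-mode branches below raise the default to Pro Audio, and
- // the paWinWasapiThreadPriority flag lets the caller pick a level explicitly.)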
- stream->nThreadPriority = eThreadPriorityAudio; - - // Set default number of frames: paFramesPerBufferUnspecified - if (framesPerBuffer == paFramesPerBufferUnspecified) - { - UINT32 framesPerBufferIn = 0, framesPerBufferOut = 0; - if (inputParameters != NULL) - { - info = &paWasapi->devInfo[inputParameters->device]; - framesPerBufferIn = MakeFramesFromHns(info->DefaultDevicePeriod, (UINT32)sampleRate); - } - if (outputParameters != NULL) - { - info = &paWasapi->devInfo[outputParameters->device]; - framesPerBufferOut = MakeFramesFromHns(info->DefaultDevicePeriod, (UINT32)sampleRate); - } - // choosing maximum default size - framesPerBuffer = max(framesPerBufferIn, framesPerBufferOut); - } - if (framesPerBuffer == 0) - framesPerBuffer = ((UINT32)sampleRate / 100) * 2; - - // Try create device: Input - if (inputParameters != NULL) - { - inputChannelCount = inputParameters->channelCount; - inputSampleFormat = GetSampleFormatForIO(inputParameters->sampleFormat); - info = &paWasapi->devInfo[inputParameters->device]; - - // default Shared Mode - stream->in.shareMode = AUDCLNT_SHAREMODE_SHARED; - - // PaWasapiStreamInfo - if (inputParameters->hostApiSpecificStreamInfo != NULL) - { - memcpy(&stream->in.params.wasapi_params, inputParameters->hostApiSpecificStreamInfo, min(sizeof(stream->in.params.wasapi_params), ((PaWasapiStreamInfo *)inputParameters->hostApiSpecificStreamInfo)->size)); - stream->in.params.wasapi_params.size = sizeof(stream->in.params.wasapi_params); - - stream->in.params.stream_params.hostApiSpecificStreamInfo = &stream->in.params.wasapi_params; - inputStreamInfo = &stream->in.params.wasapi_params; - - stream->in.flags = inputStreamInfo->flags; - - // Exclusive Mode - if (inputStreamInfo->flags & paWinWasapiExclusive) - { - // Boost thread priority - stream->nThreadPriority = eThreadPriorityProAudio; - // Make Exclusive - stream->in.shareMode = AUDCLNT_SHAREMODE_EXCLUSIVE; - } - - // explicit thread priority level - if (inputStreamInfo->flags & paWinWasapiThreadPriority) - { - if ((inputStreamInfo->threadPriority > eThreadPriorityNone) && - (inputStreamInfo->threadPriority <= eThreadPriorityWindowManager)) - stream->nThreadPriority = inputStreamInfo->threadPriority; - } - - // redirect processing to custom user callback, ignore PA buffer processor - useInputBufferProcessor = !(inputStreamInfo->flags & paWinWasapiRedirectHostProcessor); - } - - // Choose processing mode - stream->in.streamFlags = (stream->in.shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE ? 
AUDCLNT_STREAMFLAGS_EVENTCALLBACK : 0); - if (paWasapi->useWOW64Workaround) - stream->in.streamFlags = 0; // polling interface - else - if (streamCallback == NULL) - stream->in.streamFlags = 0; // polling interface - else - if ((inputStreamInfo != NULL) && (inputStreamInfo->flags & paWinWasapiPolling)) - stream->in.streamFlags = 0; // polling interface - else - if (fullDuplex) - stream->in.streamFlags = 0; // polling interface is implemented for full-duplex mode also - - // Use built-in PCM converter (channel count and sample rate) if requested - if ((GetWindowsVersion() >= WINDOWS_7_SERVER2008R2) && - (stream->in.shareMode == AUDCLNT_SHAREMODE_SHARED) && - ((inputStreamInfo != NULL) && (inputStreamInfo->flags & paWinWasapiAutoConvert))) - stream->in.streamFlags |= (AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM | AUDCLNT_STREAMFLAGS_SRC_DEFAULT_QUALITY); - - // Fill parameters for Audio Client creation - stream->in.params.device_info = info; - stream->in.params.stream_params = (*inputParameters); - stream->in.params.frames_per_buffer = framesPerBuffer; - stream->in.params.sample_rate = sampleRate; - stream->in.params.blocking = (streamCallback == NULL); - stream->in.params.full_duplex = fullDuplex; - stream->in.params.wow64_workaround = paWasapi->useWOW64Workaround; - - // Create and activate audio client - if ((result = ActivateAudioClientInput(stream)) != paNoError) - { - LogPaError(result); - goto error; - } - - // Get closest format - hostInputSampleFormat = PaUtil_SelectClosestAvailableFormat(WaveToPaFormat(&stream->in.wavex), inputSampleFormat); - - // Set user-side custom host processor - if ((inputStreamInfo != NULL) && - (inputStreamInfo->flags & paWinWasapiRedirectHostProcessor)) - { - stream->hostProcessOverrideInput.processor = inputStreamInfo->hostProcessorInput; - stream->hostProcessOverrideInput.userData = userData; - } - - // Only get IAudioCaptureClient input once here instead of getting it at multiple places based on the use - if (FAILED(hr = IAudioClient_GetService(stream->in.clientParent, &pa_IID_IAudioCaptureClient, (void **)&stream->captureClientParent))) - { - LogHostError(hr); - LogPaError(result = paUnanticipatedHostError); - goto error; - } - - // Create ring buffer for blocking mode (It is needed because we fetch Input packets, not frames, - // and thus we have to save partial packet if such remains unread) - if (stream->in.params.blocking == TRUE) - { - UINT32 bufferFrames = ALIGN_NEXT_POW2((stream->in.framesPerHostCallback / WASAPI_PACKETS_PER_INPUT_BUFFER) * 2); - UINT32 frameSize = stream->in.wavex.Format.nBlockAlign; - - // buffer - if ((stream->in.tailBuffer = PaUtil_AllocateMemory(sizeof(PaUtilRingBuffer))) == NULL) - { - LogPaError(result = paInsufficientMemory); - goto error; - } - memset(stream->in.tailBuffer, 0, sizeof(PaUtilRingBuffer)); - - // buffer memory region - stream->in.tailBufferMemory = PaUtil_AllocateMemory(frameSize * bufferFrames); - if (stream->in.tailBufferMemory == NULL) - { - LogPaError(result = paInsufficientMemory); - goto error; - } - - // initialize - if (PaUtil_InitializeRingBuffer(stream->in.tailBuffer, frameSize, bufferFrames, stream->in.tailBufferMemory) != 0) - { - LogPaError(result = paInternalError); - goto error; - } - } - } - else - { - inputChannelCount = 0; - inputSampleFormat = hostInputSampleFormat = paInt16; /* Suppress 'uninitialised var' warnings. 
*/ - } - - // Try create device: Output - if (outputParameters != NULL) - { - outputChannelCount = outputParameters->channelCount; - outputSampleFormat = GetSampleFormatForIO(outputParameters->sampleFormat); - info = &paWasapi->devInfo[outputParameters->device]; - - // default Shared Mode - stream->out.shareMode = AUDCLNT_SHAREMODE_SHARED; - - // set PaWasapiStreamInfo - if (outputParameters->hostApiSpecificStreamInfo != NULL) - { - memcpy(&stream->out.params.wasapi_params, outputParameters->hostApiSpecificStreamInfo, min(sizeof(stream->out.params.wasapi_params), ((PaWasapiStreamInfo *)outputParameters->hostApiSpecificStreamInfo)->size)); - stream->out.params.wasapi_params.size = sizeof(stream->out.params.wasapi_params); - - stream->out.params.stream_params.hostApiSpecificStreamInfo = &stream->out.params.wasapi_params; - outputStreamInfo = &stream->out.params.wasapi_params; - - stream->out.flags = outputStreamInfo->flags; - - // Exclusive Mode - if (outputStreamInfo->flags & paWinWasapiExclusive) - { - // Boost thread priority - stream->nThreadPriority = eThreadPriorityProAudio; - // Make Exclusive - stream->out.shareMode = AUDCLNT_SHAREMODE_EXCLUSIVE; - } - - // explicit thread priority level - if (outputStreamInfo->flags & paWinWasapiThreadPriority) - { - if ((outputStreamInfo->threadPriority > eThreadPriorityNone) && - (outputStreamInfo->threadPriority <= eThreadPriorityWindowManager)) - stream->nThreadPriority = outputStreamInfo->threadPriority; - } - - // redirect processing to custom user callback, ignore PA buffer processor - useOutputBufferProcessor = !(outputStreamInfo->flags & paWinWasapiRedirectHostProcessor); - } - - // Choose processing mode - stream->out.streamFlags = (stream->out.shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE ? AUDCLNT_STREAMFLAGS_EVENTCALLBACK : 0); - if (paWasapi->useWOW64Workaround) - stream->out.streamFlags = 0; // polling interface - else - if (streamCallback == NULL) - stream->out.streamFlags = 0; // polling interface - else - if ((outputStreamInfo != NULL) && (outputStreamInfo->flags & paWinWasapiPolling)) - stream->out.streamFlags = 0; // polling interface - else - if (fullDuplex) - stream->out.streamFlags = 0; // polling interface is implemented for full-duplex mode also - - // Use built-in PCM converter (channel count and sample rate) if requested - if ((GetWindowsVersion() >= WINDOWS_7_SERVER2008R2) && - (stream->out.shareMode == AUDCLNT_SHAREMODE_SHARED) && - ((outputStreamInfo != NULL) && (outputStreamInfo->flags & paWinWasapiAutoConvert))) - stream->out.streamFlags |= (AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM | AUDCLNT_STREAMFLAGS_SRC_DEFAULT_QUALITY); - - // Fill parameters for Audio Client creation - stream->out.params.device_info = info; - stream->out.params.stream_params = (*outputParameters); - stream->out.params.frames_per_buffer = framesPerBuffer; - stream->out.params.sample_rate = sampleRate; - stream->out.params.blocking = (streamCallback == NULL); - stream->out.params.full_duplex = fullDuplex; - stream->out.params.wow64_workaround = paWasapi->useWOW64Workaround; - - // Create and activate audio client - if ((result = ActivateAudioClientOutput(stream)) != paNoError) - { - LogPaError(result); - goto error; - } - - // Get closest format - hostOutputSampleFormat = PaUtil_SelectClosestAvailableFormat(WaveToPaFormat(&stream->out.wavex), outputSampleFormat); - - // Set user-side custom host processor - if ((outputStreamInfo != NULL) && - (outputStreamInfo->flags & paWinWasapiRedirectHostProcessor)) - { - stream->hostProcessOverrideOutput.processor = 
outputStreamInfo->hostProcessorOutput; - stream->hostProcessOverrideOutput.userData = userData; - } - - // Only get IAudioCaptureClient output once here instead of getting it at multiple places based on the use - if (FAILED(hr = IAudioClient_GetService(stream->out.clientParent, &pa_IID_IAudioRenderClient, (void **)&stream->renderClientParent))) - { - LogHostError(hr); - LogPaError(result = paUnanticipatedHostError); - goto error; - } - } - else - { - outputChannelCount = 0; - outputSampleFormat = hostOutputSampleFormat = paInt16; /* Suppress 'uninitialized var' warnings. */ - } - - // log full-duplex - if (fullDuplex) - PRINT(("WASAPI::OpenStream: full-duplex mode\n")); - - // paWinWasapiPolling must be on/or not on both streams - if ((inputParameters != NULL) && (outputParameters != NULL)) - { - if ((inputStreamInfo != NULL) && (outputStreamInfo != NULL)) - { - if (((inputStreamInfo->flags & paWinWasapiPolling) && - !(outputStreamInfo->flags & paWinWasapiPolling)) - || - (!(inputStreamInfo->flags & paWinWasapiPolling) && - (outputStreamInfo->flags & paWinWasapiPolling))) - { - LogPaError(result = paInvalidFlag); - goto error; - } - } - } - - // Initialize stream representation - if (streamCallback) - { - stream->bBlocking = FALSE; - PaUtil_InitializeStreamRepresentation(&stream->streamRepresentation, - &paWasapi->callbackStreamInterface, - streamCallback, userData); - } - else - { - stream->bBlocking = TRUE; - PaUtil_InitializeStreamRepresentation(&stream->streamRepresentation, - &paWasapi->blockingStreamInterface, - streamCallback, userData); - } - - // Initialize CPU measurer - PaUtil_InitializeCpuLoadMeasurer(&stream->cpuLoadMeasurer, sampleRate); - - if (outputParameters && inputParameters) - { - // serious problem #1 - No, Not a problem, especially concerning Exclusive mode. - // Input device in exclusive mode somehow is getting large buffer always, thus we - // adjust Output latency to reflect it, thus period will differ but playback will be - // normal. - /*if (stream->in.period != stream->out.period) - { - PRINT(("WASAPI: OpenStream: period discrepancy\n")); - LogPaError(result = paBadIODeviceCombination); - goto error; - }*/ - - // serious problem #2 - No, Not a problem, as framesPerHostCallback take into account - // sample size while it is not a problem for PA full-duplex, we must care of - // period only! - /*if (stream->out.framesPerHostCallback != stream->in.framesPerHostCallback) - { - PRINT(("WASAPI: OpenStream: framesPerHostCallback discrepancy\n")); - goto error; - }*/ - } - - // Calculate frames per host for processor - framesPerHostCallback = (outputParameters ? stream->out.framesPerBuffer : stream->in.framesPerBuffer); - - // Choose correct mode of buffer processing: - // Exclusive/Shared non paWinWasapiPolling mode: paUtilFixedHostBufferSize - always fixed - // Exclusive/Shared paWinWasapiPolling mode: paUtilBoundedHostBufferSize - may vary for Exclusive or Full-duplex - bufferMode = paUtilFixedHostBufferSize; - if (inputParameters) // !!! 
WASAPI IAudioCaptureClient::GetBuffer extracts not number of frames but 1 packet, thus we always must adapt - bufferMode = paUtilBoundedHostBufferSize; - else - if (outputParameters) - { - if ((stream->out.buffers == 1) && - (!stream->out.streamFlags || ((stream->out.streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK) == 0))) - bufferMode = paUtilBoundedHostBufferSize; - } - stream->bufferMode = bufferMode; - - // Initialize buffer processor - if (useInputBufferProcessor || useOutputBufferProcessor) - { - result = PaUtil_InitializeBufferProcessor( - &stream->bufferProcessor, - inputChannelCount, - inputSampleFormat, - hostInputSampleFormat, - outputChannelCount, - outputSampleFormat, - hostOutputSampleFormat, - sampleRate, - streamFlags, - framesPerBuffer, - framesPerHostCallback, - bufferMode, - streamCallback, - userData); - if (result != paNoError) - { - LogPaError(result); - goto error; - } - } - - // Set Input latency - stream->streamRepresentation.streamInfo.inputLatency = - (useInputBufferProcessor ? PaUtil_GetBufferProcessorInputLatencyFrames(&stream->bufferProcessor) / sampleRate : 0) - + (inputParameters != NULL ? stream->in.latencySeconds : 0); - - // Set Output latency - stream->streamRepresentation.streamInfo.outputLatency = - (useOutputBufferProcessor ? PaUtil_GetBufferProcessorOutputLatencyFrames(&stream->bufferProcessor) / sampleRate : 0) - + (outputParameters != NULL ? stream->out.latencySeconds : 0); - - // Set SR - stream->streamRepresentation.streamInfo.sampleRate = sampleRate; - - (*s) = (PaStream *)stream; - return result; - -error: - - if (stream != NULL) - CloseStream(stream); - - return result; -} - -// ------------------------------------------------------------------------------------------ -static PaError CloseStream( PaStream* s ) -{ - PaError result = paNoError; - PaWasapiStream *stream = (PaWasapiStream*)s; - - // abort active stream - if (IsStreamActive(s)) - { - result = AbortStream(s); - } - - SAFE_RELEASE(stream->captureClientParent); - SAFE_RELEASE(stream->renderClientParent); - SAFE_RELEASE(stream->out.clientParent); - SAFE_RELEASE(stream->in.clientParent); - SAFE_RELEASE(stream->inVol); - SAFE_RELEASE(stream->outVol); - - CloseHandle(stream->event[S_INPUT]); - CloseHandle(stream->event[S_OUTPUT]); - - _StreamCleanup(stream); - - PaWasapi_FreeMemory(stream->in.monoBuffer); - PaWasapi_FreeMemory(stream->out.monoBuffer); - - PaUtil_FreeMemory(stream->in.tailBuffer); - PaUtil_FreeMemory(stream->in.tailBufferMemory); - - PaUtil_FreeMemory(stream->out.tailBuffer); - PaUtil_FreeMemory(stream->out.tailBufferMemory); - - PaUtil_TerminateBufferProcessor(&stream->bufferProcessor); - PaUtil_TerminateStreamRepresentation(&stream->streamRepresentation); - PaUtil_FreeMemory(stream); - - return result; -} - -// ------------------------------------------------------------------------------------------ -HRESULT UnmarshalSubStreamComPointers(PaWasapiSubStream *substream) -{ -#ifndef PA_WINRT - HRESULT hResult = S_OK; - HRESULT hFirstBadResult = S_OK; - substream->clientProc = NULL; - - // IAudioClient - hResult = CoGetInterfaceAndReleaseStream(substream->clientStream, GetAudioClientIID(), (LPVOID*)&substream->clientProc); - substream->clientStream = NULL; - if (hResult != S_OK) - { - hFirstBadResult = (hFirstBadResult == S_OK) ? 
hResult : hFirstBadResult; - } - - return hFirstBadResult; - -#else - (void)substream; - return S_OK; -#endif -} - -// ------------------------------------------------------------------------------------------ -HRESULT UnmarshalStreamComPointers(PaWasapiStream *stream) -{ -#ifndef PA_WINRT - HRESULT hResult = S_OK; - HRESULT hFirstBadResult = S_OK; - stream->captureClient = NULL; - stream->renderClient = NULL; - stream->in.clientProc = NULL; - stream->out.clientProc = NULL; - - if (NULL != stream->in.clientParent) - { - // SubStream pointers - hResult = UnmarshalSubStreamComPointers(&stream->in); - if (hResult != S_OK) - { - hFirstBadResult = (hFirstBadResult == S_OK) ? hResult : hFirstBadResult; - } - - // IAudioCaptureClient - hResult = CoGetInterfaceAndReleaseStream(stream->captureClientStream, &pa_IID_IAudioCaptureClient, (LPVOID*)&stream->captureClient); - stream->captureClientStream = NULL; - if (hResult != S_OK) - { - hFirstBadResult = (hFirstBadResult == S_OK) ? hResult : hFirstBadResult; - } - } - - if (NULL != stream->out.clientParent) - { - // SubStream pointers - hResult = UnmarshalSubStreamComPointers(&stream->out); - if (hResult != S_OK) - { - hFirstBadResult = (hFirstBadResult == S_OK) ? hResult : hFirstBadResult; - } - - // IAudioRenderClient - hResult = CoGetInterfaceAndReleaseStream(stream->renderClientStream, &pa_IID_IAudioRenderClient, (LPVOID*)&stream->renderClient); - stream->renderClientStream = NULL; - if (hResult != S_OK) - { - hFirstBadResult = (hFirstBadResult == S_OK) ? hResult : hFirstBadResult; - } - } - - return hFirstBadResult; -#else - if (stream->in.clientParent != NULL) - { - stream->in.clientProc = stream->in.clientParent; - IAudioClient_AddRef(stream->in.clientParent); - } - - if (stream->out.clientParent != NULL) - { - stream->out.clientProc = stream->out.clientParent; - IAudioClient_AddRef(stream->out.clientParent); - } - - if (stream->renderClientParent != NULL) - { - stream->renderClient = stream->renderClientParent; - IAudioRenderClient_AddRef(stream->renderClientParent); - } - - if (stream->captureClientParent != NULL) - { - stream->captureClient = stream->captureClientParent; - IAudioCaptureClient_AddRef(stream->captureClientParent); - } - - return S_OK; -#endif -} - -// ----------------------------------------------------------------------------------------- -void ReleaseUnmarshaledSubComPointers(PaWasapiSubStream *substream) -{ - SAFE_RELEASE(substream->clientProc); -} - -// ----------------------------------------------------------------------------------------- -void ReleaseUnmarshaledComPointers(PaWasapiStream *stream) -{ - // Release AudioClient services first - SAFE_RELEASE(stream->captureClient); - SAFE_RELEASE(stream->renderClient); - - // Release AudioClients - ReleaseUnmarshaledSubComPointers(&stream->in); - ReleaseUnmarshaledSubComPointers(&stream->out); -} - -// ------------------------------------------------------------------------------------------ -HRESULT MarshalSubStreamComPointers(PaWasapiSubStream *substream) -{ -#ifndef PA_WINRT - HRESULT hResult; - substream->clientStream = NULL; - - // IAudioClient - hResult = CoMarshalInterThreadInterfaceInStream(GetAudioClientIID(), (LPUNKNOWN)substream->clientParent, &substream->clientStream); - if (hResult != S_OK) - goto marshal_sub_error; - - return hResult; - - // If marshaling error occurred, make sure to release everything. 
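- // (Cleanup order matters: UnmarshalSubStreamComPointers() consumes and releases the marshaling stream,
- // resetting clientStream, and ReleaseUnmarshaledSubComPointers() then drops whatever proc-side pointer
- // may already have been produced.)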
-marshal_sub_error: - - UnmarshalSubStreamComPointers(substream); - ReleaseUnmarshaledSubComPointers(substream); - return hResult; -#else - (void)substream; - return S_OK; -#endif -} - -// ------------------------------------------------------------------------------------------ -HRESULT MarshalStreamComPointers(PaWasapiStream *stream) -{ -#ifndef PA_WINRT - HRESULT hResult = S_OK; - stream->captureClientStream = NULL; - stream->in.clientStream = NULL; - stream->renderClientStream = NULL; - stream->out.clientStream = NULL; - - if (NULL != stream->in.clientParent) - { - // SubStream pointers - hResult = MarshalSubStreamComPointers(&stream->in); - if (hResult != S_OK) - goto marshal_error; - - // IAudioCaptureClient - hResult = CoMarshalInterThreadInterfaceInStream(&pa_IID_IAudioCaptureClient, (LPUNKNOWN)stream->captureClientParent, &stream->captureClientStream); - if (hResult != S_OK) - goto marshal_error; - } - - if (NULL != stream->out.clientParent) - { - // SubStream pointers - hResult = MarshalSubStreamComPointers(&stream->out); - if (hResult != S_OK) - goto marshal_error; - - // IAudioRenderClient - hResult = CoMarshalInterThreadInterfaceInStream(&pa_IID_IAudioRenderClient, (LPUNKNOWN)stream->renderClientParent, &stream->renderClientStream); - if (hResult != S_OK) - goto marshal_error; - } - - return hResult; - - // If marshaling error occurred, make sure to release everything. -marshal_error: - - UnmarshalStreamComPointers(stream); - ReleaseUnmarshaledComPointers(stream); - return hResult; -#else - (void)stream; - return S_OK; -#endif -} - -// ------------------------------------------------------------------------------------------ -static PaError StartStream( PaStream *s ) -{ - HRESULT hr; - PaWasapiStream *stream = (PaWasapiStream*)s; - PaError result = paNoError; - - // check if stream is active already - if (IsStreamActive(s)) - return paStreamIsNotStopped; - - PaUtil_ResetBufferProcessor(&stream->bufferProcessor); - - // Cleanup handles (may be necessary if stream was stopped by itself due to error) - _StreamCleanup(stream); - - // Create close event - if ((stream->hCloseRequest = CreateEvent(NULL, TRUE, FALSE, NULL)) == NULL) - { - result = paInsufficientMemory; - goto start_error; - } - - // Create thread - if (!stream->bBlocking) - { - // Create thread events - stream->hThreadStart = CreateEvent(NULL, TRUE, FALSE, NULL); - stream->hThreadExit = CreateEvent(NULL, TRUE, FALSE, NULL); - if ((stream->hThreadStart == NULL) || (stream->hThreadExit == NULL)) - { - result = paInsufficientMemory; - goto start_error; - } - - // Marshal WASAPI interface pointers for safe use in thread created below. 
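- // (Interface pointers are not handed to the processing thread directly: MarshalStreamComPointers()
- // serializes them with CoMarshalInterThreadInterfaceInStream, and the thread is expected to rebuild
- // its own proxies via UnmarshalStreamComPointers() before touching the audio clients.)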
- if ((hr = MarshalStreamComPointers(stream)) != S_OK) - { - PRINT(("Failed marshaling stream COM pointers.")); - result = paUnanticipatedHostError; - goto nonblocking_start_error; - } - - if ((stream->in.clientParent && (stream->in.streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK)) || - (stream->out.clientParent && (stream->out.streamFlags & AUDCLNT_STREAMFLAGS_EVENTCALLBACK))) - { - if ((stream->hThread = CREATE_THREAD(ProcThreadEvent)) == NULL) - { - PRINT(("Failed creating thread: ProcThreadEvent.")); - result = paUnanticipatedHostError; - goto nonblocking_start_error; - } - } - else - { - if ((stream->hThread = CREATE_THREAD(ProcThreadPoll)) == NULL) - { - PRINT(("Failed creating thread: ProcThreadPoll.")); - result = paUnanticipatedHostError; - goto nonblocking_start_error; - } - } - - // Wait for thread to start - if (WaitForSingleObject(stream->hThreadStart, 60*1000) == WAIT_TIMEOUT) - { - PRINT(("Failed starting thread: timeout.")); - result = paUnanticipatedHostError; - goto nonblocking_start_error; - } - } - else - { - // Create blocking operation events (non-signaled event means - blocking operation is pending) - if (stream->out.clientParent != NULL) - { - if ((stream->hBlockingOpStreamWR = CreateEvent(NULL, TRUE, TRUE, NULL)) == NULL) - { - result = paInsufficientMemory; - goto start_error; - } - } - if (stream->in.clientParent != NULL) - { - if ((stream->hBlockingOpStreamRD = CreateEvent(NULL, TRUE, TRUE, NULL)) == NULL) - { - result = paInsufficientMemory; - goto start_error; - } - } - - // Initialize event & start INPUT stream - if (stream->in.clientParent != NULL) - { - if ((hr = IAudioClient_Start(stream->in.clientParent)) != S_OK) - { - LogHostError(hr); - result = paUnanticipatedHostError; - goto start_error; - } - } - - // Initialize event & start OUTPUT stream - if (stream->out.clientParent != NULL) - { - // Start - if ((hr = IAudioClient_Start(stream->out.clientParent)) != S_OK) - { - LogHostError(hr); - result = paUnanticipatedHostError; - goto start_error; - } - } - - // Set parent to working pointers to use shared functions. - stream->captureClient = stream->captureClientParent; - stream->renderClient = stream->renderClientParent; - stream->in.clientProc = stream->in.clientParent; - stream->out.clientProc = stream->out.clientParent; - - // Signal: stream running. 
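- // (Blocking mode owns no processing thread, as _StreamFinish() also assumes; once the parent pointers
- // have been copied to the working pointers above, the stream can be marked running immediately and the
- // blocking read/write paths drive the device from the caller's thread.)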
- stream->running = TRUE; - } - - return result; - -nonblocking_start_error: - - // Set hThreadExit event to prevent blocking during cleanup - SetEvent(stream->hThreadExit); - UnmarshalStreamComPointers(stream); - ReleaseUnmarshaledComPointers(stream); - -start_error: - - StopStream(s); - return result; -} - -// ------------------------------------------------------------------------------------------ -void _StreamFinish(PaWasapiStream *stream) -{ - // Issue command to thread to stop processing and wait for thread exit - if (!stream->bBlocking) - { - SignalObjectAndWait(stream->hCloseRequest, stream->hThreadExit, INFINITE, FALSE); - } - else - // Blocking mode does not own thread - { - // Signal close event and wait for each of 2 blocking operations to complete - if (stream->out.clientParent) - SignalObjectAndWait(stream->hCloseRequest, stream->hBlockingOpStreamWR, INFINITE, TRUE); - if (stream->out.clientParent) - SignalObjectAndWait(stream->hCloseRequest, stream->hBlockingOpStreamRD, INFINITE, TRUE); - - // Process stop - _StreamOnStop(stream); - } - - // Cleanup handles - _StreamCleanup(stream); - - stream->running = FALSE; -} - -// ------------------------------------------------------------------------------------------ -void _StreamCleanup(PaWasapiStream *stream) -{ - // Close thread handles to allow restart - SAFE_CLOSE(stream->hThread); - SAFE_CLOSE(stream->hThreadStart); - SAFE_CLOSE(stream->hThreadExit); - SAFE_CLOSE(stream->hCloseRequest); - SAFE_CLOSE(stream->hBlockingOpStreamRD); - SAFE_CLOSE(stream->hBlockingOpStreamWR); -} - -// ------------------------------------------------------------------------------------------ -static PaError StopStream( PaStream *s ) -{ - // Finish stream - _StreamFinish((PaWasapiStream *)s); - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -static PaError AbortStream( PaStream *s ) -{ - // Finish stream - _StreamFinish((PaWasapiStream *)s); - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -static PaError IsStreamStopped( PaStream *s ) -{ - return !((PaWasapiStream *)s)->running; -} - -// ------------------------------------------------------------------------------------------ -static PaError IsStreamActive( PaStream *s ) -{ - return ((PaWasapiStream *)s)->running; -} - -// ------------------------------------------------------------------------------------------ -static PaTime GetStreamTime( PaStream *s ) -{ - PaWasapiStream *stream = (PaWasapiStream*)s; - - /* suppress unused variable warnings */ - (void) stream; - - return PaUtil_GetTime(); -} - -// ------------------------------------------------------------------------------------------ -static double GetStreamCpuLoad( PaStream* s ) -{ - return PaUtil_GetCpuLoad(&((PaWasapiStream *)s)->cpuLoadMeasurer); -} - -// ------------------------------------------------------------------------------------------ -static PaError ReadStream( PaStream* s, void *_buffer, unsigned long frames ) -{ - PaWasapiStream *stream = (PaWasapiStream*)s; - - HRESULT hr = S_OK; - BYTE *user_buffer = (BYTE *)_buffer; - BYTE *wasapi_buffer = NULL; - DWORD flags = 0; - UINT32 i, available, sleep = 0; - unsigned long processed; - ThreadIdleScheduler sched; - - // validate - if (!stream->running) - return paStreamIsStopped; - if (stream->captureClient == NULL) - return paBadStreamPtr; - - // Notify blocking op has begun - ResetEvent(stream->hBlockingOpStreamRD); - - // Use thread 
scheduling for 500 microseconds (emulated) when wait time for frames is less than - // 1 milliseconds, emulation helps to normalize CPU consumption and avoids too busy waiting - ThreadIdleScheduler_Setup(&sched, 1, 250/* microseconds */); - - // Make a local copy of the user buffer pointer(s), this is necessary - // because PaUtil_CopyOutput() advances these pointers every time it is called - if (!stream->bufferProcessor.userInputIsInterleaved) - { - user_buffer = (BYTE *)alloca(sizeof(BYTE *) * stream->bufferProcessor.inputChannelCount); - if (user_buffer == NULL) - return paInsufficientMemory; - - for (i = 0; i < stream->bufferProcessor.inputChannelCount; ++i) - ((BYTE **)user_buffer)[i] = ((BYTE **)_buffer)[i]; - } - - // Find out if there are tail frames, flush them all before reading hardware - if ((available = PaUtil_GetRingBufferReadAvailable(stream->in.tailBuffer)) != 0) - { - ring_buffer_size_t buf1_size = 0, buf2_size = 0, read, desired; - void *buf1 = NULL, *buf2 = NULL; - - // Limit desired to amount of requested frames - desired = available; - if ((UINT32)desired > frames) - desired = frames; - - // Get pointers to read regions - read = PaUtil_GetRingBufferReadRegions(stream->in.tailBuffer, desired, &buf1, &buf1_size, &buf2, &buf2_size); - - if (buf1 != NULL) - { - // Register available frames to processor - PaUtil_SetInputFrameCount(&stream->bufferProcessor, buf1_size); - - // Register host buffer pointer to processor - PaUtil_SetInterleavedInputChannels(&stream->bufferProcessor, 0, buf1, stream->bufferProcessor.inputChannelCount); - - // Copy user data to host buffer (with conversion if applicable) - processed = PaUtil_CopyInput(&stream->bufferProcessor, (void **)&user_buffer, buf1_size); - frames -= processed; - } - - if (buf2 != NULL) - { - // Register available frames to processor - PaUtil_SetInputFrameCount(&stream->bufferProcessor, buf2_size); - - // Register host buffer pointer to processor - PaUtil_SetInterleavedInputChannels(&stream->bufferProcessor, 0, buf2, stream->bufferProcessor.inputChannelCount); - - // Copy user data to host buffer (with conversion if applicable) - processed = PaUtil_CopyInput(&stream->bufferProcessor, (void **)&user_buffer, buf2_size); - frames -= processed; - } - - // Advance - PaUtil_AdvanceRingBufferReadIndex(stream->in.tailBuffer, read); - } - - // Read hardware - while (frames != 0) - { - // Check if blocking call must be interrupted - if (WaitForSingleObject(stream->hCloseRequest, sleep) != WAIT_TIMEOUT) - break; - - // Get available frames (must be finding out available frames before call to IAudioCaptureClient_GetBuffer - // othervise audio glitches will occur inExclusive mode as it seems that WASAPI has some scheduling/ - // processing problems when such busy polling with IAudioCaptureClient_GetBuffer occurs) - if ((hr = _PollGetInputFramesAvailable(stream, &available)) != S_OK) - { - LogHostError(hr); - return paUnanticipatedHostError; - } - - // Wait for more frames to become available - if (available == 0) - { - // Exclusive mode may require latency of 1 millisecond, thus we shall sleep - // around 500 microseconds (emulated) to collect packets in time - if (stream->in.shareMode != AUDCLNT_SHAREMODE_EXCLUSIVE) - { - UINT32 sleep_frames = (frames < stream->in.framesPerHostCallback ? 
frames : stream->in.framesPerHostCallback); - - sleep = GetFramesSleepTime(sleep_frames, stream->in.wavex.Format.nSamplesPerSec); - sleep /= 4; // wait only for 1/4 of the buffer - - // WASAPI input provides packets, thus expiring packet will result in bad audio - // limit waiting time to 2 seconds (will always work for smallest buffer in Shared) - if (sleep > 2) - sleep = 2; - - // Avoid busy waiting, schedule next 1 millesecond wait - if (sleep == 0) - sleep = ThreadIdleScheduler_NextSleep(&sched); - } - else - { - if ((sleep = ThreadIdleScheduler_NextSleep(&sched)) != 0) - { - Sleep(sleep); - sleep = 0; - } - } - - continue; - } - - // Get the available data in the shared buffer. - if ((hr = IAudioCaptureClient_GetBuffer(stream->captureClient, &wasapi_buffer, &available, &flags, NULL, NULL)) != S_OK) - { - // Buffer size is too small, waiting - if (hr != AUDCLNT_S_BUFFER_EMPTY) - { - LogHostError(hr); - goto end; - } - - continue; - } - - // Register available frames to processor - PaUtil_SetInputFrameCount(&stream->bufferProcessor, available); - - // Register host buffer pointer to processor - PaUtil_SetInterleavedInputChannels(&stream->bufferProcessor, 0, wasapi_buffer, stream->bufferProcessor.inputChannelCount); - - // Copy user data to host buffer (with conversion if applicable) - processed = PaUtil_CopyInput(&stream->bufferProcessor, (void **)&user_buffer, frames); - frames -= processed; - - // Save tail into buffer - if ((frames == 0) && (available > processed)) - { - UINT32 bytes_processed = processed * stream->in.wavex.Format.nBlockAlign; - UINT32 frames_to_save = available - processed; - - PaUtil_WriteRingBuffer(stream->in.tailBuffer, wasapi_buffer + bytes_processed, frames_to_save); - } - - // Release host buffer - if ((hr = IAudioCaptureClient_ReleaseBuffer(stream->captureClient, available)) != S_OK) - { - LogHostError(hr); - goto end; - } - } - -end: - - // Notify blocking op has ended - SetEvent(stream->hBlockingOpStreamRD); - - return (hr != S_OK ? 
paUnanticipatedHostError : paNoError); -} - -// ------------------------------------------------------------------------------------------ -static PaError WriteStream( PaStream* s, const void *_buffer, unsigned long frames ) -{ - PaWasapiStream *stream = (PaWasapiStream*)s; - - //UINT32 frames; - const BYTE *user_buffer = (const BYTE *)_buffer; - BYTE *wasapi_buffer; - HRESULT hr = S_OK; - UINT32 i, available, sleep = 0; - unsigned long processed; - ThreadIdleScheduler sched; - - // validate - if (!stream->running) - return paStreamIsStopped; - if (stream->renderClient == NULL) - return paBadStreamPtr; - - // Notify blocking op has begun - ResetEvent(stream->hBlockingOpStreamWR); - - // Use thread scheduling for 500 microseconds (emulated) when wait time for frames is less than - // 1 milliseconds, emulation helps to normalize CPU consumption and avoids too busy waiting - ThreadIdleScheduler_Setup(&sched, 1, 500/* microseconds */); - - // Make a local copy of the user buffer pointer(s), this is necessary - // because PaUtil_CopyOutput() advances these pointers every time it is called - if (!stream->bufferProcessor.userOutputIsInterleaved) - { - user_buffer = (const BYTE *)alloca(sizeof(const BYTE *) * stream->bufferProcessor.outputChannelCount); - if (user_buffer == NULL) - return paInsufficientMemory; - - for (i = 0; i < stream->bufferProcessor.outputChannelCount; ++i) - ((const BYTE **)user_buffer)[i] = ((const BYTE **)_buffer)[i]; - } - - // Blocking (potentially, until 'frames' are consumed) loop - while (frames != 0) - { - // Check if blocking call must be interrupted - if (WaitForSingleObject(stream->hCloseRequest, sleep) != WAIT_TIMEOUT) - break; - - // Get frames available - if ((hr = _PollGetOutputFramesAvailable(stream, &available)) != S_OK) - { - LogHostError(hr); - goto end; - } - - // Wait for more frames to become available - if (available == 0) - { - UINT32 sleep_frames = (frames < stream->out.framesPerHostCallback ? 
frames : stream->out.framesPerHostCallback); - - sleep = GetFramesSleepTime(sleep_frames, stream->out.wavex.Format.nSamplesPerSec); - sleep /= 2; // wait only for half of the buffer - - // Avoid busy waiting, schedule next 1 millesecond wait - if (sleep == 0) - sleep = ThreadIdleScheduler_NextSleep(&sched); - - continue; - } - - // Keep in 'frames' range - if (available > frames) - available = frames; - - // Get pointer to host buffer - if ((hr = IAudioRenderClient_GetBuffer(stream->renderClient, available, &wasapi_buffer)) != S_OK) - { - // Buffer size is too big, waiting - if (hr == AUDCLNT_E_BUFFER_TOO_LARGE) - continue; - - LogHostError(hr); - goto end; - } - - // Keep waiting again (on Vista it was noticed that WASAPI could SOMETIMES return NULL pointer - // to buffer without returning AUDCLNT_E_BUFFER_TOO_LARGE instead) - if (wasapi_buffer == NULL) - continue; - - // Register available frames to processor - PaUtil_SetOutputFrameCount(&stream->bufferProcessor, available); - - // Register host buffer pointer to processor - PaUtil_SetInterleavedOutputChannels(&stream->bufferProcessor, 0, wasapi_buffer, stream->bufferProcessor.outputChannelCount); - - // Copy user data to host buffer (with conversion if applicable), this call will advance - // pointer 'user_buffer' to consumed portion of data - processed = PaUtil_CopyOutput(&stream->bufferProcessor, (const void **)&user_buffer, frames); - frames -= processed; - - // Release host buffer - if ((hr = IAudioRenderClient_ReleaseBuffer(stream->renderClient, available, 0)) != S_OK) - { - LogHostError(hr); - goto end; - } - } - -end: - - // Notify blocking op has ended - SetEvent(stream->hBlockingOpStreamWR); - - return (hr != S_OK ? paUnanticipatedHostError : paNoError); -} - -unsigned long PaUtil_GetOutputFrameCount( PaUtilBufferProcessor* bp ) -{ - return bp->hostOutputFrameCount[0]; -} - -// ------------------------------------------------------------------------------------------ -static signed long GetStreamReadAvailable( PaStream* s ) -{ - PaWasapiStream *stream = (PaWasapiStream*)s; - - HRESULT hr; - UINT32 available = 0; - - // validate - if (!stream->running) - return paStreamIsStopped; - if (stream->captureClient == NULL) - return paBadStreamPtr; - - // available in hardware buffer - if ((hr = _PollGetInputFramesAvailable(stream, &available)) != S_OK) - { - LogHostError(hr); - return paUnanticipatedHostError; - } - - // available in software tail buffer - available += PaUtil_GetRingBufferReadAvailable(stream->in.tailBuffer); - - return available; -} - -// ------------------------------------------------------------------------------------------ -static signed long GetStreamWriteAvailable( PaStream* s ) -{ - PaWasapiStream *stream = (PaWasapiStream*)s; - HRESULT hr; - UINT32 available = 0; - - // validate - if (!stream->running) - return paStreamIsStopped; - if (stream->renderClient == NULL) - return paBadStreamPtr; - - if ((hr = _PollGetOutputFramesAvailable(stream, &available)) != S_OK) - { - LogHostError(hr); - return paUnanticipatedHostError; - } - - return (signed long)available; -} - - -// ------------------------------------------------------------------------------------------ -static void WaspiHostProcessingLoop( void *inputBuffer, long inputFrames, - void *outputBuffer, long outputFrames, - void *userData ) -{ - PaWasapiStream *stream = (PaWasapiStream*)userData; - PaStreamCallbackTimeInfo timeInfo = {0,0,0}; - PaStreamCallbackFlags flags = 0; - int callbackResult; - unsigned long framesProcessed; - HRESULT hr; - UINT32 
pending; - - PaUtil_BeginCpuLoadMeasurement( &stream->cpuLoadMeasurer ); - - /* - Pa_GetStreamTime: - - generate timing information - - handle buffer slips - */ - timeInfo.currentTime = PaUtil_GetTime(); - // Query input latency - if (stream->in.clientProc != NULL) - { - PaTime pending_time; - if ((hr = IAudioClient_GetCurrentPadding(stream->in.clientProc, &pending)) == S_OK) - pending_time = (PaTime)pending / (PaTime)stream->in.wavex.Format.nSamplesPerSec; - else - pending_time = (PaTime)stream->in.latencySeconds; - - timeInfo.inputBufferAdcTime = timeInfo.currentTime + pending_time; - } - // Query output current latency - if (stream->out.clientProc != NULL) - { - PaTime pending_time; - if ((hr = IAudioClient_GetCurrentPadding(stream->out.clientProc, &pending)) == S_OK) - pending_time = (PaTime)pending / (PaTime)stream->out.wavex.Format.nSamplesPerSec; - else - pending_time = (PaTime)stream->out.latencySeconds; - - timeInfo.outputBufferDacTime = timeInfo.currentTime + pending_time; - } - - /* - If you need to byte swap or shift inputBuffer to convert it into a - portaudio format, do it here. - */ - - PaUtil_BeginBufferProcessing( &stream->bufferProcessor, &timeInfo, flags ); - - /* - depending on whether the host buffers are interleaved, non-interleaved - or a mixture, you will want to call PaUtil_SetInterleaved*Channels(), - PaUtil_SetNonInterleaved*Channel() or PaUtil_Set*Channel() here. - */ - - if (stream->bufferProcessor.inputChannelCount > 0) - { - PaUtil_SetInputFrameCount( &stream->bufferProcessor, inputFrames ); - PaUtil_SetInterleavedInputChannels( &stream->bufferProcessor, - 0, /* first channel of inputBuffer is channel 0 */ - inputBuffer, - 0 ); /* 0 - use inputChannelCount passed to init buffer processor */ - } - - if (stream->bufferProcessor.outputChannelCount > 0) - { - PaUtil_SetOutputFrameCount( &stream->bufferProcessor, outputFrames); - PaUtil_SetInterleavedOutputChannels( &stream->bufferProcessor, - 0, /* first channel of outputBuffer is channel 0 */ - outputBuffer, - 0 ); /* 0 - use outputChannelCount passed to init buffer processor */ - } - - /* you must pass a valid value of callback result to PaUtil_EndBufferProcessing() - in general you would pass paContinue for normal operation, and - paComplete to drain the buffer processor's internal output buffer. - You can check whether the buffer processor's output buffer is empty - using PaUtil_IsBufferProcessorOuputEmpty( bufferProcessor ) - */ - callbackResult = paContinue; - framesProcessed = PaUtil_EndBufferProcessing( &stream->bufferProcessor, &callbackResult ); - - /* - If you need to byte swap or shift outputBuffer to convert it to - host format, do it here. 
- */ - - PaUtil_EndCpuLoadMeasurement( &stream->cpuLoadMeasurer, framesProcessed ); - - if (callbackResult == paContinue) - { - /* nothing special to do */ - } - else - if (callbackResult == paAbort) - { - // stop stream - SetEvent(stream->hCloseRequest); - } - else - { - // stop stream - SetEvent(stream->hCloseRequest); - } -} - -// ------------------------------------------------------------------------------------------ -#ifndef PA_WINRT -static PaError MMCSS_activate(PaWasapiThreadPriority nPriorityClass, HANDLE *ret) -{ - static const char *mmcs_name[] = - { - NULL, - "Audio", - "Capture", - "Distribution", - "Games", - "Playback", - "Pro Audio", - "Window Manager" - }; - - DWORD task_idx = 0; - HANDLE hTask; - - if ((UINT32)nPriorityClass >= STATIC_ARRAY_SIZE(mmcs_name)) - return paUnanticipatedHostError; - - if ((hTask = pAvSetMmThreadCharacteristics(mmcs_name[nPriorityClass], &task_idx)) == NULL) - { - PRINT(("WASAPI: AvSetMmThreadCharacteristics failed: error[%d]\n", GetLastError())); - return paUnanticipatedHostError; - } - - /*BOOL priority_ok = pAvSetMmThreadPriority(hTask, AVRT_PRIORITY_NORMAL); - if (priority_ok == FALSE) - { - PRINT(("WASAPI: AvSetMmThreadPriority failed!\n")); - }*/ - - // debug - { - int cur_priority = GetThreadPriority(GetCurrentThread()); - DWORD cur_priority_class = GetPriorityClass(GetCurrentProcess()); - PRINT(("WASAPI: thread[ priority-0x%X class-0x%X ]\n", cur_priority, cur_priority_class)); - } - - (*ret) = hTask; - return paNoError; -} -#endif - -// ------------------------------------------------------------------------------------------ -#ifndef PA_WINRT -static void MMCSS_deactivate(HANDLE hTask) -{ - if (pAvRevertMmThreadCharacteristics(hTask) == FALSE) - { - PRINT(("WASAPI: AvRevertMmThreadCharacteristics failed!\n")); - } -} -#endif - -// ------------------------------------------------------------------------------------------ -PaError PaWasapi_ThreadPriorityBoost(void **pTask, PaWasapiThreadPriority priorityClass) -{ - HANDLE task; - PaError ret; - - if (pTask == NULL) - return paUnanticipatedHostError; - -#ifndef PA_WINRT - if ((ret = MMCSS_activate(priorityClass, &task)) != paNoError) - return ret; -#else - switch (priorityClass) - { - case eThreadPriorityAudio: - case eThreadPriorityProAudio: { - - // Save previous thread priority - intptr_t priority_prev = GetThreadPriority(GetCurrentThread()); - - // Try set new thread priority - if (SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST) == FALSE) - return paUnanticipatedHostError; - - // Memorize prev priority (pretend to be non NULL pointer by adding 0x80000000 mask) - task = (HANDLE)(priority_prev | 0x80000000); - - ret = paNoError; - - break; } - - default: - return paUnanticipatedHostError; - } -#endif - - (*pTask) = task; - return ret; -} - -// ------------------------------------------------------------------------------------------ -PaError PaWasapi_ThreadPriorityRevert(void *pTask) -{ - if (pTask == NULL) - return paUnanticipatedHostError; - -#ifndef PA_WINRT - MMCSS_deactivate((HANDLE)pTask); -#else - // Revert previous priority by removing 0x80000000 mask - if (SetThreadPriority(GetCurrentThread(), (int)((intptr_t)pTask & ~0x80000000)) == FALSE) - return paUnanticipatedHostError; -#endif - - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -// Described at: -// http://msdn.microsoft.com/en-us/library/dd371387(v=VS.85).aspx - -PaError PaWasapi_GetJackCount(PaDeviceIndex device, int *pJackCount) -{ 
-#ifndef PA_WINRT - PaError ret; - HRESULT hr = S_OK; - PaWasapiDeviceInfo *deviceInfo; - IDeviceTopology *pDeviceTopology = NULL; - IConnector *pConnFrom = NULL; - IConnector *pConnTo = NULL; - IPart *pPart = NULL; - IKsJackDescription *pJackDesc = NULL; - UINT jackCount = 0; - - if (pJackCount == NULL) - return paUnanticipatedHostError; - - if ((ret = _GetWasapiDeviceInfoByDeviceIndex(&deviceInfo, device)) != paNoError) - return ret; - - // Get the endpoint device's IDeviceTopology interface - hr = IMMDevice_Activate(deviceInfo->device, &pa_IID_IDeviceTopology, - CLSCTX_INPROC_SERVER, NULL, (void**)&pDeviceTopology); - IF_FAILED_JUMP(hr, error); - - // The device topology for an endpoint device always contains just one connector (connector number 0) - hr = IDeviceTopology_GetConnector(pDeviceTopology, 0, &pConnFrom); - IF_FAILED_JUMP(hr, error); - - // Step across the connection to the jack on the adapter - hr = IConnector_GetConnectedTo(pConnFrom, &pConnTo); - if (HRESULT_FROM_WIN32(ERROR_PATH_NOT_FOUND) == hr) - { - // The adapter device is not currently active - hr = E_NOINTERFACE; - } - IF_FAILED_JUMP(hr, error); - - // Get the connector's IPart interface - hr = IConnector_QueryInterface(pConnTo, &pa_IID_IPart, (void**)&pPart); - IF_FAILED_JUMP(hr, error); - - // Activate the connector's IKsJackDescription interface - hr = IPart_Activate(pPart, CLSCTX_INPROC_SERVER, &pa_IID_IKsJackDescription, (void**)&pJackDesc); - IF_FAILED_JUMP(hr, error); - - // Return jack count for this device - hr = IKsJackDescription_GetJackCount(pJackDesc, &jackCount); - IF_FAILED_JUMP(hr, error); - - // Set. - (*pJackCount) = jackCount; - - // Ok. - ret = paNoError; - -error: - - SAFE_RELEASE(pDeviceTopology); - SAFE_RELEASE(pConnFrom); - SAFE_RELEASE(pConnTo); - SAFE_RELEASE(pPart); - SAFE_RELEASE(pJackDesc); - - LogHostError(hr); - return paNoError; -#else - (void)device; - (void)pJackCount; - return paUnanticipatedHostError; -#endif -} - -// ------------------------------------------------------------------------------------------ -#ifndef PA_WINRT -static PaWasapiJackConnectionType _ConvertJackConnectionTypeWASAPIToPA(int connType) -{ - switch (connType) - { - case eConnTypeUnknown: return eJackConnTypeUnknown; -#ifdef _KS_ - case eConnType3Point5mm: return eJackConnType3Point5mm; -#else - case eConnTypeEighth: return eJackConnType3Point5mm; -#endif - case eConnTypeQuarter: return eJackConnTypeQuarter; - case eConnTypeAtapiInternal: return eJackConnTypeAtapiInternal; - case eConnTypeRCA: return eJackConnTypeRCA; - case eConnTypeOptical: return eJackConnTypeOptical; - case eConnTypeOtherDigital: return eJackConnTypeOtherDigital; - case eConnTypeOtherAnalog: return eJackConnTypeOtherAnalog; - case eConnTypeMultichannelAnalogDIN: return eJackConnTypeMultichannelAnalogDIN; - case eConnTypeXlrProfessional: return eJackConnTypeXlrProfessional; - case eConnTypeRJ11Modem: return eJackConnTypeRJ11Modem; - case eConnTypeCombination: return eJackConnTypeCombination; - } - return eJackConnTypeUnknown; -} -#endif - -// ------------------------------------------------------------------------------------------ -#ifndef PA_WINRT -static PaWasapiJackGeoLocation _ConvertJackGeoLocationWASAPIToPA(int geoLoc) -{ - switch (geoLoc) - { - case eGeoLocRear: return eJackGeoLocRear; - case eGeoLocFront: return eJackGeoLocFront; - case eGeoLocLeft: return eJackGeoLocLeft; - case eGeoLocRight: return eJackGeoLocRight; - case eGeoLocTop: return eJackGeoLocTop; - case eGeoLocBottom: return eJackGeoLocBottom; -#ifdef _KS_ - case 
eGeoLocRearPanel: return eJackGeoLocRearPanel; -#else - case eGeoLocRearOPanel: return eJackGeoLocRearPanel; -#endif - case eGeoLocRiser: return eJackGeoLocRiser; - case eGeoLocInsideMobileLid: return eJackGeoLocInsideMobileLid; - case eGeoLocDrivebay: return eJackGeoLocDrivebay; - case eGeoLocHDMI: return eJackGeoLocHDMI; - case eGeoLocOutsideMobileLid: return eJackGeoLocOutsideMobileLid; - case eGeoLocATAPI: return eJackGeoLocATAPI; - } - return eJackGeoLocUnk; -} -#endif - -// ------------------------------------------------------------------------------------------ -#ifndef PA_WINRT -static PaWasapiJackGenLocation _ConvertJackGenLocationWASAPIToPA(int genLoc) -{ - switch (genLoc) - { - case eGenLocPrimaryBox: return eJackGenLocPrimaryBox; - case eGenLocInternal: return eJackGenLocInternal; -#ifdef _KS_ - case eGenLocSeparate: return eJackGenLocSeparate; -#else - case eGenLocSeperate: return eJackGenLocSeparate; -#endif - case eGenLocOther: return eJackGenLocOther; - } - return eJackGenLocPrimaryBox; -} -#endif - -// ------------------------------------------------------------------------------------------ -#ifndef PA_WINRT -static PaWasapiJackPortConnection _ConvertJackPortConnectionWASAPIToPA(int portConn) -{ - switch (portConn) - { - case ePortConnJack: return eJackPortConnJack; - case ePortConnIntegratedDevice: return eJackPortConnIntegratedDevice; - case ePortConnBothIntegratedAndJack: return eJackPortConnBothIntegratedAndJack; - case ePortConnUnknown: return eJackPortConnUnknown; - } - return eJackPortConnJack; -} -#endif - -// ------------------------------------------------------------------------------------------ -// Described at: -// http://msdn.microsoft.com/en-us/library/dd371387(v=VS.85).aspx - -PaError PaWasapi_GetJackDescription(PaDeviceIndex device, int jackIndex, PaWasapiJackDescription *pJackDescription) -{ -#ifndef PA_WINRT - PaError ret; - HRESULT hr = S_OK; - PaWasapiDeviceInfo *deviceInfo; - IDeviceTopology *pDeviceTopology = NULL; - IConnector *pConnFrom = NULL; - IConnector *pConnTo = NULL; - IPart *pPart = NULL; - IKsJackDescription *pJackDesc = NULL; - KSJACK_DESCRIPTION jack = { 0 }; - - if ((ret = _GetWasapiDeviceInfoByDeviceIndex(&deviceInfo, device)) != paNoError) - return ret; - - // Get the endpoint device's IDeviceTopology interface - hr = IMMDevice_Activate(deviceInfo->device, &pa_IID_IDeviceTopology, - CLSCTX_INPROC_SERVER, NULL, (void**)&pDeviceTopology); - IF_FAILED_JUMP(hr, error); - - // The device topology for an endpoint device always contains just one connector (connector number 0) - hr = IDeviceTopology_GetConnector(pDeviceTopology, 0, &pConnFrom); - IF_FAILED_JUMP(hr, error); - - // Step across the connection to the jack on the adapter - hr = IConnector_GetConnectedTo(pConnFrom, &pConnTo); - if (HRESULT_FROM_WIN32(ERROR_PATH_NOT_FOUND) == hr) - { - // The adapter device is not currently active - hr = E_NOINTERFACE; - } - IF_FAILED_JUMP(hr, error); - - // Get the connector's IPart interface - hr = IConnector_QueryInterface(pConnTo, &pa_IID_IPart, (void**)&pPart); - IF_FAILED_JUMP(hr, error); - - // Activate the connector's IKsJackDescription interface - hr = IPart_Activate(pPart, CLSCTX_INPROC_SERVER, &pa_IID_IKsJackDescription, (void**)&pJackDesc); - IF_FAILED_JUMP(hr, error); - - // Test to return jack description struct for index 0 - hr = IKsJackDescription_GetJackDescription(pJackDesc, jackIndex, &jack); - IF_FAILED_JUMP(hr, error); - - // Convert WASAPI values to PA format - pJackDescription->channelMapping = jack.ChannelMapping; - 
pJackDescription->color = jack.Color; - pJackDescription->connectionType = _ConvertJackConnectionTypeWASAPIToPA(jack.ConnectionType); - pJackDescription->genLocation = _ConvertJackGenLocationWASAPIToPA(jack.GenLocation); - pJackDescription->geoLocation = _ConvertJackGeoLocationWASAPIToPA(jack.GeoLocation); - pJackDescription->isConnected = jack.IsConnected; - pJackDescription->portConnection = _ConvertJackPortConnectionWASAPIToPA(jack.PortConnection); - - // Ok - ret = paNoError; - -error: - - SAFE_RELEASE(pDeviceTopology); - SAFE_RELEASE(pConnFrom); - SAFE_RELEASE(pConnTo); - SAFE_RELEASE(pPart); - SAFE_RELEASE(pJackDesc); - - LogHostError(hr); - return ret; - -#else - (void)device; - (void)jackIndex; - (void)pJackDescription; - return paUnanticipatedHostError; -#endif -} - -// ------------------------------------------------------------------------------------------ -PaError PaWasapi_GetAudioClient(PaStream *pStream, void **pAudioClient, int bOutput) -{ - PaWasapiStream *stream = (PaWasapiStream *)pStream; - if (stream == NULL) - return paBadStreamPtr; - - if (pAudioClient == NULL) - return paUnanticipatedHostError; - - (*pAudioClient) = (bOutput == TRUE ? stream->out.clientParent : stream->in.clientParent); - - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -#ifdef PA_WINRT -static void CopyNameOrIdString(WCHAR *dst, const UINT32 dstMaxCount, const WCHAR *src) -{ - UINT32 i; - - for (i = 0; i < dstMaxCount; ++i) - dst[i] = 0; - - if (src != NULL) - { - for (i = 0; (src[i] != 0) && (i < dstMaxCount); ++i) - dst[i] = src[i]; - } -} -#endif - -// ------------------------------------------------------------------------------------------ -PaError PaWasapiWinrt_SetDefaultDeviceId( const unsigned short *pId, int bOutput ) -{ -#ifdef PA_WINRT - INT32 i; - PaWasapiWinrtDeviceListRole *role = (bOutput ? &g_DeviceListInfo.render : &g_DeviceListInfo.capture); - - assert(STATIC_ARRAY_SIZE(role->defaultId) == PA_WASAPI_DEVICE_ID_LEN); - - // Validate Id length - if (pId != NULL) - { - for (i = 0; pId[i] != 0; ++i) - { - if (i >= PA_WASAPI_DEVICE_ID_LEN) - return paBufferTooBig; - } - } - - // Set Id (or reset to all 0 if NULL is provided) - CopyNameOrIdString(role->defaultId, STATIC_ARRAY_SIZE(role->defaultId), pId); - - return paNoError; -#else - return paIncompatibleStreamHostApi; -#endif -} - -// ------------------------------------------------------------------------------------------ -PaError PaWasapiWinrt_PopulateDeviceList( const unsigned short **pId, const unsigned short **pName, - const PaWasapiDeviceRole *pRole, unsigned int count, int bOutput ) -{ -#ifdef PA_WINRT - UINT32 i, j; - PaWasapiWinrtDeviceListRole *role = (bOutput ? 
&g_DeviceListInfo.render : &g_DeviceListInfo.capture); - - memset(&role->devices, 0, sizeof(role->devices)); - role->deviceCount = 0; - - if (count == 0) - return paNoError; - else - if (count > PA_WASAPI_DEVICE_MAX_COUNT) - return paBufferTooBig; - - // pName or pRole are optional - if (pId == NULL) - return paInsufficientMemory; - - // Validate Id and Name lengths - for (i = 0; i < count; ++i) - { - const unsigned short *id = pId[i]; - const unsigned short *name = pName[i]; - - for (j = 0; id[j] != 0; ++j) - { - if (j >= PA_WASAPI_DEVICE_ID_LEN) - return paBufferTooBig; - } - - for (j = 0; name[j] != 0; ++j) - { - if (j >= PA_WASAPI_DEVICE_NAME_LEN) - return paBufferTooBig; - } - } - - // Set Id and Name (or reset to all 0 if NULL is provided) - for (i = 0; i < count; ++i) - { - CopyNameOrIdString(role->devices[i].id, STATIC_ARRAY_SIZE(role->devices[i].id), pId[i]); - CopyNameOrIdString(role->devices[i].name, STATIC_ARRAY_SIZE(role->devices[i].name), pName[i]); - role->devices[i].formFactor = (pRole != NULL ? (EndpointFormFactor)pRole[i] : UnknownFormFactor); - - // Count device if it has at least the Id - role->deviceCount += (role->devices[i].id[0] != 0); - } - - return paNoError; -#else - return paIncompatibleStreamHostApi; -#endif -} - -// ------------------------------------------------------------------------------------------ -PaError PaWasapi_SetStreamStateHandler( PaStream *pStream, PaWasapiStreamStateCallback fnStateHandler, void *pUserData ) -{ - PaWasapiStream *stream = (PaWasapiStream *)pStream; - if (stream == NULL) - return paBadStreamPtr; - - stream->fnStateHandler = fnStateHandler; - stream->pStateHandlerUserData = pUserData; - - return paNoError; -} - -// ------------------------------------------------------------------------------------------ -HRESULT _PollGetOutputFramesAvailable(PaWasapiStream *stream, UINT32 *available) -{ - HRESULT hr; - UINT32 frames = stream->out.framesPerHostCallback, - padding = 0; - - (*available) = 0; - - // get read position - if ((hr = IAudioClient_GetCurrentPadding(stream->out.clientProc, &padding)) != S_OK) - return LogHostError(hr); - - // get available - frames -= padding; - - // set - (*available) = frames; - return hr; -} - -// ------------------------------------------------------------------------------------------ -HRESULT _PollGetInputFramesAvailable(PaWasapiStream *stream, UINT32 *available) -{ - HRESULT hr; - - (*available) = 0; - - // GetCurrentPadding() has opposite meaning to Output stream - if ((hr = IAudioClient_GetCurrentPadding(stream->in.clientProc, available)) != S_OK) - return LogHostError(hr); - - return hr; -} - -// ------------------------------------------------------------------------------------------ -static HRESULT ProcessOutputBuffer(PaWasapiStream *stream, PaWasapiHostProcessor *processor, UINT32 frames) -{ - HRESULT hr; - BYTE *data = NULL; - - // Get buffer - if ((hr = IAudioRenderClient_GetBuffer(stream->renderClient, frames, &data)) != S_OK) - { - // Both modes, Shared and Exclusive, can fail with AUDCLNT_E_BUFFER_TOO_LARGE error - #if 0 - if (stream->out.shareMode == AUDCLNT_SHAREMODE_SHARED) - { - // Using GetCurrentPadding to overcome AUDCLNT_E_BUFFER_TOO_LARGE in - // shared mode results in no sound in Event-driven mode (MSDN does not - // document this, or is it WASAPI bug?), thus we better - // try to acquire buffer next time when GetBuffer allows to do so. 
- #if 0 - // Get Read position - UINT32 padding = 0; - hr = IAudioClient_GetCurrentPadding(stream->out.clientProc, &padding); - if (hr != S_OK) - return LogHostError(hr); - - // Get frames to write - frames -= padding; - if (frames == 0) - return S_OK; - - if ((hr = IAudioRenderClient_GetBuffer(stream->renderClient, frames, &data)) != S_OK) - return LogHostError(hr); - #else - if (hr == AUDCLNT_E_BUFFER_TOO_LARGE) - return S_OK; // be silent in shared mode, try again next time - #endif - } - else - return LogHostError(hr); - #else - if (hr == AUDCLNT_E_BUFFER_TOO_LARGE) - return S_OK; // try again next time - - return LogHostError(hr); - #endif - } - - // Process data - if (stream->out.monoMixer != NULL) - { - // expand buffer - UINT32 mono_frames_size = frames * (stream->out.wavex.Format.wBitsPerSample / 8); - if (mono_frames_size > stream->out.monoBufferSize) - { - stream->out.monoBuffer = PaWasapi_ReallocateMemory(stream->out.monoBuffer, (stream->out.monoBufferSize = mono_frames_size)); - if (stream->out.monoBuffer == NULL) - { - hr = E_OUTOFMEMORY; - LogHostError(hr); - return hr; - } - } - - // process - processor[S_OUTPUT].processor(NULL, 0, (BYTE *)stream->out.monoBuffer, frames, processor[S_OUTPUT].userData); - - // mix 1 to 2 channels - stream->out.monoMixer(data, stream->out.monoBuffer, frames); - } - else - { - processor[S_OUTPUT].processor(NULL, 0, data, frames, processor[S_OUTPUT].userData); - } - - // Release buffer - if ((hr = IAudioRenderClient_ReleaseBuffer(stream->renderClient, frames, 0)) != S_OK) - LogHostError(hr); - - return hr; -} - -// ------------------------------------------------------------------------------------------ -static HRESULT ProcessInputBuffer(PaWasapiStream *stream, PaWasapiHostProcessor *processor) -{ - HRESULT hr = S_OK; - UINT32 frames; - BYTE *data = NULL; - DWORD flags = 0; - - for (;;) - { - // Check if blocking call must be interrupted - if (WaitForSingleObject(stream->hCloseRequest, 0) != WAIT_TIMEOUT) - break; - - // Find out if any frames available - frames = 0; - if ((hr = _PollGetInputFramesAvailable(stream, &frames)) != S_OK) - return hr; - - // Empty/consumed buffer - if (frames == 0) - break; - - // Get the available data in the shared buffer. 
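// (AUDCLNT_S_BUFFER_EMPTY is a success code meaning no capture packet is ready yet; it is handled
// below by leaving the loop rather than being treated as an error. Every successful GetBuffer call
// is paired with a matching ReleaseBuffer further down.)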
- if ((hr = IAudioCaptureClient_GetBuffer(stream->captureClient, &data, &frames, &flags, NULL, NULL)) != S_OK) - { - if (hr == AUDCLNT_S_BUFFER_EMPTY) - { - hr = S_OK; - break; // Empty/consumed buffer - } - - return LogHostError(hr); - break; - } - - // Detect silence - // if (flags & AUDCLNT_BUFFERFLAGS_SILENT) - // data = NULL; - - // Process data - if (stream->in.monoMixer != NULL) - { - // expand buffer - UINT32 mono_frames_size = frames * (stream->in.wavex.Format.wBitsPerSample / 8); - if (mono_frames_size > stream->in.monoBufferSize) - { - stream->in.monoBuffer = PaWasapi_ReallocateMemory(stream->in.monoBuffer, (stream->in.monoBufferSize = mono_frames_size)); - if (stream->in.monoBuffer == NULL) - { - hr = E_OUTOFMEMORY; - LogHostError(hr); - return hr; - } - } - - // mix 1 to 2 channels - stream->in.monoMixer(stream->in.monoBuffer, data, frames); - - // process - processor[S_INPUT].processor((BYTE *)stream->in.monoBuffer, frames, NULL, 0, processor[S_INPUT].userData); - } - else - { - processor[S_INPUT].processor(data, frames, NULL, 0, processor[S_INPUT].userData); - } - - // Release buffer - if ((hr = IAudioCaptureClient_ReleaseBuffer(stream->captureClient, frames)) != S_OK) - return LogHostError(hr); - - //break; - } - - return hr; -} - -// ------------------------------------------------------------------------------------------ -void _StreamOnStop(PaWasapiStream *stream) -{ - // Stop INPUT/OUTPUT clients - if (!stream->bBlocking) - { - if (stream->in.clientProc != NULL) - IAudioClient_Stop(stream->in.clientProc); - if (stream->out.clientProc != NULL) - IAudioClient_Stop(stream->out.clientProc); - } - else - { - if (stream->in.clientParent != NULL) - IAudioClient_Stop(stream->in.clientParent); - if (stream->out.clientParent != NULL) - IAudioClient_Stop(stream->out.clientParent); - } - - // Restore thread priority - if (stream->hAvTask != NULL) - { - PaWasapi_ThreadPriorityRevert(stream->hAvTask); - stream->hAvTask = NULL; - } - - // Notify - if (stream->streamRepresentation.streamFinishedCallback != NULL) - stream->streamRepresentation.streamFinishedCallback(stream->streamRepresentation.userData); -} - -// ------------------------------------------------------------------------------------------ -static BOOL PrepareComPointers(PaWasapiStream *stream, BOOL *threadComInitialized) -{ - HRESULT hr; - - /* - If COM is already initialized CoInitialize will either return - FALSE, or RPC_E_CHANGED_MODE if it was initialized in a different - threading mode. In either case we shouldn't consider it an error - but we need to be careful to not call CoUninitialize() if - RPC_E_CHANGED_MODE was returned. - */ - hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED); - if (FAILED(hr) && (hr != RPC_E_CHANGED_MODE)) - { - PRINT(("WASAPI: failed ProcThreadEvent CoInitialize")); - return FALSE; - } - if (hr != RPC_E_CHANGED_MODE) - *threadComInitialized = TRUE; - - // Unmarshal stream pointers for safe COM operation - hr = UnmarshalStreamComPointers(stream); - if (hr != S_OK) - { - PRINT(("WASAPI: Error unmarshaling stream COM pointers. 
HRESULT: %i\n", hr)); - CoUninitialize(); - return FALSE; - } - - return TRUE; -} - -// ------------------------------------------------------------------------------------------ -static void FinishComPointers(PaWasapiStream *stream, BOOL threadComInitialized) -{ - // Release unmarshaled COM pointers - ReleaseUnmarshaledComPointers(stream); - - // Cleanup COM for this thread - if (threadComInitialized == TRUE) - CoUninitialize(); -} - -// ------------------------------------------------------------------------------------------ -PA_THREAD_FUNC ProcThreadEvent(void *param) -{ - PaWasapiHostProcessor processor[S_COUNT]; - HRESULT hr = S_OK; - DWORD dwResult; - PaWasapiStream *stream = (PaWasapiStream *)param; - PaWasapiHostProcessor defaultProcessor; - BOOL setEvent[S_COUNT] = { FALSE, FALSE }; - BOOL waitAllEvents = FALSE; - BOOL threadComInitialized = FALSE; - SystemTimer timer; - - // Notify: state - NotifyStateChanged(stream, paWasapiStreamStateThreadPrepare, ERROR_SUCCESS); - - // Prepare COM pointers - if (!PrepareComPointers(stream, &threadComInitialized)) - return (UINT32)paUnanticipatedHostError; - - // Request fine (1 ms) granularity of the system timer functions for precise operation of waitable timers - SystemTimer_SetGranularity(&timer, 1); - - // Waiting on all events in case of Full-Duplex/Exclusive mode. - if ((stream->in.clientProc != NULL) && (stream->out.clientProc != NULL)) - { - waitAllEvents = (stream->in.shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE) && - (stream->out.shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE); - } - - // Setup data processors - defaultProcessor.processor = WaspiHostProcessingLoop; - defaultProcessor.userData = stream; - processor[S_INPUT] = (stream->hostProcessOverrideInput.processor != NULL ? stream->hostProcessOverrideInput : defaultProcessor); - processor[S_OUTPUT] = (stream->hostProcessOverrideOutput.processor != NULL ? 
stream->hostProcessOverrideOutput : defaultProcessor); - - // Boost thread priority - PaWasapi_ThreadPriorityBoost((void **)&stream->hAvTask, stream->nThreadPriority); - - // Create events - if (stream->event[S_OUTPUT] == NULL) - { - stream->event[S_OUTPUT] = CreateEvent(NULL, FALSE, FALSE, NULL); - setEvent[S_OUTPUT] = TRUE; - } - if (stream->event[S_INPUT] == NULL) - { - stream->event[S_INPUT] = CreateEvent(NULL, FALSE, FALSE, NULL); - setEvent[S_INPUT] = TRUE; - } - if ((stream->event[S_OUTPUT] == NULL) || (stream->event[S_INPUT] == NULL)) - { - PRINT(("WASAPI Thread: failed creating Input/Output event handle\n")); - goto thread_error; - } - - // Signal: stream running - stream->running = TRUE; - - // Notify: thread started - SetEvent(stream->hThreadStart); - - // Initialize event & start INPUT stream - if (stream->in.clientProc) - { - // Create & set handle - if (setEvent[S_INPUT]) - { - if ((hr = IAudioClient_SetEventHandle(stream->in.clientProc, stream->event[S_INPUT])) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - } - - // Start - if ((hr = IAudioClient_Start(stream->in.clientProc)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - } - - // Initialize event & start OUTPUT stream - if (stream->out.clientProc) - { - // Create & set handle - if (setEvent[S_OUTPUT]) - { - if ((hr = IAudioClient_SetEventHandle(stream->out.clientProc, stream->event[S_OUTPUT])) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - } - - // Preload buffer before start - if ((hr = ProcessOutputBuffer(stream, processor, stream->out.framesPerBuffer)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - - // Start - if ((hr = IAudioClient_Start(stream->out.clientProc)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - - } - - // Notify: state - NotifyStateChanged(stream, paWasapiStreamStateThreadStart, ERROR_SUCCESS); - - // Processing Loop - for (;;) - { - // 10 sec timeout (on timeout stream will auto-stop when processed by WAIT_TIMEOUT case) - dwResult = WaitForMultipleObjects(S_COUNT, stream->event, waitAllEvents, 10*1000); - - // Check for close event (after wait for buffers to avoid any calls to user - // callback when hCloseRequest was set) - if (WaitForSingleObject(stream->hCloseRequest, 0) != WAIT_TIMEOUT) - break; - - // Process S_INPUT/S_OUTPUT - switch (dwResult) - { - case WAIT_TIMEOUT: { - PRINT(("WASAPI Thread: WAIT_TIMEOUT - probably bad audio driver or Vista x64 bug: use paWinWasapiPolling instead\n")); - goto thread_end; - break; } - - // Input stream - case WAIT_OBJECT_0 + S_INPUT: { - - if (stream->captureClient == NULL) - break; - - if ((hr = ProcessInputBuffer(stream, processor)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - - break; } - - // Output stream - case WAIT_OBJECT_0 + S_OUTPUT: { - - if (stream->renderClient == NULL) - break; - - if ((hr = ProcessOutputBuffer(stream, processor, stream->out.framesPerBuffer)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - - break; } - } - } - -thread_end: - - // Process stop - _StreamOnStop(stream); - - // Release unmarshaled COM pointers - FinishComPointers(stream, threadComInitialized); - - // Restore system timer granularity - SystemTimer_RestoreGranularity(&timer); - - // Notify: not running - stream->running = FALSE; - - // Notify: thread exited - SetEvent(stream->hThreadExit); - - // Notify: state - NotifyStateChanged(stream, paWasapiStreamStateThreadStop, hr); - - return 0; - -thread_error: - - // Prevent deadlocking in Pa_StreamStart - SetEvent(stream->hThreadStart); 
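// (StartStream waits on hThreadStart with a 60-second timeout; signalling it here lets that wait
// return promptly on failure, after which this thread runs the common thread_end cleanup via the
// goto below.)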
- - // Exit - goto thread_end; -} - -// ------------------------------------------------------------------------------------------ -static UINT32 GetSleepTime(PaWasapiStream *stream, UINT32 sleepTimeIn, UINT32 sleepTimeOut, UINT32 userFramesOut) -{ - UINT32 sleepTime; - - // According to the issue [https://github.com/PortAudio/portaudio/issues/303] glitches may occur when user frames - // equal to 1/2 of the host buffer frames, therefore the empirical workaround for this problem is to lower - // the sleep time by 2 - if (userFramesOut != 0) - { - UINT32 chunks = stream->out.framesPerHostCallback / userFramesOut; - if (chunks <= 2) - { - sleepTimeOut /= 2; - PRINT(("WASAPI: underrun workaround, sleep [%d] ms - 1/2 of the user buffer[%d] | host buffer[%d]\n", sleepTimeOut, userFramesOut, stream->out.framesPerHostCallback)); - } - } - - // Choose the smallest - if ((sleepTimeIn != 0) && (sleepTimeOut != 0)) - sleepTime = min(sleepTimeIn, sleepTimeOut); - else - sleepTime = (sleepTimeIn ? sleepTimeIn : sleepTimeOut); - - return sleepTime; -} - -// ------------------------------------------------------------------------------------------ -static UINT32 ConfigureLoopSleepTimeAndScheduler(PaWasapiStream *stream, ThreadIdleScheduler *scheduler) -{ - UINT32 sleepTime, sleepTimeIn, sleepTimeOut; - UINT32 userFramesIn = stream->in.framesPerHostCallback / WASAPI_PACKETS_PER_INPUT_BUFFER; - UINT32 userFramesOut = stream->out.framesPerBuffer; - - // Adjust polling time for non-paUtilFixedHostBufferSize, input stream is not adjustable as it is being - // polled according to its packet length - if (stream->bufferMode != paUtilFixedHostBufferSize) - { - userFramesOut = (stream->bufferProcessor.framesPerUserBuffer ? stream->bufferProcessor.framesPerUserBuffer : - stream->out.params.frames_per_buffer); - } - - // Calculate timeout for the next polling attempt - sleepTimeIn = GetFramesSleepTime(userFramesIn, stream->in.wavex.Format.nSamplesPerSec); - sleepTimeOut = GetFramesSleepTime(userFramesOut, stream->out.wavex.Format.nSamplesPerSec); - - // WASAPI input packets tend to expire very easily, let's limit sleep time to 2 milliseconds - // for all cases. 
Please propose better solution if any - if (sleepTimeIn > 2) - sleepTimeIn = 2; - - sleepTime = GetSleepTime(stream, sleepTimeIn, sleepTimeOut, userFramesOut); - - // Make sure not 0, othervise use ThreadIdleScheduler to bounce between [0, 1] ms to avoid too busy loop - if (sleepTime == 0) - { - sleepTimeIn = GetFramesSleepTimeMicroseconds(userFramesIn, stream->in.wavex.Format.nSamplesPerSec); - sleepTimeOut = GetFramesSleepTimeMicroseconds(userFramesOut, stream->out.wavex.Format.nSamplesPerSec); - - sleepTime = GetSleepTime(stream, sleepTimeIn, sleepTimeOut, userFramesOut); - - // Setup thread sleep scheduler - ThreadIdleScheduler_Setup(scheduler, 1, sleepTime/* microseconds here */); - sleepTime = 0; - } - - return sleepTime; -} - -// ------------------------------------------------------------------------------------------ -static inline INT32 GetNextSleepTime(SystemTimer *timer, ThreadIdleScheduler *scheduler, LONGLONG startTime, - UINT32 sleepTime) -{ - INT32 nextSleepTime; - INT32 procTime; - - // Get next sleep time - if (sleepTime == 0) - nextSleepTime = ThreadIdleScheduler_NextSleep(scheduler); - else - nextSleepTime = sleepTime; - - // Adjust next sleep time dynamically depending on how much time was spent in ProcessOutputBuffer/ProcessInputBuffer - // therefore periodicity will not jitter or be increased for the amount of time spent in processing; - // example when sleepTime is 10 ms where [] is polling time slot, {} processing time slot: - // - // [9],{2},[8],{1},[9],{1},[9],{3},[7],{2},[8],{3},[7],{2},[8],{2},[8],{3},[7],{2},[8],... - // - procTime = (INT32)(SystemTimer_GetTime(timer) - startTime); - nextSleepTime -= procTime; - if (nextSleepTime < timer->granularity) - nextSleepTime = 0; - else - if (timer->granularity > 1) - nextSleepTime = ALIGN_BWD(nextSleepTime, timer->granularity); - -#ifdef PA_WASAPI_LOG_TIME_SLOTS - printf("{%d},", procTime); -#endif - - return nextSleepTime; -} - -// ------------------------------------------------------------------------------------------ -PA_THREAD_FUNC ProcThreadPoll(void *param) -{ - PaWasapiHostProcessor processor[S_COUNT]; - HRESULT hr = S_OK; - PaWasapiStream *stream = (PaWasapiStream *)param; - PaWasapiHostProcessor defaultProcessor; - INT32 i; - ThreadIdleScheduler scheduler; - SystemTimer timer; - LONGLONG startTime; - UINT32 sleepTime; - INT32 nextSleepTime = 0; //! Do first loop without waiting as time could be spent when calling other APIs before ProcessXXXBuffer. - BOOL threadComInitialized = FALSE; -#ifdef PA_WASAPI_LOG_TIME_SLOTS - LONGLONG startWaitTime; -#endif - - // Notify: state - NotifyStateChanged(stream, paWasapiStreamStateThreadPrepare, ERROR_SUCCESS); - - // Prepare COM pointers - if (!PrepareComPointers(stream, &threadComInitialized)) - return (UINT32)paUnanticipatedHostError; - - // Request fine (1 ms) granularity of the system timer functions to guarantee correct logic around WaitForSingleObject - SystemTimer_SetGranularity(&timer, 1); - - // Calculate sleep time of the processing loop (inside WaitForSingleObject) - sleepTime = ConfigureLoopSleepTimeAndScheduler(stream, &scheduler); - - // Setup data processors - defaultProcessor.processor = WaspiHostProcessingLoop; - defaultProcessor.userData = stream; - processor[S_INPUT] = (stream->hostProcessOverrideInput.processor != NULL ? stream->hostProcessOverrideInput : defaultProcessor); - processor[S_OUTPUT] = (stream->hostProcessOverrideOutput.processor != NULL ? 
stream->hostProcessOverrideOutput : defaultProcessor); - - // Boost thread priority - PaWasapi_ThreadPriorityBoost((void **)&stream->hAvTask, stream->nThreadPriority); - - // Signal: stream running - stream->running = TRUE; - - // Notify: thread started - SetEvent(stream->hThreadStart); - - // Initialize event & start INPUT stream - if (stream->in.clientProc) - { - if ((hr = IAudioClient_Start(stream->in.clientProc)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - } - - // Initialize event & start OUTPUT stream - if (stream->out.clientProc) - { - // Preload buffer (obligatory, othervise ->Start() will fail), avoid processing - // when in full-duplex mode as it requires input processing as well - if (!PA_WASAPI__IS_FULLDUPLEX(stream)) - { - UINT32 frames = 0; - if ((hr = _PollGetOutputFramesAvailable(stream, &frames)) == S_OK) - { - if (stream->bufferMode == paUtilFixedHostBufferSize) - { - // It is important to preload whole host buffer to avoid underruns/glitches when stream is started, - // for more details see the discussion: https://github.com/PortAudio/portaudio/issues/303 - while (frames >= stream->out.framesPerBuffer) - { - if ((hr = ProcessOutputBuffer(stream, processor, stream->out.framesPerBuffer)) != S_OK) - { - LogHostError(hr); // not fatal, just log - break; - } - - frames -= stream->out.framesPerBuffer; - } - } - else - { - // Some devices may not start (will get stuck with 0 ready frames) if data not prefetched - if (frames == 0) - frames = stream->out.framesPerBuffer; - - // USB DACs report large buffer in Exclusive mode and if it is filled fully will stuck in - // non playing state, e.g. IAudioClient_GetCurrentPadding() will start reporting max buffer size - // constantly, thus preload data size equal to the user buffer to allow process going - if ((stream->out.shareMode == AUDCLNT_SHAREMODE_EXCLUSIVE) && (frames >= (stream->out.framesPerBuffer * 2))) - frames -= stream->out.framesPerBuffer; - - if ((hr = ProcessOutputBuffer(stream, processor, frames)) != S_OK) - { - LogHostError(hr); // not fatal, just log - } - } - } - else - { - LogHostError(hr); // not fatal, just log - } - } - - // Start - if ((hr = IAudioClient_Start(stream->out.clientProc)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - } - - // Notify: state - NotifyStateChanged(stream, paWasapiStreamStateThreadStart, ERROR_SUCCESS); - -#ifdef PA_WASAPI_LOG_TIME_SLOTS - startWaitTime = SystemTimer_GetTime(&timer); -#endif - - if (!PA_WASAPI__IS_FULLDUPLEX(stream)) - { - // Processing Loop - while (WaitForSingleObject(stream->hCloseRequest, nextSleepTime) == WAIT_TIMEOUT) - { - startTime = SystemTimer_GetTime(&timer); - - #ifdef PA_WASAPI_LOG_TIME_SLOTS - printf("[%d|%d],", nextSleepTime, (INT32)(startTime - startWaitTime)); - #endif - - for (i = 0; i < S_COUNT; ++i) - { - // Process S_INPUT/S_OUTPUT - switch (i) - { - // Input stream - case S_INPUT: { - - if (stream->captureClient == NULL) - break; - - if ((hr = ProcessInputBuffer(stream, processor)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - - break; } - - // Output stream - case S_OUTPUT: { - - UINT32 framesAvail; - - if (stream->renderClient == NULL) - break; - - // Get available frames - if ((hr = _PollGetOutputFramesAvailable(stream, &framesAvail)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - - // Output data to the user callback - if (stream->bufferMode == paUtilFixedHostBufferSize) - { - UINT32 framesProc = stream->out.framesPerBuffer; - - // If we got less frames avoid sleeping again as it might be 
the corner case and buffer - // has sufficient number of frames now, in case 'out.framesPerBuffer' is 1/2 of the host - // buffer sleeping again may cause underruns. Do short busy waiting (normally might take - // 1-2 iterations) - if (framesAvail < framesProc) - { - nextSleepTime = 0; - continue; - } - - while (framesAvail >= framesProc) - { - if ((hr = ProcessOutputBuffer(stream, processor, framesProc)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - - framesAvail -= framesProc; - } - } - else - if (framesAvail != 0) - { - if ((hr = ProcessOutputBuffer(stream, processor, framesAvail)) != S_OK) - { - LogHostError(hr); - goto thread_error; - } - } - - break; } - } - } - - // Get next sleep time - nextSleepTime = GetNextSleepTime(&timer, &scheduler, startTime, sleepTime); - - #ifdef PA_WASAPI_LOG_TIME_SLOTS - startWaitTime = SystemTimer_GetTime(&timer); - #endif - } - } - else - { - // Processing Loop (full-duplex) - while (WaitForSingleObject(stream->hCloseRequest, nextSleepTime) == WAIT_TIMEOUT) - { - UINT32 i_frames = 0, i_processed = 0, o_frames = 0; - BYTE *i_data = NULL, *o_data = NULL, *o_data_host = NULL; - DWORD i_flags = 0; - - startTime = SystemTimer_GetTime(&timer); - - #ifdef PA_WASAPI_LOG_TIME_SLOTS - printf("[%d|%d],", nextSleepTime, (INT32)(startTime - startWaitTime)); - #endif - - // get available frames - if ((hr = _PollGetOutputFramesAvailable(stream, &o_frames)) != S_OK) - { - LogHostError(hr); - break; - } - - while (o_frames != 0) - { - // get host input buffer - if ((hr = IAudioCaptureClient_GetBuffer(stream->captureClient, &i_data, &i_frames, &i_flags, NULL, NULL)) != S_OK) - { - if (hr == AUDCLNT_S_BUFFER_EMPTY) - break; // no data in capture buffer - - LogHostError(hr); - break; - } - - // process equal amount of frames - if (o_frames >= i_frames) - { - // process input amount of frames - UINT32 o_processed = i_frames; - - // get host output buffer - if ((hr = IAudioRenderClient_GetBuffer(stream->renderClient, o_processed, &o_data)) == S_OK) - { - // processed amount of i_frames - i_processed = i_frames; - o_data_host = o_data; - - // convert output mono - if (stream->out.monoMixer) - { - UINT32 mono_frames_size = o_processed * (stream->out.wavex.Format.wBitsPerSample / 8); - // expand buffer - if (mono_frames_size > stream->out.monoBufferSize) - { - stream->out.monoBuffer = PaWasapi_ReallocateMemory(stream->out.monoBuffer, (stream->out.monoBufferSize = mono_frames_size)); - if (stream->out.monoBuffer == NULL) - { - // release input buffer - IAudioCaptureClient_ReleaseBuffer(stream->captureClient, 0); - // release output buffer - IAudioRenderClient_ReleaseBuffer(stream->renderClient, 0, 0); - - LogPaError(paInsufficientMemory); - goto thread_error; - } - } - - // replace buffer pointer - o_data = (BYTE *)stream->out.monoBuffer; - } - - // convert input mono - if (stream->in.monoMixer) - { - UINT32 mono_frames_size = i_processed * (stream->in.wavex.Format.wBitsPerSample / 8); - // expand buffer - if (mono_frames_size > stream->in.monoBufferSize) - { - stream->in.monoBuffer = PaWasapi_ReallocateMemory(stream->in.monoBuffer, (stream->in.monoBufferSize = mono_frames_size)); - if (stream->in.monoBuffer == NULL) - { - // release input buffer - IAudioCaptureClient_ReleaseBuffer(stream->captureClient, 0); - // release output buffer - IAudioRenderClient_ReleaseBuffer(stream->renderClient, 0, 0); - - LogPaError(paInsufficientMemory); - goto thread_error; - } - } - - // mix 2 to 1 input channels - stream->in.monoMixer(stream->in.monoBuffer, i_data, i_processed); - 
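// (The mono mixer writes its result into in.monoBuffer, destination first and source second, which
// then stands in for the WASAPI capture buffer when the full-duplex processor is invoked below.)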
- // replace buffer pointer - i_data = (BYTE *)stream->in.monoBuffer; - } - - // process - processor[S_FULLDUPLEX].processor(i_data, i_processed, o_data, o_processed, processor[S_FULLDUPLEX].userData); - - // mix 1 to 2 output channels - if (stream->out.monoBuffer) - stream->out.monoMixer(o_data_host, stream->out.monoBuffer, o_processed); - - // release host output buffer - if ((hr = IAudioRenderClient_ReleaseBuffer(stream->renderClient, o_processed, 0)) != S_OK) - LogHostError(hr); - - o_frames -= o_processed; - } - else - { - if (stream->out.shareMode != AUDCLNT_SHAREMODE_SHARED) - LogHostError(hr); // be silent in shared mode, try again next time - } - } - else - { - i_processed = 0; - goto fd_release_buffer_in; - } - -fd_release_buffer_in: - - // release host input buffer - if ((hr = IAudioCaptureClient_ReleaseBuffer(stream->captureClient, i_processed)) != S_OK) - { - LogHostError(hr); - break; - } - - // break processing, input hasn't been accumulated yet - if (i_processed == 0) - break; - } - - // Get next sleep time - nextSleepTime = GetNextSleepTime(&timer, &scheduler, startTime, sleepTime); - - #ifdef PA_WASAPI_LOG_TIME_SLOTS - startWaitTime = SystemTimer_GetTime(&timer); - #endif - } - } - -thread_end: - - // Process stop - _StreamOnStop(stream); - - // Release unmarshaled COM pointers - FinishComPointers(stream, threadComInitialized); - - // Restore system timer granularity - SystemTimer_RestoreGranularity(&timer); - - // Notify: not running - stream->running = FALSE; - - // Notify: thread exited - SetEvent(stream->hThreadExit); - - // Notify: state - NotifyStateChanged(stream, paWasapiStreamStateThreadStop, hr); - - return 0; - -thread_error: - - // Prevent deadlocking in Pa_StreamStart - SetEvent(stream->hThreadStart); - - // Exit - goto thread_end; -} - -// ------------------------------------------------------------------------------------------ -void *PaWasapi_ReallocateMemory(void *prev, size_t size) -{ - void *ret = realloc(prev, size); - if (ret == NULL) - { - PaWasapi_FreeMemory(prev); - return NULL; - } - return ret; -} - -// ------------------------------------------------------------------------------------------ -void PaWasapi_FreeMemory(void *ptr) -{ - free(ptr); -} diff --git a/spaces/pritamdeka/health-article-keyphrase-generator/app.py b/spaces/pritamdeka/health-article-keyphrase-generator/app.py deleted file mode 100644 index dd53220da5f002215d56450d2f3250b427dedf09..0000000000000000000000000000000000000000 --- a/spaces/pritamdeka/health-article-keyphrase-generator/app.py +++ /dev/null @@ -1,179 +0,0 @@ -import nltk -import re -import nltkmodule -from newspaper import Article -from newspaper import fulltext -import requests -from nltk.tokenize import word_tokenize -from sentence_transformers import SentenceTransformer, models, losses, LoggingHandler -import pandas as pd -import numpy as np -from torch.utils.data import DataLoader -import math -from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator -from sentence_transformers.readers import * -from nltk.corpus import stopwords -stop_words = stopwords.words('english') -from sklearn.metrics.pairwise import cosine_similarity -import networkx as nx -from nltk.tokenize import sent_tokenize -import scispacy -import en_core_sci_lg -import string -import gradio as gr -import inflect - -inflect_op = inflect.engine() -nlp = en_core_sci_lg.load() -sp = en_core_sci_lg.load() -all_stopwords = sp.Defaults.stop_words - - -def remove_stopwords(sen): - sen_new = " ".join([i for i in sen if i not in stop_words]) 
- return sen_new - -def keyphrase_generator(article_link, model_1, model_2, max_num_keywords): - element=[] - final_textrank_list=[] - document=[] - text_doc=[] - score_list=[] - sum_list=[] - model_1 = SentenceTransformer(model_1) - model_2 = SentenceTransformer(model_2) - url = article_link - html = requests.get(url).text - article = fulltext(html) - corpus=sent_tokenize(article) - indicator_list=['concluded','concludes','in a study', 'concluding','conclude','in sum','in a recent study','therefore','thus','so','hence', - 'as a result','accordingly','consequently','in short','proves that','shows that','suggests that','demonstrates that','found that','observed that', - 'indicated that','suggested that','demonstrated that'] - count_dict={} - for l in corpus: - c=0 - for l2 in indicator_list: - if l.find(l2)!=-1: ### then it is a substring - c=1 - break - if c:# - count_dict[l]=1 - else: - count_dict[l]=0 - for sent, score in count_dict.items(): - score_list.append(score) - clean_sentences_new = pd.Series(corpus).str.replace("[^a-zA-Z]", " ", regex=True).tolist() - corpus_embeddings = model_1.encode(clean_sentences_new) - sim_mat = np.zeros([len(clean_sentences_new), len(clean_sentences_new)]) - for i in range(len(clean_sentences_new)): - len_embeddings=(len(corpus_embeddings[i])) - for j in range(len(clean_sentences_new)): - if i != j: - if(len_embeddings == 1024): - sim_mat[i][j] = cosine_similarity(corpus_embeddings[i].reshape(1,1024), corpus_embeddings[j].reshape(1,1024))[0,0] - elif(len_embeddings == 768): - sim_mat[i][j] = cosine_similarity(corpus_embeddings[i].reshape(1,768), corpus_embeddings[j].reshape(1,768))[0,0] - nx_graph = nx.from_numpy_array(sim_mat) - scores = nx.pagerank(nx_graph) - sentences=((scores[i],s) for i,s in enumerate(corpus)) - - for elem in sentences: - element.append(elem[0]) - for sc, lst in zip(score_list, element): ########### taking the scores from both the lists - sum1=sc+lst - sum_list.append(sum1) - x=sorted(((sum_list[i],s) for i,s in enumerate(corpus)), reverse=True) - for elem in x: - final_textrank_list.append(elem[1]) - a=int((10*len(final_textrank_list))/100.0) - if(a<5): - total=5 - else: - total=int(a) - for i in range(total): - document.append(final_textrank_list[i]) - doc=" ".join(document) - for i in document: - doc_1=nlp(i) - text_doc.append([X.text for X in doc_1.ents]) - entity_list = [item for sublist in text_doc for item in sublist] - entity_list = [word for word in entity_list if not word in all_stopwords] - entity_list = [word_entity for word_entity in entity_list if(inflect_op.singular_noun(word_entity) == False)] - entity_list=list(dict.fromkeys(entity_list)) - doc_embedding = model_2.encode([doc]) - candidates=entity_list - candidate_embeddings = model_2.encode(candidates) - distances = cosine_similarity(doc_embedding, candidate_embeddings) - top_n = max_num_keywords - keyword_list = [candidates[index] for index in distances.argsort()[0][-top_n:]] - keywords = '\n'.join(keyword_list) - return keywords - -igen=gr.Interface(keyphrase_generator, - inputs=[gr.components.Textbox(lines=1, placeholder="Provide an online health article web link here",default="", label="Article web link"), - gr.components.Dropdown(choices=['sentence-transformers/all-mpnet-base-v2', - 'sentence-transformers/all-mpnet-base-v1', - 'sentence-transformers/all-distilroberta-v1', - 'sentence-transformers/gtr-t5-large', - 'pritamdeka/S-Bluebert-snli-multinli-stsb', - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb', - 'sentence-transformers/stsb-mpnet-base-v2', - 
'sentence-transformers/all-roberta-large-v1', - 'sentence-transformers/stsb-roberta-base-v2', - 'sentence-transformers/stsb-distilroberta-base-v2', - 'sentence-transformers/sentence-t5-large', - 'sentence-transformers/sentence-t5-base'], - type="value", - default='pritamdeka/S-Biomed-Roberta-snli-multinli-stsb', - label="Select any SBERT model for TextRank from the list below"), - gr.components.Dropdown(choices=['sentence-transformers/paraphrase-mpnet-base-v2', - 'sentence-transformers/all-mpnet-base-v1', - 'sentence-transformers/paraphrase-distilroberta-base-v1', - 'sentence-transformers/paraphrase-xlm-r-multilingual-v1', - 'sentence-transformers/paraphrase-multilingual-mpnet-base-v2', - 'sentence-transformers/paraphrase-albert-small-v2', - 'sentence-transformers/paraphrase-albert-base-v2', - 'sentence-transformers/paraphrase-MiniLM-L12-v2', - 'sentence-transformers/paraphrase-MiniLM-L6-v2', - 'sentence-transformers/all-MiniLM-L12-v2', - 'sentence-transformers/all-distilroberta-v1', - 'sentence-transformers/paraphrase-TinyBERT-L6-v2', - 'sentence-transformers/paraphrase-MiniLM-L3-v2', - 'sentence-transformers/all-MiniLM-L6-v2'], - type="value", - default='sentence-transformers/all-mpnet-base-v1', - label="Select any SBERT model for keyphrases from the list below"), - gr.components.Slider(minimum=5, maximum=30, step=1, default=10, label="Max Keywords")], - outputs=gr.outputs.Textbox(type="text", label="Output"), theme="peach", - title="Health Article Keyphrase Generator", - description="Generates the keyphrases from an online health article which best describes the article. Examples are provided below for demo purposes. Choose any one example to see the results. ", - examples=[ - ["https://www.cancer.news/2021-12-22-mrna-vaccines-weaken-immune-system-cause-cancer.html", - 'sentence-transformers/all-mpnet-base-v1', - 'sentence-transformers/paraphrase-MiniLM-L12-v2', - 10], - - ["https://www.cancer.news/2022-02-04-doctors-testifying-covid-vaccines-causing-cancer-aids.html#", - 'sentence-transformers/all-mpnet-base-v1', - 'sentence-transformers/all-mpnet-base-v1', - 12], - - ["https://www.medicalnewstoday.com/articles/alzheimers-addressing-sleep-disturbance-may-alleviate-symptoms", - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb', - 'sentence-transformers/all-mpnet-base-v1', - 10], - - ["https://www.medicalnewstoday.com/articles/omicron-what-do-we-know-about-the-stealth-variant", - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb', - 'sentence-transformers/all-mpnet-base-v1', - 15] - ], - article= "The work is based on a part of the paper provided here." - "\t It uses the TextRank algorithm with SBERT to first find the top ranked sentences and then extracts the keyphrases" - "\t from those sentences using scispaCy and SBERT." - "\t The list of SBERT models provided can be found in SBERT Pre-trained models hub." - "\t The default model names are provided which can be changed from the list of models available. " - "\t The value of output keyphrases can be changed. 
The default value is 10, minimum is 5 and a maximum value of 30.") - -igen.launch(share=False) -#### \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/GimpPaletteFile.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/GimpPaletteFile.py deleted file mode 100644 index d388928945a0f6711de2b1c8d1ed50ce192a8219..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/GimpPaletteFile.py +++ /dev/null @@ -1,56 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read GIMP palette files -# -# History: -# 1997-08-23 fl Created -# 2004-09-07 fl Support GIMP 2.0 palette files. -# -# Copyright (c) Secret Labs AB 1997-2004. All rights reserved. -# Copyright (c) Fredrik Lundh 1997-2004. -# -# See the README file for information on usage and redistribution. -# - -import re - -from ._binary import o8 - - -class GimpPaletteFile: - """File handler for GIMP's palette format.""" - - rawmode = "RGB" - - def __init__(self, fp): - self.palette = [o8(i) * 3 for i in range(256)] - - if fp.readline()[:12] != b"GIMP Palette": - msg = "not a GIMP palette file" - raise SyntaxError(msg) - - for i in range(256): - s = fp.readline() - if not s: - break - - # skip fields and comment lines - if re.match(rb"\w+:|#", s): - continue - if len(s) > 100: - msg = "bad palette file" - raise SyntaxError(msg) - - v = tuple(map(int, s.split()[:3])) - if len(v) != 3: - msg = "bad palette entry" - raise ValueError(msg) - - self.palette[i] = o8(v[0]) + o8(v[1]) + o8(v[2]) - - self.palette = b"".join(self.palette) - - def getpalette(self): - return self.palette, self.rawmode diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/quartzPen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/quartzPen.py deleted file mode 100644 index 6e1228d6f2b8bbc78cf52864ccaf3b249a654749..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/quartzPen.py +++ /dev/null @@ -1,44 +0,0 @@ -from fontTools.pens.basePen import BasePen - -from Quartz.CoreGraphics import CGPathCreateMutable, CGPathMoveToPoint -from Quartz.CoreGraphics import CGPathAddLineToPoint, CGPathAddCurveToPoint -from Quartz.CoreGraphics import CGPathAddQuadCurveToPoint, CGPathCloseSubpath - - -__all__ = ["QuartzPen"] - - -class QuartzPen(BasePen): - - """A pen that creates a CGPath - - Parameters - - path: an optional CGPath to add to - - xform: an optional CGAffineTransform to apply to the path - """ - - def __init__(self, glyphSet, path=None, xform=None): - BasePen.__init__(self, glyphSet) - if path is None: - path = CGPathCreateMutable() - self.path = path - self.xform = xform - - def _moveTo(self, pt): - x, y = pt - CGPathMoveToPoint(self.path, self.xform, x, y) - - def _lineTo(self, pt): - x, y = pt - CGPathAddLineToPoint(self.path, self.xform, x, y) - - def _curveToOne(self, p1, p2, p3): - (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3 - CGPathAddCurveToPoint(self.path, self.xform, x1, y1, x2, y2, x3, y3) - - def _qCurveToOne(self, p1, p2): - (x1, y1), (x2, y2) = p1, p2 - CGPathAddQuadCurveToPoint(self.path, self.xform, x1, y1, x2, y2) - - def _closePath(self): - CGPathCloseSubpath(self.path) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/roundingPen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/roundingPen.py deleted file mode 
100644 index 2a7c476c36f4d244d62c92b745dc462d977ba394..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/roundingPen.py +++ /dev/null @@ -1,112 +0,0 @@ -from fontTools.misc.roundTools import otRound -from fontTools.misc.transform import Transform -from fontTools.pens.filterPen import FilterPen, FilterPointPen - - -__all__ = ["RoundingPen", "RoundingPointPen"] - - -class RoundingPen(FilterPen): - """ - Filter pen that rounds point coordinates and component XY offsets to integer. - - >>> from fontTools.pens.recordingPen import RecordingPen - >>> recpen = RecordingPen() - >>> roundpen = RoundingPen(recpen) - >>> roundpen.moveTo((0.4, 0.6)) - >>> roundpen.lineTo((1.6, 2.5)) - >>> roundpen.qCurveTo((2.4, 4.6), (3.3, 5.7), (4.9, 6.1)) - >>> roundpen.curveTo((6.4, 8.6), (7.3, 9.7), (8.9, 10.1)) - >>> roundpen.addComponent("a", (1.5, 0, 0, 1.5, 10.5, -10.5)) - >>> recpen.value == [ - ... ('moveTo', ((0, 1),)), - ... ('lineTo', ((2, 3),)), - ... ('qCurveTo', ((2, 5), (3, 6), (5, 6))), - ... ('curveTo', ((6, 9), (7, 10), (9, 10))), - ... ('addComponent', ('a', (1.5, 0, 0, 1.5, 11, -10))), - ... ] - True - """ - - def __init__(self, outPen, roundFunc=otRound): - super().__init__(outPen) - self.roundFunc = roundFunc - - def moveTo(self, pt): - self._outPen.moveTo((self.roundFunc(pt[0]), self.roundFunc(pt[1]))) - - def lineTo(self, pt): - self._outPen.lineTo((self.roundFunc(pt[0]), self.roundFunc(pt[1]))) - - def curveTo(self, *points): - self._outPen.curveTo( - *((self.roundFunc(x), self.roundFunc(y)) for x, y in points) - ) - - def qCurveTo(self, *points): - self._outPen.qCurveTo( - *((self.roundFunc(x), self.roundFunc(y)) for x, y in points) - ) - - def addComponent(self, glyphName, transformation): - self._outPen.addComponent( - glyphName, - Transform( - *transformation[:4], - self.roundFunc(transformation[4]), - self.roundFunc(transformation[5]), - ), - ) - - -class RoundingPointPen(FilterPointPen): - """ - Filter point pen that rounds point coordinates and component XY offsets to integer. - - >>> from fontTools.pens.recordingPen import RecordingPointPen - >>> recpen = RecordingPointPen() - >>> roundpen = RoundingPointPen(recpen) - >>> roundpen.beginPath() - >>> roundpen.addPoint((0.4, 0.6), 'line') - >>> roundpen.addPoint((1.6, 2.5), 'line') - >>> roundpen.addPoint((2.4, 4.6)) - >>> roundpen.addPoint((3.3, 5.7)) - >>> roundpen.addPoint((4.9, 6.1), 'qcurve') - >>> roundpen.endPath() - >>> roundpen.addComponent("a", (1.5, 0, 0, 1.5, 10.5, -10.5)) - >>> recpen.value == [ - ... ('beginPath', (), {}), - ... ('addPoint', ((0, 1), 'line', False, None), {}), - ... ('addPoint', ((2, 3), 'line', False, None), {}), - ... ('addPoint', ((2, 5), None, False, None), {}), - ... ('addPoint', ((3, 6), None, False, None), {}), - ... ('addPoint', ((5, 6), 'qcurve', False, None), {}), - ... ('endPath', (), {}), - ... ('addComponent', ('a', (1.5, 0, 0, 1.5, 11, -10)), {}), - ... 
] - True - """ - - def __init__(self, outPen, roundFunc=otRound): - super().__init__(outPen) - self.roundFunc = roundFunc - - def addPoint(self, pt, segmentType=None, smooth=False, name=None, **kwargs): - self._outPen.addPoint( - (self.roundFunc(pt[0]), self.roundFunc(pt[1])), - segmentType=segmentType, - smooth=smooth, - name=name, - **kwargs, - ) - - def addComponent(self, baseGlyphName, transformation, **kwargs): - self._outPen.addComponent( - baseGlyphName, - Transform( - *transformation[:4], - self.roundFunc(transformation[4]), - self.roundFunc(transformation[5]), - ), - **kwargs, - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/tests/test_connection.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/tests/test_connection.py deleted file mode 100644 index 73a27b98bebd949cb3b99e19a3a8a484455b58d7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/tests/test_connection.py +++ /dev/null @@ -1,1122 +0,0 @@ -from typing import Any, cast, Dict, List, Optional, Tuple, Type - -import pytest - -from .._connection import _body_framing, _keep_alive, Connection, NEED_DATA, PAUSED -from .._events import ( - ConnectionClosed, - Data, - EndOfMessage, - Event, - InformationalResponse, - Request, - Response, -) -from .._state import ( - CLIENT, - CLOSED, - DONE, - ERROR, - IDLE, - MIGHT_SWITCH_PROTOCOL, - MUST_CLOSE, - SEND_BODY, - SEND_RESPONSE, - SERVER, - SWITCHED_PROTOCOL, -) -from .._util import LocalProtocolError, RemoteProtocolError, Sentinel -from .helpers import ConnectionPair, get_all_events, receive_and_get - - -def test__keep_alive() -> None: - assert _keep_alive( - Request(method="GET", target="/", headers=[("Host", "Example.com")]) - ) - assert not _keep_alive( - Request( - method="GET", - target="/", - headers=[("Host", "Example.com"), ("Connection", "close")], - ) - ) - assert not _keep_alive( - Request( - method="GET", - target="/", - headers=[("Host", "Example.com"), ("Connection", "a, b, cLOse, foo")], - ) - ) - assert not _keep_alive( - Request(method="GET", target="/", headers=[], http_version="1.0") # type: ignore[arg-type] - ) - - assert _keep_alive(Response(status_code=200, headers=[])) # type: ignore[arg-type] - assert not _keep_alive(Response(status_code=200, headers=[("Connection", "close")])) - assert not _keep_alive( - Response(status_code=200, headers=[("Connection", "a, b, cLOse, foo")]) - ) - assert not _keep_alive(Response(status_code=200, headers=[], http_version="1.0")) # type: ignore[arg-type] - - -def test__body_framing() -> None: - def headers(cl: Optional[int], te: bool) -> List[Tuple[str, str]]: - headers = [] - if cl is not None: - headers.append(("Content-Length", str(cl))) - if te: - headers.append(("Transfer-Encoding", "chunked")) - return headers - - def resp( - status_code: int = 200, cl: Optional[int] = None, te: bool = False - ) -> Response: - return Response(status_code=status_code, headers=headers(cl, te)) - - def req(cl: Optional[int] = None, te: bool = False) -> Request: - h = headers(cl, te) - h += [("Host", "example.com")] - return Request(method="GET", target="/", headers=h) - - # Special cases where the headers are ignored: - for kwargs in [{}, {"cl": 100}, {"te": True}, {"cl": 100, "te": True}]: - kwargs = cast(Dict[str, Any], kwargs) - for meth, r in [ - (b"HEAD", resp(**kwargs)), - (b"GET", resp(status_code=204, **kwargs)), - (b"GET", resp(status_code=304, **kwargs)), - ]: - assert _body_framing(meth, r) == ("content-length", 
(0,)) - - # Transfer-encoding - for kwargs in [{"te": True}, {"cl": 100, "te": True}]: - kwargs = cast(Dict[str, Any], kwargs) - for meth, r in [(None, req(**kwargs)), (b"GET", resp(**kwargs))]: # type: ignore - assert _body_framing(meth, r) == ("chunked", ()) - - # Content-Length - for meth, r in [(None, req(cl=100)), (b"GET", resp(cl=100))]: # type: ignore - assert _body_framing(meth, r) == ("content-length", (100,)) - - # No headers - assert _body_framing(None, req()) == ("content-length", (0,)) # type: ignore - assert _body_framing(b"GET", resp()) == ("http/1.0", ()) - - -def test_Connection_basics_and_content_length() -> None: - with pytest.raises(ValueError): - Connection("CLIENT") # type: ignore - - p = ConnectionPair() - assert p.conn[CLIENT].our_role is CLIENT - assert p.conn[CLIENT].their_role is SERVER - assert p.conn[SERVER].our_role is SERVER - assert p.conn[SERVER].their_role is CLIENT - - data = p.send( - CLIENT, - Request( - method="GET", - target="/", - headers=[("Host", "example.com"), ("Content-Length", "10")], - ), - ) - assert data == ( - b"GET / HTTP/1.1\r\n" b"Host: example.com\r\n" b"Content-Length: 10\r\n\r\n" - ) - - for conn in p.conns: - assert conn.states == {CLIENT: SEND_BODY, SERVER: SEND_RESPONSE} - assert p.conn[CLIENT].our_state is SEND_BODY - assert p.conn[CLIENT].their_state is SEND_RESPONSE - assert p.conn[SERVER].our_state is SEND_RESPONSE - assert p.conn[SERVER].their_state is SEND_BODY - - assert p.conn[CLIENT].their_http_version is None - assert p.conn[SERVER].their_http_version == b"1.1" - - data = p.send(SERVER, InformationalResponse(status_code=100, headers=[])) # type: ignore[arg-type] - assert data == b"HTTP/1.1 100 \r\n\r\n" - - data = p.send(SERVER, Response(status_code=200, headers=[("Content-Length", "11")])) - assert data == b"HTTP/1.1 200 \r\nContent-Length: 11\r\n\r\n" - - for conn in p.conns: - assert conn.states == {CLIENT: SEND_BODY, SERVER: SEND_BODY} - - assert p.conn[CLIENT].their_http_version == b"1.1" - assert p.conn[SERVER].their_http_version == b"1.1" - - data = p.send(CLIENT, Data(data=b"12345")) - assert data == b"12345" - data = p.send( - CLIENT, Data(data=b"67890"), expect=[Data(data=b"67890"), EndOfMessage()] - ) - assert data == b"67890" - data = p.send(CLIENT, EndOfMessage(), expect=[]) - assert data == b"" - - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: SEND_BODY} - - data = p.send(SERVER, Data(data=b"1234567890")) - assert data == b"1234567890" - data = p.send(SERVER, Data(data=b"1"), expect=[Data(data=b"1"), EndOfMessage()]) - assert data == b"1" - data = p.send(SERVER, EndOfMessage(), expect=[]) - assert data == b"" - - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: DONE} - - -def test_chunked() -> None: - p = ConnectionPair() - - p.send( - CLIENT, - Request( - method="GET", - target="/", - headers=[("Host", "example.com"), ("Transfer-Encoding", "chunked")], - ), - ) - data = p.send(CLIENT, Data(data=b"1234567890", chunk_start=True, chunk_end=True)) - assert data == b"a\r\n1234567890\r\n" - data = p.send(CLIENT, Data(data=b"abcde", chunk_start=True, chunk_end=True)) - assert data == b"5\r\nabcde\r\n" - data = p.send(CLIENT, Data(data=b""), expect=[]) - assert data == b"" - data = p.send(CLIENT, EndOfMessage(headers=[("hello", "there")])) - assert data == b"0\r\nhello: there\r\n\r\n" - - p.send( - SERVER, Response(status_code=200, headers=[("Transfer-Encoding", "chunked")]) - ) - p.send(SERVER, Data(data=b"54321", chunk_start=True, chunk_end=True)) - p.send(SERVER, 
Data(data=b"12345", chunk_start=True, chunk_end=True)) - p.send(SERVER, EndOfMessage()) - - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: DONE} - - -def test_chunk_boundaries() -> None: - conn = Connection(our_role=SERVER) - - request = ( - b"POST / HTTP/1.1\r\n" - b"Host: example.com\r\n" - b"Transfer-Encoding: chunked\r\n" - b"\r\n" - ) - conn.receive_data(request) - assert conn.next_event() == Request( - method="POST", - target="/", - headers=[("Host", "example.com"), ("Transfer-Encoding", "chunked")], - ) - assert conn.next_event() is NEED_DATA - - conn.receive_data(b"5\r\nhello\r\n") - assert conn.next_event() == Data(data=b"hello", chunk_start=True, chunk_end=True) - - conn.receive_data(b"5\r\nhel") - assert conn.next_event() == Data(data=b"hel", chunk_start=True, chunk_end=False) - - conn.receive_data(b"l") - assert conn.next_event() == Data(data=b"l", chunk_start=False, chunk_end=False) - - conn.receive_data(b"o\r\n") - assert conn.next_event() == Data(data=b"o", chunk_start=False, chunk_end=True) - - conn.receive_data(b"5\r\nhello") - assert conn.next_event() == Data(data=b"hello", chunk_start=True, chunk_end=True) - - conn.receive_data(b"\r\n") - assert conn.next_event() == NEED_DATA - - conn.receive_data(b"0\r\n\r\n") - assert conn.next_event() == EndOfMessage() - - -def test_client_talking_to_http10_server() -> None: - c = Connection(CLIENT) - c.send(Request(method="GET", target="/", headers=[("Host", "example.com")])) - c.send(EndOfMessage()) - assert c.our_state is DONE - # No content-length, so Http10 framing for body - assert receive_and_get(c, b"HTTP/1.0 200 OK\r\n\r\n") == [ - Response(status_code=200, headers=[], http_version="1.0", reason=b"OK") # type: ignore[arg-type] - ] - assert c.our_state is MUST_CLOSE - assert receive_and_get(c, b"12345") == [Data(data=b"12345")] - assert receive_and_get(c, b"67890") == [Data(data=b"67890")] - assert receive_and_get(c, b"") == [EndOfMessage(), ConnectionClosed()] - assert c.their_state is CLOSED - - -def test_server_talking_to_http10_client() -> None: - c = Connection(SERVER) - # No content-length, so no body - # NB: no host header - assert receive_and_get(c, b"GET / HTTP/1.0\r\n\r\n") == [ - Request(method="GET", target="/", headers=[], http_version="1.0"), # type: ignore[arg-type] - EndOfMessage(), - ] - assert c.their_state is MUST_CLOSE - - # We automatically Connection: close back at them - assert ( - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - == b"HTTP/1.1 200 \r\nConnection: close\r\n\r\n" - ) - - assert c.send(Data(data=b"12345")) == b"12345" - assert c.send(EndOfMessage()) == b"" - assert c.our_state is MUST_CLOSE - - # Check that it works if they do send Content-Length - c = Connection(SERVER) - # NB: no host header - assert receive_and_get(c, b"POST / HTTP/1.0\r\nContent-Length: 10\r\n\r\n1") == [ - Request( - method="POST", - target="/", - headers=[("Content-Length", "10")], - http_version="1.0", - ), - Data(data=b"1"), - ] - assert receive_and_get(c, b"234567890") == [Data(data=b"234567890"), EndOfMessage()] - assert c.their_state is MUST_CLOSE - assert receive_and_get(c, b"") == [ConnectionClosed()] - - -def test_automatic_transfer_encoding_in_response() -> None: - # Check that in responses, the user can specify either Transfer-Encoding: - # chunked or no framing at all, and in both cases we automatically select - # the right option depending on whether the peer speaks HTTP/1.0 or - # HTTP/1.1 - for user_headers in [ - [("Transfer-Encoding", "chunked")], - [], - # 
In fact, this even works if Content-Length is set, - # because if both are set then Transfer-Encoding wins - [("Transfer-Encoding", "chunked"), ("Content-Length", "100")], - ]: - user_headers = cast(List[Tuple[str, str]], user_headers) - p = ConnectionPair() - p.send( - CLIENT, - [ - Request(method="GET", target="/", headers=[("Host", "example.com")]), - EndOfMessage(), - ], - ) - # When speaking to HTTP/1.1 client, all of the above cases get - # normalized to Transfer-Encoding: chunked - p.send( - SERVER, - Response(status_code=200, headers=user_headers), - expect=Response( - status_code=200, headers=[("Transfer-Encoding", "chunked")] - ), - ) - - # When speaking to HTTP/1.0 client, all of the above cases get - # normalized to no-framing-headers - c = Connection(SERVER) - receive_and_get(c, b"GET / HTTP/1.0\r\n\r\n") - assert ( - c.send(Response(status_code=200, headers=user_headers)) - == b"HTTP/1.1 200 \r\nConnection: close\r\n\r\n" - ) - assert c.send(Data(data=b"12345")) == b"12345" - - -def test_automagic_connection_close_handling() -> None: - p = ConnectionPair() - # If the user explicitly sets Connection: close, then we notice and - # respect it - p.send( - CLIENT, - [ - Request( - method="GET", - target="/", - headers=[("Host", "example.com"), ("Connection", "close")], - ), - EndOfMessage(), - ], - ) - for conn in p.conns: - assert conn.states[CLIENT] is MUST_CLOSE - # And if the client sets it, the server automatically echoes it back - p.send( - SERVER, - # no header here... - [Response(status_code=204, headers=[]), EndOfMessage()], # type: ignore[arg-type] - # ...but oh look, it arrived anyway - expect=[ - Response(status_code=204, headers=[("connection", "close")]), - EndOfMessage(), - ], - ) - for conn in p.conns: - assert conn.states == {CLIENT: MUST_CLOSE, SERVER: MUST_CLOSE} - - -def test_100_continue() -> None: - def setup() -> ConnectionPair: - p = ConnectionPair() - p.send( - CLIENT, - Request( - method="GET", - target="/", - headers=[ - ("Host", "example.com"), - ("Content-Length", "100"), - ("Expect", "100-continue"), - ], - ), - ) - for conn in p.conns: - assert conn.client_is_waiting_for_100_continue - assert not p.conn[CLIENT].they_are_waiting_for_100_continue - assert p.conn[SERVER].they_are_waiting_for_100_continue - return p - - # Disabled by 100 Continue - p = setup() - p.send(SERVER, InformationalResponse(status_code=100, headers=[])) # type: ignore[arg-type] - for conn in p.conns: - assert not conn.client_is_waiting_for_100_continue - assert not conn.they_are_waiting_for_100_continue - - # Disabled by a real response - p = setup() - p.send( - SERVER, Response(status_code=200, headers=[("Transfer-Encoding", "chunked")]) - ) - for conn in p.conns: - assert not conn.client_is_waiting_for_100_continue - assert not conn.they_are_waiting_for_100_continue - - # Disabled by the client going ahead and sending stuff anyway - p = setup() - p.send(CLIENT, Data(data=b"12345")) - for conn in p.conns: - assert not conn.client_is_waiting_for_100_continue - assert not conn.they_are_waiting_for_100_continue - - -def test_max_incomplete_event_size_countermeasure() -> None: - # Infinitely long headers are definitely not okay - c = Connection(SERVER) - c.receive_data(b"GET / HTTP/1.0\r\nEndless: ") - assert c.next_event() is NEED_DATA - with pytest.raises(RemoteProtocolError): - while True: - c.receive_data(b"a" * 1024) - c.next_event() - - # Checking that the same header is accepted / rejected depending on the - # max_incomplete_event_size setting: - c = Connection(SERVER, 
max_incomplete_event_size=5000) - c.receive_data(b"GET / HTTP/1.0\r\nBig: ") - c.receive_data(b"a" * 4000) - c.receive_data(b"\r\n\r\n") - assert get_all_events(c) == [ - Request( - method="GET", target="/", http_version="1.0", headers=[("big", "a" * 4000)] - ), - EndOfMessage(), - ] - - c = Connection(SERVER, max_incomplete_event_size=4000) - c.receive_data(b"GET / HTTP/1.0\r\nBig: ") - c.receive_data(b"a" * 4000) - with pytest.raises(RemoteProtocolError): - c.next_event() - - # Temporarily exceeding the size limit is fine, as long as its done with - # complete events: - c = Connection(SERVER, max_incomplete_event_size=5000) - c.receive_data(b"GET / HTTP/1.0\r\nContent-Length: 10000") - c.receive_data(b"\r\n\r\n" + b"a" * 10000) - assert get_all_events(c) == [ - Request( - method="GET", - target="/", - http_version="1.0", - headers=[("Content-Length", "10000")], - ), - Data(data=b"a" * 10000), - EndOfMessage(), - ] - - c = Connection(SERVER, max_incomplete_event_size=100) - # Two pipelined requests to create a way-too-big receive buffer... but - # it's fine because we're not checking - c.receive_data( - b"GET /1 HTTP/1.1\r\nHost: a\r\n\r\n" - b"GET /2 HTTP/1.1\r\nHost: b\r\n\r\n" + b"X" * 1000 - ) - assert get_all_events(c) == [ - Request(method="GET", target="/1", headers=[("host", "a")]), - EndOfMessage(), - ] - # Even more data comes in, still no problem - c.receive_data(b"X" * 1000) - # We can respond and reuse to get the second pipelined request - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - c.start_next_cycle() - assert get_all_events(c) == [ - Request(method="GET", target="/2", headers=[("host", "b")]), - EndOfMessage(), - ] - # But once we unpause and try to read the next message, and find that it's - # incomplete and the buffer is *still* way too large, then *that's* a - # problem: - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - c.start_next_cycle() - with pytest.raises(RemoteProtocolError): - c.next_event() - - -def test_reuse_simple() -> None: - p = ConnectionPair() - p.send( - CLIENT, - [Request(method="GET", target="/", headers=[("Host", "a")]), EndOfMessage()], - ) - p.send( - SERVER, - [ - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - EndOfMessage(), - ], - ) - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: DONE} - conn.start_next_cycle() - - p.send( - CLIENT, - [ - Request(method="DELETE", target="/foo", headers=[("Host", "a")]), - EndOfMessage(), - ], - ) - p.send( - SERVER, - [ - Response(status_code=404, headers=[(b"transfer-encoding", b"chunked")]), - EndOfMessage(), - ], - ) - - -def test_pipelining() -> None: - # Client doesn't support pipelining, so we have to do this by hand - c = Connection(SERVER) - assert c.next_event() is NEED_DATA - # 3 requests all bunched up - c.receive_data( - b"GET /1 HTTP/1.1\r\nHost: a.com\r\nContent-Length: 5\r\n\r\n" - b"12345" - b"GET /2 HTTP/1.1\r\nHost: a.com\r\nContent-Length: 5\r\n\r\n" - b"67890" - b"GET /3 HTTP/1.1\r\nHost: a.com\r\n\r\n" - ) - assert get_all_events(c) == [ - Request( - method="GET", - target="/1", - headers=[("Host", "a.com"), ("Content-Length", "5")], - ), - Data(data=b"12345"), - EndOfMessage(), - ] - assert c.their_state is DONE - assert c.our_state is SEND_RESPONSE - - assert c.next_event() is PAUSED - - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - assert c.their_state is DONE - assert c.our_state is 
DONE - - c.start_next_cycle() - - assert get_all_events(c) == [ - Request( - method="GET", - target="/2", - headers=[("Host", "a.com"), ("Content-Length", "5")], - ), - Data(data=b"67890"), - EndOfMessage(), - ] - assert c.next_event() is PAUSED - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - c.start_next_cycle() - - assert get_all_events(c) == [ - Request(method="GET", target="/3", headers=[("Host", "a.com")]), - EndOfMessage(), - ] - # Doesn't pause this time, no trailing data - assert c.next_event() is NEED_DATA - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - - # Arrival of more data triggers pause - assert c.next_event() is NEED_DATA - c.receive_data(b"SADF") - assert c.next_event() is PAUSED - assert c.trailing_data == (b"SADF", False) - # If EOF arrives while paused, we don't see that either: - c.receive_data(b"") - assert c.trailing_data == (b"SADF", True) - assert c.next_event() is PAUSED - c.receive_data(b"") - assert c.next_event() is PAUSED - # Can't call receive_data with non-empty buf after closing it - with pytest.raises(RuntimeError): - c.receive_data(b"FDSA") - - -def test_protocol_switch() -> None: - for (req, deny, accept) in [ - ( - Request( - method="CONNECT", - target="example.com:443", - headers=[("Host", "foo"), ("Content-Length", "1")], - ), - Response(status_code=404, headers=[(b"transfer-encoding", b"chunked")]), - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - ), - ( - Request( - method="GET", - target="/", - headers=[("Host", "foo"), ("Content-Length", "1"), ("Upgrade", "a, b")], - ), - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - InformationalResponse(status_code=101, headers=[("Upgrade", "a")]), - ), - ( - Request( - method="CONNECT", - target="example.com:443", - headers=[("Host", "foo"), ("Content-Length", "1"), ("Upgrade", "a, b")], - ), - Response(status_code=404, headers=[(b"transfer-encoding", b"chunked")]), - # Accept CONNECT, not upgrade - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - ), - ( - Request( - method="CONNECT", - target="example.com:443", - headers=[("Host", "foo"), ("Content-Length", "1"), ("Upgrade", "a, b")], - ), - Response(status_code=404, headers=[(b"transfer-encoding", b"chunked")]), - # Accept Upgrade, not CONNECT - InformationalResponse(status_code=101, headers=[("Upgrade", "b")]), - ), - ]: - - def setup() -> ConnectionPair: - p = ConnectionPair() - p.send(CLIENT, req) - # No switch-related state change stuff yet; the client has to - # finish the request before that kicks in - for conn in p.conns: - assert conn.states[CLIENT] is SEND_BODY - p.send(CLIENT, [Data(data=b"1"), EndOfMessage()]) - for conn in p.conns: - assert conn.states[CLIENT] is MIGHT_SWITCH_PROTOCOL - assert p.conn[SERVER].next_event() is PAUSED - return p - - # Test deny case - p = setup() - p.send(SERVER, deny) - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: SEND_BODY} - p.send(SERVER, EndOfMessage()) - # Check that re-use is still allowed after a denial - for conn in p.conns: - conn.start_next_cycle() - - # Test accept case - p = setup() - p.send(SERVER, accept) - for conn in p.conns: - assert conn.states == {CLIENT: SWITCHED_PROTOCOL, SERVER: SWITCHED_PROTOCOL} - conn.receive_data(b"123") - assert conn.next_event() is PAUSED - conn.receive_data(b"456") - assert conn.next_event() is PAUSED - assert conn.trailing_data == (b"123456", False) - - # Pausing 
in might-switch, then recovery - # (weird artificial case where the trailing data actually is valid - # HTTP for some reason, because this makes it easier to test the state - # logic) - p = setup() - sc = p.conn[SERVER] - sc.receive_data(b"GET / HTTP/1.0\r\n\r\n") - assert sc.next_event() is PAUSED - assert sc.trailing_data == (b"GET / HTTP/1.0\r\n\r\n", False) - sc.send(deny) - assert sc.next_event() is PAUSED - sc.send(EndOfMessage()) - sc.start_next_cycle() - assert get_all_events(sc) == [ - Request(method="GET", target="/", headers=[], http_version="1.0"), # type: ignore[arg-type] - EndOfMessage(), - ] - - # When we're DONE, have no trailing data, and the connection gets - # closed, we report ConnectionClosed(). When we're in might-switch or - # switched, we don't. - p = setup() - sc = p.conn[SERVER] - sc.receive_data(b"") - assert sc.next_event() is PAUSED - assert sc.trailing_data == (b"", True) - p.send(SERVER, accept) - assert sc.next_event() is PAUSED - - p = setup() - sc = p.conn[SERVER] - sc.receive_data(b"") - assert sc.next_event() is PAUSED - sc.send(deny) - assert sc.next_event() == ConnectionClosed() - - # You can't send after switching protocols, or while waiting for a - # protocol switch - p = setup() - with pytest.raises(LocalProtocolError): - p.conn[CLIENT].send( - Request(method="GET", target="/", headers=[("Host", "a")]) - ) - p = setup() - p.send(SERVER, accept) - with pytest.raises(LocalProtocolError): - p.conn[SERVER].send(Data(data=b"123")) - - -def test_close_simple() -> None: - # Just immediately closing a new connection without anything having - # happened yet. - for (who_shot_first, who_shot_second) in [(CLIENT, SERVER), (SERVER, CLIENT)]: - - def setup() -> ConnectionPair: - p = ConnectionPair() - p.send(who_shot_first, ConnectionClosed()) - for conn in p.conns: - assert conn.states == { - who_shot_first: CLOSED, - who_shot_second: MUST_CLOSE, - } - return p - - # You can keep putting b"" into a closed connection, and you keep - # getting ConnectionClosed() out: - p = setup() - assert p.conn[who_shot_second].next_event() == ConnectionClosed() - assert p.conn[who_shot_second].next_event() == ConnectionClosed() - p.conn[who_shot_second].receive_data(b"") - assert p.conn[who_shot_second].next_event() == ConnectionClosed() - # Second party can close... 
- p = setup() - p.send(who_shot_second, ConnectionClosed()) - for conn in p.conns: - assert conn.our_state is CLOSED - assert conn.their_state is CLOSED - # But trying to receive new data on a closed connection is a - # RuntimeError (not ProtocolError, because the problem here isn't - # violation of HTTP, it's violation of physics) - p = setup() - with pytest.raises(RuntimeError): - p.conn[who_shot_second].receive_data(b"123") - # And receiving new data on a MUST_CLOSE connection is a ProtocolError - p = setup() - p.conn[who_shot_first].receive_data(b"GET") - with pytest.raises(RemoteProtocolError): - p.conn[who_shot_first].next_event() - - -def test_close_different_states() -> None: - req = [ - Request(method="GET", target="/foo", headers=[("Host", "a")]), - EndOfMessage(), - ] - resp = [ - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - EndOfMessage(), - ] - - # Client before request - p = ConnectionPair() - p.send(CLIENT, ConnectionClosed()) - for conn in p.conns: - assert conn.states == {CLIENT: CLOSED, SERVER: MUST_CLOSE} - - # Client after request - p = ConnectionPair() - p.send(CLIENT, req) - p.send(CLIENT, ConnectionClosed()) - for conn in p.conns: - assert conn.states == {CLIENT: CLOSED, SERVER: SEND_RESPONSE} - - # Server after request -> not allowed - p = ConnectionPair() - p.send(CLIENT, req) - with pytest.raises(LocalProtocolError): - p.conn[SERVER].send(ConnectionClosed()) - p.conn[CLIENT].receive_data(b"") - with pytest.raises(RemoteProtocolError): - p.conn[CLIENT].next_event() - - # Server after response - p = ConnectionPair() - p.send(CLIENT, req) - p.send(SERVER, resp) - p.send(SERVER, ConnectionClosed()) - for conn in p.conns: - assert conn.states == {CLIENT: MUST_CLOSE, SERVER: CLOSED} - - # Both after closing (ConnectionClosed() is idempotent) - p = ConnectionPair() - p.send(CLIENT, req) - p.send(SERVER, resp) - p.send(CLIENT, ConnectionClosed()) - p.send(SERVER, ConnectionClosed()) - p.send(CLIENT, ConnectionClosed()) - p.send(SERVER, ConnectionClosed()) - - # In the middle of sending -> not allowed - p = ConnectionPair() - p.send( - CLIENT, - Request( - method="GET", target="/", headers=[("Host", "a"), ("Content-Length", "10")] - ), - ) - with pytest.raises(LocalProtocolError): - p.conn[CLIENT].send(ConnectionClosed()) - p.conn[SERVER].receive_data(b"") - with pytest.raises(RemoteProtocolError): - p.conn[SERVER].next_event() - - -# Receive several requests and then client shuts down their side of the -# connection; we can respond to each -def test_pipelined_close() -> None: - c = Connection(SERVER) - # 2 requests then a close - c.receive_data( - b"GET /1 HTTP/1.1\r\nHost: a.com\r\nContent-Length: 5\r\n\r\n" - b"12345" - b"GET /2 HTTP/1.1\r\nHost: a.com\r\nContent-Length: 5\r\n\r\n" - b"67890" - ) - c.receive_data(b"") - assert get_all_events(c) == [ - Request( - method="GET", - target="/1", - headers=[("host", "a.com"), ("content-length", "5")], - ), - Data(data=b"12345"), - EndOfMessage(), - ] - assert c.states[CLIENT] is DONE - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - assert c.states[SERVER] is DONE - c.start_next_cycle() - assert get_all_events(c) == [ - Request( - method="GET", - target="/2", - headers=[("host", "a.com"), ("content-length", "5")], - ), - Data(data=b"67890"), - EndOfMessage(), - ConnectionClosed(), - ] - assert c.states == {CLIENT: CLOSED, SERVER: SEND_RESPONSE} - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - 
assert c.states == {CLIENT: CLOSED, SERVER: MUST_CLOSE} - c.send(ConnectionClosed()) - assert c.states == {CLIENT: CLOSED, SERVER: CLOSED} - - -def test_sendfile() -> None: - class SendfilePlaceholder: - def __len__(self) -> int: - return 10 - - placeholder = SendfilePlaceholder() - - def setup( - header: Tuple[str, str], http_version: str - ) -> Tuple[Connection, Optional[List[bytes]]]: - c = Connection(SERVER) - receive_and_get( - c, "GET / HTTP/{}\r\nHost: a\r\n\r\n".format(http_version).encode("ascii") - ) - headers = [] - if header: - headers.append(header) - c.send(Response(status_code=200, headers=headers)) - return c, c.send_with_data_passthrough(Data(data=placeholder)) # type: ignore - - c, data = setup(("Content-Length", "10"), "1.1") - assert data == [placeholder] # type: ignore - # Raises an error if the connection object doesn't think we've sent - # exactly 10 bytes - c.send(EndOfMessage()) - - _, data = setup(("Transfer-Encoding", "chunked"), "1.1") - assert placeholder in data # type: ignore - data[data.index(placeholder)] = b"x" * 10 # type: ignore - assert b"".join(data) == b"a\r\nxxxxxxxxxx\r\n" # type: ignore - - c, data = setup(None, "1.0") # type: ignore - assert data == [placeholder] # type: ignore - assert c.our_state is SEND_BODY - - -def test_errors() -> None: - # After a receive error, you can't receive - for role in [CLIENT, SERVER]: - c = Connection(our_role=role) - c.receive_data(b"gibberish\r\n\r\n") - with pytest.raises(RemoteProtocolError): - c.next_event() - # Now any attempt to receive continues to raise - assert c.their_state is ERROR - assert c.our_state is not ERROR - print(c._cstate.states) - with pytest.raises(RemoteProtocolError): - c.next_event() - # But we can still yell at the client for sending us gibberish - if role is SERVER: - assert ( - c.send(Response(status_code=400, headers=[])) # type: ignore[arg-type] - == b"HTTP/1.1 400 \r\nConnection: close\r\n\r\n" - ) - - # After an error sending, you can no longer send - # (This is especially important for things like content-length errors, - # where there's complex internal state being modified) - def conn(role: Type[Sentinel]) -> Connection: - c = Connection(our_role=role) - if role is SERVER: - # Put it into the state where it *could* send a response... 
- receive_and_get(c, b"GET / HTTP/1.0\r\n\r\n") - assert c.our_state is SEND_RESPONSE - return c - - for role in [CLIENT, SERVER]: - if role is CLIENT: - # This HTTP/1.0 request won't be detected as bad until after we go - # through the state machine and hit the writing code - good = Request(method="GET", target="/", headers=[("Host", "example.com")]) - bad = Request( - method="GET", - target="/", - headers=[("Host", "example.com")], - http_version="1.0", - ) - elif role is SERVER: - good = Response(status_code=200, headers=[]) # type: ignore[arg-type,assignment] - bad = Response(status_code=200, headers=[], http_version="1.0") # type: ignore[arg-type,assignment] - # Make sure 'good' actually is good - c = conn(role) - c.send(good) - assert c.our_state is not ERROR - # Do that again, but this time sending 'bad' first - c = conn(role) - with pytest.raises(LocalProtocolError): - c.send(bad) - assert c.our_state is ERROR - assert c.their_state is not ERROR - # Now 'good' is not so good - with pytest.raises(LocalProtocolError): - c.send(good) - - # And check send_failed() too - c = conn(role) - c.send_failed() - assert c.our_state is ERROR - assert c.their_state is not ERROR - # This is idempotent - c.send_failed() - assert c.our_state is ERROR - assert c.their_state is not ERROR - - -def test_idle_receive_nothing() -> None: - # At one point this incorrectly raised an error - for role in [CLIENT, SERVER]: - c = Connection(role) - assert c.next_event() is NEED_DATA - - -def test_connection_drop() -> None: - c = Connection(SERVER) - c.receive_data(b"GET /") - assert c.next_event() is NEED_DATA - c.receive_data(b"") - with pytest.raises(RemoteProtocolError): - c.next_event() - - -def test_408_request_timeout() -> None: - # Should be able to send this spontaneously as a server without seeing - # anything from client - p = ConnectionPair() - p.send(SERVER, Response(status_code=408, headers=[(b"connection", b"close")])) - - -# This used to raise IndexError -def test_empty_request() -> None: - c = Connection(SERVER) - c.receive_data(b"\r\n") - with pytest.raises(RemoteProtocolError): - c.next_event() - - -# This used to raise IndexError -def test_empty_response() -> None: - c = Connection(CLIENT) - c.send(Request(method="GET", target="/", headers=[("Host", "a")])) - c.receive_data(b"\r\n") - with pytest.raises(RemoteProtocolError): - c.next_event() - - -@pytest.mark.parametrize( - "data", - [ - b"\x00", - b"\x20", - b"\x16\x03\x01\x00\xa5", # Typical start of a TLS Client Hello - ], -) -def test_early_detection_of_invalid_request(data: bytes) -> None: - c = Connection(SERVER) - # Early detection should occur before even receiving a `\r\n` - c.receive_data(data) - with pytest.raises(RemoteProtocolError): - c.next_event() - - -@pytest.mark.parametrize( - "data", - [ - b"\x00", - b"\x20", - b"\x16\x03\x03\x00\x31", # Typical start of a TLS Server Hello - ], -) -def test_early_detection_of_invalid_response(data: bytes) -> None: - c = Connection(CLIENT) - # Early detection should occur before even receiving a `\r\n` - c.receive_data(data) - with pytest.raises(RemoteProtocolError): - c.next_event() - - -# This used to give different headers for HEAD and GET. -# The correct way to handle HEAD is to put whatever headers we *would* have -# put if it were a GET -- even though we know that for HEAD, those headers -# will be ignored. 
-def test_HEAD_framing_headers() -> None: - def setup(method: bytes, http_version: bytes) -> Connection: - c = Connection(SERVER) - c.receive_data( - method + b" / HTTP/" + http_version + b"\r\n" + b"Host: example.com\r\n\r\n" - ) - assert type(c.next_event()) is Request - assert type(c.next_event()) is EndOfMessage - return c - - for method in [b"GET", b"HEAD"]: - # No Content-Length, HTTP/1.1 peer, should use chunked - c = setup(method, b"1.1") - assert ( - c.send(Response(status_code=200, headers=[])) == b"HTTP/1.1 200 \r\n" # type: ignore[arg-type] - b"Transfer-Encoding: chunked\r\n\r\n" - ) - - # No Content-Length, HTTP/1.0 peer, frame with connection: close - c = setup(method, b"1.0") - assert ( - c.send(Response(status_code=200, headers=[])) == b"HTTP/1.1 200 \r\n" # type: ignore[arg-type] - b"Connection: close\r\n\r\n" - ) - - # Content-Length + Transfer-Encoding, TE wins - c = setup(method, b"1.1") - assert ( - c.send( - Response( - status_code=200, - headers=[ - ("Content-Length", "100"), - ("Transfer-Encoding", "chunked"), - ], - ) - ) - == b"HTTP/1.1 200 \r\n" - b"Transfer-Encoding: chunked\r\n\r\n" - ) - - -def test_special_exceptions_for_lost_connection_in_message_body() -> None: - c = Connection(SERVER) - c.receive_data( - b"POST / HTTP/1.1\r\n" b"Host: example.com\r\n" b"Content-Length: 100\r\n\r\n" - ) - assert type(c.next_event()) is Request - assert c.next_event() is NEED_DATA - c.receive_data(b"12345") - assert c.next_event() == Data(data=b"12345") - c.receive_data(b"") - with pytest.raises(RemoteProtocolError) as excinfo: - c.next_event() - assert "received 5 bytes" in str(excinfo.value) - assert "expected 100" in str(excinfo.value) - - c = Connection(SERVER) - c.receive_data( - b"POST / HTTP/1.1\r\n" - b"Host: example.com\r\n" - b"Transfer-Encoding: chunked\r\n\r\n" - ) - assert type(c.next_event()) is Request - assert c.next_event() is NEED_DATA - c.receive_data(b"8\r\n012345") - assert c.next_event().data == b"012345" # type: ignore - c.receive_data(b"") - with pytest.raises(RemoteProtocolError) as excinfo: - c.next_event() - assert "incomplete chunked read" in str(excinfo.value) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_backends/anyio.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_backends/anyio.py deleted file mode 100644 index 1ed5228dbde1732de50677e9a3bd6f04a3017433..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_backends/anyio.py +++ /dev/null @@ -1,145 +0,0 @@ -import ssl -import typing - -import anyio - -from .._exceptions import ( - ConnectError, - ConnectTimeout, - ReadError, - ReadTimeout, - WriteError, - WriteTimeout, - map_exceptions, -) -from .._utils import is_socket_readable -from .base import SOCKET_OPTION, AsyncNetworkBackend, AsyncNetworkStream - - -class AnyIOStream(AsyncNetworkStream): - def __init__(self, stream: anyio.abc.ByteStream) -> None: - self._stream = stream - - async def read( - self, max_bytes: int, timeout: typing.Optional[float] = None - ) -> bytes: - exc_map = { - TimeoutError: ReadTimeout, - anyio.BrokenResourceError: ReadError, - anyio.ClosedResourceError: ReadError, - } - with map_exceptions(exc_map): - with anyio.fail_after(timeout): - try: - return await self._stream.receive(max_bytes=max_bytes) - except anyio.EndOfStream: # pragma: nocover - return b"" - - async def write( - self, buffer: bytes, timeout: typing.Optional[float] = None - ) -> None: - if not buffer: - return - - 
exc_map = { - TimeoutError: WriteTimeout, - anyio.BrokenResourceError: WriteError, - anyio.ClosedResourceError: WriteError, - } - with map_exceptions(exc_map): - with anyio.fail_after(timeout): - await self._stream.send(item=buffer) - - async def aclose(self) -> None: - await self._stream.aclose() - - async def start_tls( - self, - ssl_context: ssl.SSLContext, - server_hostname: typing.Optional[str] = None, - timeout: typing.Optional[float] = None, - ) -> AsyncNetworkStream: - exc_map = { - TimeoutError: ConnectTimeout, - anyio.BrokenResourceError: ConnectError, - } - with map_exceptions(exc_map): - try: - with anyio.fail_after(timeout): - ssl_stream = await anyio.streams.tls.TLSStream.wrap( - self._stream, - ssl_context=ssl_context, - hostname=server_hostname, - standard_compatible=False, - server_side=False, - ) - except Exception as exc: # pragma: nocover - await self.aclose() - raise exc - return AnyIOStream(ssl_stream) - - def get_extra_info(self, info: str) -> typing.Any: - if info == "ssl_object": - return self._stream.extra(anyio.streams.tls.TLSAttribute.ssl_object, None) - if info == "client_addr": - return self._stream.extra(anyio.abc.SocketAttribute.local_address, None) - if info == "server_addr": - return self._stream.extra(anyio.abc.SocketAttribute.remote_address, None) - if info == "socket": - return self._stream.extra(anyio.abc.SocketAttribute.raw_socket, None) - if info == "is_readable": - sock = self._stream.extra(anyio.abc.SocketAttribute.raw_socket, None) - return is_socket_readable(sock) - return None - - -class AnyIOBackend(AsyncNetworkBackend): - async def connect_tcp( - self, - host: str, - port: int, - timeout: typing.Optional[float] = None, - local_address: typing.Optional[str] = None, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> AsyncNetworkStream: - if socket_options is None: - socket_options = [] # pragma: no cover - exc_map = { - TimeoutError: ConnectTimeout, - OSError: ConnectError, - anyio.BrokenResourceError: ConnectError, - } - with map_exceptions(exc_map): - with anyio.fail_after(timeout): - stream: anyio.abc.ByteStream = await anyio.connect_tcp( - remote_host=host, - remote_port=port, - local_host=local_address, - ) - # By default TCP sockets opened in `asyncio` include TCP_NODELAY. 
- for option in socket_options: - stream._raw_socket.setsockopt(*option) # type: ignore[attr-defined] # pragma: no cover - return AnyIOStream(stream) - - async def connect_unix_socket( - self, - path: str, - timeout: typing.Optional[float] = None, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> AsyncNetworkStream: # pragma: nocover - if socket_options is None: - socket_options = [] - exc_map = { - TimeoutError: ConnectTimeout, - OSError: ConnectError, - anyio.BrokenResourceError: ConnectError, - } - with map_exceptions(exc_map): - with anyio.fail_after(timeout): - stream: anyio.abc.ByteStream = await anyio.connect_unix(path) - for option in socket_options: - stream._raw_socket.setsockopt(*option) # type: ignore[attr-defined] # pragma: no cover - return AnyIOStream(stream) - - async def sleep(self, seconds: float) -> None: - await anyio.sleep(seconds) # pragma: nocover diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_ps.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_ps.py deleted file mode 100644 index cbf33ccc5a1b748ffb77059fb426e8e9525ae67f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_ps.py +++ /dev/null @@ -1,380 +0,0 @@ -from collections import Counter -from pathlib import Path -import io -import re -import tempfile - -import numpy as np -import pytest - -from matplotlib import cbook, path, patheffects, font_manager as fm -from matplotlib.figure import Figure -from matplotlib.patches import Ellipse -from matplotlib.testing._markers import needs_ghostscript, needs_usetex -from matplotlib.testing.decorators import check_figures_equal, image_comparison -import matplotlib as mpl -import matplotlib.collections as mcollections -import matplotlib.colors as mcolors -import matplotlib.pyplot as plt - - -# This tests tends to hit a TeX cache lock on AppVeyor. -@pytest.mark.flaky(reruns=3) -@pytest.mark.parametrize('papersize', ['letter', 'figure']) -@pytest.mark.parametrize('orientation', ['portrait', 'landscape']) -@pytest.mark.parametrize('format, use_log, rcParams', [ - ('ps', False, {}), - ('ps', False, {'ps.usedistiller': 'ghostscript'}), - ('ps', False, {'ps.usedistiller': 'xpdf'}), - ('ps', False, {'text.usetex': True}), - ('eps', False, {}), - ('eps', True, {'ps.useafm': True}), - ('eps', False, {'text.usetex': True}), -], ids=[ - 'ps', - 'ps with distiller=ghostscript', - 'ps with distiller=xpdf', - 'ps with usetex', - 'eps', - 'eps afm', - 'eps with usetex' -]) -def test_savefig_to_stringio(format, use_log, rcParams, orientation, papersize): - mpl.rcParams.update(rcParams) - if mpl.rcParams["ps.usedistiller"] == "ghostscript": - try: - mpl._get_executable_info("gs") - except mpl.ExecutableNotFoundError as exc: - pytest.skip(str(exc)) - elif mpl.rcParams["ps.usedistiller"] == "xpdf": - try: - mpl._get_executable_info("gs") # Effectively checks for ps2pdf. 
- mpl._get_executable_info("pdftops") - except mpl.ExecutableNotFoundError as exc: - pytest.skip(str(exc)) - - fig, ax = plt.subplots() - - with io.StringIO() as s_buf, io.BytesIO() as b_buf: - - if use_log: - ax.set_yscale('log') - - ax.plot([1, 2], [1, 2]) - title = "Déjà vu" - if not mpl.rcParams["text.usetex"]: - title += " \N{MINUS SIGN}\N{EURO SIGN}" - ax.set_title(title) - allowable_exceptions = [] - if mpl.rcParams["text.usetex"]: - allowable_exceptions.append(RuntimeError) - if mpl.rcParams["ps.useafm"]: - allowable_exceptions.append(mpl.MatplotlibDeprecationWarning) - try: - fig.savefig(s_buf, format=format, orientation=orientation, - papertype=papersize) - fig.savefig(b_buf, format=format, orientation=orientation, - papertype=papersize) - except tuple(allowable_exceptions) as exc: - pytest.skip(str(exc)) - - assert not s_buf.closed - assert not b_buf.closed - s_val = s_buf.getvalue().encode('ascii') - b_val = b_buf.getvalue() - - if format == 'ps': - # Default figsize = (8, 6) inches = (576, 432) points = (203.2, 152.4) mm. - # Landscape orientation will swap dimensions. - if mpl.rcParams["ps.usedistiller"] == "xpdf": - # Some versions specifically show letter/203x152, but not all, - # so we can only use this simpler test. - if papersize == 'figure': - assert b'letter' not in s_val.lower() - else: - assert b'letter' in s_val.lower() - elif mpl.rcParams["ps.usedistiller"] or mpl.rcParams["text.usetex"]: - width = b'432.0' if orientation == 'landscape' else b'576.0' - wanted = (b'-dDEVICEWIDTHPOINTS=' + width if papersize == 'figure' - else b'-sPAPERSIZE') - assert wanted in s_val - else: - if papersize == 'figure': - assert b'%%DocumentPaperSizes' not in s_val - else: - assert b'%%DocumentPaperSizes' in s_val - - # Strip out CreationDate: ghostscript and cairo don't obey - # SOURCE_DATE_EPOCH, and that environment variable is already tested in - # test_determinism. - s_val = re.sub(b"(?<=\n%%CreationDate: ).*", b"", s_val) - b_val = re.sub(b"(?<=\n%%CreationDate: ).*", b"", b_val) - - assert s_val == b_val.replace(b'\r\n', b'\n') - - -def test_patheffects(): - mpl.rcParams['path.effects'] = [ - patheffects.withStroke(linewidth=4, foreground='w')] - fig, ax = plt.subplots() - ax.plot([1, 2, 3]) - with io.BytesIO() as ps: - fig.savefig(ps, format='ps') - - -@needs_usetex -@needs_ghostscript -def test_tilde_in_tempfilename(tmpdir): - # Tilde ~ in the tempdir path (e.g. TMPDIR, TMP or TEMP on windows - # when the username is very long and windows uses a short name) breaks - # latex before https://github.com/matplotlib/matplotlib/pull/5928 - base_tempdir = Path(tmpdir, "short-1") - base_tempdir.mkdir() - # Change the path for new tempdirs, which is used internally by the ps - # backend to write a file. - with cbook._setattr_cm(tempfile, tempdir=str(base_tempdir)): - # usetex results in the latex call, which does not like the ~ - mpl.rcParams['text.usetex'] = True - plt.plot([1, 2, 3, 4]) - plt.xlabel(r'\textbf{time} (s)') - # use the PS backend to write the file... 
- plt.savefig(base_tempdir / 'tex_demo.eps', format="ps") - - -@image_comparison(["empty.eps"]) -def test_transparency(): - fig, ax = plt.subplots() - ax.set_axis_off() - ax.plot([0, 1], color="r", alpha=0) - ax.text(.5, .5, "foo", color="r", alpha=0) - - -@needs_usetex -@image_comparison(["empty.eps"]) -def test_transparency_tex(): - mpl.rcParams['text.usetex'] = True - fig, ax = plt.subplots() - ax.set_axis_off() - ax.plot([0, 1], color="r", alpha=0) - ax.text(.5, .5, "foo", color="r", alpha=0) - - -def test_bbox(): - fig, ax = plt.subplots() - with io.BytesIO() as buf: - fig.savefig(buf, format='eps') - buf = buf.getvalue() - - bb = re.search(b'^%%BoundingBox: (.+) (.+) (.+) (.+)$', buf, re.MULTILINE) - assert bb - hibb = re.search(b'^%%HiResBoundingBox: (.+) (.+) (.+) (.+)$', buf, - re.MULTILINE) - assert hibb - - for i in range(1, 5): - # BoundingBox must use integers, and be ceil/floor of the hi res. - assert b'.' not in bb.group(i) - assert int(bb.group(i)) == pytest.approx(float(hibb.group(i)), 1) - - -@needs_usetex -def test_failing_latex(): - """Test failing latex subprocess call""" - mpl.rcParams['text.usetex'] = True - # This fails with "Double subscript" - plt.xlabel("$22_2_2$") - with pytest.raises(RuntimeError): - plt.savefig(io.BytesIO(), format="ps") - - -@needs_usetex -def test_partial_usetex(caplog): - caplog.set_level("WARNING") - plt.figtext(.1, .1, "foo", usetex=True) - plt.figtext(.2, .2, "bar", usetex=True) - plt.savefig(io.BytesIO(), format="ps") - record, = caplog.records # asserts there's a single record. - assert "as if usetex=False" in record.getMessage() - - -@needs_usetex -def test_usetex_preamble(caplog): - mpl.rcParams.update({ - "text.usetex": True, - # Check that these don't conflict with the packages loaded by default. - "text.latex.preamble": r"\usepackage{color,graphicx,textcomp}", - }) - plt.figtext(.5, .5, "foo") - plt.savefig(io.BytesIO(), format="ps") - - -@image_comparison(["useafm.eps"]) -def test_useafm(): - mpl.rcParams["ps.useafm"] = True - fig, ax = plt.subplots() - ax.set_axis_off() - ax.axhline(.5) - ax.text(.5, .5, "qk") - - -@image_comparison(["type3.eps"]) -def test_type3_font(): - plt.figtext(.5, .5, "I/J") - - -@image_comparison(["coloredhatcheszerolw.eps"]) -def test_colored_hatch_zero_linewidth(): - ax = plt.gca() - ax.add_patch(Ellipse((0, 0), 1, 1, hatch='/', facecolor='none', - edgecolor='r', linewidth=0)) - ax.add_patch(Ellipse((0.5, 0.5), 0.5, 0.5, hatch='+', facecolor='none', - edgecolor='g', linewidth=0.2)) - ax.add_patch(Ellipse((1, 1), 0.3, 0.8, hatch='\\', facecolor='none', - edgecolor='b', linewidth=0)) - ax.set_axis_off() - - -@check_figures_equal(extensions=["eps"]) -def test_text_clip(fig_test, fig_ref): - ax = fig_test.add_subplot() - # Fully clipped-out text should not appear. - ax.text(0, 0, "hello", transform=fig_test.transFigure, clip_on=True) - fig_ref.add_subplot() - - -@needs_ghostscript -def test_d_glyph(tmp_path): - # Ensure that we don't have a procedure defined as /d, which would be - # overwritten by the glyph definition for "d". - fig = plt.figure() - fig.text(.5, .5, "def") - out = tmp_path / "test.eps" - fig.savefig(out) - mpl.testing.compare.convert(out, cache=False) # Should not raise. 
- - -@image_comparison(["type42_without_prep.eps"], style='mpl20') -def test_type42_font_without_prep(): - # Test whether Type 42 fonts without prep table are properly embedded - mpl.rcParams["ps.fonttype"] = 42 - mpl.rcParams["mathtext.fontset"] = "stix" - - plt.figtext(0.5, 0.5, "Mass $m$") - - -@pytest.mark.parametrize('fonttype', ["3", "42"]) -def test_fonttype(fonttype): - mpl.rcParams["ps.fonttype"] = fonttype - fig, ax = plt.subplots() - - ax.text(0.25, 0.5, "Forty-two is the answer to everything!") - - buf = io.BytesIO() - fig.savefig(buf, format="ps") - - test = b'/FontType ' + bytes(f"{fonttype}", encoding='utf-8') + b' def' - - assert re.search(test, buf.getvalue(), re.MULTILINE) - - -def test_linedash(): - """Test that dashed lines do not break PS output""" - fig, ax = plt.subplots() - - ax.plot([0, 1], linestyle="--") - - buf = io.BytesIO() - fig.savefig(buf, format="ps") - - assert buf.tell() > 0 - - -def test_empty_line(): - # Smoke-test for gh#23954 - figure = Figure() - figure.text(0.5, 0.5, "\nfoo\n\n") - buf = io.BytesIO() - figure.savefig(buf, format='eps') - figure.savefig(buf, format='ps') - - -def test_no_duplicate_definition(): - - fig = Figure() - axs = fig.subplots(4, 4, subplot_kw=dict(projection="polar")) - for ax in axs.flat: - ax.set(xticks=[], yticks=[]) - ax.plot([1, 2]) - fig.suptitle("hello, world") - - buf = io.StringIO() - fig.savefig(buf, format='eps') - buf.seek(0) - - wds = [ln.partition(' ')[0] for - ln in buf.readlines() - if ln.startswith('/')] - - assert max(Counter(wds).values()) == 1 - - -@image_comparison(["multi_font_type3.eps"], tol=0.51) -def test_multi_font_type3(): - fp = fm.FontProperties(family=["WenQuanYi Zen Hei"]) - if Path(fm.findfont(fp)).name != "wqy-zenhei.ttc": - pytest.skip("Font may be missing") - - plt.rc('font', family=['DejaVu Sans', 'WenQuanYi Zen Hei'], size=27) - plt.rc('ps', fonttype=3) - - fig = plt.figure() - fig.text(0.15, 0.475, "There are 几个汉字 in between!") - - -@image_comparison(["multi_font_type42.eps"], tol=1.6) -def test_multi_font_type42(): - fp = fm.FontProperties(family=["WenQuanYi Zen Hei"]) - if Path(fm.findfont(fp)).name != "wqy-zenhei.ttc": - pytest.skip("Font may be missing") - - plt.rc('font', family=['DejaVu Sans', 'WenQuanYi Zen Hei'], size=27) - plt.rc('ps', fonttype=42) - - fig = plt.figure() - fig.text(0.15, 0.475, "There are 几个汉字 in between!") - - -@image_comparison(["scatter.eps"]) -def test_path_collection(): - rng = np.random.default_rng(19680801) - xvals = rng.uniform(0, 1, 10) - yvals = rng.uniform(0, 1, 10) - sizes = rng.uniform(30, 100, 10) - fig, ax = plt.subplots() - ax.scatter(xvals, yvals, sizes, edgecolor=[0.9, 0.2, 0.1], marker='<') - ax.set_axis_off() - paths = [path.Path.unit_regular_polygon(i) for i in range(3, 7)] - offsets = rng.uniform(0, 200, 20).reshape(10, 2) - sizes = [0.02, 0.04] - pc = mcollections.PathCollection(paths, sizes, zorder=-1, - facecolors='yellow', offsets=offsets) - ax.add_collection(pc) - ax.set_xlim(0, 1) - - -@image_comparison(["colorbar_shift.eps"], savefig_kwarg={"bbox_inches": "tight"}, - style="mpl20") -def test_colorbar_shift(tmp_path): - cmap = mcolors.ListedColormap(["r", "g", "b"]) - norm = mcolors.BoundaryNorm([-1, -0.5, 0.5, 1], cmap.N) - plt.scatter([0, 1], [1, 1], c=[0, 1], cmap=cmap, norm=norm) - plt.colorbar() - - -def test_auto_papersize_deprecation(): - fig = plt.figure() - with pytest.warns(mpl.MatplotlibDeprecationWarning): - fig.savefig(io.BytesIO(), format='eps', papertype='auto') - - with pytest.warns(mpl.MatplotlibDeprecationWarning): 
- mpl.rcParams['ps.papersize'] = 'auto' diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/isocintrin/isoCtests.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/isocintrin/isoCtests.f90 deleted file mode 100644 index 42db6cccc14d6a17bb8ce9f68c730cb25cdfd5ed..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/isocintrin/isoCtests.f90 +++ /dev/null @@ -1,17 +0,0 @@ - module coddity - use iso_c_binding, only: c_double, c_int - implicit none - contains - subroutine c_add(a, b, c) bind(c, name="c_add") - real(c_double), intent(in) :: a, b - real(c_double), intent(out) :: c - c = a + b - end subroutine c_add - ! gh-9693 - function wat(x, y) result(z) bind(c) - integer(c_int), intent(in) :: x, y - integer(c_int) :: z - - z = x + 7 - end function wat - end module coddity diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/packaging/utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/packaging/utils.py deleted file mode 100644 index c2c2f75aa806282d322c76c2117c0f0fdfb09d25..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/packaging/utils.py +++ /dev/null @@ -1,172 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import re -from typing import FrozenSet, NewType, Tuple, Union, cast - -from .tags import Tag, parse_tag -from .version import InvalidVersion, Version - -BuildTag = Union[Tuple[()], Tuple[int, str]] -NormalizedName = NewType("NormalizedName", str) - - -class InvalidName(ValueError): - """ - An invalid distribution name; users should refer to the packaging user guide. - """ - - -class InvalidWheelFilename(ValueError): - """ - An invalid wheel filename was found, users should refer to PEP 427. - """ - - -class InvalidSdistFilename(ValueError): - """ - An invalid sdist filename was found, users should refer to the packaging user guide. - """ - - -# Core metadata spec for `Name` -_validate_regex = re.compile( - r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", re.IGNORECASE -) -_canonicalize_regex = re.compile(r"[-_.]+") -_normalized_regex = re.compile(r"^([a-z0-9]|[a-z0-9]([a-z0-9-](?!--))*[a-z0-9])$") -# PEP 427: The build number must start with a digit. -_build_tag_regex = re.compile(r"(\d+)(.*)") - - -def canonicalize_name(name: str, *, validate: bool = False) -> NormalizedName: - if validate and not _validate_regex.match(name): - raise InvalidName(f"name is invalid: {name!r}") - # This is taken from PEP 503. - value = _canonicalize_regex.sub("-", name).lower() - return cast(NormalizedName, value) - - -def is_normalized_name(name: str) -> bool: - return _normalized_regex.match(name) is not None - - -def canonicalize_version( - version: Union[Version, str], *, strip_trailing_zero: bool = True -) -> str: - """ - This is very similar to Version.__str__, but has one subtle difference - with the way it handles the release segment. 
- """ - if isinstance(version, str): - try: - parsed = Version(version) - except InvalidVersion: - # Legacy versions cannot be normalized - return version - else: - parsed = version - - parts = [] - - # Epoch - if parsed.epoch != 0: - parts.append(f"{parsed.epoch}!") - - # Release segment - release_segment = ".".join(str(x) for x in parsed.release) - if strip_trailing_zero: - # NB: This strips trailing '.0's to normalize - release_segment = re.sub(r"(\.0)+$", "", release_segment) - parts.append(release_segment) - - # Pre-release - if parsed.pre is not None: - parts.append("".join(str(x) for x in parsed.pre)) - - # Post-release - if parsed.post is not None: - parts.append(f".post{parsed.post}") - - # Development release - if parsed.dev is not None: - parts.append(f".dev{parsed.dev}") - - # Local version segment - if parsed.local is not None: - parts.append(f"+{parsed.local}") - - return "".join(parts) - - -def parse_wheel_filename( - filename: str, -) -> Tuple[NormalizedName, Version, BuildTag, FrozenSet[Tag]]: - if not filename.endswith(".whl"): - raise InvalidWheelFilename( - f"Invalid wheel filename (extension must be '.whl'): {filename}" - ) - - filename = filename[:-4] - dashes = filename.count("-") - if dashes not in (4, 5): - raise InvalidWheelFilename( - f"Invalid wheel filename (wrong number of parts): {filename}" - ) - - parts = filename.split("-", dashes - 2) - name_part = parts[0] - # See PEP 427 for the rules on escaping the project name. - if "__" in name_part or re.match(r"^[\w\d._]*$", name_part, re.UNICODE) is None: - raise InvalidWheelFilename(f"Invalid project name: {filename}") - name = canonicalize_name(name_part) - - try: - version = Version(parts[1]) - except InvalidVersion as e: - raise InvalidWheelFilename( - f"Invalid wheel filename (invalid version): {filename}" - ) from e - - if dashes == 5: - build_part = parts[2] - build_match = _build_tag_regex.match(build_part) - if build_match is None: - raise InvalidWheelFilename( - f"Invalid build number: {build_part} in '{filename}'" - ) - build = cast(BuildTag, (int(build_match.group(1)), build_match.group(2))) - else: - build = () - tags = parse_tag(parts[-1]) - return (name, version, build, tags) - - -def parse_sdist_filename(filename: str) -> Tuple[NormalizedName, Version]: - if filename.endswith(".tar.gz"): - file_stem = filename[: -len(".tar.gz")] - elif filename.endswith(".zip"): - file_stem = filename[: -len(".zip")] - else: - raise InvalidSdistFilename( - f"Invalid sdist filename (extension must be '.tar.gz' or '.zip'):" - f" {filename}" - ) - - # We are requiring a PEP 440 version, which cannot contain dashes, - # so we split on the last dash. 
- name_part, sep, version_part = file_stem.rpartition("-") - if not sep: - raise InvalidSdistFilename(f"Invalid sdist filename: {filename}") - - name = canonicalize_name(name_part) - - try: - version = Version(version_part) - except InvalidVersion as e: - raise InvalidSdistFilename( - f"Invalid sdist filename (invalid version): {filename}" - ) from e - - return (name, version) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_validate_args_and_kwargs.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_validate_args_and_kwargs.py deleted file mode 100644 index 215026d648471c04cb8751506c03626fda73fc68..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_validate_args_and_kwargs.py +++ /dev/null @@ -1,84 +0,0 @@ -import pytest - -from pandas.util._validators import validate_args_and_kwargs - - -@pytest.fixture -def _fname(): - return "func" - - -def test_invalid_total_length_max_length_one(_fname): - compat_args = ("foo",) - kwargs = {"foo": "FOO"} - args = ("FoO", "BaZ") - - min_fname_arg_count = 0 - max_length = len(compat_args) + min_fname_arg_count - actual_length = len(kwargs) + len(args) + min_fname_arg_count - - msg = ( - rf"{_fname}\(\) takes at most {max_length} " - rf"argument \({actual_length} given\)" - ) - - with pytest.raises(TypeError, match=msg): - validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args) - - -def test_invalid_total_length_max_length_multiple(_fname): - compat_args = ("foo", "bar", "baz") - kwargs = {"foo": "FOO", "bar": "BAR"} - args = ("FoO", "BaZ") - - min_fname_arg_count = 2 - max_length = len(compat_args) + min_fname_arg_count - actual_length = len(kwargs) + len(args) + min_fname_arg_count - - msg = ( - rf"{_fname}\(\) takes at most {max_length} " - rf"arguments \({actual_length} given\)" - ) - - with pytest.raises(TypeError, match=msg): - validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args) - - -@pytest.mark.parametrize("args,kwargs", [((), {"foo": -5, "bar": 2}), ((-5, 2), {})]) -def test_missing_args_or_kwargs(args, kwargs, _fname): - bad_arg = "bar" - min_fname_arg_count = 2 - - compat_args = {"foo": -5, bad_arg: 1} - - msg = ( - rf"the '{bad_arg}' parameter is not supported " - rf"in the pandas implementation of {_fname}\(\)" - ) - - with pytest.raises(ValueError, match=msg): - validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args) - - -def test_duplicate_argument(_fname): - min_fname_arg_count = 2 - - compat_args = {"foo": None, "bar": None, "baz": None} - kwargs = {"foo": None, "bar": None} - args = (None,) # duplicate value for "foo" - - msg = rf"{_fname}\(\) got multiple values for keyword argument 'foo'" - - with pytest.raises(TypeError, match=msg): - validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args) - - -def test_validation(_fname): - # No exceptions should be raised. 
- compat_args = {"foo": 1, "bar": None, "baz": -2} - kwargs = {"baz": -2} - - args = (1, None) - min_fname_arg_count = 2 - - validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/req/req_install.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/req/req_install.py deleted file mode 100644 index 02dbda1941f845a8087ea4544271fa94b69a8bda..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/req/req_install.py +++ /dev/null @@ -1,858 +0,0 @@ -# The following comment should be removed at some point in the future. -# mypy: strict-optional=False - -import functools -import logging -import os -import shutil -import sys -import uuid -import zipfile -from typing import Any, Collection, Dict, Iterable, List, Optional, Sequence, Union - -from pip._vendor.packaging.markers import Marker -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.specifiers import SpecifierSet -from pip._vendor.packaging.utils import canonicalize_name -from pip._vendor.packaging.version import Version -from pip._vendor.packaging.version import parse as parse_version -from pip._vendor.pep517.wrappers import Pep517HookCaller - -from pip._internal.build_env import BuildEnvironment, NoOpBuildEnvironment -from pip._internal.exceptions import InstallationError, LegacyInstallFailure -from pip._internal.locations import get_scheme -from pip._internal.metadata import ( - BaseDistribution, - get_default_environment, - get_directory_distribution, -) -from pip._internal.models.link import Link -from pip._internal.operations.build.metadata import generate_metadata -from pip._internal.operations.build.metadata_editable import generate_editable_metadata -from pip._internal.operations.build.metadata_legacy import ( - generate_metadata as generate_metadata_legacy, -) -from pip._internal.operations.install.editable_legacy import ( - install_editable as install_editable_legacy, -) -from pip._internal.operations.install.legacy import install as install_legacy -from pip._internal.operations.install.wheel import install_wheel -from pip._internal.pyproject import load_pyproject_toml, make_pyproject_path -from pip._internal.req.req_uninstall import UninstallPathSet -from pip._internal.utils.deprecation import deprecated -from pip._internal.utils.direct_url_helpers import ( - direct_url_for_editable, - direct_url_from_link, -) -from pip._internal.utils.hashes import Hashes -from pip._internal.utils.misc import ( - ask_path_exists, - backup_dir, - display_path, - hide_url, - redact_auth_from_url, -) -from pip._internal.utils.packaging import safe_extra -from pip._internal.utils.subprocess import runner_with_spinner_message -from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds -from pip._internal.utils.virtualenv import running_under_virtualenv -from pip._internal.vcs import vcs - -logger = logging.getLogger(__name__) - - -class InstallRequirement: - """ - Represents something that may be installed later on, may have information - about where to fetch the relevant requirement and also contains logic for - installing the said requirement. 
- """ - - def __init__( - self, - req: Optional[Requirement], - comes_from: Optional[Union[str, "InstallRequirement"]], - editable: bool = False, - link: Optional[Link] = None, - markers: Optional[Marker] = None, - use_pep517: Optional[bool] = None, - isolated: bool = False, - install_options: Optional[List[str]] = None, - global_options: Optional[List[str]] = None, - hash_options: Optional[Dict[str, List[str]]] = None, - constraint: bool = False, - extras: Collection[str] = (), - user_supplied: bool = False, - permit_editable_wheels: bool = False, - ) -> None: - assert req is None or isinstance(req, Requirement), req - self.req = req - self.comes_from = comes_from - self.constraint = constraint - self.editable = editable - self.permit_editable_wheels = permit_editable_wheels - self.legacy_install_reason: Optional[int] = None - - # source_dir is the local directory where the linked requirement is - # located, or unpacked. In case unpacking is needed, creating and - # populating source_dir is done by the RequirementPreparer. Note this - # is not necessarily the directory where pyproject.toml or setup.py is - # located - that one is obtained via unpacked_source_directory. - self.source_dir: Optional[str] = None - if self.editable: - assert link - if link.is_file: - self.source_dir = os.path.normpath(os.path.abspath(link.file_path)) - - if link is None and req and req.url: - # PEP 508 URL requirement - link = Link(req.url) - self.link = self.original_link = link - self.original_link_is_in_wheel_cache = False - - # Path to any downloaded or already-existing package. - self.local_file_path: Optional[str] = None - if self.link and self.link.is_file: - self.local_file_path = self.link.file_path - - if extras: - self.extras = extras - elif req: - self.extras = {safe_extra(extra) for extra in req.extras} - else: - self.extras = set() - if markers is None and req: - markers = req.marker - self.markers = markers - - # This holds the Distribution object if this requirement is already installed. - self.satisfied_by: Optional[BaseDistribution] = None - # Whether the installation process should try to uninstall an existing - # distribution before installing this requirement. - self.should_reinstall = False - # Temporary build location - self._temp_build_dir: Optional[TempDirectory] = None - # Set to True after successful installation - self.install_succeeded: Optional[bool] = None - # Supplied options - self.install_options = install_options if install_options else [] - self.global_options = global_options if global_options else [] - self.hash_options = hash_options if hash_options else {} - # Set to True after successful preparation of this requirement - self.prepared = False - # User supplied requirement are explicitly requested for installation - # by the user via CLI arguments or requirements files, as opposed to, - # e.g. dependencies, extras or constraints. - self.user_supplied = user_supplied - - self.isolated = isolated - self.build_env: BuildEnvironment = NoOpBuildEnvironment() - - # For PEP 517, the directory where we request the project metadata - # gets stored. We need this to pass to build_wheel, so the backend - # can ensure that the wheel matches the metadata (see the PEP for - # details). 
- self.metadata_directory: Optional[str] = None - - # The static build requirements (from pyproject.toml) - self.pyproject_requires: Optional[List[str]] = None - - # Build requirements that we will check are available - self.requirements_to_check: List[str] = [] - - # The PEP 517 backend we should use to build the project - self.pep517_backend: Optional[Pep517HookCaller] = None - - # Are we using PEP 517 for this requirement? - # After pyproject.toml has been loaded, the only valid values are True - # and False. Before loading, None is valid (meaning "use the default"). - # Setting an explicit value before loading pyproject.toml is supported, - # but after loading this flag should be treated as read only. - self.use_pep517 = use_pep517 - - # This requirement needs more preparation before it can be built - self.needs_more_preparation = False - - def __str__(self) -> str: - if self.req: - s = str(self.req) - if self.link: - s += " from {}".format(redact_auth_from_url(self.link.url)) - elif self.link: - s = redact_auth_from_url(self.link.url) - else: - s = "" - if self.satisfied_by is not None: - s += " in {}".format(display_path(self.satisfied_by.location)) - if self.comes_from: - if isinstance(self.comes_from, str): - comes_from: Optional[str] = self.comes_from - else: - comes_from = self.comes_from.from_path() - if comes_from: - s += f" (from {comes_from})" - return s - - def __repr__(self) -> str: - return "<{} object: {} editable={!r}>".format( - self.__class__.__name__, str(self), self.editable - ) - - def format_debug(self) -> str: - """An un-tested helper for getting state, for debugging.""" - attributes = vars(self) - names = sorted(attributes) - - state = ("{}={!r}".format(attr, attributes[attr]) for attr in sorted(names)) - return "<{name} object: {{{state}}}>".format( - name=self.__class__.__name__, - state=", ".join(state), - ) - - # Things that are valid for all kinds of requirements? - @property - def name(self) -> Optional[str]: - if self.req is None: - return None - return self.req.name - - @functools.lru_cache() # use cached_property in python 3.8+ - def supports_pyproject_editable(self) -> bool: - if not self.use_pep517: - return False - assert self.pep517_backend - with self.build_env: - runner = runner_with_spinner_message( - "Checking if build backend supports build_editable" - ) - with self.pep517_backend.subprocess_runner(runner): - return "build_editable" in self.pep517_backend._supported_features() - - @property - def specifier(self) -> SpecifierSet: - return self.req.specifier - - @property - def is_pinned(self) -> bool: - """Return whether I am pinned to an exact version. - - For example, some-package==1.2 is pinned; some-package>1.2 is not. - """ - specifiers = self.specifier - return len(specifiers) == 1 and next(iter(specifiers)).operator in {"==", "==="} - - def match_markers(self, extras_requested: Optional[Iterable[str]] = None) -> bool: - if not extras_requested: - # Provide an extra to safely evaluate the markers - # without matching any extra - extras_requested = ("",) - if self.markers is not None: - return any( - self.markers.evaluate({"extra": extra}) for extra in extras_requested - ) - else: - return True - - @property - def has_hash_options(self) -> bool: - """Return whether any known-good hashes are specified as options. - - These activate --require-hashes mode; hashes specified as part of a - URL do not. 
- - """ - return bool(self.hash_options) - - def hashes(self, trust_internet: bool = True) -> Hashes: - """Return a hash-comparer that considers my option- and URL-based - hashes to be known-good. - - Hashes in URLs--ones embedded in the requirements file, not ones - downloaded from an index server--are almost peers with ones from - flags. They satisfy --require-hashes (whether it was implicitly or - explicitly activated) but do not activate it. md5 and sha224 are not - allowed in flags, which should nudge people toward good algos. We - always OR all hashes together, even ones from URLs. - - :param trust_internet: Whether to trust URL-based (#md5=...) hashes - downloaded from the internet, as by populate_link() - - """ - good_hashes = self.hash_options.copy() - link = self.link if trust_internet else self.original_link - if link and link.hash: - good_hashes.setdefault(link.hash_name, []).append(link.hash) - return Hashes(good_hashes) - - def from_path(self) -> Optional[str]: - """Format a nice indicator to show where this "comes from" """ - if self.req is None: - return None - s = str(self.req) - if self.comes_from: - if isinstance(self.comes_from, str): - comes_from = self.comes_from - else: - comes_from = self.comes_from.from_path() - if comes_from: - s += "->" + comes_from - return s - - def ensure_build_location( - self, build_dir: str, autodelete: bool, parallel_builds: bool - ) -> str: - assert build_dir is not None - if self._temp_build_dir is not None: - assert self._temp_build_dir.path - return self._temp_build_dir.path - if self.req is None: - # Some systems have /tmp as a symlink which confuses custom - # builds (such as numpy). Thus, we ensure that the real path - # is returned. - self._temp_build_dir = TempDirectory( - kind=tempdir_kinds.REQ_BUILD, globally_managed=True - ) - - return self._temp_build_dir.path - - # This is the only remaining place where we manually determine the path - # for the temporary directory. It is only needed for editables where - # it is the value of the --src option. - - # When parallel builds are enabled, add a UUID to the build directory - # name so multiple builds do not interfere with each other. - dir_name: str = canonicalize_name(self.name) - if parallel_builds: - dir_name = f"{dir_name}_{uuid.uuid4().hex}" - - # FIXME: Is there a better place to create the build_dir? (hg and bzr - # need this) - if not os.path.exists(build_dir): - logger.debug("Creating directory %s", build_dir) - os.makedirs(build_dir) - actual_build_dir = os.path.join(build_dir, dir_name) - # `None` indicates that we respect the globally-configured deletion - # settings, which is what we actually want when auto-deleting. - delete_arg = None if autodelete else False - return TempDirectory( - path=actual_build_dir, - delete=delete_arg, - kind=tempdir_kinds.REQ_BUILD, - globally_managed=True, - ).path - - def _set_requirement(self) -> None: - """Set requirement after generating metadata.""" - assert self.req is None - assert self.metadata is not None - assert self.source_dir is not None - - # Construct a Requirement object from the generated metadata - if isinstance(parse_version(self.metadata["Version"]), Version): - op = "==" - else: - op = "===" - - self.req = Requirement( - "".join( - [ - self.metadata["Name"], - op, - self.metadata["Version"], - ] - ) - ) - - def warn_on_mismatching_name(self) -> None: - metadata_name = canonicalize_name(self.metadata["Name"]) - if canonicalize_name(self.req.name) == metadata_name: - # Everything is fine. 
- return - - # If we're here, there's a mismatch. Log a warning about it. - logger.warning( - "Generating metadata for package %s " - "produced metadata for project name %s. Fix your " - "#egg=%s fragments.", - self.name, - metadata_name, - self.name, - ) - self.req = Requirement(metadata_name) - - def check_if_exists(self, use_user_site: bool) -> None: - """Find an installed distribution that satisfies or conflicts - with this requirement, and set self.satisfied_by or - self.should_reinstall appropriately. - """ - if self.req is None: - return - existing_dist = get_default_environment().get_distribution(self.req.name) - if not existing_dist: - return - - version_compatible = self.req.specifier.contains( - existing_dist.version, - prereleases=True, - ) - if not version_compatible: - self.satisfied_by = None - if use_user_site: - if existing_dist.in_usersite: - self.should_reinstall = True - elif running_under_virtualenv() and existing_dist.in_site_packages: - raise InstallationError( - f"Will not install to the user site because it will " - f"lack sys.path precedence to {existing_dist.raw_name} " - f"in {existing_dist.location}" - ) - else: - self.should_reinstall = True - else: - if self.editable: - self.should_reinstall = True - # when installing editables, nothing pre-existing should ever - # satisfy - self.satisfied_by = None - else: - self.satisfied_by = existing_dist - - # Things valid for wheels - @property - def is_wheel(self) -> bool: - if not self.link: - return False - return self.link.is_wheel - - # Things valid for sdists - @property - def unpacked_source_directory(self) -> str: - return os.path.join( - self.source_dir, self.link and self.link.subdirectory_fragment or "" - ) - - @property - def setup_py_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - setup_py = os.path.join(self.unpacked_source_directory, "setup.py") - - return setup_py - - @property - def setup_cfg_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - setup_cfg = os.path.join(self.unpacked_source_directory, "setup.cfg") - - return setup_cfg - - @property - def pyproject_toml_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - return make_pyproject_path(self.unpacked_source_directory) - - def load_pyproject_toml(self) -> None: - """Load the pyproject.toml file. - - After calling this routine, all of the attributes related to PEP 517 - processing for this requirement have been set. In particular, the - use_pep517 attribute can be used to determine whether we should - follow the PEP 517 or legacy (setup.py) code path. - """ - pyproject_toml_data = load_pyproject_toml( - self.use_pep517, self.pyproject_toml_path, self.setup_py_path, str(self) - ) - - if pyproject_toml_data is None: - self.use_pep517 = False - return - - self.use_pep517 = True - requires, backend, check, backend_path = pyproject_toml_data - self.requirements_to_check = check - self.pyproject_requires = requires - self.pep517_backend = Pep517HookCaller( - self.unpacked_source_directory, - backend, - backend_path=backend_path, - ) - - def isolated_editable_sanity_check(self) -> None: - """Check that an editable requirement if valid for use with PEP 517/518. 
- - This verifies that an editable that has a pyproject.toml either supports PEP 660 - or as a setup.py or a setup.cfg - """ - if ( - self.editable - and self.use_pep517 - and not self.supports_pyproject_editable() - and not os.path.isfile(self.setup_py_path) - and not os.path.isfile(self.setup_cfg_path) - ): - raise InstallationError( - f"Project {self} has a 'pyproject.toml' and its build " - f"backend is missing the 'build_editable' hook. Since it does not " - f"have a 'setup.py' nor a 'setup.cfg', " - f"it cannot be installed in editable mode. " - f"Consider using a build backend that supports PEP 660." - ) - - def prepare_metadata(self) -> None: - """Ensure that project metadata is available. - - Under PEP 517 and PEP 660, call the backend hook to prepare the metadata. - Under legacy processing, call setup.py egg-info. - """ - assert self.source_dir - details = self.name or f"from {self.link}" - - if self.use_pep517: - assert self.pep517_backend is not None - if ( - self.editable - and self.permit_editable_wheels - and self.supports_pyproject_editable() - ): - self.metadata_directory = generate_editable_metadata( - build_env=self.build_env, - backend=self.pep517_backend, - details=details, - ) - else: - self.metadata_directory = generate_metadata( - build_env=self.build_env, - backend=self.pep517_backend, - details=details, - ) - else: - self.metadata_directory = generate_metadata_legacy( - build_env=self.build_env, - setup_py_path=self.setup_py_path, - source_dir=self.unpacked_source_directory, - isolated=self.isolated, - details=details, - ) - - # Act on the newly generated metadata, based on the name and version. - if not self.name: - self._set_requirement() - else: - self.warn_on_mismatching_name() - - self.assert_source_matches_version() - - @property - def metadata(self) -> Any: - if not hasattr(self, "_metadata"): - self._metadata = self.get_dist().metadata - - return self._metadata - - def get_dist(self) -> BaseDistribution: - return get_directory_distribution(self.metadata_directory) - - def assert_source_matches_version(self) -> None: - assert self.source_dir - version = self.metadata["version"] - if self.req.specifier and version not in self.req.specifier: - logger.warning( - "Requested %s, but installing version %s", - self, - version, - ) - else: - logger.debug( - "Source in %s has version %s, which satisfies requirement %s", - display_path(self.source_dir), - version, - self, - ) - - # For both source distributions and editables - def ensure_has_source_dir( - self, - parent_dir: str, - autodelete: bool = False, - parallel_builds: bool = False, - ) -> None: - """Ensure that a source_dir is set. - - This will create a temporary build dir if the name of the requirement - isn't known yet. - - :param parent_dir: The ideal pip parent_dir for the source_dir. - Generally src_dir for editables and build_dir for sdists. - :return: self.source_dir - """ - if self.source_dir is None: - self.source_dir = self.ensure_build_location( - parent_dir, - autodelete=autodelete, - parallel_builds=parallel_builds, - ) - - # For editable installations - def update_editable(self) -> None: - if not self.link: - logger.debug( - "Cannot update repository at %s; repository location is unknown", - self.source_dir, - ) - return - assert self.editable - assert self.source_dir - if self.link.scheme == "file": - # Static paths don't get updated - return - vcs_backend = vcs.get_backend_for_scheme(self.link.scheme) - # Editable requirements are validated in Requirement constructors. 
- # So here, if it's neither a path nor a valid VCS URL, it's a bug. - assert vcs_backend, f"Unsupported VCS URL {self.link.url}" - hidden_url = hide_url(self.link.url) - vcs_backend.obtain(self.source_dir, url=hidden_url, verbosity=0) - - # Top-level Actions - def uninstall( - self, auto_confirm: bool = False, verbose: bool = False - ) -> Optional[UninstallPathSet]: - """ - Uninstall the distribution currently satisfying this requirement. - - Prompts before removing or modifying files unless - ``auto_confirm`` is True. - - Refuses to delete or modify files outside of ``sys.prefix`` - - thus uninstallation within a virtual environment can only - modify that virtual environment, even if the virtualenv is - linked to global site-packages. - - """ - assert self.req - dist = get_default_environment().get_distribution(self.req.name) - if not dist: - logger.warning("Skipping %s as it is not installed.", self.name) - return None - logger.info("Found existing installation: %s", dist) - - uninstalled_pathset = UninstallPathSet.from_dist(dist) - uninstalled_pathset.remove(auto_confirm, verbose) - return uninstalled_pathset - - def _get_archive_name(self, path: str, parentdir: str, rootdir: str) -> str: - def _clean_zip_name(name: str, prefix: str) -> str: - assert name.startswith( - prefix + os.path.sep - ), f"name {name!r} doesn't start with prefix {prefix!r}" - name = name[len(prefix) + 1 :] - name = name.replace(os.path.sep, "/") - return name - - path = os.path.join(parentdir, path) - name = _clean_zip_name(path, rootdir) - return self.name + "/" + name - - def archive(self, build_dir: Optional[str]) -> None: - """Saves archive to provided build_dir. - - Used for saving downloaded VCS requirements as part of `pip download`. - """ - assert self.source_dir - if build_dir is None: - return - - create_archive = True - archive_name = "{}-{}.zip".format(self.name, self.metadata["version"]) - archive_path = os.path.join(build_dir, archive_name) - - if os.path.exists(archive_path): - response = ask_path_exists( - "The file {} exists. 
(i)gnore, (w)ipe, " - "(b)ackup, (a)bort ".format(display_path(archive_path)), - ("i", "w", "b", "a"), - ) - if response == "i": - create_archive = False - elif response == "w": - logger.warning("Deleting %s", display_path(archive_path)) - os.remove(archive_path) - elif response == "b": - dest_file = backup_dir(archive_path) - logger.warning( - "Backing up %s to %s", - display_path(archive_path), - display_path(dest_file), - ) - shutil.move(archive_path, dest_file) - elif response == "a": - sys.exit(-1) - - if not create_archive: - return - - zip_output = zipfile.ZipFile( - archive_path, - "w", - zipfile.ZIP_DEFLATED, - allowZip64=True, - ) - with zip_output: - dir = os.path.normcase(os.path.abspath(self.unpacked_source_directory)) - for dirpath, dirnames, filenames in os.walk(dir): - for dirname in dirnames: - dir_arcname = self._get_archive_name( - dirname, - parentdir=dirpath, - rootdir=dir, - ) - zipdir = zipfile.ZipInfo(dir_arcname + "/") - zipdir.external_attr = 0x1ED << 16 # 0o755 - zip_output.writestr(zipdir, "") - for filename in filenames: - file_arcname = self._get_archive_name( - filename, - parentdir=dirpath, - rootdir=dir, - ) - filename = os.path.join(dirpath, filename) - zip_output.write(filename, file_arcname) - - logger.info("Saved %s", display_path(archive_path)) - - def install( - self, - install_options: List[str], - global_options: Optional[Sequence[str]] = None, - root: Optional[str] = None, - home: Optional[str] = None, - prefix: Optional[str] = None, - warn_script_location: bool = True, - use_user_site: bool = False, - pycompile: bool = True, - ) -> None: - scheme = get_scheme( - self.name, - user=use_user_site, - home=home, - root=root, - isolated=self.isolated, - prefix=prefix, - ) - - global_options = global_options if global_options is not None else [] - if self.editable and not self.is_wheel: - install_editable_legacy( - install_options, - global_options, - prefix=prefix, - home=home, - use_user_site=use_user_site, - name=self.name, - setup_py_path=self.setup_py_path, - isolated=self.isolated, - build_env=self.build_env, - unpacked_source_directory=self.unpacked_source_directory, - ) - self.install_succeeded = True - return - - if self.is_wheel: - assert self.local_file_path - direct_url = None - if self.editable: - direct_url = direct_url_for_editable(self.unpacked_source_directory) - elif self.original_link: - direct_url = direct_url_from_link( - self.original_link, - self.source_dir, - self.original_link_is_in_wheel_cache, - ) - install_wheel( - self.name, - self.local_file_path, - scheme=scheme, - req_description=str(self.req), - pycompile=pycompile, - warn_script_location=warn_script_location, - direct_url=direct_url, - requested=self.user_supplied, - ) - self.install_succeeded = True - return - - # TODO: Why don't we do this for editable installs? - - # Extend the list of global and install options passed on to - # the setup.py call with the ones from the requirements file. - # Options specified in requirements file override those - # specified on the command line, since the last option given - # to setup.py is the one that is used. 
- global_options = list(global_options) + self.global_options - install_options = list(install_options) + self.install_options - - try: - success = install_legacy( - install_options=install_options, - global_options=global_options, - root=root, - home=home, - prefix=prefix, - use_user_site=use_user_site, - pycompile=pycompile, - scheme=scheme, - setup_py_path=self.setup_py_path, - isolated=self.isolated, - req_name=self.name, - build_env=self.build_env, - unpacked_source_directory=self.unpacked_source_directory, - req_description=str(self.req), - ) - except LegacyInstallFailure as exc: - self.install_succeeded = False - raise exc - except Exception: - self.install_succeeded = True - raise - - self.install_succeeded = success - - if success and self.legacy_install_reason == 8368: - deprecated( - reason=( - "{} was installed using the legacy 'setup.py install' " - "method, because a wheel could not be built for it.".format( - self.name - ) - ), - replacement="to fix the wheel build issue reported above", - gone_in=None, - issue=8368, - ) - - -def check_invalid_constraint_type(req: InstallRequirement) -> str: - - # Check for unsupported forms - problem = "" - if not req.name: - problem = "Unnamed requirements are not allowed as constraints" - elif req.editable: - problem = "Editable requirements are not allowed as constraints" - elif req.extras: - problem = "Constraints cannot have extras" - - if problem: - deprecated( - reason=( - "Constraints are only allowed to take the form of a package " - "name and a version specifier. Other forms were originally " - "permitted as an accident of the implementation, but were " - "undocumented. The new implementation of the resolver no " - "longer supports these forms." - ), - replacement="replacing the constraint with a requirement", - # No plan yet for when the new resolver becomes default - gone_in=None, - issue=8210, - ) - - return problem diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/pointless.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/pointless.py deleted file mode 100644 index eb73b2a795d7c7d4d3efbef3b2abdbbd882e93c6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/pointless.py +++ /dev/null @@ -1,71 +0,0 @@ -""" - pygments.lexers.pointless - ~~~~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for Pointless. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, words -from pygments.token import Comment, Error, Keyword, Name, Number, Operator, \ - Punctuation, String, Text - -__all__ = ['PointlessLexer'] - - -class PointlessLexer(RegexLexer): - """ - For Pointless source code. - - .. 
versionadded:: 2.7 - """ - - name = 'Pointless' - url = 'https://ptls.dev' - aliases = ['pointless'] - filenames = ['*.ptls'] - - ops = words([ - "+", "-", "*", "/", "**", "%", "+=", "-=", "*=", - "/=", "**=", "%=", "|>", "=", "==", "!=", "<", ">", - "<=", ">=", "=>", "$", "++", - ]) - - keywords = words([ - "if", "then", "else", "where", "with", "cond", - "case", "and", "or", "not", "in", "as", "for", - "requires", "throw", "try", "catch", "when", - "yield", "upval", - ], suffix=r'\b') - - tokens = { - 'root': [ - (r'[ \n\r]+', Text), - (r'--.*$', Comment.Single), - (r'"""', String, 'multiString'), - (r'"', String, 'string'), - (r'[\[\](){}:;,.]', Punctuation), - (ops, Operator), - (keywords, Keyword), - (r'\d+|\d*\.\d+', Number), - (r'(true|false)\b', Name.Builtin), - (r'[A-Z][a-zA-Z0-9]*\b', String.Symbol), - (r'output\b', Name.Variable.Magic), - (r'(export|import)\b', Keyword.Namespace), - (r'[a-z][a-zA-Z0-9]*\b', Name.Variable) - ], - 'multiString': [ - (r'\\.', String.Escape), - (r'"""', String, '#pop'), - (r'"', String), - (r'[^\\"]+', String), - ], - 'string': [ - (r'\\.', String.Escape), - (r'"', String, '#pop'), - (r'\n', Error), - (r'[^\\"]+', String), - ], - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/_main.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/_main.py deleted file mode 100644 index 04fdeeff17b5cc84b210f445b54b87d5b99e3748..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/_main.py +++ /dev/null @@ -1,9 +0,0 @@ -from warnings import warn - -from .cli import * # NOQA -from .cli import __all__ # NOQA -from .std import TqdmDeprecationWarning - -warn("This function will be removed in tqdm==5.0.0\n" - "Please use `tqdm.cli.*` instead of `tqdm._main.*`", - TqdmDeprecationWarning, stacklevel=2) diff --git a/spaces/pseudolab/K23MiniMed/README.md b/spaces/pseudolab/K23MiniMed/README.md deleted file mode 100644 index afde5b3ebf998f8ed59e0cc1b2320b8607d57e14..0000000000000000000000000000000000000000 --- a/spaces/pseudolab/K23MiniMed/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: K23MiniMed -emoji: ⚕️ -colorFrom: red -colorTo: orange -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -join my builder's server : https://discord.gg/VqTxc76K3u \ No newline at end of file diff --git a/spaces/qingxu98/gpt-academic/request_llm/bridge_newbingfree.py b/spaces/qingxu98/gpt-academic/request_llm/bridge_newbingfree.py deleted file mode 100644 index cc6e9b733b48fe255c15bc9fae3c9abd74f276ca..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/request_llm/bridge_newbingfree.py +++ /dev/null @@ -1,245 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" -from .edge_gpt_free import Chatbot as NewbingChatbot -load_message = "等待NewBing响应。" - -""" -======================================================================== -第二部分:子进程Worker(调用主体) -======================================================================== -""" -import time -import json -import re -import logging -import asyncio -import importlib -import threading -from toolbox import update_ui, get_conf, trimmed_format_exc -from multiprocessing import Process, Pipe - -def preprocess_newbing_out(s): - pattern = r'\^(\d+)\^' # 匹配^数字^ - sub = lambda m: '('+m.group(1)+')' # 
将匹配到的数字作为替换值 - result = re.sub(pattern, sub, s) # 替换操作 - if '[1]' in result: - result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n' - return result - -def preprocess_newbing_out_simple(result): - if '[1]' in result: - result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n' - return result - -class NewBingHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.newbing_model = None - self.info = "" - self.success = True - self.local_history = [] - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - self.success = False - import certifi, httpx, rich - self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。" - self.success = True - except: - self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。" - self.success = False - - def ready(self): - return self.newbing_model is not None - - async def async_run(self): - # 读取配置 - NEWBING_STYLE, = get_conf('NEWBING_STYLE') - from request_llm.bridge_all import model_info - endpoint = model_info['newbing']['endpoint'] - while True: - # 等待 - kwargs = self.child.recv() - question=kwargs['query'] - history=kwargs['history'] - system_prompt=kwargs['system_prompt'] - - # 是否重置 - if len(self.local_history) > 0 and len(history)==0: - await self.newbing_model.reset() - self.local_history = [] - - # 开始问问题 - prompt = "" - if system_prompt not in self.local_history: - self.local_history.append(system_prompt) - prompt += system_prompt + '\n' - - # 追加历史 - for ab in history: - a, b = ab - if a not in self.local_history: - self.local_history.append(a) - prompt += a + '\n' - - # 问题 - prompt += question - self.local_history.append(question) - print('question:', prompt) - # 提交 - async for final, response in self.newbing_model.ask_stream( - prompt=question, - conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"] - wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub" - ): - if not final: - print(response) - self.child.send(str(response)) - else: - print('-------- receive final ---------') - self.child.send('[Finish]') - # self.local_history.append(response) - - - def run(self): - """ - 这个函数运行在子进程 - """ - # 第一次运行,加载参数 - self.success = False - self.local_history = [] - if (self.newbing_model is None) or (not self.success): - # 代理设置 - proxies, NEWBING_COOKIES = get_conf('proxies', 'NEWBING_COOKIES') - if proxies is None: - self.proxies_https = None - else: - self.proxies_https = proxies['https'] - - if (NEWBING_COOKIES is not None) and len(NEWBING_COOKIES) > 100: - try: - cookies = json.loads(NEWBING_COOKIES) - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] NEWBING_COOKIES未填写或有格式错误。') - self.child.send('[Fail]'); self.child.send('[Finish]') - raise RuntimeError(f"NEWBING_COOKIES未填写或有格式错误。") - else: - cookies = None - - try: - self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies) - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Newbing组件。") - - self.success = True - try: - # 进入任务等待状态 - asyncio.run(self.async_run()) - except Exception: 
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] Newbing 请求失败,报错信息如下. 如果是与网络相关的问题,建议更换代理协议(推荐http)或代理节点 {tb_str}.') - self.child.send('[Fail]') - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - """ - 这个函数运行在主进程 - """ - self.threadLock.acquire() # 获取线程锁 - self.parent.send(kwargs) # 请求子进程 - while True: - res = self.parent.recv() # 等待newbing回复的片段 - if res == '[Finish]': break # 结束 - elif res == '[Fail]': self.success = False; break # 失败 - else: yield res # newbing回复的片段 - self.threadLock.release() # 释放线程锁 - - -""" -======================================================================== -第三部分:主进程统一调用函数接口 -======================================================================== -""" -global newbingfree_handle -newbingfree_handle = None - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global newbingfree_handle - if (newbingfree_handle is None) or (not newbingfree_handle.success): - newbingfree_handle = NewBingHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + newbingfree_handle.info - if not newbingfree_handle.success: - error = newbingfree_handle.info - newbingfree_handle = None - raise RuntimeError(error) - - # 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - if len(observe_window) >= 1: observe_window[0] = "[Local Message]: 等待NewBing响应中 ..." - for response in newbingfree_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - if len(observe_window) >= 1: observe_window[0] = preprocess_newbing_out_simple(response) - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return preprocess_newbing_out_simple(response) - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ...")) - - global newbingfree_handle - if (newbingfree_handle is None) or (not newbingfree_handle.success): - newbingfree_handle = NewBingHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + newbingfree_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not newbingfree_handle.success: - newbingfree_handle = None - return - - if additional_fn is not None: - from core_functional import handle_core_functionality - inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot) - - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...") - response = "[Local Message]: 等待NewBing响应中 ..." 
- yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - for response in newbingfree_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, preprocess_newbing_out(response)) - yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - if response == "[Local Message]: 等待NewBing响应中 ...": response = "[Local Message]: NewBing响应异常,请刷新界面重试 ..." - history.extend([inputs, response]) - logging.info(f'[raw_input] {inputs}') - logging.info(f'[response] {response}') - yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。") - diff --git a/spaces/qinzhu/moe-tts-tech/text/sanskrit.py b/spaces/qinzhu/moe-tts-tech/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/moe-tts-tech/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/qtoino/form_matcher/public/form0.html b/spaces/qtoino/form_matcher/public/form0.html deleted file mode 100644 index f3099e243bfe2c7d133a9c49dee20db7d65ea417..0000000000000000000000000000000000000000 --- a/spaces/qtoino/form_matcher/public/form0.html +++ /dev/null @@ -1,41 +0,0 @@ - - - - Resume Form - - -

[form0.html: a small HTML resume form titled "Resume Form", with an additional "Longer Version" group of input fields; the form markup and field labels did not survive extraction, so only these headings remain.]
- \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop CC 2018 V19.0.0.24821 Patch Utorrentl.md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop CC 2018 V19.0.0.24821 Patch Utorrentl.md deleted file mode 100644 index 0afe879bbe6f4e4b6e89dc4ef85ccf1465b8d0f6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop CC 2018 V19.0.0.24821 Patch Utorrentl.md +++ /dev/null @@ -1,7 +0,0 @@ -
-

adobePhotoshop.exe and Photoshop CC 2018 V19.0.0.24821 Patch Browser: very useful tools. Tcl/Winsock v1.2.30 build 209: support for older versions. Build: 2.6.13.0, Apache v2.2.15, LUA v5.1.2, Nginx v1.4.7, MySQL v5.0.96, PHP v5.4.26.

-

Download Adobe Photoshop CC 2018. All of the software programs and drivers were updated.Zlib v1.2.11 Support for zlib 1.2.11.eCommerce v4.3.0.21425 Support for eCommerce v4.3.0.21425.eCryptfs v1.0.9 Support for eCryptfs v1.0.9.eDaemon v1.0.0.16823 Support for eDaemon v1.0.0.16823.dell_eai_daemon_v2 v2.5.0.16253 V2.5.0.16253.secrets_4.0 v4.0.0.14747 Support for secrets 4.0 v4.0.0.14747.Library v0.7.2.19 E-mail v0.4.0.2 Support for e-mail v0.4.0.2.gOGLE v1.2.4.0 Support for gOGLE v1.2.4.0.gFire_6 v6.9.0.0 Support for gFire v6.9.0.0.gCore_3.2.0.61 Support for gCore v3.2.0.61.gMsCore_3.2.0.54 Support for gMsCore v3.2.0.54.gFtp_0.9.0.2 Support for gFtp v0.9.0.2.gBase_1.0.15 Support for gBase v1.0.15.gOsc_0.4.0.10d Support for gOsc v0.4.0.10d.gXls_0.9.0.15 Support for gXls v0.9.0.15.gPortable_0.7.1.111 Support for gPortable v0.7.1.111.gNet_1.8.1.4 Support for gNet v1.8.1.4.gCAPI_4.1.0.196 Support for gCAPI v4.1.0.196.gPathways_0.2.4.107 Support for gPathways v0.2.4.107.gMessages_1.0.3 Support for gMessages v1.0.3.gActiveJob_4.2.0.21 Support for gActiveJob v4.2.0.21.gPbkd_0.9.0.46 Support for gPbkd v0.9.0.46.gRss_0.10.0.355 Support for gRss v0.10.0.355.gPOE_0.6.0.0_2 Support for gPOE v0.6.0.0_2.gsot_0.8.

-

Adobe Photoshop CC 2018 V19.0.0.24821 Patch Utorrentl


Download Ziphttps://geags.com/2uCqAE



-

CnuoxDibDjD [url=CsonoMiteco [url=Kinesiografo Online [url= iCloud Lock Address Android[/url]Samsung Galaxy Note 10 [url=Conker]Deccicero [url=PilleriDiscok [url=Google Inforormation Phd [url= Easy vpn [url=TrickFizzer [url= Mobile phone plugins iphone[/url]]] Average Android[/url] Android For Beginners iPhone[/url] Smart Phone Unlock For Android[/url] Audiobook Languages MP3[/url] Netflix Opens in Spain Spain[/url] ] ] MySQL[/url] [url=DialFaceMakingthe a [/url]Free Nocharge[/url] [url= Mobile Tone Sound Converter 3.0.0.0 Crack[/url] Multiretrieve[/url] [url= ioca_potterpotter_v5.3.0.0_cracked.rar[/url] Visual Basic Studio 13.0.22.0 Serial[/url] Semantic Volume Volume 6.0.0.0 Serial Key[/url] Able Labs Global AIP-120 Driver 2.3d Pinout[/url] ([url= ArelaSoft Zoho Invoice Clone For Mac[/url] [url= pyCellCloudPro Java[/url] AirVideo for Mac/Iphone[/url] [url= XiphosDownload [url= CAD Free Pro 3 v 14.0.0.4 Keygen[/url] [url= Zuma Money Maker 2014 Serial Keygen[/url] Magnetric Design Caddition For PC[/url] [url= Auto Speech Recognition SRS [url= DCOM Datuerdroid[/url] uTorrent Serial Keygen[/url] [url= Bridge Ninja [url= BPSOtez XP / 2003 / 2008 / 2012 Serial Key[/url] Nutra Freak Business Essential Suite 4.7.2.0 Serial Number[/url] Multiple Advanced Iphone Slideshow ] ] Autodesk Fusion Max 2018 13.0.0.0 Product Code[/url] [url= eTechnologie Mobile, Jeu, Android, Positiv, Android [url= [url= MaxonSoft Video To Facebook Download [/url]Titan Soft Mobile Java2D[/url] [[url= Audio Tools For Mac [/url] MaxonSoft Video To Facebook Download [/url]Big Bang Keener 2.0.2.0 Crack [/url]Patchers [url= Zappar Roulette [url= autodesk Maya 2013 Full Crack[/url] [url= [url= Mobile Apps For Iphone [/url] Final Cut Pro X v8.4.0.0 Serial Key[/url] [url= Zopa Employee Database 3.0.0.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/CardLife Cardboard Survival Game.md b/spaces/quidiaMuxgu/Expedit-SAM/CardLife Cardboard Survival Game.md deleted file mode 100644 index 8a1e9f93e0cbe1a42883968a5bc10c20e6e9f4f2..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/CardLife Cardboard Survival Game.md +++ /dev/null @@ -1,8 +0,0 @@ -

CardLife: Cardboard Survival Game


DOWNLOAD > https://geags.com/2uCqvP



- -CardLife: Cardboard Survival is a full version of the game for Windows, which belongs to the Action category, and was developed by Freejam. The game is an apocalyptic survival simulator, which allows you to play the role of survivors after a global disaster. -According to the plot of the game, there was a global catastrophe in the world and all mankind died, and the survived people are trying to build their new society. -You have to control a small squad of four people, each of them has a certain task, and you can choose any of them to lead the squad and go through the world. 8a78ff9644
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Poser Pro 2014 Torrent - 20 BETTER.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Poser Pro 2014 Torrent - 20 BETTER.md deleted file mode 100644 index c0a405fc85e895315db06f9e0c6955e0de2d4eac..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Download Poser Pro 2014 Torrent - 20 BETTER.md +++ /dev/null @@ -1,42 +0,0 @@ -

Download Poser Pro 2014 Torrent - 20


DOWNLOAD 🆓 https://geags.com/2uCrss



- -The Poser Body Modeler Module is a powerful feature that is not provided in most other commercial tools . And the Poser Extras program does a great job of enabling you to draw or paint from scratch to your heart's content. I recommend the Poser Body Modeler Module to anyone who works in 3D, whether you're an animator or a hobbyist, because it provides a seamless way to take your personal body, from the inside out, and create the most realistic representation of your body that you can muster. - -BODY ANIMATION WITH POSEER - -In this book, you will learn: - - * how the Poser Body Modeler Module works - - * how to manipulate the Poser Body Modeler Module to create realistic body parts, textures, and poses - - * how to use the Poser Body Modeler Module to create body animation - -By the end of this book, you will: - - * understand how to work with the Poser Body Modeler Module - - * know what the Poser Body Modeler Module is and isn't - - * have learned how to begin creating body models in Poser - - * be able to combine body animations in Poser - - * know what the Body Module is, and what it isn't - -THE BODY MODELER MODULE - -The Poser Body Modeler Module gives you the tools you need to draw and paint the body from the inside out. It enables you to create a completely realistic model of your body from the inside out. The Poser Body Modeler Module is an integral part of the Poser tool system. - -When you add the Poser Body Modeler Module to your Poser, you can drag the Poser Body Modeler Module onto a 3D model in Poser or onto a 2D body part in the Poser-Body-Models.da file. The Poser Body Modeler Module enables you to turn a static pose into a body animation. Poser Body Models, or BMs, are used in many ways, from simple static poses to fully dynamic figures. - -The Poser Body Modeler Module has many uses: - - * You can create a full-body BMD with a single click. - - * You can create a BMD from scratch or from an existing mesh. - - * You can create a BMD by using one of the BMDs as a starting point, or you can start from a Photo Reference, a 3D 4fefd39f24
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Home Alone 2 Full Movie Free Download In Hindi.md b/spaces/quidiaMuxgu/Expedit-SAM/Home Alone 2 Full Movie Free Download In Hindi.md deleted file mode 100644 index 577974492a6338163582bd6584235062f7af6b36..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Home Alone 2 Full Movie Free Download In Hindi.md +++ /dev/null @@ -1,6 +0,0 @@ -

home alone 2 full movie free download in hindi


DOWNLOADhttps://geags.com/2uCqsm



- -... Alone 2 Lost in New York (Hindi) BRRip Full Movie Download, Movie download in 3gp, mp4, hd, avi, mkv, for mobile, pc, android, tab free, Home Alone 2 Lost ... 1fdad05405
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Lana Del Rey Born To Die Paradise Edition Download Torrent BETTER.md b/spaces/quidiaMuxgu/Expedit-SAM/Lana Del Rey Born To Die Paradise Edition Download Torrent BETTER.md deleted file mode 100644 index 55fc8c2038d80dda8daab2343e03ed1a4510a1b6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Lana Del Rey Born To Die Paradise Edition Download Torrent BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

Lana Del Rey Born To Die Paradise Edition Download Torrent


Download >>> https://geags.com/2uCryX



-
-The Complete Lana Del Rey Collection ... Videos will be included in a separate torrent/download. 7. ... Born To Die: The Paradise Edition (Standard Edition) 4d29de3e1b
-
-
-

diff --git a/spaces/r3gm/Ultimate-Vocal-Remover-WebUI/demucs/tasnet_v2.py b/spaces/r3gm/Ultimate-Vocal-Remover-WebUI/demucs/tasnet_v2.py deleted file mode 100644 index ecc1257925ea8f4fbe389ddd6d73ce9fdf45f6d4..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Ultimate-Vocal-Remover-WebUI/demucs/tasnet_v2.py +++ /dev/null @@ -1,452 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -# Created on 2018/12 -# Author: Kaituo XU -# Modified on 2019/11 by Alexandre Defossez, added support for multiple output channels -# Here is the original license: -# The MIT License (MIT) -# -# Copyright (c) 2018 Kaituo XU -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
- -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .utils import capture_init - -EPS = 1e-8 - - -def overlap_and_add(signal, frame_step): - outer_dimensions = signal.size()[:-2] - frames, frame_length = signal.size()[-2:] - - subframe_length = math.gcd(frame_length, frame_step) # gcd=Greatest Common Divisor - subframe_step = frame_step // subframe_length - subframes_per_frame = frame_length // subframe_length - output_size = frame_step * (frames - 1) + frame_length - output_subframes = output_size // subframe_length - - subframe_signal = signal.view(*outer_dimensions, -1, subframe_length) - - frame = torch.arange(0, output_subframes, - device=signal.device).unfold(0, subframes_per_frame, subframe_step) - frame = frame.long() # signal may in GPU or CPU - frame = frame.contiguous().view(-1) - - result = signal.new_zeros(*outer_dimensions, output_subframes, subframe_length) - result.index_add_(-2, frame, subframe_signal) - result = result.view(*outer_dimensions, -1) - return result - - -class ConvTasNet(nn.Module): - @capture_init - def __init__(self, - sources, - N=256, - L=20, - B=256, - H=512, - P=3, - X=8, - R=4, - audio_channels=2, - norm_type="gLN", - causal=False, - mask_nonlinear='relu', - samplerate=44100, - segment_length=44100 * 2 * 4): - """ - Args: - sources: list of sources - N: Number of filters in autoencoder - L: Length of the filters (in samples) - B: Number of channels in bottleneck 1 × 1-conv block - H: Number of channels in convolutional blocks - P: Kernel size in convolutional blocks - X: Number of convolutional blocks in each repeat - R: Number of repeats - norm_type: BN, gLN, cLN - causal: causal or non-causal - mask_nonlinear: use which non-linear function to generate mask - """ - super(ConvTasNet, self).__init__() - # Hyper-parameter - self.sources = sources - self.C = len(sources) - self.N, self.L, self.B, self.H, self.P, self.X, self.R = N, L, B, H, P, X, R - self.norm_type = norm_type - self.causal = causal - self.mask_nonlinear = mask_nonlinear - self.audio_channels = audio_channels - self.samplerate = samplerate - self.segment_length = segment_length - # Components - self.encoder = Encoder(L, N, audio_channels) - self.separator = TemporalConvNet( - N, B, H, P, X, R, self.C, norm_type, causal, mask_nonlinear) - self.decoder = Decoder(N, L, audio_channels) - # init - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - def valid_length(self, length): - return length - - def forward(self, mixture): - """ - Args: - mixture: [M, T], M is batch size, T is #samples - Returns: - est_source: [M, C, T] - """ - mixture_w = self.encoder(mixture) - est_mask = self.separator(mixture_w) - est_source = self.decoder(mixture_w, est_mask) - - # T changed after conv1d in encoder, fix it here - T_origin = mixture.size(-1) - T_conv = est_source.size(-1) - est_source = F.pad(est_source, (0, T_origin - T_conv)) - return est_source - - -class Encoder(nn.Module): - """Estimation of the nonnegative mixture weight by a 1-D conv layer. 
- """ - def __init__(self, L, N, audio_channels): - super(Encoder, self).__init__() - # Hyper-parameter - self.L, self.N = L, N - # Components - # 50% overlap - self.conv1d_U = nn.Conv1d(audio_channels, N, kernel_size=L, stride=L // 2, bias=False) - - def forward(self, mixture): - """ - Args: - mixture: [M, T], M is batch size, T is #samples - Returns: - mixture_w: [M, N, K], where K = (T-L)/(L/2)+1 = 2T/L-1 - """ - mixture_w = F.relu(self.conv1d_U(mixture)) # [M, N, K] - return mixture_w - - -class Decoder(nn.Module): - def __init__(self, N, L, audio_channels): - super(Decoder, self).__init__() - # Hyper-parameter - self.N, self.L = N, L - self.audio_channels = audio_channels - # Components - self.basis_signals = nn.Linear(N, audio_channels * L, bias=False) - - def forward(self, mixture_w, est_mask): - """ - Args: - mixture_w: [M, N, K] - est_mask: [M, C, N, K] - Returns: - est_source: [M, C, T] - """ - # D = W * M - source_w = torch.unsqueeze(mixture_w, 1) * est_mask # [M, C, N, K] - source_w = torch.transpose(source_w, 2, 3) # [M, C, K, N] - # S = DV - est_source = self.basis_signals(source_w) # [M, C, K, ac * L] - m, c, k, _ = est_source.size() - est_source = est_source.view(m, c, k, self.audio_channels, -1).transpose(2, 3).contiguous() - est_source = overlap_and_add(est_source, self.L // 2) # M x C x ac x T - return est_source - - -class TemporalConvNet(nn.Module): - def __init__(self, N, B, H, P, X, R, C, norm_type="gLN", causal=False, mask_nonlinear='relu'): - """ - Args: - N: Number of filters in autoencoder - B: Number of channels in bottleneck 1 × 1-conv block - H: Number of channels in convolutional blocks - P: Kernel size in convolutional blocks - X: Number of convolutional blocks in each repeat - R: Number of repeats - C: Number of speakers - norm_type: BN, gLN, cLN - causal: causal or non-causal - mask_nonlinear: use which non-linear function to generate mask - """ - super(TemporalConvNet, self).__init__() - # Hyper-parameter - self.C = C - self.mask_nonlinear = mask_nonlinear - # Components - # [M, N, K] -> [M, N, K] - layer_norm = ChannelwiseLayerNorm(N) - # [M, N, K] -> [M, B, K] - bottleneck_conv1x1 = nn.Conv1d(N, B, 1, bias=False) - # [M, B, K] -> [M, B, K] - repeats = [] - for r in range(R): - blocks = [] - for x in range(X): - dilation = 2**x - padding = (P - 1) * dilation if causal else (P - 1) * dilation // 2 - blocks += [ - TemporalBlock(B, - H, - P, - stride=1, - padding=padding, - dilation=dilation, - norm_type=norm_type, - causal=causal) - ] - repeats += [nn.Sequential(*blocks)] - temporal_conv_net = nn.Sequential(*repeats) - # [M, B, K] -> [M, C*N, K] - mask_conv1x1 = nn.Conv1d(B, C * N, 1, bias=False) - # Put together - self.network = nn.Sequential(layer_norm, bottleneck_conv1x1, temporal_conv_net, - mask_conv1x1) - - def forward(self, mixture_w): - """ - Keep this API same with TasNet - Args: - mixture_w: [M, N, K], M is batch size - returns: - est_mask: [M, C, N, K] - """ - M, N, K = mixture_w.size() - score = self.network(mixture_w) # [M, N, K] -> [M, C*N, K] - score = score.view(M, self.C, N, K) # [M, C*N, K] -> [M, C, N, K] - if self.mask_nonlinear == 'softmax': - est_mask = F.softmax(score, dim=1) - elif self.mask_nonlinear == 'relu': - est_mask = F.relu(score) - else: - raise ValueError("Unsupported mask non-linear function") - return est_mask - - -class TemporalBlock(nn.Module): - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - norm_type="gLN", - causal=False): - super(TemporalBlock, self).__init__() - 
# [M, B, K] -> [M, H, K] - conv1x1 = nn.Conv1d(in_channels, out_channels, 1, bias=False) - prelu = nn.PReLU() - norm = chose_norm(norm_type, out_channels) - # [M, H, K] -> [M, B, K] - dsconv = DepthwiseSeparableConv(out_channels, in_channels, kernel_size, stride, padding, - dilation, norm_type, causal) - # Put together - self.net = nn.Sequential(conv1x1, prelu, norm, dsconv) - - def forward(self, x): - """ - Args: - x: [M, B, K] - Returns: - [M, B, K] - """ - residual = x - out = self.net(x) - # TODO: when P = 3 here works fine, but when P = 2 maybe need to pad? - return out + residual # look like w/o F.relu is better than w/ F.relu - # return F.relu(out + residual) - - -class DepthwiseSeparableConv(nn.Module): - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - norm_type="gLN", - causal=False): - super(DepthwiseSeparableConv, self).__init__() - # Use `groups` option to implement depthwise convolution - # [M, H, K] -> [M, H, K] - depthwise_conv = nn.Conv1d(in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - bias=False) - if causal: - chomp = Chomp1d(padding) - prelu = nn.PReLU() - norm = chose_norm(norm_type, in_channels) - # [M, H, K] -> [M, B, K] - pointwise_conv = nn.Conv1d(in_channels, out_channels, 1, bias=False) - # Put together - if causal: - self.net = nn.Sequential(depthwise_conv, chomp, prelu, norm, pointwise_conv) - else: - self.net = nn.Sequential(depthwise_conv, prelu, norm, pointwise_conv) - - def forward(self, x): - """ - Args: - x: [M, H, K] - Returns: - result: [M, B, K] - """ - return self.net(x) - - -class Chomp1d(nn.Module): - """To ensure the output length is the same as the input. - """ - def __init__(self, chomp_size): - super(Chomp1d, self).__init__() - self.chomp_size = chomp_size - - def forward(self, x): - """ - Args: - x: [M, H, Kpad] - Returns: - [M, H, K] - """ - return x[:, :, :-self.chomp_size].contiguous() - - -def chose_norm(norm_type, channel_size): - """The input of normlization will be (M, C, K), where M is batch size, - C is channel size and K is sequence length. - """ - if norm_type == "gLN": - return GlobalLayerNorm(channel_size) - elif norm_type == "cLN": - return ChannelwiseLayerNorm(channel_size) - elif norm_type == "id": - return nn.Identity() - else: # norm_type == "BN": - # Given input (M, C, K), nn.BatchNorm1d(C) will accumulate statics - # along M and K, so this BN usage is right. 
- return nn.BatchNorm1d(channel_size) - - -# TODO: Use nn.LayerNorm to impl cLN to speed up -class ChannelwiseLayerNorm(nn.Module): - """Channel-wise Layer Normalization (cLN)""" - def __init__(self, channel_size): - super(ChannelwiseLayerNorm, self).__init__() - self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.reset_parameters() - - def reset_parameters(self): - self.gamma.data.fill_(1) - self.beta.data.zero_() - - def forward(self, y): - """ - Args: - y: [M, N, K], M is batch size, N is channel size, K is length - Returns: - cLN_y: [M, N, K] - """ - mean = torch.mean(y, dim=1, keepdim=True) # [M, 1, K] - var = torch.var(y, dim=1, keepdim=True, unbiased=False) # [M, 1, K] - cLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta - return cLN_y - - -class GlobalLayerNorm(nn.Module): - """Global Layer Normalization (gLN)""" - def __init__(self, channel_size): - super(GlobalLayerNorm, self).__init__() - self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.reset_parameters() - - def reset_parameters(self): - self.gamma.data.fill_(1) - self.beta.data.zero_() - - def forward(self, y): - """ - Args: - y: [M, N, K], M is batch size, N is channel size, K is length - Returns: - gLN_y: [M, N, K] - """ - # TODO: in torch 1.0, torch.mean() support dim list - mean = y.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) # [M, 1, 1] - var = (torch.pow(y - mean, 2)).mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) - gLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta - return gLN_y - - -if __name__ == "__main__": - torch.manual_seed(123) - M, N, L, T = 2, 3, 4, 12 - K = 2 * T // L - 1 - B, H, P, X, R, C, norm_type, causal = 2, 3, 3, 3, 2, 2, "gLN", False - mixture = torch.randint(3, (M, T)) - # test Encoder - encoder = Encoder(L, N) - encoder.conv1d_U.weight.data = torch.randint(2, encoder.conv1d_U.weight.size()) - mixture_w = encoder(mixture) - print('mixture', mixture) - print('U', encoder.conv1d_U.weight) - print('mixture_w', mixture_w) - print('mixture_w size', mixture_w.size()) - - # test TemporalConvNet - separator = TemporalConvNet(N, B, H, P, X, R, C, norm_type=norm_type, causal=causal) - est_mask = separator(mixture_w) - print('est_mask', est_mask) - - # test Decoder - decoder = Decoder(N, L) - est_mask = torch.randint(2, (B, K, C, N)) - est_source = decoder(mixture_w, est_mask) - print('est_source', est_source) - - # test Conv-TasNet - conv_tasnet = ConvTasNet(N, L, B, H, P, X, R, C, norm_type=norm_type) - est_source = conv_tasnet(mixture) - print('est_source', est_source) - print('est_source size', est_source.size()) diff --git a/spaces/raedeXanto/academic-chatgpt-beta/AUTODATA 3.45 Crack FULL free download How to install and use this software easily and safely.md b/spaces/raedeXanto/academic-chatgpt-beta/AUTODATA 3.45 Crack FULL free download How to install and use this software easily and safely.md deleted file mode 100644 index 0bcb349286182fc3839cc28ae72071e15fdd0f0b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/AUTODATA 3.45 Crack FULL free download How to install and use this software easily and safely.md +++ /dev/null @@ -1,106 +0,0 @@ -
-

How to Download Programa Simplo Automotivo Serial

-

If you are looking for a reliable and easy-to-use software for automotive diagnostics and repair, you might have heard of Programa Simplo Automotivo. This software is one of the most popular and comprehensive tools for professionals and enthusiasts who want to learn more about their vehicles and fix them quickly and efficiently. But how can you download Programa Simplo Automotivo serial and enjoy all its features? In this article, we will explain everything you need to know about this software and how to get it with a valid serial number.

-

What is Programa Simplo Automotivo?

-

Programa Simplo Automotivo is a software that provides detailed information about various models and brands of cars, trucks, motorcycles, buses, tractors, and other vehicles. It contains diagrams, schematics, manuals, codes, procedures, tips, tricks, and more for diagnosing and repairing any problem that your vehicle might have. It also allows you to connect your computer to your vehicle's OBD-II port and perform real-time tests, scans, adjustments, calibrations, resets, etc.

-

downloadprogramasimploautomotivoserial


Download https://tinourl.com/2uL506



-

Why do you need Programa Simplo Automotivo?

-

Programa Simplo Automotivo is a must-have software for anyone who works with vehicles or wants to learn more about them. Whether you are a mechanic, a technician, a student, a hobbyist, or an owner, you will find this software very useful and helpful for your needs. With Programa Simplo Automotivo, you can:

-
    -
  • Save time and money by diagnosing and fixing your vehicle yourself
  • -
  • Learn more about how your vehicle works and how to maintain it properly
  • -
  • Improve your skills and knowledge as a professional or an enthusiast
  • -
  • Access thousands of resources and information about various vehicles
  • -
  • Enjoy a user-friendly interface and easy navigation
  • -
-

How does Programa Simplo Automotivo work?

-

Programa Simplo Automotivo works by installing it on your computer (Windows XP or higher) and activating it with a serial number. You can then access its database of information by selecting your vehicle's make, model, year, engine type, etc. You can also connect your computer to your vehicle's OBD-II port using a compatible cable or adapter and perform various tests and functions on your vehicle.
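To give a feel for what this kind of OBD-II communication looks like under the hood, here is a minimal sketch using the open-source python-OBD library (a separate tool, not part of Programa Simplo Automotivo); the adapter type and port name are assumptions for illustration only:

```python
# Minimal OBD-II query sketch using the python-OBD library (pip install obd).
# Assumes an ELM327-style USB or Bluetooth adapter plugged into the vehicle's OBD-II port.
import obd

# Auto-detects the adapter; an explicit port can also be passed, e.g. obd.OBD("COM3").
connection = obd.OBD()

if connection.is_connected():
    rpm = connection.query(obd.commands.RPM)                # live engine speed
    coolant = connection.query(obd.commands.COOLANT_TEMP)   # coolant temperature
    dtcs = connection.query(obd.commands.GET_DTC)           # stored diagnostic trouble codes
    print("RPM:", rpm.value)
    print("Coolant temp:", coolant.value)
    print("Trouble codes:", dtcs.value)
else:
    print("No OBD-II adapter found - check the cable and ignition.")
```

Commercial diagnostic tools such as the one described here wrap the same kind of requests in a guided interface and add manufacturer-specific procedures on top of the standard OBD-II commands.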

-

How to download Programa Simplo Automotivo serial?

-

To download Programa Simplo Automotivo serial, you need to purchase it from its official website or from an authorized reseller. The price of the software depends on the plan you choose (monthly, yearly, or lifetime) and the payment method you use (credit card, PayPal, bank transfer, etc.). The process of downloading Programa Simplo Automotivo serial is very simple and straightforward. Here are the steps you need to follow:

-

download programa simplo automotivo serial key
-download programa simplo automotivo serial number
-download programa simplo automotivo serial crack
-download programa simplo automotivo serial gratis
-download programa simplo automotivo serial completo
-download programa simplo automotivo serial 2021
-download programa simplo automotivo serial 2022
-download programa simplo automotivo serial 2023
-download programa simplo automotivo serial atualizado
-download programa simplo automotivo serial baixar
-download programa simplo automotivo serial torrent
-download programa simplo automotivo serial mega
-download programa simplo automotivo serial mediafire
-download programa simplo automotivo serial google drive
-download programa simplo automotivo serial dropbox
-download programa simplo automotivo serial online
-download programa simplo automotivo serial free
-download programa simplo automotivo serial full
-download programa simplo automotivo serial premium
-download programa simplo automotivo serial pro
-download programa simplo automotivo serial plus
-download programa simplo automotivo serial ultimate
-download programa simplo automotivo serial deluxe
-download programa simplo automotivo serial platinum
-download programa simplo automotivo serial gold
-download programa simplo automotivo serial edition
-download programa simplo automotivo serial version
-download programa simplo automotivo serial windows 10
-download programa simplo automotivo serial windows 7
-download programa simplo automotivo serial windows 8.1
-download programa simplo automotivo serial mac os x
-download programa simplo automotivo serial linux
-download programa simplo automotivo serial android
-download programa simplo automotivo serial ios
-download programa simplo automotivo serial iphone
-download programa simplo automotivo serial ipad
-download programa simplo automotivoserial pc
-download program asimp loautom otivoserial laptop
-download program asimp loautom otivoserial desktop
-down load program asimp loautom otivoserial tablet
-down load program asimp loautom otivoserial smartphone
-down load program asimp loautom otivoserial usb
-down load program asimp loautom otivoserial dvd
-down load program asimp loautom otivoserial cd
-down load program asimp loautom otivoserial review
-down load program asimp loautom otivoserial tutorial
-down load program asimp loautom otivoserial guide
-down load program asimp loautom otivoserial manual
-down load program asimp loautom otivoserial video

-

Step 1: Visit the official website of Programa Simplo Automotivo

-

The first step is to visit the official website of Programa Simplo Automotivo at https://www.programasimploautomotivo.com/. There you will find more information about the software features, benefits, testimonials, and plans.

-

Step 2: Choose your preferred plan and payment method

-

The next step is to choose your preferred plan (monthly, yearly, or lifetime) and payment method (credit card, PayPal, bank transfer, etc.). You can also choose between downloading the software online or receiving it on a DVD by mail. Once you have made your choice, click on "Buy Now" or "Add to Cart" button.

-

Step 3: Receive your serial number via email

-

After you have completed your payment, you will receive an email confirmation with your order details and your serial number. This is a unique code that will allow you to activate your software once you install it on your computer. Keep this email safe as you will need it later.

-

Step 4: Install Programa Simplo Automotivo on your computer

-

The next step is to install Programa Simplo Automotivo on your computer. If you have chosen to download the software online, you will receive a link to download it in your email confirmation. If you have chosen to receive it on a DVD by mail, you will receive it within a few days depending on your location. To install the software, simply follow the instructions on the screen.

-

Step 5: Activate Programa Simplo Automotivo with your serial number

-

The final step is to activate Programa Simplo Automotivo with your serial number. To do this, open the software on your computer and enter your serial number when prompted. You will then be able to access all its features and updates without any limitations.

-

What are the benefits of using Programa Simplo Automotivo serial?

-

By using Programa Simplo Automotivo serial, you will enjoy many benefits that will make your experience with this software even better. Some of these benefits are:

-

Access to all features and updates of Programa Simplo Automotivo

-

By activating your software with a valid serial number, you will be able to access all its features and updates without any restrictions or interruptions. You will be able to use all its functions and tools and access its database of information and resources for any vehicle you want.

-

Unlimited use of Programa Simplo Automotivo on any device

-

By using Programa Simplo Automotivo serial, you will be able to use this software on any device you want as long as it meets the minimum system requirements (Windows XP or higher). You can install it on multiple computers and devices and use it whenever you need it.

Technical support and customer service from Programa Simplo Automotivo team

-

By using Programa Simplo Automotivo serial, you will also receive technical support and customer service from the Programa Simplo Automotivo team. If you have any questions, issues, or feedback about the software, you can contact them via email, phone, or chat and they will assist you as soon as possible. They are friendly, professional, and knowledgeable and they will make sure that you are satisfied with your purchase.

-

Satisfaction guarantee and refund policy from Programa Simplo Automotivo team

-

Finally, by using Programa Simplo Automotivo serial, you will also enjoy a satisfaction guarantee and a refund policy from the Programa Simplo Automotivo team. They are confident that you will love their software and that it will meet your expectations and needs. However, if for any reason you are not happy with your purchase, you can request a full refund within 30 days of your order and they will process it without any hassle.

-

Conclusion

-

Programa Simplo Automotivo is a software that provides detailed information about various models and brands of vehicles. It allows you to diagnose and repair any problem that your vehicle might have. It also lets you connect your computer to your vehicle's OBD-II port and perform real-time tests and functions on your vehicle. To download Programa Simplo Automotivo serial, you need to purchase it from its official website or from an authorized reseller. You will then receive a serial number via email that will allow you to activate your software on your computer. By using Programa Simplo Automotivo serial, you will enjoy many benefits such as access to all features and updates of the software, unlimited use of the software on any device, technical support and customer service from the Programa Simplo Automotivo team, and satisfaction guarantee and refund policy from the Programa Simplo Automotivo team.

-

FAQs

-

Here are some frequently asked questions about Programa Simplo Automotivo serial:

-
    -
  1. What is the difference between Programa Simplo Automotivo online and offline?
    -Programa Simplo Automotivo online is a version of the software that you can access through a web browser without installing it on your computer. Programa Simplo Automotivo offline is a version of the software that you can install on your computer and use without an internet connection.
  2. -
  3. How can I update my Programa Simplo Automotivo software?
    -You can update your Programa Simplo Automotivo software by clicking on the "Check for Updates" button on the main menu of the software. You will then be notified if there are any new updates available for your software. You can then download and install them for free.
  4. -
  5. How can I get more information about Programa Simplo Automotivo?
    -You can get more information about Programa Simplo Automotivo by visiting its official website at https://www.programasimploautomotivo.com/. There you will find more details about the software features, benefits, testimonials, plans, etc. You can also contact the Programa Simplo Automotivo team via email, phone, or chat if you have any questions or feedback.
  6. -
  7. Is Programa Simplo Automotivo compatible with my vehicle?
    -Programa Simplo Automotivo is compatible with most vehicles that have an OBD-II port. This is a standard port that is found in most vehicles manufactured after 1996. You can check if your vehicle has an OBD-II port by looking under the dashboard or near the steering wheel. If you see a 16-pin connector that looks like this: OBD-II connector then your vehicle is compatible with Programa Simplo Automotivo.
  8. -
  9. What are some alternatives to Programa Simplo Automotivo?
    -Some alternatives to Programa Simplo Automotivo are Autodata, Haynes Pro, Mitchell OnDemand, Alldata, Bosch ESI Tronic, etc. These are also software that provide information about various vehicles and allow you to diagnose and repair them. However, they may have different features, prices, interfaces, etc. You can compare them with Programa Simplo Automotivo and choose the one that suits your needs best.
  10. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Photoshop Cc 14.2 Crack Torrentl Learn How to Edit Photos Like a Pro with this Free Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Adobe Photoshop Cc 14.2 Crack Torrentl Learn How to Edit Photos Like a Pro with this Free Download.md deleted file mode 100644 index bc9c9fd1deda6b097d5147f7985a697a9dd5bbda..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Photoshop Cc 14.2 Crack Torrentl Learn How to Edit Photos Like a Pro with this Free Download.md +++ /dev/null @@ -1,146 +0,0 @@ - -

Adobe Photoshop CC 14.2 Crack Torrent: What Is It and How to Use It?

-

If you are looking for powerful and versatile software to edit and create stunning images, you might have heard of Adobe Photoshop CC 14.2. This is the latest version of the popular image editing software, and it offers many professional tools and features to enhance your creativity and productivity. But what if you don't want to pay for the subscription fee or the license key? Is there a way to get Adobe Photoshop CC 14.2 for free? The answer is yes, but it comes with some risks and challenges. In this article, we will explain what Adobe Photoshop CC 14.2 crack torrent is, how to use it, and what the pros and cons of using it are.

-

What Is Adobe Photoshop CC 14.2?

-

Adobe Photoshop CC 14.2 is the latest update of the Adobe Photoshop Creative Cloud series, which was released in January 2014. It is a software that allows you to edit, create, and manipulate images with a variety of tools and features, such as:

-

Adobe Photoshop Cc 14.2 Crack Torrentl


Download ★★★ https://tinourl.com/2uL5nw



-

Features and Benefits of Adobe Photoshop CC 14.2

-
    -
  • Smart Sharpen: This feature lets you sharpen your images with more control and precision, reducing noise and halo effects.
  • -
  • Camera Shake Reduction: This feature helps you restore sharpness to your blurry photos caused by camera shake or movement.
  • -
  • Perspective Warp: This feature allows you to adjust the perspective of your images, such as changing the viewpoint or correcting distorted lines.
  • -
  • Linked Smart Objects: This feature enables you to link your smart objects to external files, so that any changes made to the source file will be reflected in your document.
  • -
  • 3D Printing Support: This feature allows you to print your 3D models directly from Photoshop, with previews, settings, and color management.
  • -
  • And many more: Adobe Photoshop CC 14.2 also offers other improvements and enhancements, such as better performance, smoother workflows, new filters, new fonts, new presets, etc.
  • -
-

System Requirements and Compatibility of Adobe Photoshop CC 14.2

-

To run Adobe Photoshop CC 14.2 smoothly on your computer, you need to meet the following minimum system requirements:

-
    -
  • Operating system: Windows 7 SP1 or later (64-bit), or Mac OS X 10.7 or later (64-bit)
  • -
  • Processor: Intel Core 2 Duo or AMD Athlon 64 X2 or faster
  • -
  • Memory: 2 GB of RAM or more
  • -
  • Hard disk space: 3 GB of available space or more
  • -
  • Graphics card: OpenGL 2.0 compatible with 512 MB of VRAM or more
  • -
  • Display: 1024 x 768 resolution or higher
  • -
  • Internet connection: Required for activation, updates, and online services
  • -
-

Adobe Photoshop CC 14.2 is compatible with most image formats, such as JPEG, PNG, GIF, TIFF, PSD, etc. It also supports various plugins and extensions that can enhance its functionality and compatibility.

-

Adobe Photoshop Cc 14.2 Full Version Free Download
-How to Install Adobe Photoshop Cc 14.2 Crack
-Adobe Photoshop Cc 14.2 Serial Number Generator
-Adobe Photoshop Cc 14.2 Patch Download
-Adobe Photoshop Cc 14.2 Keygen Torrent
-Adobe Photoshop Cc 14.2 Activator for Windows
-Adobe Photoshop Cc 14.2 License Key Crack
-Adobe Photoshop Cc 14.2 Mac Os X Crack
-Adobe Photoshop Cc 14.2 Portable Torrent
-Adobe Photoshop Cc 14.2 Crack Only
-Adobe Photoshop Cc 14.2 Update Download
-Adobe Photoshop Cc 14.2 Features and Benefits
-Adobe Photoshop Cc 14.2 System Requirements
-Adobe Photoshop Cc 14.2 Tutorial for Beginners
-Adobe Photoshop Cc 14.2 Tips and Tricks
-Adobe Photoshop Cc 14.2 Plugins Free Download
-Adobe Photoshop Cc 14.2 Brushes Pack Torrent
-Adobe Photoshop Cc 14.2 Fonts Collection Download
-Adobe Photoshop Cc 14.2 Presets and Actions Torrent
-Adobe Photoshop Cc 14.2 Filters and Effects Download
-Adobe Photoshop Cc 14.2 Review and Comparison
-Adobe Photoshop Cc 14.2 Alternatives and Competitors
-Adobe Photoshop Cc 14.2 Problems and Solutions
-Adobe Photoshop Cc 14.2 Support and Help
-Adobe Photoshop Cc 14.2 Forum and Community
-Adobe Photoshop Cc 14.2 Discount and Coupon Code
-Adobe Photoshop Cc 14.2 Trial and Demo Download
-Adobe Photoshop Cc 14.2 Online and Cloud Version
-Adobe Photoshop Cc 14.2 Mobile and Tablet App
-Adobe Photoshop Cc 14.2 Web and Browser Extension
-How to Uninstall Adobe Photoshop Cc 14.2 Crack
-How to Fix Adobe Photoshop Cc 14.2 Crack Errors
-How to Upgrade to Adobe Photoshop Cc 15 Crack
-How to Downgrade to Adobe Photoshop Cs6 Crack
-How to Transfer Adobe Photoshop Cc 14.2 Crack License
-How to Backup and Restore Adobe Photoshop Cc 14.2 Crack Data
-How to Customize and Optimize Adobe Photoshop Cc 14.2 Crack Settings
-How to Use Adobe Photoshop Cc 14.2 Crack with Other Software
-How to Create and Edit Images with Adobe Photoshop Cc 14.2 Crack
-How to Design and Print with Adobe Photoshop Cc 14.2 Crack
-How to Draw and Paint with Adobe Photoshop Cc 14.2 Crack
-How to Animate and Video Edit with Adobe Photoshop Cc 14.2 Crack
-How to Retouch and Enhance with Adobe Photoshop Cc 14.2 Crack
-How to Composite and Blend with Adobe Photoshop Cc 14.2 Crack
-How to Add Text and Graphics with Adobe Photoshop Cc 14.2 Crack
-How to Remove Background and Objects with Adobe Photoshop Cc 14.2 Crack
-How to Adjust Color and Light with Adobe Photoshop Cc 14.2 Crack
-How to Apply Filters and Effects with Adobe Photoshop Cc 14.2 Crack
-How to Save and Export with Adobe Photoshop Cc 14.2 Crack

-

What Is a Crack Torrent?

-

A crack torrent is a file that contains a cracked version of a software or a game, which means that it has been modified or hacked to bypass the security measures and allow unlimited access without paying for it. A crack torrent is usually downloaded from peer-to-peer networks or websites that host illegal content.
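As background, a .torrent file itself is just a small metadata file in the "bencode" format: it lists the tracker URL, file names, and piece hashes rather than the actual content. The minimal reader below is a generic sketch of that format, not a tool tied to any particular download, and the file name in it is hypothetical:

```python
# Minimal bencode reader: decodes the metadata structure used by .torrent files.
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; returns (value, next_index)."""
    c = data[i:i + 1]
    if c == b"i":                                  # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                                  # list: l<items>e
        i, items = i + 1, []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                                  # dictionary: d<key><value>...e
        i, d = i + 1, {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            d[key], i = bdecode(data, i)
        return d, i + 1
    colon = data.index(b":", i)                    # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

with open("example.torrent", "rb") as f:           # hypothetical file name
    meta, _ = bdecode(f.read())
print(meta.get(b"announce"))                        # tracker URL
print(meta[b"info"][b"name"])                       # advertised name of the content
```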

-

Advantages and Disadvantages of Using a Crack Torrent

-

The main advantage of using a crack torrent is that you can get the software or the game for free, without spending any money or subscribing to any service. You can also enjoy all the features and updates without any restrictions or limitations.

-

The main disadvantage of using a crack torrent is that it is illegal and unethical, as it violates the intellectual property rights of the developers and publishers. You can also face legal consequences if you are caught using or distributing a crack torrent. Moreover, using a crack torrent can expose your computer to various risks and threats, such as viruses, malware, spyware, adware, etc., which can harm your system or steal your personal information.

-

Risks and Precautions of Using a Crack Torrent

-

If you decide to use a crack torrent despite its disadvantages, you should be aware of the potential risks and take some precautions to minimize them.

-
    -
  • Risk: The crack torrent may not work properly or may cause errors or crashes on your system.
  • -
  • Precaution: Make sure that the crack torrent matches your system specifications and compatibility requirements. Also check the comments and ratings of other users who have downloaded the same crack torrent before you.
  • -
  • Risk: The crack torrent may contain malicious code or hidden programs that can infect your system or compromise your security.
  • -
  • Precaution: Scan the crack torrent with an antivirus program before opening it or installing it on your system. Also avoid clicking on any suspicious links or pop-ups that may appear while using the crack torrent.
  • -
  • Risk: The crack torrent may be detected by the original software or game provider and result in legal action or penalties against you.
  • -
  • Precaution: Disable your internet connection while using the crack torrent or use a VPN service to hide your IP address and location. Also avoid updating or registering the software or game online.
  • -
-

How to Download and Install Adobe Photoshop CC 14.2 Crack Torrent?

-

If you still want to download and install Adobe Photoshop CC 14.2 crack torrent on your system, here are the steps you need to follow:

-

Step-by-Step Guide to Downloading and Installing Adobe Photoshop CC 14.2 Crack Torrent

-
    -
  1. Find a reliable source for downloading Adobe Photoshop CC 14.2 crack torrent from the internet. You can use a search engine like Google or Bing to look for websites that offer this file.

  2. -
  3. Select one of the websites from the search results and open it in your browser. Make sure that the website is safe and trustworthy by checking its domain name, reviews, ratings, etc.

  4. -
  5. Navigate to the download page of Adobe Photoshop CC 14.2 crack torrent on the website and click on the download button or link.

  6. -
  7. A pop-up window may appear asking you to choose a location for saving the file on your system. Choose a folder where you want to save the file and click on save.

  8. -
  9. The download process will start automatically and may take some time depending on your internet speed and file size.

  10. -
  11. Once the download is complete, locate the file on your system and right-click on it.

  12. -
  13. Select open with from the menu that appears and choose a program that can open torrent files such as uTorrent or BitTorrent.

  14. -
  15. The program will launch automatically and start downloading the actual content of Adobe Photoshop CC 14.2 crack torrent from other peers who have already downloaded it.

  16. -
  17. This process may also take some time depending on your internet speed and file size.

  18. -the program has saved the content of Adobe Photoshop CC 14.2 crack torrent and open it.

    -
  19. You will see a folder named Adobe Photoshop CC 14.2 Final Multilanguage [Ching Liu] or something similar. This is the folder that contains the cracked version of Adobe Photoshop CC 14.2.

  20. -
  21. Open the folder and double-click on the file named Setup.exe to start the installation process of Adobe Photoshop CC 14.2.

  22. -
  23. Follow the instructions on the screen to complete the installation process. You may need to enter a serial number or a license key during the installation. You can find these information in a text file named Serial.txt or Keygen.exe in the same folder.

  24. -
  25. Once the installation is complete, do not launch or run Adobe Photoshop CC 14.2 yet.

  26. -
  27. Go back to the folder where you downloaded Adobe Photoshop CC 14.2 crack torrent and open it again.

  28. -
  29. You will see another folder named Crack or Patch or something similar. This is the folder that contains the files that will crack or patch Adobe Photoshop CC 14.2 and make it work without any limitations.

  30. -
  31. Open the folder and copy all the files inside it.

  32. -
  33. Paste the files into the folder where you installed Adobe Photoshop CC 14.2 on your system. This is usually C:\Program Files\Adobe\Adobe Photoshop CC 14.2 or something similar.

  34. -
  35. Replace or overwrite any existing files if prompted.

  36. -
  37. Congratulations! You have successfully downloaded and installed Adobe Photoshop CC 14.2 crack torrent on your system. You can now launch and run Adobe Photoshop CC 14.2 and enjoy all its features and updates for free.

  38. -
-

Tips and Tricks to Optimize Adobe Photoshop CC 14.2 Performance

-

To make sure that Adobe Photoshop CC 14.2 runs smoothly and efficiently on your system, here are some tips and tricks you can follow:

-
    -
  • Update your drivers: Make sure that your graphics card, sound card, and other drivers are up to date and compatible with Adobe Photoshop CC 14.2.
  • -
  • Adjust your preferences: Go to Edit > Preferences > Performance and adjust the settings according to your system specifications and needs. You can change the memory usage, scratch disks, history states, cache levels, etc.
  • -
  • Use keyboard shortcuts: Learn and use keyboard shortcuts to speed up your workflow and save time while using Adobe Photoshop CC 14.2. You can find a list of keyboard shortcuts in Help > Keyboard Shortcuts or online.
  • -
  • Organize your layers: Use layer groups, layer masks, layer styles, adjustment layers, smart objects, etc., to organize your layers and make them easier to manage and edit.
  • -
  • Save your files properly: Save your files in the appropriate format and quality according to your purpose and destination. You can use Save As or Save for Web options to optimize your files for web or print.
  • -
-

Conclusion

-

In this article, we have explained what Adobe Photoshop CC 14.2 crack torrent is, how to use it, and what are the pros and cons of using it. We have also provided a step-by-step guide to downloading and installing Adobe Photoshop CC 14.2 crack torrent on your system, as well as some tips and tricks to optimize its performance. We hope that this article has been helpful and informative for you.

-

However, we would like to remind you that using a crack torrent is illegal and unethical, as it violates the intellectual property rights of the developers and publishers of Adobe Photoshop CC 14.2. You can also face legal consequences if you are caught using or distributing a crack torrent. Moreover, using a crack torrent can expose your system to various risks and threats, such as viruses, malware, spyware, adware, etc., which can harm your system or steal your personal information.

-

Therefore, we strongly advise you to avoid using a crack torrent and instead purchase a legitimate copy of Adobe Photoshop CC 14.2 from the official website or an authorized dealer. This way, you can support the developers and publishers of this amazing software and enjoy its features and updates without any worries or problems.

-

FAQs

-

Here are some frequently asked questions about Adobe Photoshop CC 14.2 crack torrent:

-
    -
  1. Q: Is Adobe Photoshop CC 14.2 crack torrent safe?

    -

    A: No, it is not safe. Using a crack torrent can expose your system to various risks and threats, such as viruses, malware, spyware, adware, etc., which can harm your system or steal your personal information.

  2. -
  3. Q: Is Adobe Photoshop CC 14.2 crack torrent legal?

    -

    A: No, it is not legal. Using a crack torrent violates the intellectual property rights of the developers and publishers of Adobe Photoshop CC 14.2. You can also face legal consequences if you are caught using or distributing a crack torrent.

  4. -
  5. Q: Is Adobe Photoshop CC 14.2 crack torrent worth it?

    -A: For most users, no. You may run into problems such as errors, crashes, or poor performance on your system. You may also expose your system to various risks and threats, such as viruses, malware, spyware, adware, etc. You may also face legal consequences if you are caught using or distributing a crack torrent. Moreover, you may miss out on the latest features and updates that the original software or game provider offers.

  6. -
  7. Q: How can I get Adobe Photoshop CC 14.2 legally?

    -

    A: You can get Adobe Photoshop CC 14.2 legally by purchasing a legitimate copy of it from the official website or an authorized dealer. You can choose between a monthly or yearly subscription plan or a one-time payment option. You can also get a free trial version of Adobe Photoshop CC 14.2 for 7 days before you decide to buy it.

  8. -
  9. Q: How can I learn Adobe Photoshop CC 14.2?

    -

    A: You can learn Adobe Photoshop CC 14.2 by following online tutorials, courses, videos, books, blogs, etc., that teach you how to use this software and its tools and features. You can also practice your skills by working on different projects and challenges that suit your level and interest.

  10. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/ramiin2/AutoGPT/ui/utils.py b/spaces/ramiin2/AutoGPT/ui/utils.py deleted file mode 100644 index 71703e2009afac0582300f5d99a91ddec4119e04..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/ui/utils.py +++ /dev/null @@ -1,31 +0,0 @@ -import os -import re - -def format_directory(directory): - output = [] - def helper(directory, level, output): - files = os.listdir(directory) - for i, item in enumerate(files): - is_folder = os.path.isdir(os.path.join(directory, item)) - joiner = "├── " if i < len(files) - 1 else "└── " - item_html = item + "/" if is_folder else f"{item}" - output.append("│ " * level + joiner + item_html) - if is_folder: - helper(os.path.join(directory, item), level + 1, output) - output.append(os.path.basename(directory) + "/") - helper(directory, 1, output) - return "\n".join(output) - -DOWNLOAD_OUTPUTS_JS = """ -() => { - const a = document.createElement('a'); - a.href = 'file=outputs.zip'; - a.download = 'outputs.zip'; - document.body.appendChild(a); - a.click(); - document.body.removeChild(a); -}""" - -def remove_color(text): - ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])') - return ansi_escape.sub('', text) \ No newline at end of file diff --git a/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/visualizations.py b/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/visualizations.py deleted file mode 100644 index ec00fc64d6e9fda2bb8e613531066ac824df1451..0000000000000000000000000000000000000000 --- a/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/visualizations.py +++ /dev/null @@ -1,178 +0,0 @@ -from speaker_encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset -from datetime import datetime -from time import perf_counter as timer -import matplotlib.pyplot as plt -import numpy as np -# import webbrowser -import visdom -import umap - -colormap = np.array([ - [76, 255, 0], - [0, 127, 70], - [255, 0, 0], - [255, 217, 38], - [0, 135, 255], - [165, 0, 165], - [255, 167, 255], - [0, 255, 255], - [255, 96, 38], - [142, 76, 0], - [33, 0, 127], - [0, 0, 0], - [183, 183, 183], -], dtype=np.float) / 255 - - -class Visualizations: - def __init__(self, env_name=None, update_every=10, server="http://localhost", disabled=False): - # Tracking data - self.last_update_timestamp = timer() - self.update_every = update_every - self.step_times = [] - self.losses = [] - self.eers = [] - print("Updating the visualizations every %d steps." % update_every) - - # If visdom is disabled TODO: use a better paradigm for that - self.disabled = disabled - if self.disabled: - return - - # Set the environment name - now = str(datetime.now().strftime("%d-%m %Hh%M")) - if env_name is None: - self.env_name = now - else: - self.env_name = "%s (%s)" % (env_name, now) - - # Connect to visdom and open the corresponding window in the browser - try: - self.vis = visdom.Visdom(server, env=self.env_name, raise_exceptions=True) - except ConnectionError: - raise Exception("No visdom server detected. Run the command \"visdom\" in your CLI to " - "start it.") - # webbrowser.open("http://localhost:8097/env/" + self.env_name) - - # Create the windows - self.loss_win = None - self.eer_win = None - # self.lr_win = None - self.implementation_win = None - self.projection_win = None - self.implementation_string = "" - - def log_params(self): - if self.disabled: - return - from speaker_encoder import params_data - from speaker_encoder import params_model - param_string = "Model parameters:
" - for param_name in (p for p in dir(params_model) if not p.startswith("__")): - value = getattr(params_model, param_name) - param_string += "\t%s: %s
" % (param_name, value) - param_string += "Data parameters:
" - for param_name in (p for p in dir(params_data) if not p.startswith("__")): - value = getattr(params_data, param_name) - param_string += "\t%s: %s
" % (param_name, value) - self.vis.text(param_string, opts={"title": "Parameters"}) - - def log_dataset(self, dataset: SpeakerVerificationDataset): - if self.disabled: - return - dataset_string = "" - dataset_string += "Speakers: %s\n" % len(dataset.speakers) - dataset_string += "\n" + dataset.get_logs() - dataset_string = dataset_string.replace("\n", "
") - self.vis.text(dataset_string, opts={"title": "Dataset"}) - - def log_implementation(self, params): - if self.disabled: - return - implementation_string = "" - for param, value in params.items(): - implementation_string += "%s: %s\n" % (param, value) - implementation_string = implementation_string.replace("\n", "
") - self.implementation_string = implementation_string - self.implementation_win = self.vis.text( - implementation_string, - opts={"title": "Training implementation"} - ) - - def update(self, loss, eer, step): - # Update the tracking data - now = timer() - self.step_times.append(1000 * (now - self.last_update_timestamp)) - self.last_update_timestamp = now - self.losses.append(loss) - self.eers.append(eer) - print(".", end="") - - # Update the plots every steps - if step % self.update_every != 0: - return - time_string = "Step time: mean: %5dms std: %5dms" % \ - (int(np.mean(self.step_times)), int(np.std(self.step_times))) - print("\nStep %6d Loss: %.4f EER: %.4f %s" % - (step, np.mean(self.losses), np.mean(self.eers), time_string)) - if not self.disabled: - self.loss_win = self.vis.line( - [np.mean(self.losses)], - [step], - win=self.loss_win, - update="append" if self.loss_win else None, - opts=dict( - legend=["Avg. loss"], - xlabel="Step", - ylabel="Loss", - title="Loss", - ) - ) - self.eer_win = self.vis.line( - [np.mean(self.eers)], - [step], - win=self.eer_win, - update="append" if self.eer_win else None, - opts=dict( - legend=["Avg. EER"], - xlabel="Step", - ylabel="EER", - title="Equal error rate" - ) - ) - if self.implementation_win is not None: - self.vis.text( - self.implementation_string + ("%s" % time_string), - win=self.implementation_win, - opts={"title": "Training implementation"}, - ) - - # Reset the tracking - self.losses.clear() - self.eers.clear() - self.step_times.clear() - - def draw_projections(self, embeds, utterances_per_speaker, step, out_fpath=None, - max_speakers=10): - max_speakers = min(max_speakers, len(colormap)) - embeds = embeds[:max_speakers * utterances_per_speaker] - - n_speakers = len(embeds) // utterances_per_speaker - ground_truth = np.repeat(np.arange(n_speakers), utterances_per_speaker) - colors = [colormap[i] for i in ground_truth] - - reducer = umap.UMAP() - projected = reducer.fit_transform(embeds) - plt.scatter(projected[:, 0], projected[:, 1], c=colors) - plt.gca().set_aspect("equal", "datalim") - plt.title("UMAP projection (step %d)" % step) - if not self.disabled: - self.projection_win = self.vis.matplot(plt, win=self.projection_win) - if out_fpath is not None: - plt.savefig(out_fpath) - plt.clf() - - def save(self): - if not self.disabled: - self.vis.save([self.env_name]) - \ No newline at end of file diff --git a/spaces/rcajegas/HTML5-Aframe-3DMAP-FLIGHT/README.md b/spaces/rcajegas/HTML5-Aframe-3DMAP-FLIGHT/README.md deleted file mode 100644 index 5f49988679e720e5bbcef02e2508528a7df8ac03..0000000000000000000000000000000000000000 --- a/spaces/rcajegas/HTML5-Aframe-3DMAP-FLIGHT/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: HTML5 Aframe 3DMAP FLIGHT -emoji: 🐢 -colorFrom: pink -colorTo: red -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Arcgis Server 10.2 Crack ((HOT)) Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Arcgis Server 10.2 Crack ((HOT)) Download.md deleted file mode 100644 index 8e30216e7a0ece69627aed69493383f74a507219..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Arcgis Server 10.2 Crack ((HOT)) Download.md +++ /dev/null @@ -1,33 +0,0 @@ - -

How to Install ArcGIS Server 10.2 on Windows

-

ArcGIS Server is software that allows you to share your GIS resources, such as maps, with your users through web services. You can install ArcGIS Server on one or more machines, depending on your needs and preferences. In this article, we will show you how to install ArcGIS Server 10.2 on one machine running Windows.

-

arcgis server 10.2 crack download


Download Filehttps://urlgoal.com/2uCJlZ



-

Step 1: Prepare ArcGIS Server for installation

-

Before you install ArcGIS Server, you need to do some preparations:

- -

Step 2: Install ArcGIS Server

-

Now you are ready to install ArcGIS Server:

-
    -
  1. Browse to the location of the setup files and double-click Setup.exe.
  2. During the installation, read the master agreement and accept it, or close the window if you do not agree with the terms.
  3. The setup program displays the features that will be installed. You can choose to install GIS Server, which hosts GIS services that can be accessed through REST and SOAP, and .NET Extension Support, which supports the development and deployment of .NET server object extensions (SOEs) and server object interceptors (SOIs). The .NET Extension Support feature is optional and requires Microsoft .NET Framework.
  4. Specify the account to be used by ArcGIS Server to perform a variety of functions. You can specify a local account, a domain account, or a Managed Service Account. The default account name is arcgis. For more information about the ArcGIS Server account, see here: https://enterprise.arcgis.com/en/server/latest/install/windows/the-arcgis-server-account.htm
  5. Specify the installation location for ArcGIS Server. The default location is C:\Program Files\ArcGIS\Server.
  6. Click Next to proceed with the installation.
  7. When the installation is complete, click Finish to close the setup program.
-

Step 3: Configure ArcGIS Server

-

After you install ArcGIS Server, you need to configure it:

-

-
    -
  • Create a site or join an existing site. A site is a collection of one or more machines that run ArcGIS Server and share GIS services. You can create a site using ArcGIS Server Manager, a web-based application that allows you to administer ArcGIS Server. To access ArcGIS Server Manager, open a web browser and enter http://:6080/arcgis/manager in the address bar. For more information about creating or joining a site, see here: https://enterprise.arcgis.com/en/server/latest/install/windows/create-a-site-or-join-an-existing-site.htm
  • Authorize ArcGIS Server using the authorization file that you obtained earlier. You can authorize ArcGIS Server using ArcGIS Server Manager or the Software Authorization Wizard. For more information about authorizing ArcGIS Server, see here: https://enterprise.arcgis.com/en/server/latest/install/windows/authorize-arcgis-server.htm
  • Publish GIS services using ArcMap or

    -
    -
    \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bentley Microstation V8i (SELECTSeries 3) 08 11 09 Bitter Throwin Neufe ((LINK)).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bentley Microstation V8i (SELECTSeries 3) 08 11 09 Bitter Throwin Neufe ((LINK)).md deleted file mode 100644 index b5d558063cd205ce98f229b8655d1bcb6bb04468..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bentley Microstation V8i (SELECTSeries 3) 08 11 09 Bitter Throwin Neufe ((LINK)).md +++ /dev/null @@ -1,34 +0,0 @@ -

    Bentley Microstation V8i (SELECTSeries 3) 08 11 09 bitter throwin neufe


    Downloadhttps://urlgoal.com/2uCKFK



    - -Bentley Microstation has been the platform for automotive as well as the digital magazine industry for decades. Bentley Microstation V8i Software Development Kit with add-ons, drivers, and tools can be Download Bentley MicroStation V8i Driver for Windows. - -In all the Bentley MicroStation V8i download links above, you will notice that we have put in several free V8i drivers, as well as unregistered versions of the Drivers. If you are unsure about if you need to register your software, such as the DB2IVacuum V8i Driver for DB2 9.7, you may view the. License Agreement before downloading. - -Download V8i Windows: - -Nov 16, · Bentley Microstation Database software comes in different versions. Check what's available for you and start working on your projects. Find Free V8i Windows software here!. Bentley Microstation for Windows 8 includes PowerBI support. Check out our V8i Mac software page to get V8i Mac software to work on your Mac. - -Latest Bentley Microstation V8i Driver Windows 8. Download, install and work with your hardware easily using this Driver. New Bentley Microstation Driver A10. Available for download as free trialware. Drives your Bentley MicroStation V8i (iDrivers.I'm having trouble, I know what I'm looking for, but it's hard to find. I need a Freeware system, that doesn't need you to install any add-ons. It also needs to be, no more ads, I don't need to subscribe and I don't need any annoying crap, like surveys or whatever. - -The site I need it from, won't let me download the software, so I'm having to do it the old fashioned way, but I want a system that will install, make a shortcut in my start menu, and run right off the bat. - -959 N.E.2d 1049 (2011) - -355 Ill. Dec. 99 - -Jeffrey GILBERT, Plaintiff-Appellant, - -v. - -COOK COUNTY COMMUNITY UNIT DISTRICT NO. 1, Richard J. Bowen, Robert M. Schommer, Sandra R. Schommer, and City of Elk Grove, Defendants-Appellees. - -No. 1-10-2155. - -Appellate Court of Illinois, First District, Third Division. - -September 30, 2011. - -Rehearing Denied November 3, 4fefd39f24
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Breaking Bad 1080p Mp4 Convert.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Breaking Bad 1080p Mp4 Convert.md deleted file mode 100644 index 399bf126805830c977efc60cbcc44da0f1773071..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Breaking Bad 1080p Mp4 Convert.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Breaking Bad 1080p Mp4 Convert


    Download Zip ———>>> https://urlgoal.com/2uCJpW



    -
    -Breaking Bad S05 E14 720p Mkv by Olesime, released 24 January 2017 ... hd tamil songs 1080p blu ray 2015 dz ffmpeg convert mkv to mp4 1080p andreea ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Night At The Museum 2 In Dual Audio.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Night At The Museum 2 In Dual Audio.md deleted file mode 100644 index 3aa6de73c17975e5345788ad6b548c88264538d2..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Night At The Museum 2 In Dual Audio.md +++ /dev/null @@ -1,8 +0,0 @@ -
    -

    format: vob
    movie plot: larry's son nick and his partner dexter monkey are captured, but larry and the other exhibits find a way to escape. they are confronted by the thief, who accidentally uses the tablet, and they escape to the smithsonian. larry manages to sneak into the museum and outwits the thief, and the other exhibits are rescued. the thief is taken into custody, and the tablet is returned.

    -

    format: dvd9
    movie plot: larry brings his kids and their friends to see the museum with him. larry's son nick and dexter monkey are captured, but larry and the other exhibits find a way to escape. they are confronted by the thief, who accidentally uses the tablet, and they escape to the smithsonian. larry manages to sneak into the museum and outwits the thief, and the other exhibits are rescued. the thief is taken into custody, and the tablet is returned.

    -

    Download Night At The Museum 2 In Dual Audio


    Download ✪✪✪ https://urlgoal.com/2uCLSE



    -

    format: vob
    movie plot: larry brings his kids and their friends to see the museum with him. larry's son nick and dexter monkey are captured, but larry and the other exhibits find a way to escape. they are confronted by the thief, who accidentally uses the tablet, and they escape to the smithsonian.

    -

    if you have seen the movie you know that this is a comedy, adventure movie. the movie is about the night guard that is watching the museum after the museum is closed down and the night guard has to keep all the lights turned on to keep the museum open. he is trying to find out who took the tablet and many other important things that happen in the museum, that is why he has to keep the lights on in the museum.

    -
    -
    \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/visualization/__init__.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/visualization/__init__.py deleted file mode 100644 index 2eb17c4b32bc0c5c76db31e22e995716ba718222..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/visualization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .image import (color_val_matplotlib, imshow_det_bboxes, - imshow_gt_det_bboxes) -from .palette import get_palette, palette_val - -__all__ = [ - 'imshow_det_bboxes', 'imshow_gt_det_bboxes', 'color_val_matplotlib', - 'palette_val', 'get_palette' -] diff --git a/spaces/rorallitri/biomedical-language-models/Myanmar-Unicode-Font-Download-UPD-For-Mac.md b/spaces/rorallitri/biomedical-language-models/Myanmar-Unicode-Font-Download-UPD-For-Mac.md deleted file mode 100644 index d3d361202cb7d92f27609e59b2ae6ab3fc9b9e31..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/Myanmar-Unicode-Font-Download-UPD-For-Mac.md +++ /dev/null @@ -1,102 +0,0 @@ -## Myanmar Unicode Font Download For Mac - - - - - - - - - -**CLICK HERE >>> [https://vittuv.com/2txUn0](https://vittuv.com/2txUn0)** - - - - - - - - - - - - - -# How to Download and Install Myanmar Unicode Font on Mac - - - -If you are looking for a way to use Myanmar Unicode font on your Mac computer, you have come to the right place. In this article, we will show you how to download and install Myanmar Unicode font on Mac OS X (macOS) easily and quickly. You will also learn how to switch between Zawgyi and Unicode fonts and keyboards on your Mac. - - - -## What is Myanmar Unicode Font? - - - -Myanmar Unicode font is a standard font that supports all the characters and diacritics of the Myanmar script. It is based on the Unicode standard, which is a universal encoding system that assigns a unique code point to every character in every language. Unicode fonts are compatible with different platforms, applications, and devices, and can display Myanmar text correctly without any distortion or missing characters. - - - -Zawgyi font, on the other hand, is a non-standard font that only supports a subset of the Myanmar script. It is based on a custom encoding system that does not follow the Unicode standard. Zawgyi fonts are not compatible with different platforms, applications, and devices, and can cause problems when displaying Myanmar text with other fonts or languages. - - - -## Why Use Myanmar Unicode Font on Mac? - - - -There are many benefits of using Myanmar Unicode font on Mac, such as: - - - -- You can read and write Myanmar text on any website, document, email, or message without any issue. - -- You can use any keyboard layout or input method that supports Myanmar Unicode, such as KeyMagic, Keyman, or Manic Keyboard. - -- You can use any Myanmar Unicode font that suits your preference, such as Myanmar3, Pyidaungsu, or Bagan. - -- You can communicate with other people who use Myanmar Unicode font on different platforms, applications, and devices. - -- You can support the development and preservation of the Myanmar language and script. - - - -## How to Download Myanmar Unicode Font for Mac? 
- - - -There are many sources where you can download Myanmar Unicode font for Mac, but we recommend you to use the official website of the Myanmar Unicode and NLP Research Center (MUA), which is a non-profit organization that promotes and supports the use of Myanmar Unicode. You can visit their website at https://mmunicode.org/ and go to the Downloads section. There you will find various tools and resources for using Myanmar Unicode on different platforms and devices. - - - -For Mac users, you can download the Myanmar Unicode Bundle for Mac v2 zip file from this link: https://bit.ly/2pOBYP0. This file contains everything you need to use Myanmar Unicode on your Mac, such as fonts, keyboards, converters, extensions, and instructions. You can also download individual fonts or keyboards from their website if you prefer. - - - -## How to Install Myanmar Unicode Font on Mac? - - - -After downloading the Myanmar Unicode Bundle for Mac v2 zip file, you need to unzip it and follow these steps: - - - -1. To change from Zawgyi font to Unicode font, open Font Book from Spotlight search and disable or remove Zawgyi-One font. Then open MMFontFallBack folder and click Back To Original. This will restore the original fallback fonts for Myanmar script. - -2. To install a Unicode font of your choice, open Unicode Fonts folder and double click mm3.ttf or any other font you like. Then click Install Font. This will add the font to your Font Book. - -3. To change from Zawgyi keyboard to Unicode keyboard, open System Preferences > Keyboard > Input Sources and select Zawgyi KB. Then click - to remove it. (You can keep the file if you want). Then copy Myanmar3 - QWERTY.bundle from Keyboards folder and paste it in /Library/Keyboard Layouts folder. You can access this folder by going to Finder > Go > Go to folder and typing /Library/Keyboard Layouts. - -4. To enable the new keyboard layout, go back to System Preferences > Keyboard > Input Sources and click +. Then select Others from the left side and check Myanmar3 - QWERTY from the right side. Then click Add. - -5. To switch between keyboard layouts, you can use the shortcut Command + Space dfd1c89656 - - - - - - - - - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Adobe.Audition.v3.0.Build.7283.0.Multilingual Record Edit and Master Audio in Multiple Languages.md b/spaces/rorallitri/biomedical-language-models/logs/Adobe.Audition.v3.0.Build.7283.0.Multilingual Record Edit and Master Audio in Multiple Languages.md deleted file mode 100644 index 93dc56640fe2305448ca6ca6d2efc629bd2c9d4c..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Adobe.Audition.v3.0.Build.7283.0.Multilingual Record Edit and Master Audio in Multiple Languages.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe.Audition.v3.0.Build.7283.0.Multilingual


    Download File ✏ ✏ ✏ https://tinurll.com/2uzmV8



    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Keygen HOT! Xforce For AutoCAD Design Suite 2011 Activation.md b/spaces/rorallitri/biomedical-language-models/logs/Download Keygen HOT! Xforce For AutoCAD Design Suite 2011 Activation.md deleted file mode 100644 index 915aa82a8a7534f913dbca99b2b6d2226314dd75..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Keygen HOT! Xforce For AutoCAD Design Suite 2011 Activation.md +++ /dev/null @@ -1,22 +0,0 @@ - -

    How to Download Keygen Xforce For AutoCAD Design Suite 2011 Activation

    -

    If you are looking for a way to activate your AutoCAD Design Suite 2011 software without paying for a license, you may be interested in downloading Keygen Xforce. Keygen Xforce is a tool that generates valid serial numbers and activation codes for various Autodesk products, including AutoCAD Design Suite 2011. In this article, we will show you how to download Keygen Xforce for AutoCAD Design Suite 2011 activation and use it to unlock your software.

    -

    Download Keygen Xforce For AutoCAD Design Suite 2011 Activation


    Download ……… https://tinurll.com/2uzmtH



    -

    Step 1: Download Keygen Xforce

    -

    The first step is to download Keygen Xforce from a reliable source. You can find many websites that offer Keygen Xforce for AutoCAD Design Suite 2011 activation, but be careful of malware and viruses that may harm your computer. We recommend using the link below, which is verified and safe.

    -Download Keygen Xforce for AutoCAD Design Suite 2011 Activation -

    Once you click on the link, you will be redirected to a page where you can choose your operating system (Windows or Mac) and download the appropriate version of Keygen Xforce. The file size is about 5 MB and it should take only a few minutes to download.

    -

    Step 2: Run Keygen Xforce

    -

    After downloading Keygen Xforce, you need to run it on your computer. You may need to disable your antivirus software temporarily, as some antivirus programs may detect Keygen Xforce as a threat and block it. This is a false positive and you can safely ignore it.

    -

    To run Keygen Xforce, simply double-click on the file you downloaded and follow the instructions on the screen. You will see a window like this:

    -

    -Keygen Xforce Window -

    In the window, you need to select the product name from the drop-down menu. Choose "AutoCAD Design Suite 2011" and click on "Generate". You will see a serial number and an activation code appear in the boxes below.

    -

    Step 3: Activate AutoCAD Design Suite 2011

    -

    The final step is to activate your AutoCAD Design Suite 2011 software using the serial number and activation code generated by Keygen Xforce. To do this, you need to open your AutoCAD Design Suite 2011 software and click on "Activate" when prompted. You will see a window like this:

    -AutoCAD Design Suite 2011 Activation -

    In the window, you need to enter the serial number and activation code that you got from Keygen Xforce in the corresponding boxes. Click on "Next" and follow the instructions on the screen. You will see a confirmation message that your AutoCAD Design Suite 2011 software has been successfully activated.

    -

    Congratulations!

    -

    You have successfully downloaded Keygen Xforce for AutoCAD Design Suite 2011 activation and used it to unlock your software. You can now enjoy all the features and benefits of AutoCAD Design Suite 2011 without paying for a license. However, please note that using Keygen Xforce is illegal and may violate Autodesk's terms of service. We do not condone or encourage piracy and we are not responsible for any consequences that may arise from using Keygen Xforce. Use it at your own risk.

    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Evangelion 3.33 English Dub Torrent Download ((NEW)).md b/spaces/rorallitri/biomedical-language-models/logs/Evangelion 3.33 English Dub Torrent Download ((NEW)).md deleted file mode 100644 index 073310a443e19112fb593a3b0d9e36f0ee1b6e0a..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Evangelion 3.33 English Dub Torrent Download ((NEW)).md +++ /dev/null @@ -1,12 +0,0 @@ -

    evangelion 3.33 english dub torrent download


    DOWNLOAD ☆☆☆ https://tinurll.com/2uzoxz



    -
    - . . and humanity is living in a temporary shelter. . . . One day, Rei Ayanami, a teenage girl who is the only occupant of a mobile suit piloted by . . . Asuka Langley Soryu, one of the last remaining humans, is contacted by a being called Lilith who offers her a choice between destruction and . . . new life. - -Set in the year 2000, the series begins when a college student named Haruhiko Kawaguchi uses his new power to catch two mysterious girls trying to commit suicide. Shortly thereafter, he discovers that he can control objects around him with his mind and that he is endowed with both the appearance and powers of an Evangelion Unit-03 Evangelion, Asuka Langley Soryu (Unit-02) of the Evangelion series. As he is finding his way, he encounters various people, including a strange girl named Yui, and after they meet, his life changes into a different future. - -A number of pieces of information are made available in the time between the beginning of Evangelion and Kawaguchi’s new identity. This essay investigates how Evangelion: 3.0 You Can (Not) Redo (Dub) depicts Kawaguchi’s development and his relation to the people and the world in which he lives. It has four sections: 1) Kawaguchi is a self-conscious individual and his narrative function is delineated. 2) Kawaguchi is presented as an objectified instrument of the narrative. 3) Kawaguchi functions as an all-purpose character in Evangelion: 3.0 You Can (Not) Redo (Dub) and this is an essential part of his identity. 4) Kawaguchi is no longer the same character that he was when he began Evangelion: 3.0 You Can (Not) Redo (Dub). - -Kawaguchi is self-conscious as early as the first Evangelion: 3.0 You Can (Not) Redo (Dub) episode, when he reveals to Shinji and Rei that he is a ‘‘super normal’’, that is, a normal human being in a world that is in some ways abnormal. Kawaguchi refers to himself in the first person to describe his goal to save the world. He is very self-conscious, even though he is always smiling in the face of others. He is aware of his existence and capable of self 4fefd39f24
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/Havij 1 152 Pro Cracked 17 A Powerful and Easy-to-Use SQLi Tool.md b/spaces/rorallitri/biomedical-language-models/logs/Havij 1 152 Pro Cracked 17 A Powerful and Easy-to-Use SQLi Tool.md deleted file mode 100644 index 228df7c42d7e13a118b1588228e8923c9e61d6e5..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Havij 1 152 Pro Cracked 17 A Powerful and Easy-to-Use SQLi Tool.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Havij 1 152 Pro Cracked 17


    Downloadhttps://tinurll.com/2uzlXE



    -
    -
    -
    -

    diff --git a/spaces/rossellison/kpop-face-generator/stylegan3-fun/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/rossellison/kpop-face-generator/stylegan3-fun/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index bbcbbe7d61558adde3cbfd0c7a63a67c27ed6d30..0000000000000000000000000000000000000000 --- a/spaces/rossellison/kpop-face-generator/stylegan3-fun/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -name: Feature request -about: Suggest an idea for this project -title: '' -labels: '' -assignees: '' - ---- - -**Is your feature request related to a problem? Please describe.** -A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] - -**Describe the solution you'd like** -A clear and concise description of what you want to happen. - -**Describe alternatives you've considered** -A clear and concise description of any alternative solutions or features you've considered. - -**Additional context** -Add any other context or screenshots about the feature request here. diff --git a/spaces/rsatish1110/VideoSummaryGenerator/app.py b/spaces/rsatish1110/VideoSummaryGenerator/app.py deleted file mode 100644 index ea0d92944bdf4e1fde3b7b46810816a97c6b4964..0000000000000000000000000000000000000000 --- a/spaces/rsatish1110/VideoSummaryGenerator/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import gradio as gr -from summarize import Summarizer - -interface = gr.Interface(fn = Summarizer, - inputs = [gr.inputs.Textbox(lines=2, - placeholder="Enter your link...", - label='YouTube Video Link'), - gr.inputs.Radio(["mT5", "BART"], type="value", label='Model')], - outputs = [gr.outputs.Textbox( - label="Summary")], - - title = "Video Summary Generator", - examples = [ - ['https://www.youtube.com/watch?v=OaeYUm06in0&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=5761s', 'BART'], - ['https://www.youtube.com/watch?v=U5OD8MjYnOM', 'BART'], - ['https://www.youtube.com/watch?v=Gfr50f6ZBvo', 'BART'], - ['https://www.youtube.com/watch?v=G4hL5Om4IJ4&t=2680s', 'BART'], - ['https://www.youtube.com/watch?v=0Jd7fJgFkPU&t=8776s', 'mT5'] - ], - enable_queue=True) - -interface.launch(debug=True) \ No newline at end of file diff --git a/spaces/sachit-menon/classification_via_description/datasets.py b/spaces/sachit-menon/classification_via_description/datasets.py deleted file mode 100644 index 1ab20600462ef2fcdcb25d2a4cf660cd43fdb9f0..0000000000000000000000000000000000000000 --- a/spaces/sachit-menon/classification_via_description/datasets.py +++ /dev/null @@ -1,12 +0,0 @@ -from PIL import Image -import torchvision.transforms as transforms - -def _transform(n_px): - return transforms.Compose([ - transforms.Resize(n_px, interpolation=Image.BICUBIC), - transforms.CenterCrop(n_px), - lambda image: image.convert("RGB"), - transforms.ToTensor(), - transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), - ]) - \ No newline at end of file diff --git a/spaces/salti/arabic-question-paraphrasing/README.md b/spaces/salti/arabic-question-paraphrasing/README.md deleted file mode 100644 index 446067519ae338242c6ac921896a3a72a52ffadc..0000000000000000000000000000000000000000 --- a/spaces/salti/arabic-question-paraphrasing/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Arabic Question Paraphrasing -emoji: ❓❔ -colorFrom: pink -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character 
allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/sander-wood/clamp_similar_music_recommendation/app.py b/spaces/sander-wood/clamp_similar_music_recommendation/app.py deleted file mode 100644 index 669ac830eac627a1c90f9785da8df46b118a8439..0000000000000000000000000000000000000000 --- a/spaces/sander-wood/clamp_similar_music_recommendation/app.py +++ /dev/null @@ -1,237 +0,0 @@ -import subprocess -import os -import gradio as gr -import json -from utils import * -from unidecode import unidecode -from transformers import AutoTokenizer - -description = """ -
    - - - - -Duplicate Space -
    - -## ℹ️ How to use this demo? -1. Select a music file in MusicXML (.mxl) format. -2. Click "Submit" and wait for the result. -3. It will return the most similar music score from the WikiMusictext dataset (1010 scores in total). - -## ❕Notice -- The demo only supports MusicXML (.mxl) files. -- The returned results include the title, artist, genre, description, and the score in ABC notation. -- The genre and description may not be accurate, as they are collected from the web. -- The demo is based on CLaMP-S/512, a CLaMP model with 6-layer Transformer text/music encoders and a sequence length of 512. - -## 🎵👉🎵 Similar Music Recommendation -A surprising capability of CLaMP is that it can also recommend similar music given a piece of music, even though it is not trained on this task. This is because CLaMP is trained to encode the semantic meaning of music, and thus it can capture the similarity between music pieces.We only use the music encoder to extract the music feature from the music query, and then calculate the similarity between the query and all the pieces of music in the library. - -""" - -CLAMP_MODEL_NAME = 'sander-wood/clamp-small-512' -QUERY_MODAL = 'music' -KEY_MODAL = 'music' -TOP_N = 1 -TEXT_MODEL_NAME = 'distilroberta-base' -TEXT_LENGTH = 128 -device = torch.device("cpu") - -# load CLaMP model -model = CLaMP.from_pretrained(CLAMP_MODEL_NAME) -music_length = model.config.max_length -model = model.to(device) -model.eval() - -# initialize patchilizer, tokenizer, and softmax -patchilizer = MusicPatchilizer() -tokenizer = AutoTokenizer.from_pretrained(TEXT_MODEL_NAME) -softmax = torch.nn.Softmax(dim=1) - -def compute_values(Q_e, K_e, t=1): - """ - Compute the values for the attention matrix - - Args: - Q_e (torch.Tensor): Query embeddings - K_e (torch.Tensor): Key embeddings - t (float): Temperature for the softmax - - Returns: - values (torch.Tensor): Values for the attention matrix - """ - # Normalize the feature representations - Q_e = torch.nn.functional.normalize(Q_e, dim=1) - K_e = torch.nn.functional.normalize(K_e, dim=1) - - # Scaled pairwise cosine similarities [1, n] - logits = torch.mm(Q_e, K_e.T) * torch.exp(torch.tensor(t)) - values = softmax(logits) - return values.squeeze() - - -def encoding_data(data, modal): - """ - Encode the data into ids - - Args: - data (list): List of strings - modal (str): "music" or "text" - - Returns: - ids_list (list): List of ids - """ - ids_list = [] - if modal=="music": - for item in data: - patches = patchilizer.encode(item, music_length=music_length, add_eos_patch=True) - ids_list.append(torch.tensor(patches).reshape(-1)) - else: - for item in data: - text_encodings = tokenizer(item, - return_tensors='pt', - truncation=True, - max_length=TEXT_LENGTH) - ids_list.append(text_encodings['input_ids'].squeeze(0)) - - return ids_list - - -def abc_filter(lines): - """ - Filter out the metadata from the abc file - - Args: - lines (list): List of lines in the abc file - - Returns: - music (str): Music string - """ - music = "" - for line in lines: - if line[:2] in ['A:', 'B:', 'C:', 'D:', 'F:', 'G', 'H:', 'N:', 'O:', 'R:', 'r:', 'S:', 'T:', 'W:', 'w:', 'X:', 'Z:'] \ - or line=='\n' \ - or (line.startswith('%') and not line.startswith('%%score')): - continue - else: - if "%" in line and not line.startswith('%%score'): - line = "%".join(line.split('%')[:-1]) - music += line[:-1] + '\n' - else: - music += line + '\n' - return music - - -def load_music(filename): - """ - Load the music from the xml file - - Args: - file (Union[str, bytes, 
BinaryIO, TextIO]): Input file object containing the xml file - - Returns: - music (str): Music string - """ - # Get absolute path of xml2abc.py - script_dir = os.path.dirname(os.path.abspath(__file__)) - xml2abc_path = os.path.join(script_dir, 'xml2abc.py') - - # Use absolute path in Popen() - p = subprocess.Popen(['python', xml2abc_path, '-m', '2', '-c', '6', '-x', filename], stdout=subprocess.PIPE) - result = p.communicate()[0] - output = result.decode('utf-8').replace('\r', '') - music = unidecode(output).split('\n') - music = abc_filter(music) - - return music - - -def get_features(ids_list, modal): - """ - Get the features from the CLaMP model - - Args: - ids_list (list): List of ids - modal (str): "music" or "text" - - Returns: - features_list (torch.Tensor): Tensor of features with a shape of (batch_size, hidden_size) - """ - features_list = [] - print("Extracting "+modal+" features...") - with torch.no_grad(): - for ids in tqdm(ids_list): - ids = ids.unsqueeze(0) - if modal=="text": - masks = torch.tensor([1]*len(ids[0])).unsqueeze(0) - features = model.text_enc(ids.to(device), attention_mask=masks.to(device))['last_hidden_state'] - features = model.avg_pooling(features, masks) - features = model.text_proj(features) - else: - masks = torch.tensor([1]*(int(len(ids[0])/PATCH_LENGTH))).unsqueeze(0) - features = model.music_enc(ids, masks)['last_hidden_state'] - features = model.avg_pooling(features, masks) - features = model.music_proj(features) - - features_list.append(features[0]) - - return torch.stack(features_list).to(device) - - -def similar_music_recommendation(file): - """ - Recommend similar music - - Args: - file (Union[str, bytes, BinaryIO, TextIO]): Input file object containing the xml file - - Returns: - output (str): Output string - """ - query = load_music(file.name) - print("\nQuery:\n"+ query) - with open(KEY_MODAL+"_key_cache_"+str(music_length)+".pth", 'rb') as f: - key_cache = torch.load(f) - - # encode query - query_ids = encoding_data([query], QUERY_MODAL) - query_feature = get_features(query_ids, QUERY_MODAL) - - key_filenames = key_cache["filenames"] - key_features = key_cache["features"] - - # compute values - values = compute_values(query_feature, key_features) - idx = torch.argsort(values)[-1] - filename = key_filenames[idx].split('/')[-1][:-4] - - with open("wikimusictext.json", 'r') as f: - wikimusictext = json.load(f) - - for item in wikimusictext: - if item['title']==filename: - # output = "Title:\n" + item['title']+'\n\n' - # output += "Artist:\n" + item['artist']+ '\n\n' - # output += "Genre:\n" + item['genre']+ '\n\n' - # output += "Description:\n" + item['text']+ '\n\n' - # output += "ABC notation:\n" + item['music']+ '\n\n' - print("Title: " + item['title']) - print("Artist: " + item['artist']) - print("Genre: " + item['genre']) - print("Description: " + item['text']) - print("ABC notation:\n" + item['music']) - return item["title"], item["artist"], item["genre"], item["text"], item["music"] - -input_file = gr.inputs.File(label="Upload MusicXML file") -output_title = gr.outputs.Textbox(label="Title") -output_artist = gr.outputs.Textbox(label="Artist") -output_genre = gr.outputs.Textbox(label="Genre") -output_description = gr.outputs.Textbox(label="Description") -output_abc = gr.outputs.Textbox(label="ABC notation") -gr.Interface(similar_music_recommendation, - inputs=input_file, - outputs=[output_title, output_artist, output_genre, output_description, output_abc], - title="🗜️ CLaMP: Similar Music Recommendation", - description=description).launch() 
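For reference, the retrieval step that the deleted CLaMP demo describes above ("extract the music feature from the query, then calculate the similarity between the query and all the pieces of music in the library") reduces to a cosine-similarity lookup over precomputed embeddings. The following is a minimal sketch of that step only, not code from the original app; the helper name `recommend_top1` is illustrative and is not part of the repository.

```python
# Minimal sketch of the CLaMP-style retrieval step: normalize the query and
# library embeddings, score them with a temperature-scaled dot product, and
# return the index of the best-scoring library entry.
import torch

def recommend_top1(query_feature: torch.Tensor, key_features: torch.Tensor, t: float = 1.0) -> int:
    # query_feature: (1, d) query embedding; key_features: (n, d) library embeddings
    q = torch.nn.functional.normalize(query_feature, dim=1)
    k = torch.nn.functional.normalize(key_features, dim=1)
    logits = torch.mm(q, k.T) * torch.exp(torch.tensor(t))  # scaled cosine similarities, shape (1, n)
    values = torch.softmax(logits, dim=1).squeeze(0)         # probabilities over the library
    return int(torch.argmax(values))                         # index of the most similar piece

# Usage with the cached library as named in the app above:
# idx = recommend_top1(query_feature, key_features)
# most_similar = key_filenames[idx]
```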
diff --git a/spaces/sasha/BiasDetection/honestbias.py b/spaces/sasha/BiasDetection/honestbias.py deleted file mode 100644 index 5b32c95f85a69b68aab20cc38cc50463ce23a8f4..0000000000000000000000000000000000000000 --- a/spaces/sasha/BiasDetection/honestbias.py +++ /dev/null @@ -1,100 +0,0 @@ -# -*- coding: utf-8 -*- -"""Honest - -Source: https://github.com/MilaNLProc/honest -""" - -from transformers import AutoTokenizer, AutoModel , AutoModelForMaskedLM -from transformers import pipeline -import plotly.graph_objects as go -from honest import honest -from pathlib import Path -import pandas as pd -import pickle -import ast -# Initialize HONEST evaluator -# data_set can be "binary" as in (Nozza et al., 2021) or "queer_nonqueer" (Nozza et al., 2022) or combined as "all" - - - -def get_completion(sentence, template, tokenizer): - start = sentence.find(template[:-4])+len(template[:-4]) - if tokenizer.eos_token != None and tokenizer.eos_token in sentence: - end = sentence.find(tokenizer.eos_token) - elif tokenizer.sep_token != None and tokenizer.sep_token in sentence: - end = sentence.find(tokenizer.sep_token) - else: - end = len(sentence)-1 - - return sentence[start:end] - - - -# Define nlp_fill pipeline -def calculate_HONEST(modelname, modeltype): - honestdatapath = modelname.replace('/','')+'_HONESTdata.csv' - honestscorepath = modelname.replace('/','')+'_HONESTscore.pkl' - if Path(honestdatapath).is_file(): - print("loading local data") - honest_df = pd.read_csv(honestdatapath) - honest_score = pickle.load(open(honestscorepath, 'rb')) - else: - evaluator = honest.HonestEvaluator("en") - masked_templates = evaluator.templates(data_set="binary") - if modeltype == "CLM": - generator = pipeline('text-generation', model=modelname) - filled_templates = [[fill['generated_text'][len(masked_sentence.replace(' [M]','')):] for fill in generator(masked_sentence.replace(' [M].',''), max_length=len(masked_sentence.replace(' [M].','').split(" "))+10, num_return_sequences=5)] for masked_sentence in masked_templates.keys()] - honest_score, honest_df = evaluator.honest_dataframe(filled_templates, masked_templates) - honest_df.to_csv(modelname.replace('/','')+'_HONESTdata.csv') - pickle.dump(honest_score,open(honestscorepath,'wb')) - elif modeltype == "MLM": - tokenizer = AutoTokenizer.from_pretrained(modelname, add_special_tokens = False) - model = AutoModelForMaskedLM.from_pretrained(modelname) - nlp_fill = pipeline('fill-mask', model=model, tokenizer=tokenizer, top_k=5) - filled_templates = [[get_completion(fill['sequence'],masked_sentence,tokenizer) for fill in nlp_fill(masked_sentence.replace('[M]',tokenizer.mask_token))] for masked_sentence in masked_templates.keys()] - honest_score, honest_df = evaluator.honest_dataframe(filled_templates, masked_templates) - honest_df.to_csv(modelname.replace('/','')+'_HONESTdata.csv') - pickle.dump(honest_score,open(honestscorepath,'wb')) - return(honest_score, honest_df) - - -"""Analyze completion dataframe""" - -def analyze_honest(honest_df): - hurtlex_cat = ['animals', 'crime and immoral behavior', 'derogatory words', 'female genitalia', 'male genitalia', 'homosexuality', 'potential negative connotations', 'professions and occupations', 'prostitution', 'social and economic disadvantage'] - hurtlex_cat_names = ['an', 're', 'cds', 'asf', 'asm', 'om', 'qas', 'pa', 'pr', 'is'] - df_identity = honest_df.groupby('category')[hurtlex_cat_names].sum() - df_count = honest_df.groupby('category')[hurtlex_cat_names].count()*5 - df_perc = df_identity/df_count*100 - - plots = 
[go.Scatterpolar(r=df_perc.loc[i], theta=hurtlex_cat, fill='toself', - name=i) for i, row in df_perc.iterrows()] - - honest_fig = go.Figure( - data=plots, - layout=go.Layout( - polar={'radialaxis': {'visible': True}} - ) - ) - return(honest_fig) - -"""Show filled terms""" - -def show_filled_terms(honest_df): - grouped_df = honest_df.groupby(['raw', 'identity']) - filled_terms = [] - for key, item in grouped_df: - all_terms = [] - key_group = grouped_df.get_group(key) - for l in key_group.filled_words: - terms = ast.literal_eval(str(l)) - all_terms = all_terms + terms - all_terms = list(set(all_terms)) - filled_terms.append([key[0].replace('[I]',key[1]).replace('[M]',''), key_group.category.values[0], all_terms]) - filled_terms_df = pd.DataFrame(filled_terms) - female_df, male_df = [x for _, x in filled_terms_df.groupby([1])] - female_df.columns = ['prompt','category','filled_words'] - female_df = female_df.drop(['category'],axis=1) - male_df.columns = ['prompt','category','filled_words'] - male_df = male_df.drop(['category'],axis=1) - return(female_df, male_df) diff --git a/spaces/sayakpaul/lol-enhancement-maxim/app.py b/spaces/sayakpaul/lol-enhancement-maxim/app.py deleted file mode 100644 index ffe36a1e2422d8a6fc529e66ecd737d63dac290f..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/lol-enhancement-maxim/app.py +++ /dev/null @@ -1,106 +0,0 @@ -""" -Some preprocessing utilities have been taken from: -https://github.com/google-research/maxim/blob/main/maxim/run_eval.py -""" -import gradio as gr -import numpy as np -import tensorflow as tf -from huggingface_hub.keras_mixin import from_pretrained_keras -from PIL import Image - -from create_maxim_model import Model -from maxim.configs import MAXIM_CONFIGS - - -_MODEL = from_pretrained_keras("google/maxim-s2-enhancement-lol") - - -def mod_padding_symmetric(image, factor=64): - """Padding the image to be divided by factor.""" - height, width = image.shape[0], image.shape[1] - height_pad, width_pad = ((height + factor) // factor) * factor, ( - (width + factor) // factor - ) * factor - padh = height_pad - height if height % factor != 0 else 0 - padw = width_pad - width if width % factor != 0 else 0 - image = tf.pad( - image, [(padh // 2, padh // 2), (padw // 2, padw // 2), (0, 0)], mode="REFLECT" - ) - return image - - -def make_shape_even(image): - """Pad the image to have even shapes.""" - height, width = image.shape[0], image.shape[1] - padh = 1 if height % 2 != 0 else 0 - padw = 1 if width % 2 != 0 else 0 - image = tf.pad(image, [(0, padh), (0, padw), (0, 0)], mode="REFLECT") - return image - - -def process_image(image: Image): - input_img = np.asarray(image) / 255.0 - height, width = input_img.shape[0], input_img.shape[1] - - # Padding images to have even shapes - input_img = make_shape_even(input_img) - height_even, width_even = input_img.shape[0], input_img.shape[1] - - # padding images to be multiplies of 64 - input_img = mod_padding_symmetric(input_img, factor=64) - input_img = tf.expand_dims(input_img, axis=0) - return input_img, height, width, height_even, width_even - - -def init_new_model(input_img): - configs = MAXIM_CONFIGS.get("S-2") - configs.update( - { - "variant": "S-2", - "dropout_rate": 0.0, - "num_outputs": 3, - "use_bias": True, - "num_supervision_scales": 3, - } - ) - configs.update({"input_resolution": (input_img.shape[1], input_img.shape[2])}) - new_model = Model(**configs) - new_model.set_weights(_MODEL.get_weights()) - return new_model - - -def infer(image): - preprocessed_image, height, width, 
height_even, width_even = process_image(image) - new_model = init_new_model(preprocessed_image) - - preds = new_model.predict(preprocessed_image) - if isinstance(preds, list): - preds = preds[-1] - if isinstance(preds, list): - preds = preds[-1] - - preds = np.array(preds[0], np.float32) - - new_height, new_width = preds.shape[0], preds.shape[1] - h_start = new_height // 2 - height_even // 2 - h_end = h_start + height - w_start = new_width // 2 - width_even // 2 - w_end = w_start + width - preds = preds[h_start:h_end, w_start:w_end, :] - - return Image.fromarray(np.array((np.clip(preds, 0.0, 1.0) * 255.0).astype(np.uint8))) - - -title = "Enhance low-light images." -description = "The underlying model is [this](https://huggingface.co/google/maxim-s2-enhancement-lol). You can use the model to enhance low-light images, which may be useful for aiding vision-impaired people. To quickly try out the model, you can choose from the available sample images below, or you can submit your own image. Note that, internally, the model is re-initialized based on the spatial dimensions of the input image and this process is time-consuming." - -iface = gr.Interface( - infer, - inputs="image", - outputs=gr.Image().style(height=242), - title=title, - description=description, - allow_flagging="never", - examples=[["1.png"], ["111.png"], ["748.png"], ["a4541-DSC_0040-2.png"]], -) -iface.launch(debug=True) diff --git a/spaces/scedlatioru/img-to-music/example/AUTODESK.NAVISWORKS.MANAGE.V2016.MULTI.WIN64-ISO Free Download.md b/spaces/scedlatioru/img-to-music/example/AUTODESK.NAVISWORKS.MANAGE.V2016.MULTI.WIN64-ISO Free Download.md deleted file mode 100644 index 258cd4752137b83cdf9287037f338bf1f7837203..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/AUTODESK.NAVISWORKS.MANAGE.V2016.MULTI.WIN64-ISO Free Download.md +++ /dev/null @@ -1,105 +0,0 @@ - -

    AUTODESK.NAVISWORKS.MANAGE.V2016.MULTI.WIN64-ISO Free Download

    - -

If you are looking for powerful and comprehensive project review software that supports coordination, analysis, and communication of design intent and constructability, you may want to consider Autodesk Navisworks Manage 2016. This software helps you holistically review integrated models and data with stakeholders to gain better control over project outcomes. You can also combine design and construction data into a single model, identify and resolve clash and interference problems before construction, and simulate project schedules and logistics.

    - -

    In this article, we will show you how to download Autodesk Navisworks Manage 2016 for free and what are the main features and benefits of this software.

    -

    AUTODESK.NAVISWORKS.MANAGE.V2016.MULTI.WIN64-ISO free download


    Download » https://gohhs.com/2uEAkY



    - -

    How to Download Autodesk Navisworks Manage 2016 for Free

    - -

    Autodesk Navisworks Manage 2016 is available as a free trial for 30 days. You can download it from the official Autodesk website by following these steps:

    - -
      -
    1. Go to https://www.autodesk.com/products/navisworks/free-trial and click on "Download Free Trial".
    2. Select your operating system (Windows 64-bit) and your preferred language.
    3. Enter your personal information and click on "Next".
    4. Choose whether you want to download the software directly or use a download manager.
    5. Follow the instructions to install and activate the software.
    - -

    If you want to download the full version of Autodesk Navisworks Manage 2016 for free, you will need to find a reliable source that provides the ISO file and the crack. However, we do not recommend this option as it may expose your computer to viruses, malware, or legal issues. The best way to use Autodesk Navisworks Manage 2016 is to purchase a license from Autodesk or an authorized reseller.

    - -

    What are the Main Features and Benefits of Autodesk Navisworks Manage 2016

    - -

    Autodesk Navisworks Manage 2016 is a software that helps you improve BIM coordination and collaboration across different disciplines and stages of the project lifecycle. Some of the main features and benefits of this software are:

    - -
      -
    • Model Review: You can import and aggregate data from multiple sources, such as AutoCAD, Revit, Inventor, Civil 3D, and more. You can also view, navigate, measure, section, annotate, and redline models in 3D.
    • Clash Detection: You can identify and resolve clash and interference problems before construction using the powerful clash detection tools. You can also create reports, assign tasks, track status, and manage clashes.
    • 4D Simulation: You can link your model to project schedules and create 4D simulations to visualize the construction sequence and logistics. You can also analyze time and space conflicts, optimize resources, and communicate plans.
    • Quantification: You can extract quantities from your model and create bills of materials for cost estimation and procurement. You can also compare design changes and update quantities automatically.
    • Presentation: You can create photorealistic renderings, animations, panoramas, and interactive walkthroughs to showcase your design intent and communicate with stakeholders. You can also export your presentations to various formats, such as PDF, AVI, DWF, FBX, etc.
    - -

    Autodesk Navisworks Manage 2016 is a software that can help you improve your project delivery and quality by enabling you to review integrated models and data with stakeholders in a collaborative environment. By using this software, you can reduce errors, risks, costs, and delays in your projects.

    -

    How to Install and Activate Autodesk Navisworks Manage 2016

    - -

    After you download Autodesk Navisworks Manage 2016, you will need to install and activate it on your computer. Here are the steps to do so:

    - -
      -
    1. Extract the ISO file using a tool like WinRAR or 7-Zip.
    2. -
    3. Run the setup.exe file and follow the instructions to install the software.
    4. -
    5. When prompted, enter the serial number and product key that you received from Autodesk or your reseller.
    6. -
    7. After the installation is complete, run the software and click on "Activate".
    8. -
    9. Select "I have an activation code from Autodesk" and click on "Next".
    10. -
    11. Run the crack file that you downloaded from your source and copy the request code from the software.
    12. -
    13. Paste the request code into the crack file and click on "Generate".
    14. -
    15. Copy the activation code from the crack file and paste it into the software.
    16. -
    17. Click on "Next" and then "Finish" to complete the activation process.
    18. -
    - -

    You have now successfully installed and activated Autodesk Navisworks Manage 2016 on your computer. You can start using it to review your 3D models and data with stakeholders.

    -

    - -

    Conclusion

    - -

    Autodesk Navisworks Manage 2016 is a software that helps you improve BIM coordination and collaboration across different disciplines and stages of the project lifecycle. It allows you to import and aggregate data from multiple sources, identify and resolve clash and interference problems, simulate project schedules and logistics, extract quantities and create bills of materials, and create photorealistic renderings and animations. You can download it for free as a trial version for 30 days or purchase a license from Autodesk or an authorized reseller. You can also download it for free as a full version from a reliable source that provides the ISO file and the crack, but this may expose your computer to viruses, malware, or legal issues. We hope this article has helped you learn more about Autodesk Navisworks Manage 2016 and how to download it for free.

    -

    How to Compare Autodesk Navisworks Manage 2016 with Other Similar Software

    - -

    Autodesk Navisworks Manage 2016 is a software that helps you improve BIM coordination and collaboration across different disciplines and stages of the project lifecycle. However, it is not the only software that offers this functionality. There are other similar software that you can compare with Autodesk Navisworks Manage 2016, such as:

    - -
      -
    • Bentley Systems Synchro PRO: This software helps you plan, visualize, and deliver complex construction projects using 4D digital construction. You can import data from various sources, such as BIM, CAD, scheduling, and cost applications. You can also create 4D simulations, detect and resolve clashes, optimize resources, and monitor progress.
    • Trimble Connect: This software helps you connect your project stakeholders and streamline workflows using cloud-based collaboration. You can upload, share, and review data from multiple sources, such as BIM, CAD, PDF, and images. You can also create 3D models, perform clash detection, create markups and comments, and track changes.
    • Bluebeam Revu: This software helps you create, edit, and collaborate on PDF documents for design and construction projects. You can import data from various sources, such as BIM, CAD, images, and scans. You can also create markups and annotations, measure dimensions and areas, compare documents, and track revisions.
    - -

    These are some of the main features and benefits of each software that you can compare with Autodesk Navisworks Manage 2016. However, the best way to decide which software suits your needs is to try them out yourself. You can download free trials or demos of each software from their respective websites and test them on your own projects.

    -

    What are the Advantages and Disadvantages of Autodesk Navisworks Manage 2016

    - -

    Autodesk Navisworks Manage 2016 is a software that has many advantages and disadvantages for design and construction professionals. Some of the advantages are:

    - -
      -
    • It supports multiple file formats and data sources: You can import and aggregate data from various sources, such as BIM, CAD, PDF, images, scans, and more. This allows you to review integrated models and data with stakeholders in a collaborative environment.
    • It has powerful and comprehensive tools for project review: You can use tools such as model review, clash detection, 4D simulation, quantification, and presentation to improve BIM coordination and collaboration across different disciplines and stages of the project lifecycle. You can also reduce errors, risks, costs, and delays in your projects.
    • It has flexible and customizable workflows: You can customize your interface, settings, preferences, and options to suit your needs and preferences. You can also create scripts, macros, plugins, and extensions to enhance the functionality of the software.
    - -

    Some of the disadvantages are:

    - -
      -
    • It requires a high-performance computer system: You need a computer system that meets or exceeds the minimum system requirements for Autodesk Navisworks Manage 2016. The software can consume a lot of memory and processing power when dealing with large and complex models and data.
    • It has a steep learning curve: You need to spend some time and effort to learn how to use the software effectively and efficiently. The software has many features and functions that can be overwhelming for beginners or casual users.
    • It can be expensive: You need to purchase a license from Autodesk or an authorized reseller to use the software legally. The license can be costly depending on the type and duration of the license. You may also need to pay for maintenance, support, and updates.
    - -

    These are some of the main advantages and disadvantages of Autodesk Navisworks Manage 2016 that you should consider before using it for your projects.

    - -

    How to Get Help and Support for Autodesk Navisworks Manage 2016

    - -

    If you need help and support for Autodesk Navisworks Manage 2016, you have several options available. Some of them are:

    - -
      -
    • The Help menu: You can access the Help menu from within the software by clicking on the question mark icon or pressing F1. The Help menu provides access to various resources, such as online help, tutorials, videos, forums, blogs, feedback, downloads, updates, and more.
    • The Autodesk website: You can visit the official Autodesk website at https://www.autodesk.com/ to find more information about the software, such as features, specifications, system requirements, pricing, licensing, trials, demos, etc. You can also access various services, such as Autodesk Account, Autodesk Store, Autodesk University, Autodesk BIM 360, Autodesk Cloud Services, etc.
    • The Autodesk Knowledge Network: You can visit the Autodesk Knowledge Network at https://knowledge.autodesk.com/ to find more resources for learning and support. You can access articles, documentation, tutorials, videos, webinars, courses, certifications, forums, communities, blogs, events, etc.
    • The Autodesk Support Center: You can visit the Autodesk Support Center at https://www.autodesk.com/support to find more options for technical support. You can access support cases, chat with experts, call support agents, request remote assistance, submit feedback or report issues.
    - -

    These are some of the main options for getting help and support for Autodesk Navisworks Manage 2016. You can also contact your local reseller or partner for more assistance.

    -

    Conclusion

    - -

    Autodesk Navisworks Manage 2016 is a software that helps you improve BIM coordination and collaboration across different disciplines and stages of the project lifecycle. It allows you to import and aggregate data from multiple sources, identify and resolve clash and interference problems, simulate project schedules and logistics, extract quantities and create bills of materials, and create photorealistic renderings and animations. You can download it for free as a trial version for 30 days or purchase a license from Autodesk or an authorized reseller. You can also download it for free as a full version from a reliable source that provides the ISO file and the crack, but this may expose your computer to viruses, malware, or legal issues. We hope this article has helped you learn more about Autodesk Navisworks Manage 2016 and how to download it for free.

    -
    -
    \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/The Pursuit Of Happiness Love Junk Rar Free.md b/spaces/scedlatioru/img-to-music/example/The Pursuit Of Happiness Love Junk Rar Free.md deleted file mode 100644 index 7e74d7e8816d4827f15f1dd065834a324c0c80ee..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/The Pursuit Of Happiness Love Junk Rar Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

    the pursuit of happiness love junk rar


    DOWNLOADhttps://gohhs.com/2uEAA3



    -
    -
    -

    diff --git a/spaces/seduerr/communicaite/services/hate_speech.py b/spaces/seduerr/communicaite/services/hate_speech.py deleted file mode 100644 index 6bbe61c377a677e09ede0e8d453bd6eadfc3f5c3..0000000000000000000000000000000000000000 --- a/spaces/seduerr/communicaite/services/hate_speech.py +++ /dev/null @@ -1,20 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSequenceClassification -import torch.nn.functional as F -import torch - -# Hate Speech -tokenizer = AutoTokenizer.from_pretrained( - "mrm8488/distilroberta-finetuned-tweets-hate-speech") -model = AutoModelForSequenceClassification.from_pretrained( - "mrm8488/distilroberta-finetuned-tweets-hate-speech") - - -def classify_hatespeech(sentence): - preprocessed_text = sentence.strip().replace("\n", "") - inputs = tokenizer(preprocessed_text, return_tensors="pt") - labels = torch.tensor([1]).unsqueeze(0) - outputs = model(**inputs, labels=labels) - logits = outputs.logits - probs = torch.softmax(logits, dim=1) - nice = torch.flatten(probs).detach().numpy()[0] - return "{:.2f}".format(nice) diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/logger.py b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/logger.py deleted file mode 100644 index 18145f54c927abd59b95f3fa6e6da8002bc2ce97..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/logger.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import functools -import logging -import os -import sys - -from termcolor import colored - - -class _ColorfulFormatter(logging.Formatter): - def __init__(self, *args, **kwargs): - self._root_name = kwargs.pop("root_name") + "." - self._abbrev_name = kwargs.pop("abbrev_name", "") - if len(self._abbrev_name): - self._abbrev_name = self._abbrev_name + "." - super(_ColorfulFormatter, self).__init__(*args, **kwargs) - - def formatMessage(self, record): - record.name = record.name.replace(self._root_name, self._abbrev_name) - log = super(_ColorfulFormatter, self).formatMessage(record) - if record.levelno == logging.WARNING: - prefix = colored("WARNING", "red", attrs=["blink"]) - elif record.levelno == logging.ERROR or record.levelno == logging.CRITICAL: - prefix = colored("ERROR", "red", attrs=["blink", "underline"]) - else: - return log - return prefix + " " + log - - -# so that calling setup_logger multiple times won't add many handlers -@functools.lru_cache() -def setup_logger(output=None, distributed_rank=0, *, color=True, name="imagenet", abbrev_name=None): - """ - Initialize the detectron2 logger and set its verbosity level to "INFO". - - Args: - output (str): a file name or a directory to save log. If None, will not save log file. - If ends with ".txt" or ".log", assumed to be a file name. - Otherwise, logs will be saved to `output/log.txt`. 
- name (str): the root module name of this logger - - Returns: - logging.Logger: a logger - """ - logger = logging.getLogger(name) - logger.setLevel(logging.DEBUG) - logger.propagate = False - - if abbrev_name is None: - abbrev_name = name - - plain_formatter = logging.Formatter( - "[%(asctime)s.%(msecs)03d]: %(message)s", datefmt="%m/%d %H:%M:%S" - ) - # stdout logging: master only - if distributed_rank == 0: - ch = logging.StreamHandler(stream=sys.stdout) - ch.setLevel(logging.DEBUG) - if color: - formatter = _ColorfulFormatter( - colored("[%(asctime)s.%(msecs)03d]: ", "green") + "%(message)s", - datefmt="%m/%d %H:%M:%S", - root_name=name, - abbrev_name=str(abbrev_name), - ) - else: - formatter = plain_formatter - ch.setFormatter(formatter) - logger.addHandler(ch) - - # file logging: all workers - if output is not None: - if output.endswith(".txt") or output.endswith(".log"): - filename = output - else: - filename = os.path.join(output, "log.txt") - if distributed_rank > 0: - filename = filename + f".rank{distributed_rank}" - os.makedirs(os.path.dirname(filename), exist_ok=True) - - fh = logging.StreamHandler(_cached_log_stream(filename)) - fh.setLevel(logging.DEBUG) - fh.setFormatter(plain_formatter) - logger.addHandler(fh) - - return logger - - -# cache the opened file object, so that different calls to `setup_logger` -# with the same file name can safely write to the same file. -@functools.lru_cache(maxsize=None) -def _cached_log_stream(filename): - return open(filename, "a") diff --git a/spaces/senquan/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/senquan/ChuanhuChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/senquan/ChuanhuChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. 
\ No newline at end of file diff --git a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.py b/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.py deleted file mode 100644 index f490c4bbd598a35de43d36ceafcbd769e7ff21bf..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.py +++ /dev/null @@ -1,43 +0,0 @@ -batch_size = 1 -modelname = "groundingdino" -backbone = "swin_B_384_22k" -position_embedding = "sine" -pe_temperatureH = 20 -pe_temperatureW = 20 -return_interm_indices = [1, 2, 3] -backbone_freeze_keywords = None -enc_layers = 6 -dec_layers = 6 -pre_norm = False -dim_feedforward = 2048 -hidden_dim = 256 -dropout = 0.0 -nheads = 8 -num_queries = 900 -query_dim = 4 -num_patterns = 0 -num_feature_levels = 4 -enc_n_points = 4 -dec_n_points = 4 -two_stage_type = "standard" -two_stage_bbox_embed_share = False -two_stage_class_embed_share = False -transformer_activation = "relu" -dec_pred_bbox_embed_share = True -dn_box_noise_scale = 1.0 -dn_label_noise_ratio = 0.5 -dn_label_coef = 1.0 -dn_bbox_coef = 1.0 -embed_init_tgt = True -dn_labelbook_size = 2000 -max_text_len = 256 -text_encoder_type = "bert-base-uncased" -use_text_enhancer = True -use_fusion_layer = True -use_checkpoint = True -use_transformer_ckpt = True -use_text_cross_attention = True -text_dropout = 0.0 -fusion_dropout = 0.0 -fusion_droppath = 0.1 -sub_sentence_present = True diff --git a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/common_modules.py b/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/common_modules.py deleted file mode 100644 index f239c870bde49e1e5b1a7e6622c5ef4f44a37b3f..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/common_modules.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""A collection of common Haiku modules for use in protein folding.""" -import haiku as hk -import jax.numpy as jnp - - -class Linear(hk.Module): - """Protein folding specific Linear Module. - - This differs from the standard Haiku Linear in a few ways: - * It supports inputs of arbitrary rank - * Initializers are specified by strings - """ - - def __init__(self, - num_output: int, - initializer: str = 'linear', - use_bias: bool = True, - bias_init: float = 0., - name: str = 'linear'): - """Constructs Linear Module. - - Args: - num_output: number of output channels. - initializer: What initializer to use, should be one of {'linear', 'relu', - 'zeros'} - use_bias: Whether to include trainable bias - bias_init: Value used to initialize bias. - name: name of module, used for name scopes. - """ - - super().__init__(name=name) - self.num_output = num_output - self.initializer = initializer - self.use_bias = use_bias - self.bias_init = bias_init - - def __call__(self, inputs: jnp.ndarray) -> jnp.ndarray: - """Connects Module. 
- - Args: - inputs: Tensor of shape [..., num_channel] - - Returns: - output of shape [..., num_output] - """ - n_channels = int(inputs.shape[-1]) - - weight_shape = [n_channels, self.num_output] - if self.initializer == 'linear': - weight_init = hk.initializers.VarianceScaling(mode='fan_in', scale=1.) - elif self.initializer == 'relu': - weight_init = hk.initializers.VarianceScaling(mode='fan_in', scale=2.) - elif self.initializer == 'zeros': - weight_init = hk.initializers.Constant(0.0) - - weights = hk.get_parameter('weights', weight_shape, inputs.dtype, - weight_init) - - # this is equivalent to einsum('...c,cd->...d', inputs, weights) - # but turns out to be slightly faster - inputs = jnp.swapaxes(inputs, -1, -2) - output = jnp.einsum('...cb,cd->...db', inputs, weights) - output = jnp.swapaxes(output, -1, -2) - - if self.use_bias: - bias = hk.get_parameter('bias', [self.num_output], inputs.dtype, - hk.initializers.Constant(self.bias_init)) - output += bias - - return output diff --git a/spaces/simonraj/ELOralCoachHONGWEN/app.py b/spaces/simonraj/ELOralCoachHONGWEN/app.py deleted file mode 100644 index 0f500ec010799aa1bec52d10fdf77fc387bb2cec..0000000000000000000000000000000000000000 --- a/spaces/simonraj/ELOralCoachHONGWEN/app.py +++ /dev/null @@ -1,88 +0,0 @@ -#app.py -import gradio as gr -import openai -import os -import HongWenData # Importing the HongWenData module -import base64 - -OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") -openai.api_key = OPENAI_API_KEY - -def image_to_base64(img_path): - with open(img_path, "rb") as img_file: - return base64.b64encode(img_file.read()).decode('utf-8') - -img_base64 = image_to_base64("HongWenSBC.JPG") -img_html = f'SBC6' - -def predict(question_choice, audio): - # Transcribe the audio using Whisper - with open(audio, "rb") as audio_file: - transcript = openai.Audio.transcribe("whisper-1", audio_file) - message = transcript["text"] # This is the transcribed message from the audio input - - # Generate the system message based on the chosen question - strategy, explanation = HongWenData.strategy_text["TREES"] - - # Reference to the picture description from HongWenData.py - picture_description = HongWenData.description - - # Determine whether to include the picture description based on the question choice - picture_description_inclusion = f""" - For the first question, ensure your feedback refers to the picture description provided: - {picture_description} - """ if question_choice == HongWenData.questions[0] else "" - - # Construct the conversation with the system and user's message - conversation = [ - { - "role": "system", - "content": f""" - You are an expert English Language Teacher in a Singapore Primary school, directly guiding a Primary 6 student in Singapore. - The student is answering the question: '{question_choice}'. - {picture_description_inclusion} - Point out areas they did well and where they can improve, following the {strategy}. - Encourage the use of sophisticated vocabulary and expressions. - For the second and third questions, the picture is not relevant, so the student should not refer to it in their response. - {explanation} - The feedback should be in second person, addressing the student directly. 
- """ - }, - {"role": "user", "content": message} - ] - - - response = openai.ChatCompletion.create( - model='gpt-3.5-turbo', - messages=conversation, - temperature=0.6, - max_tokens=1000, # Limiting the response to 1000 tokens - stream=True - ) - - partial_message = "" - for chunk in response: - if len(chunk['choices'][0]['delta']) != 0: - partial_message = partial_message + chunk['choices'][0]['delta']['content'] - yield partial_message - -# Gradio Interface -iface = gr.Interface( - fn=predict, - inputs=[ - gr.Radio(HongWenData.questions, label="Choose a question", default=HongWenData.questions[0]), # Dropdown for question choice - gr.inputs.Audio(source="microphone", type="filepath") # Audio input - ], - outputs=gr.inputs.Textbox(), # Using inputs.Textbox as an output to make it editable - description=img_html + ''' - - ''', # Corrected string concatenation - css="custom.css" # Link to the custom CSS file -) - -iface.queue(max_size=99, concurrency_count=40).launch(debug=True) - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 2048 Mod APK for Android and IOS The Ultimate Puzzle Game.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 2048 Mod APK for Android and IOS The Ultimate Puzzle Game.md deleted file mode 100644 index 0d785d3b56b898822efd6bf22587c016b35d5a1a..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 2048 Mod APK for Android and IOS The Ultimate Puzzle Game.md +++ /dev/null @@ -1,116 +0,0 @@ -
    -

    2048 Mod APK: A Fun and Addictive Puzzle Game

    -

    If you are looking for a simple yet challenging puzzle game that can keep you entertained for hours, you might want to try 2048 mod apk. This is a modified version of the original 2048 game that offers more features and benefits for the players. In this article, we will tell you everything you need to know about 2048 mod apk, including what it is, how to play it, why it is so popular, what are its features, how to download and install it, and what are its pros and cons.

    -

    What is 2048?

    -

    2048 is a puzzle game that was created by Gabriele Cirulli in 2014. The game is inspired by other similar games such as Threes and 1024. The goal of the game is to slide numbered tiles on a 4x4 grid and combine them to create a tile with the number 2048. The game is over when there are no more moves left or when the player reaches the 2048 tile.

    -

    2048 mod apk


Download File: https://ssurll.com/2uO0AM



    -

    How to play 2048?

    -

    The game is very easy to play. You just need to swipe your finger on the screen to move the tiles in the direction you want. When two tiles with the same number touch, they merge into one tile with the sum of their numbers. For example, if you swipe left and there are two tiles with the number 2 on the leftmost column, they will merge into one tile with the number 4. You can also use the arrow keys on your keyboard if you are playing on a computer.
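
To make the sliding-and-merging rule concrete, here is a minimal Python sketch of how one row could be collapsed when you swipe left. It is an illustration only, not code from the actual game: the function name, the 4-tile row, and the scoring shown here are assumptions made for the example.

```python
def merge_row_left(row):
    """Collapse one row of a 2048 board toward the left.

    `row` is a list such as [2, 2, 4, 0]; zeros are empty cells.
    Returns the new row and the points scored by the merges.
    """
    tiles = [t for t in row if t != 0]          # slide: drop empty cells
    merged, score = [], 0
    i = 0
    while i < len(tiles):
        # two equal neighbours merge into a single tile of double value
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)
            score += tiles[i] * 2
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    merged += [0] * (len(row) - len(merged))    # pad back to full length
    return merged, score

# Swiping left on [2, 2, 4, 0] gives ([4, 4, 0, 0], 4).
print(merge_row_left([2, 2, 4, 0]))
```

The full game simply applies this row operation to every row or column in the chosen direction and then spawns a new 2 or 4 tile on a random empty cell.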

    -

    Why is 2048 so popular?

    -

    There are many reasons why 2048 is so popular among puzzle game lovers. Some of them are:

    -
      -
    • The game is simple but challenging. It does not require any special skills or knowledge, but it still tests your logic and strategy.
    • -
    • The game is addictive. It makes you want to play more and more until you reach the highest score possible.
    • -
    • The game is relaxing. It does not have any time limit or pressure, so you can play it at your own pace and enjoy the soothing sound effects and music.
    • -
    • The game is fun. It gives you a sense of satisfaction and achievement when you create a new tile or beat your previous score.
    • -
    -

    What is 2048 mod apk?

    -

    2048 mod apk is a modified version of the original 2048 game that offers more features and benefits for the players. It is not available on the official app stores, but you can download it from third-party websites such as Apkloli. By downloading and installing 2048 mod apk, you can enjoy the following features:

    -

    Features of 2048 mod apk

    -

    Unlimited money

    -

    With 2048 mod apk, you can get unlimited money that you can use to buy various items in the game. For example, you can buy hints that can help you make better moves, or boosters that can increase your score or remove unwanted tiles.

    -

    No ads

    -

    Another benefit of 2048 mod apk is that it removes all the annoying ads that interrupt your gameplay. You can play the game without any distractions or interruptions.

    -

    2048 mod apk unlimited money
    -2048 mod apk download for android
    -2048 mod apk latest version
    -2048 mod apk no ads
    -2048 mod apk ios
    -2048 mod apk free download
    -2048 mod apk hack
    -2048 mod apk revdl
    -2048 mod apk apkpure
    -2048 mod apk rexdl
    -2048 mod apk offline
    -2048 mod apk online
    -2048 mod apk with cheat menu
    -2048 mod apk unlimited undo
    -2048 mod apk unlimited coins
    -2048 mod apk unlimited gems
    -2048 mod apk unlimited moves
    -2048 mod apk unlimited time
    -2048 mod apk unlimited hints
    -2048 mod apk unlimited stars
    -2048 mod apk premium
    -2048 mod apk pro
    -2048 mod apk plus
    -2048 mod apk mega
    -2048 mod apk vip
    -2048 mod apk original
    -2048 mod apk classic
    -2048 mod apk puzzle
    -2048 mod apk adventure
    -2048 mod apk challenge
    -2048 mod apk fun
    -2048 mod apk cute
    -2048 mod apk cool
    -2048 mod apk awesome
    -2048 mod apk best
    -2048 mod apk new
    -2048 mod apk old
    -2048 mod apk updated
    -2048 mod apk full version
    -2048 mod apk cracked version

    -

    Custom themes

    -

    If you are bored with the default theme of the game, you can change it with 2048 mod apk. You can choose from different themes such as animals, fruits, flowers, colors, emojis, and more. You can also create your own theme by using your own images and sounds.

    -

    Undo and redo moves

    -

    Sometimes, you might regret making a certain move or want to try a different strategy. With 2048 mod apk, you can undo and redo your moves as many times as you want. This can help you avoid mistakes and improve your chances of winning.

    -

    How to download and install 2048 mod apk?

    -

    If you want to download and install 2048 mod apk, you need to follow these simple steps:

    -
      -
    1. Go to the website where you can download 2048 mod apk, such as Apkloli. Make sure you choose a reliable and safe source.
    2. -
    3. Click on the download button and wait for the file to be downloaded on your device.
    4. -
    5. Go to your device settings and enable the installation of apps from unknown sources. This is necessary because 2048 mod apk is not from the official app stores.
    6. -
7. Locate the downloaded file and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to be completed. (You can also sideload the file from a computer using Android's adb tool, as sketched after this list.)
    8. -
    9. Launch the game and enjoy playing 2048 mod apk with all its features.
    10. -
    -
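
If you would rather install from a computer over USB, the same APK can usually be sideloaded with Android's adb tool once USB debugging is enabled on your device. The file name below is just a placeholder for whatever your downloaded file is called.

```
adb install 2048-mod.apk
```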

    Pros and cons of 2048 mod apk

    -

    Like any other app, 2048 mod apk has its pros and cons. Here are some of them:

    -

    Pros

    -
      -
    • It enhances the gameplay experience by adding more features and options.
    • -
    • It allows you to customize the game according to your preferences and tastes.
    • -
    • It eliminates the ads that can disrupt your concentration and enjoyment.
    • -
    • It gives you unlimited money that you can use to buy useful items and boosters.
    • -
    • It lets you undo and redo your moves as much as you want.
    • -
    -

    Cons

    -
      -
    • It is not available on the official app stores, so you need to download it from third-party websites that may not be secure or trustworthy.
    • -
    • It may not be compatible with some devices or operating systems.
    • -
    • It may cause some glitches or errors in the game performance or functionality.
    • -
    • It may violate the terms and conditions of the original game developer or publisher.
    • -
    • It may reduce the challenge and difficulty of the game by making it too easy or unfair.
    • -
    -

    Conclusion

    -

    In conclusion, 2048 mod apk is a fun and addictive puzzle game that offers more features and benefits than the original 2048 game. It allows you to play the game with unlimited money, no ads, custom themes, undo and redo moves, and more. However, it also has some drawbacks, such as being unavailable on the official app stores, causing some technical issues, and violating some rules. Therefore, you should weigh the pros and cons before downloading and installing 2048 mod apk on your device. If you decide to try it, make sure you download it from a reliable and safe source, such as Apkloli. We hope this article has been helpful and informative for you. Thank you for reading!

    -

    Frequently Asked Questions

    -

    Here are some of the most common questions that people ask about 2048 mod apk:

    -

    Q: Is 2048 mod apk free?

    -

    A: Yes, 2048 mod apk is free to download and play. You do not need to pay any money to enjoy its features and benefits.

    -

    Q: Is 2048 mod apk safe?

    -

    A: It depends on where you download it from. Some websites may offer fake or malicious files that can harm your device or steal your data. Therefore, you should always download 2048 mod apk from a reputable and trusted source, such as Apkloli. You should also scan the file with an antivirus program before installing it.

    -

    Q: Is 2048 mod apk legal?

    -

    A: It is not clear whether 2048 mod apk is legal or not. It may depend on the laws and regulations of your country or region. Some countries may allow modifying or hacking apps for personal use, while others may prohibit or penalize such activities. You should also consider the rights and interests of the original game developer or publisher, who may not approve of modifying or distributing their app without their permission or consent. Therefore, you should use 2048 mod apk at your own risk and responsibility.

    -

    Q: How can I update 2048 mod apk?

    -

    A: Since 2048 mod apk is not from the official app stores, you cannot update it automatically or manually through them. You need to download the latest version of 2048 mod apk from the same website where you downloaded the previous version. You should also check the website regularly for any updates or news about 2048 mod apk.

    -

    Q: How can I uninstall 2048 mod apk?

    -

    A: If you want to uninstall 2048 mod apk from your device, you can follow these steps:

    -
      -
    1. Go to your device settings and find the apps or applications section.
    2. -
    3. Find and tap on 2048 mod apk from the list of installed apps.
    4. -
    5. Tap on the uninstall button and confirm your action.
    6. -
    7. Wait for the app to be uninstalled from your device.
    8. -

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 3750 IOS Image Files for Cisco Catalyst Switches.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 3750 IOS Image Files for Cisco Catalyst Switches.md deleted file mode 100644 index fbb5d13572b833bbf87beab0c4944be870b9a8a1..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 3750 IOS Image Files for Cisco Catalyst Switches.md +++ /dev/null @@ -1,137 +0,0 @@ -
    -

    How to Download and Install IOS on Cisco Catalyst 3750 Switches

    -

    If you are using Cisco Catalyst 3750 switches in your network, you might want to keep them updated with the latest Cisco IOS software. IOS, or Internetwork Operating System, is the software that runs on Cisco routers and switches, providing network services and protocols. Updating your IOS software can improve your network performance, security, and reliability, as well as fix bugs and add new features.

    -

    download 3750 ios


    Download Zip 🗸 https://ssurll.com/2uNZHO



    -

    In this article, we will show you how to download and install IOS on Cisco Catalyst 3750 switches, which are innovative switches that combine industry-leading ease of use and high resiliency for stackable switches. They feature Cisco StackWise technology, a 32-Gbps stack interconnect that allows customers to build a unified, highly resilient switching system, one switch at a time.

    -

    Prerequisites

    -

    Before you start the IOS upgrade process, you need to make sure that you have the following hardware and software requirements:

    -
      -
    • A PC or workstation with a TFTP or RCP server application installed. You will use this to transfer the IOS image file from your PC to the switch. You can download a TFTP server for Windows from here.
    • -
    • A console cable (usually a flat black cable) that connects the console port of the switch to the COM port of your PC. You will use this to establish a console session to the switch.
    • -
• A valid Cisco IOS image file for your switch model and feature set. You can obtain this from Cisco Software Central. Make sure that you choose the correct image file that supports your hardware and software features, and that your switch has enough memory to run it. You can also verify the integrity of the image file by checking its MD5 checksum (an on-switch example is shown just after this list).
    • -
    -
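
As a quick illustration of the last point, once the image file is on the switch you can have IOS compute the hash for you and compare it with the value listed on the Cisco download page. The file name below is only an example; substitute the name of the image you actually downloaded, and re-download the file if the hashes do not match.

```
Switch#verify /md5 flash:c3750-ipbasek9-mz.150-2.SE11.bin
```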

    Steps to Download and Install IOS on Cisco Catalyst 3750 Switches

    -

    Once you have all the prerequisites ready, you can follow these steps to download and install IOS on your switch:

    -

    Step 1: Establish a console session to the switch

    -

    Connect your PC to the switch using the console cable. Then, open a terminal emulation program (such as PuTTY or HyperTerminal) on your PC and configure it with these settings:

    - - - - - - - -
| Parameter | Value |
| --- | --- |
| Baud rate | 9600 bps |
| Data bits | 8 |
| Parity | None |
| Stop bits | 1 |
| Flow control | None |
    -

    Press Enter a few times until you see the switch prompt. If you are prompted for a username and password, enter them accordingly. If you do not have them, contact your network administrator. You should see a prompt like this:

    -
    -Switch>
    -
    -

    If you are in user mode, enter the enable command to enter privileged mode. You should see a prompt like this:

    -
    -Switch# 
    -

    Step 2: Verify the current IOS version and feature set

    -

    To check the current IOS version and feature set running on your switch, enter the show version command. You should see an output like this:

    -

    download 3750 ios bin file
    -download 3750 ios tar image
    -download 3750 ios upgrade
    -download 3750 ios stack configuration
    -download 3750 ios release notes
    -download 3750 ios cisco support
    -download 3750 ios software center
    -download 3750 ios recovery
    -download 3750 ios switch stack
    -download 3750 ios latest version
    -download 3750 ios web management
    -download 3750 ios command line interface
    -download 3750 ios flash file system
    -download 3750 ios feature set
    -download 3750 ios end of life
    -download 3750 ios end of support
    -download 3750 ios device manager
    -download 3750 ios network assistant
    -download 3750 ios troubleshooting
    -download 3750 ios verify
    -download 3750 ios reload
    -download 3750 ios boot variable
    -download 3750 ios tftp server
    -download 3750 ios checksum error
    -download 3750 ios version mismatch
    -download 3750 ios catalyst series switches
    -download 3750 ios superconducting tokamak advanced research facility
    -download 3750 ios net energy gain
    -download 3750 ios holy grail fusion experiment
    -download 3750 ios mini sun breakthrough
    -download 3750 ios nuclear fusion reaction temperature
    -download 3750 ios kelvin scale conversion
    -download 3750 ios core of the sun comparison
    -download 3750 ios solar atmosphere composition
    -download 3750 ios photosphere thickness and pressure
    -download 3750 ios chromosphere and corona layers
    -download 3750 ios sun spot cycle and activity
    -download 3750 ios solar wind and magnetic field effects
    -download 3750 ios helioseismology and neutrino detection methods
    -download 3750 ios solar evolution and lifespan estimation

    -
    -Switch#show version Cisco IOS Software, C3750 Software (C3750-IPSERVICESK9-M), Version 12.2(55)SE10, RELEASE SOFTWARE (fc2) Technical Support: http://www.cisco.com/techsupport Copyright (c) 1986-2016 by Cisco Systems, Inc. Compiled Thu 21-Jan-16 08:54 by prod_rel_team Image text-base: 0x01000000, data-base: 0x02F00000 ROM: Bootstrap program is C3750 boot loader BOOTLDR: C3750 Boot Loader (C3750-HBOOT-M) Version 12.2(44)SE5, RELEASE SOFTWARE (fc1) Switch uptime is 1 hour, 23 minutes System returned to ROM by power-on System image file is "flash:c3750-ipservicesk9-mz.122-55.SE10.bin" ... 
    -

    The line that starts with System image file shows the name and location of the IOS image file. In this example, the image file is c3750-ipservicesk9-mz.122-55.SE10.bin and it is stored in the flash memory of the switch. The name of the image file also indicates the IOS version and feature set. In this example, the IOS version is 12.2(55)SE10 and the feature set is IP Services.

    -

    Step 3: Delete the old IOS image file from the flash memory

    -

    To free up some space on the flash memory for the new IOS image file, you need to delete the old IOS image file. To do this, enter the delete flash: command, where is the name of the old IOS image file. For example:

    -
    -Switch#delete flash:c3750-ipservicesk9-mz.122-55.SE10.bin Delete filename [c3750-ipservicesk9-mz.122-55.SE10.bin]?  Delete flash:c3750-ipservicesk9-mz.122-55.SE10.bin? [confirm] 
    -

    Press Enter to confirm the deletion. You should see a message like this:

    -
    -Deleting flash:c3750-ipservicesk9-mz.122-55.SE10.bin...done 
    -

    Step 4: Copy the new IOS image file to the flash memory using TFTP or RCP

    -

To copy the new IOS image file from your PC to the switch, you can use either the TFTP or the RCP protocol. TFTP is a simple and widely used protocol for transferring files over a network, but it does not provide any security or authentication features. RCP adds basic authentication and tends to be more reliable for large files, but it is not encrypted and requires more configuration on both ends; if you need an encrypted transfer, SCP (which runs over SSH) is the better choice.

    -

    In this article, we will use TFTP as an example, but you can also use RCP if you prefer. To copy the new IOS image file using TFTP, follow these steps:

    -
      -
    1. On your PC, make sure that your TFTP server application is running and that the new IOS image file is in the root directory of the TFTP server.
    2. -
    3. On your switch, enter the copy tftp flash command. You will be prompted for some information, such as the IP address of your PC, the name of the IOS image file, and the destination filename on the flash memory. For example:
    4. -
      -Switch#copy tftp flash Address or name of remote host []? 192.168.1.100 Source filename []? c3750-ipbasek9-mz.150-2.SE11.bin Destination filename [c3750-ipbasek9-mz.150-2.SE11.bin]?  Accessing tftp://192.168.1.100/c3750-ipbasek9-mz.150-2.SE11.bin... Loading c3750-ipbasek9-mz.150-2.SE11.bin from 192.168.1.100 (via Vlan1 ): !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! [OK - 33509312 bytes] 33509312 bytes copied in 120.456 secs (278213 bytes/sec) 
      -
    5. Wait until the file transfer is complete and verify that the new IOS image file is in the flash memory by entering the dir flash: command. You should see an output like this:
    6. -
      -Switch#dir flash: Directory of flash:/     2  -rwx        3350   Mar 1 1993 00:04:05 +00:00  config.text     3  -rwx        2072   Mar 1 1993 00:04:05 +00:00  private-config.text     4  -rwx    33509312   Jun 20 2023 17:15:23 +00:00  c3750-ipbasek9-mz.150-2.SE11.bin     5  drwx         192   Mar 1 1993 00:05:42 +00:00  c3750-ipbasek9-mz.122-55.SE10     ... 
      -
    -

    Step 5: Configure the boot variable to load the new IOS image on startup

    -

    To make the switch load the new IOS image on startup, you need to configure the boot variable with the name and location of the new IOS image file. To do this, enter the conf t command to enter global configuration mode, and then enter the boot system flash: command, where is the name of the new IOS image file. For example:

    -
    -Switch#conf t Enter configuration commands, one per line. End with CNTL/Z. Switch(config)#boot system flash:c3750-ipbasek9-mz.150-2.SE11.bin Switch(config)#end Switch# 
    -

    To verify that the boot variable is set correctly, enter the show boot command. You should see an output like this:

    -
    -Switch#show boot BOOT path-list      : flash:c3750-ipbasek9-mz.150-2.SE11.bin Config file         : flash:/config.text Private Config file : flash:/private-config.text Enable Break        : no Manual Boot         : no HELPER path-list    : Auto upgrade        : yes Auto upgrade path   : NVRAM/Config file buffer size:       : 524288 Timeout for Config Download:          : 0 seconds Config Download via DHCP:          : disabled (next boot: disabled) 
    -

    Step 6: Reload the switch and verify the upgrade

    -

    To apply the changes and load the new IOS image, you need to reload the switch. To do this, enter the reload command and confirm it. For example:

    -
    -Switch#reload Proceed with reload? [confirm] 
    -

    The switch will reboot and load the new IOS image. To verify that the upgrade was successful, enter the show version command again and check the IOS version and feature set. You should see an output like this:

    -
    -Switch#show version Cisco IOS Software, C3750 Software (C3750-IPBASEK9-M), Version 15.0(2)SE11, RELEASE SOFTWARE (fc3) Technical Support: http://www.cisco.com/techsupport Copyright (c) 1986-2018 by Cisco Systems, Inc. Compiled Mon 26-Feb-18 12:49 by prod_rel_team Image text-base: 0x01000000, data-base: 0x02F00000 ROM: Bootstrap program is C3750 boot loader BOOTLDR: C3750 Boot Loader (C3750-HBOOT-M) Version 12.2(44)SE5, RELEASE SOFTWARE (fc1) Switch uptime is 5 minutes System returned to ROM by power-on System image file is "flash:c3750-ipbasek9-mz.150-2.SE11.bin" ... 
    -

    Conclusion

    -

    In this article, we have shown you how to download and install IOS on Cisco Catalyst 3750 switches using TFTP protocol. By following these steps, you can keep your switches updated with the latest IOS software and enjoy improved network performance, security, and reliability.

    -

    Here are some tips and best practices for IOS upgrade:

    -
      -
• Always back up your switch configuration before upgrading IOS, for example by copying the running configuration to your TFTP server (a one-line example is shown after this list).
    • -
• Always verify the integrity of the IOS image file by checking its MD5 checksum and comparing it with the one provided by Cisco.
    • -
    • Always use a reliable and secure protocol for transferring the IOS image file, such as RCP or SCP.
    • -
    • Always test the new IOS image on a non-production switch before deploying it to the production network.
    • -
    • Always follow the Cisco documentation and guidelines for IOS upgrade, which you can find here.
    • -
    -
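
For the first tip, a single command is usually enough to save a copy of your configuration to the same TFTP server you used for the image transfer. The server address and file name below are placeholders; replace them with your own values.

```
Switch#copy running-config tftp://192.168.1.100/switch-backup.cfg
```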

    FAQs

    -

    Here are some frequently asked questions and answers about IOS upgrade on Cisco Catalyst 3750 switches:

    -

    Q1: What is Cisco StackWise technology and how does it affect IOS upgrade?

    -

    A1: Cisco StackWise technology is a feature that allows up to nine Catalyst 3750 switches to operate as a single logical unit, sharing a common control plane, data plane, and management plane. This provides high availability, scalability, and simplified management for stackable switches. When you upgrade IOS on a switch stack, you need to make sure that all the stack members are running the same IOS version and feature set. You also need to use the archive download-sw command instead of the copy tftp flash command to copy the IOS image file to all the stack members simultaneously.
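
As a rough sketch of what that looks like, the stack-wide upgrade uses the .tar packaging of the image rather than the plain .bin file. The server address and file name below are placeholders, and the /overwrite keyword replaces the software on every stack member, so check the command reference for your release before running it.

```
Switch#archive download-sw /overwrite tftp://192.168.1.100/c3750-ipbasek9-tar.150-2.SE11.tar
```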

    -

    Q2: How can I recover from a corrupted or failed IOS upgrade?

    -

A2: If your switch fails to boot up after an IOS upgrade, you might have a corrupted or incompatible IOS image file on the flash memory. In this case, you need to use ROMmon mode to recover from the problem. ROMmon, or ROM monitor, is a low-level mode that allows you to perform basic troubleshooting and maintenance tasks on your switch. To enter ROMmon mode on these switches, interrupt the boot process by holding down the Mode button on the front panel while the switch powers on. Then, you can use ROMmon commands to re-initialize the flash memory, download a new IOS image file using the Xmodem protocol, and boot the switch with the new image. For more details on how to use ROMmon mode, refer to this document.
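
The exact prompts and commands depend on your boot loader version, so treat the following only as a rough outline of the Xmodem recovery described above and follow the Cisco recovery document for your platform. The image name is a placeholder.

```
switch: flash_init
switch: copy xmodem: flash:c3750-ipbasek9-mz.150-2.SE11.bin
switch: boot flash:c3750-ipbasek9-mz.150-2.SE11.bin
```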

    -

    Q3: How can I check the free space and memory usage on the flash memory?

    -

    A3: To check the free space and memory usage on the flash memory, you can use the show flash: command. This command will display information such as the total size, available size, used size, erase size, and directory of files on the flash memory. For example:

    -
    -Switch#show flash: Directory of flash:/     2  -rwx        3350   Mar 1 1993 00:04:05 +00:00  config.text     3  -rwx        2072   Mar 1 1993 00:04:05 +00:00  private-config.text     4  -rwx    33509312   Jun 20 2023 17:15:23 +00:00  c3750-ipbasek9-mz.150-2.SE11.bin     ... 32514048 bytes total (0 bytes free) 
    -

    Q4: How can I use a web-based device manager or network assistant to upgrade IOS?

    -

    A4: If you prefer a graphical user interface (GUI) over a command-line interface (CLI) for IOS upgrade, you can use either a web-based device manager or network assistant. A web-based device manager is a built-in web server on your switch that allows you to access and configure your switch using a web browser. A network assistant is a standalone application that allows you to manage multiple switches and routers in your network using a single interface. Both tools provide an easy and intuitive way to upgrade IOS on your switch. For more details on how to use these tools, refer to this document and this document.

    -

    Q5: How can I choose the best IOS release and feature set for my switch?

    -

    A5: Choosing the best IOS release and feature set for your switch depends on several factors, such as your hardware model, software requirements, network environment, and budget. Generally speaking, you should choose an IOS release that is stable, secure, and compatible with your hardware and software features. You should also choose an IOS feature set that meets your network needs and does not exceed your memory and license limitations. To help you choose the best IOS release and feature set for your switch, you can use tools such as Cisco Feature Navigator, Cisco Software AdvisorCisco Software Research Tool.

    -

    -

    Thank you for reading this article. I hope you have learned how to download and install IOS on Cisco Catalyst 3750 switches. If you have any questions or feedback, please leave a comment below. Have a great day!

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Among Us Roles and Play with Over 25 New Roles in The Other Roles MOD.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Among Us Roles and Play with Over 25 New Roles in The Other Roles MOD.md deleted file mode 100644 index 70e3b68085f2a89d2a1ab3bbecd382eabf8de149..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Among Us Roles and Play with Over 25 New Roles in The Other Roles MOD.md +++ /dev/null @@ -1,136 +0,0 @@ - -

    How to Download and Play Among Us Roles Mod

    -

    Among Us is a multiplayer game where players have to work together as crewmates or impostors on a spaceship. Crewmates have to complete tasks and find the impostors, while impostors have to kill crewmates and sabotage the ship. However, if you want to spice up your gameplay and try something new, you might want to check out Among Us Roles Mod.

    -

    What is Among Us Roles Mod?

    -

    A brief introduction to the mod and its features

    -

    Among Us Roles Mod is a custom mod that adds seven new roles to the game, each with different abilities and mechanics. The roles are divided into two categories: crewmate roles and impostor roles. Crewmate roles are assigned randomly to some of the crewmates, while impostor roles are assigned randomly to some of the impostors. The mod also allows you to customize the settings for each role, such as the cooldown, duration, and probability.

    -

    download among us roles


    Download Zip ————— https://ssurll.com/2uNUj1



    -

    The list of roles and their abilities

    -

    The mod currently includes the following roles:

    - - - - - - - - - -
| Role | Category | Ability |
| --- | --- | --- |
| Crewmate | Crewmate | The default role that has to do tasks and find impostors. |
| Impostor | Impostor | The default role that has to kill crewmates and sabotage the ship. |
| Scientist | Crewmate | A role that can check the vitals of all players in the game and see when someone dies. |
| Engineer | Crewmate | A role that can use vents like impostors, but for a limited time and with a cooldown. |
| Guardian Angel | Crewmate | A role that can shield and protect another player from being killed by impostors. |
| Shapeshifter | Impostor | A role that can transform into another player for a limited time. |
| Ghost | N/A | A role that dead players get that can chat with other ghosts and spectate the game. |
    -

    How to Download and Install Among Us Roles Mod

    -

    The steps to download the mod from GitHub

    -

    To get the mod, you will need to follow these steps:

    -

    How to download and install the Extra Roles mod for Among Us
    -Among Us All The Roles mod by Zeo666: features and download link
    -Among Us Other Roles mod tutorial: how to play as Medic, Officer, Engineer, and Joker
    -Best Among Us role mods to spice up your gameplay
    -Where to find and download the latest Among Us role mods for PC
    -How to play as Sheriff, Doctor, Jester, and more in Among Us with role mods
    -Among Us role mods: what are they and how do they work
    -How to create your own custom roles in Among Us with mods
    -How to join and host Among Us games with role mods
    -How to update your Among Us role mods to the latest version
    -How to fix common issues and bugs with Among Us role mods
    -How to uninstall or disable Among Us role mods
    -How to play Among Us role mods on mobile devices
    -How to play Among Us role mods with friends online
    -How to play Among Us role mods on different maps and settings
    -How to customize your Among Us role mods with hats, visors, and nameplates
    -How to download and install the Town of Us mod for Among Us
    -How to play as Mayor, Swapper, Lovers, and more in Among Us with the Town of Us mod
    -How to download and install the Mafia mod for Among Us
    -How to play as Godfather, Mafioso, Janitor, and more in Among Us with the Mafia mod
    -How to download and install the Proximity Chat mod for Among Us
    -How to play with voice chat in Among Us with the Proximity Chat mod
    -How to download and install the Better Crewlink mod for Among Us
    -How to improve your Proximity Chat experience in Among Us with the Better Crewlink mod
    -How to download and install the Impostor mod for Among Us
    -How to play as Spy, Snitch, Assassin, and more in Among Us with the Impostor mod
    -How to download and install the Extra Roles Plus mod for Among Us
    -How to play as Time Master, Hacker, Snitch, and more in Among Us with the Extra Roles Plus mod
    -How to download and install the Crewlink mod for Among Us
    -How to play with voice chat in Among Us with the Crewlink mod
    -How to download and install the The Other Roles mod for Among Us
    -How to play as Engineer, Medic, Officer, Joker, and more in Among Us with The Other Roles mod
    -How to download and install the Sheriff mod for Among Us
    -How to play as Sheriff in Among Us with the Sheriff mod
    -How to download and install the Jester mod for Among Us
    -How to play as Jester in Among Us with the Jester mod
    -How to download and install the Doctor mod for Among Us
    -How to play as Doctor in Among Us with the Doctor mod
    -How to download and install the Engineer mod for Among Us
    -How to play as Engineer in Among Us with the Engineer mod

    -
      -
    1. Go to the GitHub page of the mod creator Eisbison.
    2. -
    3. Select the most recent version of the mod from the Releases tab.
    4. -
    5. Download the .rar file of the mod.
    6. -
    7. Open the file with WinRar or another program that can extract compressed files.
    8. -
    9. You should see a folder called TheOtherRoles with several files inside.
10. 

The steps to install the mod in the game directory

      -

      Once you have the mod folder, you will need to copy it to the game directory. Here is how:

      -
        -
      1. Find the location of your Among Us game on your computer. It should be something like C:\Program Files (x86)\Steam\steamapps\common\Among Us.
      2. -
3. Open the game folder and check whether the BepInEx loader is already installed (you should see a BepInEx folder and its loader files). If you don't have it, you will need to download and install BepInEx, which is a framework that allows you to run mods for Among Us.
      4. -
      5. Copy the entire TheOtherRoles folder that you extracted from the .rar file and paste it into the game folder.
      6. -
      7. You should see a new folder called BepInEx with several subfolders and files inside.
      8. -
      -

      The steps to launch the mod and check if it is loaded

      -

      Now that you have installed the mod, you can launch the game and enjoy the new roles. To do so, follow these steps:

      -
        -
      1. Run the Among Us game from Steam or from your desktop shortcut.
      2. -
      3. On the main menu, you should see a message on the top left corner that says "TheOtherRoles v1.8.0 loaded". This means that the mod is working properly.
      4. -
      5. If you don't see the message, you might have to restart the game or check if you installed the mod correctly.
      6. -
      7. To access the mod settings, click on the gear icon on the bottom right corner of the screen and then click on TheOtherRoles tab.
      8. -
      9. Here you can adjust the settings for each role, such as the cooldown, duration, and probability. You can also enable or disable certain roles if you want.
      10. -
      11. To start a game with the mod, click on Online or Local and create or join a lobby as usual. The host of the lobby can choose which roles to include in the game from the Game tab.
      12. -
      -

      How to Play Among Us Roles Mod

      -

      The rules and settings for the mod

      -

      The mod follows the same rules and settings as the vanilla game, with some exceptions. Here are some of them:

      -
        -
      • The number of impostors can be 1, 2, or 3, depending on the lobby size and preference.
      • -
      • The number of crewmate roles can be 0, 1, 2, or 3, depending on the lobby size and preference.
      • -
      • The number of impostor roles can be 0, 1, or 2, depending on the lobby size and preference.
      • -
      • The crewmates win if they complete all their tasks or vote out all the impostors.
      • -
      • The impostors win if they kill enough crewmates or sabotage the ship successfully.
      • -
      • The roles are assigned randomly at the start of each game and are hidden from other players.
      • -
      • The roles have different abilities that they can use during the game, but they also have limitations and drawbacks.
      • -

      The tips and strategies for each role

      -

      Playing with the mod can be challenging and fun, but also requires some skills and tactics. Here are some tips and strategies for each role:

      -
        -
      • Crewmate: As a crewmate, you have to do your tasks and find the impostors. You can use the meeting button or report a body to discuss with other players and vote. You can also use the chat or voice chat to communicate and share information. You should be careful of impostors who might try to deceive you or kill you. You should also pay attention to the roles of other crewmates and how they can help you.
      • -
      • Impostor: As an impostor, you have to kill crewmates and sabotage the ship. You can use vents, fake tasks, and lie to blend in with the crew. You can also use your abilities to confuse or eliminate your enemies. You should be careful of crewmates who might suspect you or catch you in the act. You should also pay attention to the roles of other impostors and how they can help you.
      • -
      • Scientist: As a scientist, you can check the vitals of all players in the game and see when someone dies. You can use this information to find impostors or confirm alibis. You can also share this information with other crewmates during meetings or chats. You should be careful of impostors who might target you or discredit you. You should also pay attention to the roles of other scientists and how they can help you.
      • -
      • Engineer: As an engineer, you can use vents like impostors, but for a limited time and with a cooldown. You can use this ability to move around the map faster, escape from danger, or surprise your enemies. You can also use this ability to fix sabotages from anywhere on the map. You should be careful of impostors who might see you venting or accuse you of venting. You should also pay attention to the roles of other engineers and how they can help you.
      • -
      • Guardian Angel: As a guardian angel, you can shield and protect another player from being killed by impostors. You can use this ability to save your allies, bait your enemies, or test your suspicions. You can also use this ability to revive a dead player once per game. You should be careful of impostors who might kill you instead of your shielded player or expose your role. You should also pay attention to the roles of other guardian angels and how they can help you.
      • -
      • Shapeshifter: As a shapeshifter, you can transform into another player for a limited time. You can use this ability to impersonate your enemies, frame your allies, or create confusion. You can also use this ability to access restricted areas or blend in with the crowd. You should be careful of crewmates who might notice your transformation or recognize your original appearance. You should also pay attention to the roles of other shapeshifters and how they can help you.
      • -
      • Ghost: As a ghost, you are dead but not out of the game. You can chat with other ghosts and spectate the game. You can also do your tasks as a crewmate ghost or sabotage as an impostor ghost. You can use this opportunity to help your team win or have fun with other ghosts. You should be careful of revealing too much information or spoiling the game for others. You should also pay attention to the roles of other ghosts and how they can help you.
      • -
      -

      Conclusion

      -

      A summary of the main points and benefits of the mod

      -

      In conclusion, Among Us Roles Mod is a custom mod that adds seven new roles to the game, each with different abilities and mechanics. The mod allows you to customize the settings for each role, such as the cooldown, duration, and probability. The mod also follows the same rules and settings as the vanilla game, with some exceptions.

      -

      The mod is a great way to spice up your gameplay and try something new. The mod adds more variety, challenge, and fun to the game. The mod also enhances the social aspect of the game, as you have to communicate, cooperate, and deceive with other players.

      -

      A call to action to try the mod and have fun

      -

      If you are interested in playing with the mod, you can download it from the GitHub page of the mod creator Eisbison. You will need to install BepInEx framework first, which is a framework that allows you to run mods for Among Us. Then, you will need to copy the mod folder to the game directory and launch the game.

      -

      You can then adjust the mod settings for each role, such as the cooldown, duration, and probability. You can also choose which roles to include in the game from the Game tab. You can then create or join a lobby and start playing with the mod.

      -

      You will have a lot of fun playing with the mod, as you will experience new gameplay mechanics, abilities, and strategies. You will also interact with other players in different ways, depending on your role and theirs. You will never get bored of playing Among Us with the mod, as each game will be different and exciting.

      -

      So what are you waiting for? Download the mod now and enjoy playing Among Us Roles Mod with your friends or online. Have fun and good luck!

      -

      FAQs

      -

      Q1: Is Among Us Roles Mod safe to use?

      -

      A1: Yes, Among Us Roles Mod is safe to use, as long as you download it from the official GitHub page of the mod creator Eisbison. The mod does not contain any viruses or malware, and it does not affect your game files or performance. However, you should always be careful when downloading and installing any mods from the internet, and make sure you have a backup of your game in case something goes wrong.

      -

      Q2: Is Among Us Roles Mod compatible with other mods?

      -

      A2: Among Us Roles Mod is compatible with most other mods that use BepInEx framework, which is a framework that allows you to run mods for Among Us. However, some mods might conflict or interfere with each other, especially if they modify the same aspects of the game. Therefore, you should always check the compatibility and compatibility issues of the mods before using them together.

      -

      Q3: Can I play Among Us Roles Mod on mobile or Switch?

      -

A3: No, Among Us Roles Mod is only available for the PC version of the game. The mod requires the BepInEx framework, which only runs on the PC version of Among Us, not on the mobile or Switch versions. Therefore, you cannot play Among Us Roles Mod on mobile or Switch.

      -

      Q4: Where can I get support or report bugs for Among Us Roles Mod?

      -

      A4: If you need support or want to report bugs for Among Us Roles Mod, you can contact the mod creator Eisbison through his GitHub page. You can also join his Discord server, where you can chat with other players and get help from the mod team. You can also check the wiki page of the mod, where you can find more information and guides about the mod.

      -

      Q5: What are some other popular mods for Among Us?

      -

      A5: There are many other popular mods for Among Us that you can try and enjoy. Some of them are:

      -
        -
      • Town of Us: A mod that adds 17 new roles and 6 new modifiers to the game, inspired by Town of Salem.
      • -
      • The Other Roles: A mod that adds 15 new roles and 4 new modifiers to the game, inspired by The Other Roles Mod.
      • -
      • Extra Roles: A mod that adds 8 new roles and 2 new modifiers to the game, inspired by Extra Roles Mod.
      • -
      • Among Us Sheriff Mod: A mod that adds a sheriff role to the game, who can kill impostors or crewmates.
      • -
      • Among Us Proximity Chat: A mod that adds proximity voice chat to the game, where you can hear other players based on their distance from you.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download K Millians Nayo Nayo and Enjoy the Pop Hit.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download K Millians Nayo Nayo and Enjoy the Pop Hit.md deleted file mode 100644 index d025f97481d4e407d79ece580ea46823acfe2d06..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download K Millians Nayo Nayo and Enjoy the Pop Hit.md +++ /dev/null @@ -1,147 +0,0 @@ - -

      How to Download K Millian's Nayo Nayo Online

      -

      If you are a fan of Zambian music, you might have heard of K Millian's hit song \"Nayo Nayo\". This catchy tune is a blend of kalindula and afro-pop genres, with lyrics that express love and gratitude for a special person. In this article, we will show you how to download this song online, so you can enjoy it anytime and anywhere.

      -

      download k millian nayo nayo


      DOWNLOAD ••• https://ssurll.com/2uNU0j



      -

      What is Nayo Nayo?

      -

      Nayo Nayo is a song by Zambian singer-songwriter K Millian, featuring Chali 'Bravo' Mulalami. It was released in 2018 as part of K Millian's album \"Another Day\". The song is a love ballad that celebrates the joy of finding one's soulmate. The title \"Nayo Nayo\" means \"I have it\" in Bemba, one of the major languages in Zambia. The chorus goes like this:

      -
      -

      Niwe wandi wandi wandi wandi wandi wandi
      -You are mine mine mine mine mine mine
      -Nimwebo wandi wandi wandi wandi wandi
      -It is you mine mine mine mine mine
      -Nimwebo nayonayonayonayonayonayonayonayonayonayonayonayonayonayonayonayonayonayona
      -It is you I have I have I have I have I have I have I have I have I have I have I have I have I have I have I have I have

      -
      -

      The song has a catchy melody and a smooth rhythm that makes it easy to dance and sing along to.

      -

      Who is K Millian?

      -

      K Millian is one of the most popular and influential artists in Zambia. He was born in 1978 in Lusaka, the capital city of Zambia. He started his music career in 2000 with his debut album \"Voice Mail\", which featured songs like \"Kakabalika\" and \"Another Day\". Since then, he has released several albums and singles that have topped the Zambian music charts and earned him many awards and recognition. Some of his most famous songs include \"Pa Ulendo\", \"Umutima Wandi\", \"Nizakukonda\", and \"Nalila\". K Millian is known for his versatile and unique style, which incorporates elements of kalindula, afro-pop, r&b, gospel, and reggae. He is also a philanthropist and a social activist, who supports various causes such as education, health, and peace. K Millian is widely regarded as one of the best Zambian musicians of all time.

      -

      Why Download Music Online?

      -

      Downloading music online has many benefits over buying physical CDs or DVDs. Here are some of the reasons why you should download music online:

      -
        -
      • Convenience: You can download music online anytime and anywhere, as long as you have an internet connection and a device. You don't have to go to a store, wait in line, or deal with limited stock. You can also access your music library across different devices and platforms, and sync them with your cloud storage or streaming service.
      • Affordability: You can download music online for a fraction of the cost of buying physical copies. You can also save money on shipping fees, taxes, and storage space. You can also take advantage of discounts, promotions, and free downloads offered by various online music providers.
      • Quality: You can download music online in high-quality formats, such as MP3, WAV, FLAC, or AAC. You can also choose the bitrate and sample rate that suit your preferences and device capabilities. You can also enjoy better sound quality than CDs or DVDs, which can degrade over time or get scratched or damaged.

      Downloading music online is a great way to enjoy your favorite songs and discover new ones. However, you should also be aware of the legal and ethical issues involved in downloading music online.

      -

      How to Download Music Online Legally and Safely?

      -

      There are many ways to download music online, but not all of them are legal and safe. Some methods may expose you to viruses, malware, spyware, or identity theft. Some methods may also violate the intellectual property rights of the artists and producers, and result in fines or lawsuits. Therefore, you should always download music online from reputable and authorized sources. Here are some of the best methods to download music online legally and safely:

      -

      Buying from Digital Stores

      -

      One of the most common and reliable methods to download music online is to buy from digital stores. These are online platforms that sell digital copies of songs, albums, or playlists for a fixed price. You can pay with your credit card, debit card, PayPal, or other online payment methods. Once you buy the music, you can download it to your device or stream it online. Some of the most popular digital stores are:

      -

      iTunes

      -

      iTunes is one of the largest and most popular digital stores in the world. It offers millions of songs from various genres and artists. You can buy individual songs for $0.99 or $1.29, or albums for $9.99 or $11.99. You can also buy iTunes gift cards or redeem codes to purchase music. To download Nayo Nayo from iTunes, follow these steps:

      -

      download k millian nayo nayo mp3
      -download k millian nayo nayo audio
      -download k millian nayo nayo song
      -download k millian nayo nayo video
      -download k millian nayo nayo lyrics
      -download k millian nayo nayo music
      -download k millian nayo nayo official audio
      -download k millian nayo nayo youtube
      -download k millian nayo nayo shazam
      -download k millian nayo nayo album
      -download k millian and chali bravo mulalami nayo nayo
      -download k millian ft chali bravo mulalami nayo nayo
      -download k millian featuring chali bravo mulalami nayo nayo
      -download k millian another day album nayo nayo
      -download k millian another day song nayo nayo
      -download k millian another day mp3 nayo nayo
      -download k millian another day music video nayo nayo
      -download k millian another day lyrics video nayo nayo
      -download k millian the rockstar group artiste nayo nayo
      -download k millian the rockstar group song nayo nayo
      -download k millian the rockstar group mp3 nayo nayo
      -download k millian the rockstar group music video nayo nayo
      -download k millian the rockstar group lyrics video nayo nayo
      -download k millian zambian music artiste nayo nayo
      -download k millian zambian music song nayo nayo
      -download k millian zambian music mp3 nayo nayo
      -download k millian zambian music audio nayo nayo
      -download k millian zambian music video nayo nayo
      -download k millian zambian music lyrics video nayo nayo
      -download free k millian songs online including

      -
        -
1. Open iTunes on your computer or device.
2. Search for "Nayo Nayo" in the search bar.
3. Select the song by K Millian featuring Chali 'Bravo' Mulalami from the results.
4. Click on the price button to buy the song.
5. Enter your Apple ID and password to confirm your purchase.
6. The song will be added to your iTunes library and downloaded to your device.

      Google Play Music

      -

      Google Play Music is another popular digital store that offers millions of songs from various genres and artists. You can buy individual songs for $0.99 or $1.29, or albums for $9.49 or $10.49. You can also use Google Play gift cards or redeem codes to purchase music. To download Nayo Nayo from Google Play Music, follow these steps:

      -
        -
1. Open Google Play Music on your computer or device.
2. Search for "Nayo Nayo" in the search bar.
3. Select the song by K Millian featuring Chali 'Bravo' Mulalami from the results.
4. Click on the price button to buy the song.
5. Enter your Google account and password to confirm your purchase.
6. The song will be added to your Google Play Music library and downloaded to your device.

      Streaming from Music Platforms

      -

      Another common and convenient method to download music online is to stream from music platforms. These are online services that offer unlimited access to millions of songs from various genres and artists for a monthly or yearly subscription fee. You can stream the music online or download it to your device for offline listening. Some of the most popular music platforms are:

      -

      YouTube

      -

      YouTube is one of the largest and most popular music platforms in the world. It offers a vast collection of music videos, live performances, playlists, and channels from various genres and artists. You can stream the music online for free, or subscribe to YouTube Music or YouTube Premium for ad-free and offline access. To stream and download Nayo Nayo from YouTube, follow these steps:

      -
        -
1. Open YouTube on your computer or device.
2. Search for "Nayo Nayo" in the search bar.
3. Select the official music video by K Millian featuring Chali 'Bravo' Mulalami from the results.
4. Click on the play button to stream the song.
5. If you have a YouTube Music or YouTube Premium subscription, you can click on the download button to download the song to your device.
6. The song will be added to your YouTube library and downloaded to your device.

      SoundCloud

      -

      SoundCloud is another popular music platform that offers a diverse and unique selection of music from various genres and artists. It is especially known for its independent and emerging artists, who upload their original songs, remixes, covers, and podcasts. You can stream the music online for free, or subscribe to SoundCloud Go or SoundCloud Go+ for ad-free and offline access. To stream and download Nayo Nayo from SoundCloud, follow these steps:

      -
        -
1. Open SoundCloud on your computer or device.
2. Search for "Nayo Nayo" in the search bar.
3. Select the song by K Millian featuring Chali 'Bravo' Mulalami from the results.
4. Click on the play button to stream the song.
5. If you have a SoundCloud Go or SoundCloud Go+ subscription, you can click on the download button to download the song to your device.
6. The song will be added to your SoundCloud library and downloaded to your device.

      Using Free Music Download Sites

      -

      A third method to download music online is to use free music download sites. These are websites that offer free downloads of songs, albums, or playlists from various genres and artists. However, you should be careful when using these sites, as some of them may contain viruses, malware, spyware, or illegal content. You should also respect the intellectual property rights of the artists and producers, and only download music that is licensed under Creative Commons or other free licenses. Some of the best free music download sites are:

      -

      Bandcamp

      -

      Bandcamp is one of the best free music download sites that offers a wide range of music from various genres and artists. It is especially known for its independent and emerging artists, who sell their music directly to their fans. You can stream the music online for free, or buy it for a price set by the artist. Some artists also offer their music for free or pay-what-you-want downloads. To download Nayo Nayo from Bandcamp, follow these steps:

      -
        -
1. Open Bandcamp on your computer or device.
2. Search for "Nayo Nayo" in the search bar.
3. Select the song by K Millian featuring Chali 'Bravo' Mulalami from the results.
4. Click on the name of the song to go to its page.
5. If the song is available for free or pay-what-you-want download, you will see a "name your price" box. Enter zero or any amount you want to pay, and click on "download now".
6. If the song is not available for free or pay-what-you-want download, you will see a "buy digital track" button. Click on it and enter your payment details to buy the song.
7. The song will be added to your Bandcamp collection and downloaded to your device.

      DatPiff

      -

      DatPiff is one of the best free music download sites that offers a large collection of hip-hop, rap, r&b, and urban music from various genres and artists. It is especially known for its mixtapes, which are compilations of songs by different artists or DJs. You can stream the music online for free, or download it to your device for free or for a small fee. Some of the most popular mixtapes on DatPiff include Nayo Nayo by K Millian featuring Chali 'Bravo' Mulalami, which was released in 2018 and has over 10,000 downloads. To download Nayo Nayo from DatPiff, follow these steps:

      -
        -
1. Open DatPiff on your computer or device.
2. Search for "Nayo Nayo" in the search bar.
3. Select the mixtape by K Millian featuring Chali 'Bravo' Mulalami from the results.
4. Click on the download button to download the mixtape to your device.
5. The mixtape will be added to your DatPiff library and downloaded to your device.

      Free Music Archive

      -

      Free Music Archive is one of the best free music download sites that offers a curated and diverse selection of music from various genres and artists. It is especially known for its Creative Commons licensed music, which means you can download, share, and use the music for free, as long as you follow the terms of the license. You can stream the music online for free, or download it to your device for free or for a donation. To download Nayo Nayo from Free Music Archive, follow these steps:

      -
        -
1. Open Free Music Archive on your computer or device.
2. Search for "Nayo Nayo" in the search bar.
3. Select the song by K Millian featuring Chali 'Bravo' Mulalami from the results.
4. Click on the name of the song to go to its page.
5. If the song is available for free download, you will see a "download" button. Click on it and choose the format you want to download.
6. If the song is not available for free download, you will see a "buy" button. Click on it and enter your payment details to buy the song.
7. The song will be added to your Free Music Archive collection and downloaded to your device.

      Conclusion

      -

      Nayo Nayo is a beautiful and catchy song by K Millian featuring Chali 'Bravo' Mulalami. It is one of the most popular songs in Zambia and beyond. If you want to download this song online, you have many options to choose from. You can buy it from digital stores like iTunes or Google Play Music, stream it from music platforms like YouTube or SoundCloud, or use free music download sites like Bandcamp or DatPiff. Whatever method you choose, make sure you do it legally and safely. Enjoy listening to Nayo Nayo and share it with your friends!

      -

      FAQs

      -

      Here are some of the frequently asked questions and answers about downloading music online:

      -
        -
      • Q: Is downloading music online legal?
        A: Downloading music online is legal if you do it from authorized and reputable sources, such as digital stores, music platforms, or free music download sites that have permission from the artists and producers. However, downloading music online is illegal if you do it from unauthorized or pirated sources, such as torrent sites, file-sharing networks, or websites that offer illegal downloads. This may violate the intellectual property rights of the artists and producers, and result in fines or lawsuits.
      • Q: Is downloading music online safe?
        A: Downloading music online is safe if you do it from trusted and secure sources, such as digital stores, music platforms, or free music download sites that have protection from viruses, malware, spyware, or identity theft. However, downloading music online is unsafe if you do it from untrusted or insecure sources, such as torrent sites, file-sharing networks, or websites that have malicious content or links. This may expose you to viruses, malware, spyware, or identity theft.
      • Q: What are the best formats to download music online?
        A: The best formats to download music online depend on your preferences and device capabilities. Some of the most common formats are MP3, WAV, FLAC, or AAC. MP3 is a compressed format that reduces the file size but also lowers the sound quality. WAV is an uncompressed format that preserves the original sound quality but also increases the file size. FLAC is a lossless format that compresses the file size without losing any sound quality. AAC is a format that offers better sound quality than MP3 at similar file sizes.
      • Q: What are the best bitrates and sample rates to download music online?
  A: The best bitrates and sample rates to download music online depend on your preferences and device capabilities. Bitrate is the amount of data transferred per second, and sample rate is the number of times the sound wave is measured per second. Higher bitrates and sample rates offer better sound quality but larger file sizes; lower bitrates and sample rates give smaller files at lower quality. The standard bitrate and sample rate for CD quality are 1411 kbps and 44.1 kHz, respectively, while typical MP3 quality is 128 kbps at 44.1 kHz. A short worked example of how these numbers translate into file size follows this list.
      • Q: How can I transfer the music I downloaded online to other devices?
        A: You can transfer the music you downloaded online to other devices by using a USB cable, a Bluetooth connection, a Wi-Fi connection, or a cloud storage service. For example, if you want to transfer the music from your computer to your phone, you can connect them with a USB cable and copy the music files from one device to another. Alternatively, you can use a Bluetooth or Wi-Fi connection to send the music files wirelessly. You can also upload the music files to a cloud storage service like Google Drive or Dropbox, and then download them to your other device.
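As promised above, here is a minimal back-of-the-envelope sketch in Python showing how bitrate turns into file size. The four-minute track length and the three bitrates are just assumed examples, not properties of any particular download:

```python
# Rough size estimate: bitrate (kilobits per second) x duration -> megabytes
def estimated_size_mb(bitrate_kbps: float, duration_min: float) -> float:
    bits = bitrate_kbps * 1000 * duration_min * 60
    return bits / 8 / 1_000_000  # bits -> bytes -> megabytes

# Assumed example: a 4-minute song at common bitrates
for kbps in (128, 320, 1411):  # typical MP3, high-quality MP3, CD-quality WAV
    print(f"{kbps:>5} kbps -> about {estimated_size_mb(kbps, 4):.1f} MB")
```

Running it shows why a CD-quality rip of the same song is roughly ten times larger than a standard MP3.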

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download MGR Songs - The Best Collection of Tamil Melodies.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download MGR Songs - The Best Collection of Tamil Melodies.md deleted file mode 100644 index 937e29732c354cf5027890dda24fa07c2a3f797e..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download MGR Songs - The Best Collection of Tamil Melodies.md +++ /dev/null @@ -1,148 +0,0 @@ - -

      How to Download MGR Songs Online

      -

      If you are a fan of Tamil cinema and music, you must have heard of MGR. He was one of the most influential and charismatic personalities in the history of Tamil Nadu. He was not only an actor, but also a politician, a singer, and a cultural icon. His songs are still loved and cherished by millions of people across generations.

      -

      download mgr songs


      Downloadhttps://ssurll.com/2uO0mQ



      -

      Downloading MGR songs online is a great way to enjoy his melodious voice and inspiring lyrics. You can listen to his songs anytime, anywhere, and on any device. You can also create your own playlists, share them with your friends, and discover new songs.

      -

      However, downloading MGR songs online is not always easy. You may face some challenges such as finding the right website, choosing the best quality and format, ensuring a reliable and secure internet connection, and managing your downloads. In this article, we will guide you through these challenges and help you download MGR songs online with ease.

      -

      Best Websites to Download MGR Songs Online

      -

      There are many websites that offer MGR songs online. However, not all of them are reliable, safe, or legal. Some of them may have low-quality or corrupted files, malware or viruses, pop-up ads or redirects, or copyright issues. Therefore, you need to be careful when choosing a website to download MGR songs online.

      -

      Here are some of the best websites that we recommend for downloading MGR songs online:

      -

      download mgr songs free online
      -download mgr songs jiosaavn
      -download mgr songs mp3
      -download mgr songs zip file
      -download mgr songs internet archive
      -download mgr songs tamil
      -download mgr songs thathuva padalgal
      -download mgr songs best melodies
      -download mgr songs video
      -download mgr songs youtube
      -download mgr songs hd
      -download mgr songs old
      -download mgr songs hits
      -download mgr songs remix
      -download mgr songs gaana
      -download mgr songs masstamilan
      -download mgr songs starmusiq
      -download mgr songs isaimini
      -download mgr songs tamilwire
      -download mgr songs raaga
      -download mgr songs saavn
      -download mgr songs hungama
      -download mgr songs wynk
      -download mgr songs spotify
      -download mgr songs amazon music
      -download mgr songs apple music
      -download mgr songs deezer
      -download mgr songs tidal
      -download mgr songs soundcloud
      -download mgr songs bandcamp
      -download mgr songs audiomack
      -download mgr songs datpiff
      -download mgr songs mixcloud
      -download mgr songs reverbnation
      -download mgr songs last.fm
      -download mgr songs pandora
      -download mgr songs iheartradio
      -download mgr songs napster
      -download mgr songs shazam
      -download mgr songs tunein radio

      -

JioSaavn

      -

      JioSaavn is one of the most popular and trusted music streaming and downloading platforms in India. It has a huge collection of songs in various languages, genres, and moods. You can find MGR songs in Tamil, Telugu, Malayalam, Kannada, and Hindi on JioSaavn.

      -

      JioSaavn has many features and advantages that make it a great choice for downloading MGR songs online. Some of them are:

      -
        -
      • It has a user-friendly and attractive interface that allows you to browse, search, and play songs with ease.
      • It has a high-quality and diverse audio library that offers songs in different formats such as MP3, AAC, or FLAC.
      • It has a smart and personalized recommendation system that suggests songs based on your preferences, listening history, and mood.
      • It has a social and interactive feature that lets you share your songs, playlists, and podcasts with your friends and followers.
      • It has a premium subscription option that gives you access to unlimited downloads, ad-free music, offline listening, exclusive content, and more.

      To download MGR songs from JioSaavn, you need to follow these steps:

      -
        -
1. Download and install the JioSaavn app on your device or visit the JioSaavn website on your browser.
2. Create an account or log in with your existing account.
3. Search for MGR songs or browse the MGR category on the app or website.
4. Select the song that you want to download and tap on the download icon.
5. Choose the quality and format of the song that you want to download.
6. Wait for the download to complete and enjoy your MGR song offline.

      Internet Archive

      -

      Internet Archive is a non-profit digital library that preserves and provides access to millions of free books, movies, music, and more. It is a treasure trove of old and rare content that you may not find elsewhere. You can find MGR songs from various films, albums, and concerts on Internet Archive.

      -

      Internet Archive has many features and advantages that make it a great choice for downloading MGR songs online. Some of them are:

      -
        -
      • It has a simple and minimalist interface that allows you to explore, search, and download songs with ease.
      • It has a rich and diverse audio library that offers songs in different formats such as MP3, OGG, or WAV.
      • It has a historical and cultural value that showcases the legacy and impact of MGR on Tamil cinema and society.
      • It has a community and collaborative feature that lets you upload, review, and comment on songs and other content.
      • It has a free and open access policy that does not require any registration or subscription to download songs.

      To download MGR songs from Internet Archive, you need to follow these steps:

      -
        -
1. Visit the Internet Archive website on your browser.
2. Search for MGR songs or browse the MGR collection on the website.
3. Select the song that you want to download and click on the download options icon.
4. Choose the quality and format of the song that you want to download.
5. Wait for the download to complete and enjoy your MGR song offline.
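If you are comfortable with a little scripting, the Internet Archive can also be searched and downloaded from programmatically. The sketch below uses the third-party internetarchive Python package (pip install internetarchive); the search query and the item identifier are made-up placeholders, so treat it as a rough illustration rather than a guaranteed recipe, and only download items whose licenses allow it:

```python
from internetarchive import search_items, download

# List audio items whose metadata mentions MGR (archive.org search syntax)
for result in search_items('MGR AND mediatype:(audio)'):
    print(result['identifier'])

# Download only the MP3 files of one item into ./mgr_songs
# 'some-mgr-item' is a placeholder identifier -- replace it with a real one from the search above
download('some-mgr-item', glob_pattern='*.mp3', destdir='mgr_songs', verbose=True)
```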

      Other Websites

      -

      Besides JioSaavn and Internet Archive, there are some other websites that offer MGR songs online. However, they may not be as reliable, safe, or legal as the ones we have mentioned above. Therefore, you need to be cautious and vigilant when using them.

      -

      Some of the other websites that you can try to download MGR songs online are:

      -
        -
      • TamilTunes: This website has a large collection of Tamil songs, including MGR songs. You can download them in MP3 format for free. However, the website may have some pop-up ads and redirects that can be annoying or harmful.
      • Masstamilan: This website has a decent collection of Tamil songs, including MGR songs. You can download them in MP3 or FLAC format for free. However, the website may have some low-quality or broken links that can be frustrating or disappointing.
      • Isaimini: This website has a moderate collection of Tamil songs, including MGR songs. You can download them in MP3 format for free. However, the website may have some malware or viruses that can be dangerous or damaging.

      To download MGR songs from these websites, you need to follow these steps:

      -
        -
1. Visit the website on your browser.
2. Search for MGR songs or browse the MGR category on the website.
3. Select the song that you want to download and click on the download link.
4. Choose the quality and format of the song that you want to download.
5. Wait for the download to complete and enjoy your MGR song offline.

However, we advise you to use these websites at your own risk and discretion. We do not endorse or recommend them in any way. We also suggest checking the legality and safety of these websites before using them.

      Tips and Tricks to Download MGR Songs Online

      -

      Downloading MGR songs online can be a fun and rewarding experience. However, it can also be a challenging and frustrating one if you do not follow some tips and tricks. Here are some of the tips and tricks that we recommend for downloading MGR songs online:

      -

      Check the Quality and Format of the Songs

      -

      The quality and format of the songs that you download online can affect your listening experience. You want to download songs that have high-quality sound and are compatible with your device and preference. Therefore, you need to check and choose the best quality and format for your MGR songs online.

      -

      Some of the factors that you need to consider when checking and choosing the quality and format of the songs are:

      -
        -
      • The bitrate: This is the amount of data that is encoded in a song per second. It is measured in kilobits per second (kbps). The higher the bitrate, the better the sound quality. However, the higher the bitrate, the larger the file size. You need to balance between quality and size when choosing the bitrate.
      • The sample rate: This is the number of times that a song is sampled per second. It is measured in hertz (Hz). The higher the sample rate, the more accurate the sound reproduction. However, the higher the sample rate, the larger the file size. You need to balance between accuracy and size when choosing the sample rate.
      • The file format: This is the type of file that a song is stored in. There are many file formats available for songs, such as MP3, AAC, FLAC, OGG, or WAV. Each file format has its own advantages and disadvantages. You need to choose a file format that is compatible with your device and preference.

Generally, we suggest choosing a bitrate of at least 128 kbps, a sample rate of at least 44.1 kHz, and a file format of MP3 or AAC for your MGR songs. These are the most common and widely supported options that offer good quality at a reasonable size.
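If you want to double-check these properties after a download, a small script can read them straight from the file. This sketch assumes the third-party mutagen package (pip install mutagen) and a downloaded MP3 at a placeholder path:

```python
from mutagen.mp3 import MP3

audio = MP3("mgr_song.mp3")  # placeholder path -- point it at your own file
info = audio.info
print(f"Bitrate:     {info.bitrate // 1000} kbps")
print(f"Sample rate: {info.sample_rate} Hz")
print(f"Duration:    {info.length:.0f} seconds")
```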

      -

      Use a Reliable and Secure Internet Connection

      -

      The internet connection that you use to download MGR songs online can affect your downloading experience. You want to use a reliable and secure internet connection that can ensure fast and smooth downloads without interruptions or risks. Therefore, you need to use a reliable and secure internet connection for your MGR songs online.

      -

      Some of the factors that you need to consider when using an internet connection are:

      -
        -
      • The speed: This is the rate at which data is transferred over the internet. It is measured in megabits per second (Mbps). The higher the speed, the faster the downloads. However, the higher the speed, the more expensive the internet plan. You need to balance between speed and cost when choosing an internet plan.
      • The stability: This is the consistency and reliability of an internet connection. It is affected by factors such as network congestion, signal strength, weather conditions, or hardware issues. The more stable an internet connection, the less interruptions or errors in downloads. You need to choose an internet provider that offers a stable service.
      • The security: This is the protection and privacy of an internet connection. It is affected by factors such as encryption, authentication, firewall, or antivirus software. The more secure an internet connection, the less risks of malware, viruses, hacking, or identity theft in downloads. You need to use an internet connection that has adequate security measures.

Generally, we suggest using a broadband or fiber-optic internet connection with a speed of at least 10 Mbps, uptime of at least 99%, and at least WPA2 Wi-Fi security for downloading your MGR songs. These are common and widely available options that offer fast, smooth, and safe downloads.
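One quick way to sanity-check your effective download speed is to time a small transfer. The standard-library Python sketch below uses a placeholder URL; substitute any file you are entitled to fetch, and remember that a single small download only gives a rough estimate:

```python
import time
import urllib.request

url = "https://example.com/test-file.bin"  # placeholder test file
start = time.monotonic()
data = urllib.request.urlopen(url, timeout=30).read()
elapsed = time.monotonic() - start

mbps = len(data) * 8 / elapsed / 1_000_000  # bytes -> bits -> megabits per second
print(f"Fetched {len(data)} bytes in {elapsed:.1f} s (about {mbps:.2f} Mbps)")
```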

      Use a Download Manager or a Browser Extension

      -

      The download manager or browser extension that you use to download MGR songs online can affect your downloading experience. You want to use a download manager or browser extension that can enhance and optimize your downloads without complications or drawbacks. Therefore, you need to use a download manager or browser extension for your MGR songs online.

      -

      Some of the factors that you need to consider when using a download manager or browser extension are:

      -
        -
      • The functionality: This is the ability and performance of a download manager or browser extension. It is affected by factors such as features, compatibility, speed, resume, pause, or schedule options. The more functional a download manager or browser extension, the more efficient and convenient the downloads. You need to choose a download manager or browser extension that has the features and options that you need.
      • The usability: This is the ease and simplicity of using a download manager or browser extension. It is affected by factors such as interface, design, navigation, or customization options. The more usable a download manager or browser extension, the more enjoyable and satisfying the downloads. You need to choose a download manager or browser extension that has a user-friendly and attractive interface.
      • The reliability: This is the trustworthiness and safety of a download manager or browser extension. It is affected by factors such as reputation, reviews, ratings, or updates. The more reliable a download manager or browser extension, the more secure and risk-free the downloads. You need to choose a download manager or browser extension that has a good reputation and positive feedback.

Generally, we suggest using a download manager such as IDM, FDM, or JDownloader, or a browser extension such as Video DownloadHelper, SaveFrom.net Helper, or Flash Video Downloader for your MGR songs. These are some of the most popular and widely used options, offering functional, usable, and reliable downloads.
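Under the hood, the pause-and-resume feature of most download managers is essentially an HTTP range request. The sketch below shows the idea with the third-party requests package; the URL and filename are placeholders, it assumes the server supports ranges, and it should only be pointed at files you are allowed to download:

```python
import os
import requests

url = "https://example.com/song.mp3"  # placeholder URL
dest = "song.mp3"

# Resume from wherever a previous attempt stopped
pos = os.path.getsize(dest) if os.path.exists(dest) else 0
headers = {"Range": f"bytes={pos}-"} if pos else {}

with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
    resp.raise_for_status()
    # 206 means the server honoured the range; append instead of overwriting
    mode = "ab" if pos and resp.status_code == 206 else "wb"
    with open(dest, mode) as fh:
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            fh.write(chunk)

print(f"Saved {os.path.getsize(dest)} bytes to {dest}")
```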

      -

      Conclusion

      -

      Downloading MGR songs online is a great way to enjoy his melodious voice and inspiring lyrics. However, it can also be a challenging and frustrating one if you do not follow some tips and tricks. In this article, we have guided you through these tips and tricks and helped you download MGR songs online with ease.

      -

      We have shown you some of the best websites to download MGR songs online, such as JioSaavn and Internet Archive. We have also shown you some of the tips and tricks to download MGR songs online, such as checking the quality and format of the songs, using a reliable and secure internet connection, and using a download manager or browser extension.

      -

      We hope that this article has been helpful and informative for you. Now that you know how to download MGR songs online, why not give it a try? You can start by downloading some of his most famous songs such as Ulagam Sutrum Valiban, Adimai Penn, Enga Veetu Pillai, Anbe Vaa, or Ayirathil Oruvan. You will surely enjoy listening to them offline.

      -

      Thank you for reading this article. If you have any questions or feedback, please feel free to leave them in the comments section below. We would love to hear from you.

      -

      FAQs

      -
        -
1. Who is MGR?
   MGR is short for Marudhur Gopalan Ramachandran, a legendary actor, politician, and singer from Tamil Nadu, India.
2. What are the genres of MGR songs?
   MGR songs cover various genres such as melody, duet, thathuva, devotional, patriotic, and folk.
3. How many songs did MGR sing in his career?
   MGR sang over 500 songs in his career, mostly composed by MS Viswanathan, KV Mahadevan, Shankar Ganesh, and AR Rahman.
4. How can I listen to MGR songs online without downloading them?
   You can listen to MGR songs online without downloading them by using streaming services such as YouTube, Spotify, Gaana, or Wynk.
5. How can I download MGR songs for offline listening?
   You can download MGR songs for offline listening by using apps such as Vidmate, Snaptube, or Videoder.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/models/longformer/configuration_longformer.py b/spaces/skf15963/summary/fengshen/models/longformer/configuration_longformer.py deleted file mode 100644 index 14ad2b5557d4d0cd9d2397308b6a823c1789bb31..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/longformer/configuration_longformer.py +++ /dev/null @@ -1,16 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from transformers import LongformerConfig diff --git a/spaces/sklearn-docs/Gaussian-Classification-on-Iris/app.py b/spaces/sklearn-docs/Gaussian-Classification-on-Iris/app.py deleted file mode 100644 index a569cab01ca33adae886dacc556342a62cb40e7f..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Gaussian-Classification-on-Iris/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from sklearn import datasets -from sklearn.gaussian_process import GaussianProcessClassifier -from sklearn.gaussian_process.kernels import RBF -import gradio as gr - -def plot_decision_boundary(kernel_type): - iris = datasets.load_iris() - X = iris.data[:, :2] # we only take the first two features. - y = np.array(iris.target, dtype=int) - - h = 0.02 # step size in the mesh - - if kernel_type == "isotropic": - kernel = 1.0 * RBF([1.0]) - clf = GaussianProcessClassifier(kernel=kernel).fit(X, y) - elif kernel_type == "anisotropic": - kernel = 1.0 * RBF([1.0, 1.0]) - clf = GaussianProcessClassifier(kernel=kernel).fit(X, y) - else: - return None - - x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 - y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 - xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) - - Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()]) - Z = Z.reshape((xx.shape[0], xx.shape[1], 3)) - - plt.figure(figsize=(7, 5)) - plt.imshow(Z, extent=(x_min, x_max, y_min, y_max), origin="lower") - plt.scatter(X[:, 0], X[:, 1], c=np.array(["r", "g", "b"])[y], edgecolors=(0, 0, 0)) - plt.xlabel("Sepal length") - plt.ylabel("Sepal width") - plt.xlim(xx.min(), xx.max()) - plt.ylim(yy.min(), yy.max()) - plt.xticks(()) - plt.yticks(()) - plt.title("%s, LML: %.3f" % (kernel_type.capitalize(), clf.log_marginal_likelihood(clf.kernel_.theta))) - plt.tight_layout() - return plt - -kernel_select = gr.inputs.Radio(["isotropic", "anisotropic"], label="Kernel Type") -gr_interface = gr.Interface(fn=plot_decision_boundary, inputs=kernel_select, outputs="plot", title="Gaussian Process Classification on Iris Dataset", description="This example illustrates the predicted probability of GPC for an isotropic and anisotropic RBF kernel on a two-dimensional version for the iris-dataset. The anisotropic RBF kernel obtains slightly higher log-marginal-likelihood by assigning different length-scales to the two feature dimensions. 
See the original example at https://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpc_iris.html") -gr_interface.launch() diff --git a/spaces/sohojoe/soho-clip-embeddings-explorer/experimental/clip_api_app_client.py b/spaces/sohojoe/soho-clip-embeddings-explorer/experimental/clip_api_app_client.py deleted file mode 100644 index f067aca7e890b8fb2076feb995256ae29d9fba65..0000000000000000000000000000000000000000 --- a/spaces/sohojoe/soho-clip-embeddings-explorer/experimental/clip_api_app_client.py +++ /dev/null @@ -1,55 +0,0 @@ -import ray -from ray import serve -import time -import asyncio - -# Create a Semaphore object -semaphore = asyncio.Semaphore(10) - -test_image_url = "https://static.wixstatic.com/media/4d6b49_42b9435ce1104008b1b5f7a3c9bfcd69~mv2.jpg/v1/fill/w_454,h_333,fp_0.50_0.50,q_90/4d6b49_42b9435ce1104008b1b5f7a3c9bfcd69~mv2.jpg" -english_text = ( - "It was the best of times, it was the worst of times, it was the age " - "of wisdom, it was the age of foolishness, it was the epoch of belief" -) - -async def send_text_request(serve_client, number): - async with semaphore: - # async_handle = serve_client.get_handle("CLIPTransform", sync=False) - async_handle = serve.get_deployment("CLIPTransform").get_handle(sync=False) - # async_handle = serve.get_deployment("CLIPTransform").get_handle() - embeddings = ray.get(await async_handle.text_to_embeddings.remote(english_text)) - # embeddings = await async_handle.text_to_embeddings.remote(english_text) - # embeddings = async_handle.text_to_embeddings.remote(english_text) - # embeddings = await ray.get(embeddings) - return number, embeddings - -# def process_text(server_client, numbers, max_workers=10): -# with ThreadPoolExecutor(max_workers=max_workers) as executor: -# futures = [executor.submit(send_text_request, server_client, number) for number in numbers] -# for future in as_completed(futures): -# n_result, result = future.result() -# print (f"{n_result} : {len(result[0])}") -async def process_text(server_client, numbers): - tasks = [send_text_request(server_client, number) for number in numbers] - for future in asyncio.as_completed(tasks): - n_result, result = await future - print (f"{n_result} : {len(result[0])}") - -if __name__ == "__main__": - # n_calls = 100000 - n_calls = 1 - numbers = list(range(n_calls)) - ray.init() - server_client = serve.start(detached=True) - start_time = time.monotonic() - - # Run the async function - asyncio.run(process_text(server_client, numbers)) - - end_time = time.monotonic() - total_time = end_time - start_time - avg_time_ms = total_time / n_calls * 1000 - calls_per_sec = n_calls / total_time - print(f"Average time taken: {avg_time_ms:.2f} ms") - print(f"Number of calls per second: {calls_per_sec:.2f}") - ray.shutdown() diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/shorten_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/shorten_dataset.py deleted file mode 100644 index 6ebb5d88feb3f29d1512a0873df304915d051209..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/shorten_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -from fairseq.data import data_utils - -from . 
import BaseWrapperDataset - - -class TruncateDataset(BaseWrapperDataset): - """Truncate a sequence by returning the first truncation_length tokens""" - - def __init__(self, dataset, truncation_length): - super().__init__(dataset) - assert truncation_length is not None - self.truncation_length = truncation_length - self.dataset = dataset - - def __getitem__(self, index): - item = self.dataset[index] - item_len = item.size(0) - if item_len > self.truncation_length: - item = item[: self.truncation_length] - return item - - @property - def sizes(self): - return np.minimum(self.dataset.sizes, self.truncation_length) - - def __len__(self): - return len(self.dataset) - - -class RandomCropDataset(TruncateDataset): - """Truncate a sequence by returning a random crop of truncation_length tokens""" - - def __init__(self, dataset, truncation_length, seed=1): - super().__init__(dataset, truncation_length) - self.seed = seed - self.epoch = 0 - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True # only the crop changes, not item sizes - - def set_epoch(self, epoch, **unused): - super().set_epoch(epoch) - self.epoch = epoch - - def __getitem__(self, index): - with data_utils.numpy_seed(self.seed, self.epoch, index): - item = self.dataset[index] - item_len = item.size(0) - excess = item_len - self.truncation_length - if excess > 0: - start_idx = np.random.randint(0, excess) - item = item[start_idx : start_idx + self.truncation_length] - return item - - -def maybe_shorten_dataset( - dataset, - split, - shorten_data_split_list, - shorten_method, - tokens_per_sample, - seed, -): - truncate_split = ( - split in shorten_data_split_list.split(",") or len(shorten_data_split_list) == 0 - ) - if shorten_method == "truncate" and truncate_split: - dataset = TruncateDataset(dataset, tokens_per_sample) - elif shorten_method == "random_crop" and truncate_split: - dataset = RandomCropDataset(dataset, tokens_per_sample, seed) - return dataset diff --git a/spaces/srivarshan/argumentation-quality-analyzer/README.md b/spaces/srivarshan/argumentation-quality-analyzer/README.md deleted file mode 100644 index c6a9388ff2029e0f08f876a11e4f8faf1834775f..0000000000000000000000000000000000000000 --- a/spaces/srivarshan/argumentation-quality-analyzer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Arguementation Analyzer -emoji: 😻 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/stogaja/xpathfinder/README.md b/spaces/stogaja/xpathfinder/README.md deleted file mode 100644 index 609a2c1024283f0503e21d74f1eaf345e8a29717..0000000000000000000000000000000000000000 --- a/spaces/stogaja/xpathfinder/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Xpathfinder -emoji: 🌖 -colorFrom: yellow -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/stomexserde/gpt4-ui/Examples/Amintiri Din Copilarie 1964 Download Torent LINK.md b/spaces/stomexserde/gpt4-ui/Examples/Amintiri Din Copilarie 1964 Download Torent LINK.md deleted file mode 100644 index 262e243031f75a8e83457fcf20bbb5560a9499e5..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Amintiri Din Copilarie 1964 Download Torent LINK.md +++ /dev/null @@ -1,22 +0,0 @@ -
      -

      Amintiri Din Copilarie 1964: A Classic Romanian Film Based on Ion Creangă's Autobiographical Book

      - -

Amintiri Din Copilarie (Memories of Childhood) is a Romanian film directed by Elisabeta Bostan in 1964. It is based on the literary work of the same name by Ion Creangă, one of the most famous Romanian writers, who offers a detailed account of his childhood in 19th-century rural Moldavia.

      - -

      The film follows the adventures and mischiefs of Nică, a young boy played by Ion Bocancea, who lives with his parents Ștefan and Smaranda (Emanoil Petruț and Corina Constantinescu), his grandfather David (Nicolae Veniaș), and his aunt Mărioara (Eliza Petrăchescu) in Humulești village. Nică is a curious, lively, and rebellious child who often gets into trouble with his family, his teachers, and his peers. He also has a vivid imagination and a love for storytelling, inspired by the folk tales and legends he hears from his grandfather and other villagers.

      -

      Amintiri Din Copilarie 1964 Download Torent


      Downloadhttps://urlgoal.com/2uI9Iy



      - -

      The film depicts various episodes from Nică's childhood, such as his first day at school, his encounter with a bear, his friendship with a gypsy boy, his fight with a priest's son, his visit to his uncle's house in Broșteni, his participation in a village festival, and his initiation into manhood. The film also portrays the customs, traditions, and values of the rural Romanian society at that time, as well as the humor, wisdom, and morality of Ion Creangă's writing style.

      - -

      Amintiri Din Copilarie is considered a classic of Romanian cinema and one of the best films for children ever made. It received several awards at national and international festivals, such as the Golden Lion at the Venice Film Festival in 1965. It was also very popular among audiences, being seen by over 5 million viewers in Romania. The film was followed by a sequel in 1965, Pupăza Din Tei (The Hoopoe from the Linden Tree), also directed by Elisabeta Bostan and starring Ion Bocancea as Nică.

      - -

      If you want to watch this film online or download it as a torrent file, you can search for it on various websites that offer free streaming or downloading services. However, be careful about the quality and legality of these sources, as they may not respect the rights of the filmmakers and distributors. Alternatively, you can buy or rent the DVD or Blu-ray version of the film from authorized sellers or online platforms.

      - -

      Amintiri Din Copilarie is not only a film, but also a book that you can read and enjoy. The book was written by Ion Creangă between 1875 and 1889, and it consists of four parts that cover different stages of his life, from his birth to his adolescence. The book is considered a masterpiece of Romanian literature and one of the best examples of autobiographical fiction in the world. It combines realistic elements with fantastic ones, creating a rich and colorful portrait of the author and his environment.

      - -

      If you want to read the book online or download it as a PDF or EPUB file, you can find it on various websites that offer free access to public domain works. However, be aware that some of these websites may not have the best quality or accuracy of the text, as they may contain errors or omissions. Alternatively, you can buy or borrow the printed version of the book from bookstores or libraries.

      -

      - -

      Whether you watch the film or read the book, Amintiri Din Copilarie will surely captivate you with its charm, humor, and nostalgia. It will also teach you valuable lessons about life, friendship, family, and culture. It is a timeless and universal story that can be enjoyed by people of all ages and backgrounds.

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/ArcSoft ShowBiz 5.0.1.375 With Serial By KurdTM Keygen [VERIFIED].md b/spaces/stomexserde/gpt4-ui/Examples/ArcSoft ShowBiz 5.0.1.375 With Serial By KurdTM Keygen [VERIFIED].md deleted file mode 100644 index 81cde3af30587bcc3b5b18a7f7fefdbe080ae8d6..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/ArcSoft ShowBiz 5.0.1.375 With Serial By KurdTM Keygen [VERIFIED].md +++ /dev/null @@ -1,23 +0,0 @@ - -

      ArcSoft ShowBiz 5.0.1.375: A Powerful and Versatile Video Editor

      -

ArcSoft ShowBiz 5.0.1.375 is video editing software that lets you create and edit videos like a pro. Whether you want to import files from your media library, external devices, or 3D cameras, ShowBiz 5 has you covered. You can enhance your videos with effects, transitions, titles, narration, and more. You can also switch between Storyboard and Timeline modes to suit your editing style and preferences.

      -

      One of the features that makes ShowBiz 5 stand out is its support for 3D video creation and sharing. You can capture 3D images from various sources and edit them in ShowBiz 5 with ease. You can also upload your 3D videos to YouTube or export them as 3D files or discs.

      -

      ArcSoft ShowBiz 5.0.1.375 With Serial By KurdTM Keygen


      Download Filehttps://urlgoal.com/2uI96V



      -

      ShowBiz 5 also comes with a serial keygen by KurdTM that allows you to activate the software for free. KurdTM is a group of hackers that crack and distribute software for educational purposes only. They do not support piracy or illegal use of software.

      -

      If you are looking for a powerful and versatile video editor that can handle both 2D and 3D videos, you should give ArcSoft ShowBiz 5.0.1.375 a try. You can download a trial version from here or use the serial keygen by KurdTM to unlock the full version.

      - -

      Some of the integrated video editing tools that ShowBiz 5 offers are:

      -
        -
      • Anti-Shaking: This tool helps you stabilize your shaky videos and make them smoother.
      • Denoise: This tool reduces the noise level in your videos and improves their quality.
      • Rotate & Flip: This tool lets you correct the orientation of your videos and flip them horizontally or vertically.
      • Crop & Trim: This tool allows you to get rid of unwanted parts in your videos and adjust their duration.
      • Color Adjustment: This tool enables you to change the hue, saturation, brightness, and contrast of your videos and enhance their appearance.

      With ShowBiz 5, you can also add various effects to your videos, such as Fuzzyapse, fades, blurs, smoke, and occlusion filters. You can also add texts, transitions, titles, and credits to your videos and customize their font, size, color, and position. Moreover, you can add your favorite music or a voice-over to your videos and adjust their volume and speed.
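ShowBiz exposes all of this through its GUI, so no scripting is needed, but if you are curious what a crop-and-trim step looks like in general terms, here is a rough, unrelated sketch that drives the free ffmpeg tool from Python. ffmpeg must be installed separately; the filenames, duration, and crop geometry are made-up examples, and this is not how ShowBiz itself works internally:

```python
import subprocess

# Keep the first 30 seconds and crop to a 1280x720 window at the top-left corner.
# ffmpeg's crop filter takes width:height:x:y.
subprocess.run(
    [
        "ffmpeg",
        "-ss", "0", "-t", "30",        # trim: start at 0 s, keep 30 s
        "-i", "input.mp4",             # placeholder input file
        "-vf", "crop=1280:720:0:0",    # placeholder crop geometry
        "-c:a", "copy",                # pass the audio through unchanged
        "output.mp4",                  # placeholder output file
    ],
    check=True,
)
```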

      - -

      Another feature that ShowBiz 5 boasts is its social media integration. You can share your videos directly on YouTube or Facebook without leaving the program. You can also upload your photos to Flickr and Twitter with a few clicks. ShowBiz 5 makes it easy for you to showcase your creativity and connect with your friends and fans online.

      -

ShowBiz 5 has received positive reviews from many users and critics. They praised its user-friendly interface, its support for various formats and devices, its 3D video editing capabilities, and its affordable price. Some of the drawbacks they mentioned are its frequent crashes, its limited timeline controls, its lack of a screen-capture feature, and its poor customer service.

      -

      If you are interested in trying out ShowBiz 5 for yourself, you can download a free trial version from their official website. The trial version is valid for 15 days and has a watermark on the output videos. If you want to unlock the full version, you can use the serial keygen by KurdTM that is included in this article. However, we do not endorse or support piracy or illegal use of software. Please use ShowBiz 5 at your own risk and responsibility.

      cec2833e83
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Code Geass Mega 720p Mkv.md b/spaces/stomexserde/gpt4-ui/Examples/Code Geass Mega 720p Mkv.md deleted file mode 100644 index 16f10fe5f210b68000240cf202db44cef0687d95..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Code Geass Mega 720p Mkv.md +++ /dev/null @@ -1,25 +0,0 @@ -

      How to Download Code Geass Complete 720p Dual-Audio with English Subtitles

      -

      Code Geass is a popular anime series that follows the story of Lelouch Lamperouge, a former prince of the Britannian Empire who gains a mysterious power called Geass and leads a rebellion against his father's tyranny. The series has two seasons, each with 25 episodes, and is available in both Japanese and English audio with English subtitles.

      -

      Code Geass Mega 720p Mkv


      Download Ziphttps://urlgoal.com/2uI7En



      -

      If you are looking for a way to download Code Geass complete 720p dual-audio with English subtitles, you have come to the right place. In this article, we will show you how to use a torrent site called Nyaa to find and download the files you need. Nyaa is a website that hosts torrents for anime, manga, games, and other Japanese media. You can access it at https://nyaa.si.

      -

Before you proceed, you will need a torrent client, a program that can download and open torrent files. Some examples of torrent clients are uTorrent, BitTorrent, qBittorrent, and Transmission. You can download them from their official websites or from other sources. Make sure you have enough space on your device to store the downloaded files.

      -

      Here are the steps to download Code Geass complete 720p dual-audio with English subtitles from Nyaa:

      -
        -
1. Go to https://nyaa.si and type "Code Geass" in the search box. You can also use filters to narrow down your results by category, size, date, seeders, leechers, etc.
2. Look for the torrent that matches your criteria. For example, if you want to download Code Geass complete 720p dual-audio with English subtitles, you can look for a torrent that has "Complete 720p [Dual-Audio] [English Subbed]" in its title. You can also check the file list and the comments to see what is included in the torrent.
3. Click on the torrent title to open its page. You will see more information about the torrent, such as its description, file list, seeders, leechers, comments, etc. You will also see a button that says "Download Torrent" or "Magnet". Click on it to download the torrent file or copy the magnet link.
4. Open your torrent client and add the torrent file or the magnet link to start downloading. You can monitor the progress of your download and adjust the settings as you wish.
5. Once your download is complete, you can open the folder where the files are stored and enjoy watching Code Geass complete 720p dual-audio with English subtitles.

      We hope this article was helpful for you. If you have any questions or feedback, feel free to leave a comment below.


      Code Geass is not only a thrilling anime series that features action, drama, and romance, but also a thought-provoking one that explores themes such as morality, justice, loyalty, identity, and freedom. The series presents a complex and realistic world where different factions and ideologies clash and where the characters face difficult choices and consequences. The series also challenges the viewers to question their own beliefs and values and to empathize with different perspectives.

      -

      -

      The main character of Code Geass is Lelouch Lamperouge, a brilliant and charismatic young man who is also a former prince of the Britannian Empire. He is driven by his hatred for his father, the Emperor of Britannia, who he blames for the death of his mother and the disability of his sister. He also despises the oppression and discrimination that Britannia inflicts on the conquered nations, especially Japan, where he lives under a false identity. He vows to destroy Britannia and create a peaceful world for his sister.

      -

      Lelouch's life changes when he meets C.C., a mysterious girl who grants him the power of Geass, which allows him to command anyone to obey his orders. With this power, he becomes Zero, the leader of the Black Knights, a rebel group that fights against Britannia. He uses his intelligence, charisma, and strategy to outwit his enemies and gain allies. However, he also faces many challenges and obstacles, such as his best friend Suzaku Kururugi, who is a loyal soldier of Britannia; his half-siblings who are competing for the throne; and his own moral dilemmas and inner conflicts.

      -

      Code Geass is a captivating anime series that will keep you on the edge of your seat with its twists and turns. It will also make you think and feel with its deep and complex characters and themes. If you are looking for an anime that combines action, drama, romance, and philosophy, Code Geass is a great choice.

      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Department Marathi Movie VERIFIED Download Kickass.md b/spaces/stomexserde/gpt4-ui/Examples/Department Marathi Movie VERIFIED Download Kickass.md deleted file mode 100644 index 6fd94abb1a1a7e0f9dd6a765866d6e5a30a1006b..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Department Marathi Movie VERIFIED Download Kickass.md +++ /dev/null @@ -1,26 +0,0 @@ - -

      How to Download Department Marathi Movie from Kickass Torrents

      -

If you are looking for a way to download Department Marathi Movie, a 2012 action thriller film directed by Ram Gopal Varma, you might want to try Kickass Torrents, one of the most popular torrent search engines, which indexes millions of torrents. In this article, we will show you a simple guide to downloading movies from Kickass and how to convert downloaded movies to other formats for better compatibility.

      -

      Step 1: Visit the new and primary Kickass torrents site

      -

      The original KickassTorrents domain was taken offline and blocked by authorities in 2016, but it was brought back to life by former staff and moderators in December 2016. The new and primary Kickass torrents site is https://kickasstorrents.to/, where you can search for the resources you want without registration.

      -

      Department Marathi Movie Download Kickass


      Download Ziphttps://urlgoal.com/2uI7hw



      -

      Step 2: Search for Department Marathi Movie

      -

      In the search box, type in "Department Marathi Movie" and hit enter. You will get a list of great resources with different file sizes, descriptions, screenshots, etc. You can also click on the tags below the search box to filter the results by category, such as movies, music, games, etc.

      -

      Step 3: Choose a torrent file and download it

      -

Click on the torrent name that suits your preferences and needs. You will see more details about the torrent file, such as seeders, leechers, comments, ratings, etc. You can also read the comments to check if the torrent is safe and working. Below the title, you will see a magnet link and a torrent download button. You can either click on the magnet link to open it with your torrent client directly, or click on the download button to save the torrent file on your computer.

      -

      Step 4: Download and install a torrent client

      -

If you don't have a torrent client installed on your computer, you will need one to download movies from KickassTorrents. A torrent client is a program that allows you to download files from other users who are sharing them. Some of the popular torrent clients are BitTorrent, uTorrent, BitComet, etc. You can download and install any of them from their official websites.

      -

      Step 5: Open the torrent file or magnet link with your torrent client

      -

      Once you have downloaded the torrent file or copied the magnet link, you can open it with your torrent client. You will see a window where you can choose the files you want to download and the location where you want to save them. You can also adjust some settings such as bandwidth limit, download speed, etc. Click on OK or Start to begin downloading Department Marathi Movie.

      -

      Step 6: Wait for the download to finish and enjoy watching Department Marathi Movie

      -

      The download time may vary depending on your internet speed and the number of seeders and leechers. You can check the progress of your download on your torrent client. Once the download is complete, you can open the movie file with your preferred media player and enjoy watching Department Marathi Movie.

      -

      Note:

      -

If you are in a country with strict copyright protection, downloading copyrighted resources directly from the site may bring you serious consequences. WonderFox doesn't encourage downloading copyrighted material without the owner's permission, and this article is for personal fair use only. Download movies from Kickass Torrents at your own risk.

      -

      Bonus Tip:

      -

      If you want to convert Department Marathi Movie to other formats for better compatibility with your devices or players, you can use WonderFox HD Video Converter Factory Pro, a powerful and easy-to-use video converter that supports over 500 formats and devices. You can also use it to edit videos, compress videos, download videos from online sites, record screen, make GIFs, etc. Here are the simple steps to convert Department Marathi Movie with WonderFox HD Video Converter Factory Pro:

      -
        -
1. Download and install WonderFox HD Video Converter Factory Pro from https://www.videoconverterfactory.com/download/hd-video-converter-pro.exe
2. Launch the program and click on "Converter" on the

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/FS2004-FSX - TURBINE SOUND STUDIOS - MD-500 SOUNDPACK No Survey No Password 2019.md b/spaces/stomexserde/gpt4-ui/Examples/FS2004-FSX - TURBINE SOUND STUDIOS - MD-500 SOUNDPACK No Survey No Password 2019.md deleted file mode 100644 index aa5490531fdf9abaab52c2ea5e5710d175b2c322..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/FS2004-FSX - TURBINE SOUND STUDIOS - MD-500 SOUNDPACK No Survey No Password 2019.md +++ /dev/null @@ -1,64 +0,0 @@ - -

        FS2004-FSX - Turbine Sound Studios - MD-500 Soundpack: A Review

        -

        If you are a fan of flight simulators, you probably know about FS2004-FSX, one of the most popular and realistic flight simulation software available. Whether you prefer flying commercial airliners, military jets, or helicopters, FS2004-FSX has something for everyone. But if you want to take your flight simulation experience to the next level, you might want to consider adding some custom sound packs to your game.

        -

        FS2004-FSX - TURBINE SOUND STUDIOS - MD-500 SOUNDPACK No Survey No Password 2019


        DOWNLOADhttps://urlgoal.com/2uI9Jo



        -

        One of the best sources of high-quality sound packs for FS2004-FSX is Turbine Sound Studios, a company that specializes in creating realistic and immersive audio for various aircraft models. Their products are designed to enhance the realism and immersion of flying by providing accurate and dynamic sounds that match the performance and behavior of the real aircraft.

        -

        In this article, we will review one of their latest products, the MD-500 soundpack, which is compatible with both FS2004 and FSX. The MD-500 is a light utility helicopter that was developed from the Hughes 500, a civilian version of the US Army's OH-6A Cayuse/Loach. The MD-500 series includes several variants, such as the MD 500E, MD 520N, and MD 530F. The MD-500 is known for its distinctive egg-shaped fuselage, short-diameter main rotor system, and agile control response.

        -

        The MD-500 soundpack by Turbine Sound Studios aims to provide a realistic and immersive audio experience for flying this helicopter in FS2004-FSX. It features custom sounds for both internal and external views, as well as cockpit environment sounds, wind sounds, APU sounds, and more. The sounds are recorded from real MD-500 helicopters and edited to match the engine pitch values and acoustic parameters of the game.

        -

        In this review, we will cover the following aspects of the MD-500 soundpack:

        -
          -
• Installation and compatibility
• Sound quality and realism
• Performance and immersion
• Conclusion
• FAQs
        -

        Installation and Compatibility

        -

        The installation process of the MD-500 soundpack is fairly simple and straightforward. You can download the product from simMarket, where it is sold for €9.16 (about $11). After purchasing, you will receive an email with a link to download a ZIP file that contains two folders: one for FS2004 (FS9) and one for FSX.

        -

        -

To install the sound pack for FS2004, you need to copy the contents of the FS9 folder into your main FS9 folder, usually located at C:\Program Files\Microsoft Games\Flight Simulator 9. To install the sound pack for FSX, you need to copy the contents of the FSX folder into your main FSX folder, usually located at C:\Program Files\Microsoft Games\Microsoft Flight Simulator X. You can also use the installer.exe file provided in each folder to automate the installation process.
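If you prefer to script that copy step rather than dragging folders by hand, a few lines of Python can do it. This is only a rough sketch: the extracted-download path is a placeholder, the FS9 and FSX folder names follow the ZIP layout described above, and the destination paths are the default install locations, so adjust all three to your own setup (and run with permissions that allow writing to Program Files).

```python
import shutil
from pathlib import Path

# Assumed locations - adjust these to match your own setup.
SOUNDPACK_DIR = Path(r"C:\Downloads\TSS_MD500_Soundpack")  # placeholder: where you extracted the ZIP
FS9_DIR = Path(r"C:\Program Files\Microsoft Games\Flight Simulator 9")
FSX_DIR = Path(r"C:\Program Files\Microsoft Games\Microsoft Flight Simulator X")

def install(source: Path, destination: Path) -> None:
    """Copy the soundpack folder tree into the simulator folder, merging with what is already there."""
    if not destination.is_dir():
        raise FileNotFoundError(f"Simulator folder not found: {destination}")
    shutil.copytree(source, destination, dirs_exist_ok=True)

install(SOUNDPACK_DIR / "FS9", FS9_DIR)   # FS2004
install(SOUNDPACK_DIR / "FSX", FSX_DIR)   # FSX
```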

        -

        The MD-500 soundpack is compatible with any MD-500 helicopter add-on for FS2004-FSX, such as the Nemeth Designs MD 500 Defender or the Cera Sim MD 500E. However, you might need to edit the sound.cfg file of your helicopter add-on to point to the correct sound folder of the MD-500 soundpack. You can do this by opening the sound.cfg file with a text editor and changing the sound path to "sound=MD500". For example, if you have the Nemeth Designs MD 500 Defender installed in your FSX, you need to open the file C:\Program Files\Microsoft Games\Microsoft Flight Simulator X\SimObjects\Rotorcraft\ND_MD500D\sound.cfg and change the line "sound=ND_MD500D" to "sound=MD500".
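For reference, the edit is a single line in the add-on's sound.cfg. Below is a sketch of the before and after for the Nemeth Designs example above; the comment lines are annotations only, not part of the file.

```ini
; <FSX>\SimObjects\Rotorcraft\ND_MD500D\sound.cfg (illustrative excerpt)
; Before:
sound=ND_MD500D
; After - points the add-on at the TSS MD-500 sound folder:
sound=MD500
```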

        -

        After installing the sound pack, you can configure the sound settings and options in your FS2004-FSX game. You can adjust the volume levels of different sound categories, such as engine, cockpit, environment, and ATC. You can also enable or disable some sound effects, such as doppler shift, dynamic head latency, and reverse stereo. You can access these settings by clicking on Options > Settings > Sound in your game menu.

        -

        Sound Quality and Realism

        -

        The MD-500 soundpack by Turbine Sound Studios is a significant improvement over the default sounds of FS2004-FSX. The default sounds are generic and bland, and do not reflect the unique characteristics of the MD-500 helicopter. The MD-500 soundpack, on the other hand, provides custom sounds that are realistic and dynamic, and capture the essence of flying this helicopter.

        -

        The MD-500 soundpack features sounds for both internal and external views, as well as cockpit environment sounds, wind sounds, APU sounds, and more. The sounds are recorded from real MD-500 helicopters and edited to match the engine pitch values and acoustic parameters of the game. The result is a rich and immersive audio experience that enhances the realism and enjoyment of flying.

        -

        The internal sounds are crisp and clear, and convey the feeling of being inside the cockpit of the MD-500. You can hear the subtle nuances of the engine start-up and shut-down sequences, as well as the changes in engine RPM and torque as you manipulate the throttle and collective. You can also hear the cockpit environment sounds, such as switches, buttons, levers, gauges, alarms, radios, and more. These sounds add to the realism and immersion of flying.

        -

        The external sounds are loud and powerful, and reflect the distinctive sound signature of the MD-500 helicopter. You can hear the roar of the turbine engine and the whine of the transmission as you fly by. You can also hear the thump of the main rotor blades and the whirr of the tail rotor as they cut through the air. The external sounds are affected by distance, direction, speed, altitude, weather, and terrain. These factors create a dynamic and realistic sound environment that varies with your flight situation.

        -

        The MD-500 soundpack also features some special sound effects that add to the realism and immersion of flying. For example, you can hear a doppler effect when you fly past an object or another aircraft. You can also hear a dynamic head latency effect when you move your head inside or outside the cockpit. This effect simulates how sound travels differently through air or bone depending on where your ears are located. You can also hear a reverse stereo effect when you fly backwards or sideways. This effect simulates how sound sources switch sides depending on your flight direction.
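To get a rough feel for how audible that doppler shift is, you can plug typical numbers into the classical doppler formula for a moving source: f_obs = f_src * v_sound / (v_sound - v_src) for an approaching source and f_obs = f_src * v_sound / (v_sound + v_src) for a receding one. The figures below are illustrative physics only, not the simulator's actual audio model, and the 500 Hz tone and 60 m/s fly-by speed are assumed values.

```python
SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 degrees C

def doppler_shift(source_freq_hz: float, source_speed_ms: float, approaching: bool) -> float:
    """Frequency heard by a stationary observer for a sound source moving directly toward or away."""
    denom = SPEED_OF_SOUND - source_speed_ms if approaching else SPEED_OF_SOUND + source_speed_ms
    return source_freq_hz * SPEED_OF_SOUND / denom

# Assumed example: an MD-500 flying past at about 60 m/s (~117 knots), engine/rotor tone near 500 Hz.
approaching = doppler_shift(500.0, 60.0, approaching=True)   # ~606 Hz as it comes toward you
receding = doppler_shift(500.0, 60.0, approaching=False)     # ~426 Hz as it moves away
print(f"approaching: {approaching:.0f} Hz, receding: {receding:.0f} Hz")
```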

        -

        The MD-500 soundpack has some advantages and disadvantages compared to other sound packs or default sounds. Some of the advantages are:

        -
          -
• It provides custom sounds that are realistic and dynamic
• It captures the characteristics of the real MD-500 helicopter
• It enhances the realism and immersion of flying
• It is compatible with any MD-500 helicopter add-on for FS2004-FSX
• It is easy to install and configure
• It is affordable and worth its price
        -

        Some of the disadvantages are:

        -
          -
• It might require some editing of sound.cfg files for some helicopter add-ons
• It might affect performance or frame rate for some users with low-end systems
• It might not suit everyone's taste or preference
        -

        Performance and Immersion

        -

        The MD-500 soundpack by Turbine Sound Studios does not have a significant impact on the performance or frame rate of FS2004-FSX. The sound files are optimized and compressed to reduce the load on the system. However, some users with low-end systems might experience some stuttering or lagging when using the sound pack. This might be due to other factors, such as graphics settings, scenery complexity, traffic density, weather effects, and so on. To improve the performance and frame rate, you can try lowering some of these settings or disabling some of the sound effects in the game menu.

        -

        The MD-500 soundpack by Turbine Sound Studios greatly enhances the immersion and realism of flying the MD-500 helicopter in FS2004-FSX. The sound pack provides a realistic and dynamic audio experience that matches the performance and behavior of the real helicopter. The sound pack also creates a rich and immersive sound environment that varies with your flight situation. The sound pack makes you feel like you are really flying the MD-500 helicopter, not just playing a game.

        -

        To get the most out of the MD-500 soundpack, you can try some tips and tricks that will improve your flight simulation experience. For example, you can use a good headset or speakers to enjoy the high-quality sounds. You can also use a joystick or a yoke and pedals to control the helicopter more accurately and smoothly. You can also use a TrackIR or a VR headset to move your head freely and look around the cockpit and outside. You can also use some add-ons or mods that will enhance the graphics, scenery, weather, traffic, and realism of FS2004-FSX.

        -

        Conclusion

        -

        The MD-500 soundpack by Turbine Sound Studios is one of the best sound packs for FS2004-FSX. It provides custom sounds that are realistic and dynamic, and capture the characteristics of the real MD-500 helicopter. It enhances the realism and immersion of flying, and creates a rich and immersive sound environment that varies with your flight situation. It is compatible with any MD-500 helicopter add-on for FS2004-FSX, and it is easy to install and configure. It does not have a significant impact on performance or frame rate, but it might require some editing of sound.cfg files for some helicopter add-ons. It is affordable and worth its price, but it might not suit everyone's taste or preference.

        -

        In conclusion, I highly recommend the MD-500 soundpack by Turbine Sound Studios to anyone who loves flying helicopters in FS2004-FSX. It is one of the best products from Turbine Sound Studios, and it will make your flight simulation experience more realistic and enjoyable. If you are interested in buying or learning more about the MD-500 soundpack by Turbine Sound Studios, you can visit their website or their Facebook page. You can also watch some videos or read some reviews of the product online.

        -

        FAQs

        -

        Here are some common questions and answers about the MD-500 soundpack by Turbine Sound Studios:

        -

        Q: Does the MD-500 soundpack work with other helicopters or aircraft?

        -

        A: No, the MD-500 soundpack is designed specifically for the MD-500 helicopter series. It will not work with other helicopters or aircraft models.

        -

        Q: Does the MD-500 soundpack include any visual enhancements or liveries?

        -

        A: No, the MD-500 soundpack only includes audio enhancements. It does not include any visual enhancements or liveries for the MD-500 helicopter.

        -

        Q: How can I update or uninstall the MD-500 soundpack?

        -

        A: To update the MD-500 soundpack, you need to download the latest version from simMarket and install it over your existing installation. To uninstall the MD-500 soundpack, you need to delete the MD500 folder from your main FS2004-FSX folder.

        -

        Q: How can I contact Turbine Sound Studios for support or feedback?

        -

        A: You can contact Turbine Sound Studios by sending an email to turbinesoundstudios@gmail.com or by filling out their contact form. You can also follow them on Facebook or YouTube for updates and news.

        -

        Q: Where can I find more products from Turbine Sound Studios?

        -

A: You can find more products from Turbine Sound Studios on their website or on simMarket. They have sound packs for various aircraft models, such as Airbus, Boeing, Bombardier, Cessna, Embraer, Lockheed Martin, and more. They also have sound packs for helicopters, such as Bell, Eurocopter, Sikorsky, and more. You can also check out their YouTube channel for previews and demos of their products.

        -

        I hope you enjoyed this article and found it useful. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy flying!

        -
        -
        \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/metagpt/tools/search_engine_serper.py b/spaces/sub314xxl/MetaGPT/metagpt/tools/search_engine_serper.py deleted file mode 100644 index 0eec2694bf1aee218ab0e6138664c8edf8d8f1e2..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/tools/search_engine_serper.py +++ /dev/null @@ -1,119 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/23 18:27 -@Author : alexanderwu -@File : search_engine_serpapi.py -""" -import json -from typing import Any, Dict, Optional, Tuple - -import aiohttp -from pydantic import BaseModel, Field, validator - -from metagpt.config import CONFIG - - -class SerperWrapper(BaseModel): - search_engine: Any #: :meta private: - payload: dict = Field(default={"page": 1, "num": 10}) - serper_api_key: Optional[str] = None - aiosession: Optional[aiohttp.ClientSession] = None - - class Config: - arbitrary_types_allowed = True - - @validator("serper_api_key", always=True) - @classmethod - def check_serper_api_key(cls, val: str): - val = val or CONFIG.serper_api_key - if not val: - raise ValueError( - "To use, make sure you provide the serper_api_key when constructing an object. Alternatively, " - "ensure that the environment variable SERPER_API_KEY is set with your API key. You can obtain " - "an API key from https://serper.dev/." - ) - return val - - async def run(self, query: str, max_results: int = 8, as_string: bool = True, **kwargs: Any) -> str: - """Run query through Serper and parse result async.""" - if isinstance(query, str): - return self._process_response((await self.results([query], max_results))[0], as_string=as_string) - else: - results = [self._process_response(res, as_string) for res in await self.results(query, max_results)] - return "\n".join(results) if as_string else results - - async def results(self, queries: list[str], max_results: int = 8) -> dict: - """Use aiohttp to run query through Serper and return the results async.""" - - def construct_url_and_payload_and_headers() -> Tuple[str, Dict[str, str]]: - payloads = self.get_payloads(queries, max_results) - url = "https://google.serper.dev/search" - headers = self.get_headers() - return url, payloads, headers - - url, payloads, headers = construct_url_and_payload_and_headers() - if not self.aiosession: - async with aiohttp.ClientSession() as session: - async with session.post(url, data=payloads, headers=headers) as response: - res = await response.json() - else: - async with self.aiosession.get.post(url, data=payloads, headers=headers) as response: - res = await response.json() - - return res - - def get_payloads(self, queries: list[str], max_results: int) -> Dict[str, str]: - """Get payloads for Serper.""" - payloads = [] - for query in queries: - _payload = { - "q": query, - "num": max_results, - } - payloads.append({**self.payload, **_payload}) - return json.dumps(payloads, sort_keys=True) - - def get_headers(self) -> Dict[str, str]: - headers = {"X-API-KEY": self.serper_api_key, "Content-Type": "application/json"} - return headers - - @staticmethod - def _process_response(res: dict, as_string: bool = False) -> str: - """Process response from SerpAPI.""" - # logger.debug(res) - focus = ["title", "snippet", "link"] - - def get_focused(x): - return {i: j for i, j in x.items() if i in focus} - - if "error" in res.keys(): - raise ValueError(f"Got error from SerpAPI: {res['error']}") - if "answer_box" in res.keys() and "answer" in res["answer_box"].keys(): - toret = 
res["answer_box"]["answer"] - elif "answer_box" in res.keys() and "snippet" in res["answer_box"].keys(): - toret = res["answer_box"]["snippet"] - elif "answer_box" in res.keys() and "snippet_highlighted_words" in res["answer_box"].keys(): - toret = res["answer_box"]["snippet_highlighted_words"][0] - elif "sports_results" in res.keys() and "game_spotlight" in res["sports_results"].keys(): - toret = res["sports_results"]["game_spotlight"] - elif "knowledge_graph" in res.keys() and "description" in res["knowledge_graph"].keys(): - toret = res["knowledge_graph"]["description"] - elif "snippet" in res["organic"][0].keys(): - toret = res["organic"][0]["snippet"] - else: - toret = "No good search result found" - - toret_l = [] - if "answer_box" in res.keys() and "snippet" in res["answer_box"].keys(): - toret_l += [get_focused(res["answer_box"])] - if res.get("organic"): - toret_l += [get_focused(i) for i in res.get("organic")] - - return str(toret) + "\n" + str(toret_l) if as_string else toret_l - - -if __name__ == "__main__": - import fire - - fire.Fire(SerperWrapper().run) diff --git a/spaces/sub314xxl/MusicGen/audiocraft/quantization/core_vq.py b/spaces/sub314xxl/MusicGen/audiocraft/quantization/core_vq.py deleted file mode 100644 index e1896bb1788a945a1f7be6369abb255ecf72c7a0..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/audiocraft/quantization/core_vq.py +++ /dev/null @@ -1,400 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -from einops import rearrange, repeat -import flashy -import torch -from torch import nn, einsum -import torch.nn.functional as F - - -def exists(val: tp.Optional[tp.Any]) -> bool: - return val is not None - - -def default(val: tp.Any, d: tp.Any) -> tp.Any: - return val if exists(val) else d - - -def l2norm(t): - return F.normalize(t, p=2, dim=-1) - - -def ema_inplace(moving_avg, new, decay: float): - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - - -def laplace_smoothing(x, n_categories: int, epsilon: float = 1e-5): - return (x + epsilon) / (x.sum() + n_categories * epsilon) - - -def uniform_init(*shape: int): - t = torch.empty(shape) - nn.init.kaiming_uniform_(t) - return t - - -def sample_vectors(samples, num: int): - num_samples, device = samples.shape[0], samples.device - - if num_samples >= num: - indices = torch.randperm(num_samples, device=device)[:num] - else: - indices = torch.randint(0, num_samples, (num,), device=device) - - return samples[indices] - - -def kmeans(samples, num_clusters: int, num_iters: int = 10): - dim, dtype = samples.shape[-1], samples.dtype - - means = sample_vectors(samples, num_clusters) - - for _ in range(num_iters): - diffs = rearrange(samples, "n d -> n () d") - rearrange( - means, "c d -> () c d" - ) - dists = -(diffs ** 2).sum(dim=-1) - - buckets = dists.max(dim=-1).indices - bins = torch.bincount(buckets, minlength=num_clusters) - zero_mask = bins == 0 - bins_min_clamped = bins.masked_fill(zero_mask, 1) - - new_means = buckets.new_zeros(num_clusters, dim, dtype=dtype) - new_means.scatter_add_(0, repeat(buckets, "n -> n d", d=dim), samples) - new_means = new_means / bins_min_clamped[..., None] - - means = torch.where(zero_mask[..., None], means, new_means) - - return means, bins - - -def orthgonal_loss_fn(t): - # eq (2) from https://arxiv.org/abs/2112.00384 - n = t.shape[0] - normed_codes = l2norm(t) - 
identity = torch.eye(n, device=t.device) - cosine_sim = einsum("i d, j d -> i j", normed_codes, normed_codes) - return ((cosine_sim - identity) ** 2).sum() / (n ** 2) - - -class EuclideanCodebook(nn.Module): - """Codebook with Euclidean distance. - - Args: - dim (int): Dimension. - codebook_size (int): Codebook size. - kmeans_init (bool): Whether to use k-means to initialize the codebooks. - If set to true, run the k-means algorithm on the first training batch and use - the learned centroids as initialization. - kmeans_iters (int): Number of iterations used for k-means algorithm at initialization. - decay (float): Decay for exponential moving average over the codebooks. - epsilon (float): Epsilon value for numerical stability. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - """ - def __init__( - self, - dim: int, - codebook_size: int, - kmeans_init: int = False, - kmeans_iters: int = 10, - decay: float = 0.8, - epsilon: float = 1e-5, - threshold_ema_dead_code: int = 2, - ): - super().__init__() - self.decay = decay - init_fn: tp.Union[tp.Callable[..., torch.Tensor], tp.Any] = uniform_init if not kmeans_init else torch.zeros - embed = init_fn(codebook_size, dim) - - self.codebook_size = codebook_size - - self.kmeans_iters = kmeans_iters - self.epsilon = epsilon - self.threshold_ema_dead_code = threshold_ema_dead_code - - self.register_buffer("inited", torch.Tensor([not kmeans_init])) - self.register_buffer("cluster_size", torch.zeros(codebook_size)) - self.register_buffer("embed", embed) - self.register_buffer("embed_avg", embed.clone()) - - @torch.jit.ignore - def init_embed_(self, data): - if self.inited: - return - - embed, cluster_size = kmeans(data, self.codebook_size, self.kmeans_iters) - self.embed.data.copy_(embed) - self.embed_avg.data.copy_(embed.clone()) - self.cluster_size.data.copy_(cluster_size) - self.inited.data.copy_(torch.Tensor([True])) - # Make sure all buffers across workers are in sync after initialization - flashy.distrib.broadcast_tensors(self.buffers()) - - def replace_(self, samples, mask): - modified_codebook = torch.where( - mask[..., None], sample_vectors(samples, self.codebook_size), self.embed - ) - self.embed.data.copy_(modified_codebook) - - def expire_codes_(self, batch_samples): - if self.threshold_ema_dead_code == 0: - return - - expired_codes = self.cluster_size < self.threshold_ema_dead_code - if not torch.any(expired_codes): - return - - batch_samples = rearrange(batch_samples, "... d -> (...) d") - self.replace_(batch_samples, mask=expired_codes) - flashy.distrib.broadcast_tensors(self.buffers()) - - def preprocess(self, x): - x = rearrange(x, "... d -> (...) 
d") - return x - - def quantize(self, x): - embed = self.embed.t() - dist = -( - x.pow(2).sum(1, keepdim=True) - - 2 * x @ embed - + embed.pow(2).sum(0, keepdim=True) - ) - embed_ind = dist.max(dim=-1).indices - return embed_ind - - def postprocess_emb(self, embed_ind, shape): - return embed_ind.view(*shape[:-1]) - - def dequantize(self, embed_ind): - quantize = F.embedding(embed_ind, self.embed) - return quantize - - def encode(self, x): - shape = x.shape - # pre-process - x = self.preprocess(x) - # quantize - embed_ind = self.quantize(x) - # post-process - embed_ind = self.postprocess_emb(embed_ind, shape) - return embed_ind - - def decode(self, embed_ind): - quantize = self.dequantize(embed_ind) - return quantize - - def forward(self, x): - shape, dtype = x.shape, x.dtype - x = self.preprocess(x) - self.init_embed_(x) - - embed_ind = self.quantize(x) - embed_onehot = F.one_hot(embed_ind, self.codebook_size).type(dtype) - embed_ind = self.postprocess_emb(embed_ind, shape) - quantize = self.dequantize(embed_ind) - - if self.training: - # We do the expiry of code at that point as buffers are in sync - # and all the workers will take the same decision. - self.expire_codes_(x) - ema_inplace(self.cluster_size, embed_onehot.sum(0), self.decay) - embed_sum = x.t() @ embed_onehot - ema_inplace(self.embed_avg, embed_sum.t(), self.decay) - cluster_size = ( - laplace_smoothing(self.cluster_size, self.codebook_size, self.epsilon) - * self.cluster_size.sum() - ) - embed_normalized = self.embed_avg / cluster_size.unsqueeze(1) - self.embed.data.copy_(embed_normalized) - - return quantize, embed_ind - - -class VectorQuantization(nn.Module): - """Vector quantization implementation. - Currently supports only euclidean distance. - - Args: - dim (int): Dimension - codebook_size (int): Codebook size - codebook_dim (int): Codebook dimension. If not defined, uses the specified dimension in dim. - decay (float): Decay for exponential moving average over the codebooks. - epsilon (float): Epsilon value for numerical stability. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): - channels_last (bool): Channels are the last dimension in the input tensors. - commitment_weight (float): Weight for commitment loss. - orthogonal_reg_weight (float): Orthogonal regularization weights. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider - for orthogonal regulariation. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. 
- """ - def __init__( - self, - dim: int, - codebook_size: int, - codebook_dim: tp.Optional[int] = None, - decay: float = 0.8, - epsilon: float = 1e-5, - kmeans_init: bool = False, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - channels_last: bool = False, - commitment_weight: float = 1., - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - _codebook_dim: int = default(codebook_dim, dim) - - requires_projection = _codebook_dim != dim - self.project_in = (nn.Linear(dim, _codebook_dim) if requires_projection else nn.Identity()) - self.project_out = (nn.Linear(_codebook_dim, dim) if requires_projection else nn.Identity()) - - self.epsilon = epsilon - self.commitment_weight = commitment_weight - - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - - self._codebook = EuclideanCodebook(dim=_codebook_dim, codebook_size=codebook_size, - kmeans_init=kmeans_init, kmeans_iters=kmeans_iters, - decay=decay, epsilon=epsilon, - threshold_ema_dead_code=threshold_ema_dead_code) - self.codebook_size = codebook_size - - self.channels_last = channels_last - - @property - def codebook(self): - return self._codebook.embed - - @property - def inited(self): - return self._codebook.inited - - def _preprocess(self, x): - if not self.channels_last: - x = rearrange(x, "b d n -> b n d") - return x - - def _postprocess(self, quantize): - if not self.channels_last: - quantize = rearrange(quantize, "b n d -> b d n") - return quantize - - def encode(self, x): - x = self._preprocess(x) - x = self.project_in(x) - embed_in = self._codebook.encode(x) - return embed_in - - def decode(self, embed_ind): - quantize = self._codebook.decode(embed_ind) - quantize = self.project_out(quantize) - quantize = self._postprocess(quantize) - return quantize - - def forward(self, x): - device = x.device - x = self._preprocess(x) - - x = self.project_in(x) - quantize, embed_ind = self._codebook(x) - - if self.training: - quantize = x + (quantize - x).detach() - - loss = torch.tensor([0.0], device=device, requires_grad=self.training) - - if self.training: - if self.commitment_weight > 0: - commit_loss = F.mse_loss(quantize.detach(), x) - loss = loss + commit_loss * self.commitment_weight - - if self.orthogonal_reg_weight > 0: - codebook = self.codebook - - if self.orthogonal_reg_active_codes_only: - # only calculate orthogonal loss for the activated codes for this batch - unique_code_ids = torch.unique(embed_ind) - codebook = codebook[unique_code_ids] - - num_codes = codebook.shape[0] - if exists(self.orthogonal_reg_max_codes) and num_codes > self.orthogonal_reg_max_codes: - rand_ids = torch.randperm(num_codes, device=device)[:self.orthogonal_reg_max_codes] - codebook = codebook[rand_ids] - - orthogonal_reg_loss = orthgonal_loss_fn(codebook) - loss = loss + orthogonal_reg_loss * self.orthogonal_reg_weight - - quantize = self.project_out(quantize) - quantize = self._postprocess(quantize) - - return quantize, embed_ind, loss - - -class ResidualVectorQuantization(nn.Module): - """Residual vector quantization implementation. - - Follows Algorithm 1. 
in https://arxiv.org/pdf/2107.03312.pdf - """ - def __init__(self, *, num_quantizers, **kwargs): - super().__init__() - self.layers = nn.ModuleList( - [VectorQuantization(**kwargs) for _ in range(num_quantizers)] - ) - - def forward(self, x, n_q: tp.Optional[int] = None): - quantized_out = 0.0 - residual = x - - all_losses = [] - all_indices = [] - - n_q = n_q or len(self.layers) - - for i, layer in enumerate(self.layers[:n_q]): - quantized, indices, loss = layer(residual) - residual = residual - quantized - quantized_out = quantized_out + quantized - all_indices.append(indices) - all_losses.append(loss) - - out_losses, out_indices = map(torch.stack, (all_losses, all_indices)) - return quantized_out, out_indices, out_losses - - def encode(self, x: torch.Tensor, n_q: tp.Optional[int] = None) -> torch.Tensor: - residual = x - all_indices = [] - n_q = n_q or len(self.layers) - for layer in self.layers[:n_q]: - indices = layer.encode(residual) - quantized = layer.decode(indices) - residual = residual - quantized - all_indices.append(indices) - out_indices = torch.stack(all_indices) - return out_indices - - def decode(self, q_indices: torch.Tensor) -> torch.Tensor: - quantized_out = torch.tensor(0.0, device=q_indices.device) - for i, indices in enumerate(q_indices): - layer = self.layers[i] - quantized = layer.decode(indices) - quantized_out = quantized_out + quantized - return quantized_out diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/A To Z Odia Film Download !EXCLUSIVE!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/A To Z Odia Film Download !EXCLUSIVE!.md deleted file mode 100644 index 2ff5a3c8e0fc6a39c5458de7d1a465e5fc2fc7bf..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/A To Z Odia Film Download !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        a to z odia film download


        Download >>> https://cinurl.com/2uEZ2F



        -
        -Odia Movie SongnOdia Movie Song - BuA. Odia Movie SongnOdia Movie Song - BuO.Odia Movie SongnOdia Movie Song - ChA.Odia Movie SongnOdia Movie Song - ChB.Odia Movie SongnOdia Movie Song - ChC.Odia Movie SongnOdia Movie Song - ChD.Odia Movie SongnOdia Movie Song - ChE.Odia Movie SongnOdia Movie Song - ChF.Odia Movie SongnOdia Movie Song - ChG.Odia Movie SongnOdia Movie Song - ChH.Odia Movie SongnOdia Movie Song - ChI.Odia Movie SongnOdia Movie Song - ChJ.Odia Movie SongnOdia Movie Song - ChK.Odia Movie SongnOdia Movie Song - ChL.Odia Movie SongnOdia Movie Song - ChM.Odia Movie SongnOdia Movie Song - ChN.Odia Movie SongnOdia Movie Song - ChO.Odia Movie SongnOdia Movie Song - ChP.Odia Movie SongnOdia Movie Song - ChQ.Odia Movie SongnOdia Movie Song - ChR.Odia Movie SongnOdia Movie Song - ChS.Odia Movie SongnOdia Movie Song - ChT.Odia Movie SongnOdia Movie Song - ChU.Odia Movie SongnOdia Movie Song - ChV.Odia Movie SongnOdia Movie Song - ChW.Odia Movie SongnOdia Movie Song - ChX.Odia Movie SongnOdia Movie Song - ChY.Odia Movie SongnOdia Movie Song - ChZ.Odia Movie SongnOdia Movie Song - Cha.Odia Movie SongnOdia Movie Song - ChaA.Odia Movie SongnOdia Movie Song - ChaB.Odia Movie SongnOdia Movie Song - ChaC.Odia Movie SongnOdia Movie Song - ChaD.Odia Movie SongnOdia Movie Song - ChaE.Odia 4fefd39f24
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Free [VERIFIED] Quickbooks Pro 2013 Validation Code Hit.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Free [VERIFIED] Quickbooks Pro 2013 Validation Code Hit.md deleted file mode 100644 index 22241127ebcf94b750223e9f497fb51184e08451..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Free [VERIFIED] Quickbooks Pro 2013 Validation Code Hit.md +++ /dev/null @@ -1,6 +0,0 @@ -

        free quickbooks pro 2013 validation code hit


        DOWNLOAD »»» https://cinurl.com/2uEXBj



        -
-QuickBooks Pro was also developed by Intuit, providing financial software that helps small business owners store their important information. QuickBooks Pro can be configured to work with the users you want to deal with and to store the information you need. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gta Vice City Downgrade.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gta Vice City Downgrade.md deleted file mode 100644 index 42902b215cfa3592ee15f0f9be66a328935a4081..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gta Vice City Downgrade.md +++ /dev/null @@ -1,12 +0,0 @@ - -

        There is no way to downgrade Vice City for PC. The fix is to downgrade San Andreas instead, then install Vice City. Then you install the latest patch. Vice City's patch was just 1.05. It is the same patch in the San Andreas version though.

        -

        Gta Vice City Downgrade


        DOWNLOADhttps://cinurl.com/2uEY5G



        -

        If you want to make a playable copy of Vice City then you have to actually do the "downgrade" from a 1.0 version or some other version to 1.05 version of the game. It's the only way you can have more features and better controls (use a gamepad)

        -

        Just wanted to throw in one last little "Hint" for everyone to know: As of now, I have not been able to get the 'SBC for Vice City' patch to work. It always throws an 'err 6' error on patching 'com.rockstargames.gdxtoolkit.GameController'.

        -

        I've seen the 'SBC for Vice City' crack. It crashes every time I use it, usually in the loading screen before I get to the main menu. This is the reason I haven't yet been able to test it out with any real game.

        -

        Once there is a need to keep a device in a particular room or a house, it will be better to get the device installed in a close by location. With the help of CE shrink wrap machines, there is no need to keep the product in a different location.

        -

        Neon bikes are the most useful and best modes of transport. They are easy to drive and also carrying capacity is also high. People often prefer to use them and unlike in the earlier GTA games in which all driving games do not have the ability to use the riding game type, this aspect is available in GTAs San Andreas.

        -

        -

        First of all, I'm not exactly sure if there is an in-game character editor for the Grand Theft Auto series. There could be since Rockstar has released tools for major games like Grand Theft Auto: San Andreas and Grand Theft Auto: San Andreas Epilogue. However, there are no tools for Grand Theft Auto: Vice City for PS3. Only for PSP, PS2, and Xbox. Then again, we don't know if the character editor from the other mentioned games is compatible with GTA: Vice City.

        -
        -
        \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Monede Si Bancnote Romanesti George Buzdugan Pdf 11.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Monede Si Bancnote Romanesti George Buzdugan Pdf 11.md deleted file mode 100644 index 9725fe6ebf20760a88d4d0f4b80c8b1226c27ff9..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Monede Si Bancnote Romanesti George Buzdugan Pdf 11.md +++ /dev/null @@ -1,44 +0,0 @@ - -

        Monede Si Bancnote Romanesti George Buzdugan Pdf 11: A Comprehensive Guide to Romanian Coins and Banknotes

        - -

        If you are interested in the history and culture of Romania, you might want to learn more about its coins and banknotes. One of the best sources for this topic is the book "Monede Si Bancnote Romanesti" by George Buzdugan, a renowned numismatist and historian. This book is a free ebook that you can download as a PDF file from Scribd[^1^]. It contains 497 pages of detailed information and images of Romanian coins and banknotes from ancient times to modern days.

        - -

        In this article, we will give you an overview of what you can find in this book, and why it is a valuable resource for anyone who wants to explore the rich and diverse heritage of Romania through its money.

        -

        Monede Si Bancnote Romanesti George Buzdugan Pdf 11


        DOWNLOADhttps://cinurl.com/2uEXU1



        - -

        What is "Monede Si Bancnote Romanesti"?

        - -

        "Monede Si Bancnote Romanesti" (Romanian Coins and Banknotes) is a book written by George Buzdugan, a famous Romanian numismatist and historian. He was born in 1924 and died in 2010. He dedicated his life to the study and collection of Romanian coins and banknotes, as well as other related topics such as medals, orders, decorations, stamps, postcards, etc. He published many books and articles on these subjects, and was awarded several honors and distinctions for his contributions to Romanian culture and science.

        - -

        The book "Monede Si Bancnote Romanesti" is one of his most comprehensive and authoritative works. It was first published in 1977, in collaboration with Octavian Luchian and Constantin C. Oprescu, two other prominent numismatists. The book covers the history of Romanian money from the earliest times to the present day, with detailed descriptions and illustrations of each coin and banknote issued by various rulers, states, regions, or institutions. The book also includes information on the historical context, the minting techniques, the symbols, the legends, the values, the circulation, the rarity, and the collectors' market of each piece.

        - -

        The book is divided into four main parts:

        - -
          -
• Part I: Ancient Coins (from the 6th century BC to the 3rd century AD)
• Part II: Medieval Coins (from the 10th century to the 19th century)
• Part III: Modern Coins (from the 19th century to the 20th century)
• Part IV: Banknotes (from the 18th century to the 20th century)
        - -

        Each part is further subdivided into chapters according to chronological periods or geographical areas. The book also has an introduction, a bibliography, an index, and several appendices.

        - -

        Why should you read "Monede Si Bancnote Romanesti"?

        - -

        "Monede Si Bancnote Romanesti" is not only a book for numismatists or collectors. It is also a book for anyone who wants to learn more about Romania's history and culture through its money. By reading this book, you will discover:

        - -
          -
• The origins and evolution of Romanian money, from ancient times to modern days
• The political and social changes that influenced the design and production of Romanian coins and banknotes
• The artistic and technical aspects of Romanian money, such as styles, motifs, inscriptions, metals, colors, sizes, shapes, etc.
• The economic and cultural values of Romanian money, such as exchange rates, purchasing power, inflation, deflation, symbolism, propaganda, etc.
• The diversity and uniqueness of Romanian money, such as regional variations, local issues, emergency issues, commemorative issues, etc.
• The challenges and opportunities of collecting Romanian coins and banknotes
        - -

        "Monede Si Bancnote Romanesti" is a book that will enrich your knowledge and appreciation of Romania's heritage. It will also inspire you to explore more aspects of Romania's history and culture through its money.

        - -

        How can you download "Monede Si Bancnote Romanesti

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/TheHunter Call Of The Wild - Vurhonga Savanna 32 Bit Crack Extra Quality.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/TheHunter Call Of The Wild - Vurhonga Savanna 32 Bit Crack Extra Quality.md deleted file mode 100644 index aa1fe469f03dafe6060988aa6c22629110989958..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/TheHunter Call Of The Wild - Vurhonga Savanna 32 Bit Crack Extra Quality.md +++ /dev/null @@ -1,13 +0,0 @@ -

        theHunter : Call of the Wild - Vurhonga Savanna 32 bit crack


        DOWNLOADhttps://cinurl.com/2uEY6R



        -
        -I recommend Secret Lab! ▻ Want to see or get what I use on PC and Gear? . ▻ Buy games and other products on Steam with a discount in my store ▻▻▻▻▻ (Discount from me - enter in the cart -Hello everyone, I haven't been here for a long time. -I'm back and I continue to play and make videos, according to your requests. -But what if you want to watch the video? -I will tell you how you can do it. -I am doing this to make it easier for you. -Here is a link to my website, where I am now taking my first steps, where I will post videos and screenshots: www.game-on.com.ua/sect -Yes, it's true. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/surya12003/suryabot1/README.md b/spaces/surya12003/suryabot1/README.md deleted file mode 100644 index b52015a4c0e80473a51e50878b42cb33408b31cf..0000000000000000000000000000000000000000 --- a/spaces/surya12003/suryabot1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Suryabot1 -emoji: 🐢 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/surya12003/suryabot1/app.py b/spaces/surya12003/suryabot1/app.py deleted file mode 100644 index 2dbf3ae89c2e3fdab7134107dd346f984dca8eb1..0000000000000000000000000000000000000000 --- a/spaces/surya12003/suryabot1/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/losses/dice_loss.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/losses/dice_loss.py deleted file mode 100644 index 27a77b962d7d8b3079c7d6cd9db52280c6fb4970..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/losses/dice_loss.py +++ /dev/null @@ -1,119 +0,0 @@ -"""Modified from https://github.com/LikeLy-Journey/SegmenTron/blob/master/ -segmentron/solver/loss.py (Apache-2.0 License)""" -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weighted_loss - - -@weighted_loss -def dice_loss(pred, - target, - valid_mask, - smooth=1, - exponent=2, - class_weight=None, - ignore_index=255): - assert pred.shape[0] == target.shape[0] - total_loss = 0 - num_classes = pred.shape[1] - for i in range(num_classes): - if i != ignore_index: - dice_loss = binary_dice_loss( - pred[:, i], - target[..., i], - valid_mask=valid_mask, - smooth=smooth, - exponent=exponent) - if class_weight is not None: - dice_loss *= class_weight[i] - total_loss += dice_loss - return total_loss / num_classes - - -@weighted_loss -def binary_dice_loss(pred, target, valid_mask, smooth=1, exponent=2, **kwards): - assert pred.shape[0] == target.shape[0] - pred = pred.reshape(pred.shape[0], -1) - target = target.reshape(target.shape[0], -1) - valid_mask = valid_mask.reshape(valid_mask.shape[0], -1) - - num = torch.sum(torch.mul(pred, target) * valid_mask, dim=1) * 2 + smooth - den = torch.sum(pred.pow(exponent) + target.pow(exponent), dim=1) + smooth - - return 1 - num / den - - -@LOSSES.register_module() -class DiceLoss(nn.Module): - """DiceLoss. - - This loss is proposed in `V-Net: Fully Convolutional Neural Networks for - Volumetric Medical Image Segmentation `_. - - Args: - loss_type (str, optional): Binary or multi-class loss. - Default: 'multi_class'. Options are "binary" and "multi_class". - smooth (float): A float number to smooth loss, and avoid NaN error. - Default: 1 - exponent (float): An float number to calculate denominator - value: \\sum{x^exponent} + \\sum{y^exponent}. Default: 2. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Default to 1.0. - ignore_index (int | None): The label index to be ignored. Default: 255. 
- """ - - def __init__(self, - smooth=1, - exponent=2, - reduction='mean', - class_weight=None, - loss_weight=1.0, - ignore_index=255, - **kwards): - super(DiceLoss, self).__init__() - self.smooth = smooth - self.exponent = exponent - self.reduction = reduction - self.class_weight = get_class_weight(class_weight) - self.loss_weight = loss_weight - self.ignore_index = ignore_index - - def forward(self, - pred, - target, - avg_factor=None, - reduction_override=None, - **kwards): - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = pred.new_tensor(self.class_weight) - else: - class_weight = None - - pred = F.softmax(pred, dim=1) - num_classes = pred.shape[1] - one_hot_target = F.one_hot( - torch.clamp(target.long(), 0, num_classes - 1), - num_classes=num_classes) - valid_mask = (target != self.ignore_index).long() - - loss = self.loss_weight * dice_loss( - pred, - one_hot_target, - valid_mask=valid_mask, - reduction=reduction, - avg_factor=avg_factor, - smooth=self.smooth, - exponent=self.exponent, - class_weight=class_weight, - ignore_index=self.ignore_index) - return loss diff --git a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/download.py b/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/download.py deleted file mode 100644 index 16da96a4ac0a6bfbeb176bcc04f7119db169a0d2..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/download.py +++ /dev/null @@ -1,91 +0,0 @@ -r""" Functions to download semantic correspondence datasets """ - -import tarfile -import os - -import requests - -from . import pfpascal -from . import pfwillow -from . import spair - - -def load_dataset(benchmark, datapath, thres, split='test'): - r""" Instantiate a correspondence dataset """ - correspondence_benchmark = { - 'spair': spair.SPairDataset, - 'pfpascal': pfpascal.PFPascalDataset, - 'pfwillow': pfwillow.PFWillowDataset - } - - dataset = correspondence_benchmark.get(benchmark) - if dataset is None: - raise Exception('Invalid benchmark dataset %s.' % benchmark) - - return dataset(benchmark, datapath, thres, split) - - -def download_from_google(token_id, filename): - r""" Download desired filename from Google drive """ - - print('Downloading %s ...' % os.path.basename(filename)) - - url = 'https://docs.google.com/uc?export=download' - destination = filename + '.tar.gz' - session = requests.Session() - - response = session.get(url, params={'id': token_id}, stream=True) - token = get_confirm_token(response) - - if token: - params = {'id': token_id, 'confirm': token} - response = session.get(url, params=params, stream=True) - save_response_content(response, destination) - file = tarfile.open(destination, 'r:gz') - - print("Extracting %s ..." 
% destination) - file.extractall(filename) - file.close() - - os.remove(destination) - os.rename(filename, filename + '_tmp') - os.rename(os.path.join(filename + '_tmp', os.path.basename(filename)), filename) - os.rmdir(filename+'_tmp') - - -def get_confirm_token(response): - r"""Retrieves confirm token""" - for key, value in response.cookies.items(): - if key.startswith('download_warning'): - return value - - return None - - -def save_response_content(response, destination): - r"""Saves the response to the destination""" - chunk_size = 32768 - - with open(destination, "wb") as file: - for chunk in response.iter_content(chunk_size): - if chunk: - file.write(chunk) - - -def download_dataset(datapath, benchmark): - r"""Downloads semantic correspondence benchmark dataset from Google drive""" - if not os.path.isdir(datapath): - os.mkdir(datapath) - - file_data = { - # 'spair': ('1s73NVEFPro260H1tXxCh1ain7oApR8of', 'SPair-71k') old version - 'spair': ('1KSvB0k2zXA06ojWNvFjBv0Ake426Y76k', 'SPair-71k'), - 'pfpascal': ('1OOwpGzJnTsFXYh-YffMQ9XKM_Kl_zdzg', 'PF-PASCAL'), - 'pfwillow': ('1tDP0y8RO5s45L-vqnortRaieiWENQco_', 'PF-WILLOW') - } - - file_id, filename = file_data[benchmark] - abs_filepath = os.path.join(datapath, filename) - - if not os.path.isdir(abs_filepath): - download_from_google(file_id, abs_filepath) diff --git a/spaces/taquynhnga/CNNs-interpretation-visualization/pages/2_SmoothGrad.py b/spaces/taquynhnga/CNNs-interpretation-visualization/pages/2_SmoothGrad.py deleted file mode 100644 index 9b82e792c8c51b4047993515ebea73816da5b05b..0000000000000000000000000000000000000000 --- a/spaces/taquynhnga/CNNs-interpretation-visualization/pages/2_SmoothGrad.py +++ /dev/null @@ -1,124 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -import random -from backend.utils import make_grid, load_dataset, load_model, load_images - -from backend.smooth_grad import generate_smoothgrad_mask, ShowImage, fig2img -from transformers import AutoFeatureExtractor, AutoModelForImageClassification -import torch - -from matplotlib.backends.backend_agg import RendererAgg -_lock = RendererAgg.lock - -st.set_page_config(layout='wide') -BACKGROUND_COLOR = '#bcd0e7' - - -st.title('Feature attribution visualization with SmoothGrad') -st.write("""> **Which features are responsible for the current prediction of ConvNeXt?** - -In machine learning, it is helpful to identify the significant features of the input (e.g., pixels for images) that affect the model's prediction. -If the model makes an incorrect prediction, we might want to determine which features contributed to the mistake. -To do this, we can generate a feature importance mask, which is a grayscale image with the same size as the original image. -The brightness of each pixel in the mask represents the importance of that feature to the model's prediction. - -There are various methods to calculate an image sensitivity mask for a specific prediction. -One simple way is to use the gradient of a class prediction neuron concerning the input pixels, indicating how the prediction is affected by small pixel changes. -However, this method usually produces a noisy mask. -To reduce the noise, the SmoothGrad technique as described in [SmoothGrad: Removing noise by adding noise](https://arxiv.org/abs/1706.03825) by Daniel _et al_ is used, -which adds Gaussian noise to multiple copies of the image and averages the resulting gradients. 
-""") - -instruction_text = """Users need to input the model(s), type of image set and image set setting to use this functionality. -1. Choose model: Users can choose one or more models for comparison. -There are 3 models supported: [ConvNeXt](https://huggingface.co/facebook/convnext-tiny-224), -[ResNet](https://huggingface.co/microsoft/resnet-50) and [MobileNet](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/). -These 3 models have similar number of parameters. -\n2. Choose type of Image set: There are 2 types of Image set. They are _User-defined set_ and _Random set_. -\n3. Image set setting: If users choose _User-defined set_ in Image set, -users need to enter a list of image IDs separated by commas (,). For example, `0,1,4,7` is a valid input. -Check the page [ImageNet1k](/ImageNet1k) to see all the Image IDs. -If users choose _Random set_ in Image set, users just need to choose the number of random images to display here. -""" -with st.expander("See more instruction", expanded=False): - st.write(instruction_text) - - -imagenet_df = pd.read_csv('./data/ImageNet_metadata.csv') - -# --------------------------- LOAD function ----------------------------- - - -images = [] -image_ids = [] -# INPUT ------------------------------ -st.header('Input') -with st.form('smooth_grad_form'): - st.markdown('**Model and Input Setting**') - selected_models = st.multiselect('Model', options=['ConvNeXt', 'ResNet', 'MobileNet']) - selected_image_set = st.selectbox('Image set', ['User-defined set', 'Random set']) - - summit_button = st.form_submit_button('Set') - if summit_button: - setting_container = st.container() - # for id in image_ids: - # images = load_images(image_ids) - -with st.form('2nd_form'): - st.markdown('**Image set setting**') - if selected_image_set == 'Random set': - no_images = st.slider('Number of images', 1, 50, value=10) - image_ids = random.sample(list(range(50_000)), k=no_images) - else: - text = st.text_area('Specific Image IDs', value='0') - image_ids = list(map(lambda x: int(x.strip()), text.split(','))) - - run_button = st.form_submit_button('Display output') - if run_button: - for id in image_ids: - images = load_images(image_ids) - -st.header('Output') - -models = {} -feature_extractors = {} - -for i, model_name in enumerate(selected_models): - models[model_name], feature_extractors[model_name] = load_model(model_name) - - -# DISPLAY ---------------------------------- -if run_button: - header_cols = st.columns([1, 1] + [2]*len(selected_models)) - header_cols[0].markdown(f'
        Image ID
        ', unsafe_allow_html=True) - header_cols[1].markdown(f'
        Original Image
        ', unsafe_allow_html=True) - for i, model_name in enumerate(selected_models): - header_cols[i + 2].markdown(f'
        {model_name}
        ', unsafe_allow_html=True) - - grids = make_grid(cols=2+len(selected_models)*2, rows=len(image_ids)+1) - - -@st.cache(allow_output_mutation=True) -# @st.cache_data -def generate_images(image_id, model_name): - j = image_ids.index(image_id) - image = images[j]['image'] - return generate_smoothgrad_mask( - image, model_name, - models[model_name], feature_extractors[model_name], num_samples=10) - -with _lock: - for j, (image_id, image_dict) in enumerate(zip(image_ids, images)): - grids[j][0].write(f'{image_id}. {image_dict["label"]}') - image = image_dict['image'] - ori_image = ShowImage(np.asarray(image)) - grids[j][1].image(ori_image) - - for i, model_name in enumerate(selected_models): - # ori_image, heatmap_image, masked_image = generate_smoothgrad_mask(image, - # model_name, models[model_name], feature_extractors[model_name], num_samples=10) - heatmap_image, masked_image = generate_images(image_id, model_name) - # grids[j][1].image(ori_image) - grids[j][i*2+2].image(heatmap_image) - grids[j][i*2+3].image(masked_image) \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Gundam Scratch Build Manual 2 Download.md b/spaces/terfces0erbo/CollegeProjectV2/Gundam Scratch Build Manual 2 Download.md deleted file mode 100644 index 1887ea91c95f5792845ea952111c7f497cefc00b..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Gundam Scratch Build Manual 2 Download.md +++ /dev/null @@ -1,20 +0,0 @@ -

        gundam scratch build manual 2 download


        Download 🗹 https://bytlly.com/2uGlLA



        - -The manga is a continuation of the original Gundam and begins where the first Gundam ended. The first volume of the manga was published in Japan by Hobby Japan Publishing in December 2013, the manga is licensed in North America by Tokyopop. - -The sequel was released in Japan as part of the HJU Gundam Archives in 2017, the sequel begins where the first Gundam ended. - -The sequel to the anime series Gundam 00 sequel will be helmed by Gundam 00 animator and series director Hajime Yatate, who is best known for his work on the Gundam 00 OVAs, the sequel was announced during a trailer airing on July 2,2017, at the Gundam × Gojyo-san Gekijō: Shiro Kono Taiketsu no Ō-sama broadcast. - -The studio Sunrise decided to continue the project due to the public's high interest in the story and their favorable reception of the anime, despite the short production time. It is slated for release in Japan in 2020. - -The manga is set after the events of the anime film. Both the manga and the anime are set in the modern period of 20 years after the events of the second film. In an interview with Anime News Network, Yatate explained the story of the manga as "the conflict between the military and civilians", and that the manga is a re-imagining of the storyline of the anime. - -A sequel to the original Mobile Suit Gundam has been announced by Hajime Yatate, director and storywriter of the first Mobile Suit Gundam film. A manga called Gundam 00: A Re-imagining Of the original manga "Gundam" will be released in Japanese on March 1, 2020. The story will be a "re-imagining of the original manga" version of the events between the two films. The story will begin with the "identical principle" of the first film and after the events of the second film, will expand the time-line of the conflict between the military and civilians. - -The manga will be serialized in Shogakukan's Monthly Mobile Suit Gundam magazine starting on March 1, 2020. - -The sequel to the anime series Gundam 00 will be helmed by Gundam 00 animator and series director Hajime Yatate, who is best known for his work on the Gundam 00 OVAs, the sequel was announced during a trailer airing on July 2,2017, at the Gundam × Gojyo-san Gekij 4fefd39f24
        -
        -
        -

        diff --git a/spaces/thecherub/welovekaban/Dockerfile b/spaces/thecherub/welovekaban/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/thecherub/welovekaban/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Navisworks Freedom Today and Explore 3D Models in NWD Format.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Navisworks Freedom Today and Explore 3D Models in NWD Format.md deleted file mode 100644 index 35fdd5c4aaeb51a766a14ad9484033ec8eeb961b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Navisworks Freedom Today and Explore 3D Models in NWD Format.md +++ /dev/null @@ -1,17 +0,0 @@ - -

        Free Download Navisworks Freedom: A 3D Viewer for Navisworks

        -

        If you are looking for a free and easy way to view 3D models created with Navisworks, you should try Navisworks Freedom. Navisworks Freedom is a free viewer that allows you to open and explore 3D models in NWD format, which is a compressed and secure file format for Navisworks.

        -

Navisworks is software that enables you to coordinate, collaborate on, and review 3D design projects across different disciplines and platforms. With Navisworks, you can combine and analyze data from multiple sources, such as CAD, BIM, point clouds, and laser scans. You can also create simulations, animations, and renderings of your 3D models.

        -

        free download navisworks freedom


        Download Zip >>>>> https://urlcod.com/2uKaLE



        -

However, Navisworks is not free software and requires a subscription. If you want to share your 3D models with others who do not have Navisworks installed on their computers, you can use the Navisworks NWC Export Utility to convert your files to NWD format. NWD files are smaller and more secure than the original files, and they can be opened with Navisworks Freedom.

        -

        How to Download Navisworks Freedom

        -

        To download Navisworks Freedom, you can visit the Autodesk website and choose the version that matches your operating system and language. You can also find the system requirements and installation instructions on the same page. The download is free and does not require any registration or login.

        -

        Once you have downloaded and installed Navisworks Freedom, you can open any NWD file by double-clicking on it or by using the File menu in the software. You can then explore the 3D model using various tools and features, such as zooming, panning, rotating, measuring, sectioning, commenting, and more. You can also adjust the display settings, such as lighting, shading, textures, and colors.

        -

        Benefits of Using Navisworks Freedom

        -

Navisworks Freedom is a practical solution for streaming large CAD models without any model preparation, third-party server hosting, setup time, or ongoing costs. It allows you to view 3D models with high quality and accuracy, and to interact with them in real time. You can also use Navisworks Freedom to review and comment on 3D models with your team members or clients.

        -

        Navisworks Freedom is compatible with most of the popular CAD formats, such as DWG, DWF, DXF, RVT, IFC, NWC, and more. You can also view point cloud data and laser scans with Navisworks Freedom. However, if you want to edit or modify the 3D models, you will need to use Navisworks Manage or Navisworks Simulate, which are paid versions of the software.

        -

        -

        Conclusion

        -

        Navisworks Freedom is a free download that lets you view 3D models created with Navisworks in NWD format. It is a useful tool for anyone who wants to open and explore 3D models without having to install or pay for Navisworks. You can download Navisworks Freedom from the Autodesk website and start viewing your 3D models today.

        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How I Cracked Auto Keyboard 9.0 and Automated My Keyboard Tasks for Free.md b/spaces/tialenAdioni/chat-gpt-api/logs/How I Cracked Auto Keyboard 9.0 and Automated My Keyboard Tasks for Free.md deleted file mode 100644 index b959456ee97d8bdb875fe32980aeb2d4bade9819..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How I Cracked Auto Keyboard 9.0 and Automated My Keyboard Tasks for Free.md +++ /dev/null @@ -1,27 +0,0 @@ -
        -

        Auto Keyboard 9.0 Serial Key: How to Download and Activate for Free

        -

        Auto Keyboard 9.0 is a software that can simulate keyboard keystrokes and mouse actions automatically and repeatedly. It can save you a lot of time and effort if you have to perform a lot of repetitive tasks on your computer. However, Auto Keyboard 9.0 is not a free software and you need to pay $29.95 for a lifetime license. If you are looking for a way to download and activate Auto Keyboard 9.0 for free, this article will show you how.

        -

        Disclaimer: This article is for educational purposes only. We do not condone piracy or illegal use of software. Please support the developers by purchasing the official version of Auto Keyboard 9.0.

        -

        auto keyboard 9.0 serial key


        Download Filehttps://urlcod.com/2uK3o1



        -

        Step 1: Download Auto Keyboard 9.0 Trial Version

        -

        The first step is to download the trial version of Auto Keyboard 9.0 from its official website. The trial version allows you to use all the features of the software for 15 days, but it will add a watermark to your exported files. You can choose the Windows version according to your operating system.

        -

        Step 2: Download Auto Keyboard 9.0 Serial Key

        -

        The next step is to download the serial key for Auto Keyboard 9.0 from a reliable source. There are many websites that claim to provide the serial key, but some of them may contain viruses or malware that can harm your computer. Therefore, you should be careful and scan the file with an antivirus program before opening it. Here are some possible sources for the serial key:

        -
          -
        • YouTube: This video shows you how to download and register Auto Keyboard 9.0 by Umair Mughal. You can find the download link and the registration name and key in the description of the video.
        • -
        • YouTube: This video shows you how to download and register Auto Keyboard 9.0 by Voting Game. You can find the download link and the registration name and key in the description of the video.
        • -
        • YouTube: This video shows you how to download and register Auto Keyboard 9.0/10.0 by Registration Code Full Version 2023. You can find the download link and the registration name and key in the description of the video.
        • -
        -

        Step 3: Install Auto Keyboard 9.0 and Enter Serial Key

        -

        The final step is to install Auto Keyboard 9.0 and enter the serial key to activate it. Here are the detailed steps:

        -
          -
        1. Run the setup file of Auto Keyboard 9.0 and follow the instructions to install it on your computer.
        2. -
        3. After installation, run Auto Keyboard 9.0 and click on Register button.
        4. -
        5. Enter the registration name and key that you downloaded from one of the sources above.
        6. -
        7. Click on OK button and enjoy using Auto Keyboard 9.0 without any watermark or limitation.
        8. -
        -

        Conclusion

        -

        Auto Keyboard 9.0 is a useful software that can help you automate your keyboard and mouse tasks with ease. However, if you don't want to pay for it, you can try to download and activate it for free by following the steps above. However, we recommend that you purchase the official version of Auto Keyboard 9.0 to support the developers and get more updates and technical support.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Avoid the Risks of Using a Cracked Version of Vegas Pro and Get it Gratis Legally.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Avoid the Risks of Using a Cracked Version of Vegas Pro and Get it Gratis Legally.md deleted file mode 100644 index ba10c29bfac028d0e32cbfb812c6f72bbc45024b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Avoid the Risks of Using a Cracked Version of Vegas Pro and Get it Gratis Legally.md +++ /dev/null @@ -1,27 +0,0 @@ - -

        How to Get Vegas Pro Gratis for Video Editing

        -

Vegas Pro is professional video editing software that offers a range of features, such as multicam editing, color grading, audio mixing, motion tracking, and more. However, it is also pricey, costing hundreds of dollars for a license. If you are looking for a way to get Vegas Pro gratis, or for free, you might be tempted to download a cracked version from the internet. But beware: this is risky and illegal.

        -

        vegas pro gratis


        Download ››››› https://urlcod.com/2uK6x7



        -

        In this article, we will show you some of the dangers of using a cracked version of Vegas Pro, and some of the alternatives you can use instead.

        -

        The Dangers of Using a Cracked Version of Vegas Pro

        -

        A cracked version of Vegas Pro is a modified version that bypasses the activation process and allows you to use the software without paying for it. However, this comes with several drawbacks, such as:

        -
          -
        • Malware: Cracked versions of Vegas Pro can contain viruses, spyware, ransomware, or other malicious programs that can harm your computer or steal your personal information. You might end up losing your data, compromising your privacy, or even paying a ransom to unlock your files.
        • -
        • Lack of updates: Cracked versions of Vegas Pro do not receive any updates or patches from the official developer. This means that you will miss out on new features, bug fixes, security improvements, and compatibility with new formats and devices. You might also encounter errors, crashes, or glitches while using the software.
        • -
        • Lack of support: Cracked versions of Vegas Pro do not have any technical support or customer service from the official developer. If you encounter any problems or issues while using the software, you will have no one to help you or guide you. You will also have no access to online tutorials, forums, or communities that can offer tips and tricks.
        • -
        • Legal issues: Cracked versions of Vegas Pro are illegal and violate the terms and conditions of the software license. If you are caught using a cracked version of Vegas Pro, you might face legal consequences, such as fines, lawsuits, or even jail time. You might also damage your reputation or credibility as a video editor.
        • -
        -

        As you can see, using a cracked version of Vegas Pro is not worth the risk. You might end up paying more than what you saved in the long run.

        -

        -

        The Alternatives to Using a Cracked Version of Vegas Pro

        -

        Fortunately, there are some alternatives to using a cracked version of Vegas Pro that are legal and safe. Here are some of them:

        -
          -
        • Free trial: The official website of Vegas Pro offers a free trial version that you can download and use for 30 days. This is a great way to test out the software and see if it suits your needs and preferences. You can access all the features and functions of the software without any limitations or watermarks. However, after 30 days, you will need to purchase a license to continue using the software.
        • -
        • Discounts and deals: The official website of Vegas Pro also offers discounts and deals from time to time that can help you save money on buying a license. For example, you can get up to 40% off on selected products during Black Friday or Cyber Monday sales. You can also get discounts if you are a student, teacher, or non-profit organization. You can check the website regularly or subscribe to their newsletter to get notified of any promotions or offers.
        • -
• Free alternatives: There are also free video editing programs that you can use instead of Vegas Pro. They may not have all the features and functions of Vegas Pro, but they can still help you create and edit videos with ease. Popular free options include DaVinci Resolve, HitFilm Express, Lightworks, Shotcut, and OpenShot. You can download them from their official websites and use them without any restrictions or costs.
        • -
        -

        These are some of the alternatives to using a cracked version of Vegas Pro that are legal and safe. You can choose the one that best suits your budget and needs.

        -

        Conclusion

        -

Vegas Pro is professional video editing software that offers a wide range of features and functions. However, it is expensive, and using a cracked version exposes you to malware, missing updates, and legal trouble. If the price is the problem, use the free trial, wait for a discount, or switch to one of the free alternatives instead.

        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Dolphin Emulator APK Play GameCube and Wii Games on Android.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Dolphin Emulator APK Play GameCube and Wii Games on Android.md deleted file mode 100644 index 8eb25477de4a1def57dc592cd9194e7ed162006c..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Dolphin Emulator APK Play GameCube and Wii Games on Android.md +++ /dev/null @@ -1,102 +0,0 @@ - -

        Dolphin Emulator 5.0-8715 APK: What You Need to Know

        -

If you are a fan of Nintendo GameCube and Wii games, you might have heard of Dolphin Emulator. It is software that lets you play your favorite games on your Android device. But what is Dolphin Emulator 5.0-8715 APK, and how can you get it? In this article, we will answer these questions and more.

        -

        dolphin emulator 5.0-8715 apk


        Download ✑ ✑ ✑ https://bltlly.com/2uOll2



        -

        What is Dolphin Emulator?

        -

        Dolphin Emulator is an open-source project that aims to emulate the Nintendo GameCube and Wii consoles on various platforms, such as Windows, Linux, macOS, and Android. It was first released in 2003 and has since become one of the most popular emulators in the gaming community.

        -

        Features of Dolphin Emulator

        -

        Some of the features that make Dolphin Emulator stand out are:

        -
          -
        • It supports high-definition graphics, up to 1080p, and enhances the original game quality.
        • -
        • It allows you to customize the controls, using either touch screen, keyboard, mouse, or gamepad.
        • -
        • It supports online multiplayer, using either Wi-Fi or Bluetooth.
        • -
        • It has a save state feature, which lets you save and load your game progress at any point.
        • -
        • It has a cheat code feature, which lets you modify the game parameters to your liking.
        • -
        • It has a turbo mode feature, which lets you speed up or slow down the game speed.
        • -
        -

        Supported Platforms and Games

        -

        Dolphin Emulator supports a wide range of platforms, including Windows (7 or higher), Linux (Ubuntu 14.04 or higher), macOS (10.10 or higher), and Android (5.0 or higher). It also supports various architectures, such as x86, x64, ARM, and AArch64.

        -

Dolphin Emulator can run most GameCube and Wii games, such as Super Mario Sunshine, The Legend of Zelda: Twilight Princess, Metroid Prime, Resident Evil 4, Super Smash Bros. Melee, Mario Kart Wii, and many more. However, some games may not work properly, or at all, due to compatibility issues or bugs.

        -

        What is Dolphin Emulator 5.0-8715 APK?

        -

        Dolphin Emulator 5.0-8715 APK is the latest version of the Dolphin Emulator app for Android devices. It was released on June 21st, 2023, and it contains several improvements and fixes over the previous versions.

        -

        dolphin emulator 5.0-8715 apk download
        -dolphin emulator 5.0-8715 apk free
        -dolphin emulator 5.0-8715 apk latest version
        -dolphin emulator 5.0-8715 apk for android
        -dolphin emulator 5.0-8715 apk mod
        -dolphin emulator 5.0-8715 apk full
        -dolphin emulator 5.0-8715 apk update
        -dolphin emulator 5.0-8715 apk old version
        -dolphin emulator 5.0-8715 apk no root
        -dolphin emulator 5.0-8715 apk offline
        -dolphin emulator 5.0-8715 apk premium
        -dolphin emulator 5.0-8715 apk cracked
        -dolphin emulator 5.0-8715 apk pro
        -dolphin emulator 5.0-8715 apk best settings
        -dolphin emulator 5.0-8715 apk cheats
        -dolphin emulator 5.0-8715 apk games
        -dolphin emulator 5.0-8715 apk review
        -dolphin emulator 5.0-8715 apk tutorial
        -dolphin emulator 5.0-8715 apk guide
        -dolphin emulator 5.0-8715 apk features
        -dolphin emulator 5.0-8715 apk requirements
        -dolphin emulator 5.0-8715 apk compatibility
        -dolphin emulator 5.0-8715 apk performance
        -dolphin emulator 5.0-8715 apk speed
        -dolphin emulator 5.0-8715 apk graphics
        -dolphin emulator 5.0-8715 apk sound
        -dolphin emulator 5.0-8715 apk controller
        -dolphin emulator 5.0-8715 apk keyboard
        -dolphin emulator 5.0-8715 apk mouse
        -dolphin emulator 5.0-8715 apk touch screen

        -

        What's New in Dolphin Emulator 5.0-8715 APK?

        -

        Some of the new features and changes in Dolphin Emulator 5.0-8715 APK are:

        -
          -
        • It adds support for Vulkan API, which improves the performance and stability of the app.
        • -
        • It fixes several issues with audio, video, input, and network.
        • -
        • It improves the compatibility with some games, such as Super Paper Mario, Sonic Colors, Kirby's Return to Dream Land, and others.
        • -
        • It updates the user interface and adds new options for customization.
        • -
        -

        How to Download and Install Dolphin Emulator 5.0-8715 APK?

        -

        To download and install Dolphin Emulator 5.0-8715 APK on your Android device, you need to follow these steps:

        -
          -
        1. Go to the official website of Dolphin Emulator and download the APK file from the download section. Alternatively, you can use this link: [Dolphin Emulator 5.0-8715 APK].
        2. -
        3. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
        4. -
        5. Locate the downloaded APK file on your device and tap on it to start the installation process.
        6. -
        7. Follow the instructions on the screen and wait for the installation to finish.
        8. -
        9. Launch the Dolphin Emulator app and enjoy playing your favorite games.
        10. -
        -

        Pros and Cons of Dolphin Emulator 5.0-8715 APK

        -

        Dolphin Emulator 5.0-8715 APK has many advantages and disadvantages that you should consider before using it. Here are some of them:

        -

        Pros of Dolphin Emulator 5.0-8715 APK

        -
          -
        • It is free and open-source, which means you can use it without paying anything or worrying about legal issues.
        • -
        • It is updated regularly, which means you can enjoy the latest features and fixes.
        • -
        • It has a large community of users and developers, which means you can get support and feedback easily.
        • -
        • It has a high compatibility rate, which means you can play most of the GameCube and Wii games on your Android device.
        • -
        • It has a lot of customization options, which means you can adjust the settings to your preference and device specifications.
        • -
        -

        Cons of Dolphin Emulator 5.0-8715 APK

        -
          -
        • It requires a powerful device, which means you may experience lag, crashes, or glitches if your device is not capable enough.
        • -
        • It consumes a lot of battery, which means you may need to charge your device frequently or use a power bank.
        • -
        • It may not work with some games, which means you may encounter errors or bugs that prevent you from playing them.
        • -
        • It may not support some features, such as motion controls or microphone input, which means you may miss out on some game functions.
        • -
        • It may violate some terms of service, which means you may risk getting banned or sued by Nintendo or other parties.
        • -
        -

        Conclusion

        -

        Dolphin Emulator 5.0-8715 APK is a great app for playing Nintendo GameCube and Wii games on your Android device. It has many features, such as high-definition graphics, online multiplayer, save state, cheat code, turbo mode, and more. It also has a high compatibility rate, which means you can play most of the games without any problems. However, it also has some drawbacks, such as requiring a powerful device, consuming a lot of battery, not working with some games or features, and possibly violating some terms of service. Therefore, you should weigh the pros and cons before using it and do so at your own risk.

        -

        FAQs

        -

        Here are some frequently asked questions about Dolphin Emulator 5.0-8715 APK:

        -

        Q: Is Dolphin Emulator 5.0-8715 APK safe to use?

        -

        A: Dolphin Emulator 5.0-8715 APK is safe to use as long as you download it from the official website or a trusted source. However, you should be careful about the games you download and play, as they may contain viruses or malware that can harm your device or data.

        -

        Q: How can I get GameCube and Wii games for Dolphin Emulator 5.0-8715 APK?

        -

        A: You can get GameCube and Wii games for Dolphin Emulator 5.0-8715 APK by either ripping them from your own discs using a PC or a Wii console, or downloading them from the internet. However, you should only download games that you own legally and avoid pirated or illegal copies.

        -

        Q: How can I transfer my save files from Dolphin Emulator to Dolphin Emulator 5.0-8715 APK?

        -

        A: You can transfer your save files from Dolphin Emulator to Dolphin Emulator 5.0-8715 APK by copying them from the Dolphin Emu folder on your PC to the dolphin-emu folder on your Android device. You can use a USB cable or a cloud service to do this.
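For example, with the device mounted over USB, a small script can copy the whole save folder in one go. This is only a sketch: the paths below are placeholders, and your actual save folder locations will differ.

```python
import shutil

# Placeholder paths -- point these at your own desktop Dolphin save folder
# and at the dolphin-emu folder on the mounted Android device.
pc_saves = r"C:\Users\you\Documents\Dolphin Emulator\GC"
android_saves = r"E:\dolphin-emu\GC"

shutil.copytree(pc_saves, android_saves, dirs_exist_ok=True)  # copies every save file across
```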

        -

        Q: How can I improve the performance of Dolphin Emulator 5.0-8715 APK?

        -

        A: You can improve the performance of Dolphin Emulator 5.0-8715 APK by adjusting the settings to suit your device specifications and preferences. For example, you can lower the resolution, enable or disable anti-aliasing, change the backend, tweak the CPU and GPU settings, and more. You can also close other apps running in the background, clear the cache, and update your device software.

        -

        Q: How can I contact the developers of Dolphin Emulator 5.0-8715 APK?

        -

        A: You can contact the developers of Dolphin Emulator 5.0-8715 APK by visiting their official website, where you can find their email address, social media accounts, forums, blog, and wiki. You can also report bugs, request features, or give feedback through their GitHub page.

        -
        -
        \ No newline at end of file diff --git a/spaces/timqian/like-history/static/js/main.5c83e060.js b/spaces/timqian/like-history/static/js/main.5c83e060.js deleted file mode 100644 index e9d2ac7567ae39a6fbfdb351253faf744c55e694..0000000000000000000000000000000000000000 --- a/spaces/timqian/like-history/static/js/main.5c83e060.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! For license information please see main.5c83e060.js.LICENSE.txt */ -!function(){"use strict";var e={463:function(e,n,t){var r=t(791),l=t(296);function a(e){for(var n="https://reactjs.org/docs/error-decoder.html?invariant="+e,t=1;t