diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cain and Abel IP Stresser Cracked Learn How to Perform Dictionary Brute-Force and Cryptanalysis Attacks on Encrypted Passwords.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cain and Abel IP Stresser Cracked Learn How to Perform Dictionary Brute-Force and Cryptanalysis Attacks on Encrypted Passwords.md deleted file mode 100644 index 35ac1b61d8379acf4b4746bf96314ff9c7811db7..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cain and Abel IP Stresser Cracked Learn How to Perform Dictionary Brute-Force and Cryptanalysis Attacks on Encrypted Passwords.md +++ /dev/null @@ -1,103 +0,0 @@ - -

What is Cain and Abel?

-

Cain and Abel is a password recovery tool for Microsoft Windows operating systems. It allows easy recovery of various kinds of passwords by sniffing the network, cracking encrypted passwords using dictionary, brute-force and cryptanalysis attacks, recording VoIP conversations, decoding scrambled passwords, revealing password boxes, uncovering cached passwords and analyzing routing protocols.

-

cain and abel ip stresser cracked


Download Zip ★★★ https://byltly.com/2uKvF2



-

But Cain and Abel is not just a password recovery tool. It is also a powerful network sniffer and analyzer that can capture and manipulate network traffic. It can perform various attacks such as ARP poisoning, DNS spoofing, man-in-the-middle, session hijacking, SSL stripping and more.

-

One of the most notorious uses of Cain and Abel is as an IP stresser, a service for flooding a network or server with traffic; the next section explains what that means in practice.

-

What is an IP stresser?

-

An IP stresser is a service that tests the resilience of a network or a server by sending a large amount of traffic to it. It can also be used to launch distributed denial-of-service (DDoS) attacks, which aim to disrupt or disable the target by overwhelming it with requests.

-

An IP stresser can measure the bandwidth and latency of a network by sending packets of different sizes and frequencies. It can also simulate different types of traffic such as TCP, UDP, ICMP, HTTP, HTTPS, DNS, FTP, SMTP and more. An IP stresser can help network administrators to identify bottlenecks, vulnerabilities and performance issues in their networks.

-

However, an IP stresser can also be used for malicious purposes. Some hackers use IP stressers to launch DDoS attacks against their enemies or competitors. They can target websites, servers, online games, applications or even individual devices. They can cause slowdowns, outages, data loss or damage to the target.

-

How to use Cain and Abel as an IP stresser?

-

Cain and Abel can be used as an IP stresser by exploiting its ability to spoof ARP packets. ARP stands for Address Resolution Protocol, which is used to map IP addresses to MAC addresses on a local area network (LAN). By spoofing ARP packets, Cain and Abel can trick other devices on the same LAN into thinking that it is the gateway or router. This way, it can intercept all the traffic that passes through the LAN.

-


-

To use Cain and Abel as an IP stresser, you need to follow these steps:

-
    -
  1. Download and install Cain and Abel from here. Make sure you have WinPcap installed as well.
  2. -
  3. Run Cain and Abel as an administrator. Click on the Sniffer tab and then click on the Start/Stop Sniffer button.
  4. -
  5. Click on the Configure button and select your network adapter from the list. Make sure you select the one that is connected to your LAN.
  6. -
  7. Click on the Sniffer tab again and then click on the + button. Select All Hosts in my subnet from the list.
  8. -
  9. Wait for Cain and Abel to scan your LAN for active hosts. You should see a list of IP addresses and MAC addresses in the table.
  10. -
  11. Select one or more hosts that you want to target for your IP stress test or DDoS attack. Right-click on them and select Resolve Host Name to get their domain names.
  12. -
  13. Click on the APR tab at the bottom. Click on the + button again and select Use Spoofed IP & MAC Addresses from the list.
  14. -
  15. In the dialog box that appears, enter your own IP address in the first field and your own MAC address in the second field. You can find them by typing ipconfig /all in a command prompt window.
  16. -
  17. In the third field, enter the IP address of your gateway or router. You can find it by typing ipconfig /all in a command prompt window as well.
  18. -
  19. In the fourth field, enter 00-00-00-00-00-00 as the MAC address of your gateway or router.
  20. -
  21. Click OK to close the dialog box.
  22. -
  23. You should see two entries in the APR table: one for your own device (with your own IP address) spoofing as your gateway or router (with 00-00-00-00-00-00 as its MAC address), and one for your gateway or router (with its real IP address) spoofing as your own device (with your own MAC address).
  24. -
  25. Select both entries in the APR table. Right-click on them and select Start ARP.
  26. -
  27. You have now successfully spoofed ARP packets on your LAN. All traffic from your target hosts will now go through your device instead of your gateway or router.
  28. -
  29. To launch an IP stress test or DDoS attack against your target hosts, click on the Attack tab at the bottom.
  30. -
  31. Select one or more attack methods from the list. You can choose from TCP/UDP/ICMP Floods, HTTP Floods, DNS Floods, FTP Floods, SMTP Floods and more.
  32. -
  33. Enter the parameters for each attack method such as port number, packet size, packet rate etc.
  34. -
  35. Click on Start Attack button to begin sending packets to your target hosts.
  36. -
  37. To stop an attack method, select it from the list again and click on Stop Attack button.
  38. -
  39. To stop all attack methods at once, click on Stop All Attacks button.
  40. -
41. To stop spoofing ARP packets on your LAN, go back to the APR tab at the bottom, select both entries in the APR table, right-click on them, and select Stop ARP.
-

What are the risks and benefits of using Cain and Abel as an IP stresser?

-

Using Cain and Abel as an IP stresser has some risks and benefits, depending on your purpose and perspective. Here are some of them:

- -

Conclusion

-

Cain and Abel is a versatile tool that can be used for various purposes, including password recovery, network sniffing and analysis, and IP stressing. It can be a useful tool for network administrators, security professionals, students, and hobbyists who want to test, learn, or practice different skills and techniques related to network security.

-

However, Cain and Abel can also be a dangerous tool that can be used for malicious purposes, such as launching DDoS attacks against unauthorized targets. Such attacks can cause serious harm and inconvenience to the victims and their users. They can also violate laws and ethics and result in legal consequences for the attackers.

-

Therefore, it is important to use Cain and Abel responsibly and ethically. It is also important to protect yourself and your device from DDoS attacks by using appropriate security measures and countermeasures. Remember that with great power comes great responsibility.

-

FAQs

- -

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Auslogics BoostSpeed 10.0.19.0 Crack Premium With Serial Key 2019 What You Need to Know About This Powerful Software.md b/spaces/1gistliPinn/ChatGPT4/Examples/Auslogics BoostSpeed 10.0.19.0 Crack Premium With Serial Key 2019 What You Need to Know About This Powerful Software.md deleted file mode 100644 index d8028c2658527bb2b197ec93c4e0489c8aae8ae9..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Auslogics BoostSpeed 10.0.19.0 Crack Premium With Serial Key 2019 What You Need to Know About This Powerful Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

Auslogics BoostSpeed 10.0.19.0 Crack Premium With Serial Key 2019


Download ————— https://imgfil.com/2uy1ni



-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cisco Asa Vmware Image Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cisco Asa Vmware Image Download.md deleted file mode 100644 index 8edb8599e5336d664eeb9eecb0badfab4e4f0fa5..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Cisco Asa Vmware Image Download.md +++ /dev/null @@ -1,94 +0,0 @@ - -

Cisco ASA VMware Image Download: A Guide for Network Security Professionals

- -

Cisco ASA (Adaptive Security Appliance) is a family of network security devices that provide firewall, VPN, intrusion prevention, and other security features for enterprise and service provider networks. Cisco ASA devices are widely used and trusted by network administrators and security experts around the world.

-

cisco asa vmware image download


Download File: https://imgfil.com/2uy0LL



- -

However, deploying and managing physical Cisco ASA devices can be costly and complex, especially for small and medium-sized businesses or remote offices. That is why Cisco offers a virtual version of the ASA device, called Cisco ASA Virtual (ASAv), that can run on any server class x86 CPU device that is capable of running VMware ESXi.

- -

Cisco ASAv is a software-only solution that provides the same features and functionality as the physical ASA device, but with more flexibility and scalability. Cisco ASAv can be deployed on any VMware ESXi host, either on-premises or in the cloud, and can be integrated with other VMware products and services. Cisco ASAv can also be easily migrated, cloned, backed up, restored, or updated using VMware tools and processes.
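As a rough illustration of how such a deployment can be scripted, the sketch below drives VMware's ovftool from Python to push an ASAv OVA onto an ESXi host. The OVA file name, host address, datastore, and VM name are placeholders, ovftool must already be installed, and the exact flags can vary between ovftool releases, so treat this as an outline rather than a finished tool.

```python
# Illustrative sketch: deploy an ASAv OVA to an ESXi host via VMware ovftool.
# All names, paths, and addresses below are placeholders.
import subprocess

OVA_PATH = "asav-esxi.ova"                           # placeholder OVA file
TARGET = "vi://administrator@esxi01.example.local"   # placeholder ESXi target

cmd = [
    "ovftool",
    "--acceptAllEulas",          # accept the EULA non-interactively
    "--datastore=datastore1",    # placeholder datastore name
    "--name=asav-lab",           # name of the resulting VM
    OVA_PATH,
    TARGET,
]

# ovftool prompts for the ESXi password unless one is embedded in the target URL.
result = subprocess.run(cmd, check=False)
print("ovftool exit code:", result.returncode)
```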

- -

What are the benefits of Cisco ASA VMware Image Download?

- -

Cisco ASA VMware Image Download has many benefits for network security professionals who want to use Cisco ASAv for their network security needs. Some of these benefits are:

- - - -

How to use Cisco ASA VMware Image Download?

- -

Cisco ASA VMware Image Download can be used for various purposes, such as:

- - - -

Conclusion

- -

Cisco ASA VMware Image Download is a valuable resource for network security professionals who want to use Cisco ASAv for their network security needs. Cisco ASAv is a software-only solution that provides the same features and functionality as the physical ASA device, but with more flexibility and scalability. Cisco ASAv can be deployed on any VMware ESXi host, either on-premises or in the cloud, and can be integrated with other VMware products and services. Cisco ASAv can also be easily migrated, cloned, backed up, restored, or updated using VMware tools and processes.

-

- -

If you are looking for a quality solution for network security, you should definitely consider using Cisco ASA VMware Image Download. It is a valuable resource that can help you deploy and manage Cisco ASAv in an effective and convenient way.

-

What are the alternatives to Cisco ASA VMware Image Download?

- -

Cisco ASA VMware Image Download is a great solution for network security, but it is not the only option available. There are some alternatives to Cisco ASAv that you may want to consider, depending on your network needs and preferences. Some of these alternatives are:

- - - -

Conclusion

- -

Cisco ASA VMware Image Download is a valuable resource for network security professionals who want to use Cisco ASAv for their network security needs. Cisco ASAv is a software-only solution that provides the same features and functionality as the physical ASA device, but with more flexibility and scalability. Cisco ASAv can be deployed on any VMware ESXi host, either on-premises or in the cloud, and can be integrated with other VMware products and services. However, there are also some alternatives to Cisco ASAv that you may want to consider, depending on your network needs and preferences. You can use Cisco FTDv, Cisco Meraki MX, or Cisco CNF as other solutions for network security.

- -

If you are looking for a quality solution for network security, you should definitely consider using Cisco ASA VMware Image Download. It is a valuable resource that can help you deploy and manage Cisco ASAv in an effective and convenient way.

-

What are the resources and references for Cisco ASA VMware Image Download?

- -

Cisco ASA VMware Image Download can help you learn and use Cisco ASAv for your network security needs, but you also need some resources and references to guide you along the way. Some of these resources and references are:

- - - -

Conclusion

- -

Cisco ASA VMware Image Download is a valuable resource for network security professionals who want to use Cisco ASAv for their network security needs. Cisco ASAv is a software-only solution that provides the same features and functionality as the physical ASA device, but with more flexibility and scalability. Cisco ASAv can be deployed on any VMware ESXi host, either on-premises or in the cloud, and can be integrated with other VMware products and services. However, you also need some resources and references to help you learn and use Cisco ASAv effectively and efficiently. You can use Cisco ASAv documentation, support, community, training, and blogs as other resources and references for network security.

- -

If you are looking for a quality solution for network security, you should definitely consider using Cisco ASA VMware Image Download. It is a valuable resource that can help you deploy and manage Cisco ASAv in an effective and convenient way.

-

Final Thoughts

- -

Network security is one of the most important and challenging aspects of any network, as it protects the network from various threats and attacks that can compromise its performance and integrity. Network security requires a reliable and robust solution that can provide firewall, VPN, intrusion prevention, and other security features for the network.

- -

One such solution is Cisco ASA (Adaptive Security Appliance), a family of network security devices that are widely used and trusted by network administrators and security experts around the world. However, Cisco ASA devices can also be costly and complex to deploy and manage, especially for small and medium-sized businesses or remote offices.

- -

That is why Cisco offers a virtual version of the ASA device, called Cisco ASAv (Adaptive Security Virtual Appliance), that can run on any server class x86 CPU device that is capable of running VMware ESXi. Cisco ASAv is a software-only solution that provides the same features and functionality as the physical ASA device, but with more flexibility and scalability. Cisco ASAv can be deployed on any VMware ESXi host, either on-premises or in the cloud, and can be integrated with other VMware products and services.

- -

Cisco ASA VMware Image Download is a valuable resource for network security professionals who want to use Cisco ASAv for their network security needs. Cisco ASA VMware Image Download allows you to access the latest version of Cisco ASAv software from the official Cisco website, and deploy it on any VMware ESXi host in minutes. Cisco ASA VMware Image Download also allows you to configure, manage, and troubleshoot Cisco ASAv using various tools and methods.

- -

However, Cisco ASA VMware Image Download is not the only option available for network security. There are some alternatives to Cisco ASAv that you may want to consider, depending on your network needs and preferences. You can use Cisco FTDv (Firepower Threat Defense Virtual), Cisco Meraki MX, or Cisco CNF (Secure Firewall Cloud Native) as other solutions for network security.

- -

You also need some resources and references to help you learn and use Cisco ASAv effectively and efficiently. You can use Cisco ASAv documentation, support, community, training, and blogs as other resources and references for network security.

- -

If you are looking for a quality solution for network security, you should definitely consider using Cisco ASA VMware Image Download. It is a valuable resource that can help you deploy and manage Cisco ASAv in an effective and convenient way.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Ebook Biokimia Harper Bahas.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Ebook Biokimia Harper Bahas.md deleted file mode 100644 index c900f41a76c567811f4223ec27cd3555820ff945..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Ebook Biokimia Harper Bahas.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

DOWNLOAD: https://urloso.com/2xjsk3. DOWNLOAD: http://mp3-mp4-converter.blogspot.com.ec/.. Free download. Book english of the free download is now - 2cmxex.. 2cmxex Crack Keygen.. Main menu. Direct download of Biokimia Harper Bahasa Indonesia... ctd biokimia harper bahas pertama kali.. beta biokimia harper bahasa..

-

download ebook biokimia harper bahas


Download File 🗸 https://imgfil.com/2uy12w



-

DOWNLOAD: https://urloso.com/2f3v61. Download Wubi 11.20 Crack Kanishka Pdf Online Keygen Free. Kanishka Kanishka.ro Kanishka Download Kanishka Crack Kanishka Kanishka Kanishka Patched Crack Kanishka Download Kanishka Online Patch.

-

DOWNLOAD: https://urloso.com/2g39ov. Download Parttime Informasi Masalah Sepak Bola Asli.info. Direct download of Biokimia Harper Bahasa Indonesia.. Biokimia Harper Bahasa Indonesia.pdf.. Biokimia Harper Bahasa Indonesia.pdf online now, exclusively on AccessPharmacy..

-

Smith, Harper's Illustrated Biochemistry, 27th ed., ISBN 9780744808663. [Indonesian edition]. Google Scholar. It could only be downloaded later at home after class, because the file size was too large.

-

Download Harper's Illustrated Biochemistry. Harper's Illustrated Biochemistry. Harper's Illustrated Biochemistry. Harper's Illustrated Biochemistry.. 27. Kindle edition. $6.50.. Smith, Harper'S Illustrated Biochemistry Edi. 27, ISBN 9780744808663. [Bahasa Indonesia]. Google Scholar.. apakah biokimia harper incisi tiada?. May 23, 2017. Berry, Biochemistry (Harper s Biochemistry), Revised Edition, Harper s Biochemistry,. . Berry, Harper'S Illustrated Biochemistry, Harper'S Illustrated Biochemistry,.. Berry, Biochemistry (Harper s Biochemistry), Revised Edition, Harper s Biochemistry,.. Berry, Harper'S Illustrated Biochemistry, Harper'S Illustrated Biochemistry, 1990, ISBN 9780674783367. [Bahasa Indonesia]. Brassey'S.

-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/F1 Challenge 2007 __HOT__ Crack Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/F1 Challenge 2007 __HOT__ Crack Download.md deleted file mode 100644 index 8fe13ec5e0a4f48b367632c323d480909ba25fc3..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/F1 Challenge 2007 __HOT__ Crack Download.md +++ /dev/null @@ -1,152 +0,0 @@ -
-

F1 Challenge 2007 Crack Download: How to Enjoy the Best Racing Game of the Year

-

If you are a fan of Formula One racing, you might have heard of F1 Challenge 2007, a video game that simulates the 2007 season of the sport. F1 Challenge 2007 is a mod for F1 Challenge '99-'02, a game developed by EA Sports and released in 2003. The mod features updated cars, tracks, teams, drivers, and graphics to match the 2007 season.

-

F1 Challenge 2007 Crack Download


Download Zip ———>>> https://imgfil.com/2uxXRQ



-

However, F1 Challenge 2007 is not an official game and it is not available for purchase. You can only download it from the internet for free. But how do you download and install F1 Challenge 2007 on your PC? And how do you get the crack that allows you to play it without any errors or limitations? In this article, we will show you how to do that in a few simple steps.

-

Step 1: Download F1 Challenge 2007 Mod

-

The first thing you need to do is to download the F1 Challenge 2007 mod from a reliable source. There are many websites that offer this mod, but some of them may contain viruses or malware that can harm your computer. We recommend you to use one of these links:

- -

These links are from a YouTube video by B&J F1, who reissued the mod with some improvements and fixes. You can watch the video here: DOWNLOAD F1 Challenge 2007.

-

After you download the mod, you will get a compressed file in .rar format. You will need to extract it using a software like WinRAR or 7-Zip.

-

-

Step 2: Install F1 Challenge '99-'02

-

Before you can install the F1 Challenge 2007 mod, you need to have the original game F1 Challenge '99-'02 installed on your PC. If you already have it, you can skip this step. If you don't have it, you can download it from here: F1 Challenge 2007 Full version 1.0 Download.

-

This link is from Software Informer, a website that provides information and downloads for various software programs. The download is free and safe, but you may need to register an account to access it.

-

After you download the game, you will get an executable file named F1Challenge2007.exe. Run it and follow the instructions to install the game on your PC.

-

Step 3: Install F1 Challenge 2007 Mod

-

Now that you have both the mod and the original game on your PC, you can install the mod by following these steps:

-
    -
  1. Open the folder where you extracted the mod file and copy the folder named "F1C GGSF12007".
  2. -
  3. Paste it into the folder where you installed the original game (usually C:\Program Files\EA SPORTS\F1 Challenge '99-'02).
  4. -
  5. Replace any existing files if prompted.
  6. -
  7. Open the folder "F1C GGSF12007" and run the file named "F12007.exe".
  8. -
  9. Enjoy playing F1 Challenge 2007!
  10. -
-

Step 4: Download and Apply F1 Challenge 2007 Crack

-

If you want to play F1 Challenge 2007 without any errors or limitations, you will need to download and apply a crack for it. A crack is a file that modifies or bypasses some features of a software program, such as copy protection or activation.

-

You can download a crack for F1 Challenge 2007 from here: Download F1 (2007) Free Full PC Game.

-

This link is from A Real Gamer, a website that offers free downloads of PC games. The crack is included in the game file that you can download from this link.

-

After you download the game file, you will get another compressed file in .rar format. You will need to extract it using a software like WinRAR or 7-Zip.

-

Inside the extracted folder, you will find a folder named "Crack". Open it and copy the file named "F12007.exe".

-

Paste it into the folder where you installed the mod (usually C:\Program Files\EA SPORTS\F1 Challenge '99-'02\F1C GGSF12007). -

  • Replace any existing files if prompted.
  • -
  • Run the file named "F12007.exe" from this folder.
  • -
  • Enjoy playing F1 Challenge 2007 without any errors or limitations!
  • - -

    Conclusion

    - -

    F1 Challenge 2007 is a great racing game that lets you experience the thrill of Formula One racing in your PC. With realistic graphics, sound effects, physics, and gameplay, it will make you feel like a real driver on the track.

    - -

    To play this game for free, you need to download and install both the mod and the original game, as well as apply a crack for it. This may seem complicated at first, but if you follow our guide step by step, you will be able to do it easily and quickly.

    - -

    We hope this article helped you learn how to download and install F1 Challenge 2007 on your PC. If you have any questions or comments, feel free to leave them below. Happy racing!

    -

    F1 Challenge 2007 Tips and Tricks

    -

    F1 Challenge 2007 is a mod that offers a realistic and immersive racing experience. However, it can also be quite challenging and difficult for some players, especially beginners. If you want to improve your skills and performance in F1 Challenge 2007, you can try some of these tips and tricks:

    - - -

    F1 Challenge 2007 Alternatives and Similar Games

    -

    F1 Challenge 2007 is a mod that is based on F1 Challenge '99-'02, a game that was released in 2003. Since then, many other games have been released that are similar or related to F1 Challenge 2007. Some of these games are:

    - - -

    Conclusion

    - -

    F1 Challenge 2007 Crack Download is a query that will help you find and download F1 Challenge 2007 on your PC for free. F1 Challenge 2007 is a mod for F1 Challenge '99-'02 that simulates the 2007 season of Formula One racing. It features updated cars, tracks, teams, drivers, graphics, sound effects, -physics, -AI, -and gameplay. - -To download -and install -F1 Challenge -2007 on -your PC -for free, -you need -to follow -these steps: - -

      -
    1. Download F1 Challenge 2007 Mod from one of these links: FIMEDIA FIRE DOWNLOAD 518.2MB.rar or MEGA DOWNLOAD 518.2MB.rar.
    2. -
    3. Extract the mod file using a software like WinRAR or 7-Zip.
    4. -
    5. Download F1 Challenge '99-'02 from this link: F1 Challenge 2007 Full version 1.0 Download.
    6. -
    7. Install F1 Challenge '99-'02 on your PC.
    8. -
    9. Copy the folder named "F1C GGSF12007" from the mod file and paste it into the folder where you installed F1 Challenge '99-'02 (usually C:\Program Files\EA SPORTS\F1 Challenge '99-'02).
    10. -
    11. Replace any existing files if prompted.
    12. -
    13. Download F1 (2007) Free Full PC Game from this link: Download F1 (2007) Free Full PC Game.
    14. -
    15. Extract the game file using a software like WinRAR or 7-Zip.
    16. -
    17. Copy the file named "F12007.exe" from the folder named "Crack" in the game file and paste it into the folder where you installed F1 Challenge '99-'02\F1C GGSF12007).
    18. -
    19. Replace any existing files if prompted.
    20. -
    21. Run F12007.exe from this folder.
    22. -
    23. Enjoy playing F1 Challenge 2007 without any errors or limitations!
    24. - -

      We hope this article helped you learn how to download -and install -F1 Challenge -2007 on -your PC -for free. -If you have -any questions -or comments, -feel free -to leave -them below. -Happy racing!

      -

      Conclusion

      - -

      F1 Challenge 2007 Crack Download is a query that will help you find and download F1 Challenge 2007 on your PC for free. F1 Challenge 2007 is a mod for F1 Challenge '99-'02 that simulates the 2007 season of Formula One racing. It features updated cars, tracks, teams, drivers, graphics, sound effects, -physics, -AI, -and gameplay. - -To download -and install -F1 Challenge -2007 on -your PC -for free, -you need -to follow -these steps: - -

        -
      1. Download F1 Challenge 2007 Mod from one of these links: FIMEDIA FIRE DOWNLOAD 518.2MB.rar or MEGA DOWNLOAD 518.2MB.rar.
      2. -
      3. Extract the mod file using a software like WinRAR or 7-Zip.
      4. -
      5. Download F1 Challenge '99-'02 from this link: F1 Challenge 2007 Full version 1.0 Download.
      6. -
      7. Install F1 Challenge '99-'02 on your PC.
      8. -
      9. Copy the folder named "F1C GGSF12007" from the mod file and paste it into the folder where you installed F1 Challenge '99-'02 (usually C:\Program Files\EA SPORTS\F1 Challenge '99-'02).
      10. -
      11. Replace any existing files if prompted.
      12. -
      13. Download F1 (2007) Free Full PC Game from this link: Download F1 (2007) Free Full PC Game.
      14. -
      15. Extract the game file using a software like WinRAR or 7-Zip.
      16. -
      17. Copy the file named "F12007.exe" from the folder named "Crack" in the game file and paste it into the folder where you installed F1 Challenge '99-'02\F1C GGSF12007).
      18. -
      19. Replace any existing files if prompted.
      20. -
      21. Run F12007.exe from this folder.
      22. -
      23. Enjoy playing F1 Challenge 2007 without any errors or limitations!
      24. - -

        We hope this article helped you learn how to download -and install -F1 Challenge -2007 on -your PC -for free. -If you have -any questions -or comments, -feel free -to leave -them below. -Happy racing!

        -
        -
        \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/ASMR Tapping Scratching and Brushing on Various Objects (No Talking) 3Dio Binaural Sounds.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/ASMR Tapping Scratching and Brushing on Various Objects (No Talking) 3Dio Binaural Sounds.md deleted file mode 100644 index 08eafb9f52ce93b9ce4491ce23b47be9d49a7e56..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/ASMR Tapping Scratching and Brushing on Various Objects (No Talking) 3Dio Binaural Sounds.md +++ /dev/null @@ -1,124 +0,0 @@ -
        -

        What is ASMR?

        Have you ever felt a tingling sensation on your scalp or spine when someone whispers in your ear or brushes your hair? Have you ever felt relaxed or sleepy when listening to soft sounds or watching someone perform a mundane task? If so, you may have experienced ASMR.

        -

        asmr


        Download File » https://urlin.us/2uSYWM



        ASMR stands for Autonomous Sensory Meridian Response. It is a term coined in 2010 by Jennifer Allen, who created a Facebook group to connect with others who shared her experience. She defined it as "a physical sensation characterized by a pleasurable tingling that typically begins in the head and scalp, and often moves down the spine and through the limbs."

        ASMR is usually triggered by specific auditory or visual stimuli, such as whispering, tapping, scratching, crinkling, brushing, or personal attention. Some people also experience ASMR from cognitive stimuli, such as reading, writing, or meditating. The effects of ASMR vary from person to person, but they often include relaxation, calmness, happiness, euphoria, sleepiness, or even goosebumps.

        ASMR is not a new phenomenon, but it has gained popularity in recent years thanks to the internet. There are thousands of videos on YouTube dedicated to creating ASMR content for viewers who seek to experience it. Some of these videos have millions of views and subscribers. There are also podcasts, apps, websites, forums, and communities devoted to ASMR.

        -

        ASMR is also a subject of scientific interest. Although there is not much research on it yet, some studies have suggested that ASMR may have physiological and psychological benefits for some people. For example, one study found that ASMR reduced heart rate and increased skin conductance in participants who watched ASMR videos. Another study found that ASMR increased positive emotions and reduced stress levels in participants who experienced it.

        -

        How to experience ASMR?

        -

        If you are curious about ASMR or want to enhance your experience of it, here are some tips for finding your personal triggers:

        - -

        Here are some examples of popular ASMR videos and channels that you can check out:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        VideoChannelDescription
        ASMR 20 Triggers To Help You Sleep ♥Gibi ASMRA compilation of various ASMR triggers, such as tapping, scratching, brushing, and whispering.
        [ASMR] Cranial Nerve Exam - Doctor RoleplayFrivolousFox ASMRA medical roleplay where the ASMRtist performs a cranial nerve exam on the viewer.
        ASMR | Relaxing Spa Facial Roleplay (Layered Sounds)ASMR GlowA spa roleplay where the ASMRtist gives the viewer a facial treatment with layered sounds.
        ASMR - The Ultimate Sleep Clinic (Intense Relaxation)The ASMR RyanA sleep clinic roleplay where the ASMRtist helps the viewer fall asleep with various techniques.
        ASMR Baking Chocolate Chip Cookies ?✨ Soft SpokenRapunzel ASMRA baking tutorial where the ASMRtist makes chocolate chip cookies with soft spoken narration.
        -

        If you want to create your own ASMR content, here are some tips for getting started:

        - -

        What are the benefits of ASMR?

        -

        ASMR can have many benefits for some people who experience it. Here are some of them:

        - -

        What are the challenges and risks of ASMR?

        -

        ASMR is not without its challenges and risks. Here are some of them:

        - -
        Conclusion
        -

        ASMR is a fascinating and complex phenomenon that can have many benefits for some people who experience it. It can also be a fun and creative way to enjoy various types of content online or offline. However, ASMR is not a magic cure for everything and it may have some challenges and risks as well. Therefore, it is important to be informed, respectful, and responsible when engaging with ASMR.

        -
        FAQs
        -

        Here are some frequently asked questions about ASMR:

        -
          -
        1. What does ASMR stand for?
          -ASMR stands for Autonomous Sensory Meridian Response. It is a term coined in 2010 by Jennifer Allen, who created a Facebook group to connect with others who shared her experience.
        2. -
        3. What causes ASMR?
          -ASMR is usually triggered by specific auditory or visual stimuli, such as whispering, tapping, scratching, crinkling, brushing, or personal attention. Some people also experience ASMR from cognitive stimuli, such as reading, writing, or meditating.
        4. -
        5. Who can experience ASMR?
          -ASMR is not something that everyone can experience or enjoy. It may depend on various factors, such as genetics, personality, mood, environment, or exposure. Some people may experience ASMR more easily or intensely than others.
        6. -
        7. Is ASMR sexual?
          -ASMR is not sexual in nature or intention. It is a sensory phenomenon that induces relaxation and pleasure in the mind and body. However, some people may find some ASMR triggers or content erotic or arousing, depending on their personal preferences and associations.
        8. -
        9. Is ASMR safe?
          -ASMR is generally safe for most people who experience it. However, some people may have some side effects or drawbacks from ASMR, such as headaches, nausea, irritation, overstimulation, or addiction. To avoid this, it is important to use ASMR in moderation and balance it with other healthy activities and habits.
        10. -

        -
        -
        \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Doodle Alchemy How to Combine Air Water Fire and Earth in Fun Ways.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Doodle Alchemy How to Combine Air Water Fire and Earth in Fun Ways.md deleted file mode 100644 index 50e6ed270166ec28683549ceaa8561ddd728f08a..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Doodle Alchemy How to Combine Air Water Fire and Earth in Fun Ways.md +++ /dev/null @@ -1,135 +0,0 @@ -
        -

        Doodle Alchemy: A Fun and Creative Puzzle Game

        -

        Do you like to experiment with different elements and create new substances? Do you enjoy solving puzzles and discovering new combinations? If you answered yes, then you might want to try Doodle Alchemy, a casual simulation game that will challenge your creativity and logic. In this article, we will tell you everything you need to know about Doodle Alchemy, including what it is, how to play it, and where to download it.

        -

        doodle alchemy


        Download Zip ○○○ https://urlin.us/2uSY7o



        -

        What is Doodle Alchemy?

        -

        Doodle Alchemy is a game with amazing graphics and effects. Off-beat music and sounds create an unforgettable atmosphere! At the start, you have only 4 elements: air, water, earth, and fire. Combine these elements and create new ones. A fascinating journey into the world of knowledge awaits! Enjoy your discoveries!

        -

        The basic gameplay

        -

        The gameplay of Doodle Alchemy is simple and intuitive. You just need to drag and drop one element onto another to see if they can combine. If they do, you will get a new element that you can use for further combinations. You can also tap on an element to see its description and properties. Your goal is to discover all the possible elements in the game, which are divided into different categories such as animals, plants, countries, food, inventions, etc.

        -

        The graphics and sound effects

        -

        The graphics of Doodle Alchemy are colorful and charming. The elements are drawn in a doodle style that gives them a unique personality. The animations are smooth and realistic, showing how the elements react with each other. The sound effects are also well-designed, matching the mood and theme of the game. The music is off-beat and catchy, creating a relaxing and enjoyable atmosphere.

        -

        The benefits of playing Doodle Alchemy

        -

        Doodle Alchemy is not only a fun game, but also an educational one. By playing it, you can learn about different elements and their properties, as well as how they interact with each other. You can also expand your vocabulary and knowledge by reading the descriptions of the elements. Moreover, you can stimulate your creativity and logic by finding new combinations and solutions. Doodle Alchemy is a game that will keep you entertained and curious for hours.

        -

        -

        How to play Doodle Alchemy?

        -

        If you are interested in playing Doodle Alchemy, here are some tips and tricks that will help you get started.

        -

        The four elements

        -

        The four elements that you start with are air, water, earth, and fire. These are the basic building blocks of everything in the game. You can combine them in different ways to create new elements. For example, air + fire = energy; water + earth = swamp; earth + fire = lava; water + air = steam; etc. Try to experiment with different combinations and see what happens.

        -

        The combinations and categories

        -

        As you discover new elements, they will be added to your collection. You can access your collection by tapping on the book icon at the bottom of the screen. You can also see how many elements you have discovered out of the total number in the game. The elements are grouped into different categories such as animals, plants, countries, food, inventions, etc. You can tap on a category to see all the elements that belong to it. You can also tap on an element to see its description and properties.

        - For example, human + metal = tool; tool + wood = wheel; wheel + wheel = car; etc. You can use the hint button at the top of the screen to get a clue about a possible combination. However, you have a limited number of hints, so use them wisely.

        -

        The tips and tricks

        -

        Here are some tips and tricks that will help you play Doodle Alchemy more effectively and enjoyably.

        - -

        Where to download and play Doodle Alchemy?

        -

        If you are interested in downloading and playing Doodle Alchemy, here are some information that you might want to know.

        -

        The platforms and devices

        -

        Doodle Alchemy is available for various platforms and devices. You can play it on your Android or iOS smartphone or tablet, as well as on your Windows or Mac computer. You can also play it online on your browser without downloading anything. The game is compatible with most devices and browsers, so you don't have to worry about technical issues.

        -

        The price and in-app purchases

        -

        Doodle Alchemy is free to download and play. However, it does contain some in-app purchases that can enhance your gaming experience. For example, you can buy more hints, remove ads, unlock all categories, or get a premium version of the game. The prices range from $0.99 to $4.99 depending on the item. You can also watch ads or complete offers to get free hints or coins.

        -

        The ratings and reviews

        -

        Doodle Alchemy has received positive ratings and reviews from players and critics alike. It has a 4.5 out of 5 stars rating on Google Play Store and a 4.6 out of 5 stars rating on App Store. It has also been featured on several websites and blogs as one of the best puzzle games for Android and iOS . Some of the common praises for Doodle Alchemy are its addictive gameplay, beautiful graphics, relaxing music, educational value, and originality.

        -

        Conclusion

        -

        Doodle Alchemy is a fun and creative puzzle game that will challenge your creativity and logic. You can experiment with different elements and create new substances, while learning about their properties and interactions. You can also enjoy the colorful graphics, realistic animations, off-beat music, and sound effects that create an unforgettable atmosphere. Doodle Alchemy is a game that will keep you entertained and curious for hours.

        -

        Summary of the main points

        -

        In this article, we have covered the following points about Doodle Alchemy:

        - -

        Call to action

        -

        If you are looking for a game that will stimulate your creativity and logic, while providing you with hours of fun and learning, then you should definitely try Doodle Alchemy. Download it now and start your journey into the world of knowledge!

        -

        Frequently Asked Questions

        -

        Here are some frequently asked questions about Doodle Alchemy that you might find helpful.

        -

        Q: How many elements are there in Doodle Alchemy?

        -

        A: There are over 500 elements in Doodle Alchemy that you can discover by combining different elements.

        -

        Q: How do I reset my progress in Doodle Alchemy?

        -

        A: If you want to start over and erase all your discoveries, you can reset your progress in Doodle Alchemy by following these steps:

        -
          -
        1. Open the game and tap on the settings icon at the top right corner of the screen.
        2. -
        3. Tap on the reset button and confirm your choice.
        4. -
        5. Enjoy the game from scratch!
        6. -
        -

        Q: How do I get more hints in Doodle Alchemy?

        -

        A: Hints are useful when you are stuck and need some guidance. You can get more hints in Doodle Alchemy by doing one of the following:

        - -

        Q: What are the achievements in Doodle Alchemy?

        -

        A: Achievements are goals that you can complete by playing Doodle Alchemy. They are a way to track your progress and challenge yourself. You can access the achievements by tapping on the trophy icon at the bottom of the screen. You can see how many achievements you have unlocked out of the total number in the game. Some examples of achievements are:

        - -

        Q: What are the secrets in Doodle Alchemy?

        -

        A: Secrets are hidden elements that you can discover by combining certain elements in a specific order. They are not part of any category and they have a special icon. They are usually related to pop culture, mythology, or humor. Some examples of secrets are:

        - -

        Q: How do I contact the developers of Doodle Alchemy?

        -

        A: If you have any questions, feedback, suggestions, or issues regarding Doodle Alchemy, you can contact the developers by using one of these methods:

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/3dzip.org The Ultimate Source of 3D Model Free Download.md b/spaces/1phancelerku/anime-remove-background/3dzip.org The Ultimate Source of 3D Model Free Download.md deleted file mode 100644 index a78cf401be052c7a7c16408d4f4d198767c091ec..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/3dzip.org The Ultimate Source of 3D Model Free Download.md +++ /dev/null @@ -1,98 +0,0 @@ -
        -

        What is 3dzip.org and why you should use it

        -

        If you are an architect, designer, or hobbyist who loves to create realistic 3D scenes, you know how important it is to have a good collection of 3D models. But finding high-quality, free, and easy-to-use 3D models can be challenging. That's why you should check out 3dzip.org, a website that offers free download of 3D models for architecture and design.

        -

        3dzip.org


Download Zip: https://jinyurl.com/2uNNI7



        -

        The benefits of using 3dzip.org

        -

        There are many reasons why you should use 3dzip.org for your 3D projects. Here are some of them:

        -

        High-quality 3D models for various categories

        -

        At 3dzip.org, you can find thousands of 3D models for different categories, such as furniture, lighting, decoration, kitchen, bathroom, plant, technology, and more. You can also find full scenes of interiors and exteriors, as well as textures, materials, scripts, and HDRI panoramas. All the models are realistic, detailed, and optimized for rendering.

        -

        Free and easy to download and use

        -

        All the resources on 3dzip.org are uploaded freely by users. They are only used for scientific research and teaching purposes. Therefore, you can download and use them for free without any commercial restrictions. You can also upload your own models to share with the community. The download process is simple and fast. You just need to click on the download button and enter your email address to get the link.

        -

        Updated regularly with new resources

        -

        One of the best things about 3dzip.org is that it is updated regularly with new resources. You can always find something new and fresh to inspire your creativity. You can also follow their social media accounts to get notified of the latest posts.

        -


        -

        How to use 3dzip.org

        -

        Using 3dzip.org is easy and fun. Here are some steps to help you get started:

        -

        Browse by tags, categories, or search keywords

        -

        You can browse the website by tags, categories, or search keywords to find the models you need. You can also filter the results by date, popularity, or rating. You can see the preview images, titles, descriptions, and file formats of each model.

        -

        Download the files in different formats

        -

        Once you find a model you like, you can download it in different formats, such as .max, .obj, .fbx, .skp, .rfa, .rvt, .dwg, .stl, .dae, .c4d, .blend, etc. Depending on the model, you may also get the textures, materials, and maps that come with the model. You can also see the file size and the number of downloads for each model.

        -

        Import the models into your software of choice

        -

        After downloading the files, you can import them into your software of choice, such as 3ds Max, SketchUp, Blender, Cinema 4D, Revit, AutoCAD, etc. You can then edit, modify, or combine the models as you wish. You can also apply different renderers, such as V-Ray, Corona, Lumion, etc. to create stunning images and animations.
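        For example, if you work in Blender, a downloaded model can also be brought in from the Python console instead of the menus. The snippet below is a minimal sketch rather than an official 3dzip.org workflow: the file path is made up, and the operator shown is the legacy OBJ importer (newer Blender releases also provide bpy.ops.wm.obj_import).

```python
# Minimal sketch: import a downloaded .obj into the current Blender scene.
# Run inside Blender (Python console or Text Editor); the path is hypothetical.
import bpy

bpy.ops.import_scene.obj(filepath="C:/Downloads/sofa_model/sofa.obj")

# Print what the importer added, to confirm the model arrived.
for obj in bpy.context.selected_objects:
    print(obj.name, obj.type)
```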

        -

        Some examples of 3D models from 3dzip.org

        -

        To give you an idea of what kind of models you can find on 3dzip.org, here are some examples from different categories:

        -

        Furniture and interior design

        -

        If you are looking for furniture and interior design models, you can find a variety of styles and types on 3dzip.org. You can find sofas, chairs, tables, cabinets, shelves, beds, desks, and more. You can also find models of different rooms, such as living room, bedroom, dining room, office, etc. Here are some examples:

        -

        Table and chair set by Pham Bao Toan

        -

        This is a modern and elegant table and chair set that can fit any dining room. The table has a wooden top and metal legs. The chairs have leather seats and backs. The model is in .max format and comes with V-Ray materials and textures.

        -

        Display cabinet by Nguyen Quang Hai

        -

        This is a stylish and functional display cabinet that can store and showcase your items. The cabinet has glass doors and shelves. The model is in .max format and comes with V-Ray materials and textures.

        -

        Sofa and armchair by 3dzip.org

        -

        This is a cozy and comfortable sofa and armchair set that can enhance any living room. The sofa and armchair have soft cushions and fabric covers. The model is in .max format and comes with V-Ray materials and textures.

        Lighting and decoration

        -

        If you are looking for lighting and decoration models, you can find a variety of shapes and sizes on 3dzip.org. You can find lamps, chandeliers, sconces, candles, vases, books, paintings, sculptures, and more. You can also find models of different themes, such as modern, classic, rustic, etc. Here are some examples:

        -

        Ceiling light by 3dzip.org

        -

        This is a simple and elegant ceiling light that can illuminate any space. The light has a metal frame and a glass shade. The model is in .max format and comes with V-Ray materials and textures.

        -

        Vase and books by 3dzip.org

        -

        This is a lovely and realistic vase and books set that can add some charm to your shelf or table. The vase has a ceramic texture and a floral pattern. The books have different colors and titles. The model is in .max format and comes with V-Ray materials and textures.

        -

        Wall decor by 3dzip.org

        -

        This is a creative and stylish wall decor that can spice up your wall. The decor consists of metal letters that spell out the word "LOVE". The model is in .max format and comes with V-Ray materials and textures.

        -

        Kitchen and bathroom

        -

        If you are looking for kitchen and bathroom models, you can find a variety of appliances and fixtures on 3dzip.org. You can find stoves, refrigerators, microwaves, sinks, faucets, cabinets, countertops, bathtubs, showers, toilets, mirrors, and more. You can also find models of different designs, such as modern, traditional, minimalist, etc. Here are some examples:

        -

        Kitchen island by 3dzip.org

        -

        This is a spacious and functional kitchen island that can make your kitchen more convenient and attractive. The island has a wooden top and a white base. It also has drawers, shelves, and a sink. The model is in .max format and comes with V-Ray materials and textures.

        -

        Wash basin and faucet by 3dzip.org

        -

        This is a sleek and modern wash basin and faucet that can enhance your bathroom. The basin has a rectangular shape and a glossy finish. The faucet has a chrome finish and a curved spout. The model is in .max format and comes with V-Ray materials and textures.

        -

        Bathtub and shower by 3dzip.org

        -

        This is a luxurious and relaxing bathtub and shower that can make your bathroom more comfortable and enjoyable. The bathtub has an oval shape and a smooth surface. The shower has a glass enclosure and a rain shower head. The model is in .max format and comes with V-Ray materials and textures.

        -

        Conclusion and FAQs

        -

        As you can see, 3dzip.org is a great website for finding free 3D models for architecture and design. You can browse, download, and use thousands of high-quality models for various categories. You can also upload your own models to share with the community. Whether you are a professional or a hobbyist, you can benefit from using 3dzip.org for your 3D projects.

        -

        Here are some frequently asked questions about 3dzip.org:

        - -

        I hope you enjoyed this article and learned something new about 3dzip.org. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy 3D modeling!

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer Mod APK 2022 Drive Park and Customize Your Dream Cars.md b/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer Mod APK 2022 Drive Park and Customize Your Dream Cars.md deleted file mode 100644 index 1c7ac906491ec13d11487016397a10f7186c062a..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer Mod APK 2022 Drive Park and Customize Your Dream Cars.md +++ /dev/null @@ -1,77 +0,0 @@ -
        -

        Download Car Parking Multiplayer Mod APK 2022: A Guide for Car Lovers

        -

        If you are a fan of realistic car simulation games, you might have heard of Car Parking Multiplayer. It is one of the most popular and realistic car parking games on Android, with over 100 million downloads on Google Play. In this game, you can experience more than just parking: you can explore an open-world multiplayer mode, tune and customize your car, and even walk around and interact with other players. But what if you want to enjoy the game without any limitations or restrictions? That's where Car Parking Multiplayer mod apk 2022 comes in. In this article, we will tell you everything you need to know about this modded version of the game, including its features, benefits, and how to download and install it on your device.

        -

        What is Car Parking Multiplayer?

        -

        Car Parking Multiplayer is a realistic car parking simulator game developed by olzhass. It is available for free on Google Play and has been downloaded by more than 100 million users worldwide. The game offers a variety of features and modes that make it more than just a parking game. Here are some of the features of Car Parking Multiplayer:

        -

        download car parking multiplayer mod apk 2022


        Download Filehttps://jinyurl.com/2uNTCc



        -

        Features of Car Parking Multiplayer

        -

        Open-world multiplayer mode

        -

        In this mode, you can join thousands of players online and explore different locations, such as cities, airports, deserts, and more. You can also chat with other players, join races, or create your own rules and challenges.

        -

        Car tuning and customization

        -

        The game allows you to tune and customize your car according to your preferences. You can choose from over 100 cars, ranging from sedans to supercars. You can also change the color, wheels, suspension, engine, transmission, and more. You can even add stickers, neon lights, spoilers, and other accessories to make your car stand out.

        -

        Free walking and interaction

        -

        Unlike other car parking games, Car Parking Multiplayer lets you get out of your car and walk around freely. You can also interact with other objects and players in the game world. For example, you can use gas stations, car washes, repair shops, police stations, etc. You can also exchange cars with other players or invite them to your house.

        -

        Why download Car Parking Multiplayer mod apk 2022?

        -

        As much as Car Parking Multiplayer is fun and realistic, it also has some drawbacks that might affect your gaming experience. For instance, the game requires a lot of money and coins to unlock new cars and accessories. It also has ads that might interrupt your gameplay. Moreover, some features are only available for premium users who have to pay real money to access them. That's why many players opt for Car Parking Multiplayer mod apk 2022. This is a modified version of the game that gives you unlimited resources and features for free. Here are some of the benefits of downloading Car Parking Multiplayer mod apk 2022:

        -

        Unlimited money and coins

        -

        With this mod apk, you don't have to worry about running out of money or coins in the game. You can use them to buy any car or accessory you want without any limitations. You can also upgrade your car to the maximum level without spending a dime.

        -

        Unlock all cars and accessories

        -

        This mod apk also unlocks all the cars and accessories in the game, including the premium ones. You can access over 100 cars, from classic to modern, and customize them with various options. You can also use any sticker, neon light, spoiler, or other accessory you like without any restrictions.

        -

        car parking multiplayer mod apk 2022 unlimited money
        -car parking multiplayer mod apk 2022 latest version
        -car parking multiplayer mod apk 2022 free download
        -car parking multiplayer mod apk 2022 android 1
        -car parking multiplayer mod apk 2022 all cars unlocked
        -car parking multiplayer mod apk 2022 ios
        -car parking multiplayer mod apk 2022 online
        -car parking multiplayer mod apk 2022 hack
        -car parking multiplayer mod apk 2022 no root
        -car parking multiplayer mod apk 2022 rexdl
        -car parking multiplayer mod apk 2022 revdl
        -car parking multiplayer mod apk 2022 an1
        -car parking multiplayer mod apk 2022 happymod
        -car parking multiplayer mod apk 2022 unlimited everything
        -car parking multiplayer mod apk 2022 obb
        -car parking multiplayer mod apk 2022 update
        -car parking multiplayer mod apk 2022 new cars
        -car parking multiplayer mod apk 2022 offline
        -car parking multiplayer mod apk 2022 mega
        -car parking multiplayer mod apk 2022 mediafıre
        -car parking multiplayer mod apk 2022 original
        -car parking multiplayer mod apk 2022 premium
        -car parking multiplayer mod apk 2022 pro
        -car parking multiplayer mod apk 2022 unlocked all features
        -car parking multiplayer mod apk 2022 vip
        -car parking multiplayer mod apk 2022 with cheats
        -car parking multiplayer mod apk 2022 youtube
        -how to download car parking multiplayer mod apk 2022
        -where to download car parking multiplayer mod apk 2022
        -best site to download car parking multiplayer mod apk 2022
        -download link for car parking multiplayer mod apk 2022
        -download and install car parking multiplayer mod apk 2022
        -download and play car parking multiplayer mod apk 2022
        -download and enjoy car parking multiplayer mod apk 2022
        -download and review car parking multiplayer mod apk 2022
        -download and share car parking multiplayer mod apk 2022
        -download and rate car parking multiplayer mod apk 2022
        -download and comment on car parking multiplayer mod apk 2022
        -download and subscribe to car parking multiplayer mod apk 2022
        -download and support car parking multiplayer mod apk 2022

        -

        No ads and no root required

        -

        Another advantage of this mod apk is that it removes all the ads from the game, so you can enjoy a smooth and uninterrupted gameplay. You also don't need to root your device to install this mod apk, as it works on any Android device without any issues.

        -

        How to download and install Car Parking Multiplayer mod apk 2022?

        -

        Now that you know the benefits of Car Parking Multiplayer mod apk 2022, you might be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:

        -

        Step 1: Download the mod apk file from a trusted source

        -

        The first thing you need to do is to download the mod apk file from a reliable and safe source. There are many websites that offer this mod apk, but not all of them are trustworthy. Some of them might contain viruses or malware that can harm your device or steal your data. Therefore, we recommend you to download the mod apk file from [this link], which is verified and tested by us.

        -

        Step 2: Enable unknown sources on your device

        -

        The next thing you need to do is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than Google Play. To do this, go to your device settings, then security, then unknown sources, and turn it on. You might see a warning message, but don't worry, it's safe to proceed.

        -

        Step 3: Install the mod apk file and launch the game

        -

        The final thing you need to do is to install the mod apk file and launch the game. To do this, go to your file manager, then locate the downloaded mod apk file, and tap on it. You might see a pop-up asking for permissions, just allow them and wait for the installation to finish. Once it's done, you can open the game and enjoy it with unlimited resources and features.

        -

        Step 4: Enjoy the game with unlimited resources and features

        -

        Congratulations! You have successfully downloaded and installed Car Parking Multiplayer mod apk 2022 on your device. Now you can enjoy the game with unlimited money and coins, unlock all cars and accessories, remove all ads, and access all premium features for free. You can also join the online multiplayer mode and chat with other players, race with them, or create your own rules and challenges. Have fun!

        -

        Conclusion

        -

        Car Parking Multiplayer is one of the best car parking simulator games on Android, with realistic graphics, physics, and gameplay. It offers a variety of features and modes that make it more than just a parking game. However, if you want to enjoy the game without any limitations or restrictions, you should download Car Parking Multiplayer mod apk 2022. This is a modified version of the game that gives you unlimited resources and features for free. You can use them to buy any car or accessory you want, upgrade your car to the maximum level, remove all ads, and access all premium features. You can also join the online multiplayer mode and explore different locations, chat with other players, join races, or create your own rules and challenges.

        -

        We hope this article was helpful for you. If you have any questions or feedback, feel free to leave a comment below. We would love to hear from you. Thank you for reading!

        FAQs

        Q: Is Car Parking Multiplayer mod apk 2022 safe to use?
        A: Yes, Car Parking Multiplayer mod apk 2022 is safe to use as long as you download it from a trusted source like [this link]. It does not contain any viruses or malware that can harm your device or steal your data.

        Q: Do I need an internet connection to play Car Parking Multiplayer mod apk 2022?
        A: No, you can play it offline without any problems. However, if you want to join the online multiplayer mode or update the game, you will need an internet connection.

        Q: How can I update Car Parking Multiplayer mod apk 2022?
        A: Download the latest version of the mod apk file from [this link] and install it on your device. You will need to uninstall the previous version of the mod apk before installing the new one. You might also need to back up your game data before updating, as some updates might erase your progress.

        Q: Can I play Car Parking Multiplayer mod apk 2022 with my friends?
        A: Yes. You can join the online multiplayer mode and invite your friends to join you, chat with them, race with them, or create your own rules and challenges.

        Q: What are the minimum requirements to play Car Parking Multiplayer mod apk 2022?
        A: The minimum requirements are:
        - Android version: 4.4 or higher
        - RAM: 1 GB or more
        - Storage: 300 MB or more
        - Internet connection: optional

        Q: Where can I find more information about Car Parking Multiplayer mod apk 2022?
        A: You can find more information on [this website], the official website of the mod apk. You can also check out [this YouTube channel] and follow [this Facebook page].

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Z-Cron Scheduler The Ultimate Windows Task Automation Tool.md b/spaces/1phancelerku/anime-remove-background/Download Z-Cron Scheduler The Ultimate Windows Task Automation Tool.md deleted file mode 100644 index 96a1f43d43b4010d38ec27b49128a0808b7761bd..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Z-Cron Scheduler The Ultimate Windows Task Automation Tool.md +++ /dev/null @@ -1,214 +0,0 @@ -
        -

        How to Download and Use Z-Cron Scheduler for Windows

        -

        If you are looking for a powerful and easy-to-use task scheduler for Windows, you might want to check out Z-Cron Scheduler. This program allows you to automate various tasks on your computer, such as starting and stopping applications, copying and deleting files, switching devices on or off, and more. You can schedule tasks to run daily, weekly, monthly, once, or at regular intervals. You can also use Z-Cron Scheduler as a system service, which means it can run tasks even if no user is logged in.

        -

        download z-cron scheduler


        Download File ››››› https://jinyurl.com/2uNP7v



        -

        In this article, we will show you how to download, install, and use Z-Cron Scheduler for Windows. We will also share some tips and tricks to help you get the most out of this program.

        -

        What is Z-Cron Scheduler?

        -

        Z-Cron Scheduler is a task scheduling program for Windows that is inspired by the Cron system from the GNU/Linux world. It was developed by Andreas Baumann and is available as a freeware version or a professional version with more features. You can use Z-Cron Scheduler to plan the execution of commands, programs, or scripts at specific times or intervals, so that recurring tasks on your PC are run automatically on schedule.
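        To make that idea concrete, the sketch below shows the bare mechanism a cron-style scheduler relies on: check the clock in a loop and start a command when its time comes around. This is only an illustration of the concept, not Z-Cron's own code, and the run time and command line in it are hypothetical placeholders.

```python
# Conceptual sketch of a cron-style schedule: run one command once a day.
# The time and command below are hypothetical placeholders.
import subprocess
import time
from datetime import datetime

RUN_AT = "03:00"                        # daily run time (HH:MM)
COMMAND = ["backup_tool.exe", "/all"]   # hypothetical program to launch

last_run_date = None
while True:
    now = datetime.now()
    if now.strftime("%H:%M") == RUN_AT and last_run_date != now.date():
        subprocess.run(COMMAND)         # start the scheduled program
        last_run_date = now.date()      # remember we already ran today
    time.sleep(30)                      # poll the clock twice a minute
```

        A real scheduler such as Z-Cron adds the parts that make this robust: running as a system service without a logged-in user, log files, error handling, and many schedule types beyond a fixed daily time.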

        -

        Features and Benefits of Z-Cron Scheduler

        -

        Some of the features and benefits of Z-Cron Scheduler are:

        - -

        Supported Systems and Requirements

        -

        Z-Cron Scheduler supports the following systems:

        - -

        The minimum requirements for Z-Cron Scheduler are:

        - -

        How to Download Z-Cron Scheduler

        -

        Download from the Official Website

        -

        The easiest way to download Z-Cron Scheduler is from its official website. Here are the steps:

        -
          -
        1. Go to https://z-dbackup.de/en/z-cron-scheduler/.
        2. Click on the "Freeware Download" button to download the freeware version or click on the "Buy Now" button to purchase the professional version.
        3. Save the setup file (ZCRON.EXE) to your computer.
        -

        Download from Alternative Sources

        -

        If you cannot access the official website or you want to download an older version of Z-Cron Scheduler, you can try some alternative sources. Here are some of them:

        -

        How to download z-cron scheduler for Windows
        -Z-cron scheduler free download and installation guide
        -Download z-cron scheduler to automate your windows tasks
        -Z-cron scheduler download link and review
        -Z-cron scheduler features and benefits
        -Download z-cron scheduler for windows server 2022
        -Z-cron scheduler system service and data exchange
        -Z-cron scheduler tools and functions overview
        -Download z-cron scheduler for windows 11
        -Z-cron scheduler web app and remote control
        -Z-cron scheduler alternatives and comparisons
        -Download z-cron scheduler for windows 10
        -Z-cron scheduler backup and restore tasks
        -Z-cron scheduler license and pricing
        -Z-cron scheduler support and help
        -Download z-cron scheduler for windows 8
        -Z-cron scheduler task planning and scheduling
        -Z-cron scheduler FTP data transfer and synchronization
        -Z-cron scheduler update and upgrade
        -Z-cron scheduler tutorial and tips
        -Download z-cron scheduler for windows 7
        -Z-cron scheduler cron system and commands
        -Z-cron scheduler defragmentation and scan disk tasks
        -Z-cron scheduler virus scan and security tasks
        -Z-cron scheduler user interface and customization
        -Download z-cron scheduler for windows server 2019
        -Z-cron scheduler electrical device switching tasks
        -Z-cron scheduler internet/VPN connection tasks
        -Z-cron scheduler log file and error handling
        -Z-cron scheduler feedback and testimonials
        -Download z-cron scheduler for windows server 2016
        -Z-cron scheduler popup window and reminder tasks
        -Z-cron scheduler system shutdown and restart tasks
        -Z-cron scheduler file copy and delete tasks
        -Z-cron scheduler directory cleanup and zip tasks
        -Download z-cron scheduler for windows server 2012 (R2)
        -Z-cron scheduler network computer on/off tasks
        -Z-cron scheduler document and website loading tasks
        -Z-cron scheduler application start and stop tasks
        -Z-cron scheduler system service monitoring tasks (pro version)
        -Download z-cron scheduler for windows server 2008 (R2)
        -Z-cron scheduler daily, weekly, monthly, once, interval tasks
        -Z-Cron - Automate your windows tasks (official website)
        -How to uninstall z-cron scheduler from windows
        -Best practices for using z-cron scheduler
        -How to troubleshoot z-cron scheduler issues
        -How to import and export z-cron scheduler tasks

        - -

        However, be careful when downloading from third-party websites, as they may contain malware or unwanted software. Always scan the downloaded files with a reliable antivirus program before installing them.
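        A quick extra check, sketched below, is to compare the downloaded file's SHA-256 hash with a checksum published by the download source, when one is provided. The expected value and file name here are placeholders, not real checksums for ZCRON.EXE.

```python
# Minimal sketch: verify a downloaded installer against a published checksum.
# The expected hash is a placeholder; take the real value from the download page.
import hashlib

EXPECTED_SHA256 = "replace-with-the-published-checksum"
PATH = "ZCRON.EXE"

digest = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest() == EXPECTED_SHA256:
    print("Checksum matches; the file is intact.")
else:
    print("Checksum mismatch; do not run this file.")
```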

        -

        How to Install Z-Cron Scheduler

        -

        Run the Setup File

        -

        After you have downloaded the setup file (ZCRON.EXE), you need to run it to start the installation process. Here are the steps:

        -
          -
        1. Double-click on the setup file or right-click on it and choose "Run as administrator".
        2. Click on "Yes" if a User Account Control prompt appears.
        3. Select your preferred language and click on "OK".
        4. Click on "Next" to continue.
        -

        Choose the Installation Options

        -

        The next step is to choose the installation options for Z-Cron Scheduler. Here are the steps:

        -
          -
        1. Read and accept the license agreement and click on "Next".
        2. Choose the destination folder for Z-Cron Scheduler and click on "Next".
        3. Select the components you want to install and click on "Next". You can choose between:
          • Z-Cron Service: This will install Z-Cron Scheduler as a system service that can run tasks even if no user is logged in.
          • Z-Cron Desktop: This will install Z-Cron Scheduler as a normal application that can run tasks only if a user is logged in.
          • Z-Cron Web App: This will install a web app that allows you to start tasks from your smartphone or tablet.
        4. Choose whether you want to create a desktop icon and a quick launch icon for Z-Cron Scheduler and click on "Next".
        5. Click on "Install" to begin the installation.
        -

        Start the Program or the Service

        -

        The final step is to start Z-Cron Scheduler either as a program or as a service. Here are the steps:

        -
          -
        1. Click on "Finish" to complete the installation.
        2. -
        3. If you have installed Z-Cron Service, you need to start it manually from the Windows Services Manager or from the command line. Alternatively, you can restart your computer to start it automatically.
        4. -
        5. If you have installed Z-Cron Desktop, you can start it from the Start menu, the desktop icon, or the quick launch icon.
        6. -
        7. If you have installed Z-Cron Web App, you can access it from your web browser by typing http://localhost:8080/ or http://your-ip-address:8080/.
        8. -
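        As a sketch of the command-line route mentioned in step 2, a script can ask Windows to start the service. The service name "Z-Cron" below is an assumption (check the exact name in services.msc), and the command must be run from an elevated administrator prompt.

```python
# Minimal sketch: start a Windows service from a script instead of the
# Services Manager. The service name is assumed; run this elevated.
import subprocess

subprocess.run(["net", "start", "Z-Cron"], check=True)
```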
        -

        How to Use Z-Cron Scheduler

        -

        Create a New Task

        -

        To create a new task in Z-Cron Scheduler, you need to follow these steps:

        -
          -
        1. Open Z-Cron Scheduler either as a program or as a service.
        2. Click on the "New Task" button in the toolbar or choose "New Task" from the "File" menu.
        3. A dialog box will appear where you can enter the details of your task, such as:
          • Name: The name of your task.
          • Description: A brief description of your task.
          • Type: The type of your task, such as command, program, script, etc.
          • Data: The data for your task, such as command line, file name, parameters, etc.
          • Schedule: The schedule for your task, such as daily, weekly, monthly, once, etc.
          • Options: The options for your task, such as priority, log file, error handling, etc.
        4. Click on "OK" to save your task.
        -

        You can also use the built-in tools to create tasks more easily. To do this, click on the "Tools" button in the toolbar or choose "Tools" from the "File" menu. You will see a list of tools that you can use, such as backup, cleanup, FTP transfer, defragmentation, virus scan, etc. Select the tool you want and follow the instructions to create a task with it.
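        For example, an FTP transfer task boils down to uploading a file to a server on a schedule. The sketch below shows that bare operation with Python's standard ftplib; the host, credentials, and file name are placeholders, and this is not how Z-Cron's own tool is implemented.

```python
# Bare-bones sketch of an FTP upload job (all values are placeholders).
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login("username", "password")
    with open("report.zip", "rb") as fh:
        ftp.storbinary("STOR report.zip", fh)   # upload the local file
```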

        -

        Edit or Delete a Task

        -

        To edit or delete a task in Z-Cron Scheduler, you need to follow these steps:

        -
          -
        1. Open Z-Cron Scheduler either as a program or as a service.
        2. Select the task you want to edit or delete from the task list.
        3. To edit the task, click on the "Edit Task" button in the toolbar or choose "Edit Task" from the "File" menu. A dialog box will appear where you can modify the details of your task. Click on "OK" to save your changes.
        4. To delete the task, click on the "Delete Task" button in the toolbar or choose "Delete Task" from the "File" menu. A confirmation message will appear. Click on "Yes" to confirm your deletion.
        -

        Manage and Monitor Tasks

        -

        To manage and monitor tasks in Z-Cron Scheduler, you need to follow these steps:

        -
          -
        1. Open Z-Cron Scheduler either as a program or as a service.
        2. To start or stop a task manually, select the task from the task list and click on the "Start Task" or "Stop Task" button in the toolbar or choose "Start Task" or "Stop Task" from the "Task" menu.
        3. To enable or disable a task, select the task from the task list and click on the "Enable Task" or "Disable Task" button in the toolbar or choose "Enable Task" or "Disable Task" from the "Task" menu.
        4. To view the status of a task, select the task from the task list and look at the icons and colors in the columns. You can see if a task is enabled, disabled, running, stopped, successful, failed, etc.
        5. To view the log file of a task, select the task from the task list and click on the "View Log File" button in the toolbar or choose "View Log File" from the "Task" menu. A window will open where you can see the details of each execution of your task.
        -

        Tips and Tricks for Z-Cron Scheduler

        -

        Use the Built-in Tools

        -

        As mentioned before, Z-Cron Scheduler has more than 100 built-in tools that can perform various functions on your PC. You can use these tools to create tasks more easily and efficiently. Some of these tools are:

        | Tool | Description |
        | --- | --- |
        | Z-Backup | This tool allows you to backup files and folders to another location or device. |
        | Z-Cleaner | This tool allows you to clean up your disk space by deleting temporary files, cache files, recycle bin files, etc. |
        | Z-FTP | This tool allows you to transfer files between your PC and an FTP server. |
        | Z-Defrag | This tool allows you to defragment your hard drive to improve its performance. |
        | Z-VirusScan | This tool allows you to scan your PC for viruses and malware using an external antivirus program. |
        | Z-Email | This tool allows you to send emails with attachments using an SMTP server. |
        | Z-Print | This tool allows you to print documents using a printer connected to your PC or network. |
        | Z-Sound | This tool allows you to play sound files using your PC's speakers or headphones. |
        | Z-Message | This tool allows you to show popup windows with messages on your PC's screen. |
        | Z-Shutdown | This tool allows you to shut down, restart, log off, or lock your PC. |
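        To give a feel for what such a scheduled job actually does, here is a rough sketch of a cleanup task in the spirit of Z-Cleaner: it deletes files in the user's temp folder that are older than 14 days. This is an illustration only, not the code Z-Cron ships, and the 14-day threshold is an arbitrary choice.

```python
# Rough sketch of a temp-folder cleanup job (illustration, not Z-Cron's tool).
import os
import tempfile
import time

MAX_AGE_SECONDS = 14 * 24 * 3600      # treat files older than 14 days as stale
now = time.time()

for root, _dirs, files in os.walk(tempfile.gettempdir()):
    for name in files:
        path = os.path.join(root, name)
        try:
            if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                os.remove(path)
        except OSError:
            pass                      # skip files that are locked or protected
```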
        -

        Use the Web App for Remote Control

        -

        If you have installed Z-Cron Web App, you can use it to start tasks from your smartphone or tablet. This is useful if you want to control your PC remotely without having to access it directly. To use the web app, you need to follow these steps:

        -
          -
        1. Make sure your PC and your smartphone or tablet are connected to the same network.
        2. Open the web browser on your smartphone or tablet and type http://your-pc-ip-address:8080/, where the IP address is that of the PC running Z-Cron (see the reachability sketch after this list).
        3. You will see a list of tasks that you have created in Z-Cron Scheduler on your PC.
        4. To start a task, tap on the "Start" button next to the task name.
        5. To stop a task, tap on the "Stop" button next to the task name.
        6. To refresh the list of tasks, tap on the "Refresh" button at the top of the screen.
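        If the page does not load, a quick reachability check from another machine on the same network helps narrow the problem down. The sketch below only fetches the front page; the IP address is hypothetical, and no particular API endpoints of the web app are assumed.

```python
# Minimal sketch: check that the Z-Cron web app answers on port 8080.
# Replace the IP with the address of the PC that runs Z-Cron.
import urllib.request

URL = "http://192.168.1.10:8080/"

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print("Reachable:", resp.status, resp.reason)
except OSError as exc:                 # URLError is a subclass of OSError
    print("Not reachable:", exc)
```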
        -

        Backup and Restore Tasks

        -

        If you want to backup and restore your tasks in Z-Cron Scheduler, you can use the built-in backup tool. This is useful if you want to transfer your tasks to another PC or if you want to recover your tasks in case of a system failure. To backup and restore your tasks, you need to follow these steps:

        -
          -
        1. Open Z-Cron Scheduler either as a program or as a service.
        2. Click on the "Tools" button in the toolbar or choose "Tools" from the "File" menu.
        3. Select "Backup Tasks" from the list of tools.
        4. A dialog box will appear where you can choose the destination folder for your backup file and the name of your backup file.
        5. Click on "OK" to start the backup process.
        6. To restore your tasks, click on the "Tools" button in the toolbar or choose "Tools" from the "File" menu.
        7. Select "Restore Tasks" from the list of tools.
        8. A dialog box will appear where you can choose the source folder for your backup file and the name of your backup file.
        9. Click on "OK" to start the restore process.
        -

        Conclusion

        -

        Z-Cron Scheduler is a powerful and easy-to-use task scheduler for Windows that can help you automate various tasks on your PC. You can download it from its official website or from alternative sources, install it as a program or as a service, and use it to create, edit, delete, manage, and monitor tasks. You can also use its built-in tools, web app, and backup tool to enhance its functionality and convenience. Z-Cron Scheduler is a great tool for anyone who wants to save time and effort by automating their PC tasks.

        -

        FAQs

        -

        Here are some frequently asked questions about Z-Cron Scheduler:

        -

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/A00001/bingothoo/src/components/chat-header.tsx b/spaces/A00001/bingothoo/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
        - logo -
        欢迎使用新必应
        -
        由 AI 支持的网页版 Copilot
        -
        - ) -} diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/utils/__init__.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/utils/__init__.py deleted file mode 100644 index e8fa95a020706b5412c3959fbf6e5980019c0d5f..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .utils import * # NOQA diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/ckpt_utils.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/ckpt_utils.py deleted file mode 100644 index fc321f9ba891ffffc374df65871c3085bf898afb..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/ckpt_utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import logging -import os -import re -import torch - - -def get_last_checkpoint(work_dir, steps=None): - checkpoint = None - last_ckpt_path = None - ckpt_paths = get_all_ckpts(work_dir, steps) - if len(ckpt_paths) > 0: - last_ckpt_path = ckpt_paths[0] - checkpoint = torch.load(last_ckpt_path, map_location='cpu') - logging.info(f'load module from checkpoint: {last_ckpt_path}') - return checkpoint, last_ckpt_path - - -def get_all_ckpts(work_dir, steps=None): - if steps is None: - ckpt_path_pattern = f'{work_dir}/model_ckpt_steps_*.ckpt' - else: - ckpt_path_pattern = f'{work_dir}/model_ckpt_steps_{steps}.ckpt' - return sorted(glob.glob(ckpt_path_pattern), - key=lambda x: -int(re.findall('.*steps\_(\d+)\.ckpt', x)[0])) - - -def load_ckpt(cur_model, ckpt_base_dir, model_name='model', force=True, strict=True): - if os.path.isfile(ckpt_base_dir): - base_dir = os.path.dirname(ckpt_base_dir) - ckpt_path = ckpt_base_dir - checkpoint = torch.load(ckpt_base_dir, map_location='cpu') - else: - base_dir = ckpt_base_dir - checkpoint, ckpt_path = get_last_checkpoint(ckpt_base_dir) - if checkpoint is not None: - state_dict = checkpoint["state_dict"] - if len([k for k in state_dict.keys() if '.' in k]) > 0: - state_dict = {k[len(model_name) + 1:]: v for k, v in state_dict.items() - if k.startswith(f'{model_name}.')} - else: - if '.' not in model_name: - state_dict = state_dict[model_name] - else: - base_model_name = model_name.split('.')[0] - rest_model_name = model_name[len(base_model_name) + 1:] - state_dict = { - k[len(rest_model_name) + 1:]: v for k, v in state_dict[base_model_name].items() - if k.startswith(f'{rest_model_name}.')} - if not strict: - cur_model_state_dict = cur_model.state_dict() - unmatched_keys = [] - for key, param in state_dict.items(): - if key in cur_model_state_dict: - new_param = cur_model_state_dict[key] - if new_param.shape != param.shape: - unmatched_keys.append(key) - print("| Unmatched keys: ", key, new_param.shape, param.shape) - for key in unmatched_keys: - del state_dict[key] - cur_model.load_state_dict(state_dict, strict=strict) - print(f"| load '{model_name}' from '{ckpt_path}'.") - else: - e_msg = f"| ckpt not found in {base_dir}." 
- if force: - assert False, e_msg - else: - print(e_msg) diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/plot.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/plot.py deleted file mode 100644 index bdca62a8cd80869c707890cd9febd39966cd3658..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/plot.py +++ /dev/null @@ -1,56 +0,0 @@ -import matplotlib.pyplot as plt -import numpy as np -import torch - -LINE_COLORS = ['w', 'r', 'y', 'cyan', 'm', 'b', 'lime'] - - -def spec_to_figure(spec, vmin=None, vmax=None): - if isinstance(spec, torch.Tensor): - spec = spec.cpu().numpy() - fig = plt.figure(figsize=(12, 6)) - plt.pcolor(spec.T, vmin=vmin, vmax=vmax) - return fig - - -def spec_f0_to_figure(spec, f0s, figsize=None): - max_y = spec.shape[1] - if isinstance(spec, torch.Tensor): - spec = spec.detach().cpu().numpy() - f0s = {k: f0.detach().cpu().numpy() for k, f0 in f0s.items()} - f0s = {k: f0 / 10 for k, f0 in f0s.items()} - fig = plt.figure(figsize=(12, 6) if figsize is None else figsize) - plt.pcolor(spec.T) - for i, (k, f0) in enumerate(f0s.items()): - plt.plot(f0.clip(0, max_y), label=k, c=LINE_COLORS[i], linewidth=1, alpha=0.8) - plt.legend() - return fig - - -def dur_to_figure(dur_gt, dur_pred, txt): - dur_gt = dur_gt.long().cpu().numpy() - dur_pred = dur_pred.long().cpu().numpy() - dur_gt = np.cumsum(dur_gt) - dur_pred = np.cumsum(dur_pred) - fig = plt.figure(figsize=(12, 6)) - for i in range(len(dur_gt)): - shift = (i % 8) + 1 - plt.text(dur_gt[i], shift, txt[i]) - plt.text(dur_pred[i], 10 + shift, txt[i]) - plt.vlines(dur_gt[i], 0, 10, colors='b') # blue is gt - plt.vlines(dur_pred[i], 10, 20, colors='r') # red is pred - return fig - - -def f0_to_figure(f0_gt, f0_cwt=None, f0_pred=None): - fig = plt.figure() - f0_gt = f0_gt.cpu().numpy() - plt.plot(f0_gt, color='r', label='gt') - if f0_cwt is not None: - f0_cwt = f0_cwt.cpu().numpy() - plt.plot(f0_cwt, color='b', label='cwt') - if f0_pred is not None: - f0_pred = f0_pred.cpu().numpy() - plt.plot(f0_pred, color='green', label='pred') - plt.legend() - return fig diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/lr_scheduler.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. 
- self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. - """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. - self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_x_syncbn_fast_8x16b-300e_coco.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_x_syncbn_fast_8x16b-300e_coco.py deleted file mode 100644 index 9929705962c918392af12dd0a8275321f89fd361..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_x_syncbn_fast_8x16b-300e_coco.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = './yolov7_l_syncbn_fast_8x16b-300e_coco.py' - -model = dict( - backbone=dict(arch='X'), - neck=dict( - 
in_channels=[640, 1280, 1280], - out_channels=[160, 320, 640], - block_cfg=dict( - type='ELANBlock', - middle_ratio=0.4, - block_ratio=0.4, - num_blocks=3, - num_convs_in_block=2), - use_repconv_outs=False), - bbox_head=dict(head_module=dict(in_channels=[320, 640, 1280]))) diff --git a/spaces/Aaajdhdhdhahdbbaabs/Hshdhdhd/README.md b/spaces/Aaajdhdhdhahdbbaabs/Hshdhdhd/README.md deleted file mode 100644 index b019e8f1f82c22c695db274535a6f8fb9c0f6ad7..0000000000000000000000000000000000000000 --- a/spaces/Aaajdhdhdhahdbbaabs/Hshdhdhd/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Hshdhdhd -emoji: 📊 -colorFrom: indigo -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Aditya9790/yolo7-object-tracking/export.py b/spaces/Aditya9790/yolo7-object-tracking/export.py deleted file mode 100644 index cf918aa42b5563f411e8a53cd9527f59180a8e46..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/export.py +++ /dev/null @@ -1,205 +0,0 @@ -import argparse -import sys -import time -import warnings - -sys.path.append('./') # to run '$ python *.py' files in subdirectories - -import torch -import torch.nn as nn -from torch.utils.mobile_optimizer import optimize_for_mobile - -import models -from models.experimental import attempt_load, End2End -from utils.activations import Hardswish, SiLU -from utils.general import set_logging, check_img_size -from utils.torch_utils import select_device -from utils.add_nms import RegisterNMS - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default='./yolor-csp-c.pt', help='weights path') - parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size') # height, width - parser.add_argument('--batch-size', type=int, default=1, help='batch size') - parser.add_argument('--dynamic', action='store_true', help='dynamic ONNX axes') - parser.add_argument('--dynamic-batch', action='store_true', help='dynamic batch onnx for tensorrt and onnx-runtime') - parser.add_argument('--grid', action='store_true', help='export Detect() layer grid') - parser.add_argument('--end2end', action='store_true', help='export end2end onnx') - parser.add_argument('--max-wh', type=int, default=None, help='None for tensorrt nms, int value for onnx-runtime nms') - parser.add_argument('--topk-all', type=int, default=100, help='topk objects for every images') - parser.add_argument('--iou-thres', type=float, default=0.45, help='iou threshold for NMS') - parser.add_argument('--conf-thres', type=float, default=0.25, help='conf threshold for NMS') - parser.add_argument('--device', default='cpu', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--simplify', action='store_true', help='simplify onnx model') - parser.add_argument('--include-nms', action='store_true', help='export end2end onnx') - parser.add_argument('--fp16', action='store_true', help='CoreML FP16 half-precision export') - parser.add_argument('--int8', action='store_true', help='CoreML INT8 quantization') - opt = parser.parse_args() - opt.img_size *= 2 if len(opt.img_size) == 1 else 1 # expand - opt.dynamic = opt.dynamic and not opt.end2end - opt.dynamic = False if opt.dynamic_batch else opt.dynamic - print(opt) - set_logging() - t = time.time() - - # Load PyTorch model - device = select_device(opt.device) - model = attempt_load(opt.weights, map_location=device) # load FP32 model - labels = model.names - - # Checks - gs = int(max(model.stride)) # grid size (max stride) - opt.img_size = [check_img_size(x, gs) for x in opt.img_size] # verify img_size are gs-multiples - - # Input - img = torch.zeros(opt.batch_size, 3, *opt.img_size).to(device) # image size(1,3,320,192) iDetection - - # Update model - for k, m in model.named_modules(): - m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility - if isinstance(m, models.common.Conv): # assign export-friendly activations - if isinstance(m.act, nn.Hardswish): - m.act = Hardswish() - elif isinstance(m.act, nn.SiLU): - m.act = SiLU() - # elif isinstance(m, models.yolo.Detect): - # m.forward = m.forward_export # assign forward (optional) - model.model[-1].export = not opt.grid # set Detect() layer grid export - y = model(img) # dry run - if opt.include_nms: - model.model[-1].include_nms = True - y = None - - # TorchScript export - try: - print('\nStarting TorchScript export with torch %s...' % torch.__version__) - f = opt.weights.replace('.pt', '.torchscript.pt') # filename - ts = torch.jit.trace(model, img, strict=False) - ts.save(f) - print('TorchScript export success, saved as %s' % f) - except Exception as e: - print('TorchScript export failure: %s' % e) - - # CoreML export - try: - import coremltools as ct - - print('\nStarting CoreML export with coremltools %s...' % ct.__version__) - # convert model from torchscript and apply pixel scaling as per detect.py - ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])]) - bits, mode = (8, 'kmeans_lut') if opt.int8 else (16, 'linear') if opt.fp16 else (32, None) - if bits < 32: - if sys.platform.lower() == 'darwin': # quantization only supported on macOS - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", category=DeprecationWarning) # suppress numpy==1.20 float warning - ct_model = ct.models.neural_network.quantization_utils.quantize_weights(ct_model, bits, mode) - else: - print('quantization only supported on macOS, skipping...') - - f = opt.weights.replace('.pt', '.mlmodel') # filename - ct_model.save(f) - print('CoreML export success, saved as %s' % f) - except Exception as e: - print('CoreML export failure: %s' % e) - - # TorchScript-Lite export - try: - print('\nStarting TorchScript-Lite export with torch %s...' % torch.__version__) - f = opt.weights.replace('.pt', '.torchscript.ptl') # filename - tsl = torch.jit.trace(model, img, strict=False) - tsl = optimize_for_mobile(tsl) - tsl._save_for_lite_interpreter(f) - print('TorchScript-Lite export success, saved as %s' % f) - except Exception as e: - print('TorchScript-Lite export failure: %s' % e) - - # ONNX export - try: - import onnx - - print('\nStarting ONNX export with onnx %s...' 
% onnx.__version__) - f = opt.weights.replace('.pt', '.onnx') # filename - model.eval() - output_names = ['classes', 'boxes'] if y is None else ['output'] - dynamic_axes = None - if opt.dynamic: - dynamic_axes = {'images': {0: 'batch', 2: 'height', 3: 'width'}, # size(1,3,640,640) - 'output': {0: 'batch', 2: 'y', 3: 'x'}} - if opt.dynamic_batch: - opt.batch_size = 'batch' - dynamic_axes = { - 'images': { - 0: 'batch', - }, } - if opt.end2end and opt.max_wh is None: - output_axes = { - 'num_dets': {0: 'batch'}, - 'det_boxes': {0: 'batch'}, - 'det_scores': {0: 'batch'}, - 'det_classes': {0: 'batch'}, - } - else: - output_axes = { - 'output': {0: 'batch'}, - } - dynamic_axes.update(output_axes) - if opt.grid: - if opt.end2end: - print('\nStarting export end2end onnx model for %s...' % 'TensorRT' if opt.max_wh is None else 'onnxruntime') - model = End2End(model,opt.topk_all,opt.iou_thres,opt.conf_thres,opt.max_wh,device,len(labels)) - if opt.end2end and opt.max_wh is None: - output_names = ['num_dets', 'det_boxes', 'det_scores', 'det_classes'] - shapes = [opt.batch_size, 1, opt.batch_size, opt.topk_all, 4, - opt.batch_size, opt.topk_all, opt.batch_size, opt.topk_all] - else: - output_names = ['output'] - else: - model.model[-1].concat = True - - torch.onnx.export(model, img, f, verbose=False, opset_version=12, input_names=['images'], - output_names=output_names, - dynamic_axes=dynamic_axes) - - # Checks - onnx_model = onnx.load(f) # load onnx model - onnx.checker.check_model(onnx_model) # check onnx model - - if opt.end2end and opt.max_wh is None: - for i in onnx_model.graph.output: - for j in i.type.tensor_type.shape.dim: - j.dim_param = str(shapes.pop(0)) - - # print(onnx.helper.printable_graph(onnx_model.graph)) # print a human readable model - - # # Metadata - # d = {'stride': int(max(model.stride))} - # for k, v in d.items(): - # meta = onnx_model.metadata_props.add() - # meta.key, meta.value = k, str(v) - # onnx.save(onnx_model, f) - - if opt.simplify: - try: - import onnxsim - - print('\nStarting to simplify ONNX...') - onnx_model, check = onnxsim.simplify(onnx_model) - assert check, 'assert check failed' - except Exception as e: - print(f'Simplifier failure: {e}') - - # print(onnx.helper.printable_graph(onnx_model.graph)) # print a human readable model - onnx.save(onnx_model,f) - print('ONNX export success, saved as %s' % f) - - if opt.include_nms: - print('Registering NMS plugin for ONNX...') - mo = RegisterNMS(f) - mo.register_nms() - mo.save(f) - - except Exception as e: - print('ONNX export failure: %s' % e) - - # Finish - print('\nExport complete (%.2fs). Visualize with https://github.com/lutzroeder/netron.' 
% (time.time() - t)) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Factory.js deleted file mode 100644 index 27a65e2da42fc811c23fe96cb39ab1ec27fcf2f7..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import GridSizer from './GridSizer.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('gridSizer', function (x, y, minWidth, minHeight, columnCount, rowCount, columnProportions, rowProportion, config) { - var gameObject = new GridSizer(this.scene, x, y, minWidth, minHeight, columnCount, rowCount, columnProportions, rowProportion, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.GridSizer', GridSizer); - -export default GridSizer; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/LayoutMode0.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/LayoutMode0.js deleted file mode 100644 index 2a9698cbb4586340ea832d468e79d6ea173300a3..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/LayoutMode0.js +++ /dev/null @@ -1,58 +0,0 @@ -/* -Elements: - ``` - HHH - LCR - FFF - ``` -*/ - -import { - GetAddHeaderConfig, - GetAddLeftSideConfig, GetAddContentConfig, GetAddRightSideConfig, - GetAddFooterConfig, - GetAddContainerConfig -} from './GetAddChildConfig.js'; -import CreatExpandContainer from './CreatExpandContainer.js'; - -var LayoutMode0 = function (config) { - var scene = this.scene; - - // Add Header - var header = config.header; - if (header) { - this.add(header, GetAddHeaderConfig(config)); - } - - /* - L C R - */ - var bodySizer = CreatExpandContainer(scene, 0); - this.add(bodySizer, GetAddContainerConfig(config)); - - // Add Left-side - var leftSide = config.leftSide; - if (leftSide) { - bodySizer.add(leftSide, GetAddLeftSideConfig(config)); - } - - // Add content - var content = config.content; - if (content) { - bodySizer.add(content, GetAddContentConfig(config)); - } - - // Add Right-side - var rightSide = config.rightSide; - if (rightSide) { - bodySizer.add(rightSide, GetAddRightSideConfig(config)); - } - - // Add Footer - var footer = config.footer; - if (footer) { - this.add(footer, GetAddFooterConfig(config)); - } -} - -export default LayoutMode0; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectanglecanvas/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectanglecanvas/Factory.js deleted file mode 100644 index dd981735a8906eb59900348f1e2f13b401db3402..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectanglecanvas/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import RoundRectangleCanvas from './RoundRectangleCanvas.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('roundRectangleCanvas', function (x, y, width, height, radius, fillStyle, strokeStyle, lineWidth, fillColor2, isHorizontalGradient) { - var gameObject 
= new RoundRectangleCanvas(this.scene, x, y, width, height, radius, fillStyle, strokeStyle, lineWidth, fillColor2, isHorizontalGradient); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.RoundRectangleCanvas', RoundRectangleCanvas); - -export default RoundRectangleCanvas; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/scrollablepanel/scrollableblock/GetChildrenHeight.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/scrollablepanel/scrollableblock/GetChildrenHeight.js deleted file mode 100644 index 50cffddf27eba3114a0afeeef1f37c782b20f8e0..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/scrollablepanel/scrollableblock/GetChildrenHeight.js +++ /dev/null @@ -1,24 +0,0 @@ -import { GetDisplayHeight } from '../../../../plugins/utils/size/GetDisplaySize.js'; - -var GetChildrenHeight = function () { - if (this.rexSizer.hidden) { - return 0; - } - - var result; - var child = this.child, - childConfig = child.rexSizer; - if (childConfig.hidden) { - result = 0; - } else if (this.scrollMode === 0) { // scroll y - result = 0; - } else { // scroll x - result = (child.isRexSizer) ? - Math.max(child.minHeight, child.childrenHeight) : - (child.hasOwnProperty('minHeight')) ? child.minHeight : GetDisplayHeight(child); - } - - return result; -} - -export default GetChildrenHeight; \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/__init__.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/openpose/src/body.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/openpose/src/body.py deleted file mode 100644 index 9a2c5024c3d70bc05deabcf3807e8ef77707a756..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/openpose/src/body.py +++ /dev/null @@ -1,243 +0,0 @@ -import cv2 -import numpy as np -import math -import time -from scipy.ndimage.filters import gaussian_filter -import matplotlib.pyplot as plt -import matplotlib -import torch -from torchvision import transforms - -from openpose.src import util -from openpose.src.model import bodypose_model - - -class Body(object): - def __init__(self, model_path): - self.model = bodypose_model() - if torch.cuda.is_available(): - self.model = self.model.cuda() - model_dict = util.transfer(self.model, torch.load(model_path)) - self.model.load_state_dict(model_dict) - self.model.eval() - - def __call__(self, oriImg): - # scale_search = [0.5, 1.0, 1.5, 2.0] - scale_search = [0.5] - boxsize = 368 - stride = 8 - padValue = 128 - thre1 = 0.1 - thre2 = 0.05 - multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search] - heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19)) - paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38)) - - for m in range(len(multiplier)): - scale = multiplier[m] - imageToTest = cv2.resize( - oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC) - imageToTest_padded, pad = util.padRightDownCorner( - imageToTest, stride, padValue) - im = np.transpose(np.float32( - imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5 - im = np.ascontiguousarray(im) - - data = torch.from_numpy(im).float() - if torch.cuda.is_available(): - data = data.cuda() - # data = 
data.permute([2, 0, 1]).unsqueeze(0).float() - with torch.no_grad(): - Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data) - Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy() - Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy() - - # extract outputs, resize, and remove padding - # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps - # output 1 is heatmaps - heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) - heatmap = cv2.resize(heatmap, (0, 0), fx=stride, - fy=stride, interpolation=cv2.INTER_CUBIC) - heatmap = heatmap[:imageToTest_padded.shape[0] - - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - heatmap = cv2.resize( - heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs - paf = np.transpose(np.squeeze(Mconv7_stage6_L1), - (1, 2, 0)) # output 0 is PAFs - paf = cv2.resize(paf, (0, 0), fx=stride, fy=stride, - interpolation=cv2.INTER_CUBIC) - paf = paf[:imageToTest_padded.shape[0] - pad[2], - :imageToTest_padded.shape[1] - pad[3], :] - paf = cv2.resize( - paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - heatmap_avg += heatmap_avg + heatmap / len(multiplier) - paf_avg += + paf / len(multiplier) - all_peaks = [] - peak_counter = 0 - - for part in range(18): - map_ori = heatmap_avg[:, :, part] - one_heatmap = gaussian_filter(map_ori, sigma=3) - - map_left = np.zeros(one_heatmap.shape) - map_left[1:, :] = one_heatmap[:-1, :] - map_right = np.zeros(one_heatmap.shape) - map_right[:-1, :] = one_heatmap[1:, :] - map_up = np.zeros(one_heatmap.shape) - map_up[:, 1:] = one_heatmap[:, :-1] - map_down = np.zeros(one_heatmap.shape) - map_down[:, :-1] = one_heatmap[:, 1:] - - peaks_binary = np.logical_and.reduce( - (one_heatmap >= map_left, one_heatmap >= map_right, one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1)) - peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero( - peaks_binary)[0])) # note reverse - peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks] - peak_id = range(peak_counter, peak_counter + len(peaks)) - peaks_with_score_and_id = [ - peaks_with_score[i] + (peak_id[i],) for i in range(len(peak_id))] - - all_peaks.append(peaks_with_score_and_id) - peak_counter += len(peaks) - - # find connection in the specified sequence, center 29 is in the position 15 - limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], - [10, 11], [2, 12], [12, 13], [ - 13, 14], [2, 1], [1, 15], [15, 17], - [1, 16], [16, 18], [3, 17], [6, 18]] - # the middle joints heatmap correpondence - mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], - [23, 24], [25, 26], [27, 28], [29, 30], [ - 47, 48], [49, 50], [53, 54], [51, 52], - [55, 56], [37, 38], [45, 46]] - - connection_all = [] - special_k = [] - mid_num = 10 - - for k in range(len(mapIdx)): - score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]] - candA = all_peaks[limbSeq[k][0] - 1] - candB = all_peaks[limbSeq[k][1] - 1] - nA = len(candA) - nB = len(candB) - indexA, indexB = limbSeq[k] - if (nA != 0 and nB != 0): - connection_candidate = [] - for i in range(nA): - for j in range(nB): - vec = np.subtract(candB[j][:2], candA[i][:2]) - norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1]) - norm = max(0.001, norm) - vec = np.divide(vec, norm) - - startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), - np.linspace(candA[i][1], 
candB[j][1], num=mid_num))) - - vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] - for I in range(len(startend))]) - vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] - for I in range(len(startend))]) - - score_midpts = np.multiply( - vec_x, vec[0]) + np.multiply(vec_y, vec[1]) - score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min( - 0.5 * oriImg.shape[0] / norm - 1, 0) - criterion1 = len(np.nonzero(score_midpts > thre2)[ - 0]) > 0.8 * len(score_midpts) - criterion2 = score_with_dist_prior > 0 - if criterion1 and criterion2: - connection_candidate.append( - [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]]) - - connection_candidate = sorted( - connection_candidate, key=lambda x: x[2], reverse=True) - connection = np.zeros((0, 5)) - for c in range(len(connection_candidate)): - i, j, s = connection_candidate[c][0:3] - if (i not in connection[:, 3] and j not in connection[:, 4]): - connection = np.vstack( - [connection, [candA[i][3], candB[j][3], s, i, j]]) - if (len(connection) >= min(nA, nB)): - break - - connection_all.append(connection) - else: - special_k.append(k) - connection_all.append([]) - - # last number in each row is the total parts number of that person - # the second last number in each row is the score of the overall configuration - subset = -1 * np.ones((0, 20)) - candidate = np.array( - [item for sublist in all_peaks for item in sublist]) - - for k in range(len(mapIdx)): - if k not in special_k: - partAs = connection_all[k][:, 0] - partBs = connection_all[k][:, 1] - indexA, indexB = np.array(limbSeq[k]) - 1 - - for i in range(len(connection_all[k])): # = 1:size(temp,1) - found = 0 - subset_idx = [-1, -1] - for j in range(len(subset)): # 1:size(subset,1): - if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]: - subset_idx[found] = j - found += 1 - - if found == 1: - j = subset_idx[0] - if subset[j][indexB] != partBs[i]: - subset[j][indexB] = partBs[i] - subset[j][-1] += 1 - subset[j][-2] += candidate[partBs[i].astype( - int), 2] + connection_all[k][i][2] - elif found == 2: # if found 2 and disjoint, merge them - j1, j2 = subset_idx - membership = ((subset[j1] >= 0).astype( - int) + (subset[j2] >= 0).astype(int))[:-2] - if len(np.nonzero(membership == 2)[0]) == 0: # merge - subset[j1][:-2] += (subset[j2][:-2] + 1) - subset[j1][-2:] += subset[j2][-2:] - subset[j1][-2] += connection_all[k][i][2] - subset = np.delete(subset, j2, 0) - else: # as like found == 1 - subset[j1][indexB] = partBs[i] - subset[j1][-1] += 1 - subset[j1][-2] += candidate[partBs[i].astype( - int), 2] + connection_all[k][i][2] - - # if find no partA in the subset, create a new subset - elif not found and k < 17: - row = -1 * np.ones(20) - row[indexA] = partAs[i] - row[indexB] = partBs[i] - row[-1] = 2 - row[-2] = sum(candidate[connection_all[k][i, - :2].astype(int), 2]) + connection_all[k][i][2] - subset = np.vstack([subset, row]) - # delete some rows of subset which has few parts occur - deleteIdx = [] - for i in range(len(subset)): - if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4: - deleteIdx.append(i) - subset = np.delete(subset, deleteIdx, axis=0) - - # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts - # candidate: x, y, score, id - return candidate, subset - - -if __name__ == "__main__": - body_estimation = Body('../model/body_pose_model.pth') - - test_image = '../images/ski.jpg' - oriImg = 
cv2.imread(test_image) # B,G,R order - candidate, subset = body_estimation(oriImg) - canvas = util.draw_bodypose(oriImg, candidate, subset) - plt.imshow(canvas[:, :, [2, 1, 0]]) - plt.show() diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/run_pti.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/run_pti.py deleted file mode 100644 index 1be91596fa768240020a2e9af03cb5f24ca1072e..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/run_pti.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -from random import choice -from string import ascii_uppercase -from torch.utils.data import DataLoader -from torchvision.transforms import transforms -import os -from pti.pti_configs import global_config, paths_config -import wandb - -from pti.training.coaches.multi_id_coach import MultiIDCoach -from pti.training.coaches.single_id_coach import SingleIDCoach -from utils.ImagesDataset import ImagesDataset - - -def run_PTI(run_name='', use_wandb=False, use_multi_id_training=False): - os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID' - os.environ['CUDA_VISIBLE_DEVICES'] = global_config.cuda_visible_devices - - if run_name == '': - global_config.run_name = ''.join( - choice(ascii_uppercase) for i in range(12)) - else: - global_config.run_name = run_name - - if use_wandb: - run = wandb.init(project=paths_config.pti_results_keyword, - reinit=True, name=global_config.run_name) - global_config.pivotal_training_steps = 1 - global_config.training_step = 1 - - embedding_dir_path = f'{paths_config.embedding_base_dir}/{paths_config.input_data_id}/{paths_config.pti_results_keyword}' - # print('embedding_dir_path: ', embedding_dir_path) #./embeddings/barcelona/PTI - os.makedirs(embedding_dir_path, exist_ok=True) - - dataset = ImagesDataset(paths_config.input_data_path, transforms.Compose([ - transforms.Resize((1024, 512)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])) - - dataloader = DataLoader(dataset, batch_size=1, shuffle=False) - - if use_multi_id_training: - coach = MultiIDCoach(dataloader, use_wandb) - else: - coach = SingleIDCoach(dataloader, use_wandb) - - coach.train() - - return global_config.run_name - - -if __name__ == '__main__': - run_PTI(run_name='', use_wandb=False, use_multi_id_training=False) diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/utils.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/utils.py deleted file mode 100644 index 3d0b9af40c9251ef661baa5a8cc316939bc52b9c..0000000000000000000000000000000000000000 --- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import numpy as np -import scipy - -__LPIPS__ = {} - -import torch - - -def init_lpips(net_name, device): - assert net_name in ['alex', 'vgg'] - import lpips - print(f'init_lpips: lpips_{net_name}') - return lpips.LPIPS(net=net_name, version='0.1').eval().to(device) - -def rgb_lpips(np_gt, np_im, net_name, device): - if net_name not in __LPIPS__: - __LPIPS__[net_name] = init_lpips(net_name, device) - gt = torch.from_numpy(np_gt).permute([2, 0, 1]).contiguous().to(device) - im = torch.from_numpy(np_im).permute([2, 0, 1]).contiguous().to(device) - return __LPIPS__[net_name](gt, im, normalize=True).item() - -def rgb_ssim(img0, img1, max_val, - filter_size=11, - filter_sigma=1.5, - k1=0.01, - k2=0.03, - return_map=False): - # Modified from 
https://github.com/google/mipnerf/blob/16e73dfdb52044dcceb47cda5243a686391a6e0f/internal/math.py#L58 - assert len(img0.shape) == 3 - assert img0.shape[-1] == 3 - assert img0.shape == img1.shape - - # Construct a 1D Gaussian blur filter. - hw = filter_size // 2 - shift = (2 * hw - filter_size + 1) / 2 - f_i = ((np.arange(filter_size) - hw + shift) / filter_sigma)**2 - filt = np.exp(-0.5 * f_i) - filt /= np.sum(filt) - - # Blur in x and y (faster than the 2D convolution). - def convolve2d(z, f): - return scipy.signal.convolve2d(z, f, mode='valid') - - filt_fn = lambda z: np.stack([ - convolve2d(convolve2d(z[...,i], filt[:, None]), filt[None, :]) - for i in range(z.shape[-1])], -1) - mu0 = filt_fn(img0) - mu1 = filt_fn(img1) - mu00 = mu0 * mu0 - mu11 = mu1 * mu1 - mu01 = mu0 * mu1 - sigma00 = filt_fn(img0**2) - mu00 - sigma11 = filt_fn(img1**2) - mu11 - sigma01 = filt_fn(img0 * img1) - mu01 - - # Clip the variances and covariances to valid values. - # Variance must be non-negative: - sigma00 = np.maximum(0., sigma00) - sigma11 = np.maximum(0., sigma11) - sigma01 = np.sign(sigma01) * np.minimum( - np.sqrt(sigma00 * sigma11), np.abs(sigma01)) - c1 = (k1 * max_val)**2 - c2 = (k2 * max_val)**2 - numer = (2 * mu01 + c1) * (2 * sigma01 + c2) - denom = (mu00 + mu11 + c1) * (sigma00 + sigma11 + c2) - ssim_map = numer / denom - ssim = np.mean(ssim_map) - return ssim_map if return_map else ssim \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/README.md deleted file mode 100644 index 9566e68fc51df1928a01f7cc9c51fbd66f049feb..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/README.md +++ /dev/null @@ -1,72 +0,0 @@ - - -# 🧨 Diffusers Examples - -Diffusers examples are a collection of scripts to demonstrate how to effectively use the `diffusers` library -for a variety of use cases involving training or fine-tuning. - -**Note**: If you are looking for **official** examples on how to use `diffusers` for inference, -please have a look at [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines) - -Our examples aspire to be **self-contained**, **easy-to-tweak**, **beginner-friendly** and for **one-purpose-only**. -More specifically, this means: - -- **Self-contained**: An example script shall only depend on "pip-install-able" Python packages that can be found in a `requirements.txt` file. Example scripts shall **not** depend on any local files. This means that one can simply download an example script, *e.g.* [train_unconditional.py](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py), install the required dependencies, *e.g.* [requirements.txt](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/requirements.txt) and execute the example script. -- **Easy-to-tweak**: While we strive to present as many use cases as possible, the example scripts are just that - examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data and the training loop to allow you to tweak and edit them as required. 
-- **Beginner-friendly**: We do not aim for providing state-of-the-art training scripts for the newest models, but rather examples that can be used as a way to better understand diffusion models and how to use them with the `diffusers` library. We often purposefully leave out certain state-of-the-art methods if we consider them too complex for beginners. -- **One-purpose-only**: Examples should show one task and one task only. Even if a task is from a modeling -point of view very similar, *e.g.* image super-resolution and image modification tend to use the same model and training method, we want examples to showcase only one task to keep them as readable and easy-to-understand as possible. - -We provide **official** examples that cover the most popular tasks of diffusion models. -*Official* examples are **actively** maintained by the `diffusers` maintainers and we try to rigorously follow our example philosophy as defined above. -If you feel like another important example should exist, we are more than happy to welcome a [Feature Request](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=) or directly a [Pull Request](https://github.com/huggingface/diffusers/compare) from you! - -Training examples show how to pretrain or fine-tune diffusion models for a variety of tasks. Currently we support: - -| Task | 🤗 Accelerate | 🤗 Datasets | Colab -|---|---|:---:|:---:| -| [**Unconditional Image Generation**](./unconditional_image_generation) | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) -| [**Text-to-Image fine-tuning**](./text_to_image) | ✅ | ✅ | -| [**Textual Inversion**](./textual_inversion) | ✅ | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb) -| [**Dreambooth**](./dreambooth) | ✅ | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb) -| [**ControlNet**](./controlnet) | ✅ | ✅ | - -| [**InstructPix2Pix**](./instruct_pix2pix) | ✅ | ✅ | - -| [**Reinforcement Learning for Control**](https://github.com/huggingface/diffusers/blob/main/examples/reinforcement_learning/run_diffusers_locomotion.py) | - | - | coming soon. - -## Community - -In addition, we provide **community** examples, which are examples added and maintained by our community. -Community examples can consist of both *training* examples or *inference* pipelines. -For such examples, we are more lenient regarding the philosophy defined above and also cannot guarantee to provide maintenance for every issue. -Examples that are useful for the community, but are either not yet deemed popular or not yet following our above philosophy should go into the [community examples](https://github.com/huggingface/diffusers/tree/main/examples/community) folder. The community folder therefore includes training examples and inference pipelines. -**Note**: Community examples can be a [great first contribution](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) to show to the community how you like to use `diffusers` 🪄. 
- -## Research Projects - -We also provide **research_projects** examples that are maintained by the community as defined in the respective research project folders. These examples are useful and offer the extended capabilities which are complementary to the official examples. You may refer to [research_projects](https://github.com/huggingface/diffusers/tree/main/examples/research_projects) for details. - -## Important note - -To make sure you can successfully run the latest versions of the example scripts, you have to **install the library from source** and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: -```bash -git clone https://github.com/huggingface/diffusers -cd diffusers -pip install . -``` -Then cd in the example folder of your choice and run -```bash -pip install -r requirements.txt -``` diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_unidiffuser_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_unidiffuser_to_diffusers.py deleted file mode 100644 index 891d289d8c7601f106724f1196d5f0f0eb3f2650..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_unidiffuser_to_diffusers.py +++ /dev/null @@ -1,776 +0,0 @@ -# Convert the original UniDiffuser checkpoints into diffusers equivalents. - -import argparse -from argparse import Namespace - -import torch -from transformers import ( - CLIPImageProcessor, - CLIPTextConfig, - CLIPTextModel, - CLIPTokenizer, - CLIPVisionConfig, - CLIPVisionModelWithProjection, - GPT2Tokenizer, -) - -from diffusers import ( - AutoencoderKL, - DPMSolverMultistepScheduler, - UniDiffuserModel, - UniDiffuserPipeline, - UniDiffuserTextDecoder, -) - - -SCHEDULER_CONFIG = Namespace( - **{ - "beta_start": 0.00085, - "beta_end": 0.012, - "beta_schedule": "scaled_linear", - "solver_order": 3, - } -) - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.shave_segments -def shave_segments(path, n_shave_prefix_segments=1): - """ - Removes segments. Positive values shave the first segments, negative shave the last segments. 
- """ - if n_shave_prefix_segments >= 0: - return ".".join(path.split(".")[n_shave_prefix_segments:]) - else: - return ".".join(path.split(".")[:n_shave_prefix_segments]) - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_resnet_paths -def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("nin_shortcut", "conv_shortcut") - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_attention_paths -def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("norm.weight", "group_norm.weight") - new_item = new_item.replace("norm.bias", "group_norm.bias") - - new_item = new_item.replace("q.weight", "query.weight") - new_item = new_item.replace("q.bias", "query.bias") - - new_item = new_item.replace("k.weight", "key.weight") - new_item = new_item.replace("k.bias", "key.bias") - - new_item = new_item.replace("v.weight", "value.weight") - new_item = new_item.replace("v.bias", "value.bias") - - new_item = new_item.replace("proj_out.weight", "proj_attn.weight") - new_item = new_item.replace("proj_out.bias", "proj_attn.bias") - - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Modified from diffusers.pipelines.stable_diffusion.convert_from_ckpt.assign_to_checkpoint -# config.num_head_channels => num_head_channels -def assign_to_checkpoint( - paths, - checkpoint, - old_checkpoint, - attention_paths_to_split=None, - additional_replacements=None, - num_head_channels=1, -): - """ - This does the final conversion step: take locally converted weights and apply a global renaming to them. It splits - attention layers, and takes into account additional replacements that may arise. Assigns the weights to the new - checkpoint. - """ - assert isinstance(paths, list), "Paths should be a list of dicts containing 'old' and 'new' keys." - - # Splits the attention layers into three variables. 
- if attention_paths_to_split is not None: - for path, path_map in attention_paths_to_split.items(): - old_tensor = old_checkpoint[path] - channels = old_tensor.shape[0] // 3 - - target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1) - - num_heads = old_tensor.shape[0] // num_head_channels // 3 - - old_tensor = old_tensor.reshape((num_heads, 3 * channels // num_heads) + old_tensor.shape[1:]) - query, key, value = old_tensor.split(channels // num_heads, dim=1) - - checkpoint[path_map["query"]] = query.reshape(target_shape) - checkpoint[path_map["key"]] = key.reshape(target_shape) - checkpoint[path_map["value"]] = value.reshape(target_shape) - - for path in paths: - new_path = path["new"] - - # These have already been assigned - if attention_paths_to_split is not None and new_path in attention_paths_to_split: - continue - - # Global renaming happens here - new_path = new_path.replace("middle_block.0", "mid_block.resnets.0") - new_path = new_path.replace("middle_block.1", "mid_block.attentions.0") - new_path = new_path.replace("middle_block.2", "mid_block.resnets.1") - - if additional_replacements is not None: - for replacement in additional_replacements: - new_path = new_path.replace(replacement["old"], replacement["new"]) - - # proj_attn.weight has to be converted from conv 1D to linear - if "proj_attn.weight" in new_path: - checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0] - else: - checkpoint[new_path] = old_checkpoint[path["old"]] - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.conv_attn_to_linear -def conv_attn_to_linear(checkpoint): - keys = list(checkpoint.keys()) - attn_keys = ["query.weight", "key.weight", "value.weight"] - for key in keys: - if ".".join(key.split(".")[-2:]) in attn_keys: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key][:, :, 0, 0] - elif "proj_attn.weight" in key: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key][:, :, 0] - - -def create_vae_diffusers_config(config_type): - # Hardcoded for now - if args.config_type == "test": - vae_config = create_vae_diffusers_config_test() - elif args.config_type == "big": - vae_config = create_vae_diffusers_config_big() - else: - raise NotImplementedError( - f"Config type {config_type} is not implemented, currently only config types" - " 'test' and 'big' are available." - ) - return vae_config - - -def create_unidiffuser_unet_config(config_type, version): - # Hardcoded for now - if args.config_type == "test": - unet_config = create_unidiffuser_unet_config_test() - elif args.config_type == "big": - unet_config = create_unidiffuser_unet_config_big() - else: - raise NotImplementedError( - f"Config type {config_type} is not implemented, currently only config types" - " 'test' and 'big' are available." - ) - # Unidiffuser-v1 uses data type embeddings - if version == 1: - unet_config["use_data_type_embedding"] = True - return unet_config - - -def create_text_decoder_config(config_type): - # Hardcoded for now - if args.config_type == "test": - text_decoder_config = create_text_decoder_config_test() - elif args.config_type == "big": - text_decoder_config = create_text_decoder_config_big() - else: - raise NotImplementedError( - f"Config type {config_type} is not implemented, currently only config types" - " 'test' and 'big' are available." - ) - return text_decoder_config - - -# Hardcoded configs for test versions of the UniDiffuser models, corresponding to those in the fast default tests. 
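# Note: create_vae_diffusers_config, create_unidiffuser_unet_config and create_text_decoder_config
# above branch on the global `args.config_type` rather than on their own `config_type` argument,
# so they are only usable after argparse has populated `args` in the `__main__` block below.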
-def create_vae_diffusers_config_test(): - vae_config = { - "sample_size": 32, - "in_channels": 3, - "out_channels": 3, - "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"], - "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"], - "block_out_channels": [32, 64], - "latent_channels": 4, - "layers_per_block": 1, - } - return vae_config - - -def create_unidiffuser_unet_config_test(): - unet_config = { - "text_dim": 32, - "clip_img_dim": 32, - "num_text_tokens": 77, - "num_attention_heads": 2, - "attention_head_dim": 8, - "in_channels": 4, - "out_channels": 4, - "num_layers": 2, - "dropout": 0.0, - "norm_num_groups": 32, - "attention_bias": False, - "sample_size": 16, - "patch_size": 2, - "activation_fn": "gelu", - "num_embeds_ada_norm": 1000, - "norm_type": "layer_norm", - "block_type": "unidiffuser", - "pre_layer_norm": False, - "use_timestep_embedding": False, - "norm_elementwise_affine": True, - "use_patch_pos_embed": False, - "ff_final_dropout": True, - "use_data_type_embedding": False, - } - return unet_config - - -def create_text_decoder_config_test(): - text_decoder_config = { - "prefix_length": 77, - "prefix_inner_dim": 32, - "prefix_hidden_dim": 32, - "vocab_size": 1025, # 1024 + 1 for new EOS token - "n_positions": 1024, - "n_embd": 32, - "n_layer": 5, - "n_head": 4, - "n_inner": 37, - "activation_function": "gelu", - "resid_pdrop": 0.1, - "embd_pdrop": 0.1, - "attn_pdrop": 0.1, - "layer_norm_epsilon": 1e-5, - "initializer_range": 0.02, - } - return text_decoder_config - - -# Hardcoded configs for the UniDiffuser V1 model at https://huggingface.co/thu-ml/unidiffuser-v1 -# See also https://github.com/thu-ml/unidiffuser/blob/main/configs/sample_unidiffuser_v1.py -def create_vae_diffusers_config_big(): - vae_config = { - "sample_size": 256, - "in_channels": 3, - "out_channels": 3, - "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D"], - "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"], - "block_out_channels": [128, 256, 512, 512], - "latent_channels": 4, - "layers_per_block": 2, - } - return vae_config - - -def create_unidiffuser_unet_config_big(): - unet_config = { - "text_dim": 64, - "clip_img_dim": 512, - "num_text_tokens": 77, - "num_attention_heads": 24, - "attention_head_dim": 64, - "in_channels": 4, - "out_channels": 4, - "num_layers": 30, - "dropout": 0.0, - "norm_num_groups": 32, - "attention_bias": False, - "sample_size": 64, - "patch_size": 2, - "activation_fn": "gelu", - "num_embeds_ada_norm": 1000, - "norm_type": "layer_norm", - "block_type": "unidiffuser", - "pre_layer_norm": False, - "use_timestep_embedding": False, - "norm_elementwise_affine": True, - "use_patch_pos_embed": False, - "ff_final_dropout": True, - "use_data_type_embedding": False, - } - return unet_config - - -# From https://huggingface.co/gpt2/blob/main/config.json, the GPT2 checkpoint used by UniDiffuser -def create_text_decoder_config_big(): - text_decoder_config = { - "prefix_length": 77, - "prefix_inner_dim": 768, - "prefix_hidden_dim": 64, - "vocab_size": 50258, # 50257 + 1 for new EOS token - "n_positions": 1024, - "n_embd": 768, - "n_layer": 12, - "n_head": 12, - "n_inner": 3072, - "activation_function": "gelu", - "resid_pdrop": 0.1, - "embd_pdrop": 0.1, - "attn_pdrop": 0.1, - "layer_norm_epsilon": 1e-5, - "initializer_range": 0.02, - } - return text_decoder_config - - -# Based on 
diffusers.pipelines.stable_diffusion.convert_from_ckpt.shave_segments.convert_ldm_vae_checkpoint -def convert_vae_to_diffusers(ckpt, diffusers_model, num_head_channels=1): - """ - Converts a UniDiffuser autoencoder_kl.pth checkpoint to a diffusers AutoencoderKL. - """ - # autoencoder_kl.pth ckpt is a torch state dict - vae_state_dict = torch.load(ckpt, map_location="cpu") - - new_checkpoint = {} - - new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"] - new_checkpoint["encoder.conv_in.bias"] = vae_state_dict["encoder.conv_in.bias"] - new_checkpoint["encoder.conv_out.weight"] = vae_state_dict["encoder.conv_out.weight"] - new_checkpoint["encoder.conv_out.bias"] = vae_state_dict["encoder.conv_out.bias"] - new_checkpoint["encoder.conv_norm_out.weight"] = vae_state_dict["encoder.norm_out.weight"] - new_checkpoint["encoder.conv_norm_out.bias"] = vae_state_dict["encoder.norm_out.bias"] - - new_checkpoint["decoder.conv_in.weight"] = vae_state_dict["decoder.conv_in.weight"] - new_checkpoint["decoder.conv_in.bias"] = vae_state_dict["decoder.conv_in.bias"] - new_checkpoint["decoder.conv_out.weight"] = vae_state_dict["decoder.conv_out.weight"] - new_checkpoint["decoder.conv_out.bias"] = vae_state_dict["decoder.conv_out.bias"] - new_checkpoint["decoder.conv_norm_out.weight"] = vae_state_dict["decoder.norm_out.weight"] - new_checkpoint["decoder.conv_norm_out.bias"] = vae_state_dict["decoder.norm_out.bias"] - - new_checkpoint["quant_conv.weight"] = vae_state_dict["quant_conv.weight"] - new_checkpoint["quant_conv.bias"] = vae_state_dict["quant_conv.bias"] - new_checkpoint["post_quant_conv.weight"] = vae_state_dict["post_quant_conv.weight"] - new_checkpoint["post_quant_conv.bias"] = vae_state_dict["post_quant_conv.bias"] - - # Retrieves the keys for the encoder down blocks only - num_down_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "encoder.down" in layer}) - down_blocks = { - layer_id: [key for key in vae_state_dict if f"down.{layer_id}" in key] for layer_id in range(num_down_blocks) - } - - # Retrieves the keys for the decoder up blocks only - num_up_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "decoder.up" in layer}) - up_blocks = { - layer_id: [key for key in vae_state_dict if f"up.{layer_id}" in key] for layer_id in range(num_up_blocks) - } - - for i in range(num_down_blocks): - resnets = [key for key in down_blocks[i] if f"down.{i}" in key and f"down.{i}.downsample" not in key] - - if f"encoder.down.{i}.downsample.conv.weight" in vae_state_dict: - new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.weight"] = vae_state_dict.pop( - f"encoder.down.{i}.downsample.conv.weight" - ) - new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.bias"] = vae_state_dict.pop( - f"encoder.down.{i}.downsample.conv.bias" - ) - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"down.{i}.block", "new": f"down_blocks.{i}.resnets"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - - mid_resnets = [key for key in vae_state_dict if "encoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"encoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint( - paths, - new_checkpoint, - 
vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - - mid_attentions = [key for key in vae_state_dict if "encoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - conv_attn_to_linear(new_checkpoint) - - for i in range(num_up_blocks): - block_id = num_up_blocks - 1 - i - resnets = [ - key for key in up_blocks[block_id] if f"up.{block_id}" in key and f"up.{block_id}.upsample" not in key - ] - - if f"decoder.up.{block_id}.upsample.conv.weight" in vae_state_dict: - new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.weight"] = vae_state_dict[ - f"decoder.up.{block_id}.upsample.conv.weight" - ] - new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.bias"] = vae_state_dict[ - f"decoder.up.{block_id}.upsample.conv.bias" - ] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"up.{block_id}.block", "new": f"up_blocks.{i}.resnets"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - - mid_resnets = [key for key in vae_state_dict if "decoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"decoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - - mid_attentions = [key for key in vae_state_dict if "decoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - conv_attn_to_linear(new_checkpoint) - - missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_checkpoint) - for missing_key in missing_keys: - print(f"Missing key: {missing_key}") - for unexpected_key in unexpected_keys: - print(f"Unexpected key: {unexpected_key}") - - return diffusers_model - - -def convert_uvit_block_to_diffusers_block( - uvit_state_dict, - new_state_dict, - block_prefix, - new_prefix="transformer.transformer_", - skip_connection=False, -): - """ - Maps the keys in a UniDiffuser transformer block (`Block`) to the keys in a diffusers transformer block - (`UTransformerBlock`/`UniDiffuserBlock`). - """ - prefix = new_prefix + block_prefix - if skip_connection: - new_state_dict[prefix + ".skip.skip_linear.weight"] = uvit_state_dict[block_prefix + ".skip_linear.weight"] - new_state_dict[prefix + ".skip.skip_linear.bias"] = uvit_state_dict[block_prefix + ".skip_linear.bias"] - new_state_dict[prefix + ".skip.norm.weight"] = uvit_state_dict[block_prefix + ".norm1.weight"] - new_state_dict[prefix + ".skip.norm.bias"] = uvit_state_dict[block_prefix + ".norm1.bias"] - - # Create the prefix string for out_blocks. 
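    # (This same helper is reused for the in_blocks and the mid_block, so the ".block" suffix
    # below is appended for every converted block, not only for out_blocks.)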
- prefix += ".block" - - # Split up attention qkv.weight into to_q.weight, to_k.weight, to_v.weight - qkv = uvit_state_dict[block_prefix + ".attn.qkv.weight"] - new_attn_keys = [".attn1.to_q.weight", ".attn1.to_k.weight", ".attn1.to_v.weight"] - new_attn_keys = [prefix + key for key in new_attn_keys] - shape = qkv.shape[0] // len(new_attn_keys) - for i, attn_key in enumerate(new_attn_keys): - new_state_dict[attn_key] = qkv[i * shape : (i + 1) * shape] - - new_state_dict[prefix + ".attn1.to_out.0.weight"] = uvit_state_dict[block_prefix + ".attn.proj.weight"] - new_state_dict[prefix + ".attn1.to_out.0.bias"] = uvit_state_dict[block_prefix + ".attn.proj.bias"] - new_state_dict[prefix + ".norm1.weight"] = uvit_state_dict[block_prefix + ".norm2.weight"] - new_state_dict[prefix + ".norm1.bias"] = uvit_state_dict[block_prefix + ".norm2.bias"] - new_state_dict[prefix + ".ff.net.0.proj.weight"] = uvit_state_dict[block_prefix + ".mlp.fc1.weight"] - new_state_dict[prefix + ".ff.net.0.proj.bias"] = uvit_state_dict[block_prefix + ".mlp.fc1.bias"] - new_state_dict[prefix + ".ff.net.2.weight"] = uvit_state_dict[block_prefix + ".mlp.fc2.weight"] - new_state_dict[prefix + ".ff.net.2.bias"] = uvit_state_dict[block_prefix + ".mlp.fc2.bias"] - new_state_dict[prefix + ".norm3.weight"] = uvit_state_dict[block_prefix + ".norm3.weight"] - new_state_dict[prefix + ".norm3.bias"] = uvit_state_dict[block_prefix + ".norm3.bias"] - - return uvit_state_dict, new_state_dict - - -def convert_uvit_to_diffusers(ckpt, diffusers_model): - """ - Converts a UniDiffuser uvit_v*.pth checkpoint to a diffusers UniDiffusersModel. - """ - # uvit_v*.pth ckpt is a torch state dict - uvit_state_dict = torch.load(ckpt, map_location="cpu") - - new_state_dict = {} - - # Input layers - new_state_dict["vae_img_in.proj.weight"] = uvit_state_dict["patch_embed.proj.weight"] - new_state_dict["vae_img_in.proj.bias"] = uvit_state_dict["patch_embed.proj.bias"] - new_state_dict["clip_img_in.weight"] = uvit_state_dict["clip_img_embed.weight"] - new_state_dict["clip_img_in.bias"] = uvit_state_dict["clip_img_embed.bias"] - new_state_dict["text_in.weight"] = uvit_state_dict["text_embed.weight"] - new_state_dict["text_in.bias"] = uvit_state_dict["text_embed.bias"] - - new_state_dict["pos_embed"] = uvit_state_dict["pos_embed"] - - # Handle data type token embeddings for UniDiffuser-v1 - if "token_embedding.weight" in uvit_state_dict and diffusers_model.use_data_type_embedding: - new_state_dict["data_type_pos_embed_token"] = uvit_state_dict["pos_embed_token"] - new_state_dict["data_type_token_embedding.weight"] = uvit_state_dict["token_embedding.weight"] - - # Also initialize the PatchEmbedding in UTransformer2DModel with the PatchEmbedding from the checkpoint. - # This isn't used in the current implementation, so might want to remove. 
- new_state_dict["transformer.pos_embed.proj.weight"] = uvit_state_dict["patch_embed.proj.weight"] - new_state_dict["transformer.pos_embed.proj.bias"] = uvit_state_dict["patch_embed.proj.bias"] - - # Output layers - new_state_dict["transformer.norm_out.weight"] = uvit_state_dict["norm.weight"] - new_state_dict["transformer.norm_out.bias"] = uvit_state_dict["norm.bias"] - - new_state_dict["vae_img_out.weight"] = uvit_state_dict["decoder_pred.weight"] - new_state_dict["vae_img_out.bias"] = uvit_state_dict["decoder_pred.bias"] - new_state_dict["clip_img_out.weight"] = uvit_state_dict["clip_img_out.weight"] - new_state_dict["clip_img_out.bias"] = uvit_state_dict["clip_img_out.bias"] - new_state_dict["text_out.weight"] = uvit_state_dict["text_out.weight"] - new_state_dict["text_out.bias"] = uvit_state_dict["text_out.bias"] - - # in_blocks - in_blocks_prefixes = {".".join(layer.split(".")[:2]) for layer in uvit_state_dict if "in_blocks" in layer} - for in_block_prefix in list(in_blocks_prefixes): - convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, in_block_prefix) - - # mid_block - # Assume there's only one mid block - convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, "mid_block") - - # out_blocks - out_blocks_prefixes = {".".join(layer.split(".")[:2]) for layer in uvit_state_dict if "out_blocks" in layer} - for out_block_prefix in list(out_blocks_prefixes): - convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, out_block_prefix, skip_connection=True) - - missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_state_dict) - for missing_key in missing_keys: - print(f"Missing key: {missing_key}") - for unexpected_key in unexpected_keys: - print(f"Unexpected key: {unexpected_key}") - - return diffusers_model - - -def convert_caption_decoder_to_diffusers(ckpt, diffusers_model): - """ - Converts a UniDiffuser caption_decoder.pth checkpoint to a diffusers UniDiffuserTextDecoder. - """ - # caption_decoder.pth ckpt is a torch state dict - checkpoint_state_dict = torch.load(ckpt, map_location="cpu") - decoder_state_dict = {} - # Remove the "module." prefix, if necessary - caption_decoder_key = "module." 
- for key in checkpoint_state_dict: - if key.startswith(caption_decoder_key): - decoder_state_dict[key.replace(caption_decoder_key, "")] = checkpoint_state_dict.get(key) - else: - decoder_state_dict[key] = checkpoint_state_dict.get(key) - - new_state_dict = {} - - # Encoder and Decoder - new_state_dict["encode_prefix.weight"] = decoder_state_dict["encode_prefix.weight"] - new_state_dict["encode_prefix.bias"] = decoder_state_dict["encode_prefix.bias"] - new_state_dict["decode_prefix.weight"] = decoder_state_dict["decode_prefix.weight"] - new_state_dict["decode_prefix.bias"] = decoder_state_dict["decode_prefix.bias"] - - # Internal GPT2LMHeadModel transformer model - for key, val in decoder_state_dict.items(): - if key.startswith("gpt"): - suffix = key[len("gpt") :] - new_state_dict["transformer" + suffix] = val - - missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_state_dict) - for missing_key in missing_keys: - print(f"Missing key: {missing_key}") - for unexpected_key in unexpected_keys: - print(f"Unexpected key: {unexpected_key}") - - return diffusers_model - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--caption_decoder_checkpoint_path", - default=None, - type=str, - required=False, - help="Path to caption decoder checkpoint to convert.", - ) - parser.add_argument( - "--uvit_checkpoint_path", default=None, type=str, required=False, help="Path to U-ViT checkpoint to convert." - ) - parser.add_argument( - "--vae_checkpoint_path", - default=None, - type=str, - required=False, - help="Path to VAE checkpoint to convert.", - ) - parser.add_argument( - "--pipeline_output_path", - default=None, - type=str, - required=True, - help="Path to save the output pipeline to.", - ) - parser.add_argument( - "--config_type", - default="test", - type=str, - help=( - "Config type to use. Should be 'test' to create small models for testing or 'big' to convert a full" - " checkpoint." - ), - ) - parser.add_argument( - "--version", - default=0, - type=int, - help="The UniDiffuser model type to convert to. Should be 0 for UniDiffuser-v0 and 1 for UniDiffuser-v1.", - ) - - args = parser.parse_args() - - # Convert the VAE model. - if args.vae_checkpoint_path is not None: - vae_config = create_vae_diffusers_config(args.config_type) - vae = AutoencoderKL(**vae_config) - vae = convert_vae_to_diffusers(args.vae_checkpoint_path, vae) - - # Convert the U-ViT ("unet") model. - if args.uvit_checkpoint_path is not None: - unet_config = create_unidiffuser_unet_config(args.config_type, args.version) - unet = UniDiffuserModel(**unet_config) - unet = convert_uvit_to_diffusers(args.uvit_checkpoint_path, unet) - - # Convert the caption decoder ("text_decoder") model. - if args.caption_decoder_checkpoint_path is not None: - text_decoder_config = create_text_decoder_config(args.config_type) - text_decoder = UniDiffuserTextDecoder(**text_decoder_config) - text_decoder = convert_caption_decoder_to_diffusers(args.caption_decoder_checkpoint_path, text_decoder) - - # Scheduler is the same for both the test and big models. 
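    # The scheduler below is built from the SCHEDULER_CONFIG Namespace hardcoded at the top of
    # this script (scaled_linear betas from 0.00085 to 0.012, solver_order=3); the same settings
    # are reused for both the 'test' and 'big' configs.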
- scheduler_config = SCHEDULER_CONFIG - scheduler = DPMSolverMultistepScheduler( - beta_start=scheduler_config.beta_start, - beta_end=scheduler_config.beta_end, - beta_schedule=scheduler_config.beta_schedule, - solver_order=scheduler_config.solver_order, - ) - - if args.config_type == "test": - # Make a small random CLIPTextModel - torch.manual_seed(0) - clip_text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - text_encoder = CLIPTextModel(clip_text_encoder_config) - clip_tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - # Make a small random CLIPVisionModel and accompanying CLIPImageProcessor - torch.manual_seed(0) - clip_image_encoder_config = CLIPVisionConfig( - image_size=32, - patch_size=2, - num_channels=3, - hidden_size=32, - projection_dim=32, - num_hidden_layers=5, - num_attention_heads=4, - intermediate_size=37, - dropout=0.1, - attention_dropout=0.1, - initializer_range=0.02, - ) - image_encoder = CLIPVisionModelWithProjection(clip_image_encoder_config) - image_processor = CLIPImageProcessor(crop_size=32, size=32) - - # Note that the text_decoder should already have its token embeddings resized. - text_tokenizer = GPT2Tokenizer.from_pretrained("hf-internal-testing/tiny-random-GPT2Model") - eos = "<|EOS|>" - special_tokens_dict = {"eos_token": eos} - text_tokenizer.add_special_tokens(special_tokens_dict) - elif args.config_type == "big": - text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14") - clip_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14") - - image_encoder = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32") - image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32") - - # Note that the text_decoder should already have its token embeddings resized. - text_tokenizer = GPT2Tokenizer.from_pretrained("gpt2") - eos = "<|EOS|>" - special_tokens_dict = {"eos_token": eos} - text_tokenizer.add_special_tokens(special_tokens_dict) - else: - raise NotImplementedError( - f"Config type {args.config_type} is not implemented, currently only config types" - " 'test' and 'big' are available." 
- ) - - pipeline = UniDiffuserPipeline( - vae=vae, - text_encoder=text_encoder, - image_encoder=image_encoder, - image_processor=image_processor, - clip_tokenizer=clip_tokenizer, - text_decoder=text_decoder, - text_tokenizer=text_tokenizer, - unet=unet, - scheduler=scheduler, - ) - pipeline.save_pretrained(args.pipeline_output_path) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/ddpm/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/ddpm/__init__.py deleted file mode 100644 index bb228ee012e80493b617b314c867ecadba7ca1ce..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/ddpm/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .pipeline_ddpm import DDPMPipeline diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_superresolution.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_superresolution.py deleted file mode 100644 index 52fb3830889242fd32ce5dbacdebf89d5d9dcc52..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_superresolution.py +++ /dev/null @@ -1,83 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import random -import unittest - -import torch - -from diffusers import IFSuperResolutionPipeline -from diffusers.utils import floats_tensor -from diffusers.utils.import_utils import is_xformers_available -from diffusers.utils.testing_utils import skip_mps, torch_device - -from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS -from ..test_pipelines_common import PipelineTesterMixin -from . 
import IFPipelineTesterMixin - - -@skip_mps -class IFSuperResolutionPipelineFastTests(PipelineTesterMixin, IFPipelineTesterMixin, unittest.TestCase): - pipeline_class = IFSuperResolutionPipeline - params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"width", "height"} - batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS - required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"} - - def get_dummy_components(self): - return self._get_superresolution_dummy_components() - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - - image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device) - - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "image": image, - "generator": generator, - "num_inference_steps": 2, - "output_type": "numpy", - } - - return inputs - - @unittest.skipIf( - torch_device != "cuda" or not is_xformers_available(), - reason="XFormers attention is only available with CUDA and `xformers` installed", - ) - def test_xformers_attention_forwardGenerator_pass(self): - self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=1e-3) - - def test_save_load_optional_components(self): - self._test_save_load_optional_components() - - @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA") - def test_save_load_float16(self): - # Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder - super().test_save_load_float16(expected_max_diff=1e-1) - - def test_attention_slicing_forward_pass(self): - self._test_attention_slicing_forward_pass(expected_max_diff=1e-2) - - def test_save_load_local(self): - self._test_save_load_local() - - def test_inference_batch_single_identical(self): - self._test_inference_batch_single_identical( - expected_max_diff=1e-2, - ) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_0010_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_0010_1x_coco.py deleted file mode 100644 index a544e3ab636aea0efe56007a0ea40608b6e71ad4..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_0010_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - backbone=dict(plugins=[ - dict( - cfg=dict( - type='GeneralizedAttention', - spatial_range=-1, - num_heads=8, - attention_type='0010', - kv_stride=2), - stages=(False, False, True, True), - position='after_conv2') - ])) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco.py deleted file mode 100644 index f26062fda282fda420a5f48bbc12bfe4efe57c0a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco.py +++ /dev/null @@ -1,71 +0,0 @@ -_base_ = [ - '../_base_/models/retinanet_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py' -] -# model settings -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='torchvision://resnet101', - 
backbone=dict(depth=101), - bbox_head=dict( - _delete_=True, - type='SABLRetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - norm_cfg=norm_cfg, - bbox_coder=dict( - type='BucketingBBoxCoder', num_buckets=14, scale_factor=3.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.5), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0.0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False)) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 800)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -data = dict(train=dict(pipeline=train_pipeline)) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/dii_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/dii_head.py deleted file mode 100644 index 8c970a78184672aaaa95edcdaecec03a26604390..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/dii_head.py +++ /dev/null @@ -1,415 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import (bias_init_with_prob, build_activation_layer, - build_norm_layer) -from mmcv.runner import auto_fp16, force_fp32 - -from mmdet.core import multi_apply -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.dense_heads.atss_head import reduce_mean -from mmdet.models.losses import accuracy -from mmdet.models.utils import FFN, MultiheadAttention, build_transformer -from .bbox_head import BBoxHead - - -@HEADS.register_module() -class DIIHead(BBoxHead): - r"""Dynamic Instance Interactive Head for `Sparse R-CNN: End-to-End Object - Detection with Learnable Proposals `_ - - Args: - num_classes (int): Number of class in dataset. - Defaults to 80. - num_ffn_fcs (int): The number of fully-connected - layers in FFNs. Defaults to 2. - num_heads (int): The hidden dimension of FFNs. - Defaults to 8. - num_cls_fcs (int): The number of fully-connected - layers in classification subnet. Defaults to 1. - num_reg_fcs (int): The number of fully-connected - layers in regression subnet. Defaults to 3. - feedforward_channels (int): The hidden dimension - of FFNs. Defaults to 2048 - in_channels (int): Hidden_channels of MultiheadAttention. - Defaults to 256. - dropout (float): Probability of drop the channel. - Defaults to 0.0 - ffn_act_cfg (dict): The activation config for FFNs. 
- dynamic_conv_cfg (dict): The convolution config - for DynamicConv. - loss_iou (dict): The config for iou or giou loss. - - """ - - def __init__(self, - num_classes=80, - num_ffn_fcs=2, - num_heads=8, - num_cls_fcs=1, - num_reg_fcs=3, - feedforward_channels=2048, - in_channels=256, - dropout=0.0, - ffn_act_cfg=dict(type='ReLU', inplace=True), - dynamic_conv_cfg=dict( - type='DynamicConv', - in_channels=256, - feat_channels=64, - out_channels=256, - input_feat_shape=7, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - **kwargs): - super(DIIHead, self).__init__( - num_classes=num_classes, - reg_decoded_bbox=True, - reg_class_agnostic=True, - **kwargs) - self.loss_iou = build_loss(loss_iou) - self.in_channels = in_channels - self.fp16_enabled = False - self.attention = MultiheadAttention(in_channels, num_heads, dropout) - self.attention_norm = build_norm_layer(dict(type='LN'), in_channels)[1] - - self.instance_interactive_conv = build_transformer(dynamic_conv_cfg) - self.instance_interactive_conv_dropout = nn.Dropout(dropout) - self.instance_interactive_conv_norm = build_norm_layer( - dict(type='LN'), in_channels)[1] - - self.ffn = FFN( - in_channels, - feedforward_channels, - num_ffn_fcs, - act_cfg=ffn_act_cfg, - dropout=dropout) - self.ffn_norm = build_norm_layer(dict(type='LN'), in_channels)[1] - - self.cls_fcs = nn.ModuleList() - for _ in range(num_cls_fcs): - self.cls_fcs.append( - nn.Linear(in_channels, in_channels, bias=False)) - self.cls_fcs.append( - build_norm_layer(dict(type='LN'), in_channels)[1]) - self.cls_fcs.append( - build_activation_layer(dict(type='ReLU', inplace=True))) - - # over load the self.fc_cls in BBoxHead - if self.loss_cls.use_sigmoid: - self.fc_cls = nn.Linear(in_channels, self.num_classes) - else: - self.fc_cls = nn.Linear(in_channels, self.num_classes + 1) - - self.reg_fcs = nn.ModuleList() - for _ in range(num_reg_fcs): - self.reg_fcs.append( - nn.Linear(in_channels, in_channels, bias=False)) - self.reg_fcs.append( - build_norm_layer(dict(type='LN'), in_channels)[1]) - self.reg_fcs.append( - build_activation_layer(dict(type='ReLU', inplace=True))) - # over load the self.fc_cls in BBoxHead - self.fc_reg = nn.Linear(in_channels, 4) - - assert self.reg_class_agnostic, 'DIIHead only ' \ - 'suppport `reg_class_agnostic=True` ' - assert self.reg_decoded_bbox, 'DIIHead only ' \ - 'suppport `reg_decoded_bbox=True`' - - def init_weights(self): - """Use xavier initialization for all weight parameter and set - classification head bias as a specific value when use focal loss.""" - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - else: - # adopt the default initialization for - # the weight and bias of the layer norm - pass - if self.loss_cls.use_sigmoid: - bias_init = bias_init_with_prob(0.01) - nn.init.constant_(self.fc_cls.bias, bias_init) - - @auto_fp16() - def forward(self, roi_feat, proposal_feat): - """Forward function of Dynamic Instance Interactive Head. - - Args: - roi_feat (Tensor): Roi-pooling features with shape - (batch_size*num_proposals, feature_dimensions, - pooling_h , pooling_w). - proposal_feat (Tensor): Intermediate feature get from - diihead in last stage, has shape - (batch_size, num_proposals, feature_dimensions) - - Returns: - tuple[Tensor]: Usually a tuple of classification scores - and bbox prediction and a intermediate feature. 
- - - cls_scores (Tensor): Classification scores for - all proposals, has shape - (batch_size, num_proposals, num_classes). - - bbox_preds (Tensor): Box energies / deltas for - all proposals, has shape - (batch_size, num_proposals, 4). - - obj_feat (Tensor): Object feature before classification - and regression subnet, has shape - (batch_size, num_proposal, feature_dimensions). - """ - N, num_proposals = proposal_feat.shape[:2] - - # Self attention - proposal_feat = proposal_feat.permute(1, 0, 2) - proposal_feat = self.attention_norm(self.attention(proposal_feat)) - - # instance interactive - proposal_feat = proposal_feat.permute(1, 0, - 2).reshape(-1, self.in_channels) - proposal_feat_iic = self.instance_interactive_conv( - proposal_feat, roi_feat) - proposal_feat = proposal_feat + self.instance_interactive_conv_dropout( - proposal_feat_iic) - obj_feat = self.instance_interactive_conv_norm(proposal_feat) - - # FFN - obj_feat = self.ffn_norm(self.ffn(obj_feat)) - - cls_feat = obj_feat - reg_feat = obj_feat - - for cls_layer in self.cls_fcs: - cls_feat = cls_layer(cls_feat) - for reg_layer in self.reg_fcs: - reg_feat = reg_layer(reg_feat) - - cls_score = self.fc_cls(cls_feat).view(N, num_proposals, -1) - bbox_delta = self.fc_reg(reg_feat).view(N, num_proposals, -1) - - return cls_score, bbox_delta, obj_feat.view(N, num_proposals, -1) - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def loss(self, - cls_score, - bbox_pred, - labels, - label_weights, - bbox_targets, - bbox_weights, - imgs_whwh=None, - reduction_override=None, - **kwargs): - """"Loss function of DIIHead, get loss of all images. - - Args: - cls_score (Tensor): Classification prediction - results of all class, has shape - (batch_size * num_proposals_single_image, num_classes) - bbox_pred (Tensor): Regression prediction results, - has shape - (batch_size * num_proposals_single_image, 4), the last - dimension 4 represents [tl_x, tl_y, br_x, br_y]. - labels (Tensor): Label of each proposals, has shape - (batch_size * num_proposals_single_image - label_weights (Tensor): Classification loss - weight of each proposals, has shape - (batch_size * num_proposals_single_image - bbox_targets (Tensor): Regression targets of each - proposals, has shape - (batch_size * num_proposals_single_image, 4), - the last dimension 4 represents - [tl_x, tl_y, br_x, br_y]. - bbox_weights (Tensor): Regression loss weight of each - proposals's coordinate, has shape - (batch_size * num_proposals_single_image, 4), - imgs_whwh (Tensor): imgs_whwh (Tensor): Tensor with\ - shape (batch_size, num_proposals, 4), the last - dimension means - [img_width,img_height, img_width, img_height]. - reduction_override (str, optional): The reduction - method used to override the original reduction - method of the loss. Options are "none", - "mean" and "sum". Defaults to None, - - Returns: - dict[str, Tensor]: Dictionary of loss components - """ - losses = dict() - bg_class_ind = self.num_classes - # note in spare rcnn num_gt == num_pos - pos_inds = (labels >= 0) & (labels < bg_class_ind) - num_pos = pos_inds.sum().float() - avg_factor = reduce_mean(num_pos) - if cls_score is not None: - if cls_score.numel() > 0: - losses['loss_cls'] = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - losses['pos_acc'] = accuracy(cls_score[pos_inds], - labels[pos_inds]) - if bbox_pred is not None: - # 0~self.num_classes-1 are FG, self.num_classes is BG - # do not perform bounding box regression for BG anymore. 
- if pos_inds.any(): - pos_bbox_pred = bbox_pred.reshape(bbox_pred.size(0), - 4)[pos_inds.type(torch.bool)] - imgs_whwh = imgs_whwh.reshape(bbox_pred.size(0), - 4)[pos_inds.type(torch.bool)] - losses['loss_bbox'] = self.loss_bbox( - pos_bbox_pred / imgs_whwh, - bbox_targets[pos_inds.type(torch.bool)] / imgs_whwh, - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=avg_factor) - losses['loss_iou'] = self.loss_iou( - pos_bbox_pred, - bbox_targets[pos_inds.type(torch.bool)], - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=avg_factor) - else: - losses['loss_bbox'] = bbox_pred.sum() * 0 - losses['loss_iou'] = bbox_pred.sum() * 0 - return losses - - def _get_target_single(self, pos_inds, neg_inds, pos_bboxes, neg_bboxes, - pos_gt_bboxes, pos_gt_labels, cfg): - """Calculate the ground truth for proposals in the single image - according to the sampling results. - - Almost the same as the implementation in `bbox_head`, - we add pos_inds and neg_inds to select positive and - negative samples instead of selecting the first num_pos - as positive samples. - - Args: - pos_inds (Tensor): The length is equal to the - positive sample numbers contain all index - of the positive sample in the origin proposal set. - neg_inds (Tensor): The length is equal to the - negative sample numbers contain all index - of the negative sample in the origin proposal set. - pos_bboxes (Tensor): Contains all the positive boxes, - has shape (num_pos, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - neg_bboxes (Tensor): Contains all the negative boxes, - has shape (num_neg, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_bboxes (Tensor): Contains all the gt_boxes, - has shape (num_gt, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_labels (Tensor): Contains all the gt_labels, - has shape (num_gt). - cfg (obj:`ConfigDict`): `train_cfg` of R-CNN. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following Tensors: - - - labels(Tensor): Gt_labels for all proposals, has - shape (num_proposals,). - - label_weights(Tensor): Labels_weights for all proposals, has - shape (num_proposals,). - - bbox_targets(Tensor):Regression target for all proposals, has - shape (num_proposals, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights(Tensor):Regression weights for all proposals, - has shape (num_proposals, 4). 
- """ - num_pos = pos_bboxes.size(0) - num_neg = neg_bboxes.size(0) - num_samples = num_pos + num_neg - - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = pos_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_bboxes.new_zeros(num_samples) - bbox_targets = pos_bboxes.new_zeros(num_samples, 4) - bbox_weights = pos_bboxes.new_zeros(num_samples, 4) - if num_pos > 0: - labels[pos_inds] = pos_gt_labels - pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight - label_weights[pos_inds] = pos_weight - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - pos_bboxes, pos_gt_bboxes) - else: - pos_bbox_targets = pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1 - if num_neg > 0: - label_weights[neg_inds] = 1.0 - - return labels, label_weights, bbox_targets, bbox_weights - - def get_targets(self, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - concat=True): - """Calculate the ground truth for all samples in a batch according to - the sampling_results. - - Almost the same as the implementation in bbox_head, we passed - additional parameters pos_inds_list and neg_inds_list to - `_get_target_single` function. - - Args: - sampling_results (List[obj:SamplingResults]): Assign results of - all images in a batch after sampling. - gt_bboxes (list[Tensor]): Gt_bboxes of all images in a batch, - each tensor has shape (num_gt, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - gt_labels (list[Tensor]): Gt_labels of all images in a batch, - each tensor has shape (num_gt,). - rcnn_train_cfg (obj:`ConfigDict`): `train_cfg` of RCNN. - concat (bool): Whether to concatenate the results of all - the images in a single batch. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following list of Tensors: - - - labels (list[Tensor],Tensor): Gt_labels for all - proposals in a batch, each tensor in list has - shape (num_proposals,) when `concat=False`, otherwise just - a single tensor has shape (num_all_proposals,). - - label_weights (list[Tensor]): Labels_weights for - all proposals in a batch, each tensor in list has shape - (num_proposals,) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals,). - - bbox_targets (list[Tensor],Tensor): Regression target - for all proposals in a batch, each tensor in list has - shape (num_proposals, 4) when `concat=False`, otherwise - just a single tensor has shape (num_all_proposals, 4), - the last dimension 4 represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights (list[tensor],Tensor): Regression weights for - all proposals in a batch, each tensor in list has shape - (num_proposals, 4) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals, 4). 
- """ - pos_inds_list = [res.pos_inds for res in sampling_results] - neg_inds_list = [res.neg_inds for res in sampling_results] - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - neg_bboxes_list = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results] - labels, label_weights, bbox_targets, bbox_weights = multi_apply( - self._get_target_single, - pos_inds_list, - neg_inds_list, - pos_bboxes_list, - neg_bboxes_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bbox_targets = torch.cat(bbox_targets, 0) - bbox_weights = torch.cat(bbox_weights, 0) - return labels, label_weights, bbox_targets, bbox_weights diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/evaluation/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/evaluation/__init__.py deleted file mode 100644 index f7cc4b23413a0639e9de00eeb0bf600632d2c6cd..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/evaluation/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .class_names import get_classes, get_palette -from .eval_hooks import DistEvalHook, EvalHook -from .metrics import eval_metrics, mean_dice, mean_fscore, mean_iou - -__all__ = [ - 'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore', - 'eval_metrics', 'get_classes', 'get_palette' -] diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-local-docker.sh b/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-local-docker.sh deleted file mode 100644 index 3b58692f7acbd8200f8bd7e0f77166284a964e7d..0000000000000000000000000000000000000000 --- a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-local-docker.sh +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/env bash -source $(dirname $0)/common_header.sh - -# During development ONLY use a `bind mount` to enable -# editing the code without having to rebuild the container. -docker run --rm -it \ - -p 8501:8501 -p 7860:7860 \ - --env-file ${ROOT_DIRECTORY}/.env \ - --mount type=volume,src=$VOLUME_NAME,dst=/data \ - --mount type=bind,source=${ROOT_DIRECTORY}/src/,target=/app/,readonly \ - --mount type=bind,source=${ROOT_DIRECTORY}/.streamlit,target=/user/.streamlit,readonly \ - $CONTAINER_NAME:latest \ - $@ # Pass all command line argument quoted in a good way - - diff --git a/spaces/Arun1217/mygenaiapp/app.py b/spaces/Arun1217/mygenaiapp/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/Arun1217/mygenaiapp/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. 
-{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatter.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatter.py deleted file mode 100644 index a2349ef8652c659388ba69477c01989f2e4ce17d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatter.py +++ /dev/null @@ -1,94 +0,0 @@ -""" - pygments.formatter - ~~~~~~~~~~~~~~~~~~ - - Base formatter class. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import codecs - -from pip._vendor.pygments.util import get_bool_opt -from pip._vendor.pygments.styles import get_style_by_name - -__all__ = ['Formatter'] - - -def _lookup_style(style): - if isinstance(style, str): - return get_style_by_name(style) - return style - - -class Formatter: - """ - Converts a token stream to text. - - Options accepted: - - ``style`` - The style to use, can be a string or a Style subclass - (default: "default"). Not used by e.g. the - TerminalFormatter. - ``full`` - Tells the formatter to output a "full" document, i.e. - a complete self-contained document. This doesn't have - any effect for some formatters (default: false). - ``title`` - If ``full`` is true, the title that should be used to - caption the document (default: ''). - ``encoding`` - If given, must be an encoding name. This will be used to - convert the Unicode token strings to byte strings in the - output. If it is "" or None, Unicode strings will be written - to the output file, which most file-like objects do not - support (default: None). - ``outencoding`` - Overrides ``encoding`` if given. - """ - - #: Name of the formatter - name = None - - #: Shortcuts for the formatter - aliases = [] - - #: fn match rules - filenames = [] - - #: If True, this formatter outputs Unicode strings when no encoding - #: option is given. - unicodeoutput = True - - def __init__(self, **options): - self.style = _lookup_style(options.get('style', 'default')) - self.full = get_bool_opt(options, 'full', False) - self.title = options.get('title', '') - self.encoding = options.get('encoding', None) or None - if self.encoding in ('guess', 'chardet'): - # can happen for e.g. pygmentize -O encoding=guess - self.encoding = 'utf-8' - self.encoding = options.get('outencoding') or self.encoding - self.options = options - - def get_style_defs(self, arg=''): - """ - Return the style definitions for the current style as a string. - - ``arg`` is an additional argument whose meaning depends on the - formatter used. Note that ``arg`` can also be a list or tuple - for some formatters like the html formatter. 
- """ - return '' - - def format(self, tokensource, outfile): - """ - Format ``tokensource``, an iterable of ``(tokentype, tokenstring)`` - tuples and write it into ``outfile``. - """ - if self.encoding: - # wrap the outfile in a StreamWriter - outfile = codecs.lookup(self.encoding)[3](outfile) - return self.format_unencoded(tokensource, outfile) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/retry.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/retry.py deleted file mode 100644 index 2490d5e5b63359a7f826922dc69c0015cb9a5b2e..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/retry.py +++ /dev/null @@ -1,620 +0,0 @@ -from __future__ import absolute_import - -import email -import logging -import re -import time -import warnings -from collections import namedtuple -from itertools import takewhile - -from ..exceptions import ( - ConnectTimeoutError, - InvalidHeader, - MaxRetryError, - ProtocolError, - ProxyError, - ReadTimeoutError, - ResponseError, -) -from ..packages import six - -log = logging.getLogger(__name__) - - -# Data structure for representing the metadata of requests that result in a retry. -RequestHistory = namedtuple( - "RequestHistory", ["method", "url", "error", "status", "redirect_location"] -) - - -# TODO: In v2 we can remove this sentinel and metaclass with deprecated options. -_Default = object() - - -class _RetryMeta(type): - @property - def DEFAULT_METHOD_WHITELIST(cls): - warnings.warn( - "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", - DeprecationWarning, - ) - return cls.DEFAULT_ALLOWED_METHODS - - @DEFAULT_METHOD_WHITELIST.setter - def DEFAULT_METHOD_WHITELIST(cls, value): - warnings.warn( - "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", - DeprecationWarning, - ) - cls.DEFAULT_ALLOWED_METHODS = value - - @property - def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls): - warnings.warn( - "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", - DeprecationWarning, - ) - return cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - @DEFAULT_REDIRECT_HEADERS_BLACKLIST.setter - def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls, value): - warnings.warn( - "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", - DeprecationWarning, - ) - cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT = value - - @property - def BACKOFF_MAX(cls): - warnings.warn( - "Using 'Retry.BACKOFF_MAX' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", - DeprecationWarning, - ) - return cls.DEFAULT_BACKOFF_MAX - - @BACKOFF_MAX.setter - def BACKOFF_MAX(cls, value): - warnings.warn( - "Using 'Retry.BACKOFF_MAX' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", - DeprecationWarning, - ) - cls.DEFAULT_BACKOFF_MAX = value - - -@six.add_metaclass(_RetryMeta) -class Retry(object): - """Retry configuration. - - Each retry attempt will create a new Retry object with updated values, so - they can be safely reused. 
- - Retries can be defined as a default for a pool:: - - retries = Retry(connect=5, read=2, redirect=5) - http = PoolManager(retries=retries) - response = http.request('GET', 'http://example.com/') - - Or per-request (which overrides the default for the pool):: - - response = http.request('GET', 'http://example.com/', retries=Retry(10)) - - Retries can be disabled by passing ``False``:: - - response = http.request('GET', 'http://example.com/', retries=False) - - Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless - retries are disabled, in which case the causing exception will be raised. - - :param int total: - Total number of retries to allow. Takes precedence over other counts. - - Set to ``None`` to remove this constraint and fall back on other - counts. - - Set to ``0`` to fail on the first retry. - - Set to ``False`` to disable and imply ``raise_on_redirect=False``. - - :param int connect: - How many connection-related errors to retry on. - - These are errors raised before the request is sent to the remote server, - which we assume has not triggered the server to process the request. - - Set to ``0`` to fail on the first retry of this type. - - :param int read: - How many times to retry on read errors. - - These errors are raised after the request was sent to the server, so the - request may have side-effects. - - Set to ``0`` to fail on the first retry of this type. - - :param int redirect: - How many redirects to perform. Limit this to avoid infinite redirect - loops. - - A redirect is a HTTP response with a status code 301, 302, 303, 307 or - 308. - - Set to ``0`` to fail on the first retry of this type. - - Set to ``False`` to disable and imply ``raise_on_redirect=False``. - - :param int status: - How many times to retry on bad status codes. - - These are retries made on responses, where status code matches - ``status_forcelist``. - - Set to ``0`` to fail on the first retry of this type. - - :param int other: - How many times to retry on other errors. - - Other errors are errors that are not connect, read, redirect or status errors. - These errors might be raised after the request was sent to the server, so the - request might have side-effects. - - Set to ``0`` to fail on the first retry of this type. - - If ``total`` is not set, it's a good idea to set this to 0 to account - for unexpected edge cases and avoid infinite retry loops. - - :param iterable allowed_methods: - Set of uppercased HTTP method verbs that we should retry on. - - By default, we only retry on methods which are considered to be - idempotent (multiple requests with the same parameters end with the - same state). See :attr:`Retry.DEFAULT_ALLOWED_METHODS`. - - Set to a ``False`` value to retry on any verb. - - .. warning:: - - Previously this parameter was named ``method_whitelist``, that - usage is deprecated in v1.26.0 and will be removed in v2.0. - - :param iterable status_forcelist: - A set of integer HTTP status codes that we should force a retry on. - A retry is initiated if the request method is in ``allowed_methods`` - and the response status code is in ``status_forcelist``. - - By default, this is disabled with ``None``. - - :param float backoff_factor: - A backoff factor to apply between attempts after the second try - (most errors are resolved immediately by a second try without a - delay). urllib3 will sleep for:: - - {backoff factor} * (2 ** ({number of total retries} - 1)) - - seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep - for [0.0s, 0.2s, 0.4s, ...] 
between retries. It will never be longer - than :attr:`Retry.DEFAULT_BACKOFF_MAX`. - - By default, backoff is disabled (set to 0). - - :param bool raise_on_redirect: Whether, if the number of redirects is - exhausted, to raise a MaxRetryError, or to return a response with a - response code in the 3xx range. - - :param bool raise_on_status: Similar meaning to ``raise_on_redirect``: - whether we should raise an exception, or return a response, - if status falls in ``status_forcelist`` range and retries have - been exhausted. - - :param tuple history: The history of the request encountered during - each call to :meth:`~Retry.increment`. The list is in the order - the requests occurred. Each list item is of class :class:`RequestHistory`. - - :param bool respect_retry_after_header: - Whether to respect Retry-After header on status codes defined as - :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not. - - :param iterable remove_headers_on_redirect: - Sequence of headers to remove from the request when a response - indicating a redirect is returned before firing off the redirected - request. - """ - - #: Default methods to be used for ``allowed_methods`` - DEFAULT_ALLOWED_METHODS = frozenset( - ["HEAD", "GET", "PUT", "DELETE", "OPTIONS", "TRACE"] - ) - - #: Default status codes to be used for ``status_forcelist`` - RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503]) - - #: Default headers to be used for ``remove_headers_on_redirect`` - DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Authorization"]) - - #: Maximum backoff time. - DEFAULT_BACKOFF_MAX = 120 - - def __init__( - self, - total=10, - connect=None, - read=None, - redirect=None, - status=None, - other=None, - allowed_methods=_Default, - status_forcelist=None, - backoff_factor=0, - raise_on_redirect=True, - raise_on_status=True, - history=None, - respect_retry_after_header=True, - remove_headers_on_redirect=_Default, - # TODO: Deprecated, remove in v2.0 - method_whitelist=_Default, - ): - - if method_whitelist is not _Default: - if allowed_methods is not _Default: - raise ValueError( - "Using both 'allowed_methods' and " - "'method_whitelist' together is not allowed. " - "Instead only use 'allowed_methods'" - ) - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. 
Use 'allowed_methods' instead", - DeprecationWarning, - stacklevel=2, - ) - allowed_methods = method_whitelist - if allowed_methods is _Default: - allowed_methods = self.DEFAULT_ALLOWED_METHODS - if remove_headers_on_redirect is _Default: - remove_headers_on_redirect = self.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - self.total = total - self.connect = connect - self.read = read - self.status = status - self.other = other - - if redirect is False or total is False: - redirect = 0 - raise_on_redirect = False - - self.redirect = redirect - self.status_forcelist = status_forcelist or set() - self.allowed_methods = allowed_methods - self.backoff_factor = backoff_factor - self.raise_on_redirect = raise_on_redirect - self.raise_on_status = raise_on_status - self.history = history or tuple() - self.respect_retry_after_header = respect_retry_after_header - self.remove_headers_on_redirect = frozenset( - [h.lower() for h in remove_headers_on_redirect] - ) - - def new(self, **kw): - params = dict( - total=self.total, - connect=self.connect, - read=self.read, - redirect=self.redirect, - status=self.status, - other=self.other, - status_forcelist=self.status_forcelist, - backoff_factor=self.backoff_factor, - raise_on_redirect=self.raise_on_redirect, - raise_on_status=self.raise_on_status, - history=self.history, - remove_headers_on_redirect=self.remove_headers_on_redirect, - respect_retry_after_header=self.respect_retry_after_header, - ) - - # TODO: If already given in **kw we use what's given to us - # If not given we need to figure out what to pass. We decide - # based on whether our class has the 'method_whitelist' property - # and if so we pass the deprecated 'method_whitelist' otherwise - # we use 'allowed_methods'. Remove in v2.0 - if "method_whitelist" not in kw and "allowed_methods" not in kw: - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - params["method_whitelist"] = self.allowed_methods - else: - params["allowed_methods"] = self.allowed_methods - - params.update(kw) - return type(self)(**params) - - @classmethod - def from_int(cls, retries, redirect=True, default=None): - """Backwards-compatibility for the old retries format.""" - if retries is None: - retries = default if default is not None else cls.DEFAULT - - if isinstance(retries, Retry): - return retries - - redirect = bool(redirect) and None - new_retries = cls(retries, redirect=redirect) - log.debug("Converted retries value: %r -> %r", retries, new_retries) - return new_retries - - def get_backoff_time(self): - """Formula for computing the current backoff - - :rtype: float - """ - # We want to consider only the last consecutive errors sequence (Ignore redirects). 
- consecutive_errors_len = len( - list( - takewhile(lambda x: x.redirect_location is None, reversed(self.history)) - ) - ) - if consecutive_errors_len <= 1: - return 0 - - backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1)) - return min(self.DEFAULT_BACKOFF_MAX, backoff_value) - - def parse_retry_after(self, retry_after): - # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4 - if re.match(r"^\s*[0-9]+\s*$", retry_after): - seconds = int(retry_after) - else: - retry_date_tuple = email.utils.parsedate_tz(retry_after) - if retry_date_tuple is None: - raise InvalidHeader("Invalid Retry-After header: %s" % retry_after) - if retry_date_tuple[9] is None: # Python 2 - # Assume UTC if no timezone was specified - # On Python2.7, parsedate_tz returns None for a timezone offset - # instead of 0 if no timezone is given, where mktime_tz treats - # a None timezone offset as local time. - retry_date_tuple = retry_date_tuple[:9] + (0,) + retry_date_tuple[10:] - - retry_date = email.utils.mktime_tz(retry_date_tuple) - seconds = retry_date - time.time() - - if seconds < 0: - seconds = 0 - - return seconds - - def get_retry_after(self, response): - """Get the value of Retry-After in seconds.""" - - retry_after = response.headers.get("Retry-After") - - if retry_after is None: - return None - - return self.parse_retry_after(retry_after) - - def sleep_for_retry(self, response=None): - retry_after = self.get_retry_after(response) - if retry_after: - time.sleep(retry_after) - return True - - return False - - def _sleep_backoff(self): - backoff = self.get_backoff_time() - if backoff <= 0: - return - time.sleep(backoff) - - def sleep(self, response=None): - """Sleep between retry attempts. - - This method will respect a server's ``Retry-After`` response header - and sleep the duration of the time requested. If that is not present, it - will use an exponential backoff. By default, the backoff factor is 0 and - this method will return immediately. - """ - - if self.respect_retry_after_header and response: - slept = self.sleep_for_retry(response) - if slept: - return - - self._sleep_backoff() - - def _is_connection_error(self, err): - """Errors when we're fairly sure that the server did not receive the - request, so it should be safe to retry. - """ - if isinstance(err, ProxyError): - err = err.original_error - return isinstance(err, ConnectTimeoutError) - - def _is_read_error(self, err): - """Errors that occur after the request has been started, so we should - assume that the server began processing it. - """ - return isinstance(err, (ReadTimeoutError, ProtocolError)) - - def _is_method_retryable(self, method): - """Checks if a given HTTP method should be retried upon, depending if - it is included in the allowed_methods - """ - # TODO: For now favor if the Retry implementation sets its own method_whitelist - # property outside of our constructor to avoid breaking custom implementations. - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - allowed_methods = self.method_whitelist - else: - allowed_methods = self.allowed_methods - - if allowed_methods and method.upper() not in allowed_methods: - return False - return True - - def is_retry(self, method, status_code, has_retry_after=False): - """Is this method/status code retryable? 
(Based on allowlists and control - variables such as the number of total retries to allow, whether to - respect the Retry-After header, whether this header is present, and - whether the returned status code is on the list of status codes to - be retried upon on the presence of the aforementioned header) - """ - if not self._is_method_retryable(method): - return False - - if self.status_forcelist and status_code in self.status_forcelist: - return True - - return ( - self.total - and self.respect_retry_after_header - and has_retry_after - and (status_code in self.RETRY_AFTER_STATUS_CODES) - ) - - def is_exhausted(self): - """Are we out of retries?""" - retry_counts = ( - self.total, - self.connect, - self.read, - self.redirect, - self.status, - self.other, - ) - retry_counts = list(filter(None, retry_counts)) - if not retry_counts: - return False - - return min(retry_counts) < 0 - - def increment( - self, - method=None, - url=None, - response=None, - error=None, - _pool=None, - _stacktrace=None, - ): - """Return a new Retry object with incremented retry counters. - - :param response: A response object, or None, if the server did not - return a response. - :type response: :class:`~urllib3.response.HTTPResponse` - :param Exception error: An error encountered during the request, or - None if the response was received successfully. - - :return: A new ``Retry`` object. - """ - if self.total is False and error: - # Disabled, indicate to re-raise the error. - raise six.reraise(type(error), error, _stacktrace) - - total = self.total - if total is not None: - total -= 1 - - connect = self.connect - read = self.read - redirect = self.redirect - status_count = self.status - other = self.other - cause = "unknown" - status = None - redirect_location = None - - if error and self._is_connection_error(error): - # Connect retry? - if connect is False: - raise six.reraise(type(error), error, _stacktrace) - elif connect is not None: - connect -= 1 - - elif error and self._is_read_error(error): - # Read retry? - if read is False or not self._is_method_retryable(method): - raise six.reraise(type(error), error, _stacktrace) - elif read is not None: - read -= 1 - - elif error: - # Other retry? - if other is not None: - other -= 1 - - elif response and response.get_redirect_location(): - # Redirect retry? 
- if redirect is not None: - redirect -= 1 - cause = "too many redirects" - redirect_location = response.get_redirect_location() - status = response.status - - else: - # Incrementing because of a server error like a 500 in - # status_forcelist and the given method is in the allowed_methods - cause = ResponseError.GENERIC_ERROR - if response and response.status: - if status_count is not None: - status_count -= 1 - cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) - status = response.status - - history = self.history + ( - RequestHistory(method, url, error, status, redirect_location), - ) - - new_retry = self.new( - total=total, - connect=connect, - read=read, - redirect=redirect, - status=status_count, - other=other, - history=history, - ) - - if new_retry.is_exhausted(): - raise MaxRetryError(_pool, url, error or ResponseError(cause)) - - log.debug("Incremented Retry for (url='%s'): %r", url, new_retry) - - return new_retry - - def __repr__(self): - return ( - "{cls.__name__}(total={self.total}, connect={self.connect}, " - "read={self.read}, redirect={self.redirect}, status={self.status})" - ).format(cls=type(self), self=self) - - def __getattr__(self, item): - if item == "method_whitelist": - # TODO: Remove this deprecated alias in v2.0 - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - return self.allowed_methods - try: - return getattr(super(Retry, self), item) - except AttributeError: - return getattr(Retry, item) - - -# For backwards compatibility (equivalent to pre-v1.9): -Retry.DEFAULT = Retry(3) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py deleted file mode 100644 index 807b6c7e6245d0a21221b1b8d29b841ec8251761..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py +++ /dev/null @@ -1,242 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import importlib -import numpy as np -import os -import re -import subprocess -import sys -from collections import defaultdict -import PIL -import torch -import torchvision -from tabulate import tabulate - -__all__ = ["collect_env_info"] - - -def collect_torch_env(): - try: - import torch.__config__ - - return torch.__config__.show() - except ImportError: - # compatible with older versions of pytorch - from torch.utils.collect_env import get_pretty_env_info - - return get_pretty_env_info() - - -def get_env_module(): - var_name = "DETECTRON2_ENV_MODULE" - return var_name, os.environ.get(var_name, "") - - -def detect_compute_compatibility(CUDA_HOME, so_file): - try: - cuobjdump = os.path.join(CUDA_HOME, "bin", "cuobjdump") - if os.path.isfile(cuobjdump): - output = subprocess.check_output( - "'{}' --list-elf '{}'".format(cuobjdump, so_file), shell=True - ) - output = output.decode("utf-8").strip().split("\n") - arch = [] - for line in output: - line = re.findall(r"\.sm_([0-9]*)\.", line)[0] - arch.append(".".join(line)) - arch = sorted(set(arch)) - return ", ".join(arch) - else: - return so_file + "; cannot find cuobjdump" - except Exception: - # unhandled failure - return so_file - - -def collect_env_info(): - has_gpu = torch.cuda.is_available() # true for both CUDA & ROCM - torch_version = torch.__version__ - - # NOTE that CUDA_HOME/ROCM_HOME could be None even when CUDA runtime libs are functional - from torch.utils.cpp_extension import CUDA_HOME, ROCM_HOME - - has_rocm = False - if (getattr(torch.version, "hip", None) is not None) and (ROCM_HOME is not None): - has_rocm = True - has_cuda = has_gpu and (not has_rocm) - - data = [] - data.append(("sys.platform", sys.platform)) # check-template.yml depends on it - data.append(("Python", sys.version.replace("\n", ""))) - data.append(("numpy", np.__version__)) - - try: - import detectron2 # noqa - - data.append( - ("detectron2", detectron2.__version__ + " @" + os.path.dirname(detectron2.__file__)) - ) - except ImportError: - data.append(("detectron2", "failed to import")) - except AttributeError: - data.append(("detectron2", "imported a wrong installation")) - - try: - import detectron2._C as _C - except ImportError as e: - data.append(("detectron2._C", f"not built correctly: {e}")) - - # print system compilers when extension fails to build - if sys.platform != "win32": # don't know what to do for windows - try: - # this is how torch/utils/cpp_extensions.py choose compiler - cxx = os.environ.get("CXX", "c++") - cxx = subprocess.check_output("'{}' --version".format(cxx), shell=True) - cxx = cxx.decode("utf-8").strip().split("\n")[0] - except subprocess.SubprocessError: - cxx = "Not found" - data.append(("Compiler ($CXX)", cxx)) - - if has_cuda and CUDA_HOME is not None: - try: - nvcc = os.path.join(CUDA_HOME, "bin", "nvcc") - nvcc = subprocess.check_output("'{}' -V".format(nvcc), shell=True) - nvcc = nvcc.decode("utf-8").strip().split("\n")[-1] - except subprocess.SubprocessError: - nvcc = "Not found" - data.append(("CUDA compiler", nvcc)) - if has_cuda and sys.platform != "win32": - try: - so_file = importlib.util.find_spec("detectron2._C").origin - except (ImportError, AttributeError): - pass - else: - data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, so_file)) - ) - else: - # print compilers that are used to build extension - data.append(("Compiler", _C.get_compiler_version())) - data.append(("CUDA compiler", _C.get_cuda_version())) # cuda or hip - if has_cuda and getattr(_C, "has_cuda", lambda: True)(): - 
data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, _C.__file__)) - ) - - data.append(get_env_module()) - data.append(("PyTorch", torch_version + " @" + os.path.dirname(torch.__file__))) - data.append(("PyTorch debug build", torch.version.debug)) - - if not has_gpu: - has_gpu_text = "No: torch.cuda.is_available() == False" - else: - has_gpu_text = "Yes" - data.append(("GPU available", has_gpu_text)) - if has_gpu: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - cap = ".".join((str(x) for x in torch.cuda.get_device_capability(k))) - name = torch.cuda.get_device_name(k) + f" (arch={cap})" - devices[name].append(str(k)) - for name, devids in devices.items(): - data.append(("GPU " + ",".join(devids), name)) - - if has_rocm: - msg = " - invalid!" if not (ROCM_HOME and os.path.isdir(ROCM_HOME)) else "" - data.append(("ROCM_HOME", str(ROCM_HOME) + msg)) - else: - try: - from torch.utils.collect_env import get_nvidia_driver_version, run as _run - - data.append(("Driver version", get_nvidia_driver_version(_run))) - except Exception: - pass - msg = " - invalid!" if not (CUDA_HOME and os.path.isdir(CUDA_HOME)) else "" - data.append(("CUDA_HOME", str(CUDA_HOME) + msg)) - - cuda_arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None) - if cuda_arch_list: - data.append(("TORCH_CUDA_ARCH_LIST", cuda_arch_list)) - data.append(("Pillow", PIL.__version__)) - - try: - data.append( - ( - "torchvision", - str(torchvision.__version__) + " @" + os.path.dirname(torchvision.__file__), - ) - ) - if has_cuda: - try: - torchvision_C = importlib.util.find_spec("torchvision._C").origin - msg = detect_compute_compatibility(CUDA_HOME, torchvision_C) - data.append(("torchvision arch flags", msg)) - except (ImportError, AttributeError): - data.append(("torchvision._C", "Not found")) - except AttributeError: - data.append(("torchvision", "unknown")) - - try: - import fvcore - - data.append(("fvcore", fvcore.__version__)) - except (ImportError, AttributeError): - pass - - try: - import iopath - - data.append(("iopath", iopath.__version__)) - except (ImportError, AttributeError): - pass - - try: - import cv2 - - data.append(("cv2", cv2.__version__)) - except (ImportError, AttributeError): - data.append(("cv2", "Not found")) - env_str = tabulate(data) + "\n" - env_str += collect_torch_env() - return env_str - - -def test_nccl_ops(): - num_gpu = torch.cuda.device_count() - if os.access("/tmp", os.W_OK): - import torch.multiprocessing as mp - - dist_url = "file:///tmp/nccl_tmp_file" - print("Testing NCCL connectivity ... this should not hang.") - mp.spawn(_test_nccl_worker, nprocs=num_gpu, args=(num_gpu, dist_url), daemon=False) - print("NCCL succeeded.") - - -def _test_nccl_worker(rank, num_gpu, dist_url): - import torch.distributed as dist - - dist.init_process_group(backend="NCCL", init_method=dist_url, rank=rank, world_size=num_gpu) - dist.barrier(device_ids=[rank]) - - -if __name__ == "__main__": - try: - from detectron2.utils.collect_env import collect_env_info as f - - print(f()) - except ImportError: - print(collect_env_info()) - - if torch.cuda.is_available(): - num_gpu = torch.cuda.device_count() - for k in range(num_gpu): - device = f"cuda:{k}" - try: - x = torch.tensor([1, 2.0], dtype=torch.float32) - x = x.to(device) - except Exception as e: - print( - f"Unable to copy tensor to device={device}: {e}. " - "Your CUDA environment is broken." 
- ) - if num_gpu > 1: - test_nccl_ops() diff --git a/spaces/Bart92/RVC_HF/demucs/wav.py b/spaces/Bart92/RVC_HF/demucs/wav.py deleted file mode 100644 index a65c3b2ba5aacb1fcab3753f1f85ff7b8db7fc11..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/demucs/wav.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import OrderedDict -import hashlib -import math -import json -from pathlib import Path - -import julius -import torch as th -from torch import distributed -import torchaudio as ta -from torch.nn import functional as F - -from .audio import convert_audio_channels -from .compressed import get_musdb_tracks - -MIXTURE = "mixture" -EXT = ".wav" - - -def _track_metadata(track, sources): - track_length = None - track_samplerate = None - for source in sources + [MIXTURE]: - file = track / f"{source}{EXT}" - info = ta.info(str(file)) - length = info.num_frames - if track_length is None: - track_length = length - track_samplerate = info.sample_rate - elif track_length != length: - raise ValueError( - f"Invalid length for file {file}: " - f"expecting {track_length} but got {length}.") - elif info.sample_rate != track_samplerate: - raise ValueError( - f"Invalid sample rate for file {file}: " - f"expecting {track_samplerate} but got {info.sample_rate}.") - if source == MIXTURE: - wav, _ = ta.load(str(file)) - wav = wav.mean(0) - mean = wav.mean().item() - std = wav.std().item() - - return {"length": length, "mean": mean, "std": std, "samplerate": track_samplerate} - - -def _build_metadata(path, sources): - meta = {} - path = Path(path) - for file in path.iterdir(): - meta[file.name] = _track_metadata(file, sources) - return meta - - -class Wavset: - def __init__( - self, - root, metadata, sources, - length=None, stride=None, normalize=True, - samplerate=44100, channels=2): - """ - Waveset (or mp3 set for that matter). Can be used to train - with arbitrary sources. Each track should be one folder inside of `path`. - The folder should contain files named `{source}.{ext}`. - Files will be grouped according to `sources` (each source is a list of - filenames). - - Sample rate and channels will be converted on the fly. - - `length` is the sample size to extract (in samples, not duration). - `stride` is how many samples to move by between each example. 
- """ - self.root = Path(root) - self.metadata = OrderedDict(metadata) - self.length = length - self.stride = stride or length - self.normalize = normalize - self.sources = sources - self.channels = channels - self.samplerate = samplerate - self.num_examples = [] - for name, meta in self.metadata.items(): - track_length = int(self.samplerate * meta['length'] / meta['samplerate']) - if length is None or track_length < length: - examples = 1 - else: - examples = int(math.ceil((track_length - self.length) / self.stride) + 1) - self.num_examples.append(examples) - - def __len__(self): - return sum(self.num_examples) - - def get_file(self, name, source): - return self.root / name / f"{source}{EXT}" - - def __getitem__(self, index): - for name, examples in zip(self.metadata, self.num_examples): - if index >= examples: - index -= examples - continue - meta = self.metadata[name] - num_frames = -1 - offset = 0 - if self.length is not None: - offset = int(math.ceil( - meta['samplerate'] * self.stride * index / self.samplerate)) - num_frames = int(math.ceil( - meta['samplerate'] * self.length / self.samplerate)) - wavs = [] - for source in self.sources: - file = self.get_file(name, source) - wav, _ = ta.load(str(file), frame_offset=offset, num_frames=num_frames) - wav = convert_audio_channels(wav, self.channels) - wavs.append(wav) - - example = th.stack(wavs) - example = julius.resample_frac(example, meta['samplerate'], self.samplerate) - if self.normalize: - example = (example - meta['mean']) / meta['std'] - if self.length: - example = example[..., :self.length] - example = F.pad(example, (0, self.length - example.shape[-1])) - return example - - -def get_wav_datasets(args, samples, sources): - sig = hashlib.sha1(str(args.wav).encode()).hexdigest()[:8] - metadata_file = args.metadata / (sig + ".json") - train_path = args.wav / "train" - valid_path = args.wav / "valid" - if not metadata_file.is_file() and args.rank == 0: - train = _build_metadata(train_path, sources) - valid = _build_metadata(valid_path, sources) - json.dump([train, valid], open(metadata_file, "w")) - if args.world_size > 1: - distributed.barrier() - train, valid = json.load(open(metadata_file)) - train_set = Wavset(train_path, train, sources, - length=samples, stride=args.data_stride, - samplerate=args.samplerate, channels=args.audio_channels, - normalize=args.norm_wav) - valid_set = Wavset(valid_path, valid, [MIXTURE] + sources, - samplerate=args.samplerate, channels=args.audio_channels, - normalize=args.norm_wav) - return train_set, valid_set - - -def get_musdb_wav_datasets(args, samples, sources): - metadata_file = args.metadata / "musdb_wav.json" - root = args.musdb / "train" - if not metadata_file.is_file() and args.rank == 0: - metadata = _build_metadata(root, sources) - json.dump(metadata, open(metadata_file, "w")) - if args.world_size > 1: - distributed.barrier() - metadata = json.load(open(metadata_file)) - - train_tracks = get_musdb_tracks(args.musdb, is_wav=True, subsets=["train"], split="train") - metadata_train = {name: meta for name, meta in metadata.items() if name in train_tracks} - metadata_valid = {name: meta for name, meta in metadata.items() if name not in train_tracks} - train_set = Wavset(root, metadata_train, sources, - length=samples, stride=args.data_stride, - samplerate=args.samplerate, channels=args.audio_channels, - normalize=args.norm_wav) - valid_set = Wavset(root, metadata_valid, [MIXTURE] + sources, - samplerate=args.samplerate, channels=args.audio_channels, - normalize=args.norm_wav) - return 
train_set, valid_set diff --git a/spaces/Bart92/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/Bart92/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/Benson/text-generation/Examples/Archery Battle.md b/spaces/Benson/text-generation/Examples/Archery Battle.md deleted file mode 100644 index e5b5b6b9d447170aa283010cff2e2bc1a74f9b2b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Archery Battle.md +++ /dev/null @@ -1,200 +0,0 @@ -
        -

Archery Battle: A Fun and Exciting Way to Shoot Arrows

        -

Have you ever wanted to shoot arrows at your friends or enemies without hurting them? Do you enjoy the thrill of competing against other players in a fast-paced, realistic game? If so, you might want to try archery battle, a new and popular sport that combines archery with dodgeball.

        -

Archery battle is a game in which two teams of players shoot foam-tipped arrows at each other in an indoor or outdoor arena. The goal is to eliminate all the members of the opposing team by hitting them with arrows or knocking out their targets. The game is safe, fun, and easy to learn, but it also requires skill, strategy, and teamwork.

        -

        archery battle


        Download Filehttps://bltlly.com/2v6KUA



        -

In this article, we will tell you everything you need to know about archery battle, including how to play, what equipment you need, how to improve your shooting techniques, and what tips you can follow to become a better archer. We will also show you how archery battle can benefit your health and mind, and answer some frequently asked questions about this sport.

        -

Equipment

        -

Before you start playing archery battle, you need to have the right equipment. Here are some of the essentials you need for this game:

        -

Bows and Arrows

        -

The most important equipment for archery battle is bows and arrows. There are different types of bows and arrows you can use for this game, depending on your preference and skill level.

        -

The most common type of bow used in archery battle is the recurve bow, the same style of bow used in the Olympics. Its curved shape stores more energy when drawn, making it more powerful and accurate. Recurve bows are also easy to use and adjust, so they suit beginners and intermediate players.

        - -

The arrows used in archery battle are specially designed for safety and durability. They have foam tips that cushion the impact when they hit a person or an object, preventing injury or damage. They also have brightly colored vanes that make them easier to see and track in the air, and lighted arrow nocks that glow in the dark, making them ideal for night games or low-light conditions.

        -

When choosing a bow and arrows for archery battle, you should consider your draw length, draw weight, arrow length, and arrow spine. These factors affect how comfortable and accurate you are when shooting. You can measure your draw length by extending your arms and measuring the distance from your chest to your fingertips. You can determine your draw weight by pulling a bow and feeling how much force you can handle. You can find your arrow length by nocking an arrow on your bow and marking where it meets the rest. You can check your arrow spine by bending an arrow and seeing how much it flexes.

        -

You should also try different bows and arrows before buying or renting them, to see which ones suit your style and preference. You can ask a professional or an experienced player for advice, or read online reviews and ratings from other customers.

        -

        Accesorios y engranajes de seguridad

        -

        Además de arcos y flechas, también necesita algunos accesorios y engranajes de seguridad para la batalla de tiro con arco. Estos artículos mejorarán su rendimiento y lo protegerán de lesiones o accidentes. Estos son algunos de los accesorios y engranajes de seguridad que necesitas para este juego:

        -

        -
          -
        • Un carcaj: un recipiente que sostiene las flechas y se sujeta a tu cinturón o a tu espalda. Te permite llevar más flechas y acceder a ellas rápida y fácilmente.
        • Una lengüeta de dedo o un guante: una pieza de cuero o de tela que cubre tus dedos y los protege de la cuerda cuando sueltas una flecha. También mejora el agarre y evita ampollas o cortes.
        • Un protector de pecho: un chaleco o una camisa que cubre el pecho y evita que la cuerda se enganche en la ropa cuando disparas una flecha. También evita rozaduras o enganches.
        • Una máscara o un casco: una protección que cubre tu cara y tu cabeza y las protege de flechas u otros objetos. También protege tus ojos del sol o del viento.
        • Un silbato: un dispositivo que emite un sonido fuerte cuando lo soplas. Se utiliza para señalar el inicio y el final del juego, así como para comunicarte con tus compañeros de equipo o con el árbitro.
        -

        Siempre debes usar estos accesorios y engranajes de seguridad cuando juegues a la batalla de tiro con arco, incluso si eres un jugador experimentado. No solo te mantendrán a salvo, sino que también te harán ver genial y profesional.

        -

        Técnicas

        -

        Ahora que tienes el equipo adecuado, necesitas aprender algunas técnicas para mejorar tus habilidades de tiro. Estas son algunas de las técnicas que puedes practicar para convertirte en un mejor arquero:

        -

        Precisión y consistencia de disparo

        -

        La primera técnica que necesitas dominar es la precisión y consistencia del disparo. Esto significa golpear el objetivo donde lo quieras, cada vez que dispares. Para lograrlo, debes seguir estos pasos:

        -
          -
        1. Párate con los pies separados a la altura de los hombros, perpendicular al objetivo, con los dedos de los pies apuntando ligeramente hacia afuera.
        2. Sostén el arco con tu mano no dominante, con el codo ligeramente doblado y la muñeca relajada.
        3. Encoca una flecha en la cuerda, con la pluma índice apuntando en dirección contraria al arco.
        4. Coloca tres dedos en la cuerda, uno por encima del culatín (nock) de la flecha y dos por debajo.
        5. Levanta el arco a la altura de los ojos, con el brazo extendido pero no bloqueado.
        6. Tensa la cuerda hasta tu punto de anclaje, que suele ser la esquina de la boca o el mentón.
        7. Apunta a un punto pequeño del objetivo, usando la punta de la flecha o el pin de la mira como referencia.
        8. Mantén la postura después del disparo (follow-through), con el brazo del arco firme y apuntando al objetivo hasta que la flecha impacte.

        Repite estos pasos para cada toma, e intenta mantener un ritmo y una forma constantes. También puede utilizar una aplicación de disparo o un cronógrafo para medir su velocidad y precisión, y comparar sus resultados con otros jugadores.

        -

        Disparar con ambos ojos abiertos

        -

        La siguiente técnica que necesitas aprender es disparar con ambos ojos abiertos. Esto significa mantener su ojo dominante en el objetivo, y su ojo no dominante en la flecha o la vista. Esto le dará una mejor percepción de profundidad, campo de visión y equilibrio, así como reducir la fatiga ocular y la tensión.

        -

        Para disparar con ambos ojos abiertos, debes seguir estos pasos:

        -
          -
        1. Determina tu ojo dominante haciendo un triángulo con los pulgares y los índices y mirando un objeto distante a través de él.
        2. Cierra un ojo a la vez y observa con cuál de ellos el objeto se mantiene en el centro del triángulo: ese es tu ojo dominante.
        3. Alinea tu ojo dominante con la cuerda, la flecha o el pin de la mira, dependiendo de tu método de puntería.
        4. Mantén tu ojo no dominante abierto, pero enfoca la puntería con tu ojo dominante.
        5. Ignora cualquier visión doble o borrosa que pueda aparecer y confía en la puntería de tu ojo dominante.
        -

        Practica esta técnica hasta que te sientas cómodo y seguro con ella. También puedes usar un parche o un cegador para bloquear la vista de tu ojo no dominante y eliminarlo gradualmente a medida que te acostumbras a disparar con ambos ojos abiertos.

        -

        Relajando los dedos

        -

        La tercera técnica que necesitas dominar es relajar los dedos. Esto significa mantener los dedos sueltos y relajados al sostener el arco y al soltar la cuerda. Así evitarás apretar o torsionar el arco, es decir, que se tuerza o gire durante el disparo. Torcer el arco puede hacer que la flecha se desvíe o golpee el arco.

        -

        Para relajar tus dedos, necesitas seguir estos pasos:

        - -
      1. Sostén el arco con tu mano no dominante, con un agarre ligero y la muñeca relajada.
      2. Coloca tres dedos en la cuerda, uno por encima del culatín (nock) de la flecha y dos por debajo.
      3. Engancha la cuerda con la primera articulación de los dedos, no con las puntas ni con las yemas.
      4. Mantén los dedos relajados y ligeramente curvados, no tensos ni rectos.
      5. Tensa la cuerda hasta tu punto de anclaje, usando los músculos de la espalda y no los del brazo.
      6. Suelta la cuerda relajando los dedos y dejando que esta se deslice entre ellos.
      7. Mantén los dedos relajados y abiertos después de la suelta, no apretados ni cerrados.
      -

      Practica esta técnica hasta que te sientas natural y suave con ella. También puedes usar un cabestrillo para los dedos o un cabestrillo para la muñeca para evitar que se te caiga el arco después del lanzamiento y evitar agarrarlo demasiado fuerte.

      Disparar con paletas de colores brillantes y culatines (nocks) iluminados

      -

      La cuarta técnica que necesita aprender es disparar paletas de color brillante y nocks de flecha iluminada. Esto significa usar flechas que tienen paletas de color brillante y nocks de flecha iluminada, que son dispositivos que se unen a la parte posterior de la flecha y se iluminan cuando se dispara la flecha. Esto le ayudará a ver y rastrear su flecha en el aire, especialmente en distancias largas o condiciones de poca luz.

      -

      Para disparar paletas de color brillante y nocks de flecha iluminada, debe seguir estos pasos:

      -
        -
      1. Elige flechas con paletas de colores brillantes, como rojo, amarillo o verde. Estos colores contrastan con el fondo y hacen la flecha más visible.
      2. Elige flechas con culatines (nocks) iluminados, como Nockturnal, Lumenok o Firenock. Estos nocks se activan al disparar la flecha y emiten una luz brillante que la hace visible en la oscuridad.
      3. Alinea la flecha con la cuerda, la mira y el objetivo, como de costumbre.
      4. Dispara la flecha y observa cómo vuela. Deberías poder ver con claridad las paletas de colores brillantes y el nock iluminado.
      -

      Practique esta técnica hasta que se acostumbre a disparar paletas de color brillante y nocks de flecha iluminada. También puede utilizar un telémetro o un visor para medir su distancia y precisión, y ajustar su objetivo en consecuencia.

      -

      Consejos

      -

      Además de aprender algunas técnicas, también necesitas algunos consejos para mejorar tu rendimiento de batalla con arco. Estos son algunos de los consejos que puedes seguir para convertirte en un mejor arquero:

      -

      Práctica de tiro a largas distancias

      -

      El primer consejo que necesitas seguir es practicar el tiro a largas distancias. Esto significa disparar su arco a distancias más allá de su zona de confort, como 50 metros o más. Esto te ayudará a amplificar tus defectos e identificar tus debilidades, así como a mejorar tu confianza y consistencia.

      -

      Para practicar tiro a largas distancias, debes seguir estos pasos:

      -
        -
      1. Encuentra un lugar seguro y adecuado para disparar tu arco, como un campo de tiro con arco o un terreno abierto.
      2. Coloca una diana a larga distancia, como 50 metros o más. Puedes utilizar una diana de tiro con arco estándar o un blanco de animal en 3D, según tu preferencia.
      3. Dispara al objetivo usando los mismos pasos y técnicas que antes.
      4. Analiza tus disparos y observa dónde aciertan o fallan. Puedes utilizar un sistema de puntuación o medir el tamaño de la agrupación para evaluar tu rendimiento.
      5. Identifica tus errores y corrígelos. Puedes usar un entrenador o una cámara de vídeo para recibir comentarios sobre cómo mejorar tu postura, puntería, suelta o seguimiento.
      6. Repite el proceso hasta lograr los resultados deseados. También puedes aumentar la distancia o cambiar el objetivo a medida que avanzas.
      -

      Practica este consejo hasta que te sientas cómodo y seguro disparando a largas distancias. También puede desafiarse a sí mismo disparando en diferentes ángulos, alturas o condiciones de viento, para simular escenarios de la vida real.

      Mantener la postura y posición correctas

      El segundo consejo que necesitas seguir es mantener la postura y la posición correctas. Esto significa alinear tu cuerpo de la misma manera estable y repetible en cada disparo, lo que mejora tu precisión y consistencia y reduce la fatiga.

      Para mantener la postura y posición correctas, debe seguir estos pasos:

      -
        -
      1. Párate con los pies separados a la altura de los hombros, perpendicular al objetivo, con los dedos de los pies apuntando ligeramente hacia afuera.
      2. Mantén la espalda recta y los hombros relajados, no encorvados ni tensos.
      3. Mantén la cabeza erguida y la barbilla paralela al suelo, no inclinada ni torcida.
      4. Mantén el brazo del arco extendido pero no bloqueado, con el codo ligeramente doblado y la muñeca relajada.
      5. Mantén el brazo de tracción en línea con el brazo del arco, con el codo ligeramente más alto que el hombro.
      6. Mantén la mano del arco relajada y abierta, con un agarre ligero sobre el arco.
      7. Mantén la mano de la cuerda relajada y abierta, con un gancho ligero sobre la cuerda.
      -

      Practica este consejo hasta que te sientas natural y cómodo con él. También puedes usar un espejo o un amigo para revisar tu postura y posición, y corregir cualquier error o desviación.

      -

      Consultar a un profesional o unirse a un club de tiro con arco

      -

      El tercer consejo que necesitas seguir es consultar a un profesional o unirte a un club de tiro con arco. Esto significa buscar orientación y retroalimentación de alguien que tiene más experiencia y conocimientos que tú en tiro con arco. Esto te ayudará a aprender nuevas habilidades y técnicas, así como a evitar malos hábitos o errores.

      -

      Para consultar a un profesional o unirse a un club de tiro con arco, debe seguir estos pasos:

      -
        -
      1. Encuentra un instructor o entrenador de tiro con arco certificado que pueda enseñarte los aspectos básicos y avanzados del tiro con arco. Puedes buscar en línea o pedir recomendaciones a otros arqueros.
      2. Reserva una clase o una sesión con el instructor o entrenador y sigue sus instrucciones y consejos. Puedes hacer preguntas, tomar notas o grabar vídeos para mejorar tu aprendizaje.
      3. Encuentra un club o grupo de tiro con arco que organice eventos o actividades en tu zona. Puedes buscar en línea o pedir referencias a otros arqueros.
      -

      Practica este consejo hasta que te sientas más seguro y competente en tiro con arco. También puede unirse a foros o comunidades en línea donde puede interactuar con otros arqueros, compartir consejos y trucos, o pedir ayuda o consejo.

      -

      Acondicionando físicamente tu cuerpo

      -

      El cuarto consejo que necesitas seguir es acondicionar físicamente tu cuerpo. Esto significa ejercitar y fortalecer los músculos, las articulaciones y los huesos involucrados en el tiro con arco. Esto le ayudará a mejorar su resistencia, resistencia y flexibilidad, así como a prevenir lesiones o fatiga.

      -

      Para acondicionar físicamente tu cuerpo, debes seguir estos pasos:

      -
        -
      1. Haz algunos ejercicios de calentamiento antes de disparar tu arco, como estirar, trotar o saltar. Esto aumentará tu circulación sanguínea y preparará tu cuerpo para la actividad.
      2. Haz algunos ejercicios de fuerza después de disparar tu arco, como flexiones, dominadas o planchas. Esto desarrollará tus músculos y mejorará tu potencia y estabilidad.
      3. Haz algunos ejercicios cardiovasculares en tus días de descanso, como correr, andar en bicicleta o nadar. Esto aumentará tu ritmo cardíaco y mejorará tu captación y transporte de oxígeno.
      4. Haz algunos ejercicios de yoga en tus días de descanso, como saludos al sol, la postura del perro boca abajo o la postura del guerrero. Esto relajará tus músculos y mejorará tu equilibrio y coordinación.
      -

      Practica este consejo hasta que te sientas en forma y saludable. También puede consultar a un médico o entrenador antes de comenzar cualquier programa de ejercicio, especialmente si tiene alguna afección médica o lesiones.

      Tomando los Primeros Auxilios Esenciales con Usted

      -

      El quinto consejo que necesitas seguir es llevar contigo lo esencial de primeros auxilios. Esto significa traer algunos artículos que pueden ayudarle a tratar lesiones menores o accidentes que pueden ocurrir durante la batalla de tiro con arco. Esto le ayudará a evitar complicaciones o infecciones, así como a reducir el dolor o la incomodidad.

      -

      Para llevar consigo lo esencial de primeros auxilios, debe seguir estos pasos:

      -
      1. Consigue una bolsa o un estuche pequeño y resistente para guardar tus artículos de primeros auxilios.
      2. Llena la bolsa con artículos que te ayuden a tratar lesiones comunes del tiro con arco, como cortes, moretones, ampollas, esguinces o quemaduras: vendajes, gasas, cinta, tijeras, pinzas, toallitas antisépticas, ungüento antibiótico, analgésicos, compresas de hielo o gel de áloe vera.
      3. Mantén la bolsa en un lugar seguro y accesible, como tu automóvil, tu mochila o tu carcaj.
      4. Utiliza los artículos cuando sea necesario y sigue las instrucciones sobre cómo aplicarlos. También puedes pedir ayuda a un compañero de equipo o al árbitro si no estás seguro de cómo usarlos.
      5. Reemplaza los artículos cuando se agoten o caduquen, y revisa la bolsa regularmente para detectar cualquier daño o contaminación.
      -

      Practica este consejo hasta que te sientas preparado y seguro. También puede tomar un curso de primeros auxilios o leer un manual de primeros auxilios para aprender más sobre cómo manejar diferentes tipos de lesiones o emergencias.

      -

      Conclusión

      -

      La batalla con arco es una forma divertida y emocionante de disparar flechas a tus amigos o enemigos sin hacerles daño. Es un juego que combina tiro con arco con balón prisionero, donde dos equipos de jugadores disparan flechas con punta de espuma entre sí en una arena cubierta o al aire libre. El objetivo es eliminar a todos los miembros del equipo contrario golpeándolos con flechas o noqueando a sus objetivos.

      -

      Para jugar a la batalla de tiro con arco, necesita tener el equipo adecuado, como arcos, flechas, accesorios y engranajes de seguridad. También necesita aprender algunas técnicas, como precisión y consistencia de disparo, disparar con los dos ojos abiertos, relajar los dedos y disparar paletas de color brillante y nocks de flecha iluminada. También necesitas seguir algunos consejos, como practicar tiro a largas distancias, mantener la postura y posición correctas, consultar a un profesional o unirte a un club de tiro con arco, acondicionar físicamente tu cuerpo y llevar contigo lo esencial de primeros auxilios.

      - -

      Si está interesado en probar la batalla de tiro con arco, puede encontrar más información sobre los siguientes recursos:

      - -

      Esperamos que hayas disfrutado leyendo este artículo y hayas aprendido algo nuevo sobre la batalla con arco. Te animamos a que lo pruebes y te diviertas con tus amigos o familiares. Recuerda estar siempre seguro y respetuoso cuando juegues a este juego. Y no te olvides de apuntar alto y disparar recto!

      -

      Batalla de tiro con arco: el último juego de habilidad y emoción!

      -

      Preguntas frecuentes

      -

      ¿Cuáles son algunas lesiones o riesgos comunes involucrados en la batalla de tiro con arco?

      -

      Algunas de las lesiones o riesgos comunes involucrados en la batalla de tiro con arco son:

      - -

      Para prevenir estas lesiones o riesgos, siempre debe usar engranajes de seguridad adecuados y accesorios al jugar batalla de tiro con arco. También debes seguir las reglas e instrucciones del juego y respetar a los demás jugadores. También debe calentar antes de jugar y refrescarse después de jugar. También debe buscar atención médica si experimenta algún dolor o molestia después de jugar.

      -

      ¿Cómo puedo encontrar un lugar de batalla con arco o evento cerca de mí?

      -

      Para encontrar un lugar o evento de batalla de tiro con arco cerca de ti, puedes utilizar los siguientes métodos:

      - -

      Antes de elegir un lugar de batalla o evento de tiro con arco, debe verificar su disponibilidad, precios, instalaciones, reglas y medidas de seguridad. También debe leer sus términos y condiciones y firmar un formulario de renuncia si es necesario.

      -

      ¿Cuáles son algunos otros tipos de juegos de tiro con arco o disciplinas que puedo probar?

      -

      Algunos de los otros tipos de juegos de tiro con arco o disciplinas que puedes probar son:

      - -

      Antes de probar cualquiera de estos tipos de juegos de tiro con arco o disciplinas, usted debe aprender los fundamentos y reglas de cada uno. También debe practicar sus habilidades y técnicas con el equipo adecuado y medidas de seguridad. También debes respetar el medio ambiente y los animales cuando juegues estos juegos o disciplinas.

      -

      ¿Cuánto cuesta comprar o alquilar equipo de tiro con arco?

      -

      El costo de comprar o alquilar equipo de tiro con arco depende de varios factores, como la calidad, cantidad, marca y ubicación del equipo. Sin embargo, aquí hay algunas estimaciones promedio basadas en fuentes en línea:

      | Artículo | Costo promedio para comprar | Costo promedio para alquilar |
      | --- | --- | --- |
      | Arco | $100-$300 | $10-$20 por hora |
      | Flecha | $5-$10 por pieza | $1-$2 por pieza |
      | Carcaj | $10-$20 por pieza | $1-$2 por pieza |
      | Lengüeta o guante | $5-$10 por pieza | $1-$2 por pieza |
      | Protector de brazo | $5-$10 por pieza | $1-$2 por pieza |
      | Protector de pecho | $10-$20 por pieza | $1-$2 por pieza |
      | Máscara o casco | $20-$40 por pieza | $2-$4 por pieza |
      | Silbato | $1-$5 por pieza | $0.5-$1 por pieza |
      | Costo total | $156-$436 por set | $16.5-$33 por set por hora |

      Tenga en cuenta que estos son solo costos aproximados y pueden variar dependiendo de la fuente y el tiempo de compra o alquiler. Siempre debe comparar los precios de diferentes vendedores o proveedores antes de comprar o alquilar cualquier equipo. También debe verificar la calidad y el estado del equipo antes de usarlo. También debe cuidar el equipo y devolverlo en las mismas condiciones en que lo recibió.
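      Si quieres comprobar la aritmética de la tabla por tu cuenta, aquí tienes un pequeño boceto en Python (un ejemplo ilustrativo nuestro, no parte de la fuente original) que suma los rangos de precio de compra suponiendo una sola unidad de cada artículo; los totales publicados pueden suponer varias flechas por set, por lo que no tienen por qué coincidir exactamente.

```python
# Boceto ilustrativo: suma los rangos de precio de compra de la tabla anterior,
# suponiendo una sola unidad de cada artículo (suposición del ejemplo).
equipo = {
    "Arco": (100, 300),
    "Flecha": (5, 10),
    "Carcaj": (10, 20),
    "Lengüeta o guante": (5, 10),
    "Protector de brazo": (5, 10),
    "Protector de pecho": (10, 20),
    "Máscara o casco": (20, 40),
    "Silbato": (1, 5),
}

total_min = sum(minimo for minimo, _ in equipo.values())
total_max = sum(maximo for _, maximo in equipo.values())
print(f"Costo de compra estimado por set: ${total_min}-${total_max}")
```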

      -

      ¿Cómo puedo unirme o iniciar un equipo de batalla con arco o una liga?

      - -

      Para unirse o iniciar un equipo de batalla con arco o una liga, debe seguir estos pasos:

      -
        -
      1. Encuentra jugadores que quieran jugar a la batalla de tiro con arco contigo, como tus amigos, familiares, colegas o compañeros de clase. También puedes reclutar jugadores en línea o fuera de línea, usando redes sociales, folletos o el boca a boca.
      2. Elige un nombre y un logotipo para tu equipo o liga, y regístralo en el sitio web oficial de la batalla de tiro con arco o en una asociación de tiro con arco local. También puedes crear un sitio web o una página de redes sociales para tu equipo o liga, donde publicar actualizaciones, fotos, vídeos o noticias.
      3. Entrena y practica con tu equipo o liga regularmente, y desarrolla tus estrategias y tácticas. También puedes contratar a un entrenador o mentor para mejorar tu rendimiento y trabajo en equipo.
      4. Encuentra y únete a torneos o eventos que se adapten a tu nivel y preferencia, como competiciones locales, regionales, nacionales o internacionales. También puedes organizar tus propios torneos o eventos e invitar a otros equipos o ligas a participar.
      -

      Practica este consejo hasta que te sientas orgulloso y satisfecho con tu equipo o liga. También puede unirse o iniciar varios equipos o ligas, dependiendo de su disponibilidad e interés. También debes respetar y apoyar a tus compañeros o miembros de la liga, y celebrar tus logros y fracasos juntos.

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Battlefield 3 Descargar.md b/spaces/Benson/text-generation/Examples/Battlefield 3 Descargar.md deleted file mode 100644 index 79ceb5e406a5f03c03f12ba8549813ea7382414a..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Battlefield 3 Descargar.md +++ /dev/null @@ -1,95 +0,0 @@ -
      -

      Cómo descargar Battlefield 3 para PC

      -

      Battlefield 3 es uno de los juegos de disparos en primera persona más populares y aclamados de todos los tiempos. Ofrece una experiencia inmersiva y realista de la guerra moderna, con impresionantes gráficos, sonido y jugabilidad. Puedes jugar solo o con tus amigos en varios modos de juego, como campaña, cooperativo y multijugador. También puede explorar 29 mapas masivos y utilizar una variedad de vehículos, armas y gadgets para ayudarle a subir el calor.

      -

      Si estás interesado en jugar a Battlefield 3 en tu PC, es posible que te estés preguntando cómo descargarlo. Bueno, has llegado al lugar correcto. En este artículo, te mostraremos cómo descargar Battlefield 3 para PC en cuatro sencillos pasos. También te daremos información y consejos útiles sobre cómo disfrutar del juego al máximo. Así que, empecemos.

      -

      battlefield 3 descargar


      DOWNLOAD » https://bltlly.com/2v6KNi



      -

      Paso 1: Únete a EA Play o compra Battlefield 3 Premium Edition

      -

      Lo primero que tienes que hacer es decidir si quieres unirte a EA Play o comprar Battlefield 3 Premium Edition. EA Play es un servicio de suscripción que te da acceso a una colección de juegos de EA, incluyendo Battlefield 3. Puedes unirte a EA Play por $4.99 al mes o $29.99 al año. También obtendrás un 10% de descuento en compras de EA y contenido exclusivo.

      -

      Battlefield 3 Premium Edition es un paquete que incluye el juego base y los cinco paquetes de expansión: Volver a Karkand, Close Quarters, Armored Kill, Aftermath y End Game. También obtendrá un kit de ventaja multijugador que desbloquea 15 armas avanzadas, gadgets, mejoras de vehículos y más. Puedes comprar Battlefield 3 Premium Edition por $39.99 en Steam u Origin.

      -

      -

      Paso 2: Descargar e instalar Origin

      Para jugar a Battlefield 3 en PC necesitas Origin, el cliente de juegos para escritorio de EA. Visita el sitio web oficial de Origin, descarga el instalador para Windows y sigue las instrucciones en pantalla para instalarlo en tu equipo.

      Paso 3: Inicia Origin e inicia sesión con tu cuenta de EA

      -

      Una vez que haya instalado Origin, ejecútelo desde su escritorio o menú de inicio. Tendrás que iniciar sesión con tu cuenta de EA, que es la misma que tu cuenta de Origin. Si aún no tienes una cuenta de EA, puedes crear una gratis haciendo clic en Crear una cuenta. Tendrá que introducir su dirección de correo electrónico, contraseña, fecha de nacimiento, país y pregunta de seguridad.

      -

      Paso 4: Encuentra Battlefield 3 en tu biblioteca de juegos y haz clic en Descargar

      -

      Después de haber iniciado sesión con su cuenta de EA, verá su biblioteca de juegos en el lado izquierdo de la ventana de Origin. Aquí puedes encontrar todos los juegos que tienes o a los que tienes acceso a través de EA Play. Para encontrar Battlefield 3, puedes desplazarte por la lista o usar la barra de búsqueda en la parte superior.

      Una vez que hayas encontrado Battlefield 3, haz clic en él para abrir su página de juego. Aquí puedes ver más detalles sobre el juego, como su descripción, capturas de pantalla, vídeos, reseñas y requisitos del sistema. También puedes acceder a la configuración del juego, logros y tablas de clasificación. Para comenzar a descargar el juego, haga clic en el botón Descargar en el lado derecho de la página. Puede elegir dónde guardar los archivos del juego y cuánto ancho de banda usar para la descarga. También puede pausar o reanudar la descarga en cualquier momento.

      -

      Lo que necesitas saber antes de jugar Battlefield 3

      -

      Ahora que has descargado Battlefield 3, estás listo para jugarlo. Pero antes de entrar en acción, hay algunas cosas que debes saber para aprovechar al máximo tu experiencia de juego. En esta sección, cubriremos los requisitos del sistema para PC, los modos de juego y características, y algunos consejos y trucos para principiantes.

      -

      Requisitos del sistema para PC

      -

      Battlefield 3 es un juego exigente que requiere un PC potente para funcionar sin problemas. Aquí están los requisitos mínimos y recomendados del sistema para PC:

      | Requisito | Mínimo | Recomendado |
      | --- | --- | --- |
      | Procesador | 2 GHz de doble núcleo (Core 2 Duo 2.4 GHz o Athlon X2 2.7 GHz) | CPU de cuatro núcleos |
      | Memoria | 2 GB de RAM | 4 GB de RAM |
      | Gráficos | Compatible con DirectX 10, 512 MB de RAM (NVIDIA GeForce series 8, 9, 200, 300, 400 o 500 con GeForce 8800 GT, o ATI Radeon HD 3870) | Compatible con DirectX 11, 1024 MB de RAM (NVIDIA GeForce GTX o ATI Radeon HD) |
      | Almacenamiento | 20 GB de espacio disponible | 20 GB de espacio disponible |
      | Tarjeta de sonido | Compatible con DirectX | Compatible con DirectX |
      | Conexión a Internet | Banda ancha para la activación y el juego en línea, 512 Kbps o más rápido | Banda ancha para la activación y el juego en línea, 512 Kbps o más rápido |

      Si tu PC cumple o supera estos requisitos, deberías poder disfrutar de Battlefield 3 sin problemas. Sin embargo, si su PC no cumple con estos requisitos, es posible que experimente retrasos, tartamudeo, baja velocidad de fotogramas o fallos. En ese caso, puede intentar bajar la configuración de gráficos, actualizar sus controladores o actualizar su hardware.
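      Como referencia, este pequeño script de Python (un boceto ilustrativo, no parte del artículo original) comprueba de forma aproximada dos de los requisitos de la tabla: la memoria RAM y el espacio en disco. Supone que tienes instalada la librería externa psutil (`pip install psutil`); los umbrales son los valores mínimos de la tabla.

```python
# Boceto ilustrativo: comprueba RAM y espacio libre en disco frente a los
# mínimos de la tabla anterior (2 GB de RAM, 20 GB de almacenamiento).
import shutil

import psutil  # librería externa; instálala con: pip install psutil

MIN_RAM_GB = 2
MIN_DISCO_GB = 20

ram_gb = psutil.virtual_memory().total / (1024 ** 3)
disco_libre_gb = shutil.disk_usage(".").free / (1024 ** 3)

print(f"RAM instalada: {ram_gb:.1f} GB (mínimo {MIN_RAM_GB} GB)")
print(f"Espacio libre: {disco_libre_gb:.1f} GB (mínimo {MIN_DISCO_GB} GB)")

if ram_gb >= MIN_RAM_GB and disco_libre_gb >= MIN_DISCO_GB:
    print("Tu equipo cumple estos requisitos básicos.")
else:
    print("Tu equipo podría no cumplir los requisitos mínimos.")
```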

      -

      Modos de juego y características

      -

      Battlefield 3 ofrece una variedad de modos de juego y características que se adaptan a diferentes estilos de juego y preferencias. Estos son algunos de los principales:

      -

      Modo de campaña

      -

      El modo campaña es donde puedes seguir la historia de Battlefield 3, que tiene lugar en el año 2014. Usted jugará como diferentes personajes de la Infantería de Marina de los Estados Unidos, como el sargento Henry Blackburn y el sargento Jonathan Miller. Usted también será testigo de los acontecimientos desde la perspectiva de un agente ruso llamado Dimitri Mayakovsky. Viajará a través de varios lugares, como Irán, Irak, Francia y Nueva York. Se enfrentará a diferentes enemigos, como la Liberación y Resistencia Popular (PLR), una facción rebelde del ejército iraní.

      - -

      Modo cooperativo

      -

      El modo cooperativo es donde puedes formar equipo con otro jugador en línea y completar seis misiones que están separadas del modo de campaña. Estas misiones se basan en eventos y escenarios del mundo real, como rescatar rehenes, infiltrarse en bases enemigas y escoltar VIPs. Tendrás que trabajar junto a tu pareja para lograr tus objetivos y sobrevivir.

      -

      El modo cooperativo es una experiencia dinámica y cooperativa que dura entre dos y tres horas. Cuenta con chat de voz, tablas de clasificación y armas desbloqueables. También tiene cuatro niveles de dificultad: fácil, normal, duro y duro. Cuanto más alto sea el nivel de dificultad, más desafiantes serán los enemigos y las situaciones.

      -


      Modo multijugador

      -

      El modo multijugador es donde puedes competir con o contra otros jugadores en línea en varios modos de juego, como Conquest, Rush, Team Deathmatch y Squad Deathmatch. Puedes elegir entre cuatro clases: Asalto, Ingeniero, Soporte y Reconocimiento. Cada clase tiene sus propias armas, dispositivos y roles. También puede utilizar vehículos, como tanques, helicópteros, jets y barcos. Puedes jugar en 29 mapas basados en las ubicaciones de los modos de campaña y cooperativo. También puede personalizar su loadout, apariencia y etiquetas de perro.

      -

      El modo multijugador es una experiencia competitiva y dinámica que puede durar horas. Cuenta con chat de voz, escuadrones, rangos, premios, estadísticas y servidores. También tiene cuatro modos de juego: Normal, Hardcore, Solo Infantería y Personalizado. El modo de juego determina las reglas y ajustes del partido, tales como fuego amistoso, regeneración de la salud, minimapa y HUD.

      -

      Cómo disfrutar de Battlefield 3 con tus amigos

      - -

      Cómo unirse o crear un servidor multijugador

      -

      Para unirse o crear un servidor multijugador, debe ir al menú multijugador desde el menú principal. Aquí puede ver una lista de servidores disponibles a los que puede unirse. Puede filtrar los servidores por modo de juego, mapa, región, ping, jugadores y más. También puede buscar un servidor específico por nombre o palabra clave. Para unirse a un servidor, simplemente haga clic en él y espere a que el juego se cargue.

      -

      Para crear un servidor multijugador, es necesario ir a la opción alquilar un servidor desde el menú multijugador. Aquí, puede alquilar un servidor de EA o un proveedor externo durante un determinado período de tiempo y precio. También puede personalizar la configuración del servidor, como el nombre, la descripción, la contraseña, el modo de juego, la rotación del mapa, el número de tickets, el número de jugadores y más. Para crear un servidor, simplemente haga clic en el botón de alquiler y confirme su pago.

      -

      Cómo comunicarse y cooperar con sus compañeros de equipo

      -

      Para comunicarse y cooperar con sus compañeros de equipo, debe usar las funciones de chat de voz y escuadrón. El chat de voz te permite hablar con tus compañeros de equipo usando el micrófono. Puedes usar el chat de voz para coordinar tus acciones, compartir información o simplemente chatear con tus amigos. Para usar el chat de voz, debe habilitarlo desde la configuración de audio y presionar el botón de pulsar para hablar (predeterminado: Alt izquierdo) cuando desee hablar.

      - -

      Cómo personalizar tu loadout y desbloquear nuevos elementos

      -

      Para personalizar tu cargamento y desbloquear nuevos elementos, debes ir al menú de personalización del menú multijugador. Aquí puedes ver tus cuatro clases y sus respectivas armas, gadgets y especializaciones. Puede cambiar su carga seleccionando una clase y haciendo clic en los elementos que desea equipar. También puede ver las estadísticas y descripciones de cada elemento.

      -

      Para desbloquear nuevos elementos, necesitas ganar puntos de experiencia (XP) y posicionarte. Puedes ganar XP jugando el juego, completando objetivos, matando enemigos, ayudando a compañeros de equipo y más. Al subir de rango, desbloquearás nuevas armas, gadgets y especializaciones para cada clase. También desbloquearás nuevas opciones de apariencia, como camuflajes, placas de identificación y emblemas.

      -

      Cómo obtener más de Battlefield 3

      -

      Battlefield 3 es un juego que ofrece mucho contenido y características para que lo disfrutes. Pero si quieres sacar más provecho, hay algunas maneras de hacerlo. En esta sección, te mostraremos cómo acceder a los paquetes de expansión y DLC, cómo usar la aplicación y el sitio web de Battlelog y cómo unirte a la comunidad de Battlefield y obtener actualizaciones.

      -

      Cómo acceder a los paquetes de expansión y DLCs

      -

      Battlefield 3 tiene cinco paquetes de expansión y dos DLC que añaden más mapas, modos, armas, vehículos, asignaciones, logros y trofeos al juego. Los packs de expansión son Back to Karkand, Close Quarters, Armored Kill, Aftermath y End Game. Los DLCs son Paquete de Guerra Física y de Vuelta a Karkand Paquete de Etiqueta de Perro.

      -

      Para acceder a los packs de expansión y DLCs, necesitas unirte a EA Play o comprar Battlefield 3 Premium Edition. EA Play te da acceso a todos los packs de expansión y DLCs gratis mientras estés suscrito. Battlefield 3 Premium Edition incluye todos los paquetes de expansión y DLC en un solo paquete. También puedes comprar cada paquete de expansión o DLC por separado en Steam o Origin.

      - -

      Cómo usar la aplicación y el sitio web de Battlelog

      -

      Battlelog es una aplicación gratuita y un sitio web que te permite acceder a tu perfil de Battlefield 3, estadísticas, amigos, servidores, noticias y más desde tu smartphone o navegador. Puedes usar Battlelog para realizar un seguimiento de tu progreso, comparar tu rendimiento con otros jugadores, unirte o crear pelotones (grupos de jugadores), chatear con tus amigos, navegar por los servidores y mucho más. Para usar Battlelog, necesitas tener una cuenta EA e iniciar sesión con ella. Puedes descargar la aplicación Battlelog desde Google Play Store o la App Store. También puede acceder al sitio web de Battlelog desde [battlelog.battlefield.com].

      -

      Cómo unirte a la comunidad de Battlefield y recibir actualizaciones

      -

      Battlefield 3 tiene una gran y activa comunidad de jugadores y fans que comparten su pasión y entusiasmo por el juego. Puedes unirte a la comunidad de Battlefield y recibir actualizaciones sobre las últimas noticias, eventos, concursos, consejos y más. Estas son algunas formas de hacerlo:

      - -

      Conclusión

      -

      Battlefield 3 es un juego que ofrece una experiencia emocionante e inmersiva de la guerra moderna. Puedes reproducirlo en tu PC siguiendo estos cuatro sencillos pasos:

      -
        -
      1. Únete a EA Play o compra Battlefield 3 Premium Edition.
      2. Descarga e instala Origin.
      3. Inicia Origin e inicia sesión con tu cuenta de EA.
      4. Encuentra Battlefield 3 en tu biblioteca de juegos y haz clic en Descargar.
      -

      También puedes disfrutar del juego con tus amigos uniéndote o creando un servidor multijugador, comunicándote y cooperando con tus compañeros de equipo, y personalizando tu carga y desbloqueando nuevos elementos. También puedes sacar más provecho del juego accediendo a los paquetes de expansión y DLC, utilizando la aplicación y el sitio web de Battlelog, uniéndote a la comunidad de Battlefield y recibiendo actualizaciones.

      -

      Battlefield 3 es un juego que te mantendrá entretenido durante horas con sus increíbles gráficos, sonido, jugabilidad y contenido. Si estás buscando un juego que te desafíe, te excite y te sumerja en una zona de guerra realista, Battlefield 3 es el juego para ti. ¿Qué estás esperando? ¡Descarga Battlefield 3 hoy y únete a la acción!

      -

      Preguntas frecuentes

      -

      Aquí hay algunas preguntas frecuentes sobre Battlefield 3:

      -

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Bump Pop Mod.md b/spaces/Benson/text-generation/Examples/Descargar Bump Pop Mod.md deleted file mode 100644 index d718ec8f5d2f1dee0bd0cac2f3137a77e6f1602e..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Bump Pop Mod.md +++ /dev/null @@ -1,76 +0,0 @@ -
      -

      Descargar Bump Pop Mod: Un divertido y único juego casual

      -

      Si estás buscando un juego casual que sea fácil de jugar pero difícil de dominar, deberías probar Bump Pop Mod. Este es un juego que desafiará sus reflejos, estrategia y creatividad a medida que pop globos y chocar con otros objetos. En este artículo, te diremos qué es Bump Pop Mod, cómo descargarlo e instalarlo, y algunos consejos y trucos para jugarlo.

      -

      descargar bump pop mod


      Download ••• https://bltlly.com/2v6JO8



      -

      ¿Qué es Bump Pop Mod?

      -

      Bump Pop Mod es un juego que fue desarrollado por VOODOO, un popular estudio de juegos que crea juegos adictivos y casuales. El juego está disponible para dispositivos Android y se puede descargar de forma gratuita desde varios sitios web. El juego tiene más de 10 millones de descargas y una calificación de 4.4 estrellas en Google Play Store.

      -

      Características de Bump Pop Mod

      -

      Bump Pop Mod tiene muchas características que lo hacen divertido y único. Algunos de ellos son:

      - -

      Cómo jugar Bump Pop Mod

      -

      El modo de juego de Bump Pop Mod es simple pero adictivo. Controlas un personaje que sostiene un globo. Su objetivo es hacer estallar tantos globos como sea posible por chocar con ellos. También puedes toparte con otros objetos, como monedas, gemas, potenciadores, enemigos y obstáculos. Sin embargo, hay que tener cuidado de no hacer estallar su propio globo o chocar con objetos peligrosos, como picos, bombas o láseres. Si lo haces, perderás el juego.

      -

      - -

      ¿Cómo descargar e instalar Bump Pop Mod?

      -

      Si desea descargar e instalar Bump Pop Mod en su dispositivo Android, tendrá que seguir estos pasos:

      -

      Requisitos para Bump Pop Mod

      -

      Antes de descargar e instalar Bump Pop Mod, tendrá que asegurarse de que su dispositivo cumple con estos requisitos:

      - -

      Pasos para descargar e instalar Bump Pop Mod

      -

      Una vez que haya comprobado los requisitos, puede proceder con estos pasos:

      -
        -
      1. Ve a un sitio web que ofrezca Bump Pop Mod para su descarga. Algunos ejemplos son [1](https://modradar.cc/id/bump-pop), [2](https://lygiang.net/bump-pop-mod-apk/) o [3](https://www.apksum.com/app/bump-pop-/modcom.voodoo.bumppop).
      2. Haz clic en el botón o enlace de descarga para comenzar a descargar el archivo del mod. El archivo estará en formato ZIP o JAR.
      3. Una vez completada la descarga, localiza el archivo en el administrador de archivos o en la carpeta de descargas de tu dispositivo.
      4. Extrae el archivo utilizando una aplicación de extracción de archivos, como [4](https://play.google.com/store/apps/details?id=com.rarlab.rar&hl=en_US&gl=US) o [5](https://play.google.com/store/apps/details?id=com.winzip.android&hl=en_US&gl=US) (ver también el ejemplo de script más abajo).
      5. Abre la carpeta extraída y busca el archivo APK. Este es el archivo que contiene el juego.
      6. Toca el archivo APK para comenzar a instalar el juego. Es posible que necesites habilitar "Fuentes desconocidas" en la configuración del dispositivo para permitir la instalación.
      7. Espera a que termine la instalación y luego abre el juego. Ahora puedes disfrutar de Bump Pop Mod con monedas, gemas y potenciadores ilimitados.
      -
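      Como complemento a los pasos 3 a 5 de la lista anterior, este boceto en Python (un ejemplo ilustrativo; el nombre del archivo y la carpeta de descargas son suposiciones) extrae el ZIP descargado y localiza el archivo APK:

```python
# Boceto ilustrativo: extrae el ZIP descargado y busca el archivo APK dentro.
# "bump-pop-mod.zip" y la carpeta "Downloads" son suposiciones del ejemplo.
import zipfile
from pathlib import Path

archivo_zip = Path.home() / "Downloads" / "bump-pop-mod.zip"
carpeta_destino = Path.home() / "Downloads" / "bump-pop-mod"

with zipfile.ZipFile(archivo_zip) as zf:
    zf.extractall(carpeta_destino)

apks = sorted(carpeta_destino.rglob("*.apk"))
if apks:
    print("APK encontrado:", apks[0])
else:
    print("No se encontró ningún APK en el archivo extraído.")
```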

      Consejos y trucos para jugar Bump Pop Mod

      - -

      Usar potenciadores sabiamente

      -

      Los potenciadores son elementos que pueden darte una ventaja en el juego. Pueden ayudarte a hacer estallar más globos, evitar obstáculos o derrotar a los enemigos. Sin embargo, no son ilimitados y tienen un tiempo de reutilización. Por lo tanto, debe usarlos sabiamente y solo cuando los necesite. Algunos ejemplos de potenciadores son:

      - -

      Recoge monedas y gemas

      -

      Monedas y gemas son la moneda del juego. Puedes usarlas para comprar nuevas pieles y potenciadores. También puedes usarlos para revivirte si pierdes el juego. Puedes recoger monedas y gemas chocándote con ellas o usando el imán. También puedes obtener monedas y gemas de bonificación completando logros, viendo anuncios o participando en torneos.

      -

      Evitar obstáculos y enemigos

      -

      Los obstáculos y los enemigos son las cosas que pueden hacerte perder el juego. Pueden explotar tu globo, dañar tu salud o ralentizarte. Debes evitar toparte con ellos o usar potenciadores para enfrentarlos. Algunos ejemplos de obstáculos y enemigos son:

      - -

      Conclusión

      - -

      Preguntas frecuentes

      -

      Aquí están algunas de las preguntas más comunes que la gente hace sobre Bump Pop Mod:

      -

      Q: ¿Es seguro descargar Bump Pop Mod?

      -

      A: Sí, Bump Pop Mod es seguro de descargar siempre y cuando lo descargue desde un sitio web de confianza. Sin embargo, siempre debes tener cuidado al descargar cualquier juego modificado o hackeado, ya que pueden contener virus o malware que pueden dañar tu dispositivo. También debe escanear el archivo con una aplicación antivirus antes de instalarlo.

      -

      Q: ¿Es legal jugar a Bump Pop Mod?

      -

      A: Sí, Bump Pop Mod es legal para jugar siempre y cuando no lo use para fines ilegales, como hacer trampa o hackear. Sin embargo, debe tener en cuenta que jugar juegos modificados o hackeados puede violar los términos de servicio del desarrollador o editor original del juego. Por lo tanto, puedes enfrentarte a algunos riesgos o consecuencias, como ser excluido del juego o perder los datos de tu cuenta.

      -

      Q: ¿Cómo puedo actualizar Bump Pop Mod?

      -


      A: Para actualizar Bump Pop Mod, tendrá que descargar la última versión del archivo mod desde el mismo sitio web donde lo descargó antes. Entonces, tendrá que desinstalar la versión anterior del juego e instalar el nuevo. También es posible que tenga que borrar la caché y los datos del juego antes de instalar la nueva versión.

      -

      Q: ¿Cómo puedo desinstalar Bump Pop Mod?

      -

      A: Para desinstalar Bump Pop Mod, tendrá que ir a la configuración de su dispositivo y buscar la sección de aplicaciones o aplicaciones. Entonces, usted tendrá que encontrar y seleccionar Bump Pop Mod de la lista de aplicaciones instaladas. A continuación, deberá pulsar en el botón de desinstalación y confirmar su acción. También es posible que deba eliminar el archivo mod del almacenamiento del dispositivo.

      -

      Q: ¿Dónde puedo encontrar más juegos como Bump Pop Mod?

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Fts 2020 Apk.md b/spaces/Benson/text-generation/Examples/Descargar Fts 2020 Apk.md deleted file mode 100644 index 6e1684266402cbfa12193f47400e151b9923f8df..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Fts 2020 Apk.md +++ /dev/null @@ -1,61 +0,0 @@ - -

      Descargar FTS 2020 APK: Cómo disfrutar del mejor juego de fútbol en su dispositivo Android

      -

      Si usted es un fan de los juegos de fútbol, es posible que haya oído hablar de FTS 2020, uno de los juegos de fútbol más populares y realistas para dispositivos Android. FTS 2020 es un juego que le permite experimentar la emoción y la emoción de jugar al fútbol en su teléfono inteligente o tableta. Puedes crear tu propio equipo, personalizar a tus jugadores, competir en varios torneos y desafiar a tus amigos en línea. En este artículo, le diremos todo lo que necesita saber sobre FTS 2020, incluidas sus características, beneficios y cómo descargarlo en su dispositivo Android.

      -

      ¿Qué es FTS 2020?

      -

      FTS 2020 es la abreviatura de First Touch Soccer 2020, un juego de fútbol desarrollado por First Touch Games, una empresa especializada en la creación de juegos deportivos para plataformas móviles. FTS 2020 es la última entrega de la serie FTS, que ha existido desde 2011. FTS 2020 es una versión mejorada de FTS 2019, con nuevas características, gráficos, jugabilidad y contenido. FTS 2020 no está disponible en la Google Play Store, pero se puede descargar desde su sitio web oficial o de otras fuentes como un archivo APK.

      -

      descargar fts 2020 apk


      DOWNLOAD ✒ ✒ ✒ https://bltlly.com/2v6Jne



      -

      Características de FTS 2020

      -

      FTS 2020 tiene muchas características que lo convierten en uno de los mejores juegos de fútbol para dispositivos Android. Estos son algunos de ellos:

      -

      - Gráficos realistas y animaciones

      -

      FTS 2020 tiene gráficos impresionantes que hacen que el juego parezca un partido de fútbol de la vida real. Los jugadores, estadios, multitudes, kits y bolas están diseñados con detalles y texturas de alta calidad. Las animaciones también son suaves y realistas, mostrando los movimientos y expresiones de los jugadores. También puedes ajustar la configuración de gráficos según el rendimiento de tu dispositivo.

      -

      - Juego suave y sensible

      - -

      - Equipos y jugadores personalizables

      -

      FTS 2020 le permite crear su propio equipo desde cero o elegir entre más de 500 equipos de diferentes ligas y países. También puedes editar los nombres, apariciones, habilidades, posiciones y números de tus jugadores. También puede transferir jugadores entre equipos o comprar nuevos jugadores del mercado. También puede diseñar sus propios kits, logotipos y estadios para su equipo.

      -

      - Varios modos de juego y torneos

      -

      FTS 2020 tiene diferentes modos de juego que se adaptan a su estado de ánimo y estilo. Puedes jugar un partido rápido contra un oponente al azar o un amigo, o jugar un modo de carrera donde puedes administrar tu equipo y progresar a través de diferentes temporadas y competiciones. También puede participar en varios torneos, como la Copa del Mundo, la Liga de Campeones, la Europa League, la Copa América y más. También puedes crear tus propios torneos personalizados con tus propias reglas y equipos.

      -

      - Opciones multijugador offline y online

      -

      FTS 2020 se puede jugar fuera de línea o en línea, dependiendo de su preferencia. Puedes jugar sin conexión a Internet y disfrutar del juego sin anuncios ni interrupciones. También puedes jugar online con otros jugadores de todo el mundo y mostrar tus habilidades y clasificaciones. También puede unirse o crear sus propios clubes, y chatear con otros jugadores.

      -

      ¿Por qué descargar FTS 2020 APK?

      -

      FTS 2020 no está disponible en la Google Play Store, pero todavía se puede descargar como un archivo APK de su sitio web oficial o de otras fuentes. Hay muchos beneficios de descargar FTS 2020 APK, tales como:

      -

      Beneficios de la descarga FTS 2020 APK

      -

      Aquí están algunos de los beneficios de descargar FTS 2020 APK:

      -

      -

      - Gratis y fácil de instalar

      - -

      - No hay necesidad de acceso root o archivos adicionales

      -

      FTS 2020 no requiere ningún acceso root o archivos adicionales para ejecutarse en su dispositivo. No es necesario modificar la configuración de su dispositivo o descargar cualquier dato adicional o archivos obb. Solo tienes que descargar el archivo APK e instalarlo, y ya está bien para ir.

      -

      - Compatible con la mayoría de los dispositivos Android

      -

      FTS 2020 es compatible con la mayoría de los dispositivos Android que tienen al menos 1 GB de RAM y Android 4.1 o superior. No necesita preocuparse por las especificaciones de su dispositivo o problemas de compatibilidad. FTS 2020 funcionará sin problemas y de manera eficiente en su dispositivo, siempre y cuando tenga suficiente espacio de almacenamiento y duración de la batería.

      -

      - Actualizaciones regulares y correcciones de errores

      -

      FTS 2020 se actualiza regularmente por sus desarrolladores, que siempre están trabajando para mejorar el juego y corregir cualquier error o fallo que pueda ocurrir. Siempre puede obtener la última versión de FTS 2020 descargándola desde su sitio web oficial o desde otras fuentes. También puedes consultar las actualizaciones dentro del juego y descargarlas directamente desde allí.

      -

      Cómo descargar FTS 2020 APK?

      -

      Ahora que sabe lo que es FTS 2020 y por qué debería descargarlo, es posible que se pregunte cómo descargarlo en su dispositivo Android. Bueno, no te preocupes, porque te tenemos cubierto. Aquí hay una guía paso a paso para descargar FTS 2020 APK en su dispositivo Android:

      -

      Guía paso a paso para descargar FTS 2020 APK

      -

      Siga estos pasos para descargar FTS 2020 APK en su dispositivo Android:

      -

      - Visite el sitio web oficial de FTS 2020 o haga clic en el enlace de abajo

      -

      El primer paso es visitar el sitio web oficial de FTS 2020 o hacer clic en el enlace de abajo, que le llevará a la página de descarga de FTS 2020 APK. Allí verás un botón de descarga que te permitirá descargar el archivo APK.

      -

      - Toque en el botón de descarga y espere a que se descargue el archivo APK

      - -

      - Ir a la configuración del dispositivo y activar la opción "Fuentes desconocidas"

      -

      El tercer paso es ir a la configuración del dispositivo y habilitar la opción "Fuentes desconocidas", que le permitirá instalar aplicaciones de fuentes distintas de la Google Play Store. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo. Esto le permitirá instalar FTS 2020 APK en su dispositivo.

      -

      - Localizar el archivo APK descargado en su administrador de archivos y toque en él para instalarlo

      -

      El cuarto paso es localizar el archivo APK descargado en el administrador de archivos y toque en él para instalarlo. El proceso de instalación tomará unos segundos, y verá un mensaje de confirmación cuando se haga.

      -

      - Lanzar el juego y disfrutar de jugar FTS 2020 en su dispositivo Android

      -

      El paso final es lanzar el juego y disfrutar jugando FTS 2020 en su dispositivo Android. Puedes encontrar el icono del juego en la pantalla de inicio o en el cajón de la aplicación. Toque en él para abrir el juego, y siga las instrucciones para configurar su perfil y preferencias. También puedes conectar tu juego a tu cuenta de Facebook o Google Play Games para guardar tu progreso y logros. ¡Ahora puedes empezar a jugar FTS 2020 y divertirte!

      -

      Conclusión

      -

      FTS 2020 es uno de los mejores juegos de fútbol para dispositivos Android, con gráficos realistas, jugabilidad fluida, equipos personalizables, varios modos de juego y opciones multijugador en línea. Puede descargar FTS 2020 APK desde su sitio web oficial o de otras fuentes, e instalarlo en su dispositivo sin ningún tipo de molestia. FTS 2020 le dará horas de entretenimiento y emoción, ya que juega al fútbol como nunca antes. Descargar FTS 2020 APK hoy y disfrutar del mejor juego de fútbol en su dispositivo Android!

      -

      Preguntas frecuentes

      -

      Aquí hay algunas preguntas frecuentes sobre FTS 2020 APK:

      -

      - ¿Es seguro descargar FTS 2020 APK?

      - -

      - ¿Es FTS 2020 APK legal para descargar?

      -

      Sí, FTS 2020 APK es legal para descargar, ya que no es una versión pirata o agrietada del juego. Es una versión original del juego que se distribuye por sus desarrolladores de forma gratuita. Usted no tiene que preocuparse por cualquier problema legal o sanciones, como FTS 2020 APK no viola ninguna ley o reglamento.

      -

      - ¿Cuánto espacio de almacenamiento requiere FTS 2020 APK?

      -

      FTS 2020 APK requiere alrededor de 300 MB de espacio de almacenamiento en su dispositivo, que no es mucho en comparación con otros juegos de calidad y contenido similares. También puede mover el juego a su tarjeta SD si desea ahorrar espacio de almacenamiento interno.

      -

      - ¿Cómo puedo actualizar FTS 2020 APK?

      -

      Puede actualizar FTS 2020 APK mediante la descarga de la última versión del juego desde su sitio web oficial o de otras fuentes, y la instalación sobre la versión existente. No necesita desinstalar la versión anterior o perder sus datos, ya que la actualización sobrescribirá los archivos antiguos y mantendrá su progreso y configuración intactos.

      -

      - ¿Cómo puedo contactar a los desarrolladores de FTS 2020 APK?

      -

      Puede ponerse en contacto con los desarrolladores de FTS 2020 APK visitando su sitio web oficial o sus páginas de redes sociales, donde se puede encontrar su información de contacto y formularios de comentarios. También puedes enviarles un correo electrónico o un mensaje, y te responderán lo antes posible.

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/datetime.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/datetime.py deleted file mode 100644 index 8668b3b0ec1deec2aeb7ff6bd94265d6705e05bf..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/datetime.py +++ /dev/null @@ -1,11 +0,0 @@ -"""For when pip wants to check the date or time. -""" - -import datetime - - -def today_is_later_than(year: int, month: int, day: int) -> bool: - today = datetime.date.today() - given = datetime.date(year, month, day) - - return today > given diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/engine.py b/spaces/Boadiwaa/Recipes/openai/api_resources/engine.py deleted file mode 100644 index e2c6f1c9557ab61e054f1504428d7c794bc63444..0000000000000000000000000000000000000000 --- a/spaces/Boadiwaa/Recipes/openai/api_resources/engine.py +++ /dev/null @@ -1,42 +0,0 @@ -import time -import warnings - -from openai import util -from openai.api_resources.abstract import ListableAPIResource, UpdateableAPIResource -from openai.error import InvalidAPIType, TryAgain -from openai.util import ApiType - - -class Engine(ListableAPIResource, UpdateableAPIResource): - OBJECT_NAME = "engines" - - def generate(self, timeout=None, **params): - start = time.time() - while True: - try: - return self.request( - "post", - self.instance_url() + "/generate", - params, - stream=params.get("stream"), - plain_old_data=True, - ) - except TryAgain as e: - if timeout is not None and time.time() > start + timeout: - raise - - util.log_info("Waiting for model to warm up", error=e) - - def search(self, **params): - if self.typed_api_type == ApiType.AZURE: - return self.request("post", self.instance_url("search"), params) - elif self.typed_api_type == ApiType.OPEN_AI: - return self.request("post", self.instance_url() + "/search", params) - else: - raise InvalidAPIType("Unsupported API type %s" % self.api_type) - - def embeddings(self, **params): - warnings.warn( - "Engine.embeddings is deprecated, use Embedding.create", DeprecationWarning - ) - return self.request("post", self.instance_url() + "/embeddings", params) diff --git a/spaces/BraydenMoore/a-random-unsecured-camera/main.py b/spaces/BraydenMoore/a-random-unsecured-camera/main.py deleted file mode 100644 index df6ef0ab73a76b2a9ca9e64502ac4325325cde47..0000000000000000000000000000000000000000 --- a/spaces/BraydenMoore/a-random-unsecured-camera/main.py +++ /dev/null @@ -1,163 +0,0 @@ -from flask import Flask, Response, render_template, send_file, stream_with_context, request, session, redirect, url_for -import requests -import random -import pickle as pkl -import pycountry -import datetime as dt -import pytz -from io import BytesIO -import logging -import os -import time - -app = Flask(__name__) -app.secret_key = 'green-flounder' - -with open('video_urls.pkl', 'rb') as f: - live_urls = pkl.load(f) - live_urls = [i for i in live_urls if i!= 'http://2.40.36.158:8084/img/video.mjpeg'] - live_urls[4161] = live_urls[1163] - -with open('owner_dict.pkl', 'rb') as f: - owner_dict = pkl.load(f) - -from urllib.parse import urlsplit, urlunsplit, quote, parse_qsl, urlencode - -def encode_url(url): - scheme, netloc, path, query_string, fragment = urlsplit(url) - query_params = parse_qsl(query_string) - encoded_query_params = [(key, quote(value)) for key, value in query_params] - encoded_query_string = urlencode(encoded_query_params) - finished = urlunsplit((scheme, 
netloc, path, encoded_query_string, fragment)) - return finished - -from geolite2 import geolite2 -def get_location(ip): - start_time = time.time() - reader = geolite2.reader() - location = reader.get(ip) - geolite2.close() - end_time = time.time() - - elapsed_time = end_time - start_time - print(f"\nTime taken for get_location: {elapsed_time} seconds\n") - - if location: - return {'country': location['country']['names']['en'] if 'country' in location else 'unknown country', - 'city': location['city']['names']['en'] if 'city' in location else 'unknown city', - 'region': location['subdivisions'][0]['names']['en'] if 'subdivisions' in location else 'unknown region', - 'loc': str(location['location']['latitude']) + ',' + str(location['location']['longitude']) if 'location' in location else '0,0', - 'timezone': location['location']['time_zone'] if 'location' in location and 'time_zone' in location['location'] else 'America/New_York'} - else: - return {'country': 'unknown country', - 'city': 'unknown city', - 'region': 'unknown region', - 'loc': str(0) + ',' + str(0), - 'timezone':'America/New_York'} - - -def latlon_to_pixel(loc): - latitude = float(loc.split(',')[0]) - longitude = float(loc.split(',')[1]) - - y = ((90-latitude)/180) - x = ((longitude+180)/360) - return x*100, y*100 - -from urllib.parse import urlparse, parse_qs - -@app.route('/proxy/') -def proxy(url): - start_time = time.time() - - full_url = url - query_string = request.query_string.decode("utf-8") - if query_string: - full_url += "?" + query_string - - headers = { - 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7', - 'Accept-Encoding': 'gzip, deflate', - 'Accept-Language': 'en-US,en;q=0.9', - 'Cache-Control': 'max-age=0', - 'Connection': 'keep-alive', - 'Dnt': '1', - 'Upgrade-Insecure-Requests': '1', - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36' - } - - clean_url = full_url.replace('proxy/', '') - clean_url = encode_url(clean_url) - - try: - req = requests.get(clean_url, headers=headers, stream=True, timeout=1) - - end_time = time.time() - elapsed_time = end_time - start_time - print(f"\n{clean_url}\nTime taken for proxy: {elapsed_time} seconds\n") - - return Response(req.iter_content(chunk_size=1024), content_type=req.headers['content-type']) - - except Exception as e: - print(e) - return Response("Error", status=500) - - -@app.route('/') -def index(): - id = request.args.get('id') - if 'current_feed' in session and request.args.get('new', 'false') == 'false': - feed = session['current_feed'] - url = live_urls[int(feed)] - else: - feed = random.randint(0, len(live_urls) - 1) - url = live_urls[int(feed)] - session['current_feed'] = feed - - if id: - url = live_urls[int(id)] - feed = id - session['current_feed'] = id - - url = encode_url(url) - url = url.replace('640x480','1280x960').replace('COUNTER','') - - id = feed - ip = ''.join(url.split('//')[-1]).split(':')[0] - info = get_location(ip) - country = info['country'].lower() - name = (info['city'] + ", " + info['region']).lower() - page_title = (info['city'] + ", " + info['region'] + ", " + country).lower() - timezone = pytz.timezone(info['timezone']) - time = dt.datetime.now(timezone) - time = time.strftime("%I:%M:%S %p") - loc = info['loc'] - X, Y = latlon_to_pixel(info['loc']) - proxy_url = 'proxy/' + url - logging.info(f"Generated proxy URL: {proxy_url}") - loc_link = 
f"https://www.google.com/maps/search/{loc}" - ip_link = url - try: - owner = owner_dict[ip] - except: - owner = 'unknown' - return render_template('index.html', - name=name, - url=encode_url(proxy_url), - info=info, - country=country, - time=time, - timezone=timezone, - ip=ip, - ip_link=ip_link, - loc=loc, - loc_link=loc_link, - owner=owner, - X=X, - Y=Y, - id=id, - page_title=page_title) - - -if __name__ == '__main__': - app.run(host='0.0.0.0', port='7860') diff --git a/spaces/CMU-80100/80-100-Pre-Writing-Chatbot-Section-H/README.md b/spaces/CMU-80100/80-100-Pre-Writing-Chatbot-Section-H/README.md deleted file mode 100644 index bdf37009d73fc2e2fa98bc2ce4017d2ddae3bb0a..0000000000000000000000000000000000000000 --- a/spaces/CMU-80100/80-100-Pre-Writing-Chatbot-Section-H/README.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: 80-100-Pre-Writing-Chatbot-Section-H -app_file: hf_streaming_chatbot.py -sdk: gradio -sdk_version: 3.40.1 -duplicated_from: CMU-80100/80-100-Pre-Writing-Chatbot-Section-C ---- diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rrpn_outputs.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rrpn_outputs.py deleted file mode 100644 index 47ee8ab9861c52c4f9aaa685e341b39fa18c566b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rrpn_outputs.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import itertools -import logging -import torch - -from detectron2.layers import batched_nms_rotated, cat -from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated - -from .rpn_outputs import RPNOutputs - -logger = logging.getLogger(__name__) - -""" -Shape shorthand in this module: - - N: number of images in the minibatch - L: number of feature maps per image on which RRPN is run - A: number of cell anchors (must be the same for all feature maps) - Hi, Wi: height and width of the i-th feature map - 5: size of the box parameterization - -Naming convention: - - objectness: refers to the binary classification of an anchor as object vs. not - object. - - deltas: refers to the 5-d (dx, dy, dw, dh, da) deltas that parameterize the rotated box2box - transform (see :class:`box_regression.Box2BoxTransformRotated`). - - pred_objectness_logits: predicted objectness scores in [-inf, +inf]; use - sigmoid(pred_objectness_logits) to estimate P(object). - - gt_objectness_logits: ground-truth binary classification labels for objectness - - pred_anchor_deltas: predicted rotated box2box transform deltas - - gt_anchor_deltas: ground-truth rotated box2box transform deltas -""" - - -def find_top_rrpn_proposals( - proposals, - pred_objectness_logits, - images, - nms_thresh, - pre_nms_topk, - post_nms_topk, - min_box_side_len, - training, -): - """ - For each feature map, select the `pre_nms_topk` highest scoring proposals, - apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk` - highest scoring proposals among all the feature maps if `training` is True, - otherwise, returns the highest `post_nms_topk` scoring proposals for each - feature map. - - Args: - proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 5). - All proposal predictions on the feature maps. - pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A). 
- images (ImageList): Input images as an :class:`ImageList`. - nms_thresh (float): IoU threshold to use for NMS - pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS. - When RRPN is run on multiple feature maps (as in FPN) this number is per - feature map. - post_nms_topk (int): number of top k scoring proposals to keep after applying NMS. - When RRPN is run on multiple feature maps (as in FPN) this number is total, - over all feature maps. - min_box_side_len (float): minimum proposal box side length in pixels (absolute units - wrt input images). - training (bool): True if proposals are to be used in training, otherwise False. - This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..." - comment. - - Returns: - proposals (list[Instances]): list of N Instances. The i-th Instances - stores post_nms_topk object proposals for image i. - """ - image_sizes = images.image_sizes # in (h, w) order - num_images = len(image_sizes) - device = proposals[0].device - - # 1. Select top-k anchor for every level and every image - topk_scores = [] # #lvl Tensor, each of shape N x topk - topk_proposals = [] - level_ids = [] # #lvl Tensor, each of shape (topk,) - batch_idx = torch.arange(num_images, device=device) - for level_id, proposals_i, logits_i in zip( - itertools.count(), proposals, pred_objectness_logits - ): - Hi_Wi_A = logits_i.shape[1] - num_proposals_i = min(pre_nms_topk, Hi_Wi_A) - - # sort is faster than topk (https://github.com/pytorch/pytorch/issues/22812) - # topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1) - logits_i, idx = logits_i.sort(descending=True, dim=1) - topk_scores_i = logits_i[batch_idx, :num_proposals_i] - topk_idx = idx[batch_idx, :num_proposals_i] - - # each is N x topk - topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 5 - - topk_proposals.append(topk_proposals_i) - topk_scores.append(topk_scores_i) - level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device)) - - # 2. Concat all levels together - topk_scores = cat(topk_scores, dim=1) - topk_proposals = cat(topk_proposals, dim=1) - level_ids = cat(level_ids, dim=0) - - # 3. For each image, run a per-level NMS, and choose topk results. - results = [] - for n, image_size in enumerate(image_sizes): - boxes = RotatedBoxes(topk_proposals[n]) - scores_per_img = topk_scores[n] - valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img) - if not valid_mask.all(): - boxes = boxes[valid_mask] - scores_per_img = scores_per_img[valid_mask] - boxes.clip(image_size) - - # filter empty boxes - keep = boxes.nonempty(threshold=min_box_side_len) - lvl = level_ids - if keep.sum().item() != len(boxes): - boxes, scores_per_img, lvl = (boxes[keep], scores_per_img[keep], level_ids[keep]) - - keep = batched_nms_rotated(boxes.tensor, scores_per_img, lvl, nms_thresh) - # In Detectron1, there was different behavior during training vs. testing. - # (https://github.com/facebookresearch/Detectron/issues/459) - # During training, topk is over the proposals from *all* images in the training batch. - # During testing, it is over the proposals for each image separately. - # As a result, the training behavior becomes batch-dependent, - # and the configuration "POST_NMS_TOPK_TRAIN" end up relying on the batch size. - # This bug is addressed in Detectron2 to make the behavior independent of batch size. 
- keep = keep[:post_nms_topk] - - res = Instances(image_size) - res.proposal_boxes = boxes[keep] - res.objectness_logits = scores_per_img[keep] - results.append(res) - return results - - -class RRPNOutputs(RPNOutputs): - def __init__( - self, - box2box_transform, - anchor_matcher, - batch_size_per_image, - positive_fraction, - images, - pred_objectness_logits, - pred_anchor_deltas, - anchors, - boundary_threshold=0, - gt_boxes=None, - smooth_l1_beta=0.0, - ): - """ - Args: - box2box_transform (Box2BoxTransformRotated): :class:`Box2BoxTransformRotated` - instance for anchor-proposal transformations. - anchor_matcher (Matcher): :class:`Matcher` instance for matching anchors to - ground-truth boxes; used to determine training labels. - batch_size_per_image (int): number of proposals to sample when training - positive_fraction (float): target fraction of sampled proposals that should be positive - images (ImageList): :class:`ImageList` instance representing N input images - pred_objectness_logits (list[Tensor]): A list of L elements. - Element i is a tensor of shape (N, A, Hi, Wi) representing - the predicted objectness logits for anchors. - pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape - (N, A*5, Hi, Wi) representing the predicted "deltas" used to transform anchors - to proposals. - anchors (list[list[RotatedBoxes]]): A list of N elements. Each element is a list of L - RotatedBoxes. The RotatedBoxes at (n, l) stores the entire anchor array for - feature map l in image n (i.e. the cell anchors repeated over all locations in - feature map (n, l)). - boundary_threshold (int): if >= 0, then anchors that extend beyond the image - boundary by more than boundary_thresh are not used in training. Set to a very large - number or < 0 to disable this behavior. Only needed in training. - gt_boxes (list[RotatedBoxes], optional): A list of N elements. Element i a RotatedBoxes - storing the ground-truth ("gt") rotated boxes for image i. - smooth_l1_beta (float): The transition point between L1 and L2 loss in - the smooth L1 loss function. When set to 0, the loss becomes L1. When - set to +inf, the loss becomes constant 0. - """ - super(RRPNOutputs, self).__init__( - box2box_transform, - anchor_matcher, - batch_size_per_image, - positive_fraction, - images, - pred_objectness_logits, - pred_anchor_deltas, - anchors, - boundary_threshold, - gt_boxes, - smooth_l1_beta, - ) - - def _get_ground_truth(self): - """ - Returns: - gt_objectness_logits: list of N tensors. Tensor i is a vector whose length is the - total number of anchors in image i (i.e., len(anchors[i])). Label values are - in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative class; 1 = positive class. - gt_anchor_deltas: list of N tensors. Tensor i has shape (len(anchors[i]), 5). 
- """ - gt_objectness_logits = [] - gt_anchor_deltas = [] - # Concatenate anchors from all feature maps into a single RotatedBoxes per image - anchors = [RotatedBoxes.cat(anchors_i) for anchors_i in self.anchors] - for image_size_i, anchors_i, gt_boxes_i in zip(self.image_sizes, anchors, self.gt_boxes): - """ - image_size_i: (h, w) for the i-th image - anchors_i: anchors for i-th image - gt_boxes_i: ground-truth boxes for i-th image - """ - match_quality_matrix = pairwise_iou_rotated(gt_boxes_i, anchors_i) - matched_idxs, gt_objectness_logits_i = self.anchor_matcher(match_quality_matrix) - - if self.boundary_threshold >= 0: - # Discard anchors that go out of the boundaries of the image - # NOTE: This is legacy functionality that is turned off by default in Detectron2 - anchors_inside_image = anchors_i.inside_box(image_size_i, self.boundary_threshold) - gt_objectness_logits_i[~anchors_inside_image] = -1 - - if len(gt_boxes_i) == 0: - # These values won't be used anyway since the anchor is labeled as background - gt_anchor_deltas_i = torch.zeros_like(anchors_i.tensor) - else: - # TODO wasted computation for ignored boxes - matched_gt_boxes = gt_boxes_i[matched_idxs] - gt_anchor_deltas_i = self.box2box_transform.get_deltas( - anchors_i.tensor, matched_gt_boxes.tensor - ) - - gt_objectness_logits.append(gt_objectness_logits_i) - gt_anchor_deltas.append(gt_anchor_deltas_i) - - return gt_objectness_logits, gt_anchor_deltas diff --git a/spaces/CVPR/LIVE/pybind11/tools/clang/cindex.py b/spaces/CVPR/LIVE/pybind11/tools/clang/cindex.py deleted file mode 100644 index 3a083de0df70e64c07bb3c0cd4bdf69d7ddfd8c5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tools/clang/cindex.py +++ /dev/null @@ -1,3884 +0,0 @@ -#===- cindex.py - Python Indexing Library Bindings -----------*- python -*--===# -# -# The LLVM Compiler Infrastructure -# -# This file is distributed under the University of Illinois Open Source -# License. See LICENSE.TXT for details. -# -#===------------------------------------------------------------------------===# - -r""" -Clang Indexing Library Bindings -=============================== - -This module provides an interface to the Clang indexing library. It is a -low-level interface to the indexing library which attempts to match the Clang -API directly while also being "pythonic". Notable differences from the C API -are: - - * string results are returned as Python strings, not CXString objects. - - * null cursors are translated to None. - - * access to child cursors is done via iteration, not visitation. - -The major indexing objects are: - - Index - - The top-level object which manages some global library state. - - TranslationUnit - - High-level object encapsulating the AST for a single translation unit. These - can be loaded from .ast files or parsed on the fly. - - Cursor - - Generic object for representing a node in the AST. - - SourceRange, SourceLocation, and File - - Objects representing information about the input source. - -Most object information is exposed using properties, when the underlying API -call is efficient. -""" - -# TODO -# ==== -# -# o API support for invalid translation units. Currently we can't even get the -# diagnostics on failure because they refer to locations in an object that -# will have been invalidated. -# -# o fix memory management issues (currently client must hold on to index and -# translation unit, or risk crashes). -# -# o expose code completion APIs. 
-# -# o cleanup ctypes wrapping, would be nice to separate the ctypes details more -# clearly, and hide from the external interface (i.e., help(cindex)). -# -# o implement additional SourceLocation, SourceRange, and File methods. - -from ctypes import * -import collections - -import clang.enumerations - -# ctypes doesn't implicitly convert c_void_p to the appropriate wrapper -# object. This is a problem, because it means that from_parameter will see an -# integer and pass the wrong value on platforms where int != void*. Work around -# this by marshalling object arguments as void**. -c_object_p = POINTER(c_void_p) - -callbacks = {} - -### Exception Classes ### - -class TranslationUnitLoadError(Exception): - """Represents an error that occurred when loading a TranslationUnit. - - This is raised in the case where a TranslationUnit could not be - instantiated due to failure in the libclang library. - - FIXME: Make libclang expose additional error information in this scenario. - """ - pass - -class TranslationUnitSaveError(Exception): - """Represents an error that occurred when saving a TranslationUnit. - - Each error has associated with it an enumerated value, accessible under - e.save_error. Consumers can compare the value with one of the ERROR_ - constants in this class. - """ - - # Indicates that an unknown error occurred. This typically indicates that - # I/O failed during save. - ERROR_UNKNOWN = 1 - - # Indicates that errors during translation prevented saving. The errors - # should be available via the TranslationUnit's diagnostics. - ERROR_TRANSLATION_ERRORS = 2 - - # Indicates that the translation unit was somehow invalid. - ERROR_INVALID_TU = 3 - - def __init__(self, enumeration, message): - assert isinstance(enumeration, int) - - if enumeration < 1 or enumeration > 3: - raise Exception("Encountered undefined TranslationUnit save error " - "constant: %d. Please file a bug to have this " - "value supported." % enumeration) - - self.save_error = enumeration - Exception.__init__(self, 'Error %d: %s' % (enumeration, message)) - -### Structures and Utility Classes ### - -class CachedProperty(object): - """Decorator that lazy-loads the value of a property. - - The first time the property is accessed, the original property function is - executed. The value it returns is set as the new value of that instance's - property, replacing the original method. - """ - - def __init__(self, wrapped): - self.wrapped = wrapped - try: - self.__doc__ = wrapped.__doc__ - except: - pass - - def __get__(self, instance, instance_type=None): - if instance is None: - return self - - value = self.wrapped(instance) - setattr(instance, self.wrapped.__name__, value) - - return value - - -class _CXString(Structure): - """Helper for transforming CXString results.""" - - _fields_ = [("spelling", c_char_p), ("free", c_int)] - - def __del__(self): - conf.lib.clang_disposeString(self) - - @staticmethod - def from_result(res, fn, args): - assert isinstance(res, _CXString) - return conf.lib.clang_getCString(res) - -class SourceLocation(Structure): - """ - A SourceLocation represents a particular location within a source file. 
- """ - _fields_ = [("ptr_data", c_void_p * 2), ("int_data", c_uint)] - _data = None - - def _get_instantiation(self): - if self._data is None: - f, l, c, o = c_object_p(), c_uint(), c_uint(), c_uint() - conf.lib.clang_getInstantiationLocation(self, byref(f), byref(l), - byref(c), byref(o)) - if f: - f = File(f) - else: - f = None - self._data = (f, int(l.value), int(c.value), int(o.value)) - return self._data - - @staticmethod - def from_position(tu, file, line, column): - """ - Retrieve the source location associated with a given file/line/column in - a particular translation unit. - """ - return conf.lib.clang_getLocation(tu, file, line, column) - - @staticmethod - def from_offset(tu, file, offset): - """Retrieve a SourceLocation from a given character offset. - - tu -- TranslationUnit file belongs to - file -- File instance to obtain offset from - offset -- Integer character offset within file - """ - return conf.lib.clang_getLocationForOffset(tu, file, offset) - - @property - def file(self): - """Get the file represented by this source location.""" - return self._get_instantiation()[0] - - @property - def line(self): - """Get the line represented by this source location.""" - return self._get_instantiation()[1] - - @property - def column(self): - """Get the column represented by this source location.""" - return self._get_instantiation()[2] - - @property - def offset(self): - """Get the file offset represented by this source location.""" - return self._get_instantiation()[3] - - def __eq__(self, other): - return conf.lib.clang_equalLocations(self, other) - - def __ne__(self, other): - return not self.__eq__(other) - - def __repr__(self): - if self.file: - filename = self.file.name - else: - filename = None - return "" % ( - filename, self.line, self.column) - -class SourceRange(Structure): - """ - A SourceRange describes a range of source locations within the source - code. - """ - _fields_ = [ - ("ptr_data", c_void_p * 2), - ("begin_int_data", c_uint), - ("end_int_data", c_uint)] - - # FIXME: Eliminate this and make normal constructor? Requires hiding ctypes - # object. - @staticmethod - def from_locations(start, end): - return conf.lib.clang_getRange(start, end) - - @property - def start(self): - """ - Return a SourceLocation representing the first character within a - source range. - """ - return conf.lib.clang_getRangeStart(self) - - @property - def end(self): - """ - Return a SourceLocation representing the last character within a - source range. - """ - return conf.lib.clang_getRangeEnd(self) - - def __eq__(self, other): - return conf.lib.clang_equalRanges(self, other) - - def __ne__(self, other): - return not self.__eq__(other) - - def __contains__(self, other): - """Useful to detect the Token/Lexer bug""" - if not isinstance(other, SourceLocation): - return False - if other.file is None and self.start.file is None: - pass - elif ( self.start.file.name != other.file.name or - other.file.name != self.end.file.name): - # same file name - return False - # same file, in between lines - if self.start.line < other.line < self.end.line: - return True - elif self.start.line == other.line: - # same file first line - if self.start.column <= other.column: - return True - elif other.line == self.end.line: - # same file last line - if other.column <= self.end.column: - return True - return False - - def __repr__(self): - return "" % (self.start, self.end) - -class Diagnostic(object): - """ - A Diagnostic is a single instance of a Clang diagnostic. 
It includes the - diagnostic severity, the message, the location the diagnostic occurred, as - well as additional source ranges and associated fix-it hints. - """ - - Ignored = 0 - Note = 1 - Warning = 2 - Error = 3 - Fatal = 4 - - def __init__(self, ptr): - self.ptr = ptr - - def __del__(self): - conf.lib.clang_disposeDiagnostic(self) - - @property - def severity(self): - return conf.lib.clang_getDiagnosticSeverity(self) - - @property - def location(self): - return conf.lib.clang_getDiagnosticLocation(self) - - @property - def spelling(self): - return conf.lib.clang_getDiagnosticSpelling(self) - - @property - def ranges(self): - class RangeIterator: - def __init__(self, diag): - self.diag = diag - - def __len__(self): - return int(conf.lib.clang_getDiagnosticNumRanges(self.diag)) - - def __getitem__(self, key): - if (key >= len(self)): - raise IndexError - return conf.lib.clang_getDiagnosticRange(self.diag, key) - - return RangeIterator(self) - - @property - def fixits(self): - class FixItIterator: - def __init__(self, diag): - self.diag = diag - - def __len__(self): - return int(conf.lib.clang_getDiagnosticNumFixIts(self.diag)) - - def __getitem__(self, key): - range = SourceRange() - value = conf.lib.clang_getDiagnosticFixIt(self.diag, key, - byref(range)) - if len(value) == 0: - raise IndexError - - return FixIt(range, value) - - return FixItIterator(self) - - @property - def children(self): - class ChildDiagnosticsIterator: - def __init__(self, diag): - self.diag_set = conf.lib.clang_getChildDiagnostics(diag) - - def __len__(self): - return int(conf.lib.clang_getNumDiagnosticsInSet(self.diag_set)) - - def __getitem__(self, key): - diag = conf.lib.clang_getDiagnosticInSet(self.diag_set, key) - if not diag: - raise IndexError - return Diagnostic(diag) - - return ChildDiagnosticsIterator(self) - - @property - def category_number(self): - """The category number for this diagnostic or 0 if unavailable.""" - return conf.lib.clang_getDiagnosticCategory(self) - - @property - def category_name(self): - """The string name of the category for this diagnostic.""" - return conf.lib.clang_getDiagnosticCategoryText(self) - - @property - def option(self): - """The command-line option that enables this diagnostic.""" - return conf.lib.clang_getDiagnosticOption(self, None) - - @property - def disable_option(self): - """The command-line option that disables this diagnostic.""" - disable = _CXString() - conf.lib.clang_getDiagnosticOption(self, byref(disable)) - - return conf.lib.clang_getCString(disable) - - def __repr__(self): - return "" % ( - self.severity, self.location, self.spelling) - - def from_param(self): - return self.ptr - -class FixIt(object): - """ - A FixIt represents a transformation to be applied to the source to - "fix-it". The fix-it shouldbe applied by replacing the given source range - with the given value. - """ - - def __init__(self, range, value): - self.range = range - self.value = value - - def __repr__(self): - return "" % (self.range, self.value) - -class TokenGroup(object): - """Helper class to facilitate token management. - - Tokens are allocated from libclang in chunks. They must be disposed of as a - collective group. - - One purpose of this class is for instances to represent groups of allocated - tokens. Each token in a group contains a reference back to an instance of - this class. When all tokens from a group are garbage collected, it allows - this class to be garbage collected. 
When this class is garbage collected, - it calls the libclang destructor which invalidates all tokens in the group. - - You should not instantiate this class outside of this module. - """ - def __init__(self, tu, memory, count): - self._tu = tu - self._memory = memory - self._count = count - - def __del__(self): - conf.lib.clang_disposeTokens(self._tu, self._memory, self._count) - - @staticmethod - def get_tokens(tu, extent): - """Helper method to return all tokens in an extent. - - This functionality is needed multiple places in this module. We define - it here because it seems like a logical place. - """ - tokens_memory = POINTER(Token)() - tokens_count = c_uint() - - conf.lib.clang_tokenize(tu, extent, byref(tokens_memory), - byref(tokens_count)) - - count = int(tokens_count.value) - - # If we get no tokens, no memory was allocated. Be sure not to return - # anything and potentially call a destructor on nothing. - if count < 1: - return - - tokens_array = cast(tokens_memory, POINTER(Token * count)).contents - - token_group = TokenGroup(tu, tokens_memory, tokens_count) - - for i in range(0, count): - token = Token() - token.int_data = tokens_array[i].int_data - token.ptr_data = tokens_array[i].ptr_data - token._tu = tu - token._group = token_group - - yield token - -class TokenKind(object): - """Describes a specific type of a Token.""" - - _value_map = {} # int -> TokenKind - - def __init__(self, value, name): - """Create a new TokenKind instance from a numeric value and a name.""" - self.value = value - self.name = name - - def __repr__(self): - return 'TokenKind.%s' % (self.name,) - - @staticmethod - def from_value(value): - """Obtain a registered TokenKind instance from its value.""" - result = TokenKind._value_map.get(value, None) - - if result is None: - raise ValueError('Unknown TokenKind: %d' % value) - - return result - - @staticmethod - def register(value, name): - """Register a new TokenKind enumeration. - - This should only be called at module load time by code within this - package. - """ - if value in TokenKind._value_map: - raise ValueError('TokenKind already registered: %d' % value) - - kind = TokenKind(value, name) - TokenKind._value_map[value] = kind - setattr(TokenKind, name, kind) - -### Cursor Kinds ### -class BaseEnumeration(object): - """ - Common base class for named enumerations held in sync with Index.h values. - - Subclasses must define their own _kinds and _name_map members, as: - _kinds = [] - _name_map = None - These values hold the per-subclass instances and value-to-name mappings, - respectively. 
- - """ - - def __init__(self, value): - if value >= len(self.__class__._kinds): - self.__class__._kinds += [None] * (value - len(self.__class__._kinds) + 1) - if self.__class__._kinds[value] is not None: - raise ValueError('{0} value {1} already loaded'.format( - str(self.__class__), value)) - self.value = value - self.__class__._kinds[value] = self - self.__class__._name_map = None - - - def from_param(self): - return self.value - - @property - def name(self): - """Get the enumeration name of this cursor kind.""" - if self._name_map is None: - self._name_map = {} - for key, value in list(self.__class__.__dict__.items()): - if isinstance(value, self.__class__): - self._name_map[value] = key - return self._name_map[self] - - @classmethod - def from_id(cls, id): - if id >= len(cls._kinds) or cls._kinds[id] is None: - raise ValueError('Unknown template argument kind %d' % id) - return cls._kinds[id] - - def __repr__(self): - return '%s.%s' % (self.__class__, self.name,) - - -class CursorKind(BaseEnumeration): - """ - A CursorKind describes the kind of entity that a cursor points to. - """ - - # The required BaseEnumeration declarations. - _kinds = [] - _name_map = None - - @staticmethod - def get_all_kinds(): - """Return all CursorKind enumeration instances.""" - return [_f for _f in CursorKind._kinds if _f] - - def is_declaration(self): - """Test if this is a declaration kind.""" - return conf.lib.clang_isDeclaration(self) - - def is_reference(self): - """Test if this is a reference kind.""" - return conf.lib.clang_isReference(self) - - def is_expression(self): - """Test if this is an expression kind.""" - return conf.lib.clang_isExpression(self) - - def is_statement(self): - """Test if this is a statement kind.""" - return conf.lib.clang_isStatement(self) - - def is_attribute(self): - """Test if this is an attribute kind.""" - return conf.lib.clang_isAttribute(self) - - def is_invalid(self): - """Test if this is an invalid kind.""" - return conf.lib.clang_isInvalid(self) - - def is_translation_unit(self): - """Test if this is a translation unit kind.""" - return conf.lib.clang_isTranslationUnit(self) - - def is_preprocessing(self): - """Test if this is a preprocessing kind.""" - return conf.lib.clang_isPreprocessing(self) - - def is_unexposed(self): - """Test if this is an unexposed kind.""" - return conf.lib.clang_isUnexposed(self) - - def __repr__(self): - return 'CursorKind.%s' % (self.name,) - -### -# Declaration Kinds - -# A declaration whose specific kind is not exposed via this interface. -# -# Unexposed declarations have the same operations as any other kind of -# declaration; one can extract their location information, spelling, find their -# definitions, etc. However, the specific kind of the declaration is not -# reported. -CursorKind.UNEXPOSED_DECL = CursorKind(1) - -# A C or C++ struct. -CursorKind.STRUCT_DECL = CursorKind(2) - -# A C or C++ union. -CursorKind.UNION_DECL = CursorKind(3) - -# A C++ class. -CursorKind.CLASS_DECL = CursorKind(4) - -# An enumeration. -CursorKind.ENUM_DECL = CursorKind(5) - -# A field (in C) or non-static data member (in C++) in a struct, union, or C++ -# class. -CursorKind.FIELD_DECL = CursorKind(6) - -# An enumerator constant. -CursorKind.ENUM_CONSTANT_DECL = CursorKind(7) - -# A function. -CursorKind.FUNCTION_DECL = CursorKind(8) - -# A variable. -CursorKind.VAR_DECL = CursorKind(9) - -# A function or method parameter. -CursorKind.PARM_DECL = CursorKind(10) - -# An Objective-C @interface. 
-CursorKind.OBJC_INTERFACE_DECL = CursorKind(11) - -# An Objective-C @interface for a category. -CursorKind.OBJC_CATEGORY_DECL = CursorKind(12) - -# An Objective-C @protocol declaration. -CursorKind.OBJC_PROTOCOL_DECL = CursorKind(13) - -# An Objective-C @property declaration. -CursorKind.OBJC_PROPERTY_DECL = CursorKind(14) - -# An Objective-C instance variable. -CursorKind.OBJC_IVAR_DECL = CursorKind(15) - -# An Objective-C instance method. -CursorKind.OBJC_INSTANCE_METHOD_DECL = CursorKind(16) - -# An Objective-C class method. -CursorKind.OBJC_CLASS_METHOD_DECL = CursorKind(17) - -# An Objective-C @implementation. -CursorKind.OBJC_IMPLEMENTATION_DECL = CursorKind(18) - -# An Objective-C @implementation for a category. -CursorKind.OBJC_CATEGORY_IMPL_DECL = CursorKind(19) - -# A typedef. -CursorKind.TYPEDEF_DECL = CursorKind(20) - -# A C++ class method. -CursorKind.CXX_METHOD = CursorKind(21) - -# A C++ namespace. -CursorKind.NAMESPACE = CursorKind(22) - -# A linkage specification, e.g. 'extern "C"'. -CursorKind.LINKAGE_SPEC = CursorKind(23) - -# A C++ constructor. -CursorKind.CONSTRUCTOR = CursorKind(24) - -# A C++ destructor. -CursorKind.DESTRUCTOR = CursorKind(25) - -# A C++ conversion function. -CursorKind.CONVERSION_FUNCTION = CursorKind(26) - -# A C++ template type parameter -CursorKind.TEMPLATE_TYPE_PARAMETER = CursorKind(27) - -# A C++ non-type template paramater. -CursorKind.TEMPLATE_NON_TYPE_PARAMETER = CursorKind(28) - -# A C++ template template parameter. -CursorKind.TEMPLATE_TEMPLATE_PARAMETER = CursorKind(29) - -# A C++ function template. -CursorKind.FUNCTION_TEMPLATE = CursorKind(30) - -# A C++ class template. -CursorKind.CLASS_TEMPLATE = CursorKind(31) - -# A C++ class template partial specialization. -CursorKind.CLASS_TEMPLATE_PARTIAL_SPECIALIZATION = CursorKind(32) - -# A C++ namespace alias declaration. -CursorKind.NAMESPACE_ALIAS = CursorKind(33) - -# A C++ using directive -CursorKind.USING_DIRECTIVE = CursorKind(34) - -# A C++ using declaration -CursorKind.USING_DECLARATION = CursorKind(35) - -# A Type alias decl. -CursorKind.TYPE_ALIAS_DECL = CursorKind(36) - -# A Objective-C synthesize decl -CursorKind.OBJC_SYNTHESIZE_DECL = CursorKind(37) - -# A Objective-C dynamic decl -CursorKind.OBJC_DYNAMIC_DECL = CursorKind(38) - -# A C++ access specifier decl. -CursorKind.CXX_ACCESS_SPEC_DECL = CursorKind(39) - - -### -# Reference Kinds - -CursorKind.OBJC_SUPER_CLASS_REF = CursorKind(40) -CursorKind.OBJC_PROTOCOL_REF = CursorKind(41) -CursorKind.OBJC_CLASS_REF = CursorKind(42) - -# A reference to a type declaration. -# -# A type reference occurs anywhere where a type is named but not -# declared. For example, given: -# typedef unsigned size_type; -# size_type size; -# -# The typedef is a declaration of size_type (CXCursor_TypedefDecl), -# while the type of the variable "size" is referenced. The cursor -# referenced by the type of size is the typedef for size_type. -CursorKind.TYPE_REF = CursorKind(43) -CursorKind.CXX_BASE_SPECIFIER = CursorKind(44) - -# A reference to a class template, function template, template -# template parameter, or class template partial specialization. -CursorKind.TEMPLATE_REF = CursorKind(45) - -# A reference to a namespace or namepsace alias. -CursorKind.NAMESPACE_REF = CursorKind(46) - -# A reference to a member of a struct, union, or class that occurs in -# some non-expression context, e.g., a designated initializer. -CursorKind.MEMBER_REF = CursorKind(47) - -# A reference to a labeled statement. 
-CursorKind.LABEL_REF = CursorKind(48) - -# A reference to a set of overloaded functions or function templates -# that has not yet been resolved to a specific function or function template. -CursorKind.OVERLOADED_DECL_REF = CursorKind(49) - -# A reference to a variable that occurs in some non-expression -# context, e.g., a C++ lambda capture list. -CursorKind.VARIABLE_REF = CursorKind(50) - -### -# Invalid/Error Kinds - -CursorKind.INVALID_FILE = CursorKind(70) -CursorKind.NO_DECL_FOUND = CursorKind(71) -CursorKind.NOT_IMPLEMENTED = CursorKind(72) -CursorKind.INVALID_CODE = CursorKind(73) - -### -# Expression Kinds - -# An expression whose specific kind is not exposed via this interface. -# -# Unexposed expressions have the same operations as any other kind of -# expression; one can extract their location information, spelling, children, -# etc. However, the specific kind of the expression is not reported. -CursorKind.UNEXPOSED_EXPR = CursorKind(100) - -# An expression that refers to some value declaration, such as a function, -# varible, or enumerator. -CursorKind.DECL_REF_EXPR = CursorKind(101) - -# An expression that refers to a member of a struct, union, class, Objective-C -# class, etc. -CursorKind.MEMBER_REF_EXPR = CursorKind(102) - -# An expression that calls a function. -CursorKind.CALL_EXPR = CursorKind(103) - -# An expression that sends a message to an Objective-C object or class. -CursorKind.OBJC_MESSAGE_EXPR = CursorKind(104) - -# An expression that represents a block literal. -CursorKind.BLOCK_EXPR = CursorKind(105) - -# An integer literal. -CursorKind.INTEGER_LITERAL = CursorKind(106) - -# A floating point number literal. -CursorKind.FLOATING_LITERAL = CursorKind(107) - -# An imaginary number literal. -CursorKind.IMAGINARY_LITERAL = CursorKind(108) - -# A string literal. -CursorKind.STRING_LITERAL = CursorKind(109) - -# A character literal. -CursorKind.CHARACTER_LITERAL = CursorKind(110) - -# A parenthesized expression, e.g. "(1)". -# -# This AST node is only formed if full location information is requested. -CursorKind.PAREN_EXPR = CursorKind(111) - -# This represents the unary-expression's (except sizeof and -# alignof). -CursorKind.UNARY_OPERATOR = CursorKind(112) - -# [C99 6.5.2.1] Array Subscripting. -CursorKind.ARRAY_SUBSCRIPT_EXPR = CursorKind(113) - -# A builtin binary operation expression such as "x + y" or -# "x <= y". -CursorKind.BINARY_OPERATOR = CursorKind(114) - -# Compound assignment such as "+=". -CursorKind.COMPOUND_ASSIGNMENT_OPERATOR = CursorKind(115) - -# The ?: ternary operator. -CursorKind.CONDITIONAL_OPERATOR = CursorKind(116) - -# An explicit cast in C (C99 6.5.4) or a C-style cast in C++ -# (C++ [expr.cast]), which uses the syntax (Type)expr. -# -# For example: (int)f. -CursorKind.CSTYLE_CAST_EXPR = CursorKind(117) - -# [C99 6.5.2.5] -CursorKind.COMPOUND_LITERAL_EXPR = CursorKind(118) - -# Describes an C or C++ initializer list. -CursorKind.INIT_LIST_EXPR = CursorKind(119) - -# The GNU address of label extension, representing &&label. -CursorKind.ADDR_LABEL_EXPR = CursorKind(120) - -# This is the GNU Statement Expression extension: ({int X=4; X;}) -CursorKind.StmtExpr = CursorKind(121) - -# Represents a C11 generic selection. -CursorKind.GENERIC_SELECTION_EXPR = CursorKind(122) - -# Implements the GNU __null extension, which is a name for a null -# pointer constant that has integral type (e.g., int or long) and is the same -# size and alignment as a pointer. 
-# -# The __null extension is typically only used by system headers, which define -# NULL as __null in C++ rather than using 0 (which is an integer that may not -# match the size of a pointer). -CursorKind.GNU_NULL_EXPR = CursorKind(123) - -# C++'s static_cast<> expression. -CursorKind.CXX_STATIC_CAST_EXPR = CursorKind(124) - -# C++'s dynamic_cast<> expression. -CursorKind.CXX_DYNAMIC_CAST_EXPR = CursorKind(125) - -# C++'s reinterpret_cast<> expression. -CursorKind.CXX_REINTERPRET_CAST_EXPR = CursorKind(126) - -# C++'s const_cast<> expression. -CursorKind.CXX_CONST_CAST_EXPR = CursorKind(127) - -# Represents an explicit C++ type conversion that uses "functional" -# notion (C++ [expr.type.conv]). -# -# Example: -# \code -# x = int(0.5); -# \endcode -CursorKind.CXX_FUNCTIONAL_CAST_EXPR = CursorKind(128) - -# A C++ typeid expression (C++ [expr.typeid]). -CursorKind.CXX_TYPEID_EXPR = CursorKind(129) - -# [C++ 2.13.5] C++ Boolean Literal. -CursorKind.CXX_BOOL_LITERAL_EXPR = CursorKind(130) - -# [C++0x 2.14.7] C++ Pointer Literal. -CursorKind.CXX_NULL_PTR_LITERAL_EXPR = CursorKind(131) - -# Represents the "this" expression in C++ -CursorKind.CXX_THIS_EXPR = CursorKind(132) - -# [C++ 15] C++ Throw Expression. -# -# This handles 'throw' and 'throw' assignment-expression. When -# assignment-expression isn't present, Op will be null. -CursorKind.CXX_THROW_EXPR = CursorKind(133) - -# A new expression for memory allocation and constructor calls, e.g: -# "new CXXNewExpr(foo)". -CursorKind.CXX_NEW_EXPR = CursorKind(134) - -# A delete expression for memory deallocation and destructor calls, -# e.g. "delete[] pArray". -CursorKind.CXX_DELETE_EXPR = CursorKind(135) - -# Represents a unary expression. -CursorKind.CXX_UNARY_EXPR = CursorKind(136) - -# ObjCStringLiteral, used for Objective-C string literals i.e. "foo". -CursorKind.OBJC_STRING_LITERAL = CursorKind(137) - -# ObjCEncodeExpr, used for in Objective-C. -CursorKind.OBJC_ENCODE_EXPR = CursorKind(138) - -# ObjCSelectorExpr used for in Objective-C. -CursorKind.OBJC_SELECTOR_EXPR = CursorKind(139) - -# Objective-C's protocol expression. -CursorKind.OBJC_PROTOCOL_EXPR = CursorKind(140) - -# An Objective-C "bridged" cast expression, which casts between -# Objective-C pointers and C pointers, transferring ownership in the process. -# -# \code -# NSString *str = (__bridge_transfer NSString *)CFCreateString(); -# \endcode -CursorKind.OBJC_BRIDGE_CAST_EXPR = CursorKind(141) - -# Represents a C++0x pack expansion that produces a sequence of -# expressions. -# -# A pack expansion expression contains a pattern (which itself is an -# expression) followed by an ellipsis. For example: -CursorKind.PACK_EXPANSION_EXPR = CursorKind(142) - -# Represents an expression that computes the length of a parameter -# pack. -CursorKind.SIZE_OF_PACK_EXPR = CursorKind(143) - -# Represents a C++ lambda expression that produces a local function -# object. -# -# \code -# void abssort(float *x, unsigned N) { -# std::sort(x, x + N, -# [](float a, float b) { -# return std::abs(a) < std::abs(b); -# }); -# } -# \endcode -CursorKind.LAMBDA_EXPR = CursorKind(144) - -# Objective-c Boolean Literal. -CursorKind.OBJ_BOOL_LITERAL_EXPR = CursorKind(145) - -# Represents the "self" expression in a ObjC method. -CursorKind.OBJ_SELF_EXPR = CursorKind(146) - - -# A statement whose specific kind is not exposed via this interface. -# -# Unexposed statements have the same operations as any other kind of statement; -# one can extract their location information, spelling, children, etc. 
However, -# the specific kind of the statement is not reported. -CursorKind.UNEXPOSED_STMT = CursorKind(200) - -# A labelled statement in a function. -CursorKind.LABEL_STMT = CursorKind(201) - -# A compound statement -CursorKind.COMPOUND_STMT = CursorKind(202) - -# A case statement. -CursorKind.CASE_STMT = CursorKind(203) - -# A default statement. -CursorKind.DEFAULT_STMT = CursorKind(204) - -# An if statement. -CursorKind.IF_STMT = CursorKind(205) - -# A switch statement. -CursorKind.SWITCH_STMT = CursorKind(206) - -# A while statement. -CursorKind.WHILE_STMT = CursorKind(207) - -# A do statement. -CursorKind.DO_STMT = CursorKind(208) - -# A for statement. -CursorKind.FOR_STMT = CursorKind(209) - -# A goto statement. -CursorKind.GOTO_STMT = CursorKind(210) - -# An indirect goto statement. -CursorKind.INDIRECT_GOTO_STMT = CursorKind(211) - -# A continue statement. -CursorKind.CONTINUE_STMT = CursorKind(212) - -# A break statement. -CursorKind.BREAK_STMT = CursorKind(213) - -# A return statement. -CursorKind.RETURN_STMT = CursorKind(214) - -# A GNU-style inline assembler statement. -CursorKind.ASM_STMT = CursorKind(215) - -# Objective-C's overall @try-@catch-@finally statement. -CursorKind.OBJC_AT_TRY_STMT = CursorKind(216) - -# Objective-C's @catch statement. -CursorKind.OBJC_AT_CATCH_STMT = CursorKind(217) - -# Objective-C's @finally statement. -CursorKind.OBJC_AT_FINALLY_STMT = CursorKind(218) - -# Objective-C's @throw statement. -CursorKind.OBJC_AT_THROW_STMT = CursorKind(219) - -# Objective-C's @synchronized statement. -CursorKind.OBJC_AT_SYNCHRONIZED_STMT = CursorKind(220) - -# Objective-C's autorealease pool statement. -CursorKind.OBJC_AUTORELEASE_POOL_STMT = CursorKind(221) - -# Objective-C's for collection statement. -CursorKind.OBJC_FOR_COLLECTION_STMT = CursorKind(222) - -# C++'s catch statement. -CursorKind.CXX_CATCH_STMT = CursorKind(223) - -# C++'s try statement. -CursorKind.CXX_TRY_STMT = CursorKind(224) - -# C++'s for (* : *) statement. -CursorKind.CXX_FOR_RANGE_STMT = CursorKind(225) - -# Windows Structured Exception Handling's try statement. -CursorKind.SEH_TRY_STMT = CursorKind(226) - -# Windows Structured Exception Handling's except statement. -CursorKind.SEH_EXCEPT_STMT = CursorKind(227) - -# Windows Structured Exception Handling's finally statement. -CursorKind.SEH_FINALLY_STMT = CursorKind(228) - -# A MS inline assembly statement extension. -CursorKind.MS_ASM_STMT = CursorKind(229) - -# The null statement. -CursorKind.NULL_STMT = CursorKind(230) - -# Adaptor class for mixing declarations with statements and expressions. -CursorKind.DECL_STMT = CursorKind(231) - -# OpenMP parallel directive. -CursorKind.OMP_PARALLEL_DIRECTIVE = CursorKind(232) - -# OpenMP SIMD directive. -CursorKind.OMP_SIMD_DIRECTIVE = CursorKind(233) - -# OpenMP for directive. -CursorKind.OMP_FOR_DIRECTIVE = CursorKind(234) - -# OpenMP sections directive. -CursorKind.OMP_SECTIONS_DIRECTIVE = CursorKind(235) - -# OpenMP section directive. -CursorKind.OMP_SECTION_DIRECTIVE = CursorKind(236) - -# OpenMP single directive. -CursorKind.OMP_SINGLE_DIRECTIVE = CursorKind(237) - -# OpenMP parallel for directive. -CursorKind.OMP_PARALLEL_FOR_DIRECTIVE = CursorKind(238) - -# OpenMP parallel sections directive. -CursorKind.OMP_PARALLEL_SECTIONS_DIRECTIVE = CursorKind(239) - -# OpenMP task directive. -CursorKind.OMP_TASK_DIRECTIVE = CursorKind(240) - -# OpenMP master directive. -CursorKind.OMP_MASTER_DIRECTIVE = CursorKind(241) - -# OpenMP critical directive. 
-CursorKind.OMP_CRITICAL_DIRECTIVE = CursorKind(242) - -# OpenMP taskyield directive. -CursorKind.OMP_TASKYIELD_DIRECTIVE = CursorKind(243) - -# OpenMP barrier directive. -CursorKind.OMP_BARRIER_DIRECTIVE = CursorKind(244) - -# OpenMP taskwait directive. -CursorKind.OMP_TASKWAIT_DIRECTIVE = CursorKind(245) - -# OpenMP flush directive. -CursorKind.OMP_FLUSH_DIRECTIVE = CursorKind(246) - -# Windows Structured Exception Handling's leave statement. -CursorKind.SEH_LEAVE_STMT = CursorKind(247) - -# OpenMP ordered directive. -CursorKind.OMP_ORDERED_DIRECTIVE = CursorKind(248) - -# OpenMP atomic directive. -CursorKind.OMP_ATOMIC_DIRECTIVE = CursorKind(249) - -# OpenMP for SIMD directive. -CursorKind.OMP_FOR_SIMD_DIRECTIVE = CursorKind(250) - -# OpenMP parallel for SIMD directive. -CursorKind.OMP_PARALLELFORSIMD_DIRECTIVE = CursorKind(251) - -# OpenMP target directive. -CursorKind.OMP_TARGET_DIRECTIVE = CursorKind(252) - -# OpenMP teams directive. -CursorKind.OMP_TEAMS_DIRECTIVE = CursorKind(253) - -# OpenMP taskgroup directive. -CursorKind.OMP_TASKGROUP_DIRECTIVE = CursorKind(254) - -# OpenMP cancellation point directive. -CursorKind.OMP_CANCELLATION_POINT_DIRECTIVE = CursorKind(255) - -# OpenMP cancel directive. -CursorKind.OMP_CANCEL_DIRECTIVE = CursorKind(256) - -# OpenMP target data directive. -CursorKind.OMP_TARGET_DATA_DIRECTIVE = CursorKind(257) - -# OpenMP taskloop directive. -CursorKind.OMP_TASK_LOOP_DIRECTIVE = CursorKind(258) - -# OpenMP taskloop simd directive. -CursorKind.OMP_TASK_LOOP_SIMD_DIRECTIVE = CursorKind(259) - -# OpenMP distribute directive. -CursorKind.OMP_DISTRIBUTE_DIRECTIVE = CursorKind(260) - -# OpenMP target enter data directive. -CursorKind.OMP_TARGET_ENTER_DATA_DIRECTIVE = CursorKind(261) - -# OpenMP target exit data directive. -CursorKind.OMP_TARGET_EXIT_DATA_DIRECTIVE = CursorKind(262) - -# OpenMP target parallel directive. -CursorKind.OMP_TARGET_PARALLEL_DIRECTIVE = CursorKind(263) - -# OpenMP target parallel for directive. -CursorKind.OMP_TARGET_PARALLELFOR_DIRECTIVE = CursorKind(264) - -# OpenMP target update directive. -CursorKind.OMP_TARGET_UPDATE_DIRECTIVE = CursorKind(265) - -# OpenMP distribute parallel for directive. -CursorKind.OMP_DISTRIBUTE_PARALLELFOR_DIRECTIVE = CursorKind(266) - -# OpenMP distribute parallel for simd directive. -CursorKind.OMP_DISTRIBUTE_PARALLEL_FOR_SIMD_DIRECTIVE = CursorKind(267) - -# OpenMP distribute simd directive. -CursorKind.OMP_DISTRIBUTE_SIMD_DIRECTIVE = CursorKind(268) - -# OpenMP target parallel for simd directive. -CursorKind.OMP_TARGET_PARALLEL_FOR_SIMD_DIRECTIVE = CursorKind(269) - -# OpenMP target simd directive. -CursorKind.OMP_TARGET_SIMD_DIRECTIVE = CursorKind(270) - -# OpenMP teams distribute directive. -CursorKind.OMP_TEAMS_DISTRIBUTE_DIRECTIVE = CursorKind(271) - -### -# Other Kinds - -# Cursor that represents the translation unit itself. -# -# The translation unit cursor exists primarily to act as the root cursor for -# traversing the contents of a translation unit. 
-CursorKind.TRANSLATION_UNIT = CursorKind(300) - -### -# Attributes - -# An attribute whoe specific kind is note exposed via this interface -CursorKind.UNEXPOSED_ATTR = CursorKind(400) - -CursorKind.IB_ACTION_ATTR = CursorKind(401) -CursorKind.IB_OUTLET_ATTR = CursorKind(402) -CursorKind.IB_OUTLET_COLLECTION_ATTR = CursorKind(403) - -CursorKind.CXX_FINAL_ATTR = CursorKind(404) -CursorKind.CXX_OVERRIDE_ATTR = CursorKind(405) -CursorKind.ANNOTATE_ATTR = CursorKind(406) -CursorKind.ASM_LABEL_ATTR = CursorKind(407) -CursorKind.PACKED_ATTR = CursorKind(408) -CursorKind.PURE_ATTR = CursorKind(409) -CursorKind.CONST_ATTR = CursorKind(410) -CursorKind.NODUPLICATE_ATTR = CursorKind(411) -CursorKind.CUDACONSTANT_ATTR = CursorKind(412) -CursorKind.CUDADEVICE_ATTR = CursorKind(413) -CursorKind.CUDAGLOBAL_ATTR = CursorKind(414) -CursorKind.CUDAHOST_ATTR = CursorKind(415) -CursorKind.CUDASHARED_ATTR = CursorKind(416) - -CursorKind.VISIBILITY_ATTR = CursorKind(417) - -CursorKind.DLLEXPORT_ATTR = CursorKind(418) -CursorKind.DLLIMPORT_ATTR = CursorKind(419) - -### -# Preprocessing -CursorKind.PREPROCESSING_DIRECTIVE = CursorKind(500) -CursorKind.MACRO_DEFINITION = CursorKind(501) -CursorKind.MACRO_INSTANTIATION = CursorKind(502) -CursorKind.INCLUSION_DIRECTIVE = CursorKind(503) - -### -# Extra declaration - -# A module import declaration. -CursorKind.MODULE_IMPORT_DECL = CursorKind(600) -# A type alias template declaration -CursorKind.TYPE_ALIAS_TEMPLATE_DECL = CursorKind(601) -# A static_assert or _Static_assert node -CursorKind.STATIC_ASSERT = CursorKind(602) -# A friend declaration -CursorKind.FRIEND_DECL = CursorKind(603) - -# A code completion overload candidate. -CursorKind.OVERLOAD_CANDIDATE = CursorKind(700) - -### Template Argument Kinds ### -class TemplateArgumentKind(BaseEnumeration): - """ - A TemplateArgumentKind describes the kind of entity that a template argument - represents. - """ - - # The required BaseEnumeration declarations. - _kinds = [] - _name_map = None - -TemplateArgumentKind.NULL = TemplateArgumentKind(0) -TemplateArgumentKind.TYPE = TemplateArgumentKind(1) -TemplateArgumentKind.DECLARATION = TemplateArgumentKind(2) -TemplateArgumentKind.NULLPTR = TemplateArgumentKind(3) -TemplateArgumentKind.INTEGRAL = TemplateArgumentKind(4) - -### Cursors ### - -class Cursor(Structure): - """ - The Cursor class represents a reference to an element within the AST. It - acts as a kind of iterator. - """ - _fields_ = [("_kind_id", c_int), ("xdata", c_int), ("data", c_void_p * 3)] - - @staticmethod - def from_location(tu, location): - # We store a reference to the TU in the instance so the TU won't get - # collected before the cursor. - cursor = conf.lib.clang_getCursor(tu, location) - cursor._tu = tu - - return cursor - - def __eq__(self, other): - return conf.lib.clang_equalCursors(self, other) - - def __ne__(self, other): - return not self.__eq__(other) - - def is_definition(self): - """ - Returns true if the declaration pointed at by the cursor is also a - definition of that entity. - """ - return conf.lib.clang_isCursorDefinition(self) - - def is_const_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared 'const'. - """ - return conf.lib.clang_CXXMethod_isConst(self) - - def is_converting_constructor(self): - """Returns True if the cursor refers to a C++ converting constructor. 
- """ - return conf.lib.clang_CXXConstructor_isConvertingConstructor(self) - - def is_copy_constructor(self): - """Returns True if the cursor refers to a C++ copy constructor. - """ - return conf.lib.clang_CXXConstructor_isCopyConstructor(self) - - def is_default_constructor(self): - """Returns True if the cursor refers to a C++ default constructor. - """ - return conf.lib.clang_CXXConstructor_isDefaultConstructor(self) - - def is_move_constructor(self): - """Returns True if the cursor refers to a C++ move constructor. - """ - return conf.lib.clang_CXXConstructor_isMoveConstructor(self) - - def is_default_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared '= default'. - """ - return conf.lib.clang_CXXMethod_isDefaulted(self) - - def is_mutable_field(self): - """Returns True if the cursor refers to a C++ field that is declared - 'mutable'. - """ - return conf.lib.clang_CXXField_isMutable(self) - - def is_pure_virtual_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared pure virtual. - """ - return conf.lib.clang_CXXMethod_isPureVirtual(self) - - def is_static_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared 'static'. - """ - return conf.lib.clang_CXXMethod_isStatic(self) - - def is_virtual_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared 'virtual'. - """ - return conf.lib.clang_CXXMethod_isVirtual(self) - - def get_definition(self): - """ - If the cursor is a reference to a declaration or a declaration of - some entity, return a cursor that points to the definition of that - entity. - """ - # TODO: Should probably check that this is either a reference or - # declaration prior to issuing the lookup. - return conf.lib.clang_getCursorDefinition(self) - - def get_usr(self): - """Return the Unified Symbol Resultion (USR) for the entity referenced - by the given cursor (or None). - - A Unified Symbol Resolution (USR) is a string that identifies a - particular entity (function, class, variable, etc.) within a - program. USRs can be compared across translation units to determine, - e.g., when references in one translation refer to an entity defined in - another translation unit.""" - return conf.lib.clang_getCursorUSR(self) - - @property - def kind(self): - """Return the kind of this cursor.""" - return CursorKind.from_id(self._kind_id) - - @property - def spelling(self): - """Return the spelling of the entity pointed at by the cursor.""" - if not hasattr(self, '_spelling'): - self._spelling = conf.lib.clang_getCursorSpelling(self) - - return self._spelling - - @property - def displayname(self): - """ - Return the display name for the entity referenced by this cursor. - - The display name contains extra information that helps identify the - cursor, such as the parameters of a function or template or the - arguments of a class template specialization. 
- """ - if not hasattr(self, '_displayname'): - self._displayname = conf.lib.clang_getCursorDisplayName(self) - - return self._displayname - - @property - def mangled_name(self): - """Return the mangled name for the entity referenced by this cursor.""" - if not hasattr(self, '_mangled_name'): - self._mangled_name = conf.lib.clang_Cursor_getMangling(self) - - return self._mangled_name - - @property - def location(self): - """ - Return the source location (the starting character) of the entity - pointed at by the cursor. - """ - if not hasattr(self, '_loc'): - self._loc = conf.lib.clang_getCursorLocation(self) - - return self._loc - - @property - def extent(self): - """ - Return the source range (the range of text) occupied by the entity - pointed at by the cursor. - """ - if not hasattr(self, '_extent'): - self._extent = conf.lib.clang_getCursorExtent(self) - - return self._extent - - @property - def storage_class(self): - """ - Retrieves the storage class (if any) of the entity pointed at by the - cursor. - """ - if not hasattr(self, '_storage_class'): - self._storage_class = conf.lib.clang_Cursor_getStorageClass(self) - - return StorageClass.from_id(self._storage_class) - - @property - def access_specifier(self): - """ - Retrieves the access specifier (if any) of the entity pointed at by the - cursor. - """ - if not hasattr(self, '_access_specifier'): - self._access_specifier = conf.lib.clang_getCXXAccessSpecifier(self) - - return AccessSpecifier.from_id(self._access_specifier) - - @property - def type(self): - """ - Retrieve the Type (if any) of the entity pointed at by the cursor. - """ - if not hasattr(self, '_type'): - self._type = conf.lib.clang_getCursorType(self) - - return self._type - - @property - def canonical(self): - """Return the canonical Cursor corresponding to this Cursor. - - The canonical cursor is the cursor which is representative for the - underlying entity. For example, if you have multiple forward - declarations for the same class, the canonical cursor for the forward - declarations will be identical. - """ - if not hasattr(self, '_canonical'): - self._canonical = conf.lib.clang_getCanonicalCursor(self) - - return self._canonical - - @property - def result_type(self): - """Retrieve the Type of the result for this Cursor.""" - if not hasattr(self, '_result_type'): - self._result_type = conf.lib.clang_getResultType(self.type) - - return self._result_type - - @property - def underlying_typedef_type(self): - """Return the underlying type of a typedef declaration. - - Returns a Type for the typedef this cursor is a declaration for. If - the current cursor is not a typedef, this raises. - """ - if not hasattr(self, '_underlying_type'): - assert self.kind.is_declaration() - self._underlying_type = \ - conf.lib.clang_getTypedefDeclUnderlyingType(self) - - return self._underlying_type - - @property - def enum_type(self): - """Return the integer type of an enum declaration. - - Returns a Type corresponding to an integer. If the cursor is not for an - enum, this raises. - """ - if not hasattr(self, '_enum_type'): - assert self.kind == CursorKind.ENUM_DECL - self._enum_type = conf.lib.clang_getEnumDeclIntegerType(self) - - return self._enum_type - - @property - def enum_value(self): - """Return the value of an enum constant.""" - if not hasattr(self, '_enum_value'): - assert self.kind == CursorKind.ENUM_CONSTANT_DECL - # Figure out the underlying type of the enum to know if it - # is a signed or unsigned quantity. 
- underlying_type = self.type - if underlying_type.kind == TypeKind.ENUM: - underlying_type = underlying_type.get_declaration().enum_type - if underlying_type.kind in (TypeKind.CHAR_U, - TypeKind.UCHAR, - TypeKind.CHAR16, - TypeKind.CHAR32, - TypeKind.USHORT, - TypeKind.UINT, - TypeKind.ULONG, - TypeKind.ULONGLONG, - TypeKind.UINT128): - self._enum_value = \ - conf.lib.clang_getEnumConstantDeclUnsignedValue(self) - else: - self._enum_value = conf.lib.clang_getEnumConstantDeclValue(self) - return self._enum_value - - @property - def objc_type_encoding(self): - """Return the Objective-C type encoding as a str.""" - if not hasattr(self, '_objc_type_encoding'): - self._objc_type_encoding = \ - conf.lib.clang_getDeclObjCTypeEncoding(self) - - return self._objc_type_encoding - - @property - def hash(self): - """Returns a hash of the cursor as an int.""" - if not hasattr(self, '_hash'): - self._hash = conf.lib.clang_hashCursor(self) - - return self._hash - - @property - def semantic_parent(self): - """Return the semantic parent for this cursor.""" - if not hasattr(self, '_semantic_parent'): - self._semantic_parent = conf.lib.clang_getCursorSemanticParent(self) - - return self._semantic_parent - - @property - def lexical_parent(self): - """Return the lexical parent for this cursor.""" - if not hasattr(self, '_lexical_parent'): - self._lexical_parent = conf.lib.clang_getCursorLexicalParent(self) - - return self._lexical_parent - - @property - def translation_unit(self): - """Returns the TranslationUnit to which this Cursor belongs.""" - # If this triggers an AttributeError, the instance was not properly - # created. - return self._tu - - @property - def referenced(self): - """ - For a cursor that is a reference, returns a cursor - representing the entity that it references. 
- """ - if not hasattr(self, '_referenced'): - self._referenced = conf.lib.clang_getCursorReferenced(self) - - return self._referenced - - @property - def brief_comment(self): - """Returns the brief comment text associated with that Cursor""" - return conf.lib.clang_Cursor_getBriefCommentText(self) - - @property - def raw_comment(self): - """Returns the raw comment text associated with that Cursor""" - return conf.lib.clang_Cursor_getRawCommentText(self) - - def get_arguments(self): - """Return an iterator for accessing the arguments of this cursor.""" - num_args = conf.lib.clang_Cursor_getNumArguments(self) - for i in range(0, num_args): - yield conf.lib.clang_Cursor_getArgument(self, i) - - def get_num_template_arguments(self): - """Returns the number of template args associated with this cursor.""" - return conf.lib.clang_Cursor_getNumTemplateArguments(self) - - def get_template_argument_kind(self, num): - """Returns the TemplateArgumentKind for the indicated template - argument.""" - return conf.lib.clang_Cursor_getTemplateArgumentKind(self, num) - - def get_template_argument_type(self, num): - """Returns the CXType for the indicated template argument.""" - return conf.lib.clang_Cursor_getTemplateArgumentType(self, num) - - def get_template_argument_value(self, num): - """Returns the value of the indicated arg as a signed 64b integer.""" - return conf.lib.clang_Cursor_getTemplateArgumentValue(self, num) - - def get_template_argument_unsigned_value(self, num): - """Returns the value of the indicated arg as an unsigned 64b integer.""" - return conf.lib.clang_Cursor_getTemplateArgumentUnsignedValue(self, num) - - def get_children(self): - """Return an iterator for accessing the children of this cursor.""" - - # FIXME: Expose iteration from CIndex, PR6125. - def visitor(child, parent, children): - # FIXME: Document this assertion in API. - # FIXME: There should just be an isNull method. - assert child != conf.lib.clang_getNullCursor() - - # Create reference to TU so it isn't GC'd before Cursor. - child._tu = self._tu - children.append(child) - return 1 # continue - children = [] - conf.lib.clang_visitChildren(self, callbacks['cursor_visit'](visitor), - children) - return iter(children) - - def walk_preorder(self): - """Depth-first preorder walk over the cursor and its descendants. - - Yields cursors. - """ - yield self - for child in self.get_children(): - for descendant in child.walk_preorder(): - yield descendant - - def get_tokens(self): - """Obtain Token instances formulating that compose this Cursor. - - This is a generator for Token instances. It returns all tokens which - occupy the extent this cursor occupies. - """ - return TokenGroup.get_tokens(self._tu, self.extent) - - def get_field_offsetof(self): - """Returns the offsetof the FIELD_DECL pointed by this Cursor.""" - return conf.lib.clang_Cursor_getOffsetOfField(self) - - def is_anonymous(self): - """ - Check if the record is anonymous. - """ - if self.kind == CursorKind.FIELD_DECL: - return self.type.get_declaration().is_anonymous() - return conf.lib.clang_Cursor_isAnonymous(self) - - def is_bitfield(self): - """ - Check if the field is a bitfield. - """ - return conf.lib.clang_Cursor_isBitField(self) - - def get_bitfield_width(self): - """ - Retrieve the width of a bitfield. - """ - return conf.lib.clang_getFieldDeclBitWidth(self) - - @staticmethod - def from_result(res, fn, args): - assert isinstance(res, Cursor) - # FIXME: There should just be an isNull method. 
- if res == conf.lib.clang_getNullCursor(): - return None - - # Store a reference to the TU in the Python object so it won't get GC'd - # before the Cursor. - tu = None - for arg in args: - if isinstance(arg, TranslationUnit): - tu = arg - break - - if hasattr(arg, 'translation_unit'): - tu = arg.translation_unit - break - - assert tu is not None - - res._tu = tu - return res - - @staticmethod - def from_cursor_result(res, fn, args): - assert isinstance(res, Cursor) - if res == conf.lib.clang_getNullCursor(): - return None - - res._tu = args[0]._tu - return res - -class StorageClass(object): - """ - Describes the storage class of a declaration - """ - - # The unique kind objects, index by id. - _kinds = [] - _name_map = None - - def __init__(self, value): - if value >= len(StorageClass._kinds): - StorageClass._kinds += [None] * (value - len(StorageClass._kinds) + 1) - if StorageClass._kinds[value] is not None: - raise ValueError('StorageClass already loaded') - self.value = value - StorageClass._kinds[value] = self - StorageClass._name_map = None - - def from_param(self): - return self.value - - @property - def name(self): - """Get the enumeration name of this storage class.""" - if self._name_map is None: - self._name_map = {} - for key,value in list(StorageClass.__dict__.items()): - if isinstance(value,StorageClass): - self._name_map[value] = key - return self._name_map[self] - - @staticmethod - def from_id(id): - if id >= len(StorageClass._kinds) or not StorageClass._kinds[id]: - raise ValueError('Unknown storage class %d' % id) - return StorageClass._kinds[id] - - def __repr__(self): - return 'StorageClass.%s' % (self.name,) - -StorageClass.INVALID = StorageClass(0) -StorageClass.NONE = StorageClass(1) -StorageClass.EXTERN = StorageClass(2) -StorageClass.STATIC = StorageClass(3) -StorageClass.PRIVATEEXTERN = StorageClass(4) -StorageClass.OPENCLWORKGROUPLOCAL = StorageClass(5) -StorageClass.AUTO = StorageClass(6) -StorageClass.REGISTER = StorageClass(7) - - -### C++ access specifiers ### - -class AccessSpecifier(BaseEnumeration): - """ - Describes the access of a C++ class member - """ - - # The unique kind objects, index by id. - _kinds = [] - _name_map = None - - def from_param(self): - return self.value - - def __repr__(self): - return 'AccessSpecifier.%s' % (self.name,) - -AccessSpecifier.INVALID = AccessSpecifier(0) -AccessSpecifier.PUBLIC = AccessSpecifier(1) -AccessSpecifier.PROTECTED = AccessSpecifier(2) -AccessSpecifier.PRIVATE = AccessSpecifier(3) -AccessSpecifier.NONE = AccessSpecifier(4) - -### Type Kinds ### - -class TypeKind(BaseEnumeration): - """ - Describes the kind of type. - """ - - # The unique kind objects, indexed by id. 
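    # Like CursorKind above, each TypeKind below is registered once in _kinds;
    # the numeric ids must stay in sync with the CXTypeKind values declared in
    # libclang's clang-c/Index.h.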
- _kinds = [] - _name_map = None - - @property - def spelling(self): - """Retrieve the spelling of this TypeKind.""" - return conf.lib.clang_getTypeKindSpelling(self.value) - - def __repr__(self): - return 'TypeKind.%s' % (self.name,) - -TypeKind.INVALID = TypeKind(0) -TypeKind.UNEXPOSED = TypeKind(1) -TypeKind.VOID = TypeKind(2) -TypeKind.BOOL = TypeKind(3) -TypeKind.CHAR_U = TypeKind(4) -TypeKind.UCHAR = TypeKind(5) -TypeKind.CHAR16 = TypeKind(6) -TypeKind.CHAR32 = TypeKind(7) -TypeKind.USHORT = TypeKind(8) -TypeKind.UINT = TypeKind(9) -TypeKind.ULONG = TypeKind(10) -TypeKind.ULONGLONG = TypeKind(11) -TypeKind.UINT128 = TypeKind(12) -TypeKind.CHAR_S = TypeKind(13) -TypeKind.SCHAR = TypeKind(14) -TypeKind.WCHAR = TypeKind(15) -TypeKind.SHORT = TypeKind(16) -TypeKind.INT = TypeKind(17) -TypeKind.LONG = TypeKind(18) -TypeKind.LONGLONG = TypeKind(19) -TypeKind.INT128 = TypeKind(20) -TypeKind.FLOAT = TypeKind(21) -TypeKind.DOUBLE = TypeKind(22) -TypeKind.LONGDOUBLE = TypeKind(23) -TypeKind.NULLPTR = TypeKind(24) -TypeKind.OVERLOAD = TypeKind(25) -TypeKind.DEPENDENT = TypeKind(26) -TypeKind.OBJCID = TypeKind(27) -TypeKind.OBJCCLASS = TypeKind(28) -TypeKind.OBJCSEL = TypeKind(29) -TypeKind.FLOAT128 = TypeKind(30) -TypeKind.HALF = TypeKind(31) -TypeKind.COMPLEX = TypeKind(100) -TypeKind.POINTER = TypeKind(101) -TypeKind.BLOCKPOINTER = TypeKind(102) -TypeKind.LVALUEREFERENCE = TypeKind(103) -TypeKind.RVALUEREFERENCE = TypeKind(104) -TypeKind.RECORD = TypeKind(105) -TypeKind.ENUM = TypeKind(106) -TypeKind.TYPEDEF = TypeKind(107) -TypeKind.OBJCINTERFACE = TypeKind(108) -TypeKind.OBJCOBJECTPOINTER = TypeKind(109) -TypeKind.FUNCTIONNOPROTO = TypeKind(110) -TypeKind.FUNCTIONPROTO = TypeKind(111) -TypeKind.CONSTANTARRAY = TypeKind(112) -TypeKind.VECTOR = TypeKind(113) -TypeKind.INCOMPLETEARRAY = TypeKind(114) -TypeKind.VARIABLEARRAY = TypeKind(115) -TypeKind.DEPENDENTSIZEDARRAY = TypeKind(116) -TypeKind.MEMBERPOINTER = TypeKind(117) -TypeKind.AUTO = TypeKind(118) -TypeKind.ELABORATED = TypeKind(119) - -class RefQualifierKind(BaseEnumeration): - """Describes a specific ref-qualifier of a type.""" - - # The unique kind objects, indexed by id. - _kinds = [] - _name_map = None - - def from_param(self): - return self.value - - def __repr__(self): - return 'RefQualifierKind.%s' % (self.name,) - -RefQualifierKind.NONE = RefQualifierKind(0) -RefQualifierKind.LVALUE = RefQualifierKind(1) -RefQualifierKind.RVALUE = RefQualifierKind(2) - -class Type(Structure): - """ - The type of an element in the abstract syntax tree. - """ - _fields_ = [("_kind_id", c_int), ("data", c_void_p * 2)] - - @property - def kind(self): - """Return the kind of this type.""" - return TypeKind.from_id(self._kind_id) - - def argument_types(self): - """Retrieve a container for the non-variadic arguments for this type. - - The returned object is iterable and indexable. Each item in the - container is a Type instance. - """ - class ArgumentsIterator(collections.Sequence): - def __init__(self, parent): - self.parent = parent - self.length = None - - def __len__(self): - if self.length is None: - self.length = conf.lib.clang_getNumArgTypes(self.parent) - - return self.length - - def __getitem__(self, key): - # FIXME Support slice objects. 
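                # Until slices are supported, only plain non-negative integer
                # indices are accepted; out-of-range or invalid lookups are
                # turned into TypeError/IndexError before and after the
                # clang_getArgType call below.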
- if not isinstance(key, int): - raise TypeError("Must supply a non-negative int.") - - if key < 0: - raise IndexError("Only non-negative indexes are accepted.") - - if key >= len(self): - raise IndexError("Index greater than container length: " - "%d > %d" % ( key, len(self) )) - - result = conf.lib.clang_getArgType(self.parent, key) - if result.kind == TypeKind.INVALID: - raise IndexError("Argument could not be retrieved.") - - return result - - assert self.kind == TypeKind.FUNCTIONPROTO - return ArgumentsIterator(self) - - @property - def element_type(self): - """Retrieve the Type of elements within this Type. - - If accessed on a type that is not an array, complex, or vector type, an - exception will be raised. - """ - result = conf.lib.clang_getElementType(self) - if result.kind == TypeKind.INVALID: - raise Exception('Element type not available on this type.') - - return result - - @property - def element_count(self): - """Retrieve the number of elements in this type. - - Returns an int. - - If the Type is not an array or vector, this raises. - """ - result = conf.lib.clang_getNumElements(self) - if result < 0: - raise Exception('Type does not have elements.') - - return result - - @property - def translation_unit(self): - """The TranslationUnit to which this Type is associated.""" - # If this triggers an AttributeError, the instance was not properly - # instantiated. - return self._tu - - @staticmethod - def from_result(res, fn, args): - assert isinstance(res, Type) - - tu = None - for arg in args: - if hasattr(arg, 'translation_unit'): - tu = arg.translation_unit - break - - assert tu is not None - res._tu = tu - - return res - - def get_canonical(self): - """ - Return the canonical type for a Type. - - Clang's type system explicitly models typedefs and all the - ways a specific type can be represented. The canonical type - is the underlying type with all the "sugar" removed. For - example, if 'T' is a typedef for 'int', the canonical type for - 'T' would be 'int'. - """ - return conf.lib.clang_getCanonicalType(self) - - def is_const_qualified(self): - """Determine whether a Type has the "const" qualifier set. - - This does not look through typedefs that may have added "const" - at a different level. - """ - return conf.lib.clang_isConstQualifiedType(self) - - def is_volatile_qualified(self): - """Determine whether a Type has the "volatile" qualifier set. - - This does not look through typedefs that may have added "volatile" - at a different level. - """ - return conf.lib.clang_isVolatileQualifiedType(self) - - def is_restrict_qualified(self): - """Determine whether a Type has the "restrict" qualifier set. - - This does not look through typedefs that may have added "restrict" at - a different level. - """ - return conf.lib.clang_isRestrictQualifiedType(self) - - def is_function_variadic(self): - """Determine whether this function Type is a variadic function type.""" - assert self.kind == TypeKind.FUNCTIONPROTO - - return conf.lib.clang_isFunctionTypeVariadic(self) - - def is_pod(self): - """Determine whether this Type represents plain old data (POD).""" - return conf.lib.clang_isPODType(self) - - def get_pointee(self): - """ - For pointer types, returns the type of the pointee. - """ - return conf.lib.clang_getPointeeType(self) - - def get_declaration(self): - """ - Return the cursor for the declaration of the given type. - """ - return conf.lib.clang_getTypeDeclaration(self) - - def get_result(self): - """ - Retrieve the result type associated with a function type. 
- """ - return conf.lib.clang_getResultType(self) - - def get_array_element_type(self): - """ - Retrieve the type of the elements of the array type. - """ - return conf.lib.clang_getArrayElementType(self) - - def get_array_size(self): - """ - Retrieve the size of the constant array. - """ - return conf.lib.clang_getArraySize(self) - - def get_class_type(self): - """ - Retrieve the class type of the member pointer type. - """ - return conf.lib.clang_Type_getClassType(self) - - def get_named_type(self): - """ - Retrieve the type named by the qualified-id. - """ - return conf.lib.clang_Type_getNamedType(self) - def get_align(self): - """ - Retrieve the alignment of the record. - """ - return conf.lib.clang_Type_getAlignOf(self) - - def get_size(self): - """ - Retrieve the size of the record. - """ - return conf.lib.clang_Type_getSizeOf(self) - - def get_offset(self, fieldname): - """ - Retrieve the offset of a field in the record. - """ - return conf.lib.clang_Type_getOffsetOf(self, c_char_p(fieldname)) - - def get_ref_qualifier(self): - """ - Retrieve the ref-qualifier of the type. - """ - return RefQualifierKind.from_id( - conf.lib.clang_Type_getCXXRefQualifier(self)) - - def get_fields(self): - """Return an iterator for accessing the fields of this type.""" - - def visitor(field, children): - assert field != conf.lib.clang_getNullCursor() - - # Create reference to TU so it isn't GC'd before Cursor. - field._tu = self._tu - fields.append(field) - return 1 # continue - fields = [] - conf.lib.clang_Type_visitFields(self, - callbacks['fields_visit'](visitor), fields) - return iter(fields) - - @property - def spelling(self): - """Retrieve the spelling of this Type.""" - return conf.lib.clang_getTypeSpelling(self) - - def __eq__(self, other): - if type(other) != type(self): - return False - - return conf.lib.clang_equalTypes(self, other) - - def __ne__(self, other): - return not self.__eq__(other) - -## CIndex Objects ## - -# CIndex objects (derived from ClangObject) are essentially lightweight -# wrappers attached to some underlying object, which is exposed via CIndex as -# a void*. - -class ClangObject(object): - """ - A helper for Clang objects. This class helps act as an intermediary for - the ctypes library and the Clang CIndex library. - """ - def __init__(self, obj): - assert isinstance(obj, c_object_p) and obj - self.obj = self._as_parameter_ = obj - - def from_param(self): - return self._as_parameter_ - - -class _CXUnsavedFile(Structure): - """Helper for passing unsaved file arguments.""" - _fields_ = [("name", c_char_p), ("contents", c_char_p), ('length', c_ulong)] - -# Functions calls through the python interface are rather slow. Fortunately, -# for most symboles, we do not need to perform a function call. Their spelling -# never changes and is consequently provided by this spelling cache. 
-SpellingCache = { - # 0: CompletionChunk.Kind("Optional"), - # 1: CompletionChunk.Kind("TypedText"), - # 2: CompletionChunk.Kind("Text"), - # 3: CompletionChunk.Kind("Placeholder"), - # 4: CompletionChunk.Kind("Informative"), - # 5 : CompletionChunk.Kind("CurrentParameter"), - 6: '(', # CompletionChunk.Kind("LeftParen"), - 7: ')', # CompletionChunk.Kind("RightParen"), - 8: '[', # CompletionChunk.Kind("LeftBracket"), - 9: ']', # CompletionChunk.Kind("RightBracket"), - 10: '{', # CompletionChunk.Kind("LeftBrace"), - 11: '}', # CompletionChunk.Kind("RightBrace"), - 12: '<', # CompletionChunk.Kind("LeftAngle"), - 13: '>', # CompletionChunk.Kind("RightAngle"), - 14: ', ', # CompletionChunk.Kind("Comma"), - # 15: CompletionChunk.Kind("ResultType"), - 16: ':', # CompletionChunk.Kind("Colon"), - 17: ';', # CompletionChunk.Kind("SemiColon"), - 18: '=', # CompletionChunk.Kind("Equal"), - 19: ' ', # CompletionChunk.Kind("HorizontalSpace"), - # 20: CompletionChunk.Kind("VerticalSpace") -} - -class CompletionChunk: - class Kind: - def __init__(self, name): - self.name = name - - def __str__(self): - return self.name - - def __repr__(self): - return "" % self - - def __init__(self, completionString, key): - self.cs = completionString - self.key = key - self.__kindNumberCache = -1 - - def __repr__(self): - return "{'" + self.spelling + "', " + str(self.kind) + "}" - - @CachedProperty - def spelling(self): - if self.__kindNumber in SpellingCache: - return SpellingCache[self.__kindNumber] - return conf.lib.clang_getCompletionChunkText(self.cs, self.key).spelling - - # We do not use @CachedProperty here, as the manual implementation is - # apparently still significantly faster. Please profile carefully if you - # would like to add CachedProperty back. - @property - def __kindNumber(self): - if self.__kindNumberCache == -1: - self.__kindNumberCache = \ - conf.lib.clang_getCompletionChunkKind(self.cs, self.key) - return self.__kindNumberCache - - @CachedProperty - def kind(self): - return completionChunkKindMap[self.__kindNumber] - - @CachedProperty - def string(self): - res = conf.lib.clang_getCompletionChunkCompletionString(self.cs, - self.key) - - if (res): - return CompletionString(res) - else: - None - - def isKindOptional(self): - return self.__kindNumber == 0 - - def isKindTypedText(self): - return self.__kindNumber == 1 - - def isKindPlaceHolder(self): - return self.__kindNumber == 3 - - def isKindInformative(self): - return self.__kindNumber == 4 - - def isKindResultType(self): - return self.__kindNumber == 15 - -completionChunkKindMap = { - 0: CompletionChunk.Kind("Optional"), - 1: CompletionChunk.Kind("TypedText"), - 2: CompletionChunk.Kind("Text"), - 3: CompletionChunk.Kind("Placeholder"), - 4: CompletionChunk.Kind("Informative"), - 5: CompletionChunk.Kind("CurrentParameter"), - 6: CompletionChunk.Kind("LeftParen"), - 7: CompletionChunk.Kind("RightParen"), - 8: CompletionChunk.Kind("LeftBracket"), - 9: CompletionChunk.Kind("RightBracket"), - 10: CompletionChunk.Kind("LeftBrace"), - 11: CompletionChunk.Kind("RightBrace"), - 12: CompletionChunk.Kind("LeftAngle"), - 13: CompletionChunk.Kind("RightAngle"), - 14: CompletionChunk.Kind("Comma"), - 15: CompletionChunk.Kind("ResultType"), - 16: CompletionChunk.Kind("Colon"), - 17: CompletionChunk.Kind("SemiColon"), - 18: CompletionChunk.Kind("Equal"), - 19: CompletionChunk.Kind("HorizontalSpace"), - 20: CompletionChunk.Kind("VerticalSpace")} - -class CompletionString(ClangObject): - class Availability: - def __init__(self, name): - self.name = name - - 
def __str__(self): - return self.name - - def __repr__(self): - return "" % self - - def __len__(self): - return self.num_chunks - - @CachedProperty - def num_chunks(self): - return conf.lib.clang_getNumCompletionChunks(self.obj) - - def __getitem__(self, key): - if self.num_chunks <= key: - raise IndexError - return CompletionChunk(self.obj, key) - - @property - def priority(self): - return conf.lib.clang_getCompletionPriority(self.obj) - - @property - def availability(self): - res = conf.lib.clang_getCompletionAvailability(self.obj) - return availabilityKinds[res] - - @property - def briefComment(self): - if conf.function_exists("clang_getCompletionBriefComment"): - return conf.lib.clang_getCompletionBriefComment(self.obj) - return _CXString() - - def __repr__(self): - return " | ".join([str(a) for a in self]) \ - + " || Priority: " + str(self.priority) \ - + " || Availability: " + str(self.availability) \ - + " || Brief comment: " + str(self.briefComment.spelling) - -availabilityKinds = { - 0: CompletionChunk.Kind("Available"), - 1: CompletionChunk.Kind("Deprecated"), - 2: CompletionChunk.Kind("NotAvailable"), - 3: CompletionChunk.Kind("NotAccessible")} - -class CodeCompletionResult(Structure): - _fields_ = [('cursorKind', c_int), ('completionString', c_object_p)] - - def __repr__(self): - return str(CompletionString(self.completionString)) - - @property - def kind(self): - return CursorKind.from_id(self.cursorKind) - - @property - def string(self): - return CompletionString(self.completionString) - -class CCRStructure(Structure): - _fields_ = [('results', POINTER(CodeCompletionResult)), - ('numResults', c_int)] - - def __len__(self): - return self.numResults - - def __getitem__(self, key): - if len(self) <= key: - raise IndexError - - return self.results[key] - -class CodeCompletionResults(ClangObject): - def __init__(self, ptr): - assert isinstance(ptr, POINTER(CCRStructure)) and ptr - self.ptr = self._as_parameter_ = ptr - - def from_param(self): - return self._as_parameter_ - - def __del__(self): - conf.lib.clang_disposeCodeCompleteResults(self) - - @property - def results(self): - return self.ptr.contents - - @property - def diagnostics(self): - class DiagnosticsItr: - def __init__(self, ccr): - self.ccr= ccr - - def __len__(self): - return int(\ - conf.lib.clang_codeCompleteGetNumDiagnostics(self.ccr)) - - def __getitem__(self, key): - return conf.lib.clang_codeCompleteGetDiagnostic(self.ccr, key) - - return DiagnosticsItr(self) - - -class Index(ClangObject): - """ - The Index type provides the primary interface to the Clang CIndex library, - primarily by providing an interface for reading and parsing translation - units. - """ - - @staticmethod - def create(excludeDecls=False): - """ - Create a new Index. - Parameters: - excludeDecls -- Exclude local declarations from translation units. - """ - return Index(conf.lib.clang_createIndex(excludeDecls, 0)) - - def __del__(self): - conf.lib.clang_disposeIndex(self) - - def read(self, path): - """Load a TranslationUnit from the given AST file.""" - return TranslationUnit.from_ast_file(path, self) - - def parse(self, path, args=None, unsaved_files=None, options = 0): - """Load the translation unit from the given source code file by running - clang and generating the AST before loading. Additional command line - parameters can be passed to clang via the args parameter. 
- - In-memory contents for files can be provided by passing a list of pairs - to as unsaved_files, the first item should be the filenames to be mapped - and the second should be the contents to be substituted for the - file. The contents may be passed as strings or file objects. - - If an error was encountered during parsing, a TranslationUnitLoadError - will be raised. - """ - return TranslationUnit.from_source(path, args, unsaved_files, options, - self) - -class TranslationUnit(ClangObject): - """Represents a source code translation unit. - - This is one of the main types in the API. Any time you wish to interact - with Clang's representation of a source file, you typically start with a - translation unit. - """ - - # Default parsing mode. - PARSE_NONE = 0 - - # Instruct the parser to create a detailed processing record containing - # metadata not normally retained. - PARSE_DETAILED_PROCESSING_RECORD = 1 - - # Indicates that the translation unit is incomplete. This is typically used - # when parsing headers. - PARSE_INCOMPLETE = 2 - - # Instruct the parser to create a pre-compiled preamble for the translation - # unit. This caches the preamble (included files at top of source file). - # This is useful if the translation unit will be reparsed and you don't - # want to incur the overhead of reparsing the preamble. - PARSE_PRECOMPILED_PREAMBLE = 4 - - # Cache code completion information on parse. This adds time to parsing but - # speeds up code completion. - PARSE_CACHE_COMPLETION_RESULTS = 8 - - # Flags with values 16 and 32 are deprecated and intentionally omitted. - - # Do not parse function bodies. This is useful if you only care about - # searching for declarations/definitions. - PARSE_SKIP_FUNCTION_BODIES = 64 - - # Used to indicate that brief documentation comments should be included - # into the set of code completions returned from this translation unit. - PARSE_INCLUDE_BRIEF_COMMENTS_IN_CODE_COMPLETION = 128 - - @classmethod - def from_source(cls, filename, args=None, unsaved_files=None, options=0, - index=None): - """Create a TranslationUnit by parsing source. - - This is capable of processing source code both from files on the - filesystem as well as in-memory contents. - - Command-line arguments that would be passed to clang are specified as - a list via args. These can be used to specify include paths, warnings, - etc. e.g. ["-Wall", "-I/path/to/include"]. - - In-memory file content can be provided via unsaved_files. This is an - iterable of 2-tuples. The first element is the str filename. The - second element defines the content. Content can be provided as str - source code or as file objects (anything with a read() method). If - a file object is being used, content will be read until EOF and the - read cursor will not be reset to its original position. - - options is a bitwise or of TranslationUnit.PARSE_XXX flags which will - control parsing behavior. - - index is an Index instance to utilize. If not provided, a new Index - will be created for this TranslationUnit. - - To parse source from the filesystem, the filename of the file to parse - is specified by the filename argument. Or, filename could be None and - the args list would contain the filename(s) to parse. - - To parse source from an in-memory buffer, set filename to the virtual - filename you wish to associate with this source (e.g. "test.c"). The - contents of that file are then provided in unsaved_files. - - If an error occurs, a TranslationUnitLoadError is raised. 
- - Please note that a TranslationUnit with parser errors may be returned. - It is the caller's responsibility to check tu.diagnostics for errors. - - Also note that Clang infers the source language from the extension of - the input filename. If you pass in source code containing a C++ class - declaration with the filename "test.c" parsing will fail. - """ - if args is None: - args = [] - - if unsaved_files is None: - unsaved_files = [] - - if index is None: - index = Index.create() - - if isinstance(filename, str): - filename = filename.encode('utf8') - - args_length = len(args) - if args_length > 0: - args = (arg.encode('utf8') if isinstance(arg, str) else arg - for arg in args) - args_array = (c_char_p * args_length)(* args) - - unsaved_array = None - if len(unsaved_files) > 0: - unsaved_array = (_CXUnsavedFile * len(unsaved_files))() - for i, (name, contents) in enumerate(unsaved_files): - if hasattr(contents, "read"): - contents = contents.read() - - unsaved_array[i].name = name - unsaved_array[i].contents = contents - unsaved_array[i].length = len(contents) - - ptr = conf.lib.clang_parseTranslationUnit(index, filename, args_array, - args_length, unsaved_array, - len(unsaved_files), options) - - if not ptr: - raise TranslationUnitLoadError("Error parsing translation unit.") - - return cls(ptr, index=index) - - @classmethod - def from_ast_file(cls, filename, index=None): - """Create a TranslationUnit instance from a saved AST file. - - A previously-saved AST file (provided with -emit-ast or - TranslationUnit.save()) is loaded from the filename specified. - - If the file cannot be loaded, a TranslationUnitLoadError will be - raised. - - index is optional and is the Index instance to use. If not provided, - a default Index will be created. - """ - if index is None: - index = Index.create() - - ptr = conf.lib.clang_createTranslationUnit(index, filename) - if not ptr: - raise TranslationUnitLoadError(filename) - - return cls(ptr=ptr, index=index) - - def __init__(self, ptr, index): - """Create a TranslationUnit instance. - - TranslationUnits should be created using one of the from_* @classmethod - functions above. __init__ is only called internally. - """ - assert isinstance(index, Index) - self.index = index - ClangObject.__init__(self, ptr) - - def __del__(self): - conf.lib.clang_disposeTranslationUnit(self) - - @property - def cursor(self): - """Retrieve the cursor that represents the given translation unit.""" - return conf.lib.clang_getTranslationUnitCursor(self) - - @property - def spelling(self): - """Get the original translation unit source file name.""" - return conf.lib.clang_getTranslationUnitSpelling(self) - - def get_includes(self): - """ - Return an iterable sequence of FileInclusion objects that describe the - sequence of inclusions in a translation unit. The first object in - this sequence is always the input file. Note that this method will not - recursively iterate over header files included through precompiled - headers. 
- """ - def visitor(fobj, lptr, depth, includes): - if depth > 0: - loc = lptr.contents - includes.append(FileInclusion(loc.file, File(fobj), loc, depth)) - - # Automatically adapt CIndex/ctype pointers to python objects - includes = [] - conf.lib.clang_getInclusions(self, - callbacks['translation_unit_includes'](visitor), includes) - - return iter(includes) - - def get_file(self, filename): - """Obtain a File from this translation unit.""" - - return File.from_name(self, filename) - - def get_location(self, filename, position): - """Obtain a SourceLocation for a file in this translation unit. - - The position can be specified by passing: - - - Integer file offset. Initial file offset is 0. - - 2-tuple of (line number, column number). Initial file position is - (0, 0) - """ - f = self.get_file(filename) - - if isinstance(position, int): - return SourceLocation.from_offset(self, f, position) - - return SourceLocation.from_position(self, f, position[0], position[1]) - - def get_extent(self, filename, locations): - """Obtain a SourceRange from this translation unit. - - The bounds of the SourceRange must ultimately be defined by a start and - end SourceLocation. For the locations argument, you can pass: - - - 2 SourceLocation instances in a 2-tuple or list. - - 2 int file offsets via a 2-tuple or list. - - 2 2-tuple or lists of (line, column) pairs in a 2-tuple or list. - - e.g. - - get_extent('foo.c', (5, 10)) - get_extent('foo.c', ((1, 1), (1, 15))) - """ - f = self.get_file(filename) - - if len(locations) < 2: - raise Exception('Must pass object with at least 2 elements') - - start_location, end_location = locations - - if hasattr(start_location, '__len__'): - start_location = SourceLocation.from_position(self, f, - start_location[0], start_location[1]) - elif isinstance(start_location, int): - start_location = SourceLocation.from_offset(self, f, - start_location) - - if hasattr(end_location, '__len__'): - end_location = SourceLocation.from_position(self, f, - end_location[0], end_location[1]) - elif isinstance(end_location, int): - end_location = SourceLocation.from_offset(self, f, end_location) - - assert isinstance(start_location, SourceLocation) - assert isinstance(end_location, SourceLocation) - - return SourceRange.from_locations(start_location, end_location) - - @property - def diagnostics(self): - """ - Return an iterable (and indexable) object containing the diagnostics. - """ - class DiagIterator: - def __init__(self, tu): - self.tu = tu - - def __len__(self): - return int(conf.lib.clang_getNumDiagnostics(self.tu)) - - def __getitem__(self, key): - diag = conf.lib.clang_getDiagnostic(self.tu, key) - if not diag: - raise IndexError - return Diagnostic(diag) - - return DiagIterator(self) - - def reparse(self, unsaved_files=None, options=0): - """ - Reparse an already parsed translation unit. - - In-memory contents for files can be provided by passing a list of pairs - as unsaved_files, the first items should be the filenames to be mapped - and the second should be the contents to be substituted for the - file. The contents may be passed as strings or file objects. - """ - if unsaved_files is None: - unsaved_files = [] - - unsaved_files_array = 0 - if len(unsaved_files): - unsaved_files_array = (_CXUnsavedFile * len(unsaved_files))() - for i,(name,value) in enumerate(unsaved_files): - if not isinstance(value, str): - # FIXME: It would be great to support an efficient version - # of this, one day. 
- value = value.read() - print(value) - if not isinstance(value, str): - raise TypeError('Unexpected unsaved file contents.') - unsaved_files_array[i].name = name - unsaved_files_array[i].contents = value - unsaved_files_array[i].length = len(value) - ptr = conf.lib.clang_reparseTranslationUnit(self, len(unsaved_files), - unsaved_files_array, options) - - def save(self, filename): - """Saves the TranslationUnit to a file. - - This is equivalent to passing -emit-ast to the clang frontend. The - saved file can be loaded back into a TranslationUnit. Or, if it - corresponds to a header, it can be used as a pre-compiled header file. - - If an error occurs while saving, a TranslationUnitSaveError is raised. - If the error was TranslationUnitSaveError.ERROR_INVALID_TU, this means - the constructed TranslationUnit was not valid at time of save. In this - case, the reason(s) why should be available via - TranslationUnit.diagnostics(). - - filename -- The path to save the translation unit to. - """ - options = conf.lib.clang_defaultSaveOptions(self) - result = int(conf.lib.clang_saveTranslationUnit(self, filename, - options)) - if result != 0: - raise TranslationUnitSaveError(result, - 'Error saving TranslationUnit.') - - def codeComplete(self, path, line, column, unsaved_files=None, - include_macros=False, include_code_patterns=False, - include_brief_comments=False): - """ - Code complete in this translation unit. - - In-memory contents for files can be provided by passing a list of pairs - as unsaved_files, the first items should be the filenames to be mapped - and the second should be the contents to be substituted for the - file. The contents may be passed as strings or file objects. - """ - options = 0 - - if include_macros: - options += 1 - - if include_code_patterns: - options += 2 - - if include_brief_comments: - options += 4 - - if unsaved_files is None: - unsaved_files = [] - - unsaved_files_array = 0 - if len(unsaved_files): - unsaved_files_array = (_CXUnsavedFile * len(unsaved_files))() - for i,(name,value) in enumerate(unsaved_files): - if not isinstance(value, str): - # FIXME: It would be great to support an efficient version - # of this, one day. - value = value.read() - print(value) - if not isinstance(value, str): - raise TypeError('Unexpected unsaved file contents.') - unsaved_files_array[i].name = name - unsaved_files_array[i].contents = value - unsaved_files_array[i].length = len(value) - ptr = conf.lib.clang_codeCompleteAt(self, path, line, column, - unsaved_files_array, len(unsaved_files), options) - if ptr: - return CodeCompletionResults(ptr) - return None - - def get_tokens(self, locations=None, extent=None): - """Obtain tokens in this translation unit. - - This is a generator for Token instances. The caller specifies a range - of source code to obtain tokens for. The range can be specified as a - 2-tuple of SourceLocation or as a SourceRange. If both are defined, - behavior is undefined. - """ - if locations is not None: - extent = SourceRange(start=locations[0], end=locations[1]) - - return TokenGroup.get_tokens(self, extent) - -class File(ClangObject): - """ - The File class represents a particular source file that is part of a - translation unit. 
- """ - - @staticmethod - def from_name(translation_unit, file_name): - """Retrieve a file handle within the given translation unit.""" - return File(conf.lib.clang_getFile(translation_unit, file_name)) - - @property - def name(self): - """Return the complete file and path name of the file.""" - return conf.lib.clang_getCString(conf.lib.clang_getFileName(self)) - - @property - def time(self): - """Return the last modification time of the file.""" - return conf.lib.clang_getFileTime(self) - - def __bytes__(self): - return self.name - - def __repr__(self): - return "" % (self.name) - - @staticmethod - def from_cursor_result(res, fn, args): - assert isinstance(res, File) - - # Copy a reference to the TranslationUnit to prevent premature GC. - res._tu = args[0]._tu - return res - -class FileInclusion(object): - """ - The FileInclusion class represents the inclusion of one source file by - another via a '#include' directive or as the input file for the translation - unit. This class provides information about the included file, the including - file, the location of the '#include' directive and the depth of the included - file in the stack. Note that the input file has depth 0. - """ - - def __init__(self, src, tgt, loc, depth): - self.source = src - self.include = tgt - self.location = loc - self.depth = depth - - @property - def is_input_file(self): - """True if the included file is the input file.""" - return self.depth == 0 - -class CompilationDatabaseError(Exception): - """Represents an error that occurred when working with a CompilationDatabase - - Each error is associated to an enumerated value, accessible under - e.cdb_error. Consumers can compare the value with one of the ERROR_ - constants in this class. - """ - - # An unknown error occurred - ERROR_UNKNOWN = 0 - - # The database could not be loaded - ERROR_CANNOTLOADDATABASE = 1 - - def __init__(self, enumeration, message): - assert isinstance(enumeration, int) - - if enumeration > 1: - raise Exception("Encountered undefined CompilationDatabase error " - "constant: %d. Please file a bug to have this " - "value supported." % enumeration) - - self.cdb_error = enumeration - Exception.__init__(self, 'Error %d: %s' % (enumeration, message)) - -class CompileCommand(object): - """Represents the compile command used to build a file""" - def __init__(self, cmd, ccmds): - self.cmd = cmd - # Keep a reference to the originating CompileCommands - # to prevent garbage collection - self.ccmds = ccmds - - @property - def directory(self): - """Get the working directory for this CompileCommand""" - return conf.lib.clang_CompileCommand_getDirectory(self.cmd) - - @property - def filename(self): - """Get the working filename for this CompileCommand""" - return conf.lib.clang_CompileCommand_getFilename(self.cmd) - - @property - def arguments(self): - """ - Get an iterable object providing each argument in the - command line for the compiler invocation as a _CXString. - - Invariant : the first argument is the compiler executable - """ - length = conf.lib.clang_CompileCommand_getNumArgs(self.cmd) - for i in range(length): - yield conf.lib.clang_CompileCommand_getArg(self.cmd, i) - -class CompileCommands(object): - """ - CompileCommands is an iterable object containing all CompileCommand - that can be used for building a specific file. 
- """ - def __init__(self, ccmds): - self.ccmds = ccmds - - def __del__(self): - conf.lib.clang_CompileCommands_dispose(self.ccmds) - - def __len__(self): - return int(conf.lib.clang_CompileCommands_getSize(self.ccmds)) - - def __getitem__(self, i): - cc = conf.lib.clang_CompileCommands_getCommand(self.ccmds, i) - if not cc: - raise IndexError - return CompileCommand(cc, self) - - @staticmethod - def from_result(res, fn, args): - if not res: - return None - return CompileCommands(res) - -class CompilationDatabase(ClangObject): - """ - The CompilationDatabase is a wrapper class around - clang::tooling::CompilationDatabase - - It enables querying how a specific source file can be built. - """ - - def __del__(self): - conf.lib.clang_CompilationDatabase_dispose(self) - - @staticmethod - def from_result(res, fn, args): - if not res: - raise CompilationDatabaseError(0, - "CompilationDatabase loading failed") - return CompilationDatabase(res) - - @staticmethod - def fromDirectory(buildDir): - """Builds a CompilationDatabase from the database found in buildDir""" - errorCode = c_uint() - try: - cdb = conf.lib.clang_CompilationDatabase_fromDirectory(buildDir, - byref(errorCode)) - except CompilationDatabaseError as e: - raise CompilationDatabaseError(int(errorCode.value), - "CompilationDatabase loading failed") - return cdb - - def getCompileCommands(self, filename): - """ - Get an iterable object providing all the CompileCommands available to - build filename. Returns None if filename is not found in the database. - """ - return conf.lib.clang_CompilationDatabase_getCompileCommands(self, - filename) - - def getAllCompileCommands(self): - """ - Get an iterable object providing all the CompileCommands available from - the database. - """ - return conf.lib.clang_CompilationDatabase_getAllCompileCommands(self) - - -class Token(Structure): - """Represents a single token from the preprocessor. - - Tokens are effectively segments of source code. Source code is first parsed - into tokens before being converted into the AST and Cursors. - - Tokens are obtained from parsed TranslationUnit instances. You currently - can't create tokens manually. - """ - _fields_ = [ - ('int_data', c_uint * 4), - ('ptr_data', c_void_p) - ] - - @property - def spelling(self): - """The spelling of this token. - - This is the textual representation of the token in source. - """ - return conf.lib.clang_getTokenSpelling(self._tu, self) - - @property - def kind(self): - """Obtain the TokenKind of the current token.""" - return TokenKind.from_value(conf.lib.clang_getTokenKind(self)) - - @property - def location(self): - """The SourceLocation this Token occurs at.""" - return conf.lib.clang_getTokenLocation(self._tu, self) - - @property - def extent(self): - """The SourceRange this Token occupies.""" - return conf.lib.clang_getTokenExtent(self._tu, self) - - @property - def cursor(self): - """The Cursor this Token corresponds to.""" - cursor = Cursor() - - conf.lib.clang_annotateTokens(self._tu, byref(self), 1, byref(cursor)) - - return cursor - -# Now comes the plumbing to hook up the C library. - -# Register callback types in common container. -callbacks['translation_unit_includes'] = CFUNCTYPE(None, c_object_p, - POINTER(SourceLocation), c_uint, py_object) -callbacks['cursor_visit'] = CFUNCTYPE(c_int, Cursor, Cursor, py_object) -callbacks['fields_visit'] = CFUNCTYPE(c_int, Cursor, py_object) - -# Functions strictly alphabetical order. 
-functionList = [ - ("clang_annotateTokens", - [TranslationUnit, POINTER(Token), c_uint, POINTER(Cursor)]), - - ("clang_CompilationDatabase_dispose", - [c_object_p]), - - ("clang_CompilationDatabase_fromDirectory", - [c_char_p, POINTER(c_uint)], - c_object_p, - CompilationDatabase.from_result), - - ("clang_CompilationDatabase_getAllCompileCommands", - [c_object_p], - c_object_p, - CompileCommands.from_result), - - ("clang_CompilationDatabase_getCompileCommands", - [c_object_p, c_char_p], - c_object_p, - CompileCommands.from_result), - - ("clang_CompileCommands_dispose", - [c_object_p]), - - ("clang_CompileCommands_getCommand", - [c_object_p, c_uint], - c_object_p), - - ("clang_CompileCommands_getSize", - [c_object_p], - c_uint), - - ("clang_CompileCommand_getArg", - [c_object_p, c_uint], - _CXString, - _CXString.from_result), - - ("clang_CompileCommand_getDirectory", - [c_object_p], - _CXString, - _CXString.from_result), - - ("clang_CompileCommand_getFilename", - [c_object_p], - _CXString, - _CXString.from_result), - - ("clang_CompileCommand_getNumArgs", - [c_object_p], - c_uint), - - ("clang_codeCompleteAt", - [TranslationUnit, c_char_p, c_int, c_int, c_void_p, c_int, c_int], - POINTER(CCRStructure)), - - ("clang_codeCompleteGetDiagnostic", - [CodeCompletionResults, c_int], - Diagnostic), - - ("clang_codeCompleteGetNumDiagnostics", - [CodeCompletionResults], - c_int), - - ("clang_createIndex", - [c_int, c_int], - c_object_p), - - ("clang_createTranslationUnit", - [Index, c_char_p], - c_object_p), - - ("clang_CXXConstructor_isConvertingConstructor", - [Cursor], - bool), - - ("clang_CXXConstructor_isCopyConstructor", - [Cursor], - bool), - - ("clang_CXXConstructor_isDefaultConstructor", - [Cursor], - bool), - - ("clang_CXXConstructor_isMoveConstructor", - [Cursor], - bool), - - ("clang_CXXField_isMutable", - [Cursor], - bool), - - ("clang_CXXMethod_isConst", - [Cursor], - bool), - - ("clang_CXXMethod_isDefaulted", - [Cursor], - bool), - - ("clang_CXXMethod_isPureVirtual", - [Cursor], - bool), - - ("clang_CXXMethod_isStatic", - [Cursor], - bool), - - ("clang_CXXMethod_isVirtual", - [Cursor], - bool), - - ("clang_defaultDiagnosticDisplayOptions", - [], - c_uint), - - ("clang_defaultSaveOptions", - [TranslationUnit], - c_uint), - - ("clang_disposeCodeCompleteResults", - [CodeCompletionResults]), - -# ("clang_disposeCXTUResourceUsage", -# [CXTUResourceUsage]), - - ("clang_disposeDiagnostic", - [Diagnostic]), - - ("clang_disposeIndex", - [Index]), - - ("clang_disposeString", - [_CXString]), - - ("clang_disposeTokens", - [TranslationUnit, POINTER(Token), c_uint]), - - ("clang_disposeTranslationUnit", - [TranslationUnit]), - - ("clang_equalCursors", - [Cursor, Cursor], - bool), - - ("clang_equalLocations", - [SourceLocation, SourceLocation], - bool), - - ("clang_equalRanges", - [SourceRange, SourceRange], - bool), - - ("clang_equalTypes", - [Type, Type], - bool), - - ("clang_formatDiagnostic", - [Diagnostic, c_uint], - _CXString), - - ("clang_getArgType", - [Type, c_uint], - Type, - Type.from_result), - - ("clang_getArrayElementType", - [Type], - Type, - Type.from_result), - - ("clang_getArraySize", - [Type], - c_longlong), - - ("clang_getFieldDeclBitWidth", - [Cursor], - c_int), - - ("clang_getCanonicalCursor", - [Cursor], - Cursor, - Cursor.from_cursor_result), - - ("clang_getCanonicalType", - [Type], - Type, - Type.from_result), - - ("clang_getChildDiagnostics", - [Diagnostic], - c_object_p), - - ("clang_getCompletionAvailability", - [c_void_p], - c_int), - - ("clang_getCompletionBriefComment", 
- [c_void_p], - _CXString), - - ("clang_getCompletionChunkCompletionString", - [c_void_p, c_int], - c_object_p), - - ("clang_getCompletionChunkKind", - [c_void_p, c_int], - c_int), - - ("clang_getCompletionChunkText", - [c_void_p, c_int], - _CXString), - - ("clang_getCompletionPriority", - [c_void_p], - c_int), - - ("clang_getCString", - [_CXString], - c_char_p), - - ("clang_getCursor", - [TranslationUnit, SourceLocation], - Cursor), - - ("clang_getCursorDefinition", - [Cursor], - Cursor, - Cursor.from_result), - - ("clang_getCursorDisplayName", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_getCursorExtent", - [Cursor], - SourceRange), - - ("clang_getCursorLexicalParent", - [Cursor], - Cursor, - Cursor.from_cursor_result), - - ("clang_getCursorLocation", - [Cursor], - SourceLocation), - - ("clang_getCursorReferenced", - [Cursor], - Cursor, - Cursor.from_result), - - ("clang_getCursorReferenceNameRange", - [Cursor, c_uint, c_uint], - SourceRange), - - ("clang_getCursorSemanticParent", - [Cursor], - Cursor, - Cursor.from_cursor_result), - - ("clang_getCursorSpelling", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_getCursorType", - [Cursor], - Type, - Type.from_result), - - ("clang_getCursorUSR", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_Cursor_getMangling", - [Cursor], - _CXString, - _CXString.from_result), - -# ("clang_getCXTUResourceUsage", -# [TranslationUnit], -# CXTUResourceUsage), - - ("clang_getCXXAccessSpecifier", - [Cursor], - c_uint), - - ("clang_getDeclObjCTypeEncoding", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_getDiagnostic", - [c_object_p, c_uint], - c_object_p), - - ("clang_getDiagnosticCategory", - [Diagnostic], - c_uint), - - ("clang_getDiagnosticCategoryText", - [Diagnostic], - _CXString, - _CXString.from_result), - - ("clang_getDiagnosticFixIt", - [Diagnostic, c_uint, POINTER(SourceRange)], - _CXString, - _CXString.from_result), - - ("clang_getDiagnosticInSet", - [c_object_p, c_uint], - c_object_p), - - ("clang_getDiagnosticLocation", - [Diagnostic], - SourceLocation), - - ("clang_getDiagnosticNumFixIts", - [Diagnostic], - c_uint), - - ("clang_getDiagnosticNumRanges", - [Diagnostic], - c_uint), - - ("clang_getDiagnosticOption", - [Diagnostic, POINTER(_CXString)], - _CXString, - _CXString.from_result), - - ("clang_getDiagnosticRange", - [Diagnostic, c_uint], - SourceRange), - - ("clang_getDiagnosticSeverity", - [Diagnostic], - c_int), - - ("clang_getDiagnosticSpelling", - [Diagnostic], - _CXString, - _CXString.from_result), - - ("clang_getElementType", - [Type], - Type, - Type.from_result), - - ("clang_getEnumConstantDeclUnsignedValue", - [Cursor], - c_ulonglong), - - ("clang_getEnumConstantDeclValue", - [Cursor], - c_longlong), - - ("clang_getEnumDeclIntegerType", - [Cursor], - Type, - Type.from_result), - - ("clang_getFile", - [TranslationUnit, c_char_p], - c_object_p), - - ("clang_getFileName", - [File], - _CXString), # TODO go through _CXString.from_result? 
- - ("clang_getFileTime", - [File], - c_uint), - - ("clang_getIBOutletCollectionType", - [Cursor], - Type, - Type.from_result), - - ("clang_getIncludedFile", - [Cursor], - File, - File.from_cursor_result), - - ("clang_getInclusions", - [TranslationUnit, callbacks['translation_unit_includes'], py_object]), - - ("clang_getInstantiationLocation", - [SourceLocation, POINTER(c_object_p), POINTER(c_uint), POINTER(c_uint), - POINTER(c_uint)]), - - ("clang_getLocation", - [TranslationUnit, File, c_uint, c_uint], - SourceLocation), - - ("clang_getLocationForOffset", - [TranslationUnit, File, c_uint], - SourceLocation), - - ("clang_getNullCursor", - None, - Cursor), - - ("clang_getNumArgTypes", - [Type], - c_uint), - - ("clang_getNumCompletionChunks", - [c_void_p], - c_int), - - ("clang_getNumDiagnostics", - [c_object_p], - c_uint), - - ("clang_getNumDiagnosticsInSet", - [c_object_p], - c_uint), - - ("clang_getNumElements", - [Type], - c_longlong), - - ("clang_getNumOverloadedDecls", - [Cursor], - c_uint), - - ("clang_getOverloadedDecl", - [Cursor, c_uint], - Cursor, - Cursor.from_cursor_result), - - ("clang_getPointeeType", - [Type], - Type, - Type.from_result), - - ("clang_getRange", - [SourceLocation, SourceLocation], - SourceRange), - - ("clang_getRangeEnd", - [SourceRange], - SourceLocation), - - ("clang_getRangeStart", - [SourceRange], - SourceLocation), - - ("clang_getResultType", - [Type], - Type, - Type.from_result), - - ("clang_getSpecializedCursorTemplate", - [Cursor], - Cursor, - Cursor.from_cursor_result), - - ("clang_getTemplateCursorKind", - [Cursor], - c_uint), - - ("clang_getTokenExtent", - [TranslationUnit, Token], - SourceRange), - - ("clang_getTokenKind", - [Token], - c_uint), - - ("clang_getTokenLocation", - [TranslationUnit, Token], - SourceLocation), - - ("clang_getTokenSpelling", - [TranslationUnit, Token], - _CXString, - _CXString.from_result), - - ("clang_getTranslationUnitCursor", - [TranslationUnit], - Cursor, - Cursor.from_result), - - ("clang_getTranslationUnitSpelling", - [TranslationUnit], - _CXString, - _CXString.from_result), - - ("clang_getTUResourceUsageName", - [c_uint], - c_char_p), - - ("clang_getTypeDeclaration", - [Type], - Cursor, - Cursor.from_result), - - ("clang_getTypedefDeclUnderlyingType", - [Cursor], - Type, - Type.from_result), - - ("clang_getTypeKindSpelling", - [c_uint], - _CXString, - _CXString.from_result), - - ("clang_getTypeSpelling", - [Type], - _CXString, - _CXString.from_result), - - ("clang_hashCursor", - [Cursor], - c_uint), - - ("clang_isAttribute", - [CursorKind], - bool), - - ("clang_isConstQualifiedType", - [Type], - bool), - - ("clang_isCursorDefinition", - [Cursor], - bool), - - ("clang_isDeclaration", - [CursorKind], - bool), - - ("clang_isExpression", - [CursorKind], - bool), - - ("clang_isFileMultipleIncludeGuarded", - [TranslationUnit, File], - bool), - - ("clang_isFunctionTypeVariadic", - [Type], - bool), - - ("clang_isInvalid", - [CursorKind], - bool), - - ("clang_isPODType", - [Type], - bool), - - ("clang_isPreprocessing", - [CursorKind], - bool), - - ("clang_isReference", - [CursorKind], - bool), - - ("clang_isRestrictQualifiedType", - [Type], - bool), - - ("clang_isStatement", - [CursorKind], - bool), - - ("clang_isTranslationUnit", - [CursorKind], - bool), - - ("clang_isUnexposed", - [CursorKind], - bool), - - ("clang_isVirtualBase", - [Cursor], - bool), - - ("clang_isVolatileQualifiedType", - [Type], - bool), - - ("clang_parseTranslationUnit", - [Index, c_char_p, c_void_p, c_int, c_void_p, c_int, c_int], - c_object_p), - - 
("clang_reparseTranslationUnit", - [TranslationUnit, c_int, c_void_p, c_int], - c_int), - - ("clang_saveTranslationUnit", - [TranslationUnit, c_char_p, c_uint], - c_int), - - ("clang_tokenize", - [TranslationUnit, SourceRange, POINTER(POINTER(Token)), POINTER(c_uint)]), - - ("clang_visitChildren", - [Cursor, callbacks['cursor_visit'], py_object], - c_uint), - - ("clang_Cursor_getNumArguments", - [Cursor], - c_int), - - ("clang_Cursor_getArgument", - [Cursor, c_uint], - Cursor, - Cursor.from_result), - - ("clang_Cursor_getNumTemplateArguments", - [Cursor], - c_int), - - ("clang_Cursor_getTemplateArgumentKind", - [Cursor, c_uint], - TemplateArgumentKind.from_id), - - ("clang_Cursor_getTemplateArgumentType", - [Cursor, c_uint], - Type, - Type.from_result), - - ("clang_Cursor_getTemplateArgumentValue", - [Cursor, c_uint], - c_longlong), - - ("clang_Cursor_getTemplateArgumentUnsignedValue", - [Cursor, c_uint], - c_ulonglong), - - ("clang_Cursor_isAnonymous", - [Cursor], - bool), - - ("clang_Cursor_isBitField", - [Cursor], - bool), - - ("clang_Cursor_getBriefCommentText", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_Cursor_getRawCommentText", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_Cursor_getOffsetOfField", - [Cursor], - c_longlong), - - ("clang_Type_getAlignOf", - [Type], - c_longlong), - - ("clang_Type_getClassType", - [Type], - Type, - Type.from_result), - - ("clang_Type_getOffsetOf", - [Type, c_char_p], - c_longlong), - - ("clang_Type_getSizeOf", - [Type], - c_longlong), - - ("clang_Type_getCXXRefQualifier", - [Type], - c_uint), - - ("clang_Type_getNamedType", - [Type], - Type, - Type.from_result), - - ("clang_Type_visitFields", - [Type, callbacks['fields_visit'], py_object], - c_uint), -] - -class LibclangError(Exception): - def __init__(self, message): - self.m = message - - def __str__(self): - return self.m - -def register_function(lib, item, ignore_errors): - # A function may not exist, if these bindings are used with an older or - # incompatible version of libclang.so. - try: - func = getattr(lib, item[0]) - except AttributeError as e: - msg = str(e) + ". Please ensure that your python bindings are "\ - "compatible with your libclang.so version." - if ignore_errors: - return - raise LibclangError(msg) - - if len(item) >= 2: - func.argtypes = item[1] - - if len(item) >= 3: - func.restype = item[2] - - if len(item) == 4: - func.errcheck = item[3] - -def register_functions(lib, ignore_errors): - """Register function prototypes with a libclang library instance. - - This must be called as part of library instantiation so Python knows how - to call out to the shared library. 
- """ - - def register(item): - return register_function(lib, item, ignore_errors) - - for f in functionList: - register(f) - -class Config: - library_path = None - library_file = None - compatibility_check = False - loaded = False - - @staticmethod - def set_library_path(path): - """Set the path in which to search for libclang""" - if Config.loaded: - raise Exception("library path must be set before before using " \ - "any other functionalities in libclang.") - - Config.library_path = path - - @staticmethod - def set_library_file(filename): - """Set the exact location of libclang""" - if Config.loaded: - raise Exception("library file must be set before before using " \ - "any other functionalities in libclang.") - - Config.library_file = filename - - @staticmethod - def set_compatibility_check(check_status): - """ Perform compatibility check when loading libclang - - The python bindings are only tested and evaluated with the version of - libclang they are provided with. To ensure correct behavior a (limited) - compatibility check is performed when loading the bindings. This check - will throw an exception, as soon as it fails. - - In case these bindings are used with an older version of libclang, parts - that have been stable between releases may still work. Users of the - python bindings can disable the compatibility check. This will cause - the python bindings to load, even though they are written for a newer - version of libclang. Failures now arise if unsupported or incompatible - features are accessed. The user is required to test themselves if the - features they are using are available and compatible between different - libclang versions. - """ - if Config.loaded: - raise Exception("compatibility_check must be set before before " \ - "using any other functionalities in libclang.") - - Config.compatibility_check = check_status - - @CachedProperty - def lib(self): - lib = self.get_cindex_library() - register_functions(lib, not Config.compatibility_check) - Config.loaded = True - return lib - - def get_filename(self): - if Config.library_file: - return Config.library_file - - import platform - name = platform.system() - - if name == 'Darwin': - file = 'libclang.dylib' - elif name == 'Windows': - file = 'libclang.dll' - else: - file = 'libclang.so' - - if Config.library_path: - file = Config.library_path + '/' + file - - return file - - def get_cindex_library(self): - try: - library = cdll.LoadLibrary(self.get_filename()) - except OSError as e: - msg = str(e) + ". To provide a path to libclang use " \ - "Config.set_library_path() or " \ - "Config.set_library_file()." 
- raise LibclangError(msg) - - return library - - def function_exists(self, name): - try: - getattr(self.lib, name) - except AttributeError: - return False - - return True - -def register_enumerations(): - for name, value in clang.enumerations.TokenKinds: - TokenKind.register(value, name) - -conf = Config() -register_enumerations() - -__all__ = [ - 'Config', - 'CodeCompletionResults', - 'CompilationDatabase', - 'CompileCommands', - 'CompileCommand', - 'CursorKind', - 'Cursor', - 'Diagnostic', - 'File', - 'FixIt', - 'Index', - 'SourceLocation', - 'SourceRange', - 'TokenKind', - 'Token', - 'TranslationUnitLoadError', - 'TranslationUnit', - 'TypeKind', - 'Type', -] diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/async/reduce.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/async/reduce.h deleted file mode 100644 index f13ab02fdca0b0fdf416f3bd117ee239e54df15c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/async/reduce.h +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -// The purpose of this header is to #include the async/reduce.h header of the -// sequential, host, and device systems. It should be #included in any code -// which uses ADL to dispatch async reduce. 
- -#pragma once - -#include - -//#include - -//#define __THRUST_HOST_SYSTEM_ASYNC_REDUCE_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/async/reduce.h> -//#include __THRUST_HOST_SYSTEM_ASYNC_REDUCE_HEADER -//#undef __THRUST_HOST_SYSTEM_ASYNC_REDUCE_HEADER - -#define __THRUST_DEVICE_SYSTEM_ASYNC_REDUCE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/async/reduce.h> -#include __THRUST_DEVICE_SYSTEM_ASYNC_REDUCE_HEADER -#undef __THRUST_DEVICE_SYSTEM_ASYNC_REDUCE_HEADER - diff --git a/spaces/CVPR/WALT/walt/datasets/builder.py b/spaces/CVPR/WALT/walt/datasets/builder.py deleted file mode 100644 index 9bc0fe466f5bfbf903438a5dc979329debd6517f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/walt/datasets/builder.py +++ /dev/null @@ -1,143 +0,0 @@ -import copy -import platform -import random -from functools import partial - -import numpy as np -from mmcv.parallel import collate -from mmcv.runner import get_dist_info -from mmcv.utils import Registry, build_from_cfg -from torch.utils.data import DataLoader - -from mmdet.datasets.samplers import DistributedGroupSampler, DistributedSampler, GroupSampler - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - hard_limit = rlimit[1] - soft_limit = min(4096, hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - from mmdet.datasets.dataset_wrappers import ConcatDataset - ann_files = cfg['ann_file'] - img_prefixes = cfg.get('img_prefix', None) - seg_prefixes = cfg.get('seg_prefix', None) - proposal_files = cfg.get('proposal_file', None) - separate_eval = cfg.get('separate_eval', True) - - datasets = [] - num_dset = len(ann_files) - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - # pop 'separate_eval' since it is not a valid key for common datasets. - if 'separate_eval' in data_cfg: - data_cfg.pop('separate_eval') - data_cfg['ann_file'] = ann_files[i] - if isinstance(img_prefixes, (list, tuple)): - data_cfg['img_prefix'] = img_prefixes[i] - if isinstance(seg_prefixes, (list, tuple)): - data_cfg['seg_prefix'] = seg_prefixes[i] - if isinstance(proposal_files, (list, tuple)): - data_cfg['proposal_file'] = proposal_files[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets, separate_eval) - - -def build_dataset(cfg, default_args=None): - from mmdet.datasets.dataset_wrappers import (ConcatDataset, RepeatDataset, - ClassBalancedDataset) - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'ConcatDataset': - dataset = ConcatDataset( - [build_dataset(c, default_args) for c in cfg['datasets']], - cfg.get('separate_eval', True)) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif cfg['type'] == 'ClassBalancedDataset': - dataset = ClassBalancedDataset( - build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - **kwargs): - """Build PyTorch DataLoader. 
- - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. - samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. - """ - rank, world_size = get_dist_info() - if dist: - # DistributedGroupSampler will definitely shuffle the data to satisfy - # that images on each GPU are in the same group - if shuffle: - sampler = DistributedGroupSampler( - dataset, samples_per_gpu, world_size, rank, seed=seed) - else: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=False, seed=seed) - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - sampler = GroupSampler(dataset, samples_per_gpu) if shuffle else None - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=False, - worker_init_fn=init_fn, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # The seed of each worker equals to - # num_worker * rank + worker_id + user_seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/build.py b/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/build.py deleted file mode 100644 index 840f8a83e5db182420ac9b840331350e73fc751f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/build.py +++ /dev/null @@ -1,59 +0,0 @@ -from timm.models import create_model -from .swin_transformer import SwinTransformer -from . 
import focalnet - -def build_model(config): - model_type = config.TYPE - print(f"Creating model: {model_type}") - - if "swin" in model_type: - model = SwinTransformer( - num_classes=0, - img_size=config.IMG_SIZE, - patch_size=config.SWIN.PATCH_SIZE, - in_chans=config.SWIN.IN_CHANS, - embed_dim=config.SWIN.EMBED_DIM, - depths=config.SWIN.DEPTHS, - num_heads=config.SWIN.NUM_HEADS, - window_size=config.SWIN.WINDOW_SIZE, - mlp_ratio=config.SWIN.MLP_RATIO, - qkv_bias=config.SWIN.QKV_BIAS, - qk_scale=config.SWIN.QK_SCALE, - drop_rate=config.DROP_RATE, - drop_path_rate=config.DROP_PATH_RATE, - ape=config.SWIN.APE, - patch_norm=config.SWIN.PATCH_NORM, - use_checkpoint=False - ) - elif "focal" in model_type: - model = create_model( - model_type, - pretrained=False, - img_size=config.IMG_SIZE, - num_classes=0, - drop_path_rate=config.DROP_PATH_RATE, - use_conv_embed=config.FOCAL.USE_CONV_EMBED, - use_layerscale=config.FOCAL.USE_LAYERSCALE, - use_postln=config.FOCAL.USE_POSTLN - ) - - elif "vit" in model_type: - model = create_model( - model_type, - pretrained=is_pretrained, - img_size=config.DATA.IMG_SIZE, - num_classes=config.MODEL.NUM_CLASSES, - ) - elif "resnet" in model_type: - model = create_model( - model_type, - pretrained=is_pretrained, - num_classes=config.MODEL.NUM_CLASSES - ) - else: - model = create_model( - model_type, - pretrained=is_pretrained, - num_classes=config.MODEL.NUM_CLASSES - ) - return model diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/bronya_holdsign/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/bronya_holdsign/__init__.py deleted file mode 100644 index 88477c851a82a4e6dda427495f4c45a4224894e6..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/bronya_holdsign/__init__.py +++ /dev/null @@ -1,37 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.exception import TextOverLength - -img_dir = Path(__file__).parent / "images" - - -def bronya_holdsign(images, texts: List[str], args): - text = texts[0] - frame = BuildImage.open(img_dir / "0.jpg") - try: - frame.draw_text( - (190, 675, 640, 930), - text, - fill=(111, 95, 95), - allow_wrap=True, - max_fontsize=60, - min_fontsize=25, - lines_align="center", - ) - except ValueError: - raise TextOverLength(text) - return frame.save_jpg() - - -add_meme( - "bronya_holdsign", - bronya_holdsign, - min_texts=1, - max_texts=1, - default_texts=["V我50"], - keywords=["布洛妮娅举牌", "大鸭鸭举牌"], -) diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/helpers/gpt4love.py b/spaces/CofAI/chat/g4f/Provider/Providers/helpers/gpt4love.py deleted file mode 100644 index 987fdbf8de5c27f7b827183d9c192dcf48d8ddcf..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/Provider/Providers/helpers/gpt4love.py +++ /dev/null @@ -1,48 +0,0 @@ -import json -import sys -from re import findall -from curl_cffi import requests - -config = json.loads(sys.argv[1]) -prompt = config['messages'][-1]['content'] - -headers = { - 'authority': 'api.gptplus.one', - 'accept': 'application/json, text/plain, */*', - 'accept-language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - 'content-type': 'application/octet-stream', - 'origin': 'https://ai.gptforlove.com/', - 'referer': 'https://ai.gptforlove.com/', - 'sec-ch-ua': '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest': 'empty', - 
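    # The remaining header fields below mimic an ordinary Chrome browser request
    # (matching the impersonate='chrome110' setting used when posting) so the
    # upstream endpoint treats the call as regular browser traffic.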
'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'cross-site', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36', -} - -json_data = { - 'prompt': prompt, - 'options': {} -} - -def format(chunk): - try: - completion_chunk = findall(r'content":"(.*)"},"fin', chunk.decode())[0] - print(completion_chunk, flush=True, end='') - - except Exception as e: - print(f'[ERROR] an error occured, retrying... | [[{chunk.decode()}]]', flush=True) - return - -while True: - try: - response = requests.post('https://api.gptplus.one/api/chat-process', - headers=headers, json=json_data, content_callback=format, impersonate='chrome110') - - exit(0) - - except Exception as e: - print('[ERROR] an error occured, retrying... |', e, flush=True) - continue \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/env.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/env.py deleted file mode 100644 index 1c7db32e41ec266ead9734f90d0173b4feff61ef..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/env.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import os - -from maskrcnn_benchmark.utils.imports import import_file - - -def setup_environment(): - """Perform environment setup work. The default setup is a no-op, but this - function allows the user to specify a Python source file that performs - custom setup work that may be necessary to their computing environment. - """ - custom_module_path = os.environ.get("TORCH_DETECTRON_ENV_MODULE") - if custom_module_path: - setup_custom_environment(custom_module_path) - else: - # The default setup is a no-op - pass - - -def setup_custom_environment(custom_module_path): - """Load custom environment setup from a Python source file and run the setup - function. - """ - module = import_file("maskrcnn_benchmark.utils.env.custom_module", custom_module_path) - assert hasattr(module, "setup_environment") and callable( - module.setup_environment - ), ( - "Custom environment module defined in {} does not have the " - "required callable attribute 'setup_environment'." - ).format( - custom_module_path - ) - module.setup_environment() - - -# Force environment setup when this module is imported -setup_environment() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageOps.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageOps.py deleted file mode 100644 index 17702778c134abcb51d7632367fbbf1a2f3048fa..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageOps.py +++ /dev/null @@ -1,628 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# standard image operations -# -# History: -# 2001-10-20 fl Created -# 2001-10-23 fl Added autocontrast operator -# 2001-12-18 fl Added Kevin's fit operator -# 2004-03-14 fl Fixed potential division by zero in equalize -# 2005-05-05 fl Fixed equalize for low number of values -# -# Copyright (c) 2001-2004 by Secret Labs AB -# Copyright (c) 2001-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import functools -import operator -import re - -from . 
import ExifTags, Image, ImagePalette - -# -# helpers - - -def _border(border): - if isinstance(border, tuple): - if len(border) == 2: - left, top = right, bottom = border - elif len(border) == 4: - left, top, right, bottom = border - else: - left = top = right = bottom = border - return left, top, right, bottom - - -def _color(color, mode): - if isinstance(color, str): - from . import ImageColor - - color = ImageColor.getcolor(color, mode) - return color - - -def _lut(image, lut): - if image.mode == "P": - # FIXME: apply to lookup table, not image data - msg = "mode P support coming soon" - raise NotImplementedError(msg) - elif image.mode in ("L", "RGB"): - if image.mode == "RGB" and len(lut) == 256: - lut = lut + lut + lut - return image.point(lut) - else: - msg = "not supported for this image mode" - raise OSError(msg) - - -# -# actions - - -def autocontrast(image, cutoff=0, ignore=None, mask=None, preserve_tone=False): - """ - Maximize (normalize) image contrast. This function calculates a - histogram of the input image (or mask region), removes ``cutoff`` percent of the - lightest and darkest pixels from the histogram, and remaps the image - so that the darkest pixel becomes black (0), and the lightest - becomes white (255). - - :param image: The image to process. - :param cutoff: The percent to cut off from the histogram on the low and - high ends. Either a tuple of (low, high), or a single - number for both. - :param ignore: The background pixel value (use None for no background). - :param mask: Histogram used in contrast operation is computed using pixels - within the mask. If no mask is given the entire image is used - for histogram computation. - :param preserve_tone: Preserve image tone in Photoshop-like style autocontrast. - - .. versionadded:: 8.2.0 - - :return: An image. - """ - if preserve_tone: - histogram = image.convert("L").histogram(mask) - else: - histogram = image.histogram(mask) - - lut = [] - for layer in range(0, len(histogram), 256): - h = histogram[layer : layer + 256] - if ignore is not None: - # get rid of outliers - try: - h[ignore] = 0 - except TypeError: - # assume sequence - for ix in ignore: - h[ix] = 0 - if cutoff: - # cut off pixels from both ends of the histogram - if not isinstance(cutoff, tuple): - cutoff = (cutoff, cutoff) - # get number of pixels - n = 0 - for ix in range(256): - n = n + h[ix] - # remove cutoff% pixels from the low end - cut = n * cutoff[0] // 100 - for lo in range(256): - if cut > h[lo]: - cut = cut - h[lo] - h[lo] = 0 - else: - h[lo] -= cut - cut = 0 - if cut <= 0: - break - # remove cutoff% samples from the high end - cut = n * cutoff[1] // 100 - for hi in range(255, -1, -1): - if cut > h[hi]: - cut = cut - h[hi] - h[hi] = 0 - else: - h[hi] -= cut - cut = 0 - if cut <= 0: - break - # find lowest/highest samples after preprocessing - for lo in range(256): - if h[lo]: - break - for hi in range(255, -1, -1): - if h[hi]: - break - if hi <= lo: - # don't bother - lut.extend(list(range(256))) - else: - scale = 255.0 / (hi - lo) - offset = -lo * scale - for ix in range(256): - ix = int(ix * scale + offset) - if ix < 0: - ix = 0 - elif ix > 255: - ix = 255 - lut.append(ix) - return _lut(image, lut) - - -def colorize(image, black, white, mid=None, blackpoint=0, whitepoint=255, midpoint=127): - """ - Colorize grayscale image. - This function calculates a color wedge which maps all black pixels in - the source image to the first color and all white pixels to the - second color. If ``mid`` is specified, it uses three-color mapping. 
- The ``black`` and ``white`` arguments should be RGB tuples or color names; - optionally you can use three-color mapping by also specifying ``mid``. - Mapping positions for any of the colors can be specified - (e.g. ``blackpoint``), where these parameters are the integer - value corresponding to where the corresponding color should be mapped. - These parameters must have logical order, such that - ``blackpoint <= midpoint <= whitepoint`` (if ``mid`` is specified). - - :param image: The image to colorize. - :param black: The color to use for black input pixels. - :param white: The color to use for white input pixels. - :param mid: The color to use for midtone input pixels. - :param blackpoint: an int value [0, 255] for the black mapping. - :param whitepoint: an int value [0, 255] for the white mapping. - :param midpoint: an int value [0, 255] for the midtone mapping. - :return: An image. - """ - - # Initial asserts - assert image.mode == "L" - if mid is None: - assert 0 <= blackpoint <= whitepoint <= 255 - else: - assert 0 <= blackpoint <= midpoint <= whitepoint <= 255 - - # Define colors from arguments - black = _color(black, "RGB") - white = _color(white, "RGB") - if mid is not None: - mid = _color(mid, "RGB") - - # Empty lists for the mapping - red = [] - green = [] - blue = [] - - # Create the low-end values - for i in range(0, blackpoint): - red.append(black[0]) - green.append(black[1]) - blue.append(black[2]) - - # Create the mapping (2-color) - if mid is None: - range_map = range(0, whitepoint - blackpoint) - - for i in range_map: - red.append(black[0] + i * (white[0] - black[0]) // len(range_map)) - green.append(black[1] + i * (white[1] - black[1]) // len(range_map)) - blue.append(black[2] + i * (white[2] - black[2]) // len(range_map)) - - # Create the mapping (3-color) - else: - range_map1 = range(0, midpoint - blackpoint) - range_map2 = range(0, whitepoint - midpoint) - - for i in range_map1: - red.append(black[0] + i * (mid[0] - black[0]) // len(range_map1)) - green.append(black[1] + i * (mid[1] - black[1]) // len(range_map1)) - blue.append(black[2] + i * (mid[2] - black[2]) // len(range_map1)) - for i in range_map2: - red.append(mid[0] + i * (white[0] - mid[0]) // len(range_map2)) - green.append(mid[1] + i * (white[1] - mid[1]) // len(range_map2)) - blue.append(mid[2] + i * (white[2] - mid[2]) // len(range_map2)) - - # Create the high-end values - for i in range(0, 256 - whitepoint): - red.append(white[0]) - green.append(white[1]) - blue.append(white[2]) - - # Return converted image - image = image.convert("RGB") - return _lut(image, red + green + blue) - - -def contain(image, size, method=Image.Resampling.BICUBIC): - """ - Returns a resized version of the image, set to the maximum width and height - within the requested size, while maintaining the original aspect ratio. - - :param image: The image to resize and crop. - :param size: The requested output size in pixels, given as a - (width, height) tuple. - :param method: Resampling method to use. Default is - :py:attr:`~PIL.Image.Resampling.BICUBIC`. - See :ref:`concept-filters`. - :return: An image. 
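
    Example (the target size is illustrative)::

        thumb = ImageOps.contain(image, (128, 128))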
- """ - - im_ratio = image.width / image.height - dest_ratio = size[0] / size[1] - - if im_ratio != dest_ratio: - if im_ratio > dest_ratio: - new_height = round(image.height / image.width * size[0]) - if new_height != size[1]: - size = (size[0], new_height) - else: - new_width = round(image.width / image.height * size[1]) - if new_width != size[0]: - size = (new_width, size[1]) - return image.resize(size, resample=method) - - -def pad(image, size, method=Image.Resampling.BICUBIC, color=None, centering=(0.5, 0.5)): - """ - Returns a resized and padded version of the image, expanded to fill the - requested aspect ratio and size. - - :param image: The image to resize and crop. - :param size: The requested output size in pixels, given as a - (width, height) tuple. - :param method: Resampling method to use. Default is - :py:attr:`~PIL.Image.Resampling.BICUBIC`. - See :ref:`concept-filters`. - :param color: The background color of the padded image. - :param centering: Control the position of the original image within the - padded version. - - (0.5, 0.5) will keep the image centered - (0, 0) will keep the image aligned to the top left - (1, 1) will keep the image aligned to the bottom - right - :return: An image. - """ - - resized = contain(image, size, method) - if resized.size == size: - out = resized - else: - out = Image.new(image.mode, size, color) - if resized.palette: - out.putpalette(resized.getpalette()) - if resized.width != size[0]: - x = round((size[0] - resized.width) * max(0, min(centering[0], 1))) - out.paste(resized, (x, 0)) - else: - y = round((size[1] - resized.height) * max(0, min(centering[1], 1))) - out.paste(resized, (0, y)) - return out - - -def crop(image, border=0): - """ - Remove border from image. The same amount of pixels are removed - from all four sides. This function works on all image modes. - - .. seealso:: :py:meth:`~PIL.Image.Image.crop` - - :param image: The image to crop. - :param border: The number of pixels to remove. - :return: An image. - """ - left, top, right, bottom = _border(border) - return image.crop((left, top, image.size[0] - right, image.size[1] - bottom)) - - -def scale(image, factor, resample=Image.Resampling.BICUBIC): - """ - Returns a rescaled image by a specific factor given in parameter. - A factor greater than 1 expands the image, between 0 and 1 contracts the - image. - - :param image: The image to rescale. - :param factor: The expansion factor, as a float. - :param resample: Resampling method to use. Default is - :py:attr:`~PIL.Image.Resampling.BICUBIC`. - See :ref:`concept-filters`. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - if factor == 1: - return image.copy() - elif factor <= 0: - msg = "the factor must be greater than 0" - raise ValueError(msg) - else: - size = (round(factor * image.width), round(factor * image.height)) - return image.resize(size, resample) - - -def deform(image, deformer, resample=Image.Resampling.BILINEAR): - """ - Deform the image. - - :param image: The image to deform. - :param deformer: A deformer object. Any object that implements a - ``getmesh`` method can be used. - :param resample: An optional resampling filter. Same values possible as - in the PIL.Image.transform function. - :return: An image. - """ - return image.transform( - image.size, Image.Transform.MESH, deformer.getmesh(image), resample - ) - - -def equalize(image, mask=None): - """ - Equalize the image histogram. 
This function applies a non-linear - mapping to the input image, in order to create a uniform - distribution of grayscale values in the output image. - - :param image: The image to equalize. - :param mask: An optional mask. If given, only the pixels selected by - the mask are included in the analysis. - :return: An image. - """ - if image.mode == "P": - image = image.convert("RGB") - h = image.histogram(mask) - lut = [] - for b in range(0, len(h), 256): - histo = [_f for _f in h[b : b + 256] if _f] - if len(histo) <= 1: - lut.extend(list(range(256))) - else: - step = (functools.reduce(operator.add, histo) - histo[-1]) // 255 - if not step: - lut.extend(list(range(256))) - else: - n = step // 2 - for i in range(256): - lut.append(n // step) - n = n + h[i + b] - return _lut(image, lut) - - -def expand(image, border=0, fill=0): - """ - Add border to the image - - :param image: The image to expand. - :param border: Border width, in pixels. - :param fill: Pixel fill value (a color value). Default is 0 (black). - :return: An image. - """ - left, top, right, bottom = _border(border) - width = left + image.size[0] + right - height = top + image.size[1] + bottom - color = _color(fill, image.mode) - if image.palette: - palette = ImagePalette.ImagePalette(palette=image.getpalette()) - if isinstance(color, tuple): - color = palette.getcolor(color) - else: - palette = None - out = Image.new(image.mode, (width, height), color) - if palette: - out.putpalette(palette.palette) - out.paste(image, (left, top)) - return out - - -def fit(image, size, method=Image.Resampling.BICUBIC, bleed=0.0, centering=(0.5, 0.5)): - """ - Returns a resized and cropped version of the image, cropped to the - requested aspect ratio and size. - - This function was contributed by Kevin Cazabon. - - :param image: The image to resize and crop. - :param size: The requested output size in pixels, given as a - (width, height) tuple. - :param method: Resampling method to use. Default is - :py:attr:`~PIL.Image.Resampling.BICUBIC`. - See :ref:`concept-filters`. - :param bleed: Remove a border around the outside of the image from all - four edges. The value is a decimal percentage (use 0.01 for - one percent). The default value is 0 (no border). - Cannot be greater than or equal to 0.5. - :param centering: Control the cropping position. Use (0.5, 0.5) for - center cropping (e.g. if cropping the width, take 50% off - of the left side, and therefore 50% off the right side). - (0.0, 0.0) will crop from the top left corner (i.e. if - cropping the width, take all of the crop off of the right - side, and if cropping the height, take all of it off the - bottom). (1.0, 0.0) will crop from the bottom left - corner, etc. (i.e. if cropping the width, take all of the - crop off the left side, and if cropping the height take - none from the top, and therefore all off the bottom). - :return: An image. 
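
    Example (the size, bleed and centering values are illustrative)::

        cropped = ImageOps.fit(image, (300, 300), bleed=0.01, centering=(0.0, 0.0))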
- """ - - # by Kevin Cazabon, Feb 17/2000 - # kevin@cazabon.com - # https://www.cazabon.com - - # ensure centering is mutable - centering = list(centering) - - if not 0.0 <= centering[0] <= 1.0: - centering[0] = 0.5 - if not 0.0 <= centering[1] <= 1.0: - centering[1] = 0.5 - - if not 0.0 <= bleed < 0.5: - bleed = 0.0 - - # calculate the area to use for resizing and cropping, subtracting - # the 'bleed' around the edges - - # number of pixels to trim off on Top and Bottom, Left and Right - bleed_pixels = (bleed * image.size[0], bleed * image.size[1]) - - live_size = ( - image.size[0] - bleed_pixels[0] * 2, - image.size[1] - bleed_pixels[1] * 2, - ) - - # calculate the aspect ratio of the live_size - live_size_ratio = live_size[0] / live_size[1] - - # calculate the aspect ratio of the output image - output_ratio = size[0] / size[1] - - # figure out if the sides or top/bottom will be cropped off - if live_size_ratio == output_ratio: - # live_size is already the needed ratio - crop_width = live_size[0] - crop_height = live_size[1] - elif live_size_ratio >= output_ratio: - # live_size is wider than what's needed, crop the sides - crop_width = output_ratio * live_size[1] - crop_height = live_size[1] - else: - # live_size is taller than what's needed, crop the top and bottom - crop_width = live_size[0] - crop_height = live_size[0] / output_ratio - - # make the crop - crop_left = bleed_pixels[0] + (live_size[0] - crop_width) * centering[0] - crop_top = bleed_pixels[1] + (live_size[1] - crop_height) * centering[1] - - crop = (crop_left, crop_top, crop_left + crop_width, crop_top + crop_height) - - # resize the image and return it - return image.resize(size, method, box=crop) - - -def flip(image): - """ - Flip the image vertically (top to bottom). - - :param image: The image to flip. - :return: An image. - """ - return image.transpose(Image.Transpose.FLIP_TOP_BOTTOM) - - -def grayscale(image): - """ - Convert the image to grayscale. - - :param image: The image to convert. - :return: An image. - """ - return image.convert("L") - - -def invert(image): - """ - Invert (negate) the image. - - :param image: The image to invert. - :return: An image. - """ - lut = [] - for i in range(256): - lut.append(255 - i) - return image.point(lut) if image.mode == "1" else _lut(image, lut) - - -def mirror(image): - """ - Flip image horizontally (left to right). - - :param image: The image to mirror. - :return: An image. - """ - return image.transpose(Image.Transpose.FLIP_LEFT_RIGHT) - - -def posterize(image, bits): - """ - Reduce the number of bits for each color channel. - - :param image: The image to posterize. - :param bits: The number of bits to keep for each channel (1-8). - :return: An image. - """ - lut = [] - mask = ~(2 ** (8 - bits) - 1) - for i in range(256): - lut.append(i & mask) - return _lut(image, lut) - - -def solarize(image, threshold=128): - """ - Invert all pixel values above a threshold. - - :param image: The image to solarize. - :param threshold: All pixels above this greyscale level are inverted. - :return: An image. - """ - lut = [] - for i in range(256): - if i < threshold: - lut.append(i) - else: - lut.append(255 - i) - return _lut(image, lut) - - -def exif_transpose(image, *, in_place=False): - """ - If an image has an EXIF Orientation tag, other than 1, transpose the image - accordingly, and remove the orientation data. - - :param image: The image to transpose. - :param in_place: Boolean. Keyword-only argument. - If ``True``, the original image is modified in-place, and ``None`` is returned. 
- If ``False`` (default), a new :py:class:`~PIL.Image.Image` object is returned - with the transposition applied. If there is no transposition, a copy of the - image will be returned. - """ - image_exif = image.getexif() - orientation = image_exif.get(ExifTags.Base.Orientation) - method = { - 2: Image.Transpose.FLIP_LEFT_RIGHT, - 3: Image.Transpose.ROTATE_180, - 4: Image.Transpose.FLIP_TOP_BOTTOM, - 5: Image.Transpose.TRANSPOSE, - 6: Image.Transpose.ROTATE_270, - 7: Image.Transpose.TRANSVERSE, - 8: Image.Transpose.ROTATE_90, - }.get(orientation) - if method is not None: - transposed_image = image.transpose(method) - if in_place: - image.im = transposed_image.im - image.pyaccess = None - image._size = transposed_image._size - exif_image = image if in_place else transposed_image - - exif = exif_image.getexif() - if ExifTags.Base.Orientation in exif: - del exif[ExifTags.Base.Orientation] - if "exif" in exif_image.info: - exif_image.info["exif"] = exif.tobytes() - elif "Raw profile type exif" in exif_image.info: - exif_image.info["Raw profile type exif"] = exif.tobytes().hex() - elif "XML:com.adobe.xmp" in exif_image.info: - for pattern in ( - r'tiff:Orientation="([0-9])"', - r"([0-9])", - ): - exif_image.info["XML:com.adobe.xmp"] = re.sub( - pattern, "", exif_image.info["XML:com.adobe.xmp"] - ) - if not in_place: - return transposed_image - elif not in_place: - return image.copy() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/O_S_2f_2.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/O_S_2f_2.py deleted file mode 100644 index 7b403026aa4eabe03c7484f51f14db63ed2ebc5c..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/O_S_2f_2.py +++ /dev/null @@ -1,617 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.roundTools import otRound -from fontTools.misc.textTools import safeEval, num2binary, binary2num -from fontTools.ttLib.tables import DefaultTable -import bisect -import logging - - -log = logging.getLogger(__name__) - -# panose classification - -panoseFormat = """ - bFamilyType: B - bSerifStyle: B - bWeight: B - bProportion: B - bContrast: B - bStrokeVariation: B - bArmStyle: B - bLetterForm: B - bMidline: B - bXHeight: B -""" - - -class Panose(object): - def __init__(self, **kwargs): - _, names, _ = sstruct.getformat(panoseFormat) - for name in names: - setattr(self, name, kwargs.pop(name, 0)) - for k in kwargs: - raise TypeError(f"Panose() got an unexpected keyword argument {k!r}") - - def toXML(self, writer, ttFont): - formatstring, names, fixes = sstruct.getformat(panoseFormat) - for name in names: - writer.simpletag(name, value=getattr(self, name)) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - setattr(self, name, safeEval(attrs["value"])) - - -# 'sfnt' OS/2 and Windows Metrics table - 'OS/2' - -OS2_format_0 = """ - > # big endian - version: H # version - xAvgCharWidth: h # average character width - usWeightClass: H # degree of thickness of strokes - usWidthClass: H # aspect ratio - fsType: H # type flags - ySubscriptXSize: h # subscript horizontal font size - ySubscriptYSize: h # subscript vertical font size - ySubscriptXOffset: h # subscript x offset - ySubscriptYOffset: h # subscript y offset - ySuperscriptXSize: h # superscript horizontal font size - ySuperscriptYSize: h # superscript vertical font size - ySuperscriptXOffset: h # superscript x offset - ySuperscriptYOffset: h # superscript y 
offset - yStrikeoutSize: h # strikeout size - yStrikeoutPosition: h # strikeout position - sFamilyClass: h # font family class and subclass - panose: 10s # panose classification number - ulUnicodeRange1: L # character range - ulUnicodeRange2: L # character range - ulUnicodeRange3: L # character range - ulUnicodeRange4: L # character range - achVendID: 4s # font vendor identification - fsSelection: H # font selection flags - usFirstCharIndex: H # first unicode character index - usLastCharIndex: H # last unicode character index - sTypoAscender: h # typographic ascender - sTypoDescender: h # typographic descender - sTypoLineGap: h # typographic line gap - usWinAscent: H # Windows ascender - usWinDescent: H # Windows descender -""" - -OS2_format_1_addition = """ - ulCodePageRange1: L - ulCodePageRange2: L -""" - -OS2_format_2_addition = ( - OS2_format_1_addition - + """ - sxHeight: h - sCapHeight: h - usDefaultChar: H - usBreakChar: H - usMaxContext: H -""" -) - -OS2_format_5_addition = ( - OS2_format_2_addition - + """ - usLowerOpticalPointSize: H - usUpperOpticalPointSize: H -""" -) - -bigendian = " > # big endian\n" - -OS2_format_1 = OS2_format_0 + OS2_format_1_addition -OS2_format_2 = OS2_format_0 + OS2_format_2_addition -OS2_format_5 = OS2_format_0 + OS2_format_5_addition -OS2_format_1_addition = bigendian + OS2_format_1_addition -OS2_format_2_addition = bigendian + OS2_format_2_addition -OS2_format_5_addition = bigendian + OS2_format_5_addition - - -class table_O_S_2f_2(DefaultTable.DefaultTable): - - """the OS/2 table""" - - dependencies = ["head"] - - def decompile(self, data, ttFont): - dummy, data = sstruct.unpack2(OS2_format_0, data, self) - - if self.version == 1: - dummy, data = sstruct.unpack2(OS2_format_1_addition, data, self) - elif self.version in (2, 3, 4): - dummy, data = sstruct.unpack2(OS2_format_2_addition, data, self) - elif self.version == 5: - dummy, data = sstruct.unpack2(OS2_format_5_addition, data, self) - self.usLowerOpticalPointSize /= 20 - self.usUpperOpticalPointSize /= 20 - elif self.version != 0: - from fontTools import ttLib - - raise ttLib.TTLibError( - "unknown format for OS/2 table: version %s" % self.version - ) - if len(data): - log.warning("too much 'OS/2' table data") - - self.panose = sstruct.unpack(panoseFormat, self.panose, Panose()) - - def compile(self, ttFont): - self.updateFirstAndLastCharIndex(ttFont) - panose = self.panose - head = ttFont["head"] - if (self.fsSelection & 1) and not (head.macStyle & 1 << 1): - log.warning( - "fsSelection bit 0 (italic) and " - "head table macStyle bit 1 (italic) should match" - ) - if (self.fsSelection & 1 << 5) and not (head.macStyle & 1): - log.warning( - "fsSelection bit 5 (bold) and " - "head table macStyle bit 0 (bold) should match" - ) - if (self.fsSelection & 1 << 6) and (self.fsSelection & 1 + (1 << 5)): - log.warning( - "fsSelection bit 6 (regular) is set, " - "bits 0 (italic) and 5 (bold) must be clear" - ) - if self.version < 4 and self.fsSelection & 0b1110000000: - log.warning( - "fsSelection bits 7, 8 and 9 are only defined in " - "OS/2 table version 4 and up: version %s", - self.version, - ) - self.panose = sstruct.pack(panoseFormat, self.panose) - if self.version == 0: - data = sstruct.pack(OS2_format_0, self) - elif self.version == 1: - data = sstruct.pack(OS2_format_1, self) - elif self.version in (2, 3, 4): - data = sstruct.pack(OS2_format_2, self) - elif self.version == 5: - d = self.__dict__.copy() - d["usLowerOpticalPointSize"] = round(self.usLowerOpticalPointSize * 20) - 
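            # decompile() divided both optical point sizes by 20, so they are
            # scaled back up to twentieths of a point here before packing.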
d["usUpperOpticalPointSize"] = round(self.usUpperOpticalPointSize * 20) - data = sstruct.pack(OS2_format_5, d) - else: - from fontTools import ttLib - - raise ttLib.TTLibError( - "unknown format for OS/2 table: version %s" % self.version - ) - self.panose = panose - return data - - def toXML(self, writer, ttFont): - writer.comment( - "The fields 'usFirstCharIndex' and 'usLastCharIndex'\n" - "will be recalculated by the compiler" - ) - writer.newline() - if self.version == 1: - format = OS2_format_1 - elif self.version in (2, 3, 4): - format = OS2_format_2 - elif self.version == 5: - format = OS2_format_5 - else: - format = OS2_format_0 - formatstring, names, fixes = sstruct.getformat(format) - for name in names: - value = getattr(self, name) - if name == "panose": - writer.begintag("panose") - writer.newline() - value.toXML(writer, ttFont) - writer.endtag("panose") - elif name in ( - "ulUnicodeRange1", - "ulUnicodeRange2", - "ulUnicodeRange3", - "ulUnicodeRange4", - "ulCodePageRange1", - "ulCodePageRange2", - ): - writer.simpletag(name, value=num2binary(value)) - elif name in ("fsType", "fsSelection"): - writer.simpletag(name, value=num2binary(value, 16)) - elif name == "achVendID": - writer.simpletag(name, value=repr(value)[1:-1]) - else: - writer.simpletag(name, value=value) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "panose": - self.panose = panose = Panose() - for element in content: - if isinstance(element, tuple): - name, attrs, content = element - panose.fromXML(name, attrs, content, ttFont) - elif name in ( - "ulUnicodeRange1", - "ulUnicodeRange2", - "ulUnicodeRange3", - "ulUnicodeRange4", - "ulCodePageRange1", - "ulCodePageRange2", - "fsType", - "fsSelection", - ): - setattr(self, name, binary2num(attrs["value"])) - elif name == "achVendID": - setattr(self, name, safeEval("'''" + attrs["value"] + "'''")) - else: - setattr(self, name, safeEval(attrs["value"])) - - def updateFirstAndLastCharIndex(self, ttFont): - if "cmap" not in ttFont: - return - codes = set() - for table in getattr(ttFont["cmap"], "tables", []): - if table.isUnicode(): - codes.update(table.cmap.keys()) - if codes: - minCode = min(codes) - maxCode = max(codes) - # USHORT cannot hold codepoints greater than 0xFFFF - self.usFirstCharIndex = min(0xFFFF, minCode) - self.usLastCharIndex = min(0xFFFF, maxCode) - - # misspelled attributes kept for legacy reasons - - @property - def usMaxContex(self): - return self.usMaxContext - - @usMaxContex.setter - def usMaxContex(self, value): - self.usMaxContext = value - - @property - def fsFirstCharIndex(self): - return self.usFirstCharIndex - - @fsFirstCharIndex.setter - def fsFirstCharIndex(self, value): - self.usFirstCharIndex = value - - @property - def fsLastCharIndex(self): - return self.usLastCharIndex - - @fsLastCharIndex.setter - def fsLastCharIndex(self, value): - self.usLastCharIndex = value - - def getUnicodeRanges(self): - """Return the set of 'ulUnicodeRange*' bits currently enabled.""" - bits = set() - ul1, ul2 = self.ulUnicodeRange1, self.ulUnicodeRange2 - ul3, ul4 = self.ulUnicodeRange3, self.ulUnicodeRange4 - for i in range(32): - if ul1 & (1 << i): - bits.add(i) - if ul2 & (1 << i): - bits.add(i + 32) - if ul3 & (1 << i): - bits.add(i + 64) - if ul4 & (1 << i): - bits.add(i + 96) - return bits - - def setUnicodeRanges(self, bits): - """Set the 'ulUnicodeRange*' fields to the specified 'bits'.""" - ul1, ul2, ul3, ul4 = 0, 0, 0, 0 - for bit in bits: - if 0 <= bit < 32: - ul1 |= 1 << bit - elif 32 <= bit < 64: - ul2 |= 1 
<< (bit - 32) - elif 64 <= bit < 96: - ul3 |= 1 << (bit - 64) - elif 96 <= bit < 123: - ul4 |= 1 << (bit - 96) - else: - raise ValueError("expected 0 <= int <= 122, found: %r" % bit) - self.ulUnicodeRange1, self.ulUnicodeRange2 = ul1, ul2 - self.ulUnicodeRange3, self.ulUnicodeRange4 = ul3, ul4 - - def recalcUnicodeRanges(self, ttFont, pruneOnly=False): - """Intersect the codepoints in the font's Unicode cmap subtables with - the Unicode block ranges defined in the OpenType specification (v1.7), - and set the respective 'ulUnicodeRange*' bits if there is at least ONE - intersection. - If 'pruneOnly' is True, only clear unused bits with NO intersection. - """ - unicodes = set() - for table in ttFont["cmap"].tables: - if table.isUnicode(): - unicodes.update(table.cmap.keys()) - if pruneOnly: - empty = intersectUnicodeRanges(unicodes, inverse=True) - bits = self.getUnicodeRanges() - empty - else: - bits = intersectUnicodeRanges(unicodes) - self.setUnicodeRanges(bits) - return bits - - def recalcAvgCharWidth(self, ttFont): - """Recalculate xAvgCharWidth using metrics from ttFont's 'hmtx' table. - - Set it to 0 if the unlikely event 'hmtx' table is not found. - """ - avg_width = 0 - hmtx = ttFont.get("hmtx") - if hmtx is not None: - widths = [width for width, _ in hmtx.metrics.values() if width > 0] - if widths: - avg_width = otRound(sum(widths) / len(widths)) - self.xAvgCharWidth = avg_width - return avg_width - - -# Unicode ranges data from the OpenType OS/2 table specification v1.7 - -OS2_UNICODE_RANGES = ( - (("Basic Latin", (0x0000, 0x007F)),), - (("Latin-1 Supplement", (0x0080, 0x00FF)),), - (("Latin Extended-A", (0x0100, 0x017F)),), - (("Latin Extended-B", (0x0180, 0x024F)),), - ( - ("IPA Extensions", (0x0250, 0x02AF)), - ("Phonetic Extensions", (0x1D00, 0x1D7F)), - ("Phonetic Extensions Supplement", (0x1D80, 0x1DBF)), - ), - ( - ("Spacing Modifier Letters", (0x02B0, 0x02FF)), - ("Modifier Tone Letters", (0xA700, 0xA71F)), - ), - ( - ("Combining Diacritical Marks", (0x0300, 0x036F)), - ("Combining Diacritical Marks Supplement", (0x1DC0, 0x1DFF)), - ), - (("Greek and Coptic", (0x0370, 0x03FF)),), - (("Coptic", (0x2C80, 0x2CFF)),), - ( - ("Cyrillic", (0x0400, 0x04FF)), - ("Cyrillic Supplement", (0x0500, 0x052F)), - ("Cyrillic Extended-A", (0x2DE0, 0x2DFF)), - ("Cyrillic Extended-B", (0xA640, 0xA69F)), - ), - (("Armenian", (0x0530, 0x058F)),), - (("Hebrew", (0x0590, 0x05FF)),), - (("Vai", (0xA500, 0xA63F)),), - (("Arabic", (0x0600, 0x06FF)), ("Arabic Supplement", (0x0750, 0x077F))), - (("NKo", (0x07C0, 0x07FF)),), - (("Devanagari", (0x0900, 0x097F)),), - (("Bengali", (0x0980, 0x09FF)),), - (("Gurmukhi", (0x0A00, 0x0A7F)),), - (("Gujarati", (0x0A80, 0x0AFF)),), - (("Oriya", (0x0B00, 0x0B7F)),), - (("Tamil", (0x0B80, 0x0BFF)),), - (("Telugu", (0x0C00, 0x0C7F)),), - (("Kannada", (0x0C80, 0x0CFF)),), - (("Malayalam", (0x0D00, 0x0D7F)),), - (("Thai", (0x0E00, 0x0E7F)),), - (("Lao", (0x0E80, 0x0EFF)),), - (("Georgian", (0x10A0, 0x10FF)), ("Georgian Supplement", (0x2D00, 0x2D2F))), - (("Balinese", (0x1B00, 0x1B7F)),), - (("Hangul Jamo", (0x1100, 0x11FF)),), - ( - ("Latin Extended Additional", (0x1E00, 0x1EFF)), - ("Latin Extended-C", (0x2C60, 0x2C7F)), - ("Latin Extended-D", (0xA720, 0xA7FF)), - ), - (("Greek Extended", (0x1F00, 0x1FFF)),), - ( - ("General Punctuation", (0x2000, 0x206F)), - ("Supplemental Punctuation", (0x2E00, 0x2E7F)), - ), - (("Superscripts And Subscripts", (0x2070, 0x209F)),), - (("Currency Symbols", (0x20A0, 0x20CF)),), - (("Combining Diacritical Marks For Symbols", (0x20D0, 
0x20FF)),), - (("Letterlike Symbols", (0x2100, 0x214F)),), - (("Number Forms", (0x2150, 0x218F)),), - ( - ("Arrows", (0x2190, 0x21FF)), - ("Supplemental Arrows-A", (0x27F0, 0x27FF)), - ("Supplemental Arrows-B", (0x2900, 0x297F)), - ("Miscellaneous Symbols and Arrows", (0x2B00, 0x2BFF)), - ), - ( - ("Mathematical Operators", (0x2200, 0x22FF)), - ("Supplemental Mathematical Operators", (0x2A00, 0x2AFF)), - ("Miscellaneous Mathematical Symbols-A", (0x27C0, 0x27EF)), - ("Miscellaneous Mathematical Symbols-B", (0x2980, 0x29FF)), - ), - (("Miscellaneous Technical", (0x2300, 0x23FF)),), - (("Control Pictures", (0x2400, 0x243F)),), - (("Optical Character Recognition", (0x2440, 0x245F)),), - (("Enclosed Alphanumerics", (0x2460, 0x24FF)),), - (("Box Drawing", (0x2500, 0x257F)),), - (("Block Elements", (0x2580, 0x259F)),), - (("Geometric Shapes", (0x25A0, 0x25FF)),), - (("Miscellaneous Symbols", (0x2600, 0x26FF)),), - (("Dingbats", (0x2700, 0x27BF)),), - (("CJK Symbols And Punctuation", (0x3000, 0x303F)),), - (("Hiragana", (0x3040, 0x309F)),), - ( - ("Katakana", (0x30A0, 0x30FF)), - ("Katakana Phonetic Extensions", (0x31F0, 0x31FF)), - ), - (("Bopomofo", (0x3100, 0x312F)), ("Bopomofo Extended", (0x31A0, 0x31BF))), - (("Hangul Compatibility Jamo", (0x3130, 0x318F)),), - (("Phags-pa", (0xA840, 0xA87F)),), - (("Enclosed CJK Letters And Months", (0x3200, 0x32FF)),), - (("CJK Compatibility", (0x3300, 0x33FF)),), - (("Hangul Syllables", (0xAC00, 0xD7AF)),), - (("Non-Plane 0 *", (0xD800, 0xDFFF)),), - (("Phoenician", (0x10900, 0x1091F)),), - ( - ("CJK Unified Ideographs", (0x4E00, 0x9FFF)), - ("CJK Radicals Supplement", (0x2E80, 0x2EFF)), - ("Kangxi Radicals", (0x2F00, 0x2FDF)), - ("Ideographic Description Characters", (0x2FF0, 0x2FFF)), - ("CJK Unified Ideographs Extension A", (0x3400, 0x4DBF)), - ("CJK Unified Ideographs Extension B", (0x20000, 0x2A6DF)), - ("Kanbun", (0x3190, 0x319F)), - ), - (("Private Use Area (plane 0)", (0xE000, 0xF8FF)),), - ( - ("CJK Strokes", (0x31C0, 0x31EF)), - ("CJK Compatibility Ideographs", (0xF900, 0xFAFF)), - ("CJK Compatibility Ideographs Supplement", (0x2F800, 0x2FA1F)), - ), - (("Alphabetic Presentation Forms", (0xFB00, 0xFB4F)),), - (("Arabic Presentation Forms-A", (0xFB50, 0xFDFF)),), - (("Combining Half Marks", (0xFE20, 0xFE2F)),), - ( - ("Vertical Forms", (0xFE10, 0xFE1F)), - ("CJK Compatibility Forms", (0xFE30, 0xFE4F)), - ), - (("Small Form Variants", (0xFE50, 0xFE6F)),), - (("Arabic Presentation Forms-B", (0xFE70, 0xFEFF)),), - (("Halfwidth And Fullwidth Forms", (0xFF00, 0xFFEF)),), - (("Specials", (0xFFF0, 0xFFFF)),), - (("Tibetan", (0x0F00, 0x0FFF)),), - (("Syriac", (0x0700, 0x074F)),), - (("Thaana", (0x0780, 0x07BF)),), - (("Sinhala", (0x0D80, 0x0DFF)),), - (("Myanmar", (0x1000, 0x109F)),), - ( - ("Ethiopic", (0x1200, 0x137F)), - ("Ethiopic Supplement", (0x1380, 0x139F)), - ("Ethiopic Extended", (0x2D80, 0x2DDF)), - ), - (("Cherokee", (0x13A0, 0x13FF)),), - (("Unified Canadian Aboriginal Syllabics", (0x1400, 0x167F)),), - (("Ogham", (0x1680, 0x169F)),), - (("Runic", (0x16A0, 0x16FF)),), - (("Khmer", (0x1780, 0x17FF)), ("Khmer Symbols", (0x19E0, 0x19FF))), - (("Mongolian", (0x1800, 0x18AF)),), - (("Braille Patterns", (0x2800, 0x28FF)),), - (("Yi Syllables", (0xA000, 0xA48F)), ("Yi Radicals", (0xA490, 0xA4CF))), - ( - ("Tagalog", (0x1700, 0x171F)), - ("Hanunoo", (0x1720, 0x173F)), - ("Buhid", (0x1740, 0x175F)), - ("Tagbanwa", (0x1760, 0x177F)), - ), - (("Old Italic", (0x10300, 0x1032F)),), - (("Gothic", (0x10330, 0x1034F)),), - (("Deseret", (0x10400, 0x1044F)),), 
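    # Each top-level entry of OS2_UNICODE_RANGES corresponds to one
    # ulUnicodeRange bit (its index in this tuple); an entry may group several
    # named Unicode blocks, each with an inclusive (start, stop) codepoint range.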
- ( - ("Byzantine Musical Symbols", (0x1D000, 0x1D0FF)), - ("Musical Symbols", (0x1D100, 0x1D1FF)), - ("Ancient Greek Musical Notation", (0x1D200, 0x1D24F)), - ), - (("Mathematical Alphanumeric Symbols", (0x1D400, 0x1D7FF)),), - ( - ("Private Use (plane 15)", (0xF0000, 0xFFFFD)), - ("Private Use (plane 16)", (0x100000, 0x10FFFD)), - ), - ( - ("Variation Selectors", (0xFE00, 0xFE0F)), - ("Variation Selectors Supplement", (0xE0100, 0xE01EF)), - ), - (("Tags", (0xE0000, 0xE007F)),), - (("Limbu", (0x1900, 0x194F)),), - (("Tai Le", (0x1950, 0x197F)),), - (("New Tai Lue", (0x1980, 0x19DF)),), - (("Buginese", (0x1A00, 0x1A1F)),), - (("Glagolitic", (0x2C00, 0x2C5F)),), - (("Tifinagh", (0x2D30, 0x2D7F)),), - (("Yijing Hexagram Symbols", (0x4DC0, 0x4DFF)),), - (("Syloti Nagri", (0xA800, 0xA82F)),), - ( - ("Linear B Syllabary", (0x10000, 0x1007F)), - ("Linear B Ideograms", (0x10080, 0x100FF)), - ("Aegean Numbers", (0x10100, 0x1013F)), - ), - (("Ancient Greek Numbers", (0x10140, 0x1018F)),), - (("Ugaritic", (0x10380, 0x1039F)),), - (("Old Persian", (0x103A0, 0x103DF)),), - (("Shavian", (0x10450, 0x1047F)),), - (("Osmanya", (0x10480, 0x104AF)),), - (("Cypriot Syllabary", (0x10800, 0x1083F)),), - (("Kharoshthi", (0x10A00, 0x10A5F)),), - (("Tai Xuan Jing Symbols", (0x1D300, 0x1D35F)),), - ( - ("Cuneiform", (0x12000, 0x123FF)), - ("Cuneiform Numbers and Punctuation", (0x12400, 0x1247F)), - ), - (("Counting Rod Numerals", (0x1D360, 0x1D37F)),), - (("Sundanese", (0x1B80, 0x1BBF)),), - (("Lepcha", (0x1C00, 0x1C4F)),), - (("Ol Chiki", (0x1C50, 0x1C7F)),), - (("Saurashtra", (0xA880, 0xA8DF)),), - (("Kayah Li", (0xA900, 0xA92F)),), - (("Rejang", (0xA930, 0xA95F)),), - (("Cham", (0xAA00, 0xAA5F)),), - (("Ancient Symbols", (0x10190, 0x101CF)),), - (("Phaistos Disc", (0x101D0, 0x101FF)),), - ( - ("Carian", (0x102A0, 0x102DF)), - ("Lycian", (0x10280, 0x1029F)), - ("Lydian", (0x10920, 0x1093F)), - ), - (("Domino Tiles", (0x1F030, 0x1F09F)), ("Mahjong Tiles", (0x1F000, 0x1F02F))), -) - - -_unicodeStarts = [] -_unicodeValues = [None] - - -def _getUnicodeRanges(): - # build the ranges of codepoints for each unicode range bit, and cache result - if not _unicodeStarts: - unicodeRanges = [ - (start, (stop, bit)) - for bit, blocks in enumerate(OS2_UNICODE_RANGES) - for _, (start, stop) in blocks - ] - for start, (stop, bit) in sorted(unicodeRanges): - _unicodeStarts.append(start) - _unicodeValues.append((stop, bit)) - return _unicodeStarts, _unicodeValues - - -def intersectUnicodeRanges(unicodes, inverse=False): - """Intersect a sequence of (int) Unicode codepoints with the Unicode block - ranges defined in the OpenType specification v1.7, and return the set of - 'ulUnicodeRanges' bits for which there is at least ONE intersection. - If 'inverse' is True, return the the bits for which there is NO intersection. - - >>> intersectUnicodeRanges([0x0410]) == {9} - True - >>> intersectUnicodeRanges([0x0410, 0x1F000]) == {9, 57, 122} - True - >>> intersectUnicodeRanges([0x0410, 0x1F000], inverse=True) == ( - ... 
set(range(len(OS2_UNICODE_RANGES))) - {9, 57, 122}) - True - """ - unicodes = set(unicodes) - unicodestarts, unicodevalues = _getUnicodeRanges() - bits = set() - for code in unicodes: - stop, bit = unicodevalues[bisect.bisect(unicodestarts, code)] - if code <= stop: - bits.add(bit) - # The spec says that bit 57 ("Non Plane 0") implies that there's - # at least one codepoint beyond the BMP; so I also include all - # the non-BMP codepoints here - if any(0x10000 <= code < 0x110000 for code in unicodes): - bits.add(57) - return set(range(len(OS2_UNICODE_RANGES))) - bits if inverse else bits - - -if __name__ == "__main__": - import doctest, sys - - sys.exit(doctest.testmod().failed) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/git.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/git.py deleted file mode 100644 index 80c73e066d83211da6cfb2940edf97ab5cfe0789..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/git.py +++ /dev/null @@ -1,127 +0,0 @@ -import os - -import pygit2 - -from fsspec.spec import AbstractFileSystem - -from .memory import MemoryFile - - -class GitFileSystem(AbstractFileSystem): - """Browse the files of a local git repo at any hash/tag/branch - - (experimental backend) - """ - - root_marker = "" - cachable = True - - def __init__(self, path=None, fo=None, ref=None, **kwargs): - """ - - Parameters - ---------- - path: str (optional) - Local location of the repo (uses current directory if not given). - May be deprecated in favour of ``fo``. When used with a higher - level function such as fsspec.open(), may be of the form - "git://[path-to-repo[:]][ref@]path/to/file" (but the actual - file path should not contain "@" or ":"). - fo: str (optional) - Same as ``path``, but passed as part of a chained URL. This one - takes precedence if both are given. - ref: str (optional) - Reference to work with, could be a hash, tag or branch name. Defaults - to current working tree. 
Note that ``ls`` and ``open`` also take hash, - so this becomes the default for those operations - kwargs - """ - super().__init__(**kwargs) - self.repo = pygit2.Repository(fo or path or os.getcwd()) - self.ref = ref or "master" - - @classmethod - def _strip_protocol(cls, path): - path = super()._strip_protocol(path).lstrip("/") - if ":" in path: - path = path.split(":", 1)[1] - if "@" in path: - path = path.split("@", 1)[1] - return path.lstrip("/") - - def _path_to_object(self, path, ref): - comm, ref = self.repo.resolve_refish(ref or self.ref) - parts = path.split("/") - tree = comm.tree - for part in parts: - if part and isinstance(tree, pygit2.Tree): - tree = tree[part] - return tree - - @staticmethod - def _get_kwargs_from_urls(path): - if path.startswith("git://"): - path = path[6:] - out = {} - if ":" in path: - out["path"], path = path.split(":", 1) - if "@" in path: - out["ref"], path = path.split("@", 1) - return out - - def ls(self, path, detail=True, ref=None, **kwargs): - path = self._strip_protocol(path) - tree = self._path_to_object(path, ref) - if isinstance(tree, pygit2.Tree): - out = [] - for obj in tree: - if isinstance(obj, pygit2.Tree): - out.append( - { - "type": "directory", - "name": "/".join([path, obj.name]).lstrip("/"), - "hex": obj.hex, - "mode": "%o" % obj.filemode, - "size": 0, - } - ) - else: - out.append( - { - "type": "file", - "name": "/".join([path, obj.name]).lstrip("/"), - "hex": obj.hex, - "mode": "%o" % obj.filemode, - "size": obj.size, - } - ) - else: - obj = tree - out = [ - { - "type": "file", - "name": obj.name, - "hex": obj.hex, - "mode": "%o" % obj.filemode, - "size": obj.size, - } - ] - if detail: - return out - return [o["name"] for o in out] - - def ukey(self, path, ref=None): - return self.info(path, ref=ref)["hex"] - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=True, - cache_options=None, - ref=None, - **kwargs, - ): - obj = self._path_to_object(path, ref or self.ref) - return MemoryFile(data=obj.data) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_state.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_state.py deleted file mode 100644 index bc974e636e9f3e9b66022d2095cd670a9acbdcd9..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_state.py +++ /dev/null @@ -1,271 +0,0 @@ -import pytest - -from .._events import ( - ConnectionClosed, - Data, - EndOfMessage, - Event, - InformationalResponse, - Request, - Response, -) -from .._state import ( - _SWITCH_CONNECT, - _SWITCH_UPGRADE, - CLIENT, - CLOSED, - ConnectionState, - DONE, - IDLE, - MIGHT_SWITCH_PROTOCOL, - MUST_CLOSE, - SEND_BODY, - SEND_RESPONSE, - SERVER, - SWITCHED_PROTOCOL, -) -from .._util import LocalProtocolError - - -def test_ConnectionState() -> None: - cs = ConnectionState() - - # Basic event-triggered transitions - - assert cs.states == {CLIENT: IDLE, SERVER: IDLE} - - cs.process_event(CLIENT, Request) - # The SERVER-Request special case: - assert cs.states == {CLIENT: SEND_BODY, SERVER: SEND_RESPONSE} - - # Illegal transitions raise an error and nothing happens - with pytest.raises(LocalProtocolError): - cs.process_event(CLIENT, Request) - assert cs.states == {CLIENT: SEND_BODY, SERVER: SEND_RESPONSE} - - cs.process_event(SERVER, InformationalResponse) - assert cs.states == {CLIENT: SEND_BODY, SERVER: SEND_RESPONSE} - - cs.process_event(SERVER, Response) - assert cs.states == {CLIENT: SEND_BODY, SERVER: SEND_BODY} - 
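    # Both peers are now sending bodies; an EndOfMessage from each side moves
    # that side on to DONE.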
- cs.process_event(CLIENT, EndOfMessage) - cs.process_event(SERVER, EndOfMessage) - assert cs.states == {CLIENT: DONE, SERVER: DONE} - - # State-triggered transition - - cs.process_event(SERVER, ConnectionClosed) - assert cs.states == {CLIENT: MUST_CLOSE, SERVER: CLOSED} - - -def test_ConnectionState_keep_alive() -> None: - # keep_alive = False - cs = ConnectionState() - cs.process_event(CLIENT, Request) - cs.process_keep_alive_disabled() - cs.process_event(CLIENT, EndOfMessage) - assert cs.states == {CLIENT: MUST_CLOSE, SERVER: SEND_RESPONSE} - - cs.process_event(SERVER, Response) - cs.process_event(SERVER, EndOfMessage) - assert cs.states == {CLIENT: MUST_CLOSE, SERVER: MUST_CLOSE} - - -def test_ConnectionState_keep_alive_in_DONE() -> None: - # Check that if keep_alive is disabled when the CLIENT is already in DONE, - # then this is sufficient to immediately trigger the DONE -> MUST_CLOSE - # transition - cs = ConnectionState() - cs.process_event(CLIENT, Request) - cs.process_event(CLIENT, EndOfMessage) - assert cs.states[CLIENT] is DONE - cs.process_keep_alive_disabled() - assert cs.states[CLIENT] is MUST_CLOSE - - -def test_ConnectionState_switch_denied() -> None: - for switch_type in (_SWITCH_CONNECT, _SWITCH_UPGRADE): - for deny_early in (True, False): - cs = ConnectionState() - cs.process_client_switch_proposal(switch_type) - cs.process_event(CLIENT, Request) - cs.process_event(CLIENT, Data) - assert cs.states == {CLIENT: SEND_BODY, SERVER: SEND_RESPONSE} - - assert switch_type in cs.pending_switch_proposals - - if deny_early: - # before client reaches DONE - cs.process_event(SERVER, Response) - assert not cs.pending_switch_proposals - - cs.process_event(CLIENT, EndOfMessage) - - if deny_early: - assert cs.states == {CLIENT: DONE, SERVER: SEND_BODY} - else: - assert cs.states == { - CLIENT: MIGHT_SWITCH_PROTOCOL, - SERVER: SEND_RESPONSE, - } - - cs.process_event(SERVER, InformationalResponse) - assert cs.states == { - CLIENT: MIGHT_SWITCH_PROTOCOL, - SERVER: SEND_RESPONSE, - } - - cs.process_event(SERVER, Response) - assert cs.states == {CLIENT: DONE, SERVER: SEND_BODY} - assert not cs.pending_switch_proposals - - -_response_type_for_switch = { - _SWITCH_UPGRADE: InformationalResponse, - _SWITCH_CONNECT: Response, - None: Response, -} - - -def test_ConnectionState_protocol_switch_accepted() -> None: - for switch_event in [_SWITCH_UPGRADE, _SWITCH_CONNECT]: - cs = ConnectionState() - cs.process_client_switch_proposal(switch_event) - cs.process_event(CLIENT, Request) - cs.process_event(CLIENT, Data) - assert cs.states == {CLIENT: SEND_BODY, SERVER: SEND_RESPONSE} - - cs.process_event(CLIENT, EndOfMessage) - assert cs.states == {CLIENT: MIGHT_SWITCH_PROTOCOL, SERVER: SEND_RESPONSE} - - cs.process_event(SERVER, InformationalResponse) - assert cs.states == {CLIENT: MIGHT_SWITCH_PROTOCOL, SERVER: SEND_RESPONSE} - - cs.process_event(SERVER, _response_type_for_switch[switch_event], switch_event) - assert cs.states == {CLIENT: SWITCHED_PROTOCOL, SERVER: SWITCHED_PROTOCOL} - - -def test_ConnectionState_double_protocol_switch() -> None: - # CONNECT + Upgrade is legal! Very silly, but legal. So we support - # it. Because sometimes doing the silly thing is easier than not. 
- for server_switch in [None, _SWITCH_UPGRADE, _SWITCH_CONNECT]: - cs = ConnectionState() - cs.process_client_switch_proposal(_SWITCH_UPGRADE) - cs.process_client_switch_proposal(_SWITCH_CONNECT) - cs.process_event(CLIENT, Request) - cs.process_event(CLIENT, EndOfMessage) - assert cs.states == {CLIENT: MIGHT_SWITCH_PROTOCOL, SERVER: SEND_RESPONSE} - cs.process_event( - SERVER, _response_type_for_switch[server_switch], server_switch - ) - if server_switch is None: - assert cs.states == {CLIENT: DONE, SERVER: SEND_BODY} - else: - assert cs.states == {CLIENT: SWITCHED_PROTOCOL, SERVER: SWITCHED_PROTOCOL} - - -def test_ConnectionState_inconsistent_protocol_switch() -> None: - for client_switches, server_switch in [ - ([], _SWITCH_CONNECT), - ([], _SWITCH_UPGRADE), - ([_SWITCH_UPGRADE], _SWITCH_CONNECT), - ([_SWITCH_CONNECT], _SWITCH_UPGRADE), - ]: - cs = ConnectionState() - for client_switch in client_switches: # type: ignore[attr-defined] - cs.process_client_switch_proposal(client_switch) - cs.process_event(CLIENT, Request) - with pytest.raises(LocalProtocolError): - cs.process_event(SERVER, Response, server_switch) - - -def test_ConnectionState_keepalive_protocol_switch_interaction() -> None: - # keep_alive=False + pending_switch_proposals - cs = ConnectionState() - cs.process_client_switch_proposal(_SWITCH_UPGRADE) - cs.process_event(CLIENT, Request) - cs.process_keep_alive_disabled() - cs.process_event(CLIENT, Data) - assert cs.states == {CLIENT: SEND_BODY, SERVER: SEND_RESPONSE} - - # the protocol switch "wins" - cs.process_event(CLIENT, EndOfMessage) - assert cs.states == {CLIENT: MIGHT_SWITCH_PROTOCOL, SERVER: SEND_RESPONSE} - - # but when the server denies the request, keep_alive comes back into play - cs.process_event(SERVER, Response) - assert cs.states == {CLIENT: MUST_CLOSE, SERVER: SEND_BODY} - - -def test_ConnectionState_reuse() -> None: - cs = ConnectionState() - - with pytest.raises(LocalProtocolError): - cs.start_next_cycle() - - cs.process_event(CLIENT, Request) - cs.process_event(CLIENT, EndOfMessage) - - with pytest.raises(LocalProtocolError): - cs.start_next_cycle() - - cs.process_event(SERVER, Response) - cs.process_event(SERVER, EndOfMessage) - - cs.start_next_cycle() - assert cs.states == {CLIENT: IDLE, SERVER: IDLE} - - # No keepalive - - cs.process_event(CLIENT, Request) - cs.process_keep_alive_disabled() - cs.process_event(CLIENT, EndOfMessage) - cs.process_event(SERVER, Response) - cs.process_event(SERVER, EndOfMessage) - - with pytest.raises(LocalProtocolError): - cs.start_next_cycle() - - # One side closed - - cs = ConnectionState() - cs.process_event(CLIENT, Request) - cs.process_event(CLIENT, EndOfMessage) - cs.process_event(CLIENT, ConnectionClosed) - cs.process_event(SERVER, Response) - cs.process_event(SERVER, EndOfMessage) - - with pytest.raises(LocalProtocolError): - cs.start_next_cycle() - - # Succesful protocol switch - - cs = ConnectionState() - cs.process_client_switch_proposal(_SWITCH_UPGRADE) - cs.process_event(CLIENT, Request) - cs.process_event(CLIENT, EndOfMessage) - cs.process_event(SERVER, InformationalResponse, _SWITCH_UPGRADE) - - with pytest.raises(LocalProtocolError): - cs.start_next_cycle() - - # Failed protocol switch - - cs = ConnectionState() - cs.process_client_switch_proposal(_SWITCH_UPGRADE) - cs.process_event(CLIENT, Request) - cs.process_event(CLIENT, EndOfMessage) - cs.process_event(SERVER, Response) - cs.process_event(SERVER, EndOfMessage) - - cs.start_next_cycle() - assert cs.states == {CLIENT: IDLE, SERVER: IDLE} - - -def 
test_server_request_is_illegal() -> None: - # There used to be a bug in how we handled the Request special case that - # made this allowed... - cs = ConnectionState() - with pytest.raises(LocalProtocolError): - cs.process_event(SERVER, Request) diff --git a/spaces/DataScienceGuild/WikipediaAIWithDataframeMemory/app.py b/spaces/DataScienceGuild/WikipediaAIWithDataframeMemory/app.py deleted file mode 100644 index 2555189dd8c4611ff5a2df6bf73c8ce7412df062..0000000000000000000000000000000000000000 --- a/spaces/DataScienceGuild/WikipediaAIWithDataframeMemory/app.py +++ /dev/null @@ -1,200 +0,0 @@ -import spacy -import wikipediaapi -import wikipedia -from wikipedia.exceptions import DisambiguationError -from transformers import TFAutoModel, AutoTokenizer -import numpy as np -import pandas as pd -import faiss -import gradio as gr - -try: - nlp = spacy.load("en_core_web_sm") -except: - spacy.cli.download("en_core_web_sm") - nlp = spacy.load("en_core_web_sm") - -wh_words = ['what', 'who', 'how', 'when', 'which'] -def get_concepts(text): - text = text.lower() - doc = nlp(text) - concepts = [] - for chunk in doc.noun_chunks: - if chunk.text not in wh_words: - concepts.append(chunk.text) - return concepts - -def get_passages(text, k=100): - doc = nlp(text) - passages = [] - passage_len = 0 - passage = "" - sents = list(doc.sents) - for i in range(len(sents)): - sen = sents[i] - passage_len+=len(sen) - if passage_len >= k: - passages.append(passage) - passage = sen.text - passage_len = len(sen) - continue - - elif i==(len(sents)-1): - passage+=" "+sen.text - passages.append(passage) - passage = "" - passage_len = 0 - continue - - passage+=" "+sen.text - return passages - -def get_dicts_for_dpr(concepts, n_results=20, k=100): - dicts = [] - for concept in concepts: - wikis = wikipedia.search(concept, results=n_results) - print(concept, "No of Wikis: ",len(wikis)) - for wiki in wikis: - try: - html_page = wikipedia.page(title = wiki, auto_suggest = False) - except DisambiguationError: - continue - - htmlResults=html_page.content - - passages = get_passages(htmlResults, k=k) - for passage in passages: - i_dicts = {} - i_dicts['text'] = passage - i_dicts['title'] = wiki - dicts.append(i_dicts) - return dicts - -passage_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2") -query_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-2_H-128_A-2") -p_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2") -q_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-2_H-128_A-2") - -def get_title_text_combined(passage_dicts): - res = [] - for p in passage_dicts: - res.append(tuple((p['title'], p['text']))) - return res - -def extracted_passage_embeddings(processed_passages, max_length=156): - passage_inputs = p_tokenizer.batch_encode_plus( - processed_passages, - add_special_tokens=True, - truncation=True, - padding="max_length", - max_length=max_length, - return_token_type_ids=True - ) - passage_embeddings = passage_encoder.predict([np.array(passage_inputs['input_ids']), - np.array(passage_inputs['attention_mask']), - np.array(passage_inputs['token_type_ids'])], - batch_size=64, - verbose=1) - return passage_embeddings - -def extracted_query_embeddings(queries, max_length=64): - query_inputs = q_tokenizer.batch_encode_plus( - queries, - add_special_tokens=True, - truncation=True, - padding="max_length", - max_length=max_length, - return_token_type_ids=True - ) - 
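-    # Run the DPR question encoder over the tokenized queries to obtain dense query embeddings.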
query_embeddings = query_encoder.predict([np.array(query_inputs['input_ids']), - np.array(query_inputs['attention_mask']), - np.array(query_inputs['token_type_ids'])], - batch_size=1, - verbose=1) - return query_embeddings - -#Wikipedia API: - -def get_pagetext(page): - s=str(page).replace("/t","") - - return s - -def get_wiki_summary(search): - wiki_wiki = wikipediaapi.Wikipedia('en') - page = wiki_wiki.page(search) - - isExist = page.exists() - if not isExist: - return isExist, "Not found", "Not found", "Not found", "Not found" - - pageurl = page.fullurl - pagetitle = page.title - pagesummary = page.summary[0:60] - pagetext = get_pagetext(page.text) - - backlinks = page.backlinks - linklist = "" - for link in backlinks.items(): - pui = link[0] - linklist += pui + " , " - a=1 - - categories = page.categories - categorylist = "" - for category in categories.items(): - pui = category[0] - categorylist += pui + " , " - a=1 - - links = page.links - linklist2 = "" - for link in links.items(): - pui = link[0] - linklist2 += pui + " , " - a=1 - - sections = page.sections - - ex_dic = { - 'Entity' : ["URL","Title","Summary", "Text", "Backlinks", "Links", "Categories"], - 'Value': [pageurl, pagetitle, pagesummary, pagetext, linklist,linklist2, categorylist ] - } - - df = pd.DataFrame(ex_dic) - - return df - -def search(question): - concepts = get_concepts(question) - print("concepts: ",concepts) - dicts = get_dicts_for_dpr(concepts, n_results=1) - lendicts = len(dicts) - print("dicts len: ", lendicts) - if lendicts == 0: - return pd.DataFrame() - processed_passages = get_title_text_combined(dicts) - passage_embeddings = extracted_passage_embeddings(processed_passages) - query_embeddings = extracted_query_embeddings([question]) - faiss_index = faiss.IndexFlatL2(128) - faiss_index.add(passage_embeddings.pooler_output) - prob, index = faiss_index.search(query_embeddings.pooler_output, k=lendicts) - return pd.DataFrame([dicts[i] for i in index[0]]) - -# AI UI SOTA - Gradio blocks with UI formatting, and event driven UI -with gr.Blocks() as demo: # Block documentation on event listeners, start here: https://gradio.app/blocks_and_event_listeners/ - gr.Markdown("

      🍰 Ultimate Wikipedia AI 🎨

      ") - gr.Markdown("""
      Search and Find Anything Then Use in AI! MediaWiki - API for Wikipedia. Papers,Code,Datasets for SOTA w/ Wikipedia""") - with gr.Row(): # inputs and buttons - inp = gr.Textbox(lines=1, default="Syd Mead", label="Question") - with gr.Row(): # inputs and buttons - b3 = gr.Button("Search AI Summaries") - b4 = gr.Button("Search Web Live") - with gr.Row(): # outputs DF1 - out = gr.Dataframe(label="Answers", type="pandas") - with gr.Row(): # output DF2 - out_DF = gr.Dataframe(wrap=True, max_rows=1000, overflow_row_behaviour= "paginate", datatype = ["markdown", "markdown"], headers=['Entity', 'Value']) - inp.submit(fn=get_wiki_summary, inputs=inp, outputs=out_DF) - b3.click(fn=search, inputs=inp, outputs=out) - b4.click(fn=get_wiki_summary, inputs=inp, outputs=out_DF ) -demo.launch(debug=True, show_error=True) \ No newline at end of file diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/fused_act.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/fused_act.py deleted file mode 100644 index 973a84fffde53668d31397da5fb993bbc95f7be0..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/fused_act.py +++ /dev/null @@ -1,85 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.utils.cpp_extension import load - -module_path = os.path.dirname(__file__) -fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/DenniSciFi/IconAutomation/README.md b/spaces/DenniSciFi/IconAutomation/README.md deleted file mode 100644 index 
5ab20a7419931b66367a25154c1072a86442de7e..0000000000000000000000000000000000000000 --- a/spaces/DenniSciFi/IconAutomation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: WebStraw -emoji: 🏢 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Designstanic/meta-llama-Llama-2-7b-chat-hf/README.md b/spaces/Designstanic/meta-llama-Llama-2-7b-chat-hf/README.md deleted file mode 100644 index f52616b363c7ddf3e9c3a6941ff96e64a6ba0689..0000000000000000000000000000000000000000 --- a/spaces/Designstanic/meta-llama-Llama-2-7b-chat-hf/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Meta Llama Llama 2 7b Chat Hf -emoji: 🐨 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: llama2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Endre/SemanticSearch-HU/src/exploration/datetime_test.py b/spaces/Endre/SemanticSearch-HU/src/exploration/datetime_test.py deleted file mode 100644 index 987f52121ccb51674a3250dd94357066ab64de04..0000000000000000000000000000000000000000 --- a/spaces/Endre/SemanticSearch-HU/src/exploration/datetime_test.py +++ /dev/null @@ -1,10 +0,0 @@ -from datetime import datetime - -dt = datetime.now() -print(dt) -print(dt.strftime('%a %d-%m-%Y')) -print(dt.strftime('%a %d/%m/%Y')) -print(dt.strftime('%a %d/%m/%y')) -print(dt.strftime('%A %d-%m-%Y, %H:%M:%S')) -print(dt.strftime('%X %x')) -print(dt.strftime('%Y-%m-%d_%H:%M:%S')) diff --git a/spaces/EsoCode/text-generation-webui/convert-to-flexgen.py b/spaces/EsoCode/text-generation-webui/convert-to-flexgen.py deleted file mode 100644 index 7654593b539541deebfe904403ce73daa4a8651c..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/convert-to-flexgen.py +++ /dev/null @@ -1,63 +0,0 @@ -''' - -Converts a transformers model to a format compatible with flexgen. - -''' - -import argparse -import os -from pathlib import Path - -import numpy as np -import torch -from tqdm import tqdm -from transformers import AutoModelForCausalLM, AutoTokenizer - -parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=54)) -parser.add_argument('MODEL', type=str, default=None, nargs='?', help="Path to the input model.") -args = parser.parse_args() - - -def disable_torch_init(): - """ - Disable the redundant torch default initialization to accelerate model creation. 
- """ - import torch - global torch_linear_init_backup - global torch_layer_norm_init_backup - - torch_linear_init_backup = torch.nn.Linear.reset_parameters - setattr(torch.nn.Linear, "reset_parameters", lambda self: None) - - torch_layer_norm_init_backup = torch.nn.LayerNorm.reset_parameters - setattr(torch.nn.LayerNorm, "reset_parameters", lambda self: None) - - -def restore_torch_init(): - """Rollback the change made by disable_torch_init.""" - import torch - setattr(torch.nn.Linear, "reset_parameters", torch_linear_init_backup) - setattr(torch.nn.LayerNorm, "reset_parameters", torch_layer_norm_init_backup) - - -if __name__ == '__main__': - path = Path(args.MODEL) - model_name = path.name - - print(f"Loading {model_name}...") - # disable_torch_init() - model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16, low_cpu_mem_usage=True) - # restore_torch_init() - - tokenizer = AutoTokenizer.from_pretrained(path) - - out_folder = Path(f"models/{model_name}-np") - if not Path(out_folder).exists(): - os.mkdir(out_folder) - - print(f"Saving the converted model to {out_folder}...") - for name, param in tqdm(list(model.model.named_parameters())): - name = name.replace("decoder.final_layer_norm", "decoder.layer_norm") - param_path = os.path.join(out_folder, name) - with open(param_path, "wb") as f: - np.save(f, param.cpu().detach().numpy()) diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r18_fpnc_100k_iters_synthtext.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r18_fpnc_100k_iters_synthtext.py deleted file mode 100644 index 78a2bbbf87405a052690546681db127bd93ff738..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r18_fpnc_100k_iters_synthtext.py +++ /dev/null @@ -1,59 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_sgd_100k_iters.py', - '../../_base_/det_models/dbnet_r18_fpnc.py', - '../../_base_/det_datasets/synthtext.py', - '../../_base_/det_pipelines/dbnet_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline_r18 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict(type='ColorJitter', brightness=32.0 / 255, saturation=0.5), - dict(type='Normalize', **img_norm_cfg), - dict( - type='ImgAug', - args=[['Fliplr', 0.5], - dict(cls='Affine', rotate=[-10, 10]), ['Resize', [0.5, 3.0]]], - clip_invalid_ploys=False), - dict(type='EastRandomCrop', target_size=(640, 640)), - dict(type='DBNetTargets', shrink_ratio=0.4), - dict(type='Pad', size_divisor=32), - dict( - type='CustomFormatBundle', - keys=['gt_shrink', 'gt_shrink_mask', 'gt_thr', 'gt_thr_mask'], - visualize=dict(flag=False, boundary_key='gt_shrink')), - dict( - type='Collect', - keys=['img', 'gt_shrink', 'gt_shrink_mask', 'gt_thr', 'gt_thr_mask']) -] -test_pipeline_1333_736 = {{_base_.test_pipeline_1333_736}} - -data = dict( - samples_per_gpu=16, - workers_per_gpu=8, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline_r18), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_1333_736), - test=dict( - type='UniformConcatDataset', - 
datasets=test_list, - pipeline=test_pipeline_1333_736)) - -evaluation = dict(interval=999999, metric='hmean-iou') # do not evaluate diff --git a/spaces/EuroSciPy2022/classification/app.py b/spaces/EuroSciPy2022/classification/app.py deleted file mode 100644 index 84d7d38dbcdd0ec2b8eb17fa195c8f9c6fc26c43..0000000000000000000000000000000000000000 --- a/spaces/EuroSciPy2022/classification/app.py +++ /dev/null @@ -1,171 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from matplotlib.colors import ListedColormap -from sklearn.model_selection import train_test_split -from sklearn.preprocessing import StandardScaler -from sklearn.datasets import make_moons, make_circles, make_classification -from sklearn.neural_network import MLPClassifier -from sklearn.neighbors import KNeighborsClassifier -from sklearn.svm import SVC -from sklearn.gaussian_process import GaussianProcessClassifier -from sklearn.gaussian_process.kernels import RBF -from sklearn.tree import DecisionTreeClassifier -from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier -from sklearn.naive_bayes import GaussianNB -from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis -from sklearn.inspection import DecisionBoundaryDisplay -from sklearn.datasets import make_blobs, make_circles, make_moons -import gradio as gr -import math -from functools import partial - - - -### DATASETS - -def normalize(X): - return StandardScaler().fit_transform(X) - - -def linearly_separable(): - X, y = make_classification( - n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1 - ) - rng = np.random.RandomState(2) - X += 2 * rng.uniform(size=X.shape) - linearly_separable = (X, y) - return linearly_separable - -DATA_MAPPING = { - "Moons": make_moons(noise=0.3, random_state=0), - "Circles":make_circles(noise=0.2, factor=0.5, random_state=1), - "Linearly Separable Random Dataset": linearly_separable(), -} - - -#### MODELS - -def get_groundtruth_model(X, labels): - # dummy model to show true label distribution - class Dummy: - def __init__(self, y): - self.labels_ = labels - - return Dummy(labels) - -DATASETS = [ - make_moons(noise=0.3, random_state=0), - make_circles(noise=0.2, factor=0.5, random_state=1), - linearly_separable() -] -NAME_CLF_MAPPING = { - "Ground Truth":get_groundtruth_model, - "Nearest Neighbors":KNeighborsClassifier(3), - "Linear SVM":SVC(kernel="linear", C=0.025), - "RBF SVM":SVC(gamma=2, C=1), - "Gaussian Process":GaussianProcessClassifier(1.0 * RBF(1.0)), - "Decision Tree":DecisionTreeClassifier(max_depth=5), - "Random Forest":RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1), - "Neural Net":MLPClassifier(alpha=1, max_iter=1000), - "AdaBoost":AdaBoostClassifier(), - "Naive Bayes":GaussianNB(), -} - - - -#### PLOT -FIGSIZE = 7,7 -figure = plt.figure(figsize=(25, 10)) -i = 1 - - - - -def train_models(selected_data, clf_name): - cm = plt.cm.RdBu - cm_bright = ListedColormap(["#FF0000", "#0000FF"]) - clf = NAME_CLF_MAPPING[clf_name] - - X, y = DATA_MAPPING[selected_data] - X = StandardScaler().fit_transform(X) - X_train, X_test, y_train, y_test = train_test_split( - X, y, test_size=0.4, random_state=42 - ) - - x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5 - y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5 - if clf_name != "Ground Truth": - clf.fit(X_train, y_train) - score = clf.score(X_test, y_test) - fig, ax = plt.subplots(figsize=FIGSIZE) - ax.set_title(clf_name, fontsize = 10) - - DecisionBoundaryDisplay.from_estimator( - clf, X, 
cmap=cm, alpha=0.8, ax=ax, eps=0.5 - ).plot() - return fig - else: - ######### - - for ds_cnt, ds in enumerate(DATASETS): - X, y = ds - - x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5 - y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5 - - # just plot the dataset first - cm = plt.cm.RdBu - cm_bright = ListedColormap(["#FF0000", "#0000FF"]) - fig, ax = plt.subplots(figsize=FIGSIZE) - ax.set_title("Input data") - # Plot the training points - - ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k") - # Plot the testing points - ax.scatter( - X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors="k" - ) - ax.set_xlim(x_min, x_max) - ax.set_ylim(y_min, y_max) - ax.set_xticks(()) - ax.set_yticks(()) - - return fig - - - - ########### -description = "Learn how different statistical classifiers perform in different datasets." - -def iter_grid(n_rows, n_cols): - # create a grid using gradio Block - for _ in range(n_rows): - with gr.Row(): - for _ in range(n_cols): - with gr.Column(): - yield - -title = "Compare Classifiers!" -with gr.Blocks(title=title) as demo: - gr.Markdown(f"## {title}") - gr.Markdown(description) - - input_models = list(NAME_CLF_MAPPING) - input_data = gr.Radio( - choices=["Moons", "Circles", "Linearly Separable Random Dataset"], - value="Moons" - ) - counter = 0 - - - for _ in iter_grid(2, 5): - if counter >= len(input_models): - break - - input_model = input_models[counter] - plot = gr.Plot(label=input_model) - fn = partial(train_models, clf_name=input_model) - input_data.change(fn=fn, inputs=[input_data], outputs=plot) - counter += 1 - -demo.launch(debug=True) diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/GenerationFeature.py b/spaces/FYP-23-S1-21/Refineverse_Plugin/GenerationFeature.py deleted file mode 100644 index 4d5d3dac3cc306e1d7def9bbee9e859713ced7a0..0000000000000000000000000000000000000000 --- a/spaces/FYP-23-S1-21/Refineverse_Plugin/GenerationFeature.py +++ /dev/null @@ -1,49 +0,0 @@ -import re # Python's built-in library for regular expressions (or Regex) -import sqlite3 -from flask import g -from transformers import pipeline, set_seed - -# Main function of the generation feature. Performs text generation! -def generate(Entered_story): - - # Check if the input is empty - if not Entered_story.strip(): - raise ValueError("Empty input!") - - # Validate that the input is in the correct format - if not validate_story(Entered_story): - raise ValueError("Incorrect format!") - - # Set the pipeline to use the correct NLP type and model - generator = pipeline('text-generation', model='gpt2') - - # Take note: The max_length & min_length variables refer to the OUTPUT length! - set_seed(42) - generated_text = generator(Entered_story, max_length=30, num_return_sequences=5) - - generated_text = generated_text[0]['generated_text'] - - return generated_text - -# User Input Format Validation Function -def validate_story(Entered_story): - pattern = r'As a (?P[^,.]+), I want to (?P[^,.]+)(?:,|.)+\s*so that' #Follows the normal structure, but allows anything after 'so that' - match = re.search(pattern, Entered_story, flags=re.DOTALL) - return bool(match) - -# Function to grab all contents in the "TextGeneration" table (except for unique ids) -def getTextGenContents(): - db = getattr(g, '_database', None) # Gets the _database attribute from the 'g' object. 
If it does not exist, returns 'None' - if db is None: - db = g._database = sqlite3.connect('Refineverse.db') # If db is None, create a new connection for db and g._database. - cursor = db.cursor() # Creates a cursor object to handle data - cursor.execute("SELECT userStory, generatedStory FROM TextGeneration") # The cursor executes the query - rows = cursor.fetchall() # Stores the results of fetchall() into a variable - return rows - -# Function to insert a new row into the "TextGeneration" table -def insertTextGenRow( Entered_story, generatedStory): - with sqlite3.connect('Refineverse.db') as conn: # 'With' will automatically take care of closing and opening the connection - cursor = conn.cursor() - cursor.execute("INSERT INTO TextGeneration (userStory, generatedStory) VALUES (?, ?)", (Entered_story, generatedStory)) - conn.commit() diff --git a/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/src/index.css b/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/src/index.css deleted file mode 100644 index a229236fe6de3f72825cb55482f29b2a474e6e77..0000000000000000000000000000000000000000 --- a/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/src/index.css +++ /dev/null @@ -1,127 +0,0 @@ - .chatui { - display: flex; - flex-flow: column wrap; - justify-content: space-between; - width: 100%; - max-width: 867px; - margin: 25px 10px; - height: 600px; - border: 2px solid #ddd; - border-radius: 5px; - box-shadow: 0 15px 15px -5px rgba(0, 0, 0, 0.2); - } - - s .chatui-header { - display: flex; - justify-content: space-between; - padding: 10px; - border-bottom: 2px solid #ddd; - background: #eee; - color: #666; - } - - .chatui-chat { - flex: 1; - overflow-y: auto; - padding: 10px; - } - - .chatui-chat::-webkit-scrollbar { - width: 6px; - } - - .chatui-chat::-webkit-scrollbar-track { - background: #ddd; - } - - .chatui-chat::-webkit-scrollbar-thumb { - background: #bdbdbd; - } - - .msg { - display: flex; - align-items: flex-end; - margin-bottom: 10px; - } - - .msg:last-of-type { - margin: 0; - } - - .msg-bubble { - max-width: 450px; - padding: 15px; - border-radius: 15px; - background: #ececec; - } - - .left-msg .msg-bubble { - border-bottom-left-radius: 0; - } - - .error-msg .msg-bubble { - border-bottom-left-radius: 0; - color: #f15959; -} - -.init-msg .msg-bubble { - border-bottom-left-radius: 0; -} - - .right-msg { - flex-direction: row-reverse; - } - - .right-msg .msg-bubble { - background: #579ffb; - color: #fff; - border-bottom-right-radius: 0; - } - - .chatui-inputarea { - display: flex; - padding: 10px; - border-top: 2px solid #ddd; - background: #eee; - } - - .chatui-inputarea * { - padding: 10px; - border: none; - border-radius: 3px; - font-size: 1em; - } - - .chatui-input { - flex: 1; - background: #ddd; - } - - .chatui-reset-btn { - margin-left: 10px; - background: #ececec; - font-weight: bold; - border-radius: 8px; - width: 200px; - cursor: pointer; - } - - .chatui-reset-btn:hover { - background: #dcdada; -} - - .chatui-send-btn { - margin-left: 10px; - background: #579ffb; - color: #fff; - font-weight: bold; - cursor: pointer; -} - -.chatui-send-btn:hover { - background: #577bfb; -} - - .chatui-chat { - background-color: #fcfcfe; - } \ No newline at end of file diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/GetGpt.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/GetGpt.py deleted file mode 100644 index 56a121f6ee5f430da7beda3b65abdea64a87c36b..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/GetGpt.py +++ /dev/null @@ 
-1,57 +0,0 @@ -import os -import json -import uuid -import requests -from Crypto.Cipher import AES -from ...typing import sha256, Dict, get_type_hints - -url = 'https://chat.getgpt.world/' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - def encrypt(e): - t = os.urandom(8).hex().encode('utf-8') - n = os.urandom(8).hex().encode('utf-8') - r = e.encode('utf-8') - cipher = AES.new(t, AES.MODE_CBC, n) - ciphertext = cipher.encrypt(pad_data(r)) - return ciphertext.hex() + t.decode('utf-8') + n.decode('utf-8') - - def pad_data(data: bytes) -> bytes: - block_size = AES.block_size - padding_size = block_size - len(data) % block_size - padding = bytes([padding_size] * padding_size) - return data + padding - - headers = { - 'Content-Type': 'application/json', - 'Referer': 'https://chat.getgpt.world/', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36' - } - - data = json.dumps({ - 'messages': messages, - 'frequency_penalty': kwargs.get('frequency_penalty', 0), - 'max_tokens': kwargs.get('max_tokens', 4000), - 'model': 'gpt-3.5-turbo', - 'presence_penalty': kwargs.get('presence_penalty', 0), - 'temperature': kwargs.get('temperature', 1), - 'top_p': kwargs.get('top_p', 1), - 'stream': True, - 'uuid': str(uuid.uuid4()) - }) - - res = requests.post('https://chat.getgpt.world/api/chat/stream', - headers=headers, json={'signature': encrypt(data)}, stream=True) - - for line in res.iter_lines(): - if b'content' in line: - line_json = json.loads(line.decode('utf-8').split('data: ')[1]) - yield (line_json['choices'][0]['delta']['content']) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f'{name}: {get_type_hints(_create_completion)[name].__name__}' for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/FreeGPT/FreeGPT/README.md b/spaces/FreeGPT/FreeGPT/README.md deleted file mode 100644 index d97945c3658fdc53542904b95125df5883cc0491..0000000000000000000000000000000000000000 --- a/spaces/FreeGPT/FreeGPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: FreeGPT -emoji: 🚀 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/attention.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/attention.py deleted file mode 100644 index 0eed59630d76a56e3fd96aa5bb6518b0c61e81bb..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/attention.py +++ /dev/null @@ -1,128 +0,0 @@ -import torch -import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - -# pylint: disable=protected-access, missing-function-docstring, line-too-long - -original_torch_bmm = torch.bmm -def torch_bmm(input, mat2, *, out=None): - if input.dtype != mat2.dtype: - mat2 = mat2.to(input.dtype) - - #ARC GPUs can't allocate more than 4GB to a single block, Slice it: - batch_size_attention, input_tokens, mat2_shape = input.shape[0], input.shape[1], mat2.shape[2] - block_multiply = 2.4 if input.dtype == torch.float32 else 1.2 - block_size = (batch_size_attention * input_tokens * mat2_shape) / 1024 * block_multiply #MB - split_slice_size = batch_size_attention - if 
block_size >= 4000: - do_split = True - #Find something divisible with the input_tokens - while ((split_slice_size * input_tokens * mat2_shape) / 1024 * block_multiply) > 4000: - split_slice_size = split_slice_size // 2 - if split_slice_size <= 1: - split_slice_size = 1 - break - else: - do_split = False - - split_block_size = (split_slice_size * input_tokens * mat2_shape) / 1024 * block_multiply #MB - split_2_slice_size = input_tokens - if split_block_size >= 4000: - do_split_2 = True - #Find something divisible with the input_tokens - while ((split_slice_size * split_2_slice_size * mat2_shape) / 1024 * block_multiply) > 4000: - split_2_slice_size = split_2_slice_size // 2 - if split_2_slice_size <= 1: - split_2_slice_size = 1 - break - else: - do_split_2 = False - - if do_split: - hidden_states = torch.zeros(input.shape[0], input.shape[1], mat2.shape[2], device=input.device, dtype=input.dtype) - for i in range(batch_size_attention // split_slice_size): - start_idx = i * split_slice_size - end_idx = (i + 1) * split_slice_size - if do_split_2: - for i2 in range(input_tokens // split_2_slice_size): # pylint: disable=invalid-name - start_idx_2 = i2 * split_2_slice_size - end_idx_2 = (i2 + 1) * split_2_slice_size - hidden_states[start_idx:end_idx, start_idx_2:end_idx_2] = original_torch_bmm( - input[start_idx:end_idx, start_idx_2:end_idx_2], - mat2[start_idx:end_idx, start_idx_2:end_idx_2], - out=out - ) - else: - hidden_states[start_idx:end_idx] = original_torch_bmm( - input[start_idx:end_idx], - mat2[start_idx:end_idx], - out=out - ) - else: - return original_torch_bmm(input, mat2, out=out) - return hidden_states - -original_scaled_dot_product_attention = torch.nn.functional.scaled_dot_product_attention -def scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False): - #ARC GPUs can't allocate more than 4GB to a single block, Slice it: - shape_one, batch_size_attention, query_tokens, shape_four = query.shape - block_multiply = 2.4 if query.dtype == torch.float32 else 1.2 - block_size = (shape_one * batch_size_attention * query_tokens * shape_four) / 1024 * block_multiply #MB - split_slice_size = batch_size_attention - if block_size >= 4000: - do_split = True - #Find something divisible with the shape_one - while ((shape_one * split_slice_size * query_tokens * shape_four) / 1024 * block_multiply) > 4000: - split_slice_size = split_slice_size // 2 - if split_slice_size <= 1: - split_slice_size = 1 - break - else: - do_split = False - - split_block_size = (shape_one * split_slice_size * query_tokens * shape_four) / 1024 * block_multiply #MB - split_2_slice_size = query_tokens - if split_block_size >= 4000: - do_split_2 = True - #Find something divisible with the batch_size_attention - while ((shape_one * split_slice_size * split_2_slice_size * shape_four) / 1024 * block_multiply) > 4000: - split_2_slice_size = split_2_slice_size // 2 - if split_2_slice_size <= 1: - split_2_slice_size = 1 - break - else: - do_split_2 = False - - if do_split: - hidden_states = torch.zeros(query.shape, device=query.device, dtype=query.dtype) - for i in range(batch_size_attention // split_slice_size): - start_idx = i * split_slice_size - end_idx = (i + 1) * split_slice_size - if do_split_2: - for i2 in range(query_tokens // split_2_slice_size): # pylint: disable=invalid-name - start_idx_2 = i2 * split_2_slice_size - end_idx_2 = (i2 + 1) * split_2_slice_size - hidden_states[:, start_idx:end_idx, start_idx_2:end_idx_2] = original_scaled_dot_product_attention( - query[:, 
start_idx:end_idx, start_idx_2:end_idx_2], - key[:, start_idx:end_idx, start_idx_2:end_idx_2], - value[:, start_idx:end_idx, start_idx_2:end_idx_2], - attn_mask=attn_mask[:, start_idx:end_idx, start_idx_2:end_idx_2] if attn_mask is not None else attn_mask, - dropout_p=dropout_p, is_causal=is_causal - ) - else: - hidden_states[:, start_idx:end_idx] = original_scaled_dot_product_attention( - query[:, start_idx:end_idx], - key[:, start_idx:end_idx], - value[:, start_idx:end_idx], - attn_mask=attn_mask[:, start_idx:end_idx] if attn_mask is not None else attn_mask, - dropout_p=dropout_p, is_causal=is_causal - ) - else: - return original_scaled_dot_product_attention( - query, key, value, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal - ) - return hidden_states - -def attention_init(): - #ARC GPUs can't allocate more than 4GB to a single block: - torch.bmm = torch_bmm - torch.nn.functional.scaled_dot_product_attention = scaled_dot_product_attention \ No newline at end of file diff --git a/spaces/GT4SD/patent_generative_transformers/model_cards/description.md b/spaces/GT4SD/patent_generative_transformers/model_cards/description.md deleted file mode 100644 index 5bc92193bd573e6e4479bcbbf61364d77721f232..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/patent_generative_transformers/model_cards/description.md +++ /dev/null @@ -1,6 +0,0 @@ -logo - -[Patent Generative Transformers (PGT)](https://openreview.net/forum?id=dLHtwZKvJmE): A prompt based generative transformer for the patent domain (Christofidellis et al., 2022; *ICLR Workshop KRLM*). - -For **examples** and **documentation** of the model parameters, please see below. -Moreover, we provide a **model card** ([Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs)) at the bottom of this page. 
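The model card above describes PGT only in prose. As a rough, hypothetical sketch (not taken from this Space's code), prompt-based generation through the Hugging Face `transformers` pipeline, which other Spaces in this diff already use, could look like the following; the checkpoint name and prompt are placeholder assumptions, with `gpt2` standing in for an actual patent-domain checkpoint.

```python
# Hypothetical sketch of prompt-based generation with a causal language model.
# "gpt2" is a stand-in; a real patent-domain checkpoint would be substituted here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Title: Method for purifying water. Abstract:"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```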
diff --git a/spaces/GXSA/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/GXSA/bingo/src/lib/hooks/use-at-bottom.tsx deleted file mode 100644 index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/lib/hooks/use-at-bottom.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import * as React from 'react' - -export function useAtBottom(offset = 0) { - const [isAtBottom, setIsAtBottom] = React.useState(false) - - React.useEffect(() => { - const handleScroll = () => { - setIsAtBottom( - window.innerHeight + window.scrollY >= - document.body.offsetHeight - offset - ) - } - - window.addEventListener('scroll', handleScroll, { passive: true }) - handleScroll() - - return () => { - window.removeEventListener('scroll', handleScroll) - } - }, [offset]) - - return isAtBottom -} diff --git a/spaces/GeekTony/Gradio-Ontology/README.md b/spaces/GeekTony/Gradio-Ontology/README.md deleted file mode 100644 index 94e03967eb5c519dcf7b35b36f010a051a3ac3f5..0000000000000000000000000000000000000000 --- a/spaces/GeekTony/Gradio-Ontology/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gradio Ontology -emoji: 👀 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GoAPI/Midjourney-zoom-video-generator-GoAPI/api_module.py b/spaces/GoAPI/Midjourney-zoom-video-generator-GoAPI/api_module.py deleted file mode 100644 index c23fee0a233b67be11f4cc5ec8a322ae0970e5e6..0000000000000000000000000000000000000000 --- a/spaces/GoAPI/Midjourney-zoom-video-generator-GoAPI/api_module.py +++ /dev/null @@ -1,126 +0,0 @@ -import requests -import time -import os -import uuid -import random - - -def make_imagine_request(api_key, prompt): - endpoint = "https://api.midjourneyapi.xyz/mj/v2/imagine" - headers = { - 'X-API-KEY': api_key, - 'Content-Type': 'application/json', - } - payload = { - "prompt": prompt.strip(), - "process_mode": "fast" - } - response = requests.post(endpoint, headers=headers, json=payload).json() - # check if api key is valid - if "message" in response and response["message"] == "Invalid API key": - print('Invalid API key') - return None - # check if api key has enough credits - if "message" in response and response["message"] == "Insufficient token": - print('Insufficient token') - return None - - task_id = response.get("task_id", None) - if task_id is not None: - print(f'Task id for prompt \'{prompt}\': {task_id}') - else: - print('Failed to get task id') - - return task_id - -def make_upscale_request(api_key, original_task_id): - endpoint = "https://api.midjourneyapi.xyz/mj/v2/upscale" - headers = { - 'X-API-KEY': api_key, - 'Content-Type': 'application/json', - } - payload = { - "origin_task_id": original_task_id, - "index": str(random.randint(1,4)) - } - response = requests.post(endpoint, headers=headers, json=payload).json() - task_id = response.get("task_id", None) - if task_id is not None: - print(f'Task id for prompt: {task_id}') - else: - print('Failed to get task id') - - return task_id - -def make_outpaint_request(api_key, prompt, original_task_id): - endpoint = "https://api.midjourneyapi.xyz/mj/v2/outpaint" - headers = { - 'X-API-KEY': api_key, - 'Content-Type': 'application/json', - } - payload = { - "origin_task_id": original_task_id, - "zoom_ratio": "2", - "prompt": prompt.strip(), - } - response = requests.post(endpoint, headers=headers, json=payload).json() - task_id = 
response.get("task_id", None) - if task_id is not None: - print(f'Task id for prompt \'{prompt}\': {task_id}') - else: - print('Failed to get task id') - - return task_id - -def fetch_request(api_key, task_id, max_retries=15): - endpoint = "https://api.midjourneyapi.xyz/mj/v2/fetch" - headers = { - 'X-API-KEY': api_key - } - payload = { - "task_id": task_id - } - - retries = 0 - while True: - response = requests.post(endpoint, headers=headers, json=payload) - if response.status_code == 200: - response_json = response.json() - status = response_json.get('status', '') - if status == 'finished': - discord_url = response_json['task_result'].get('discord_image_url', '') - return discord_url, status - elif status == 'failed': - print('Task failed. Stopping retrieval attempts.') - return None, status - else: - print('Result not ready yet, retrying in 30 seconds...') - else: - print('Failed to fetch result, retrying in 30 seconds...') - retries += 1 - if retries > max_retries: - raise Exception('Failed to fetch result after maximum retries.') - time.sleep(30) - - -def download_image(image_url, path=None): - if not path: - unique_id = uuid.uuid4() # generation of unique ID - path = str(unique_id) # Using the unique ID as the directory path - - os.makedirs(path, exist_ok=True) - - # find a unique file name - for i in range(1, 1000): - file_name = f"{i:03}.png" - file_path = os.path.join(path, file_name) - if not os.path.exists(file_path): - break - - # download and write file - response = requests.get(image_url) - with open(file_path, 'wb') as file: - file.write(response.content) - - print(f"Image downloaded and saved as {file_path}") - return path \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py deleted file mode 100644 index 769472352d06a8f2c30d73ae1f57c393f77adfa2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py +++ /dev/null @@ -1,62 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -model = dict( - bbox_head=dict( - _delete_=True, - type='GARetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loc_filter_thr=0.01, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=0.04, loss_weight=1.0)), - # training and testing settings - train_cfg=dict( - ga_assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - ga_sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - 
add_gt_as_proposals=False), - assigner=dict(neg_iou_thr=0.5, min_pos_iou=0.0), - center_ratio=0.2, - ignore_ratio=0.5)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py deleted file mode 100644 index d53c5dc6a1470e4cca209a26c8261dd66c60e9b1..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py +++ /dev/null @@ -1,31 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/lvis_v0.5_instance.py', - '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=dict(num_classes=1230), mask_head=dict(num_classes=1230)), - test_cfg=dict( - rcnn=dict( - score_thr=0.0001, - # LVIS allows up to 300 - max_per_img=300))) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(dataset=dict(pipeline=train_pipeline))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py deleted file mode 100644 index a6a668c4e33611e2b69009741558d83558cc9b4f..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py +++ /dev/null @@ -1,53 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_caffe_c4.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - type='TridentFasterRCNN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='TridentResNet', - trident_dilations=(1, 2, 3), - num_branch=3, - test_branch_idx=1), - roi_head=dict(type='TridentRoIHead', num_branch=3, test_branch_idx=1), - train_cfg=dict( - rpn_proposal=dict(max_per_img=500), - rcnn=dict( - sampler=dict(num=128, pos_fraction=0.5, - add_gt_as_proposals=False)))) - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - 
dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_769x769_40k_cityscapes.py deleted file mode 100644 index d311e33f56ba431a882b0e7079001b0e9932a011..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/encnet_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/voc_aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/voc_aug.py deleted file mode 100644 index 942746351b64b2e931cb18ce684a1f3ccf7e3866..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/voc_aug.py +++ /dev/null @@ -1,91 +0,0 @@ -import argparse -import os.path as osp -from functools import partial - -import mmcv -import numpy as np -from PIL import Image -from scipy.io import loadmat - -AUG_LEN = 10582 - - -def convert_mat(mat_file, in_dir, out_dir): - data = loadmat(osp.join(in_dir, mat_file)) - mask = data['GTcls'][0]['Segmentation'][0].astype(np.uint8) - seg_filename = osp.join(out_dir, mat_file.replace('.mat', '.png')) - Image.fromarray(mask).save(seg_filename, 'PNG') - - -def generate_aug_list(merged_list, excluded_list): - return list(set(merged_list) - set(excluded_list)) - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert PASCAL VOC annotations to mmsegmentation format') - parser.add_argument('devkit_path', help='pascal voc devkit path') - parser.add_argument('aug_path', help='pascal voc aug path') - parser.add_argument('-o', '--out_dir', help='output path') - parser.add_argument( - '--nproc', default=1, type=int, help='number of process') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - devkit_path = args.devkit_path - aug_path = args.aug_path - nproc = args.nproc - if args.out_dir is None: - out_dir = osp.join(devkit_path, 'VOC2012', 'SegmentationClassAug') - else: - out_dir = args.out_dir - mmcv.mkdir_or_exist(out_dir) - in_dir = osp.join(aug_path, 'dataset', 'cls') - - mmcv.track_parallel_progress( - partial(convert_mat, in_dir=in_dir, out_dir=out_dir), - list(mmcv.scandir(in_dir, suffix='.mat')), - nproc=nproc) - - full_aug_list = [] - with open(osp.join(aug_path, 'dataset', 'train.txt')) as f: - full_aug_list += [line.strip() for line in f] - with open(osp.join(aug_path, 'dataset', 'val.txt')) as f: - full_aug_list += [line.strip() for line in f] - - with open( - osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', - 'train.txt')) as f: - ori_train_list = [line.strip() for line in f] - with open( - osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', - 'val.txt')) as f: - val_list = [line.strip() for line in f] - - 
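-    # Merge the original train list with the aug lists, then drop anything that appears in the val split.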
aug_train_list = generate_aug_list(ori_train_list + full_aug_list, - val_list) - assert len(aug_train_list) == AUG_LEN, 'len(aug_train_list) != {}'.format( - AUG_LEN) - - with open( - osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', - 'trainaug.txt'), 'w') as f: - f.writelines(line + '\n' for line in aug_train_list) - - aug_list = generate_aug_list(full_aug_list, ori_train_list + val_list) - assert len(aug_list) == AUG_LEN - len( - ori_train_list), 'len(aug_list) != {}'.format(AUG_LEN - - len(ori_train_list)) - with open( - osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', 'aug.txt'), - 'w') as f: - f.writelines(line + '\n' for line in aug_list) - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/tests/common_utils/wav_utils.py b/spaces/GrandaddyShmax/MusicGen_Plus/tests/common_utils/wav_utils.py deleted file mode 100644 index d3a563ee1749a58217ece55c9a08b8d93c0fc386..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/tests/common_utils/wav_utils.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from pathlib import Path -import typing as tp - -import torch -import torchaudio - - -def get_white_noise(chs: int = 1, num_frames: int = 1): - wav = torch.randn(chs, num_frames) - return wav - - -def get_batch_white_noise(bs: int = 1, chs: int = 1, num_frames: int = 1): - wav = torch.randn(bs, chs, num_frames) - return wav - - -def save_wav(path: str, wav: torch.Tensor, sample_rate: int): - fp = Path(path) - kwargs: tp.Dict[str, tp.Any] = {} - if fp.suffix == '.wav': - kwargs['encoding'] = 'PCM_S' - kwargs['bits_per_sample'] = 16 - elif fp.suffix == '.mp3': - kwargs['compression'] = 320 - torchaudio.save(str(fp), wav, sample_rate, **kwargs) diff --git a/spaces/HESOAYM/ElviraMulti/readme/README_ja.md b/spaces/HESOAYM/ElviraMulti/readme/README_ja.md deleted file mode 100644 index 5f4eb5afc65eea8afba736b5590dece058cb6b91..0000000000000000000000000000000000000000 --- a/spaces/HESOAYM/ElviraMulti/readme/README_ja.md +++ /dev/null @@ -1,126 +0,0 @@ -
      - - 简体中文 | English | 日本語 -
      - -

      川虎 Chat 🐯 Chuanhu Chat

      -
      - - Logo - - -

      -

      A lightweight and user-friendly Web UI for LLMs such as ChatGPT/ChatGLM/LLaMA

      -

      - - Tests Passing - - - GitHub Contributors - - - GitHub pull requests - -

      - Streaming output / Unlimited conversation turns / History saving / Preset prompts / Chat over your files
      - Web search / LaTeX rendering / Table rendering / Code highlighting
      - Auto dark mode / Adaptive web interface / WeChat-like theme
      - Multi-parameter tuning / Multiple API key support / Multi-user support
      - GPT-4 support / Local deployment of LLMs. -

      - Video tutorial - · - 2.0 introduction - · - 3.0 introduction & tutorial - || - Online trial - · - One-click deployment -

      -

      - Animation Demo -

      -

      -
      -
-## Usage Tips
-
-- You can use the system prompt to control ChatGPT more precisely.
-- To use a prompt template, select a prompt template collection and then pick a specific prompt from the dropdown menu. If the answer is unsatisfactory, retry with the `🔄Regenerate` button.
-- To insert a line break in the input box, press Shift + Enter.
-- To quickly cycle through the input history, press the ↑ / ↓ keys in the input box.
-- To deploy the program on a server, change the last line of the program to `demo.launch(server_name="0.0.0.0", server_port=<your port>)`.
-- To get a shareable link, change the last line of the program to `demo.launch(share=True)`. Note that the program must be running for the public link to be reachable.
-- When using it on Hugging Face Spaces: for faster and safer use, it is recommended to **Duplicate Space** and run the program in your own Space.
-
-## Installation
-
-```shell
-git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
-cd ChuanhuChatGPT
-pip install -r requirements.txt
-```
-
-Next, copy `config_example.json`, rename it to `config.json`, and fill in settings such as your API key in that file.
-
-```shell
-python ChuanhuChatbot.py
-```
-
-A browser window will open, and you will be able to chat with ChatGPT.
-
-> **Note**
->
-> For detailed instructions, please check the [wiki page](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程).
-
-## Troubleshooting
-
-If you run into a problem, the best first step is to manually pull the latest changes of this project. The steps are:
-
-1. Click `Download ZIP` on the web page to download the latest code archive, or
-   ```shell
-   git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
-   ```
-2. Try reinstalling the dependencies, since new ones may have been introduced.
-   ```
-   pip install -r requirements.txt
-   ```
-3. Update Gradio
-   ```
-   pip install gradio --upgrade --force-reinstall
-   ```
-
-In general, most problems can be solved with the steps above.
-
-If the problem still persists, please refer to this page: [Frequently Asked Questions (FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
-
-That page lists almost every possible problem along with its solution. Please read it carefully.
-
-## More Information
-
-For more detailed information, please see the [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
-
-- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization)
-- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
-- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
-- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
-- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
-
-## Starchart
-
-[![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
-
-## Contributors
-
-
-
-## Sponsor
-
-🐯 If this project is helpful to you, feel free to buy me a Coke or a coffee~
-
-Buy Me A Coffee
-
-image
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/megatron_dataloader/dataset_utils.py b/spaces/HaloMaster/chinesesummary/fengshen/data/megatron_dataloader/dataset_utils.py
deleted file mode 100644
index 9b579751573ff8ddf94882c032d4ed6cc168ba07..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/data/megatron_dataloader/dataset_utils.py
+++ /dev/null
@@ -1,788 +0,0 @@
-# coding=utf-8
-# Copyright 2018 The Google AI Language Team Authors, and NVIDIA.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
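A minimal sketch of the `demo.launch(...)` deployment options mentioned in the ChuanhuChatGPT README above; the echo interface here is a placeholder and is not part of that project:

```python
# Minimal, self-contained sketch of the launch options from the README above.
# The echo function and Interface are placeholders; only the launch() arguments
# illustrate the two deployment tips (fixed host/port vs. temporary share link).
import gradio as gr

def echo(message: str) -> str:
    return message

demo = gr.Interface(fn=echo, inputs="text", outputs="text")

# Serve on all network interfaces and a fixed port for a server deployment...
demo.launch(server_name="0.0.0.0", server_port=7860)
# ...or, instead, request a temporary public share link:
# demo.launch(share=True)
```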
- - -# Most of the code here has been copied from: -# https://github.com/google-research/albert/blob/master/create_pretraining_data.py -# with some modifications. - -import math -import time -import collections - -import numpy as np -import re - -from fengshen.data.megatron_dataloader.utils import ( - print_rank_0 -) -from fengshen.data.megatron_dataloader.blendable_dataset import BlendableDataset -from fengshen.data.megatron_dataloader.indexed_dataset import make_dataset as make_indexed_dataset - -DSET_TYPE_BERT = 'standard_bert' -DSET_TYPE_ICT = 'ict' -DSET_TYPE_T5 = 't5' -DSET_TYPE_BERT_CN_WWM = 'bert_cn_wwm' -DSET_TYPE_BART = 'bart' -DSET_TYPE_COCOLM = 'coco_lm' - -DSET_TYPES = [DSET_TYPE_BERT, DSET_TYPE_ICT, - DSET_TYPE_T5, DSET_TYPE_BERT_CN_WWM, - DSET_TYPE_BART, DSET_TYPE_COCOLM] - - -def get_datasets_weights_and_num_samples(data_prefix, - train_valid_test_num_samples): - - # The data prefix should be in the format of: - # weight-1, data-prefix-1, weight-2, data-prefix-2, .. - assert len(data_prefix) % 2 == 0 - num_datasets = len(data_prefix) // 2 - weights = [0] * num_datasets - prefixes = [0] * num_datasets - for i in range(num_datasets): - weights[i] = float(data_prefix[2 * i]) - prefixes[i] = (data_prefix[2 * i + 1]).strip() - # Normalize weights - weight_sum = 0.0 - for weight in weights: - weight_sum += weight - assert weight_sum > 0.0 - weights = [weight / weight_sum for weight in weights] - - # Add 0.5% (the 1.005 factor) so in case the bleding dataset does - # not uniformly distribute the number of samples, we still have - # samples left to feed to the network. - datasets_train_valid_test_num_samples = [] - for weight in weights: - datasets_train_valid_test_num_samples.append( - [int(math.ceil(val * weight * 1.005)) - for val in train_valid_test_num_samples]) - - return prefixes, weights, datasets_train_valid_test_num_samples - - -def compile_helper(): - """Compile helper function ar runtime. Make sure this - is invoked on a single process.""" - import os - import subprocess - path = os.path.abspath(os.path.dirname(__file__)) - ret = subprocess.run(['make', '-C', path]) - if ret.returncode != 0: - print("Making C++ dataset helpers module failed, exiting.") - import sys - sys.exit(1) - - -def get_a_and_b_segments(sample, np_rng): - """Divide sample into a and b segments.""" - - # Number of sentences in the sample. - n_sentences = len(sample) - # Make sure we always have two sentences. - assert n_sentences > 1, 'make sure each sample has at least two sentences.' - - # First part: - # `a_end` is how many sentences go into the `A`. - a_end = 1 - if n_sentences >= 3: - # Note that randin in numpy is exclusive. 
- a_end = np_rng.randint(1, n_sentences) - tokens_a = [] - for j in range(a_end): - tokens_a.extend(sample[j]) - - # Second part: - tokens_b = [] - for j in range(a_end, n_sentences): - tokens_b.extend(sample[j]) - - # Random next: - is_next_random = False - if np_rng.random() < 0.5: - is_next_random = True - tokens_a, tokens_b = tokens_b, tokens_a - - return tokens_a, tokens_b, is_next_random - - -def truncate_segments(tokens_a, tokens_b, len_a, len_b, max_num_tokens, np_rng): - """Truncates a pair of sequences to a maximum sequence length.""" - # print(len_a, len_b, max_num_tokens) - assert len_a > 0 - if len_a + len_b <= max_num_tokens: - return False - while len_a + len_b > max_num_tokens: - if len_a > len_b: - len_a -= 1 - tokens = tokens_a - else: - len_b -= 1 - tokens = tokens_b - if np_rng.random() < 0.5: - del tokens[0] - else: - tokens.pop() - return True - - -def create_tokens_and_tokentypes(tokens_a, tokens_b, cls_id, sep_id): - """Merge segments A and B, add [CLS] and [SEP] and build tokentypes.""" - - tokens = [] - tokentypes = [] - # [CLS]. - tokens.append(cls_id) - tokentypes.append(0) - # Segment A. - for token in tokens_a: - tokens.append(token) - tokentypes.append(0) - # [SEP]. - tokens.append(sep_id) - tokentypes.append(0) - # Segment B. - for token in tokens_b: - tokens.append(token) - tokentypes.append(1) - if tokens_b: - # [SEP]. - tokens.append(sep_id) - tokentypes.append(1) - - return tokens, tokentypes - - -MaskedLmInstance = collections.namedtuple("MaskedLmInstance", - ["index", "label"]) - - -def is_start_piece(piece): - """Check if the current word piece is the starting piece (BERT).""" - # When a word has been split into - # WordPieces, the first token does not have any marker and any subsequence - # tokens are prefixed with ##. So whenever we see the ## token, we - # append it to the previous set of word indexes. - return not piece.startswith("##") - - -def create_masked_lm_predictions(tokens, - vocab_id_list, vocab_id_to_token_dict, - masked_lm_prob, - cls_id, sep_id, mask_id, - max_predictions_per_seq, - np_rng, - tokenizer, - max_ngrams=3, - do_whole_word_mask=True, - favor_longer_ngram=False, - do_permutation=False, - geometric_dist=False, - masking_style="bert", - zh_tokenizer=None): - """Creates the predictions for the masked LM objective. - Note: Tokens here are vocab ids and not text tokens.""" - - cand_indexes = [] - # Note(mingdachen): We create a list for recording if the piece is - # the starting piece of current token, where 1 means true, so that - # on-the-fly whole word masking is possible. - token_boundary = [0] * len(tokens) - - # 如果没有指定中文分词器,那就直接按##算 - if zh_tokenizer is None: - for (i, token) in enumerate(tokens): - if token == cls_id or token == sep_id: - token_boundary[i] = 1 - continue - # Whole Word Masking means that if we mask all of the wordpieces - # corresponding to an original word. - # - # Note that Whole Word Masking does *not* change the training code - # at all -- we still predict each WordPiece independently, softmaxed - # over the entire vocabulary. 
- if (do_whole_word_mask and len(cand_indexes) >= 1 and - not is_start_piece(vocab_id_to_token_dict[token])): - cand_indexes[-1].append(i) - else: - cand_indexes.append([i]) - if is_start_piece(vocab_id_to_token_dict[token]): - token_boundary[i] = 1 - else: - # 如果指定了中文分词器,那就先用分词器分词,然后再进行判断 - # 获取去掉CLS SEP的原始文本 - raw_tokens = [] - for t in tokens: - if t != cls_id and t != sep_id: - raw_tokens.append(t) - raw_tokens = [vocab_id_to_token_dict[i] for i in raw_tokens] - # 分词然后获取每次字开头的最长词的长度 - word_list = set(zh_tokenizer(''.join(raw_tokens), HMM=True)) - word_length_dict = {} - for w in word_list: - if len(w) < 1: - continue - if w[0] not in word_length_dict: - word_length_dict[w[0]] = len(w) - elif word_length_dict[w[0]] < len(w): - word_length_dict[w[0]] = len(w) - i = 0 - # 从词表里面检索 - while i < len(tokens): - token_id = tokens[i] - token = vocab_id_to_token_dict[token_id] - if len(token) == 0 or token_id == cls_id or token_id == sep_id: - token_boundary[i] = 1 - i += 1 - continue - word_max_length = 1 - if token[0] in word_length_dict: - word_max_length = word_length_dict[token[0]] - j = 0 - word = '' - word_end = i+1 - # 兼容以前##的形式,如果后面的词是##开头的,那么直接把后面的拼到前面当作一个词 - old_style = False - while word_end < len(tokens) and vocab_id_to_token_dict[tokens[word_end]].startswith('##'): - old_style = True - word_end += 1 - if not old_style: - while j < word_max_length and i+j < len(tokens): - cur_token = tokens[i+j] - word += vocab_id_to_token_dict[cur_token] - j += 1 - if word in word_list: - word_end = i+j - cand_indexes.append([p for p in range(i, word_end)]) - token_boundary[i] = 1 - i = word_end - - output_tokens = list(tokens) - # add by ganruyi - if masking_style == 'bert-cn-wwm': - # if non chinese is False, that means it is chinese - # then try to remove "##" which is added previously - new_token_ids = [] - for token_id in output_tokens: - token = tokenizer.convert_ids_to_tokens([token_id])[0] - if len(re.findall('##[\u4E00-\u9FA5]', token)) > 0: - token = token[2:] - new_token_id = tokenizer.convert_tokens_to_ids([token])[ - 0] - new_token_ids.append(new_token_id) - output_tokens = new_token_ids - - masked_lm_positions = [] - masked_lm_labels = [] - - if masked_lm_prob == 0: - return (output_tokens, masked_lm_positions, - masked_lm_labels, token_boundary) - - num_to_predict = min(max_predictions_per_seq, - max(1, int(round(len(tokens) * masked_lm_prob)))) - - ngrams = np.arange(1, max_ngrams + 1, dtype=np.int64) - if not geometric_dist: - # Note(mingdachen): - # By default, we set the probilities to favor shorter ngram sequences. - pvals = 1. / np.arange(1, max_ngrams + 1) - pvals /= pvals.sum(keepdims=True) - if favor_longer_ngram: - pvals = pvals[::-1] - # 获取一个ngram的idx,对于每个word,记录他的ngram的word - ngram_indexes = [] - for idx in range(len(cand_indexes)): - ngram_index = [] - for n in ngrams: - ngram_index.append(cand_indexes[idx:idx + n]) - ngram_indexes.append(ngram_index) - - np_rng.shuffle(ngram_indexes) - - (masked_lms, masked_spans) = ([], []) - covered_indexes = set() - for cand_index_set in ngram_indexes: - if len(masked_lms) >= num_to_predict: - break - if not cand_index_set: - continue - # Note(mingdachen): - # Skip current piece if they are covered in lm masking or previous ngrams. 
- for index_set in cand_index_set[0]: - for index in index_set: - if index in covered_indexes: - continue - - if not geometric_dist: - n = np_rng.choice(ngrams[:len(cand_index_set)], - p=pvals[:len(cand_index_set)] / - pvals[:len(cand_index_set)].sum(keepdims=True)) - else: - # Sampling "n" from the geometric distribution and clipping it to - # the max_ngrams. Using p=0.2 default from the SpanBERT paper - # https://arxiv.org/pdf/1907.10529.pdf (Sec 3.1) - n = min(np_rng.geometric(0.2), max_ngrams) - - index_set = sum(cand_index_set[n - 1], []) - n -= 1 - # Note(mingdachen): - # Repeatedly looking for a candidate that does not exceed the - # maximum number of predictions by trying shorter ngrams. - while len(masked_lms) + len(index_set) > num_to_predict: - if n == 0: - break - index_set = sum(cand_index_set[n - 1], []) - n -= 1 - # If adding a whole-word mask would exceed the maximum number of - # predictions, then just skip this candidate. - if len(masked_lms) + len(index_set) > num_to_predict: - continue - is_any_index_covered = False - for index in index_set: - if index in covered_indexes: - is_any_index_covered = True - break - if is_any_index_covered: - continue - for index in index_set: - covered_indexes.add(index) - masked_token = None - if masking_style == "bert": - # 80% of the time, replace with [MASK] - if np_rng.random() < 0.8: - masked_token = mask_id - else: - # 10% of the time, keep original - if np_rng.random() < 0.5: - masked_token = tokens[index] - # 10% of the time, replace with random word - else: - masked_token = vocab_id_list[np_rng.randint(0, len(vocab_id_list))] - elif masking_style == 'bert-cn-wwm': - # 80% of the time, replace with [MASK] - if np_rng.random() < 0.8: - masked_token = mask_id - else: - # 10% of the time, keep original - if np_rng.random() < 0.5: - # 如果是中文全词mask,去掉tokens里的## - token_id = tokens[index] - token = tokenizer.convert_ids_to_tokens([token_id])[ - 0] - if len(re.findall('##[\u4E00-\u9FA5]', token)) > 0: - token = token[2:] - new_token_id = tokenizer.convert_tokens_to_ids([token])[ - 0] - masked_token = new_token_id - # 10% of the time, replace with random word - else: - masked_token = vocab_id_list[np_rng.randint( - 0, len(vocab_id_list))] - elif masking_style == "t5": - masked_token = mask_id - else: - raise ValueError("invalid value of masking style") - - output_tokens[index] = masked_token - masked_lms.append(MaskedLmInstance( - index=index, label=tokens[index])) - - masked_spans.append(MaskedLmInstance( - index=index_set, - label=[tokens[index] for index in index_set])) - - assert len(masked_lms) <= num_to_predict - np_rng.shuffle(ngram_indexes) - - select_indexes = set() - if do_permutation: - for cand_index_set in ngram_indexes: - if len(select_indexes) >= num_to_predict: - break - if not cand_index_set: - continue - # Note(mingdachen): - # Skip current piece if they are covered in lm masking or previous ngrams. - for index_set in cand_index_set[0]: - for index in index_set: - if index in covered_indexes or index in select_indexes: - continue - - n = np.random.choice(ngrams[:len(cand_index_set)], - p=pvals[:len(cand_index_set)] / - pvals[:len(cand_index_set)].sum(keepdims=True)) - index_set = sum(cand_index_set[n - 1], []) - n -= 1 - - while len(select_indexes) + len(index_set) > num_to_predict: - if n == 0: - break - index_set = sum(cand_index_set[n - 1], []) - n -= 1 - # If adding a whole-word mask would exceed the maximum number of - # predictions, then just skip this candidate. 
- if len(select_indexes) + len(index_set) > num_to_predict: - continue - is_any_index_covered = False - for index in index_set: - if index in covered_indexes or index in select_indexes: - is_any_index_covered = True - break - if is_any_index_covered: - continue - for index in index_set: - select_indexes.add(index) - assert len(select_indexes) <= num_to_predict - - select_indexes = sorted(select_indexes) - permute_indexes = list(select_indexes) - np_rng.shuffle(permute_indexes) - orig_token = list(output_tokens) - - for src_i, tgt_i in zip(select_indexes, permute_indexes): - output_tokens[src_i] = orig_token[tgt_i] - masked_lms.append(MaskedLmInstance( - index=src_i, label=orig_token[src_i])) - - masked_lms = sorted(masked_lms, key=lambda x: x.index) - # Sort the spans by the index of the first span - masked_spans = sorted(masked_spans, key=lambda x: x.index[0]) - - for p in masked_lms: - masked_lm_positions.append(p.index) - masked_lm_labels.append(p.label) - return (output_tokens, masked_lm_positions, masked_lm_labels, token_boundary, masked_spans) - - -def pad_and_convert_to_numpy(tokens, tokentypes, masked_positions, - masked_labels, pad_id, max_seq_length): - """Pad sequences and convert them to numpy.""" - - # Some checks. - num_tokens = len(tokens) - padding_length = max_seq_length - num_tokens - assert padding_length >= 0 - assert len(tokentypes) == num_tokens - assert len(masked_positions) == len(masked_labels) - - # Tokens and token types. - filler = [pad_id] * padding_length - tokens_np = np.array(tokens + filler, dtype=np.int64) - tokentypes_np = np.array(tokentypes + filler, dtype=np.int64) - - # Padding mask. - padding_mask_np = np.array([1] * num_tokens + [0] * padding_length, - dtype=np.int64) - - # Lables and loss mask. - labels = [-1] * max_seq_length - loss_mask = [0] * max_seq_length - for i in range(len(masked_positions)): - assert masked_positions[i] < num_tokens - labels[masked_positions[i]] = masked_labels[i] - loss_mask[masked_positions[i]] = 1 - labels_np = np.array(labels, dtype=np.int64) - loss_mask_np = np.array(loss_mask, dtype=np.int64) - - return tokens_np, tokentypes_np, labels_np, padding_mask_np, loss_mask_np - - -def build_train_valid_test_datasets(data_prefix, data_impl, splits_string, - train_valid_test_num_samples, - max_seq_length, - masked_lm_prob, short_seq_prob, seed, - tokenizer, - skip_warmup, binary_head=False, - max_seq_length_dec=None, - dataset_type='standard_bert', - zh_tokenizer=None, - span=None): - - if len(data_prefix) == 1: - return _build_train_valid_test_datasets(data_prefix[0], - data_impl, splits_string, - train_valid_test_num_samples, - max_seq_length, masked_lm_prob, - short_seq_prob, seed, - skip_warmup, - binary_head, - max_seq_length_dec, - tokenizer, - dataset_type=dataset_type, - zh_tokenizer=zh_tokenizer, - span=span) - # Blending dataset. - # Parse the values. - output = get_datasets_weights_and_num_samples(data_prefix, - train_valid_test_num_samples) - prefixes, weights, datasets_train_valid_test_num_samples = output - - # Build individual datasets. 
- train_datasets = [] - valid_datasets = [] - test_datasets = [] - for i in range(len(prefixes)): - train_ds, valid_ds, test_ds = _build_train_valid_test_datasets( - prefixes[i], data_impl, splits_string, - datasets_train_valid_test_num_samples[i], - max_seq_length, masked_lm_prob, short_seq_prob, - seed, skip_warmup, binary_head, max_seq_length_dec, - tokenizer, dataset_type=dataset_type, zh_tokenizer=zh_tokenizer) - if train_ds: - train_datasets.append(train_ds) - if valid_ds: - valid_datasets.append(valid_ds) - if test_ds: - test_datasets.append(test_ds) - - # Blend. - blending_train_dataset = None - if train_datasets: - blending_train_dataset = BlendableDataset(train_datasets, weights) - blending_valid_dataset = None - if valid_datasets: - blending_valid_dataset = BlendableDataset(valid_datasets, weights) - blending_test_dataset = None - if test_datasets: - blending_test_dataset = BlendableDataset(test_datasets, weights) - - return (blending_train_dataset, blending_valid_dataset, - blending_test_dataset) - - -def _build_train_valid_test_datasets(data_prefix, data_impl, splits_string, - train_valid_test_num_samples, - max_seq_length, - masked_lm_prob, short_seq_prob, seed, - skip_warmup, binary_head, - max_seq_length_dec, - tokenizer, - dataset_type='standard_bert', - zh_tokenizer=None, - span=None): - - if dataset_type not in DSET_TYPES: - raise ValueError("Invalid dataset_type: ", dataset_type) - - # Indexed dataset. - indexed_dataset = get_indexed_dataset_(data_prefix, - data_impl, - skip_warmup) - - # Get start and end indices of train/valid/train into doc-idx - # Note that doc-idx is desinged to be num-docs + 1 so we can - # easily iterate over it. - total_num_of_documents = indexed_dataset.doc_idx.shape[0] - 1 - splits = get_train_valid_test_split_(splits_string, total_num_of_documents) - - # Print stats about the splits. - print_rank_0(' > dataset split:') - - def print_split_stats(name, index): - print_rank_0(' {}:'.format(name)) - print_rank_0(' document indices in [{}, {}) total of {} ' - 'documents'.format(splits[index], splits[index + 1], - splits[index + 1] - splits[index])) - start_index = indexed_dataset.doc_idx[splits[index]] - end_index = indexed_dataset.doc_idx[splits[index + 1]] - print_rank_0(' sentence indices in [{}, {}) total of {} ' - 'sentences'.format(start_index, end_index, - end_index - start_index)) - print_split_stats('train', 0) - print_split_stats('validation', 1) - print_split_stats('test', 2) - - def build_dataset(index, name): - from fengshen.data.megatron_dataloader.bert_dataset import BertDataset - from fengshen.data.megatron_dataloader.bart_dataset import BartDataset - from fengshen.data.megatron_dataloader.cocolm_dataset import COCOLMDataset - dataset = None - if splits[index + 1] > splits[index]: - # Get the pointer to the original doc-idx so we can set it later. - doc_idx_ptr = indexed_dataset.get_doc_idx() - # Slice the doc-idx - start_index = splits[index] - # Add +1 so we can index into the dataset to get the upper bound. - end_index = splits[index + 1] + 1 - # New doc_idx view. - indexed_dataset.set_doc_idx(doc_idx_ptr[start_index:end_index]) - # Build the dataset accordingly. 
- kwargs = dict( - name=name, - data_prefix=data_prefix, - num_epochs=None, - max_num_samples=train_valid_test_num_samples[index], - max_seq_length=max_seq_length, - seed=seed, - ) - - if dataset_type == DSET_TYPE_BERT or dataset_type == DSET_TYPE_BERT_CN_WWM: - dataset = BertDataset( - indexed_dataset=indexed_dataset, - masked_lm_prob=masked_lm_prob, - short_seq_prob=short_seq_prob, - binary_head=binary_head, - # 增加参数区分bert和bert-cn-wwm - tokenizer=tokenizer, - masking_style='bert' if dataset_type == DSET_TYPE_BERT else 'bert-cn-wwm', - **kwargs - ) - elif dataset_type == DSET_TYPE_BART: - dataset = BartDataset( - indexed_dataset=indexed_dataset, - masked_lm_prob=masked_lm_prob, - short_seq_prob=short_seq_prob, - tokenizer=tokenizer, - zh_tokenizer=zh_tokenizer, - **kwargs - ) - elif dataset_type == DSET_TYPE_COCOLM: - dataset = COCOLMDataset( - indexed_dataset=indexed_dataset, - masked_lm_prob=masked_lm_prob, - short_seq_prob=short_seq_prob, - tokenizer=tokenizer, - masking_style='bert', - span=span, - **kwargs - ) - else: - raise NotImplementedError( - "Dataset type not fully implemented.") - - # Set the original pointer so dataset remains the main dataset. - indexed_dataset.set_doc_idx(doc_idx_ptr) - # Checks. - assert indexed_dataset.doc_idx[0] == 0 - assert indexed_dataset.doc_idx.shape[0] == \ - (total_num_of_documents + 1) - return dataset - - train_dataset = build_dataset(0, 'train') - valid_dataset = build_dataset(1, 'valid') - test_dataset = build_dataset(2, 'test') - - return (train_dataset, valid_dataset, test_dataset) - - -def get_indexed_dataset_(data_prefix, data_impl, skip_warmup): - - print_rank_0(' > building dataset index ...') - - start_time = time.time() - indexed_dataset = make_indexed_dataset(data_prefix, - data_impl, - skip_warmup) - assert indexed_dataset.sizes.shape[0] == indexed_dataset.doc_idx[-1] - print_rank_0(' > finished creating indexed dataset in {:4f} ' - 'seconds'.format(time.time() - start_time)) - - print_rank_0(' > indexed dataset stats:') - print_rank_0(' number of documents: {}'.format( - indexed_dataset.doc_idx.shape[0] - 1)) - print_rank_0(' number of sentences: {}'.format( - indexed_dataset.sizes.shape[0])) - - return indexed_dataset - - -def get_train_valid_test_split_(splits_string, size): - """ Get dataset splits from comma or '/' separated string list.""" - - splits = [] - if splits_string.find(',') != -1: - splits = [float(s) for s in splits_string.split(',')] - elif splits_string.find('/') != -1: - splits = [float(s) for s in splits_string.split('/')] - else: - splits = [float(splits_string)] - while len(splits) < 3: - splits.append(0.) 
- splits = splits[:3] - splits_sum = sum(splits) - assert splits_sum > 0.0 - splits = [split / splits_sum for split in splits] - splits_index = [0] - for index, split in enumerate(splits): - splits_index.append(splits_index[index] + - int(round(split * float(size)))) - diff = splits_index[-1] - size - for index in range(1, len(splits_index)): - splits_index[index] -= diff - assert len(splits_index) == 4 - assert splits_index[-1] == size - return splits_index - - -def get_samples_mapping(indexed_dataset, - data_prefix, - num_epochs, - max_num_samples, - max_seq_length, - short_seq_prob, - seed, - name, - binary_head): - """Get a list that maps a sample index to a starting - sentence index, end sentence index, and length""" - - if not num_epochs: - if not max_num_samples: - raise ValueError("Need to specify either max_num_samples " - "or num_epochs") - num_epochs = np.iinfo(np.int32).max - 1 - if not max_num_samples: - max_num_samples = np.iinfo(np.int64).max - 1 - - # Filename of the index mapping - indexmap_filename = data_prefix - indexmap_filename += '_{}_indexmap'.format(name) - if num_epochs != (np.iinfo(np.int32).max - 1): - indexmap_filename += '_{}ep'.format(num_epochs) - if max_num_samples != (np.iinfo(np.int64).max - 1): - indexmap_filename += '_{}mns'.format(max_num_samples) - indexmap_filename += '_{}msl'.format(max_seq_length) - indexmap_filename += '_{:0.2f}ssp'.format(short_seq_prob) - indexmap_filename += '_{}s'.format(seed) - indexmap_filename += '.npy' - - # This should be a barrier but nccl barrier assumes - # device_index=rank which is not the case for model - # parallel case - # ganruyi comment - # counts = torch.cuda.LongTensor([1]) - # torch.distributed.all_reduce( - # counts, group=mpu.get_data_parallel_group()) - # torch.distributed.all_reduce( - # counts, group=mpu.get_pipeline_model_parallel_group()) - # assert counts[0].item() == ( - # torch.distributed.get_world_size() // - # torch.distributed.get_world_size( - # group=mpu.get_tensor_model_parallel_group())) - - # Load indexed dataset. - print_rank_0(' > loading indexed mapping from {}'.format( - indexmap_filename)) - start_time = time.time() - samples_mapping = np.load( - indexmap_filename, allow_pickle=True, mmap_mode='r') - print_rank_0(' loaded indexed file in {:3.3f} seconds'.format( - time.time() - start_time)) - print_rank_0(' total number of samples: {}'.format( - samples_mapping.shape[0])) - - return samples_mapping diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/clip_finetune/clip_finetune_flickr.py b/spaces/HaloMaster/chinesesummary/fengshen/examples/clip_finetune/clip_finetune_flickr.py deleted file mode 100644 index 9cac74d87e861cf0ffff64c9ca03330208db90c3..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/clip_finetune/clip_finetune_flickr.py +++ /dev/null @@ -1,259 +0,0 @@ -import sys -sys.path.append('../../') -from data.clip_dataloader.flickr import FlickrDataModule -import pytorch_lightning as pl -import numpy as np -import torch -from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts -import torch.nn.functional as F -import math -import copy -import argparse -from transformers import CLIPModel, BertForSequenceClassification - -class CLIPLightning(pl.LightningModule): - def __init__(self, model_name='ViT-B/32', minibatch_size=2): - """A lightning wrapper for a CLIP model as specified in the paper. - - Args: - model_name (str): A case sensitive visual model name. 
- config (dict): A dictionary containing the CLIP instantiation parameters. - """ - super().__init__() - - self.prepare_data_per_node = True - self.model_name = 'ViT-B/32' - # self.model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") - self.clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") # NOTE load from openAI - self.text_encoder = BertForSequenceClassification.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese") - self.minibatch_size = minibatch_size - self.isViT = 'ViT' in self.model_name - self.automatic_optimization = False - - # Training loss: https://github.com/openai/CLIP/issues/83 - # Mini-batching thanks to https://github.com/crowsonkb / https://twitter.com/RiversHaveWings - # Multi-GPU support: https://github.com/MicPie/clasp - - def training_step(self, train_batch, idx): - # get optimizers and scheduler - optimizer = self.optimizers() - - image, text, labels = train_batch - n = math.ceil(len(image) // self.minibatch_size) - image_mbs = torch.chunk(image, n) - text_mbs = torch.chunk(text, n) - - with torch.no_grad(): - ims = [F.normalize(self.clip_model.get_image_features(im), dim=1) for im in image_mbs] - txt = [F.normalize(self.text_encoder(t).logits, dim=1) for t in text_mbs] - # gather from all GPUs 这里的LOSS要把所有GPU的汇集起来一起算才对 - ims = self.all_gather(torch.cat(ims)) - txt = self.all_gather(torch.cat(txt)) - - if len(ims.shape) == 3: - ims = list(ims) - txt = list(txt) - else: - ims = [ims] - txt = [txt] - - image_logits = torch.cat(ims) @ torch.cat(txt).t() * self.clip_model.logit_scale.exp() - ground_truth = torch.arange(len(image_logits)).long().to(image_logits.device) - loss = (F.cross_entropy(image_logits, ground_truth) + - F.cross_entropy(image_logits.t(), ground_truth)).div(2) - acc_i = (torch.argmax(image_logits, 1) == ground_truth).sum() - acc_t = (torch.argmax(image_logits, 0) == ground_truth).sum() - self.log_dict({'loss': loss / len(ims), 'acc': (acc_i + acc_t) / 2 / len(image) / len(ims)}, prog_bar=True) - - if isinstance(optimizer, list): - optimizer = optimizer[0] - optimizer.zero_grad() - - # image loss - for j, mb in enumerate(image_mbs[:-1]): - # 最后一部分样本舍弃。(对齐的bug) - images_tmp = copy.deepcopy(ims) - images_tmp[self.global_rank][j * self.minibatch_size:(j+1)*self.minibatch_size] = \ - F.normalize(self.clip_model.get_image_features(mb), dim=1) - image_logits = torch.cat(images_tmp) @ torch.cat(txt).t() * self.clip_model.logit_scale.exp() - ground_truth = torch.arange(len(image_logits)).long().to(image_logits.device) - loss = (F.cross_entropy(image_logits, ground_truth) + F.cross_entropy(image_logits.t(), ground_truth))/2 - self.manual_backward(loss) - - # text loss - for j, mb in enumerate(text_mbs[:-1]): - text_tmp = copy.deepcopy(txt) - text_tmp[self.global_rank][j * self.minibatch_size:(j+1)*self.minibatch_size] = \ - F.normalize(self.text_encoder(mb).logits, dim=1) - image_logits = torch.cat(ims) @ torch.cat(text_tmp).t() * self.clip_model.logit_scale.exp() - loss = (F.cross_entropy(image_logits, ground_truth) + F.cross_entropy(image_logits.t(), ground_truth))/2 - self.manual_backward(loss) - - optimizer.step() - lr_scheduler = self.lr_schedulers() - lr_scheduler.step() - self.clip_model.logit_scale.data.clamp_(-np.log(100), np.log(100)) - - def validation_step(self, val_batch, idx): - image, text, labels = val_batch - img_embed = self.clip_model.get_image_features(image) - txt_embed = self.text_encoder(text).logits - # print(img_embed.shape) - image_norm = F.normalize(img_embed, dim=1) - text_norm = 
F.normalize(txt_embed, dim=1) - image_logits = image_norm @ text_norm.t() * self.clip_model.logit_scale.exp() - text_logits = text_norm @ image_norm.t() * self.clip_model.logit_scale.exp() - # print(image_logits.shape) - # image_logits, text_logits = self.forward(image, text) - ground_truth = torch.arange(len(image_logits)).long().to(image_logits.device) - loss = (F.cross_entropy(image_logits, ground_truth) + F.cross_entropy(text_logits, ground_truth)).div(2) - self.log('val_loss', loss, prog_bar=True) - return [image_norm, text_norm, labels] - - def validation_epoch_end(self, outputs): - image_features = torch.cat([x[0] for x in outputs]) - text_features = torch.cat([x[1] for x in outputs]) - labels = [label for x in outputs for label in x[2]] - print(image_features.shape, text_features.shape, len(labels)) - self.get_metrics(image_features, text_features, labels, 100) - - def test_step(self, test_batch, idx): - image, text, labels = test_batch - image_features = self.clip_model.get_image_features(image) - text_features = self.text_encoder(text).logits - image_features = image_features / image_features.norm(dim=1, keepdim=True) - text_features = text_features / text_features.norm(dim=1, keepdim=True) - return [image_features, text_features, labels] - - def test_epoch_end(self, outputs): - image_features = torch.cat([x[0] for x in outputs]) - text_features = torch.cat([x[1] for x in outputs]) - labels = [label for x in outputs for label in x[2]] - print(image_features.shape, text_features.shape, len(labels)) - self.get_metrics(image_features, text_features, labels, 100) - - def get_metrics(self, image_features, text_features, labels, logit_scale): - # 计算相似度,支持多个样本的情况(比如一个图片有多个caption) - # img2txt计算的时候要用到,因为一张图片可能对应多个文本。 - # txt2img计算的时候不需要(一般一个text只有一个对应图片) - # metrics = {} - logits_per_image = (logit_scale * image_features @ text_features.t()).detach().cpu() - logits_per_text = logits_per_image.t().detach().cpu() - - logits = {"image_to_text": logits_per_image, "text_to_image": logits_per_text} - - label2idx = {} # 计算label到idx的映射。 - repeat_id = [] - for i, label in enumerate(labels): - if label not in label2idx: - label2idx[label] = [i] - else: - # 表示该index的标签出现过,记录这个index,后续算txt2img分数的时候,这些index的权值要降低。 - label2idx[label].append(i) - repeat_id.append(i) - # print(label2idx) # 标注了每个label的idx - - # print('repeat_id:', repeat_id) - ground_truth = [label2idx[label] for label in labels] - # print(ground_truth) - - for name, logit in logits.items(): - # print(name, logit.shape) - if name == 'text_to_image': - logit[:, repeat_id] -= 1e8 # 这部分的分数要降低。(重复出现的图片,直接忽略) - r1_stat, r5_stat, r10_stat = [], [], [] - ranking = torch.argsort(logit, descending=True) # index of the largest element to the smallest - # print(name, ranking[:, :10]) - for i, each_query in enumerate(ranking[:, :10]): - for j, q in enumerate(each_query): - if q in ground_truth[i]: - if j == 0: - r1_stat.append(1) - r5_stat.append(1) - r10_stat.append(1) - break - if j < 5: - r5_stat.append(1) - r10_stat.append(1) - break - if j < 10: - r10_stat.append(1) - break - print(f'{name} r1:{sum(r1_stat)/len(logit)}, r5:{sum(r5_stat)/len(logit)}, r10:{sum(r10_stat)/len(logit)}') - - def configure_optimizers(self): - lr = { - "RN50": 5e-4, - "RN101": 5e-4, - "RN50x4": 5e-4, - "RN50x16": 4e-4, - "RN50x64": 3.6e-4, - "ViT-B/32": 5e-4, - "ViT-B/16": 5e-4, - "ViT-L/14": 4e-4, - "ViT-L/14-336px": 2e-5 - }[self.model_name] - - optimizer = torch.optim.AdamW( - [{'params': self.clip_model.parameters()}, {'params': self.text_encoder.parameters()}], - 
lr=lr, - betas=( - 0.9, - 0.98 if self.isViT else 0.999 - ), - eps=1e-6 if self.isViT else 1e-8, - weight_decay=0.2 - ) - - # Source: https://github.com/openai/CLIP/issues/107 - # Use pip install 'git+https://github.com/katsura-jp/pytorch-cosine-annealing-with-warmup' - lr_scheduler = CosineAnnealingWarmRestarts( - optimizer, - T_0=2000 - ) - # CosineAnnealingWarmupRestarts - return {'optimizer': optimizer, 'lr_scheduler': lr_scheduler} - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - - # model_name - parser.add_argument('--model', type=str, - default="ViT-B/32", - help='model definition') - - # experiment setting - parser.add_argument('--batch_size', type=int, default=128) - parser.add_argument('--num_epoches', type=int, default=1) - parser.add_argument('--num_gpus', type=int, default=2) - - # dataset - parser.add_argument('--train_filename', type=str, - help='dir or csv file') - parser.add_argument('--train_root', type=str, - help='image root path') - parser.add_argument('--val_filename', type=str, - help='dir or csv file') - parser.add_argument('--val_root', type=str, - help='image root path') - parser.add_argument('--test_filename', type=str, - help='dir or csv file') - parser.add_argument('--test_root', type=str, - help='image root path') - parser.add_argument('--num_workers', type=int, default=0) - - # huggingface pretrain model 定义 - parser.add_argument('--pretrain_model', type=str, - default="openai/clip-vit-base-patch32", - help='defalut load from openai') # "wf-genius/TaiYi-CLIP-ViT-B-32" 是我训好的 NOTE - - args = parser.parse_args() - dm = FlickrDataModule(args) - - model = CLIPLightning(model_name=args.model, minibatch_size=args.batch_size//2) - trainer = pl.Trainer(gpus=args.num_gpus, precision=16, max_epochs=args.num_epoches) - trainer.test(model, dm) # zero-shot test - trainer.fit(model, dm) # finetune on train set - trainer.test(model, dm) # test again - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/list_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/list_dataset.py deleted file mode 100644 index 12f00aa43661d6bad701c9e72653ba8779136906..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/list_dataset.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import BaseWrapperDataset - - -class ListDataset(BaseWrapperDataset): - def __init__(self, dataset, sizes=None): - super().__init__(dataset) - self._sizes = sizes - - def __iter__(self): - for x in self.dataset: - yield x - - def collater(self, samples): - return samples - - @property - def sizes(self): - return self._sizes - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - def set_epoch(self, epoch): - pass diff --git a/spaces/HarryLee/eCommerceImageCaptioning/models/ofa/unify_multihead_attention.py b/spaces/HarryLee/eCommerceImageCaptioning/models/ofa/unify_multihead_attention.py deleted file mode 100644 index 20111bd4545697bf9ec7be474598492fb45045d5..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/models/ofa/unify_multihead_attention.py +++ /dev/null @@ -1,518 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. 
-# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. - -import math -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor, nn -from torch.nn import Parameter - - -@with_incremental_state -class MultiheadAttention(nn.Module): - """Multi-headed attention. - - See "Attention Is All You Need" for more details. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - scale_factor=2, - scale_heads=False - ): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = float(self.head_dim * scale_factor) ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - self.c_attn = nn.Parameter(torch.ones((self.num_heads,)), requires_grad=True) if scale_heads else None - - assert not self.self_attention or self.qkv_same_dim, ( - "Self-attention requires query, key and " "value to be of the same size" - ) - - self.k_proj = quant_noise( - nn.Linear(self.kdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.v_proj = quant_noise( - nn.Linear(self.vdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.q_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - self.out_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - self.onnx_trace = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def reset_parameters(self): - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_(self.k_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.v_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.q_proj.weight, gain=1 / math.sqrt(2)) - else: - nn.init.xavier_uniform_(self.k_proj.weight) - nn.init.xavier_uniform_(self.v_proj.weight) - nn.init.xavier_uniform_(self.q_proj.weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.out_proj.bias is not None: - nn.init.constant_(self.out_proj.bias, 0.0) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, 
Optional[Tensor]]]] = None, - need_weights: bool = True, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - self_attn_mask: Optional[Tensor] = None, - before_softmax: bool = False, - need_head_weights: bool = False, - attn_bias: Optional[Tensor] = None - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. - need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. - """ - if need_head_weights: - need_weights = True - - is_tpu = query.device.type == "xla" - - tgt_len, bsz, embed_dim = query.size() - src_len = tgt_len - assert embed_dim == self.embed_dim, f"query dim {embed_dim} != {self.embed_dim}" - assert list(query.size()) == [tgt_len, bsz, embed_dim] - if key is not None: - src_len, key_bsz, _ = key.size() - if not torch.jit.is_scripting(): - assert key_bsz == bsz - assert value is not None - assert src_len, bsz == value.shape[:2] - - if ( - not self.onnx_trace - and not is_tpu # don't use PyTorch version on TPUs - and incremental_state is None - and not static_kv - # A workaround for quantization to work. Otherwise JIT compilation - # treats bias in linear module as method. 
- and not torch.jit.is_scripting() - and self_attn_mask is None - and attn_bias is None - ): - assert key is not None and value is not None - return F.multi_head_attention_forward( - query, - key, - value, - self.embed_dim, - self.num_heads, - torch.empty([0]), - torch.cat((self.q_proj.bias, self.k_proj.bias, self.v_proj.bias)), - self.bias_k, - self.bias_v, - self.add_zero_attn, - self.dropout_module.p, - self.out_proj.weight, - self.out_proj.bias, - self.training or self.dropout_module.apply_during_inference, - key_padding_mask, - need_weights, - attn_mask, - use_separate_proj_weight=True, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - ) - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention and self_attn_mask is None: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - key_padding_mask.new_zeros(key_padding_mask.size(0), 1), - ], - dim=1, - ) - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - src_len = k.size(1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = MultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - - saved_state["prev_key"] = k.view(bsz, self.num_heads, -1, 
self.head_dim) - saved_state["prev_value"] = v.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - assert k.size(1) == src_len - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. - if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if self.add_zero_attn: - assert v is not None - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - torch.zeros(key_padding_mask.size(0), 1).type_as( - key_padding_mask - ), - ], - dim=1, - ) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_bias is not None: - attn_weights += attn_bias - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - if self.onnx_trace: - attn_mask = attn_mask.repeat(attn_weights.size(0), 1, 1) - attn_weights += attn_mask - - if self_attn_mask is not None: - self_attn_mask = self_attn_mask.unsqueeze(1).expand(bsz, self.num_heads, tgt_len, src_len) - attn_weights += self_attn_mask.contiguous().view(bsz * self.num_heads, tgt_len, src_len) - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - if not is_tpu: - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - else: - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf")) - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if before_softmax: - return attn_weights, v - - attn_weights_float = utils.softmax( - attn_weights, dim=-1, onnx_trace=self.onnx_trace - ) - attn_weights = attn_weights_float.type_as(attn_weights) - attn_probs = self.dropout_module(attn_weights) - - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - if self.onnx_trace and attn.size(1) == 1: - # when ONNX tracing a single decoder step (sequence length == 1) - # the transpose is a no-op copy before view, thus unnecessary - attn = attn.contiguous().view(tgt_len, bsz, embed_dim) - else: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - if self.c_attn is not None: - attn = attn.view(tgt_len, bsz, self.num_heads, self.head_dim) - attn = torch.einsum('tbhd,h->tbhd', attn, self.c_attn) - attn = attn.reshape(tgt_len, bsz, self.embed_dim) - attn = self.out_proj(attn) - attn_weights: Optional[Tensor] = None - if need_weights: - attn_weights = attn_weights_float.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - if not need_head_weights: - # average attention 
weights over heads - attn_weights = attn_weights.mean(dim=0) - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - if src_len > prev_key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - prev_key_padding_mask.size(1)), - device=prev_key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask.float() - elif key_padding_mask is not None: - if src_len > key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - key_padding_mask.size(1)), - device=key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = key_padding_mask.float() - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - input_buffer_k = input_buffer[k] - if input_buffer_k is not None: - if self.encoder_decoder_attention and input_buffer_k.size( - 0 - ) == new_order.size(0): - break - input_buffer[k] = input_buffer_k.index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) - - def apply_sparse_mask(self, attn_weights, tgt_len: int, src_len: int, bsz: int): - return attn_weights - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." 
if name != "" else "" - items_to_add = {} - keys_to_remove = [] - for k in state_dict.keys(): - if k.endswith(prefix + "in_proj_weight"): - # in_proj_weight used to be q + k + v with same dimensions - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.weight"] = state_dict[k][:dim] - items_to_add[prefix + "k_proj.weight"] = state_dict[k][dim : 2 * dim] - items_to_add[prefix + "v_proj.weight"] = state_dict[k][2 * dim :] - - keys_to_remove.append(k) - - k_bias = prefix + "in_proj_bias" - if k_bias in state_dict.keys(): - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.bias"] = state_dict[k_bias][:dim] - items_to_add[prefix + "k_proj.bias"] = state_dict[k_bias][ - dim : 2 * dim - ] - items_to_add[prefix + "v_proj.bias"] = state_dict[k_bias][2 * dim :] - - keys_to_remove.append(prefix + "in_proj_bias") - - for k in keys_to_remove: - del state_dict[k] - - for key, value in items_to_add.items(): - state_dict[key] = value diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/__init__.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/__init__.py deleted file mode 100644 index 3f5aa62bfcd56165b85d064f5ca0ba59fbe34a72..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/__init__.py +++ /dev/null @@ -1,84 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -import re -from text import cleaners - -# Regular expression matching text enclosed in curly braces: -_curly_re = re.compile(r'(.*?)\{(.+?)\}(.*)') - - -def get_arpabet(word, dictionary): - word_arpabet = dictionary.lookup(word) - if word_arpabet is not None: - return "{" + word_arpabet[0] + "}" - else: - return word - - -def text_to_sequence(text, symbols, cleaner_names, dictionary=None): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - - The text can optionally have ARPAbet sequences enclosed in curly braces embedded - in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street." 
- - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - dictionary: arpabet class with arpabet dictionary - - Returns: - List of integers corresponding to the symbols in the text - ''' - # Mappings from symbol to numeric ID and vice versa: - global _id_to_symbol, _symbol_to_id - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - _id_to_symbol = {i: s for i, s in enumerate(symbols)} - - sequence = [] - - space = _symbols_to_sequence(' ') - # Check for curly braces and treat their contents as ARPAbet: - while len(text): - m = _curly_re.match(text) - if not m: - clean_text = _clean_text(text, cleaner_names) - if dictionary is not None: - clean_text = [get_arpabet(w, dictionary) for w in clean_text.split(" ")] - for i in range(len(clean_text)): - t = clean_text[i] - if t.startswith("{"): - sequence += _arpabet_to_sequence(t[1:-1]) - else: - sequence += _symbols_to_sequence(t) - sequence += space - else: - sequence += _symbols_to_sequence(clean_text) - break - sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names)) - sequence += _arpabet_to_sequence(m.group(2)) - text = m.group(3) - - # remove trailing space - if dictionary is not None: - sequence = sequence[:-1] if sequence[-1] == space[0] else sequence - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text - - -def _symbols_to_sequence(symbols): - return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)] - - -def _arpabet_to_sequence(text): - return _symbols_to_sequence(['@' + s for s in text.split()]) - - -def _should_keep_symbol(s): - return s in _symbol_to_id and s is not '_' and s is not '~' \ No newline at end of file diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/install.sh b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/install.sh deleted file mode 100644 index 51e038d5a0098f21d4efd8051a15b7f0cdeb4b73..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/install.sh +++ /dev/null @@ -1,6 +0,0 @@ -cd src/glow_tts/monotonic_align/ -pip install . -cd ../../../ - -# torch -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/cli/__init__.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/cli/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HgMenon/Transcribe_V0.2/src/modelCache.py b/spaces/HgMenon/Transcribe_V0.2/src/modelCache.py deleted file mode 100644 index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000 --- a/spaces/HgMenon/Transcribe_V0.2/src/modelCache.py +++ /dev/null @@ -1,17 +0,0 @@ -class ModelCache: - def __init__(self): - self._cache = dict() - - def get(self, model_key: str, model_factory): - result = self._cache.get(model_key) - - if result is None: - result = model_factory() - self._cache[model_key] = result - return result - - def clear(self): - self._cache.clear() - -# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times. 
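The ModelCache above is a simple get-or-create dictionary keyed by an arbitrary model key. A minimal usage sketch (the key string and factory function below are made-up placeholders, not part of the original Space):

```python
cache = ModelCache()

def load_model():
    # stand-in for an expensive load (e.g. reading weights from disk); runs only once
    print("loading model...")
    return object()

first = cache.get("example-model", load_model)   # factory runs, result is stored
second = cache.get("example-model", load_model)  # served from the cache, factory not called
assert first is second

cache.clear()  # drop every cached model
```

The GLOBAL_MODEL_CACHE assignment that follows simply shares one such cache across the whole process.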
-GLOBAL_MODEL_CACHE = ModelCache() \ No newline at end of file diff --git a/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/midas_net.py b/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/midas_net.py deleted file mode 100644 index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/midas_net.py +++ /dev/null @@ -1,76 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, Interpolate, _make_encoder - - -class MidasNet(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=256, non_negative=True): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet, self).__init__() - - use_pretrained = False if path is None else True - - self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained) - - self.scratch.refinenet4 = FeatureFusionBlock(features) - self.scratch.refinenet3 = FeatureFusionBlock(features) - self.scratch.refinenet2 = FeatureFusionBlock(features) - self.scratch.refinenet1 = FeatureFusionBlock(features) - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - ) - - if path: - self.load(path) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) diff --git a/spaces/HugoDzz/spaceship_drift/build/index.html b/spaces/HugoDzz/spaceship_drift/build/index.html deleted file mode 100644 index 8209ea3eaf35fc05fb581a4cfc7b5880fe5c696b..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/build/index.html +++ /dev/null @@ -1,80 +0,0 @@ - - - - Spaceship freeride - - - - - - - - - - - - - - - -
-      Spaceship freeride
-      Take a break and enjoy a little freeride.
-      Use arrow keys. SPACE to fire.
-      It's all about game feel
-      Made by Hugo with Godot, Svelte, Scenario, and Pixelicious
      - - diff --git a/spaces/ICML2022/OFA/fairseq/examples/roberta/preprocess_RACE.sh b/spaces/ICML2022/OFA/fairseq/examples/roberta/preprocess_RACE.sh deleted file mode 100644 index 932d2ab6e521fecc7d0297f26a8c43857541ef3b..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/roberta/preprocess_RACE.sh +++ /dev/null @@ -1,59 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -# data should be downloaded and processed with reprocess_RACE.py -if [[ $# -ne 2 ]]; then - echo "Run as following:" - echo "./examples/roberta/preprocess_RACE.sh " - exit 1 -fi - -RACE_DATA_FOLDER=$1 -OUT_DATA_FOLDER=$2 - -# download bpe encoder.json, vocabulary and fairseq dictionary -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt' - -SPLITS="train dev test-middle test-high" -INPUT_TYPES="input0 input1 input2 input3 input4" -for INPUT_TYPE in $INPUT_TYPES -do - for SPLIT in $SPLITS - do - echo "BPE encoding $SPLIT/$INPUT_TYPE" - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json encoder.json \ - --vocab-bpe vocab.bpe \ - --inputs "$RACE_DATA_FOLDER/$SPLIT.$INPUT_TYPE" \ - --outputs "$RACE_DATA_FOLDER/$SPLIT.$INPUT_TYPE.bpe" \ - --workers 10 \ - --keep-empty; - - done -done - -for INPUT_TYPE in $INPUT_TYPES - do - LANG="input$INPUT_TYPE" - fairseq-preprocess \ - --only-source \ - --trainpref "$RACE_DATA_FOLDER/train.$INPUT_TYPE.bpe" \ - --validpref "$RACE_DATA_FOLDER/dev.$INPUT_TYPE.bpe" \ - --testpref "$RACE_DATA_FOLDER/test-middle.$INPUT_TYPE.bpe,$RACE_DATA_FOLDER/test-high.$INPUT_TYPE.bpe" \ - --destdir "$OUT_DATA_FOLDER/$INPUT_TYPE" \ - --workers 10 \ - --srcdict dict.txt; -done - -rm -rf "$OUT_DATA_FOLDER/label" -mkdir -p "$OUT_DATA_FOLDER/label" -cp "$RACE_DATA_FOLDER/train.label" "$OUT_DATA_FOLDER/label/" -cp "$RACE_DATA_FOLDER/dev.label" "$OUT_DATA_FOLDER/label/valid.label" -cp "$RACE_DATA_FOLDER/test-middle.label" "$OUT_DATA_FOLDER/label/test.label" -cp "$RACE_DATA_FOLDER/test-high.label" "$OUT_DATA_FOLDER/label/test1.label" diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py deleted file mode 100644 index 933f59c3bd3291e9445e26b707d16c0e25c5ff67..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py +++ /dev/null @@ -1,608 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
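The file that follows implements the Stable Diffusion img2img pipeline. A hedged usage sketch of how such a pipeline is normally driven through diffusers (the checkpoint id, image path, and parameter values are illustrative choices, not requirements of this file):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Illustrative checkpoint and file names; substitute whatever you actually use.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("sketch.png").convert("RGB")

# strength scales how many of the requested denoising steps actually run,
# starting from a noised version of the init image.
result = pipe(
    prompt="a watercolor painting of a lighthouse",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
result.save("out.png")
```

As the docstring below notes, strength=1.0 adds maximum noise and effectively ignores the init image, while smaller values keep more of it.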
- -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import torch - -import PIL -from diffusers.utils import is_accelerate_available -from packaging import version -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from ...utils import PIL_INTERPOLATION, deprecate, logging -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def preprocess(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -class StableDiffusionImg2ImgPipeline(DiffusionPipeline): - r""" - Pipeline for text-guided image to image generation using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - _optional_components = ["safety_checker", "feature_extractor"] - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.__init__ - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - if isinstance(self.unet.config.attention_head_dim, int): - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - else: - # if `attention_head_dim` is a list, take the smallest head size - slice_size = min(self.unet.config.attention_head_dim) - - self.unet.set_attention_slice(slice_size) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. 
- """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - # TODO(Patrick) - there is currently a bug with cpu offload of nn.Parameter in accelerate - # fix by only offloading self.safety_checker for now - cpu_offload(self.safety_checker.vision_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). 
- """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - text_embeddings = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - text_embeddings = text_embeddings[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - uncond_embeddings = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - uncond_embeddings = uncond_embeddings[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, prompt, strength, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [1.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." 
- ) - - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - offset = self.scheduler.config.get("steps_offset", 0) - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - - t_start = max(num_inference_steps - init_timestep + offset, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None): - image = image.to(device=device, dtype=dtype) - init_latent_dist = self.vae.encode(image).latent_dist - init_latents = init_latent_dist.sample(generator=generator) - init_latents = 0.18215 * init_latents - - if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0: - # expand init_latents for batch_size - deprecation_message = ( - f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial" - " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" - " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" - " your script to pass as many initial images as text prompts to suppress this warning." - ) - deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) - additional_image_per_prompt = batch_size // init_latents.shape[0] - init_latents = torch.cat([init_latents] * additional_image_per_prompt * num_images_per_prompt, dim=0) - elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts." - ) - else: - init_latents = torch.cat([init_latents] * num_images_per_prompt, dim=0) - - # add noise to latents using the timesteps - noise = torch.randn(init_latents.shape, generator=generator, device=device, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - - return latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image], - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. 
When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter will be modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - message = "Please use `image` instead of `init_image`." - init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs) - image = init_image or image - - # 1. Check inputs - self.check_inputs(prompt, strength, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. 
Encode input prompt - text_embeddings = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Preprocess image - if isinstance(image, PIL.Image.Image): - image = preprocess(image) - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - - # 6. Prepare latent variables - latents = self.prepare_latents( - image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, device, generator - ) - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 8. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 9. Post-processing - image = self.decode_latents(latents) - - # 10. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype) - - # 11. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Jaehan/Text2Text-Sentiment-Analysis/app.py b/spaces/Jaehan/Text2Text-Sentiment-Analysis/app.py deleted file mode 100644 index 90c71aea65db44b44f04f99a7de9029db09a6f0a..0000000000000000000000000000000000000000 --- a/spaces/Jaehan/Text2Text-Sentiment-Analysis/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from transformers import T5ForConditionalGeneration, T5Tokenizer -import gradio as gr - -model_name = "t5-small" -text2text_token= T5Tokenizer.from_pretrained(model_name) -model = T5ForConditionalGeneration.from_pretrained(model_name) - -def text2text_sentiment(text): - input = "sst2 sentence: " + text - encoded = text2text_token(input, return_tensors="pt") - tokens = model.generate(**encoded) - response = text2text_token.batch_decode(tokens) - return response - -# UX -in_para = gr.Textbox(lines=1, label="Input text in English", placeholder="Place your text in English...") -out = gr.Textbox(lines=1, label="Sentiment") -gr.Interface(text2text_sentiment, inputs=in_para, outputs=out).launch() \ No newline at end of file diff --git a/spaces/Jamkonams/AutoGPT/autogpt/speech/gtts.py b/spaces/Jamkonams/AutoGPT/autogpt/speech/gtts.py deleted file mode 100644 index 1c3e9cae0567428582891b11eca42f82a64f5c8e..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/speech/gtts.py +++ /dev/null @@ -1,22 +0,0 @@ -""" GTTS Voice. """ -import os - -import gtts -from playsound import playsound - -from autogpt.speech.base import VoiceBase - - -class GTTSVoice(VoiceBase): - """GTTS Voice.""" - - def _setup(self) -> None: - pass - - def _speech(self, text: str, _: int = 0) -> bool: - """Play the given text.""" - tts = gtts.gTTS(text) - tts.save("speech.mp3") - playsound("speech.mp3", True) - os.remove("speech.mp3") - return True diff --git a/spaces/Jikiwi/sovits-models/modules/attentions.py b/spaces/Jikiwi/sovits-models/modules/attentions.py deleted file mode 100644 index f9c11ca4a3acb86bf1abc04d9dcfa82a4ed4061f..0000000000000000000000000000000000000000 --- a/spaces/Jikiwi/sovits-models/modules/attentions.py +++ /dev/null @@ -1,349 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import modules.commons as commons -import modules.modules as modules -from modules.modules import LayerNorm - - -class FFT(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers=1, kernel_size=1, p_dropout=0., - proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, - proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN(hidden_channels, 
hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - x = x * x_mask - return x - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * 
x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
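The block_mask constructed just below confines each query to a window of block_length positions on either side. A tiny standalone sketch of that banded pattern, with an arbitrary toy length and window size:

```python
import torch

L, w = 6, 2                                       # toy sequence length and window size
scores = torch.zeros(1, 1, L, L)                  # [batch, heads, t_t, t_s]
band = torch.ones_like(scores).triu(-w).tril(w)   # 1 inside the +/- w band, 0 elsewhere
masked = scores.masked_fill(band == 0, -1e4)      # out-of-window positions get a large negative bias
print(band[0, 0].int())
```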
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/ChuanhuAgent.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/ChuanhuAgent.py deleted file mode 100644 index c3cb944d3d4a5f60f1402445dc52a3501f466916..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/ChuanhuAgent.py +++ /dev/null @@ -1,216 +0,0 @@ -from langchain.chains.summarize import load_summarize_chain -from langchain import PromptTemplate, LLMChain -from langchain.chat_models import ChatOpenAI -from langchain.prompts import PromptTemplate -from langchain.text_splitter import TokenTextSplitter -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import FAISS -from langchain.chains import RetrievalQA -from langchain.agents import load_tools -from langchain.agents import initialize_agent -from langchain.agents import AgentType -from langchain.docstore.document import Document -from langchain.tools import BaseTool, StructuredTool, Tool, tool -from langchain.callbacks.stdout import StdOutCallbackHandler -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler -from langchain.callbacks.manager import BaseCallbackManager -from duckduckgo_search import DDGS -from itertools import islice - -from typing import Any, Dict, List, Optional, Union - -from langchain.callbacks.base import BaseCallbackHandler -from langchain.input import print_text -from langchain.schema import AgentAction, AgentFinish, LLMResult - -from pydantic import BaseModel, Field - -import requests -from bs4 import BeautifulSoup -from threading import Thread, Condition -from collections import deque - -from .base_model import BaseLLMModel, CallbackToIterator, ChuanhuCallbackHandler -from ..config import default_chuanhu_assistant_model -from ..presets import SUMMARIZE_PROMPT, i18n -from ..index_func import construct_index - -from langchain.callbacks 
import get_openai_callback -import os -import gradio as gr -import logging - -class GoogleSearchInput(BaseModel): - keywords: str = Field(description="keywords to search") - -class WebBrowsingInput(BaseModel): - url: str = Field(description="URL of a webpage") - -class WebAskingInput(BaseModel): - url: str = Field(description="URL of a webpage") - question: str = Field(description="Question that you want to know the answer to, based on the webpage's content.") - - -class ChuanhuAgent_Client(BaseLLMModel): - def __init__(self, model_name, openai_api_key, user_name="") -> None: - super().__init__(model_name=model_name, user=user_name) - self.text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30) - self.api_key = openai_api_key - self.llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name=default_chuanhu_assistant_model, openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - self.cheap_llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name="gpt-3.5-turbo", openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - PROMPT = PromptTemplate(template=SUMMARIZE_PROMPT, input_variables=["text"]) - self.summarize_chain = load_summarize_chain(self.cheap_llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT) - self.index_summary = None - self.index = None - if "Pro" in self.model_name: - self.tools = load_tools(["serpapi", "google-search-results-json", "llm-math", "arxiv", "wikipedia", "wolfram-alpha"], llm=self.llm) - else: - self.tools = load_tools(["ddg-search", "llm-math", "arxiv", "wikipedia"], llm=self.llm) - self.tools.append( - Tool.from_function( - func=self.google_search_simple, - name="Google Search JSON", - description="useful when you need to search the web.", - args_schema=GoogleSearchInput - ) - ) - - self.tools.append( - Tool.from_function( - func=self.summary_url, - name="Summary Webpage", - description="useful when you need to know the overall content of a webpage.", - args_schema=WebBrowsingInput - ) - ) - - self.tools.append( - StructuredTool.from_function( - func=self.ask_url, - name="Ask Webpage", - description="useful when you need to ask detailed questions about a webpage.", - args_schema=WebAskingInput - ) - ) - - def google_search_simple(self, query): - results = [] - with DDGS() as ddgs: - ddgs_gen = ddgs.text("notes from a dead house", backend="lite") - for r in islice(ddgs_gen, 10): - results.append({ - "title": r["title"], - "link": r["href"], - "snippet": r["body"] - }) - return str(results) - - def handle_file_upload(self, files, chatbot, language): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - self.index = index - status = i18n("索引构建完成") - # Summarize the document - logging.info(i18n("生成内容总结中……")) - with get_openai_callback() as cb: - os.environ["OPENAI_API_KEY"] = self.api_key - from langchain.chains.summarize import load_summarize_chain - from langchain.prompts import PromptTemplate - from langchain.chat_models import ChatOpenAI - prompt_template = "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN " + language + ":" - PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"]) - llm = ChatOpenAI() - chain = load_summarize_chain(llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT) - summary = 
chain({"input_documents": list(index.docstore.__dict__["_dict"].values())}, return_only_outputs=True)["output_text"] - logging.info(f"Summary: {summary}") - self.index_summary = summary - chatbot.append((f"Uploaded {len(files)} files", summary)) - logging.info(cb) - return gr.Files.update(), chatbot, status - - def query_index(self, query): - if self.index is not None: - retriever = self.index.as_retriever() - qa = RetrievalQA.from_chain_type(llm=self.llm, chain_type="stuff", retriever=retriever) - return qa.run(query) - else: - "Error during query." - - def summary(self, text): - texts = Document(page_content=text) - texts = self.text_splitter.split_documents([texts]) - return self.summarize_chain({"input_documents": texts}, return_only_outputs=True)["output_text"] - - def fetch_url_content(self, url): - response = requests.get(url) - soup = BeautifulSoup(response.text, 'html.parser') - - # 提取所有的文本 - text = ''.join(s.getText() for s in soup.find_all('p')) - logging.info(f"Extracted text from {url}") - return text - - def summary_url(self, url): - text = self.fetch_url_content(url) - if text == "": - return "URL unavailable." - text_summary = self.summary(text) - url_content = "webpage content summary:\n" + text_summary - - return url_content - - def ask_url(self, url, question): - text = self.fetch_url_content(url) - if text == "": - return "URL unavailable." - texts = Document(page_content=text) - texts = self.text_splitter.split_documents([texts]) - # use embedding - embeddings = OpenAIEmbeddings(openai_api_key=self.api_key, openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - - # create vectorstore - db = FAISS.from_documents(texts, embeddings) - retriever = db.as_retriever() - qa = RetrievalQA.from_chain_type(llm=self.cheap_llm, chain_type="stuff", retriever=retriever) - return qa.run(f"{question} Reply in 中文") - - def get_answer_at_once(self): - question = self.history[-1]["content"] - # llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo") - agent = initialize_agent(self.tools, self.llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) - reply = agent.run(input=f"{question} Reply in 简体中文") - return reply, -1 - - def get_answer_stream_iter(self): - question = self.history[-1]["content"] - it = CallbackToIterator() - manager = BaseCallbackManager(handlers=[ChuanhuCallbackHandler(it.callback)]) - def thread_func(): - tools = self.tools - if self.index is not None: - tools.append( - Tool.from_function( - func=self.query_index, - name="Query Knowledge Base", - description=f"useful when you need to know about: {self.index_summary}", - args_schema=WebBrowsingInput - ) - ) - agent = initialize_agent(self.tools, self.llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, callback_manager=manager) - try: - reply = agent.run(input=f"{question} Reply in 简体中文") - except Exception as e: - import traceback - traceback.print_exc() - reply = str(e) - it.callback(reply) - it.finish() - t = Thread(target=thread_func) - t.start() - partial_text = "" - for value in it: - partial_text += value - yield partial_text diff --git a/spaces/KEINIE/Emory_Oxford_GER_Expert/README.md b/spaces/KEINIE/Emory_Oxford_GER_Expert/README.md deleted file mode 100644 index a8719d55debd2ea8175af6cc1ff6b8e49e53bd94..0000000000000000000000000000000000000000 --- a/spaces/KEINIE/Emory_Oxford_GER_Expert/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TESTBOT -emoji: 🐠 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: 
false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KonradSzafer/HF-QA-Demo/data/hugging_face_docs_dataset.py b/spaces/KonradSzafer/HF-QA-Demo/data/hugging_face_docs_dataset.py deleted file mode 100644 index 27f80fc7a2dd72b551f63d382ada8ff218f20273..0000000000000000000000000000000000000000 --- a/spaces/KonradSzafer/HF-QA-Demo/data/hugging_face_docs_dataset.py +++ /dev/null @@ -1,190 +0,0 @@ -import glob -import json -import os -import re -import subprocess -from typing import List - -import requests -import pandas as pd -from bs4 import BeautifulSoup -from markdown import markdown -import nbformat -from nbconvert import MarkdownExporter -from nbconvert.preprocessors import Preprocessor, ClearOutputPreprocessor -from tqdm import tqdm - - -VALIDATE_URLS = False - - -def download_repositories(repo_urls_file: str, repo_dir: str): - """ - Downloads the Hugging Face repositories. - """ - if not os.path.exists(repo_dir): - os.makedirs(repo_dir) - with open(repo_urls_file, "r") as f: - repositories_urls = json.load(f)["urls"] - print(f'Downloading {len(repositories_urls)} repositories') - for url in repositories_urls: - try: - subprocess.run(["git", "clone", url], cwd=repo_dir) - except subprocess.CalledProcessError as e: - print("Command failed with error:", e.stderr) - - -class EmptyCellPreprocessor(Preprocessor): - def preprocess_cell(self, cell, resources, index): - if cell.source.strip() == '': - cell.source = '' - cell.cell_type = 'raw' - return cell, resources - - -def convert_notebook_to_txt(filename: str): - """ - Converts a notebook to a markdown file. - """ - with open(filename) as f: - notebook = nbformat.read(f, as_version=4) - # id validation error fix - for cell in notebook['cells']: - cell['id'] = str(cell['id']) - - clear_output = ClearOutputPreprocessor() - notebook, resources = clear_output.preprocess(notebook, {}) - - exporter = MarkdownExporter() - exporter.register_preprocessor(EmptyCellPreprocessor, enabled=True) - output_notebook_text, resources = exporter.from_notebook_node(notebook) - - new_filename = filename.replace('.ipynb', '_ipynb.md') - with open(new_filename, 'w') as f: - f.write(output_notebook_text) - return new_filename - - -def extract_files_from_directories( - repo_urls_file: str, - repo_dir: str, - docs_dir: str, - files_extensions: List[str] -) -> None: - - """ - This function reads markdown and markdownx files from the repositories directory, - filters out non-English files, and adds the source GitHub URL as the first line of each file. - The resulting files are saved in the docs_dir. 
- """ - languages = pd.read_csv("language-codes.csv").loc[:,"alpha2"].tolist() - languages.remove("en") - - files = [ - filename - for extension in files_extensions - for filename in glob.glob(repo_dir + f"**/*{extension}", recursive=True) - ] - print(f'Used extensions: {", ".join(files_extensions)}') - print(f'Found {len(files)} files') - - repo_urls = [] - with open(repo_urls_file, "r") as f: - repo_urls = json.load(f)["urls"] - - # filter out the files that are not in english - filtered_files = [] - for filename in files: - sep_file = filename.split("/") - for seq in sep_file: - if seq in languages: - break - else: - filtered_files.append(filename) - print(f'Found {len(filtered_files)} files in English') - - # generate a GitHub URL for a file based on its name and a list of possible repository URLs - def get_github_url(filename: str, repo_urls: str, repo_dir: str) -> str: - source = filename.replace(repo_dir, '') - repo_name, file_path = source.split('/', 1) - repo_url_prefix = None - for repo_url in repo_urls: - if repo_name == repo_url.split('/')[-1]: - repo_url_prefix = repo_url - break - if not repo_url_prefix: - raise ValueError(f"Repo URL not found for {repo_name}") - url = f'{repo_url_prefix}/blob/main/{file_path}' - if VALIDATE_URLS: - try: - response = requests.get(url) - response.raise_for_status() - except: - print(f'filename: {filename}') - print(f'repo: {repo_name}, file: {file_path}') - print(f'url: {url}') - raise - return url - - # creates a valid filename by replacing certain characters and removing the repo_dir path - def create_filename_from_path(filename: str, repo_dir: str) -> str: - filename = filename.replace(repo_dir, '') - chars_to_replace = ['/', '{', '}', '-', '.'] - filename = ''.join(['_' if c in chars_to_replace else c for c in filename]) - return filename - - # copy the files with the source added in the first line - if not os.path.exists(docs_dir): - os.makedirs(docs_dir) - copied_files = [] - for filename in tqdm(filtered_files): - source_url = get_github_url(filename, repo_urls, repo_dir) - data = f"source: {source_url}\n\n" - # convert jupyter notebooks to txt files - try: - if filename.endswith('.ipynb'): - filename = convert_notebook_to_txt(filename) - # rename and copy files - with open(filename, 'r') as f: - data += f.read() - output_filename = docs_dir + create_filename_from_path(filename, repo_dir) - with open(output_filename, 'w') as f: - f.write(data) - if not os.path.isfile(output_filename): - raise ValueError(f"Failed to create the output file: {output_filename}") - copied_files.append(output_filename) - except Exception as ex: - print(f'Failed to copy file {filename}: {ex}') - - print(f'Successfully copied {len(set(copied_files))}/{len(filtered_files)} files') - - -def markdown_cleaner(data: str): - """ - Clean markdown text. - - Args: - data (str): The markdown text to be cleaned. - - Returns: - str: The cleaned markdown text. 
- """ - soupped = BeautifulSoup(markdown(data), "html.parser") - raw_text = ''.join(soupped.findAll(string=True)) - clean_text = re.sub(r"", "", raw_text, flags=re.DOTALL) - # remove any special tokens e.g <|endoftext|> - clean_text = re.sub(r"<\|endoftext\|>", "", clean_text, flags=re.DOTALL) - # discard non english text - clean_text = re.sub(r"[^a-zA-Z0-9\s]", "", clean_text, flags=re.DOTALL) - return "\n".join([t for t in clean_text.split("\n") if t]) - - -if __name__ == '__main__': - repo_urls_file = "./datasets/hf_repositories_urls.json" - repo_dir = "./datasets/huggingface_repositories/" - docs_dir = "./datasets/huggingface_docs/" - download_repositories(repo_urls_file, repo_dir) - extract_files_from_directories( - repo_urls_file, repo_dir, docs_dir, - files_extensions=['.md', '.mdx', '.ipynb'] - ) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/backbones/csp_darknet.py b/spaces/KyanChen/RSPrompter/mmdet/models/backbones/csp_darknet.py deleted file mode 100644 index a890b486f255befa23fe5a3e9746f8f9298ac33f..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/backbones/csp_darknet.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmengine.model import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from mmdet.registry import MODELS -from ..layers import CSPLayer - - -class Focus(nn.Module): - """Focus width and height information into channel space. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - kernel_size (int): The kernel size of the convolution. Default: 1 - stride (int): The stride of the convolution. Default: 1 - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', momentum=0.03, eps=0.001). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish'). - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=1, - stride=1, - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish')): - super().__init__() - self.conv = ConvModule( - in_channels * 4, - out_channels, - kernel_size, - stride, - padding=(kernel_size - 1) // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - # shape of x (b,c,w,h) -> y(b,4c,w/2,h/2) - patch_top_left = x[..., ::2, ::2] - patch_top_right = x[..., ::2, 1::2] - patch_bot_left = x[..., 1::2, ::2] - patch_bot_right = x[..., 1::2, 1::2] - x = torch.cat( - ( - patch_top_left, - patch_bot_left, - patch_top_right, - patch_bot_right, - ), - dim=1, - ) - return self.conv(x) - - -class SPPBottleneck(BaseModule): - """Spatial pyramid pooling layer used in YOLOv3-SPP. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - kernel_sizes (tuple[int]): Sequential of kernel sizes of pooling - layers. Default: (5, 9, 13). - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_sizes=(5, 9, 13), - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=None): - super().__init__(init_cfg) - mid_channels = in_channels // 2 - self.conv1 = ConvModule( - in_channels, - mid_channels, - 1, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.poolings = nn.ModuleList([ - nn.MaxPool2d(kernel_size=ks, stride=1, padding=ks // 2) - for ks in kernel_sizes - ]) - conv2_channels = mid_channels * (len(kernel_sizes) + 1) - self.conv2 = ConvModule( - conv2_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - x = self.conv1(x) - with torch.cuda.amp.autocast(enabled=False): - x = torch.cat( - [x] + [pooling(x) for pooling in self.poolings], dim=1) - x = self.conv2(x) - return x - - -@MODELS.register_module() -class CSPDarknet(BaseModule): - """CSP-Darknet backbone used in YOLOv5 and YOLOX. - - Args: - arch (str): Architecture of CSP-Darknet, from {P5, P6}. - Default: P5. - deepen_factor (float): Depth multiplier, multiply number of - blocks in CSP layer by this amount. Default: 1.0. - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - out_indices (Sequence[int]): Output from which stages. - Default: (2, 3, 4). - frozen_stages (int): Stages to be frozen (stop grad and set eval - mode). -1 means not freezing any parameters. Default: -1. - use_depthwise (bool): Whether to use depthwise separable convolution. - Default: False. - arch_ovewrite(list): Overwrite default arch settings. Default: None. - spp_kernal_sizes: (tuple[int]): Sequential of kernel sizes of SPP - layers. Default: (5, 9, 13). - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Example: - >>> from mmdet.models import CSPDarknet - >>> import torch - >>> self = CSPDarknet(depth=53) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 416, 416) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - ... 
- (1, 256, 52, 52) - (1, 512, 26, 26) - (1, 1024, 13, 13) - """ - # From left to right: - # in_channels, out_channels, num_blocks, add_identity, use_spp - arch_settings = { - 'P5': [[64, 128, 3, True, False], [128, 256, 9, True, False], - [256, 512, 9, True, False], [512, 1024, 3, False, True]], - 'P6': [[64, 128, 3, True, False], [128, 256, 9, True, False], - [256, 512, 9, True, False], [512, 768, 3, True, False], - [768, 1024, 3, False, True]] - } - - def __init__(self, - arch='P5', - deepen_factor=1.0, - widen_factor=1.0, - out_indices=(2, 3, 4), - frozen_stages=-1, - use_depthwise=False, - arch_ovewrite=None, - spp_kernal_sizes=(5, 9, 13), - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - norm_eval=False, - init_cfg=dict( - type='Kaiming', - layer='Conv2d', - a=math.sqrt(5), - distribution='uniform', - mode='fan_in', - nonlinearity='leaky_relu')): - super().__init__(init_cfg) - arch_setting = self.arch_settings[arch] - if arch_ovewrite: - arch_setting = arch_ovewrite - assert set(out_indices).issubset( - i for i in range(len(arch_setting) + 1)) - if frozen_stages not in range(-1, len(arch_setting) + 1): - raise ValueError('frozen_stages must be in range(-1, ' - 'len(arch_setting) + 1). But received ' - f'{frozen_stages}') - - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.use_depthwise = use_depthwise - self.norm_eval = norm_eval - conv = DepthwiseSeparableConvModule if use_depthwise else ConvModule - - self.stem = Focus( - 3, - int(arch_setting[0][0] * widen_factor), - kernel_size=3, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.layers = ['stem'] - - for i, (in_channels, out_channels, num_blocks, add_identity, - use_spp) in enumerate(arch_setting): - in_channels = int(in_channels * widen_factor) - out_channels = int(out_channels * widen_factor) - num_blocks = max(round(num_blocks * deepen_factor), 1) - stage = [] - conv_layer = conv( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(conv_layer) - if use_spp: - spp = SPPBottleneck( - out_channels, - out_channels, - kernel_sizes=spp_kernal_sizes, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(spp) - csp_layer = CSPLayer( - out_channels, - out_channels, - num_blocks=num_blocks, - add_identity=add_identity, - use_depthwise=use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(csp_layer) - self.add_module(f'stage{i + 1}', nn.Sequential(*stage)) - self.layers.append(f'stage{i + 1}') - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for i in range(self.frozen_stages + 1): - m = getattr(self, self.layers[i]) - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(CSPDarknet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/spaces/Laden0p/Joeythemonster-anything-midjourney-v-4-1/README.md b/spaces/Laden0p/Joeythemonster-anything-midjourney-v-4-1/README.md deleted file mode 100644 index c136c175f931c12401006d5e66be800192e1aa01..0000000000000000000000000000000000000000 --- 
a/spaces/Laden0p/Joeythemonster-anything-midjourney-v-4-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Joeythemonster Anything Midjourney V 4 1
-emoji: 📈
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/LanguageBind/LanguageBind/open_clip/transform.py b/spaces/LanguageBind/LanguageBind/open_clip/transform.py
deleted file mode 100644
index 748884a3c7cb7ece1ca521ca1dbf40bb74855007..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/open_clip/transform.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import warnings
-from dataclasses import dataclass, asdict
-from typing import Any, Dict, Optional, Sequence, Tuple, Union
-
-import torch
-import torch.nn as nn
-import torchvision.transforms.functional as F
-
-from torchvision.transforms import Normalize, Compose, RandomResizedCrop, InterpolationMode, ToTensor, Resize, \
-    CenterCrop
-
-from .constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD
-
-
-@dataclass
-class AugmentationCfg:
-    scale: Tuple[float, float] = (0.9, 1.0)
-    ratio: Optional[Tuple[float, float]] = None
-    color_jitter: Optional[Union[float, Tuple[float, float, float]]] = None
-    interpolation: Optional[str] = None
-    re_prob: Optional[float] = None
-    re_count: Optional[int] = None
-    use_timm: bool = False
-
-
-class ResizeMaxSize(nn.Module):
-
-    def __init__(self, max_size, interpolation=InterpolationMode.BICUBIC, fn='max', fill=0):
-        super().__init__()
-        if not isinstance(max_size, int):
-            raise TypeError(f"Size should be int. Got {type(max_size)}")
-        self.max_size = max_size
-        self.interpolation = interpolation
-        self.fn = min if fn == 'min' else max  # was "min if fn == 'min' else min", an apparent typo; self.fn is otherwise unused
-        self.fill = fill
-
-    def forward(self, img):
-        if isinstance(img, torch.Tensor):
-            height, width = img.shape[:2]
-        else:
-            width, height = img.size
-        scale = self.max_size / float(max(height, width))
-        if scale != 1.0:
-            new_size = tuple(round(dim * scale) for dim in (height, width))
-            img = F.resize(img, new_size, self.interpolation)
-            pad_h = self.max_size - new_size[0]
-            pad_w = self.max_size - new_size[1]
-            img = F.pad(img, padding=[pad_w//2, pad_h//2, pad_w - pad_w//2, pad_h - pad_h//2], fill=self.fill)
-        return img
-
-
-def _convert_to_rgb(image):
-    return image.convert('RGB')
-
-
-def image_transform(
-        image_size: int,
-        is_train: bool,
-        mean: Optional[Tuple[float, ...]] = None,
-        std: Optional[Tuple[float, ...]] = None,
-        resize_longest_max: bool = False,
-        fill_color: int = 0,
-        aug_cfg: Optional[Union[Dict[str, Any], AugmentationCfg]] = None,
-):
-    mean = mean or OPENAI_DATASET_MEAN
-    if not isinstance(mean, (list, tuple)):
-        mean = (mean,) * 3
-
-    std = std or OPENAI_DATASET_STD
-    if not isinstance(std, (list, tuple)):
-        std = (std,) * 3
-
-    if isinstance(image_size, (list, tuple)) and image_size[0] == image_size[1]:
-        # for square size, pass size as int so that Resize() uses aspect preserving shortest edge
-        image_size = image_size[0]
-
-    if isinstance(aug_cfg, dict):
-        aug_cfg = AugmentationCfg(**aug_cfg)
-    else:
-        aug_cfg = aug_cfg or AugmentationCfg()
-    normalize = Normalize(mean=mean, std=std)
-    if is_train:
-        aug_cfg_dict = {k: v for k, v in asdict(aug_cfg).items() if v is not None}
-        use_timm = aug_cfg_dict.pop('use_timm', False)
-        if use_timm:
-            from timm.data import create_transform  # timm can still be optional
-            if isinstance(image_size, (tuple, list)):
-                assert len(image_size) >= 2
-                input_size = (3,) +
image_size[-2:] - else: - input_size = (3, image_size, image_size) - # by default, timm aug randomly alternates bicubic & bilinear for better robustness at inference time - aug_cfg_dict.setdefault('interpolation', 'random') - aug_cfg_dict.setdefault('color_jitter', None) # disable by default - train_transform = create_transform( - input_size=input_size, - is_training=True, - hflip=0., - mean=mean, - std=std, - re_mode='pixel', - **aug_cfg_dict, - ) - else: - train_transform = Compose([ - RandomResizedCrop( - image_size, - scale=aug_cfg_dict.pop('scale'), - interpolation=InterpolationMode.BICUBIC, - ), - _convert_to_rgb, - ToTensor(), - normalize, - ]) - if aug_cfg_dict: - warnings.warn(f'Unused augmentation cfg items, specify `use_timm` to use ({list(aug_cfg_dict.keys())}).') - return train_transform - else: - if resize_longest_max: - transforms = [ - ResizeMaxSize(image_size, fill=fill_color) - ] - else: - transforms = [ - Resize(image_size, interpolation=InterpolationMode.BICUBIC), - CenterCrop(image_size), - ] - transforms.extend([ - _convert_to_rgb, - ToTensor(), - normalize, - ]) - return Compose(transforms) diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/crnn_tps.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/crnn_tps.py deleted file mode 100644 index 9719eb3c521cee55beee1711a73bd29a07d10366..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/crnn_tps.py +++ /dev/null @@ -1,18 +0,0 @@ -# model -label_convertor = dict( - type='CTCConvertor', dict_type='DICT36', with_unknown=False, lower=True) - -model = dict( - type='CRNNNet', - preprocessor=dict( - type='TPSPreprocessor', - num_fiducial=20, - img_size=(32, 100), - rectified_img_size=(32, 100), - num_img_channel=1), - backbone=dict(type='VeryDeepVgg', leaky_relu=False, input_channels=1), - encoder=None, - decoder=dict(type='CRNNDecoder', in_channels=512, rnn_flag=True), - loss=dict(type='CTCLoss'), - label_convertor=label_convertor, - pretrained=None) diff --git a/spaces/MarcoLYH/Extractive-QA-Chatbot/Retriever_Model.py b/spaces/MarcoLYH/Extractive-QA-Chatbot/Retriever_Model.py deleted file mode 100644 index e32377efb7f32186412ea8795af551cebf05cc21..0000000000000000000000000000000000000000 --- a/spaces/MarcoLYH/Extractive-QA-Chatbot/Retriever_Model.py +++ /dev/null @@ -1,35 +0,0 @@ -from sentence_transformers import SentenceTransformer -from sklearn.metrics.pairwise import cosine_similarity -import json -import pickle - -### Retrieve context - -class Retriever: - def __init__(self, - model = SentenceTransformer('all-MiniLM-L6-v2'), - cosine_threshold = 0.5): - - self.model = model - self.cosine_threshold = cosine_threshold - - - def Retrieve_Context(self, query): - - # load context embeddings - save_path = "Context_Embedding.pickle" - with open(save_path, "rb") as file: - dataset_context = pickle.load(file) - - query_embedding = self.model.encode(query) - - # compare query and context meaning to retrieve related documents - results = [] - for i in dataset_context: - distance = cosine_similarity(dataset_context[i]['context_embedding'].reshape((1,-1)), query_embedding.reshape((1,-1)))[0][0] - context = dataset_context[i]['context_text'] - results += [(i, context, distance)] - # get the highest score embedding - results = sorted(results, key=lambda x: x[2], reverse=True) - - return results[0][1] \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/model.py 
b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/model.py deleted file mode 100644 index b089eebbe1676d8249005bb9def002ff5180715b..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/model.py +++ /dev/null @@ -1,852 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np -from einops import rearrange -from typing import Optional, Any - -from ldm.modules.attention import MemoryEfficientCrossAttention - -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except: - XFORMERS_IS_AVAILBLE = False - print("No module 'xformers'. Proceeding without it.") - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". - """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels, num_groups=32): - return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - 
kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - -class MemoryEfficientAttnBlock(nn.Module): - """ - Uses xformers efficient implementation, - see https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223 - Note: this is a single-head self-attention operation - """ - # - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.attention_op: Optional[Any] = None - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - B, C, H, W = q.shape - q, k, v = map(lambda x: rearrange(x, 'b c h w -> b (h w) c'), (q, k, v)) - - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(B, t.shape[1], 1, C) - .permute(0, 2, 1, 3) - .reshape(B * 1, t.shape[1], C) - .contiguous(), - (q, k, v), - ) - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - - out = ( - out.unsqueeze(0) - .reshape(B, 1, out.shape[1], C) - .permute(0, 2, 1, 3) - .reshape(B, out.shape[1], C) - ) - out = rearrange(out, 'b (h w) c -> b c h w', b=B, h=H, w=W, c=C) - out = self.proj_out(out) - return x+out - - -class MemoryEfficientCrossAttentionWrapper(MemoryEfficientCrossAttention): - def forward(self, x, context=None, mask=None): - b, c, h, w = x.shape - x = rearrange(x, 'b c h w -> b (h w) 
c') - out = super().forward(x, context=context, mask=mask) - out = rearrange(out, 'b (h w) c -> b c h w', h=h, w=w, c=c) - return x + out - - -def make_attn(in_channels, attn_type="vanilla", attn_kwargs=None): - assert attn_type in ["vanilla", "vanilla-xformers", "memory-efficient-cross-attn", "linear", "none"], f'attn_type {attn_type} unknown' - if XFORMERS_IS_AVAILBLE and attn_type == "vanilla": - attn_type = "vanilla-xformers" - print(f"making attention of type '{attn_type}' with {in_channels} in_channels") - if attn_type == "vanilla": - assert attn_kwargs is None - return AttnBlock(in_channels) - elif attn_type == "vanilla-xformers": - print(f"building MemoryEfficientAttnBlock with {in_channels} in_channels...") - return MemoryEfficientAttnBlock(in_channels) - elif type == "memory-efficient-cross-attn": - attn_kwargs["query_dim"] = in_channels - return MemoryEfficientCrossAttentionWrapper(**attn_kwargs) - elif attn_type == "none": - return nn.Identity(in_channels) - else: - raise NotImplementedError() - - -class Model(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, 
attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x, t=None, context=None): - #assert x.shape[2] == x.shape[3] == self.resolution - if context is not None: - # assume aligned context, cat along channel axis - x = torch.cat((x, context), dim=1) - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - def get_last_layer(self): - return self.conv_out.weight - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, use_linear_attn=False, attn_type="vanilla", - **ignore_kwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.in_ch_mult = in_ch_mult - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - 
dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False, - attn_type="vanilla", **ignorekwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - self.tanh_out = tanh_out - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if 
len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - if self.tanh_out: - h = torch.tanh(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class LatentRescaler(nn.Module): - def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2): - super().__init__() - # residual block, interpolate, residual block - self.factor = factor - self.conv_in = nn.Conv2d(in_channels, - mid_channels, - kernel_size=3, - stride=1, - padding=1) - self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - self.attn = AttnBlock(mid_channels) - self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - - self.conv_out = nn.Conv2d(mid_channels, - out_channels, - kernel_size=1, - ) - - def forward(self, x): - x = self.conv_in(x) - for block in self.res_block1: - x = block(x, None) - x = 
torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor)))) - x = self.attn(x) - for block in self.res_block2: - x = block(x, None) - x = self.conv_out(x) - return x - - -class MergedRescaleEncoder(nn.Module): - def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, - ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1): - super().__init__() - intermediate_chn = ch * ch_mult[-1] - self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult, - z_channels=intermediate_chn, double_z=False, resolution=resolution, - attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv, - out_ch=None) - self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn, - mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth) - - def forward(self, x): - x = self.encoder(x) - x = self.rescaler(x) - return x - - -class MergedRescaleDecoder(nn.Module): - def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8), - dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1): - super().__init__() - tmp_chn = z_channels*ch_mult[-1] - self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout, - resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks, - ch_mult=ch_mult, resolution=resolution, ch=ch) - self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn, - out_channels=tmp_chn, depth=rescale_module_depth) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Upsampler(nn.Module): - def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2): - super().__init__() - assert out_size >= in_size - num_blocks = int(np.log2(out_size//in_size))+1 - factor_up = 1.+ (out_size % in_size) - print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}") - self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels, - out_channels=in_channels) - self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2, - attn_resolutions=[], in_channels=None, ch=in_channels, - ch_mult=[ch_mult for _ in range(num_blocks)]) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Resize(nn.Module): - def __init__(self, in_channels=None, learned=False, mode="bilinear"): - super().__init__() - self.with_conv = learned - self.mode = mode - if self.with_conv: - print(f"Note: {self.__class__.__name} uses learned downsampling and will ignore the fixed {mode} mode") - raise NotImplementedError() - assert in_channels is not None - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=4, - stride=2, - padding=1) - - def forward(self, x, scale_factor=1.0): - if scale_factor==1.0: - return x - else: - x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor) - return x diff --git a/spaces/MountLiteraSwd/sd-dreambooth-library-riffusion-rage/app.py b/spaces/MountLiteraSwd/sd-dreambooth-library-riffusion-rage/app.py deleted file mode 100644 index 
0e03d1b08827333aea954edb0bc06bdd8731214c..0000000000000000000000000000000000000000 --- a/spaces/MountLiteraSwd/sd-dreambooth-library-riffusion-rage/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/sd-dreambooth-library/riffusion-rage").launch() \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/__init__.py deleted file mode 100644 index 40cd21686174fe2831ab8bc0693e283297955125..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .evaluator import * # NOQA -from .metrics import * # NOQA diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/module_losses/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/module_losses/__init__.py deleted file mode 100644 index 9af5550ae843622d0fa2ff81a23d7c825c3c43fd..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/module_losses/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .sdmgr_module_loss import SDMGRModuleLoss - -__all__ = ['SDMGRModuleLoss'] diff --git a/spaces/MrSinan/LFW-MaskedRecogntion/fit_ellipse.py b/spaces/MrSinan/LFW-MaskedRecogntion/fit_ellipse.py deleted file mode 100644 index c17e1201bbb3ab8ae5484a417e657abac194d2fc..0000000000000000000000000000000000000000 --- a/spaces/MrSinan/LFW-MaskedRecogntion/fit_ellipse.py +++ /dev/null @@ -1,64 +0,0 @@ -# Author: aqeelanwar -# Created: 4 May,2020, 1:30 AM -# Email: aqeel.anwar@gatech.edu - -import numpy as np -from numpy.linalg import eig, inv - -def fitEllipse(x,y): - x = x[:,np.newaxis] - y = y[:,np.newaxis] - D = np.hstack((x*x, x*y, y*y, x, y, np.ones_like(x))) - S = np.dot(D.T,D) - C = np.zeros([6,6]) - C[0,2] = C[2,0] = 2; C[1,1] = -1 - E, V = eig(np.dot(inv(S), C)) - n = np.argmax(np.abs(E)) - a = V[:,n] - return a - -def ellipse_center(a): - b,c,d,f,g,a = a[1]/2, a[2], a[3]/2, a[4]/2, a[5], a[0] - num = b*b-a*c - x0=(c*d-b*f)/num - y0=(a*f-b*d)/num - return np.array([x0,y0]) - - -def ellipse_angle_of_rotation( a ): - b,c,d,f,g,a = a[1]/2, a[2], a[3]/2, a[4]/2, a[5], a[0] - return 0.5*np.arctan(2*b/(a-c)) - - -def ellipse_axis_length( a ): - b,c,d,f,g,a = a[1]/2, a[2], a[3]/2, a[4]/2, a[5], a[0] - up = 2*(a*f*f+c*d*d+g*b*b-2*b*d*f-a*c*g) - down1=(b*b-a*c)*( (c-a)*np.sqrt(1+4*b*b/((a-c)*(a-c)))-(c+a)) - down2=(b*b-a*c)*( (a-c)*np.sqrt(1+4*b*b/((a-c)*(a-c)))-(c+a)) - res1=np.sqrt(up/down1) - res2=np.sqrt(up/down2) - return np.array([res1, res2]) - -def ellipse_angle_of_rotation2( a ): - b,c,d,f,g,a = a[1]/2, a[2], a[3]/2, a[4]/2, a[5], a[0] - if b == 0: - if a > c: - return 0 - else: - return np.pi/2 - else: - if a > c: - return np.arctan(2*b/(a-c))/2 - else: - return np.pi/2 + np.arctan(2*b/(a-c))/2 - -# a = fitEllipse(x,y) -# center = ellipse_center(a) -# #phi = ellipse_angle_of_rotation(a) -# phi = ellipse_angle_of_rotation2(a) -# axes = ellipse_axis_length(a) -# -# print("center = ", center) -# print("angle of rotation = ", phi) -# print("axes = ", axes) - diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/opts.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/opts.py deleted file mode 100644 index 778e512361727de0939bbd7b014e6eeb716a0c67..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/opts.py +++ /dev/null @@ 
-1,412 +0,0 @@ -from __future__ import print_function -import argparse - - -def if_use_feat(caption_model): - # Decide if load attention feature according to caption model - if caption_model in ['show_tell', 'all_img', 'fc', 'newfc']: - use_att, use_fc = False, True - elif caption_model == 'language_model': - use_att, use_fc = False, False - elif caption_model in ['updown', 'topdown']: - use_fc, use_att = True, True - else: - use_att, use_fc = True, False - return use_fc, use_att - -import pprint -class Config(object): - def __init__(self, **kwargs): - """Configuration Class: set kwargs as class attributes with setattr""" - for k, v in kwargs.items(): - setattr(self, k, v) - - @property - def config_str(self): - return pprint.pformat(self.__dict__) - - def __repr__(self): - """Pretty-print configurations in alphabetical order""" - config_str = 'Configurations\n' - config_str += self.config_str - return config_str - - -def parse_opt(parse=True, **optional_kwargs): - parser = argparse.ArgumentParser() - # Data input settings - parser.add_argument('--input_json', type=str, default='data/coco.json', - help='path to the json file containing additional info and vocab') - parser.add_argument('--input_fc_dir', type=str, default='data/cocotalk_fc', - help='path to the directory containing the preprocessed fc feats') - parser.add_argument('--input_att_dir', type=str, default='data/cocotalk_att', - help='path to the directory containing the preprocessed att feats') - parser.add_argument('--input_box_dir', type=str, default='data/cocotalk_box', - help='path to the directory containing the boxes of att feats') - parser.add_argument('--input_label_h5', type=str, default='data/coco_label.h5', - help='path to the h5file containing the preprocessed dataset') - parser.add_argument('--data_in_memory', action='store_true', - help='True if we want to save the features in memory') - parser.add_argument('--start_from', type=str, default=None, - help="""continue training from saved model at this path. 
Path must contain files saved by previous training process: - 'infos.pkl' : configuration; - 'model.pth' : weights - """) - parser.add_argument('--cached_tokens', type=str, default='coco-train-idxs', - help='Cached token file for calculating cider score during self critical training.') - - # Model settings - parser.add_argument('--caption_model', type=str, default="show_tell", - help='show_tell, show_attend_tell, all_img, fc, att2in, att2in2, att2all2, adaatt, adaattmo, updown, stackatt, denseatt, transformer') - parser.add_argument('--rnn_size', type=int, default=512, - help='size of the rnn in number of hidden nodes in each layer') - parser.add_argument('--num_layers', type=int, default=1, - help='number of layers in the RNN') - parser.add_argument('--rnn_type', type=str, default='lstm', - help='rnn, gru, or lstm') - parser.add_argument('--input_encoding_size', type=int, default=512, - help='the encoding size of each token in the vocabulary, and the image.') - parser.add_argument('--att_hid_size', type=int, default=512, - help='the hidden size of the attention MLP; only useful in show_attend_tell; 0 if not using hidden layer') - parser.add_argument('--fc_feat_size', type=int, default=2048, - help='2048 for resnet, 4096 for vgg') - parser.add_argument('--att_feat_size', type=int, default=2048, - help='2048 for resnet, 512 for vgg') - parser.add_argument('--logit_layers', type=int, default=1, - help='number of layers in the RNN') - - - parser.add_argument('--use_bn', type=int, default=0, - help='If 1, then do batch_normalization first in att_embed, if 2 then do bn both in the beginning and the end of att_embed') - - # feature manipulation - parser.add_argument('--norm_att_feat', type=int, default=0, - help='If normalize attention features') - parser.add_argument('--use_box', type=int, default=0, - help='If use box features') - parser.add_argument('--norm_box_feat', type=int, default=0, - help='If use box, do we normalize box feature') - - # Optimization: General - parser.add_argument('--max_epochs', type=int, default=-1, - help='number of epochs') - parser.add_argument('--batch_size', type=int, default=16, - help='minibatch size') - parser.add_argument('--grad_clip_mode', type=str, default='value', - help='value or norm') - parser.add_argument('--grad_clip_value', type=float, default=0.1, - help='clip gradients at this value/max_norm, 0 means no clipping') - parser.add_argument('--drop_prob_lm', type=float, default=0.5, - help='strength of dropout in the Language Model RNN') - parser.add_argument('--self_critical_after', type=int, default=-1, - help='After what epoch do we start finetuning the CNN? (-1 = disable; never finetune, 0 = finetune from start)') - parser.add_argument('--seq_per_img', type=int, default=5, - help='number of captions to sample for each image during training. Done for efficiency since CNN forward pass is expensive. E.g. coco has 5 sents/image') - - parser.add_argument('--verbose', type=int, default=0) - - # Sample related - add_eval_sample_opts(parser) - - #Optimization: for the Language Model - parser.add_argument('--optim', type=str, default='adam', - help='what update to use? rmsprop|sgd|sgdmom|adagrad|adam|adamw') - parser.add_argument('--learning_rate', type=float, default=4e-4, - help='learning rate') - parser.add_argument('--learning_rate_decay_start', type=int, default=-1, - help='at what iteration to start decaying learning rate? 
(-1 = dont) (in epoch)') - parser.add_argument('--learning_rate_decay_every', type=int, default=3, - help='every how many iterations thereafter to drop LR?(in epoch)') - parser.add_argument('--learning_rate_decay_rate', type=float, default=0.8, - help='every how many iterations thereafter to drop LR?(in epoch)') - parser.add_argument('--optim_alpha', type=float, default=0.9, - help='alpha for adam') - parser.add_argument('--optim_beta', type=float, default=0.999, - help='beta used for adam') - parser.add_argument('--optim_epsilon', type=float, default=1e-8, - help='epsilon that goes into denominator for smoothing') - parser.add_argument('--weight_decay', type=float, default=0, - help='weight_decay') - # Transformer - parser.add_argument('--label_smoothing', type=float, default=0, - help='') - parser.add_argument('--noamopt', action='store_true', - help='') - parser.add_argument('--noamopt_warmup', type=int, default=2000, - help='') - parser.add_argument('--noamopt_factor', type=float, default=1, - help='') - parser.add_argument('--reduce_on_plateau', action='store_true', - help='') - parser.add_argument('--reduce_on_plateau_factor', type=float, default=0.5, - help='') - parser.add_argument('--reduce_on_plateau_patience', type=int, default=3, - help='') - parser.add_argument('--cached_transformer', action='store_true', - help='') - - - parser.add_argument('--use_warmup', action='store_true', - help='warm up the learing rate?') - - parser.add_argument('--scheduled_sampling_start', type=int, default=-1, - help='at what iteration to start decay gt probability') - parser.add_argument('--scheduled_sampling_increase_every', type=int, default=5, - help='every how many iterations thereafter to gt probability') - parser.add_argument('--scheduled_sampling_increase_prob', type=float, default=0.05, - help='How much to update the prob') - parser.add_argument('--scheduled_sampling_max_prob', type=float, default=0.25, - help='Maximum scheduled sampling prob.') - - - # Evaluation/Checkpointing - parser.add_argument('--val_images_use', type=int, default=3200, - help='how many images to use when periodically evaluating the validation loss? (-1 = all)') - parser.add_argument('--save_checkpoint_every', type=int, default=2500, - help='how often to save a model checkpoint (in iterations)?') - parser.add_argument('--save_every_epoch', action='store_true', - help='Save checkpoint every epoch, will overwrite save_checkpoint_every') - parser.add_argument('--save_history_ckpt', type=int, default=0, - help='If save checkpoints at every save point') - parser.add_argument('--checkpoint_path', type=str, default=None, - help='directory to store checkpointed models') - parser.add_argument('--language_eval', type=int, default=0, - help='Evaluate language as well (1 = yes, 0 = no)? BLEU/CIDEr/METEOR/ROUGE_L? requires coco-caption code from Github.') - parser.add_argument('--losses_log_every', type=int, default=25, - help='How often do we snapshot losses, for inclusion in the progress dump? (0 = disable)') - parser.add_argument('--load_best_score', type=int, default=1, - help='Do we load previous best score when resuming training.') - - # misc - parser.add_argument('--id', type=str, default='', - help='an id identifying this run/job. 
used in cross-val and appended when writing progress files') - parser.add_argument('--train_only', type=int, default=0, - help='if true then use 80k, else use 110k') - - - # Reward - parser.add_argument('--cider_reward_weight', type=float, default=1, - help='The reward weight from cider') - parser.add_argument('--bleu_reward_weight', type=float, default=0, - help='The reward weight from bleu4') - - # Reward - parser.add_argument('--clipscore_reward_weight', type=float, default=1, - help='The reward weight from clipscore') - parser.add_argument('--use_clipscore', type=float, default=0, - help='Use CLIPScore') - parser.add_argument('--clipscore_mode', type=str, default='clip_s', - help='Which CLIPScore to use: clip_s|refclip_s') - - - # Structure_loss - parser.add_argument('--structure_loss_weight', type=float, default=1, - help='') - parser.add_argument('--structure_after', type=int, default=-1, - help='T') - parser.add_argument('--structure_loss_type', type=str, default='seqnll', - help='') - parser.add_argument('--struc_use_logsoftmax', action='store_true', help='') - parser.add_argument('--entropy_reward_weight', type=float, default=0, - help='Entropy reward, seems very interesting') - parser.add_argument('--self_cider_reward_weight', type=float, default=0, - help='self cider reward') - - # Used for self critical or structure. Used when sampling is need during training - parser.add_argument('--train_sample_n', type=int, default=16, - help='The reward weight from cider') - parser.add_argument('--train_sample_method', type=str, default='sample', - help='') - parser.add_argument('--train_beam_size', type=int, default=1, - help='') - - # Used for self critical - parser.add_argument('--sc_sample_method', type=str, default='greedy', - help='') - parser.add_argument('--sc_beam_size', type=int, default=1, - help='') - - - # For diversity evaluation during training - add_diversity_opts(parser) - - - # config - parser.add_argument('--cfg', type=str, default=None, - help='configuration; similar to what is used in detectron') - parser.add_argument( - '--set_cfgs', dest='set_cfgs', - help='Set config keys. Key value sequence seperate by whitespace.' - 'e.g. [key] [value] [key] [value]\n This has higher priority' - 'than cfg file but lower than other args. (You can only overwrite' - 'arguments that have alerady been defined in config file.)', - default=[], nargs='+') - # How will config be used - # 1) read cfg argument, and load the cfg file if it's not None - # 2) Overwrite cfg argument with set_cfgs - # 3) parse config argument to args. - # 4) in the end, parse command line argument and overwrite args - - # step 1: read cfg_fn - # args = parser.parse_args() - # Parse the arguments. - if parse: - args = parser.parse_args() - # For interative engironmnet (ex. 
jupyter) - else: - args = parser.parse_known_args()[0] - # print(args) - - # Namespace => Dictionary - kwargs = vars(args) - # for k, v in optional_kwargs.items(): - # setattr(args, k, v) - kwargs.update(optional_kwargs) - - args = Config(**kwargs) - - - if args.cfg is not None or args.set_cfgs is not None: - from .config import CfgNode - if args.cfg is not None: - # print('Read Cfg') - cn = CfgNode(CfgNode.load_yaml_with_base(args.cfg)) - # print(cn) - else: - cn = CfgNode() - if args.set_cfgs is not None: - cn.merge_from_list(args.set_cfgs) - for k,v in cn.items(): - if not hasattr(args, k): - import os - if 'LOCAL_RANK' in os.environ and os.environ['LOCAL_RANK'] != '0': - pass - else: - print('Warning: key %s not in args' % k) - - setattr(args, k, v) - - if parse: - args = parser.parse_args(namespace=args) - else: - args = parser.parse_known_args(namespace=args)[0] - - # Check if args are valid - assert args.rnn_size > 0, "rnn_size should be greater than 0" - assert args.num_layers > 0, "num_layers should be greater than 0" - assert args.input_encoding_size > 0, "input_encoding_size should be greater than 0" - assert args.batch_size > 0, "batch_size should be greater than 0" - assert args.drop_prob_lm >= 0 and args.drop_prob_lm < 1, "drop_prob_lm should be between 0 and 1" - assert args.seq_per_img > 0, "seq_per_img should be greater than 0" - assert args.beam_size > 0, "beam_size should be greater than 0" - assert args.save_checkpoint_every > 0, "save_checkpoint_every should be greater than 0" - assert args.losses_log_every > 0, "losses_log_every should be greater than 0" - assert args.language_eval == 0 or args.language_eval == 1, "language_eval should be 0 or 1" - assert args.load_best_score == 0 or args.load_best_score == 1, "language_eval should be 0 or 1" - assert args.train_only == 0 or args.train_only == 1, "language_eval should be 0 or 1" - - # default value for start_from and checkpoint_path - args.checkpoint_path = args.checkpoint_path or './log_%s' %args.id - args.start_from = args.start_from or args.checkpoint_path - - # Deal with feature things before anything - args.use_fc, args.use_att = if_use_feat(args.caption_model) - if args.use_box: args.att_feat_size = args.att_feat_size + 5 - - return args - - -def add_eval_options(parser): - # Basic options - parser.add_argument('--batch_size', type=int, default=0, - help='if > 0 then overrule, otherwise load from checkpoint.') - parser.add_argument('--num_images', type=int, default=-1, - help='how many images to use when periodically evaluating the loss? (-1 = all)') - parser.add_argument('--language_eval', type=int, default=0, - help='Evaluate language as well (1 = yes, 0 = no)? BLEU/CIDEr/METEOR/ROUGE_L? requires coco-caption code from Github.') - parser.add_argument('--dump_images', type=int, default=1, - help='Dump images into vis/imgs folder for vis? (1=yes,0=no)') - parser.add_argument('--dump_json', type=int, default=1, - help='Dump json with predictions into vis folder? (1=yes,0=no)') - parser.add_argument('--dump_path', type=int, default=0, - help='Write image paths along with predictions into vis json? 
(1=yes,0=no)') - - # Sampling options - add_eval_sample_opts(parser) - - # For evaluation on a folder of images: - parser.add_argument('--image_folder', type=str, default='', - help='If this is nonempty then will predict on the images in this folder path') - parser.add_argument('--image_root', type=str, default='', - help='In case the image paths have to be preprended with a root path to an image folder') - # For evaluation on MSCOCO images from some split: - parser.add_argument('--input_fc_dir', type=str, default='', - help='path to the h5file containing the preprocessed dataset') - parser.add_argument('--input_att_dir', type=str, default='', - help='path to the h5file containing the preprocessed dataset') - parser.add_argument('--input_box_dir', type=str, default='', - help='path to the h5file containing the preprocessed dataset') - parser.add_argument('--input_label_h5', type=str, default='', - help='path to the h5file containing the preprocessed dataset') - parser.add_argument('--input_json', type=str, default='', - help='path to the json file containing additional info and vocab. empty = fetch from model checkpoint.') - parser.add_argument('--split', type=str, default='test', - help='if running on MSCOCO images, which split to use: val|test|train') - parser.add_argument('--coco_json', type=str, default='', - help='if nonempty then use this file in DataLoaderRaw (see docs there). Used only in MSCOCO test evaluation, where we have a specific json file of only test set images.') - # misc - parser.add_argument('--id', type=str, default='', - help='an id identifying this run/job. used only if language_eval = 1 for appending to intermediate files') - parser.add_argument('--verbose_beam', type=int, default=1, - help='if we need to print out all beam search beams.') - parser.add_argument('--verbose_loss', type=int, default=0, - help='If calculate loss using ground truth during evaluation') - -def add_diversity_opts(parser): - parser.add_argument('--sample_n', type=int, default=1, - help='Diverse sampling') - parser.add_argument('--sample_n_method', type=str, default='sample', - help='sample, bs, dbs, gumbel, topk, dgreedy, dsample, dtopk, dtopp') - parser.add_argument('--eval_oracle', type=int, default=1, - help='if we need to calculate loss.') - - -# Sampling related options -def add_eval_sample_opts(parser): - parser.add_argument('--sample_method', type=str, default='greedy', - help='greedy; sample; gumbel; top, top<0-1>') - parser.add_argument('--beam_size', type=int, default=1, - help='used when sample_method = greedy, indicates number of beams in beam search. Usually 2 or 3 works well. More is not better. Set this to 1 for faster runtime but a bit worse performance.') - parser.add_argument('--max_length', type=int, default=20, - help='Maximum length during sampling') - parser.add_argument('--length_penalty', type=str, default='', - help='wu_X or avg_X, X is the alpha') - parser.add_argument('--group_size', type=int, default=1, - help='used for diverse beam search. if group_size is 1, then it\'s normal beam search') - parser.add_argument('--diversity_lambda', type=float, default=0.5, - help='used for diverse beam search. Usually from 0.2 to 0.8. Higher value of lambda produces a more diverse list') - parser.add_argument('--temperature', type=float, default=1.0, - help='temperature when sampling from distributions (i.e. when sample_method = sample). 
Lower = "safer" predictions.') - parser.add_argument('--decoding_constraint', type=int, default=0, - help='If 1, not allowing same word in a row') - parser.add_argument('--block_trigrams', type=int, default=0, - help='block repeated trigram.') - parser.add_argument('--remove_bad_endings', type=int, default=0, - help='Remove bad endings') - parser.add_argument('--suppress_UNK', type=int, default=1, - help='Not predicting UNK') - - -if __name__ == '__main__': - import sys - sys.argv = [sys.argv[0]] - args = parse_opt() - print(args) - print() - sys.argv = [sys.argv[0], '--cfg', 'configs/updown_long.yml'] - args1 = parse_opt() - print(dict(set(vars(args1).items()) - set(vars(args).items()))) - print() - sys.argv = [sys.argv[0], '--cfg', 'configs/updown_long.yml', '--caption_model', 'att2in2'] - args2 = parse_opt() - print(dict(set(vars(args2).items()) - set(vars(args1).items()))) diff --git a/spaces/NATSpeech/DiffSpeech/modules/commons/conv.py b/spaces/NATSpeech/DiffSpeech/modules/commons/conv.py deleted file mode 100644 index c67d90ebf971e54ae57d08750041a698268042db..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/modules/commons/conv.py +++ /dev/null @@ -1,167 +0,0 @@ -import math -import torch -import torch.nn as nn -import torch.nn.functional as F - -from modules.commons.layers import LayerNorm, Embedding - - -class LambdaLayer(nn.Module): - def __init__(self, lambd): - super(LambdaLayer, self).__init__() - self.lambd = lambd - - def forward(self, x): - return self.lambd(x) - - -def init_weights_func(m): - classname = m.__class__.__name__ - if classname.find("Conv1d") != -1: - torch.nn.init.xavier_uniform_(m.weight) - - -class ResidualBlock(nn.Module): - """Implements conv->PReLU->norm n-times""" - - def __init__(self, channels, kernel_size, dilation, n=2, norm_type='bn', dropout=0.0, - c_multiple=2, ln_eps=1e-12): - super(ResidualBlock, self).__init__() - - if norm_type == 'bn': - norm_builder = lambda: nn.BatchNorm1d(channels) - elif norm_type == 'in': - norm_builder = lambda: nn.InstanceNorm1d(channels, affine=True) - elif norm_type == 'gn': - norm_builder = lambda: nn.GroupNorm(8, channels) - elif norm_type == 'ln': - norm_builder = lambda: LayerNorm(channels, dim=1, eps=ln_eps) - else: - norm_builder = lambda: nn.Identity() - - self.blocks = [ - nn.Sequential( - norm_builder(), - nn.Conv1d(channels, c_multiple * channels, kernel_size, dilation=dilation, - padding=(dilation * (kernel_size - 1)) // 2), - LambdaLayer(lambda x: x * kernel_size ** -0.5), - nn.GELU(), - nn.Conv1d(c_multiple * channels, channels, 1, dilation=dilation), - ) - for i in range(n) - ] - - self.blocks = nn.ModuleList(self.blocks) - self.dropout = dropout - - def forward(self, x): - nonpadding = (x.abs().sum(1) > 0).float()[:, None, :] - for b in self.blocks: - x_ = b(x) - if self.dropout > 0 and self.training: - x_ = F.dropout(x_, self.dropout, training=self.training) - x = x + x_ - x = x * nonpadding - return x - - -class ConvBlocks(nn.Module): - """Decodes the expanded phoneme encoding into spectrograms""" - - def __init__(self, hidden_size, out_dims, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, - init_weights=True, is_BTC=True, num_layers=None, post_net_kernel=3): - super(ConvBlocks, self).__init__() - self.is_BTC = is_BTC - if num_layers is not None: - dilations = [1] * num_layers - self.res_blocks = nn.Sequential( - *[ResidualBlock(hidden_size, kernel_size, d, - n=layers_in_block, norm_type=norm_type, c_multiple=c_multiple, 
- dropout=dropout, ln_eps=ln_eps) - for d in dilations], - ) - if norm_type == 'bn': - norm = nn.BatchNorm1d(hidden_size) - elif norm_type == 'in': - norm = nn.InstanceNorm1d(hidden_size, affine=True) - elif norm_type == 'gn': - norm = nn.GroupNorm(8, hidden_size) - elif norm_type == 'ln': - norm = LayerNorm(hidden_size, dim=1, eps=ln_eps) - self.last_norm = norm - self.post_net1 = nn.Conv1d(hidden_size, out_dims, kernel_size=post_net_kernel, - padding=post_net_kernel // 2) - if init_weights: - self.apply(init_weights_func) - - def forward(self, x, nonpadding=None): - """ - - :param x: [B, T, H] - :return: [B, T, H] - """ - if self.is_BTC: - x = x.transpose(1, 2) - if nonpadding is None: - nonpadding = (x.abs().sum(1) > 0).float()[:, None, :] - elif self.is_BTC: - nonpadding = nonpadding.transpose(1, 2) - x = self.res_blocks(x) * nonpadding - x = self.last_norm(x) * nonpadding - x = self.post_net1(x) * nonpadding - if self.is_BTC: - x = x.transpose(1, 2) - return x - - -class TextConvEncoder(ConvBlocks): - def __init__(self, dict_size, hidden_size, out_dims, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, init_weights=True, num_layers=None, post_net_kernel=3): - super().__init__(hidden_size, out_dims, dilations, kernel_size, - norm_type, layers_in_block, c_multiple, - dropout, ln_eps, init_weights, num_layers=num_layers, - post_net_kernel=post_net_kernel) - self.embed_tokens = Embedding(dict_size, hidden_size, 0) - self.embed_scale = math.sqrt(hidden_size) - - def forward(self, txt_tokens): - """ - - :param txt_tokens: [B, T] - :return: { - 'encoder_out': [B x T x C] - } - """ - x = self.embed_scale * self.embed_tokens(txt_tokens) - return super().forward(x) - - -class ConditionalConvBlocks(ConvBlocks): - def __init__(self, hidden_size, c_cond, c_out, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, init_weights=True, is_BTC=True, num_layers=None): - super().__init__(hidden_size, c_out, dilations, kernel_size, - norm_type, layers_in_block, c_multiple, - dropout, ln_eps, init_weights, is_BTC=False, num_layers=num_layers) - self.g_prenet = nn.Conv1d(c_cond, hidden_size, 3, padding=1) - self.is_BTC_ = is_BTC - if init_weights: - self.g_prenet.apply(init_weights_func) - - def forward(self, x, cond, nonpadding=None): - if self.is_BTC_: - x = x.transpose(1, 2) - cond = cond.transpose(1, 2) - if nonpadding is not None: - nonpadding = nonpadding.transpose(1, 2) - if nonpadding is None: - nonpadding = x.abs().sum(1)[:, None] - x = x + self.g_prenet(cond) - x = x * nonpadding - x = super(ConditionalConvBlocks, self).forward(x) # input needs to be BTC - if self.is_BTC_: - x = x.transpose(1, 2) - return x diff --git a/spaces/NATSpeech/DiffSpeech/utils/nn/seq_utils.py b/spaces/NATSpeech/DiffSpeech/utils/nn/seq_utils.py deleted file mode 100644 index 1308bf7d1806a6c36de9c8af5e9d217eaefa7b56..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/utils/nn/seq_utils.py +++ /dev/null @@ -1,305 +0,0 @@ -from collections import defaultdict -import torch -import torch.nn.functional as F - - -def make_positions(tensor, padding_idx): - """Replace non-padding symbols with their position numbers. - - Position numbers begin at padding_idx+1. Padding symbols are ignored. - """ - # The series of casts and type-conversions here are carefully - # balanced to both work with ONNX export and XLA. 
In particular XLA - # prefers ints, cumsum defaults to output longs, and ONNX doesn't know - # how to handle the dtype kwarg in cumsum. - mask = tensor.ne(padding_idx).int() - return ( - torch.cumsum(mask, dim=1).type_as(mask) * mask - ).long() + padding_idx - - -def softmax(x, dim): - return F.softmax(x, dim=dim, dtype=torch.float32) - - -def sequence_mask(lengths, maxlen, dtype=torch.bool): - if maxlen is None: - maxlen = lengths.max() - mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t() - mask.type(dtype) - return mask - - -def weights_nonzero_speech(target): - # target : B x T x mel - # Assign weight 1.0 to all labels except for padding (id=0). - dim = target.size(-1) - return target.abs().sum(-1, keepdim=True).ne(0).float().repeat(1, 1, dim) - - -INCREMENTAL_STATE_INSTANCE_ID = defaultdict(lambda: 0) - - -def _get_full_incremental_state_key(module_instance, key): - module_name = module_instance.__class__.__name__ - - # assign a unique ID to each module instance, so that incremental state is - # not shared across module instances - if not hasattr(module_instance, '_instance_id'): - INCREMENTAL_STATE_INSTANCE_ID[module_name] += 1 - module_instance._instance_id = INCREMENTAL_STATE_INSTANCE_ID[module_name] - - return '{}.{}.{}'.format(module_name, module_instance._instance_id, key) - - -def get_incremental_state(module, incremental_state, key): - """Helper for getting incremental state for an nn.Module.""" - full_key = _get_full_incremental_state_key(module, key) - if incremental_state is None or full_key not in incremental_state: - return None - return incremental_state[full_key] - - -def set_incremental_state(module, incremental_state, key, value): - """Helper for setting incremental state for an nn.Module.""" - if incremental_state is not None: - full_key = _get_full_incremental_state_key(module, key) - incremental_state[full_key] = value - - -def fill_with_neg_inf(t): - """FP16-compatible function that fills a tensor with -inf.""" - return t.float().fill_(float('-inf')).type_as(t) - - -def fill_with_neg_inf2(t): - """FP16-compatible function that fills a tensor with -inf.""" - return t.float().fill_(-1e8).type_as(t) - - -def select_attn(attn_logits, type='best'): - """ - - :param attn_logits: [n_layers, B, n_head, T_sp, T_txt] - :return: - """ - encdec_attn = torch.stack(attn_logits, 0).transpose(1, 2) - # [n_layers * n_head, B, T_sp, T_txt] - encdec_attn = (encdec_attn.reshape([-1, *encdec_attn.shape[2:]])).softmax(-1) - if type == 'best': - indices = encdec_attn.max(-1).values.sum(-1).argmax(0) - encdec_attn = encdec_attn.gather( - 0, indices[None, :, None, None].repeat(1, 1, encdec_attn.size(-2), encdec_attn.size(-1)))[0] - return encdec_attn - elif type == 'mean': - return encdec_attn.mean(0) - - -def make_pad_mask(lengths, xs=None, length_dim=-1): - """Make mask tensor containing indices of padded part. - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. - If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. - See the example. - Returns: - Tensor: Mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - Examples: - With only lengths. - >>> lengths = [5, 3, 2] - >>> make_non_pad_mask(lengths) - masks = [[0, 0, 0, 0 ,0], - [0, 0, 0, 1, 1], - [0, 0, 1, 1, 1]] - With the reference tensor. 
- >>> xs = torch.zeros((3, 2, 4)) - >>> make_pad_mask(lengths, xs) - tensor([[[0, 0, 0, 0], - [0, 0, 0, 0]], - [[0, 0, 0, 1], - [0, 0, 0, 1]], - [[0, 0, 1, 1], - [0, 0, 1, 1]]], dtype=torch.uint8) - >>> xs = torch.zeros((3, 2, 6)) - >>> make_pad_mask(lengths, xs) - tensor([[[0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1]], - [[0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1]], - [[0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8) - With the reference tensor and dimension indicator. - >>> xs = torch.zeros((3, 6, 6)) - >>> make_pad_mask(lengths, xs, 1) - tensor([[[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1]], - [[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1]], - [[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1]]], dtype=torch.uint8) - >>> make_pad_mask(lengths, xs, 2) - tensor([[[0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1]], - [[0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1]], - [[0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8) - """ - if length_dim == 0: - raise ValueError("length_dim cannot be 0: {}".format(length_dim)) - - if not isinstance(lengths, list): - lengths = lengths.tolist() - bs = int(len(lengths)) - if xs is None: - maxlen = int(max(lengths)) - else: - maxlen = xs.size(length_dim) - - seq_range = torch.arange(0, maxlen, dtype=torch.int64) - seq_range_expand = seq_range.unsqueeze(0).expand(bs, maxlen) - seq_length_expand = seq_range_expand.new(lengths).unsqueeze(-1) - mask = seq_range_expand >= seq_length_expand - - if xs is not None: - assert xs.size(0) == bs, (xs.size(0), bs) - - if length_dim < 0: - length_dim = xs.dim() + length_dim - # ind = (:, None, ..., None, :, , None, ..., None) - ind = tuple( - slice(None) if i in (0, length_dim) else None for i in range(xs.dim()) - ) - mask = mask[ind].expand_as(xs).to(xs.device) - return mask - - -def make_non_pad_mask(lengths, xs=None, length_dim=-1): - """Make mask tensor containing indices of non-padded part. - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. - If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. - See the example. - Returns: - ByteTensor: mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - Examples: - With only lengths. - >>> lengths = [5, 3, 2] - >>> make_non_pad_mask(lengths) - masks = [[1, 1, 1, 1 ,1], - [1, 1, 1, 0, 0], - [1, 1, 0, 0, 0]] - With the reference tensor. - >>> xs = torch.zeros((3, 2, 4)) - >>> make_non_pad_mask(lengths, xs) - tensor([[[1, 1, 1, 1], - [1, 1, 1, 1]], - [[1, 1, 1, 0], - [1, 1, 1, 0]], - [[1, 1, 0, 0], - [1, 1, 0, 0]]], dtype=torch.uint8) - >>> xs = torch.zeros((3, 2, 6)) - >>> make_non_pad_mask(lengths, xs) - tensor([[[1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0]], - [[1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0]], - [[1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8) - With the reference tensor and dimension indicator. 
- >>> xs = torch.zeros((3, 6, 6)) - >>> make_non_pad_mask(lengths, xs, 1) - tensor([[[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0]], - [[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]], - [[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]]], dtype=torch.uint8) - >>> make_non_pad_mask(lengths, xs, 2) - tensor([[[1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0]], - [[1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0]], - [[1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8) - """ - return ~make_pad_mask(lengths, xs, length_dim) - - -def get_mask_from_lengths(lengths): - max_len = torch.max(lengths).item() - ids = torch.arange(0, max_len).to(lengths.device) - mask = (ids < lengths.unsqueeze(1)).bool() - return mask - - -def group_hidden_by_segs(h, seg_ids, max_len): - """ - - :param h: [B, T, H] - :param seg_ids: [B, T] - :return: h_ph: [B, T_ph, H] - """ - B, T, H = h.shape - h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, :, None].repeat([1, 1, H]), h) - all_ones = h.new_ones(h.shape[:2]) - cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous() - h_gby_segs = h_gby_segs[:, 1:] - cnt_gby_segs = cnt_gby_segs[:, 1:] - h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1) - return h_gby_segs, cnt_gby_segs diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_conventions.py b/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_conventions.py deleted file mode 100644 index e04448ab81fc6db7fd8ba1650b427320ff00c05e..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_conventions.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Central location for shared argparse convention definitions.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import sys -import codecs -import functools - -from absl import app as absl_app -from absl import flags - - -# This codifies help string conventions and makes it easy to update them if -# necessary. Currently the only major effect is that help bodies start on the -# line after flags are listed. All flag definitions should wrap the text bodies -# with help wrap when calling DEFINE_*. 
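# As an illustration only (this example is not part of the original file, and
# "batch_size" is a hypothetical flag name), a flag defined under this
# convention would look like:
#
#   flags.DEFINE_integer(
#       name="batch_size", default=32,
#       help=help_wrap("Batch size for training and evaluation."))
#
# where help_wrap is the wrapper defined below.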
-_help_wrap = functools.partial(flags.text_wrap, length=80, indent="", - firstline_indent="\n") - - -# Pretty formatting causes issues when utf-8 is not installed on a system. -def _stdout_utf8(): - try: - codecs.lookup("utf-8") - except LookupError: - return False - return getattr(sys.stdout, "encoding", "") == "UTF-8" - - -if _stdout_utf8(): - help_wrap = _help_wrap -else: - def help_wrap(text, *args, **kwargs): - return _help_wrap(text, *args, **kwargs).replace(u"\ufeff", u"") - - -# Replace None with h to also allow -h -absl_app.HelpshortFlag.SHORT_NAME = "h" diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/model_lib.py b/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/model_lib.py deleted file mode 100644 index 1499a378ea1ba6511122ebe54ceed1226d38d649..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/model_lib.py +++ /dev/null @@ -1,189 +0,0 @@ -# Copyright 2018 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Library with common functions for training and eval.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import six - -import tensorflow as tf - -from tensorflow.contrib.slim.nets import resnet_v2 - - -def default_hparams(): - """Returns default hyperparameters.""" - return tf.contrib.training.HParams( - # Batch size for training and evaluation. - batch_size=32, - eval_batch_size=50, - - # General training parameters. - weight_decay=0.0001, - label_smoothing=0.1, - - # Parameters of the adversarial training. - train_adv_method='clean', # adversarial training method - train_lp_weight=0.0, # Weight of adversarial logit pairing loss - - # Parameters of the optimizer. - optimizer='rms', # possible values are: 'rms', 'momentum', 'adam' - momentum=0.9, # momentum - rmsprop_decay=0.9, # Decay term for RMSProp - rmsprop_epsilon=1.0, # Epsilon term for RMSProp - - # Parameters of learning rate schedule. - lr_schedule='exp_decay', # Possible values: 'exp_decay', 'step', 'fixed' - learning_rate=0.045, - lr_decay_factor=0.94, # Learning exponential decay - lr_num_epochs_per_decay=2.0, # Number of epochs per lr decay - lr_list=[1.0 / 6, 2.0 / 6, 3.0 / 6, - 4.0 / 6, 5.0 / 6, 1.0, 0.1, 0.01, - 0.001, 0.0001], - lr_decay_epochs=[1, 2, 3, 4, 5, 30, 60, 80, - 90]) - - -def get_lr_schedule(hparams, examples_per_epoch, replicas_to_aggregate=1): - """Returns TensorFlow op which compute learning rate. - - Args: - hparams: hyper parameters. - examples_per_epoch: number of training examples per epoch. - replicas_to_aggregate: number of training replicas running in parallel. - - Raises: - ValueError: if learning rate schedule specified in hparams is incorrect. - - Returns: - learning_rate: tensor with learning rate. - steps_per_epoch: number of training steps per epoch. 
- """ - global_step = tf.train.get_or_create_global_step() - steps_per_epoch = float(examples_per_epoch) / float(hparams.batch_size) - if replicas_to_aggregate > 0: - steps_per_epoch /= replicas_to_aggregate - - if hparams.lr_schedule == 'exp_decay': - decay_steps = long(steps_per_epoch * hparams.lr_num_epochs_per_decay) - learning_rate = tf.train.exponential_decay( - hparams.learning_rate, - global_step, - decay_steps, - hparams.lr_decay_factor, - staircase=True) - elif hparams.lr_schedule == 'step': - lr_decay_steps = [long(epoch * steps_per_epoch) - for epoch in hparams.lr_decay_epochs] - learning_rate = tf.train.piecewise_constant( - global_step, lr_decay_steps, hparams.lr_list) - elif hparams.lr_schedule == 'fixed': - learning_rate = hparams.learning_rate - else: - raise ValueError('Invalid value of lr_schedule: %s' % hparams.lr_schedule) - - if replicas_to_aggregate > 0: - learning_rate *= replicas_to_aggregate - - return learning_rate, steps_per_epoch - - -def get_optimizer(hparams, learning_rate): - """Returns optimizer. - - Args: - hparams: hyper parameters. - learning_rate: learning rate tensor. - - Raises: - ValueError: if type of optimizer specified in hparams is incorrect. - - Returns: - Instance of optimizer class. - """ - if hparams.optimizer == 'rms': - optimizer = tf.train.RMSPropOptimizer(learning_rate, - hparams.rmsprop_decay, - hparams.momentum, - hparams.rmsprop_epsilon) - elif hparams.optimizer == 'momentum': - optimizer = tf.train.MomentumOptimizer(learning_rate, - hparams.momentum) - elif hparams.optimizer == 'adam': - optimizer = tf.train.AdamOptimizer(learning_rate) - else: - raise ValueError('Invalid value of optimizer: %s' % hparams.optimizer) - return optimizer - - -RESNET_MODELS = {'resnet_v2_50': resnet_v2.resnet_v2_50} - - -def get_model(model_name, num_classes): - """Returns function which creates model. - - Args: - model_name: Name of the model. - num_classes: Number of classes. - - Raises: - ValueError: If model_name is invalid. - - Returns: - Function, which creates model when called. - """ - if model_name.startswith('resnet'): - def resnet_model(images, is_training, reuse=tf.AUTO_REUSE): - with tf.contrib.framework.arg_scope(resnet_v2.resnet_arg_scope()): - resnet_fn = RESNET_MODELS[model_name] - logits, _ = resnet_fn(images, num_classes, is_training=is_training, - reuse=reuse) - logits = tf.reshape(logits, [-1, num_classes]) - return logits - return resnet_model - else: - raise ValueError('Invalid model: %s' % model_name) - - -def filter_trainable_variables(trainable_scopes): - """Keep only trainable variables which are prefixed with given scopes. - - Args: - trainable_scopes: either list of trainable scopes or string with comma - separated list of trainable scopes. - - This function removes all variables which are not prefixed with given - trainable_scopes from collection of trainable variables. - Useful during network fine tuning, when you only need to train subset of - variables. 
- """ - if not trainable_scopes: - return - if isinstance(trainable_scopes, six.string_types): - trainable_scopes = [scope.strip() for scope in trainable_scopes.split(',')] - trainable_scopes = {scope for scope in trainable_scopes if scope} - if not trainable_scopes: - return - trainable_collection = tf.get_collection_ref( - tf.GraphKeys.TRAINABLE_VARIABLES) - non_trainable_vars = [ - v for v in trainable_collection - if not any([v.op.name.startswith(s) for s in trainable_scopes]) - ] - for v in non_trainable_vars: - trainable_collection.remove(v) diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/standard_fields.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/standard_fields.py deleted file mode 100644 index 99e04e66c56527e2c7be03aaf48836e077832c1f..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/standard_fields.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright 2017 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Contains classes specifying naming conventions used for object detection. - - -Specifies: - InputDataFields: standard fields used by reader/preprocessor/batcher. - DetectionResultFields: standard fields returned by object detector. - BoxListFields: standard field used by BoxList - TfExampleFields: standard fields for tf-example data format (go/tf-example). -""" - - -class InputDataFields(object): - """Names for the input tensors. - - Holds the standard data field names to use for identifying input tensors. This - should be used by the decoder to identify keys for the returned tensor_dict - containing input tensors. And it should be used by the model to identify the - tensors it needs. - - Attributes: - image: image. - image_additional_channels: additional channels. - original_image: image in the original input size. - key: unique key corresponding to image. - source_id: source of the original image. - filename: original filename of the dataset (without common path). - groundtruth_image_classes: image-level class labels. - groundtruth_boxes: coordinates of the ground truth boxes in the image. - groundtruth_classes: box-level class labels. - groundtruth_label_types: box-level label types (e.g. explicit negative). - groundtruth_is_crowd: [DEPRECATED, use groundtruth_group_of instead] - is the groundtruth a single object or a crowd. - groundtruth_area: area of a groundtruth segment. - groundtruth_difficult: is a `difficult` object - groundtruth_group_of: is a `group_of` objects, e.g. multiple objects of the - same class, forming a connected group, where instances are heavily - occluding each other. - proposal_boxes: coordinates of object proposal boxes. - proposal_objectness: objectness score of each proposal. - groundtruth_instance_masks: ground truth instance masks. - groundtruth_instance_boundaries: ground truth instance boundaries. 
- groundtruth_instance_classes: instance mask-level class labels. - groundtruth_keypoints: ground truth keypoints. - groundtruth_keypoint_visibilities: ground truth keypoint visibilities. - groundtruth_label_scores: groundtruth label scores. - groundtruth_weights: groundtruth weight factor for bounding boxes. - num_groundtruth_boxes: number of groundtruth boxes. - true_image_shapes: true shapes of images in the resized images, as resized - images can be padded with zeros. - multiclass_scores: the label score per class for each box. - """ - image = 'image' - image_additional_channels = 'image_additional_channels' - original_image = 'original_image' - key = 'key' - source_id = 'source_id' - filename = 'filename' - groundtruth_image_classes = 'groundtruth_image_classes' - groundtruth_boxes = 'groundtruth_boxes' - groundtruth_classes = 'groundtruth_classes' - groundtruth_label_types = 'groundtruth_label_types' - groundtruth_is_crowd = 'groundtruth_is_crowd' - groundtruth_area = 'groundtruth_area' - groundtruth_difficult = 'groundtruth_difficult' - groundtruth_group_of = 'groundtruth_group_of' - proposal_boxes = 'proposal_boxes' - proposal_objectness = 'proposal_objectness' - groundtruth_instance_masks = 'groundtruth_instance_masks' - groundtruth_instance_boundaries = 'groundtruth_instance_boundaries' - groundtruth_instance_classes = 'groundtruth_instance_classes' - groundtruth_keypoints = 'groundtruth_keypoints' - groundtruth_keypoint_visibilities = 'groundtruth_keypoint_visibilities' - groundtruth_label_scores = 'groundtruth_label_scores' - groundtruth_weights = 'groundtruth_weights' - num_groundtruth_boxes = 'num_groundtruth_boxes' - true_image_shape = 'true_image_shape' - multiclass_scores = 'multiclass_scores' - - -class DetectionResultFields(object): - """Naming conventions for storing the output of the detector. - - Attributes: - source_id: source of the original image. - key: unique key corresponding to image. - detection_boxes: coordinates of the detection boxes in the image. - detection_scores: detection scores for the detection boxes in the image. - detection_classes: detection-level class labels. - detection_masks: contains a segmentation mask for each detection box. - detection_boundaries: contains an object boundary for each detection box. - detection_keypoints: contains detection keypoints for each detection box. - num_detections: number of detections in the batch. - """ - - source_id = 'source_id' - key = 'key' - detection_boxes = 'detection_boxes' - detection_scores = 'detection_scores' - detection_classes = 'detection_classes' - detection_masks = 'detection_masks' - detection_boundaries = 'detection_boundaries' - detection_keypoints = 'detection_keypoints' - num_detections = 'num_detections' - - -class BoxListFields(object): - """Naming conventions for BoxLists. - - Attributes: - boxes: bounding box coordinates. - classes: classes per bounding box. - scores: scores per bounding box. - weights: sample weights per bounding box. - objectness: objectness score per bounding box. - masks: masks per bounding box. - boundaries: boundaries per bounding box. - keypoints: keypoints per bounding box. - keypoint_heatmaps: keypoint heatmaps per bounding box. - is_crowd: is_crowd annotation per bounding box. 
- """ - boxes = 'boxes' - classes = 'classes' - scores = 'scores' - weights = 'weights' - objectness = 'objectness' - masks = 'masks' - boundaries = 'boundaries' - keypoints = 'keypoints' - keypoint_heatmaps = 'keypoint_heatmaps' - is_crowd = 'is_crowd' - - -class TfExampleFields(object): - """TF-example proto feature names for object detection. - - Holds the standard feature names to load from an Example proto for object - detection. - - Attributes: - image_encoded: JPEG encoded string - image_format: image format, e.g. "JPEG" - filename: filename - channels: number of channels of image - colorspace: colorspace, e.g. "RGB" - height: height of image in pixels, e.g. 462 - width: width of image in pixels, e.g. 581 - source_id: original source of the image - image_class_text: image-level label in text format - image_class_label: image-level label in numerical format - object_class_text: labels in text format, e.g. ["person", "cat"] - object_class_label: labels in numbers, e.g. [16, 8] - object_bbox_xmin: xmin coordinates of groundtruth box, e.g. 10, 30 - object_bbox_xmax: xmax coordinates of groundtruth box, e.g. 50, 40 - object_bbox_ymin: ymin coordinates of groundtruth box, e.g. 40, 50 - object_bbox_ymax: ymax coordinates of groundtruth box, e.g. 80, 70 - object_view: viewpoint of object, e.g. ["frontal", "left"] - object_truncated: is object truncated, e.g. [true, false] - object_occluded: is object occluded, e.g. [true, false] - object_difficult: is object difficult, e.g. [true, false] - object_group_of: is object a single object or a group of objects - object_depiction: is object a depiction - object_is_crowd: [DEPRECATED, use object_group_of instead] - is the object a single object or a crowd - object_segment_area: the area of the segment. - object_weight: a weight factor for the object's bounding box. - instance_masks: instance segmentation masks. - instance_boundaries: instance boundaries. - instance_classes: Classes for each instance segmentation mask. - detection_class_label: class label in numbers. - detection_bbox_ymin: ymin coordinates of a detection box. - detection_bbox_xmin: xmin coordinates of a detection box. - detection_bbox_ymax: ymax coordinates of a detection box. - detection_bbox_xmax: xmax coordinates of a detection box. - detection_score: detection score for the class label and box. 
- """ - image_encoded = 'image/encoded' - image_format = 'image/format' # format is reserved keyword - filename = 'image/filename' - channels = 'image/channels' - colorspace = 'image/colorspace' - height = 'image/height' - width = 'image/width' - source_id = 'image/source_id' - image_class_text = 'image/class/text' - image_class_label = 'image/class/label' - object_class_text = 'image/object/class/text' - object_class_label = 'image/object/class/label' - object_bbox_ymin = 'image/object/bbox/ymin' - object_bbox_xmin = 'image/object/bbox/xmin' - object_bbox_ymax = 'image/object/bbox/ymax' - object_bbox_xmax = 'image/object/bbox/xmax' - object_view = 'image/object/view' - object_truncated = 'image/object/truncated' - object_occluded = 'image/object/occluded' - object_difficult = 'image/object/difficult' - object_group_of = 'image/object/group_of' - object_depiction = 'image/object/depiction' - object_is_crowd = 'image/object/is_crowd' - object_segment_area = 'image/object/segment/area' - object_weight = 'image/object/weight' - instance_masks = 'image/segmentation/object' - instance_boundaries = 'image/boundaries/object' - instance_classes = 'image/segmentation/object/class' - detection_class_label = 'image/detection/label' - detection_bbox_ymin = 'image/detection/bbox/ymin' - detection_bbox_xmin = 'image/detection/bbox/xmin' - detection_bbox_ymax = 'image/detection/bbox/ymax' - detection_bbox_xmax = 'image/detection/bbox/xmax' - detection_score = 'image/detection/score' diff --git a/spaces/Nee001/bing0/src/components/theme-toggle.tsx b/spaces/Nee001/bing0/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/Nihanvi/Text_summarization_using_transformers/app.py b/spaces/Nihanvi/Text_summarization_using_transformers/app.py deleted file mode 100644 index 64ebe05250e63ecf49b06c7c525640a2de02d5ff..0000000000000000000000000000000000000000 --- a/spaces/Nihanvi/Text_summarization_using_transformers/app.py +++ /dev/null @@ -1,110 +0,0 @@ -import streamlit as st -import time - -from transformers import pipeline -from transformers import T5Tokenizer, T5ForConditionalGeneration -from transformers import BartTokenizer, BartForConditionalGeneration -#from transformers import AutoTokenizer, EncoderDecoderModel -#from transformers import AutoTokenizer, LEDForConditionalGeneration -#from transformers import AutoTokenizer, FlaxLongT5ForConditionalGeneration - -##initializing models - -#Transformers Approach -def transform_summarize(text): - pp = pipeline("summarization") - k=pp(text,max_length=100,do_sample=False) - return k - -#T5 -def t5_summarize(text): - tokenizer = T5Tokenizer.from_pretrained("t5-small") - model = T5ForConditionalGeneration.from_pretrained("t5-small") - - input_text = "summarize: " + text - inputs = tokenizer.encode(input_text, return_tensors="pt", max_length=1024, truncation=True) - outputs = model.generate(inputs, max_length=200, min_length=50, length_penalty=2.0, num_beams=4, early_stopping=True) - pp = tokenizer.decode(outputs[0], 
skip_special_tokens=True) -    return pp - -#BART -def bart_summarize(text): -    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") -    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn") - -    inputs = tokenizer([text], max_length=1024, return_tensors="pt", truncation=True) -    summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=150, early_stopping=True) -    pp = tokenizer.decode(summary_ids[0], skip_special_tokens=True) -    return pp - -#Encoder-Decoder -# def encoder_decoder(text): -#     model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") -#     tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") -#     # let's perform inference on a long piece of text -#     input_ids = tokenizer(text, return_tensors="pt").input_ids -#     # autoregressively generate summary (uses greedy decoding by default) -#     generated_ids = model.generate(input_ids) -#     generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] -#     return generated_text - -# Result -def result(summary): -    st.success('Please wait while we process and summarize') -    time.sleep(12) -    st.subheader(":violet[Your summarized text is:]") -    st.write(summary) - -#Title - -st.title("SummarizeEasy") -st.header(":violet[Summarize your text with ease!]") -st.divider() -st.write("Enter your text below and click on the button to summarize it.") -text = st.text_area("Enter your text here", height=200) -model = st.radio("Select the model you want to use", ("Transformer","T5", "BART")) -st.write("Click on the button to summarize your text.") -button = st.button("Summarize") -st.divider() -st.info("Please note that this is a beta version and the summarized content may not be accurate. To get accurate content, the models need to be fine-tuned and trained on the respective context, which requires GPUs. Please feel free to share your feedback with us.") -st.divider() -if button: -    if text: -        if model == "Transformer": -            st.write("You have selected Transformer model.") -            try: -                summary = transform_summarize(text) -                result(summary) -            except Exception: -                st.warning("🚨 Your input text is quite lengthy. For better results, consider providing a shorter text or breaking it into smaller chunks.") -        elif model == "T5": -            st.write("You have selected T5 model.") -            try: -                summary = t5_summarize(text) -                result(summary) -            except Exception: -                st.warning("🚨 Your input text is quite lengthy. For better results, consider providing a shorter text or breaking it into smaller chunks.") -        elif model == "BART": -            st.write("You have selected BART model.") -            try: -                summary = bart_summarize(text) -                result(summary) -            except Exception: -                st.warning("🚨 Your input text is quite lengthy. For better results, consider providing a shorter text or breaking it into smaller chunks.") -        # elif model == "Encoder-Decoder": -        #     st.write("You have selected Encoder-Decoder model.") -        #     try: -        #         summary = encoder_decoder(text) -        #         result(summary) -        #     except Exception: -        #         st.warning("🚨 Your input text is quite lengthy. 
For better results, consider providing a shorter text or breaking it into smaller chunks.") - -    #st.toast("Please wait while we summarize your text.") -    #with st.spinner("Summarizing..."): -    #    time.sleep(5) -    #    st.toast("Done!!",icon="🎉") -    # st.success('Please wait while we process and summarize') -    # time.sleep(15) -    # st.subheader(":violet[Your summarized text is:]") -    # st.write(summary) -    else: -        st.warning("Please enter the text!") diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/docs/simulst_mustc_example.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/docs/simulst_mustc_example.md deleted file mode 100644 index f3b5a413a27bbe2700da3f418460aa0a7c41abdd..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/docs/simulst_mustc_example.md +++ /dev/null @@ -1,190 +0,0 @@ -# Simultaneous Speech Translation (SimulST) on MuST-C - -This is a tutorial on training and evaluating a transformer *wait-k* simultaneous model on the MuST-C English-German dataset, from [SimulMT to SimulST: Adapting Simultaneous Text Translation to End-to-End Simultaneous Speech Translation](https://www.aclweb.org/anthology/2020.aacl-main.58.pdf). - -[MuST-C](https://www.aclweb.org/anthology/N19-1202) is a multilingual speech-to-text translation corpus with 8-language translations of English TED talks. - -## Data Preparation -This section introduces the data preparation for training and evaluation. -If you only want to evaluate the model, please jump to [Inference & Evaluation](#inference--evaluation). - -[Download](https://ict.fbk.eu/must-c) and unpack MuST-C data to a path -`${MUSTC_ROOT}/en-${TARGET_LANG_ID}`, then preprocess it with -```bash -# Additional Python packages for S2T data processing/model training -pip install pandas torchaudio sentencepiece - -# Generate TSV manifests, features, vocabulary, -# global cepstral mean and variance estimation, -# and configuration for each language -cd fairseq - -python examples/speech_to_text/prep_mustc_data.py \ -  --data-root ${MUSTC_ROOT} --task asr \ -  --vocab-type unigram --vocab-size 10000 \ -  --cmvn-type global - -python examples/speech_to_text/prep_mustc_data.py \ -  --data-root ${MUSTC_ROOT} --task st \ -  --vocab-type unigram --vocab-size 10000 \ -  --cmvn-type global -``` - -## ASR Pretraining -We need a pretrained offline ASR model. Assume the save directory of the ASR model is `${ASR_SAVE_DIR}`. -The following command (and the subsequent training commands in this tutorial) assume training on 1 GPU (you can also train on 8 GPUs and remove the `--update-freq 8` option). -``` -fairseq-train ${MUSTC_ROOT}/en-de \ -    --config-yaml config_asr.yaml --train-subset train_asr --valid-subset dev_asr \ -    --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ -    --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ -    --arch convtransformer_espnet --optimizer adam --lr 0.0005 --lr-scheduler inverse_sqrt \ -    --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -A pretrained ASR checkpoint can be downloaded [here](https://dl.fbaipublicfiles.com/simultaneous_translation/must_c_v1_en_de_pretrained_asr) - -## Simultaneous Speech Translation Training - -### Wait-K with fixed pre-decision module -Fixed pre-decision indicates that the model operates the simultaneous policy on the boundaries of fixed chunks. 
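As a rough, framework-independent sketch (illustrative only; the function and argument names below are not part of fairseq, whose actual logic lives in the agent selected by `--simul-type waitk_fixed_pre_decision`), the wait-k rule with a fixed pre-decision ratio can be written as:

```python
def waitk_fixed_pre_decision_action(num_encoder_states, num_written_tokens,
                                    waitk_lagging=3, pre_decision_ratio=7,
                                    source_finished=False):
    """Return "READ" or "WRITE" for a wait-k policy with fixed pre-decision.

    A decision is only made at the boundary of every `pre_decision_ratio`
    encoder states (one chunk); the model writes a new target token once it
    is at least `waitk_lagging` chunks ahead of the tokens already written.
    """
    if source_finished:
        return "WRITE"  # nothing left to read, finish the hypothesis
    num_chunks = num_encoder_states // pre_decision_ratio
    if num_chunks - num_written_tokens >= waitk_lagging:
        return "WRITE"
    return "READ"
```

The training command below realizes this policy through `--waitk-lagging 3` and `--fixed-pre-decision-ratio 7`.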
-Here is a example of fixed pre-decision ratio 7 (the simultaneous decision is made every 7 encoder states) and -a wait-3 policy model. Assuming the save directory is `${ST_SAVE_DIR}` -```bash - fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 8 \ - --optimizer adam --lr 0.0001 --lr-scheduler inverse_sqrt --clip-norm 10.0 \ - --criterion label_smoothed_cross_entropy \ - --warmup-updates 4000 --max-update 100000 --max-tokens 40000 --seed 2 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/checkpoint_best.pt \ - --task speech_to_text \ - --arch convtransformer_simul_trans_espnet \ - --simul-type waitk_fixed_pre_decision \ - --waitk-lagging 3 \ - --fixed-pre-decision-ratio 7 \ - --update-freq 8 - -``` -### Monotonic multihead attention with fixed pre-decision module -``` - fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 8 \ - --optimizer adam --lr 0.0001 --lr-scheduler inverse_sqrt --clip-norm 10.0 \ - --warmup-updates 4000 --max-update 100000 --max-tokens 40000 --seed 2 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --task speech_to_text \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-avg 0.1 \ - --arch convtransformer_simul_trans_espnet \ - --simul-type infinite_lookback_fixed_pre_decision \ - --fixed-pre-decision-ratio 7 \ - --update-freq 8 -``` -## Inference & Evaluation -[SimulEval](https://github.com/facebookresearch/SimulEval) is used for evaluation. -The following command is for evaluation. - -``` -git clone https://github.com/facebookresearch/SimulEval.git -cd SimulEval -pip install -e . - -simuleval \ - --agent ${FAIRSEQ}/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py - --source ${SRC_LIST_OF_AUDIO} - --target ${TGT_FILE} - --data-bin ${MUSTC_ROOT}/en-de \ - --config config_st.yaml \ - --model-path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --output ${OUTPUT} \ - --scores -``` - -The source file `${SRC_LIST_OF_AUDIO}` is a list of paths of audio files. Assuming your audio files stored at `/home/user/data`, -it should look like this - -```bash -/home/user/data/audio-1.wav -/home/user/data/audio-2.wav -``` - -Each line of target file `${TGT_FILE}` is the translation for each audio file input. -```bash -Translation_1 -Translation_2 -``` -The evaluation runs on the original MUSTC segmentation. -The following command will generate the wav list and text file for a evaluation set `${SPLIT}` (chose from `dev`, `tst-COMMON` and `tst-HE`) in MUSTC to `${EVAL_DATA}`. -```bash -python ${FAIRSEQ}/examples/speech_to_text/seg_mustc_data.py \ - --data-root ${MUSTC_ROOT} --lang de \ - --split ${SPLIT} --task st \ - --output ${EVAL_DATA} -``` - -The `--data-bin` and `--config` should be the same in previous section if you prepare the data from the scratch. -If only for evaluation, a prepared data directory can be found [here](https://dl.fbaipublicfiles.com/simultaneous_translation/must_c_v1.0_en_de_databin.tgz). It contains -- `spm_unigram10000_st.model`: a sentencepiece model binary. -- `spm_unigram10000_st.txt`: the dictionary file generated by the sentencepiece model. -- `gcmvn.npz`: the binary for global cepstral mean and variance. -- `config_st.yaml`: the config yaml file. It looks like this. 
-You will need to set the absolute paths for `sentencepiece_model` and `stats_npz_path` if the data directory is downloaded. -```yaml -bpe_tokenizer: - bpe: sentencepiece - sentencepiece_model: ABS_PATH_TO_SENTENCEPIECE_MODEL -global_cmvn: - stats_npz_path: ABS_PATH_TO_GCMVN_FILE -input_channels: 1 -input_feat_per_channel: 80 -sampling_alpha: 1.0 -specaugment: - freq_mask_F: 27 - freq_mask_N: 1 - time_mask_N: 1 - time_mask_T: 100 - time_mask_p: 1.0 - time_wrap_W: 0 -transforms: - '*': - - global_cmvn - _train: - - global_cmvn - - specaugment -vocab_filename: spm_unigram10000_st.txt -``` - -Notice that once a `--data-bin` is set, the `--config` is the base name of the config yaml, not the full path. - -Set `--model-path` to the model checkpoint. -A pretrained checkpoint can be downloaded from [here](https://dl.fbaipublicfiles.com/simultaneous_translation/convtransformer_wait5_pre7), which is a wait-5 model with a pre-decision of 280 ms. - -The result of this model on `tst-COMMON` is: -```bash -{ - "Quality": { - "BLEU": 13.94974229366959 - }, - "Latency": { - "AL": 1751.8031870037803, - "AL_CA": 2338.5911762796536, - "AP": 0.7931395378788959, - "AP_CA": 0.9405103863210942, - "DAL": 1987.7811616943081, - "DAL_CA": 2425.2751560926167 - } -} -``` - -If `--output ${OUTPUT}` option is used, the detailed log and scores will be stored under the `${OUTPUT}` directory. - - -The quality is measured by detokenized BLEU. So make sure that the predicted words sent to the server are detokenized. - -The latency metrics are -* Average Proportion -* Average Lagging -* Differentiable Average Lagging - -Again they will also be evaluated on detokenized text. diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/xlmr/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/xlmr/README.md deleted file mode 100644 index b95bfe15d3fe6d03951453679135c2e9187d73c7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/xlmr/README.md +++ /dev/null @@ -1,144 +0,0 @@ -# Unsupervised Cross-lingual Representation Learning at Scale (XLM-RoBERTa) -https://arxiv.org/pdf/1911.02116.pdf - -# Larger-Scale Transformers for Multilingual Masked Language Modeling -https://arxiv.org/pdf/2105.00572.pdf - - -## What's New: -- June 2021: `XLMR-XL` AND `XLMR-XXL` models released. - -## Introduction - -`XLM-R` (`XLM-RoBERTa`) is a generic cross lingual sentence encoder that obtains state-of-the-art results on many cross-lingual understanding (XLU) benchmarks. It is trained on `2.5T` of filtered CommonCrawl data in 100 languages (list below). 
- - Language | Language|Language |Language | Language ----|---|---|---|--- -Afrikaans | Albanian | Amharic | Arabic | Armenian -Assamese | Azerbaijani | Basque | Belarusian | Bengali -Bengali Romanize | Bosnian | Breton | Bulgarian | Burmese -Burmese zawgyi font | Catalan | Chinese (Simplified) | Chinese (Traditional) | Croatian -Czech | Danish | Dutch | English | Esperanto -Estonian | Filipino | Finnish | French | Galician -Georgian | German | Greek | Gujarati | Hausa -Hebrew | Hindi | Hindi Romanize | Hungarian | Icelandic -Indonesian | Irish | Italian | Japanese | Javanese -Kannada | Kazakh | Khmer | Korean | Kurdish (Kurmanji) -Kyrgyz | Lao | Latin | Latvian | Lithuanian -Macedonian | Malagasy | Malay | Malayalam | Marathi -Mongolian | Nepali | Norwegian | Oriya | Oromo -Pashto | Persian | Polish | Portuguese | Punjabi -Romanian | Russian | Sanskrit | Scottish Gaelic | Serbian -Sindhi | Sinhala | Slovak | Slovenian | Somali -Spanish | Sundanese | Swahili | Swedish | Tamil -Tamil Romanize | Telugu | Telugu Romanize | Thai | Turkish -Ukrainian | Urdu | Urdu Romanize | Uyghur | Uzbek -Vietnamese | Welsh | Western Frisian | Xhosa | Yiddish - -## Pre-trained models - -Model | Description | #params | vocab size | Download ----|---|---|---|--- -`xlmr.base` | XLM-R using the BERT-base architecture | 250M | 250k | [xlm.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr.base.tar.gz) -`xlmr.large` | XLM-R using the BERT-large architecture | 560M | 250k | [xlm.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr.large.tar.gz) -`xlmr.xl` | XLM-R (`layers=36, model_dim=2560`) | 3.5B | 250k | [xlm.xl.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xl.tar.gz) -`xlmr.xxl` | XLM-R (`layers=48, model_dim=4096`) | 10.7B | 250k | [xlm.xxl.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xxl.tar.gz) - -## Results - -**[XNLI (Conneau et al., 2018)](https://arxiv.org/abs/1809.05053)** - -Model | average | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur ----|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- -`roberta.large.mnli` _(TRANSLATE-TEST)_ | 77.8 | 91.3 | 82.9 | 84.3 | 81.2 | 81.7 | 83.1 | 78.3 | 76.8 | 76.6 | 74.2 | 74.1 | 77.5 | 70.9 | 66.7 | 66.8 -`xlmr.large` _(TRANSLATE-TRAIN-ALL)_ | 83.6 | 89.1 | 85.1 | 86.6 | 85.7 | 85.3 | 85.9 | 83.5 | 83.2 | 83.1 | 83.7 | 81.5 | 83.7 | 81.6 | 78.0 | 78.1 -`xlmr.xl` _(TRANSLATE-TRAIN-ALL)_ | 85.4 | 91.1 | 87.2 | 88.1 | 87.0 | 87.4 | 87.8 | 85.3 | 85.2 | 85.3 | 86.2 | 83.8 | 85.3 | 83.1 | 79.8 | 78.2 | 85.4 -`xlmr.xxl` _(TRANSLATE-TRAIN-ALL)_ | 86.0 | 91.5 | 87.6 | 88.7 | 87.8 | 87.4 | 88.2 | 85.6 | 85.1 | 85.8 | 86.3 | 83.9 | 85.6 | 84.6 | 81.7 | 80.6 - -**[MLQA (Lewis et al., 2018)](https://arxiv.org/abs/1910.07475)** - -Model | average | en | es | de | ar | hi | vi | zh ----|---|---|---|---|---|---|---|--- -`BERT-large` | - | 80.2/67.4 | - | - | - | - | - | - -`mBERT` | 57.7 / 41.6 | 77.7 / 65.2 | 64.3 / 46.6 | 57.9 / 44.3 | 45.7 / 29.8| 43.8 / 29.7 | 57.1 / 38.6 | 57.5 / 37.3 -`xlmr.large` | 70.7 / 52.7 | 80.6 / 67.8 | 74.1 / 56.0 | 68.5 / 53.6 | 63.1 / 43.5 | 69.2 / 51.6 | 71.3 / 50.9 | 68.0 / 45.4 -`xlmr.xl` | 73.4 / 55.3 | 85.1 / 72.6 | 66.7 / 46.2 | 70.5 / 55.5 | 74.3 / 56.9 | 72.2 / 54.7 | 74.4 / 52.9 | 70.9 / 48.5 -`xlmr.xxl` | 74.8 / 56.6 | 85.5 / 72.4 | 68.6 / 48.4 | 72.7 / 57.8 | 75.4 / 57.6 | 73.7 / 55.8 | 76.0 / 55.0 | 71.7 / 48.9 - - -## Example usage - -##### Load XLM-R from torch.hub (PyTorch >= 1.1): -```python -import torch -xlmr = 
torch.hub.load('pytorch/fairseq', 'xlmr.large') -xlmr.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load XLM-R (for PyTorch 1.0 or custom models): -```python -# Download xlmr.large model -wget https://dl.fbaipublicfiles.com/fairseq/models/xlmr.large.tar.gz -tar -xzvf xlmr.large.tar.gz - -# Load the model in fairseq -from fairseq.models.roberta import XLMRModel -xlmr = XLMRModel.from_pretrained('/path/to/xlmr.large', checkpoint_file='model.pt') -xlmr.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Apply sentence-piece-model (SPM) encoding to input text: -```python -en_tokens = xlmr.encode('Hello world!') -assert en_tokens.tolist() == [0, 35378, 8999, 38, 2] -xlmr.decode(en_tokens) # 'Hello world!' - -zh_tokens = xlmr.encode('你好,世界') -assert zh_tokens.tolist() == [0, 6, 124084, 4, 3221, 2] -xlmr.decode(zh_tokens) # '你好,世界' - -hi_tokens = xlmr.encode('नमस्ते दुनिया') -assert hi_tokens.tolist() == [0, 68700, 97883, 29405, 2] -xlmr.decode(hi_tokens) # 'नमस्ते दुनिया' - -ar_tokens = xlmr.encode('مرحبا بالعالم') -assert ar_tokens.tolist() == [0, 665, 193478, 258, 1705, 77796, 2] -xlmr.decode(ar_tokens) # 'مرحبا بالعالم' - -fr_tokens = xlmr.encode('Bonjour le monde') -assert fr_tokens.tolist() == [0, 84602, 95, 11146, 2] -xlmr.decode(fr_tokens) # 'Bonjour le monde' -``` - -##### Extract features from XLM-R: -```python -# Extract the last layer's features -last_layer_features = xlmr.extract_features(zh_tokens) -assert last_layer_features.size() == torch.Size([1, 6, 1024]) - -# Extract all layer's features (layer 0 is the embedding layer) -all_layers = xlmr.extract_features(zh_tokens, return_all_hiddens=True) -assert len(all_layers) == 25 -assert torch.all(all_layers[-1] == last_layer_features) -``` - -## Citation - -```bibtex -@article{conneau2019unsupervised, - title={Unsupervised Cross-lingual Representation Learning at Scale}, - author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin}, - journal={arXiv preprint arXiv:1911.02116}, - year={2019} -} -``` - - -```bibtex -@article{goyal2021larger, - title={Larger-Scale Transformers for Multilingual Masked Language Modeling}, - author={Goyal, Naman and Du, Jingfei and Ott, Myle and Anantharaman, Giri and Conneau, Alexis}, - journal={arXiv preprint arXiv:2105.00572}, - year={2021} -} -``` diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/multilingual_denoising.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/multilingual_denoising.py deleted file mode 100644 index d1c914917feb5165aad7482cd1377f5f65b21635..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/multilingual_denoising.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
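# Overview: this task builds one DenoisingDataset per language directory found
# under the data path (or per language listed via --langs), optionally adds a
# "[lang]" symbol to the dictionary when --add-lang-token is set, and disables
# whole-word masking for languages listed in --no-whole-word-mask-langs.
# Language resampling for the training split is handled in load_dataset().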
- -import logging -import os - -import numpy as np -from fairseq.data import ( - AppendTokenDataset, - ConcatDataset, - DenoisingDataset, - Dictionary, - PrependTokenDataset, - ResamplingDataset, - SortDataset, - TokenBlockDataset, - data_utils, -) -from fairseq.data.encoders.utils import get_whole_word_mask -from fairseq.tasks import register_task - -from .denoising import DenoisingTask - - -logger = logging.getLogger(__name__) - - -@register_task("multilingual_denoising") -class MultilingualDenoisingTask(DenoisingTask): - @staticmethod - def add_args(parser): - DenoisingTask.add_args(parser) - parser.add_argument( - "--multilang-sampling-alpha", - type=float, - default=1.0, - help="smoothing alpha for sample ratios across multiple datasets", - ) - parser.add_argument("--add-lang-token", default=False, action="store_true") - parser.add_argument( - "--langs", type=str, help="language ids we are considering", default=None - ) - parser.add_argument( - "--no-whole-word-mask-langs", - type=str, - default="", - metavar="N", - help="languages without spacing between words dont support whole word masking", - ) - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task.""" - paths = args.data.split(":") - assert len(paths) > 0 - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - - data_path = paths[0] - if args.langs is None: - languages = sorted( - [ - name - for name in os.listdir(data_path) - if os.path.isdir(os.path.join(data_path, name)) - ] - ) - else: - languages = args.langs.split(",") - - if args.add_lang_token: - for lang in languages: - dictionary.add_symbol("[{}]".format(lang)) - - logger.info("dictionary: {} types".format(len(dictionary))) - if not hasattr(args, "shuffle_instance"): - args.shuffle_instance = False - return cls(args, dictionary) - - def __init__(self, args, dictionary): - super().__init__(args, dictionary) - self.dictionary = dictionary - self.seed = args.seed - - # add mask token - self.mask_idx = self.dictionary.add_symbol("") - self.langs = args.langs - self.args = args - - def _get_sample_prob(self, dataset_lens): - """ - Get smoothed sampling porbability by languages. This helps low resource - languages by upsampling them. - """ - prob = dataset_lens / dataset_lens.sum() - smoothed_prob = prob ** self.args.multilang_sampling_alpha - smoothed_prob = smoothed_prob / smoothed_prob.sum() - return smoothed_prob - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
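        For the training subset, the per-language datasets are additionally
        resampled using the smoothed probabilities from _get_sample_prob(),
        so low-resource languages are up-sampled; other splits are simply
        concatenated and also exposed as per-language splits named
        "<split>_<lang>".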
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = self.args.data.split(":") - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - if self.langs is None: - languages = sorted( - [ - name - for name in os.listdir(data_path) - if os.path.isdir(os.path.join(data_path, name)) - ] - ) - else: - languages = self.langs.split(",") - for name in languages: - p = os.path.join(data_path, name) - assert os.path.exists(p), "data not found: {}".format(p) - - logger.info("Training on {0} languages: {1}".format(len(languages), languages)) - logger.info( - "Language to id mapping: ", {lang: id for id, lang in enumerate(languages)} - ) - - mask_whole_words = get_whole_word_mask(self.args, self.dictionary) - language_without_segmentations = self.args.no_whole_word_mask_langs.split(",") - lang_datasets = [] - for language in languages: - split_path = os.path.join(data_path, language, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - self.args.dataset_impl, - combine=combine, - ) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - end_token = ( - self.source_dictionary.index("[{}]".format(language)) - if self.args.add_lang_token - else self.source_dictionary.eos() - ) - - # create continuous blocks of tokens - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.args.tokens_per_sample - 2, # one less for - pad=self.source_dictionary.pad(), - eos=end_token, - break_mode=self.args.sample_break_mode, - ) - logger.info("loaded {} blocks from: {}".format(len(dataset), split_path)) - - # prepend beginning-of-sentence token (, equiv. to [CLS] in BERT) - dataset = PrependTokenDataset(dataset, self.source_dictionary.bos()) - dataset = AppendTokenDataset(dataset, end_token) - - lang_mask_whole_words = ( - mask_whole_words - if language not in language_without_segmentations - else None - ) - lang_dataset = DenoisingDataset( - dataset, - dataset.sizes, - self.dictionary, - self.mask_idx, - lang_mask_whole_words, - shuffle=self.args.shuffle_instance, - seed=self.seed, - args=self.args, - eos=None - if not self.args.add_lang_token - else self.source_dictionary.index("[{}]".format(language)), - ) - lang_datasets.append(lang_dataset) - - dataset_lengths = np.array( - [len(d) for d in lang_datasets], - dtype=float, - ) - logger.info( - "loaded total {} blocks for all languages".format( - int(dataset_lengths.sum()), - ) - ) - if split == self.args.train_subset: - # For train subset, additionally up or down sample languages. 
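                # Illustrative (hypothetical) numbers: with dataset_lengths of
                # [9e6, 1e6] and --multilang-sampling-alpha 0.5, the natural
                # probabilities [0.9, 0.1] are smoothed to [0.75, 0.25], giving
                # size ratios of roughly [0.83, 2.5]: the low-resource language
                # is up-sampled about 2.5x while the high-resource one is
                # slightly down-sampled.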
- sample_probs = self._get_sample_prob(dataset_lengths) - logger.info( - "Sample probability by language: {}".format( - { - lang: "{0:.4f}".format(sample_probs[id]) - for id, lang in enumerate(languages) - } - ) - ) - size_ratio = (sample_probs * dataset_lengths.sum()) / dataset_lengths - logger.info( - "Up/Down Sampling ratio by language: {}".format( - { - lang: "{0:.2f}".format(size_ratio[id]) - for id, lang in enumerate(languages) - } - ) - ) - - resampled_lang_datasets = [ - ResamplingDataset( - lang_datasets[i], - size_ratio=size_ratio[i], - seed=self.args.seed, - epoch=epoch, - replace=size_ratio[i] >= 1.0, - ) - for i, d in enumerate(lang_datasets) - ] - dataset = ConcatDataset( - resampled_lang_datasets, - ) - else: - dataset = ConcatDataset(lang_datasets) - lang_splits = [split] - for lang_id, lang_dataset in enumerate(lang_datasets): - split_name = split + "_" + languages[lang_id] - lang_splits.append(split_name) - self.datasets[split_name] = lang_dataset - - if split in self.args.valid_subset: - self.args.valid_subset = self.args.valid_subset.replace( - split, ",".join(lang_splits) - ) - - with data_utils.numpy_seed(self.args.seed + epoch): - shuffle = np.random.permutation(len(dataset)) - - self.datasets[split] = SortDataset( - dataset, - sort_order=[ - shuffle, - dataset.sizes, - ], - ) diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py deleted file mode 100644 index 2e0fc2bd29aedb0b477b7cc8e2c3b606acdd454a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py +++ /dev/null @@ -1,364 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Score raw text with a trained model. 
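The --in-text file is expected to be fairseq-generate / fairseq-interactive
output, where "S-<id>" lines carry source sentences and "D-<id>" lines carry
beam hypotheses together with their forward-model scores (this is what
parse_fairseq_gen() below consumes). An illustrative, made-up fragment, with
tab-separated fields:

    S-0    we need to talk
    D-0    -0.42    wir muessen reden
    D-0    -0.57    wir muessen sprechen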
-""" - -from collections import namedtuple -import logging -from multiprocessing import Pool -import sys -import os -import random - -import numpy as np -import sacrebleu -import torch - -from fairseq import checkpoint_utils, options, utils - - -logger = logging.getLogger("fairseq_cli.drnmt_rerank") -logger.setLevel(logging.INFO) - -Batch = namedtuple("Batch", "ids src_tokens src_lengths") - - -pool_init_variables = {} - - -def init_loaded_scores(mt_scores, model_scores, hyp, ref): - global pool_init_variables - pool_init_variables["mt_scores"] = mt_scores - pool_init_variables["model_scores"] = model_scores - pool_init_variables["hyp"] = hyp - pool_init_variables["ref"] = ref - - -def parse_fairseq_gen(filename, task): - source = {} - hypos = {} - scores = {} - with open(filename, "r", encoding="utf-8") as f: - for line in f: - line = line.strip() - if line.startswith("S-"): # source - uid, text = line.split("\t", 1) - uid = int(uid[2:]) - source[uid] = text - elif line.startswith("D-"): # hypo - uid, score, text = line.split("\t", 2) - uid = int(uid[2:]) - if uid not in hypos: - hypos[uid] = [] - scores[uid] = [] - hypos[uid].append(text) - scores[uid].append(float(score)) - else: - continue - - source_out = [source[i] for i in range(len(hypos))] - hypos_out = [h for i in range(len(hypos)) for h in hypos[i]] - scores_out = [s for i in range(len(scores)) for s in scores[i]] - - return source_out, hypos_out, scores_out - - -def read_target(filename): - with open(filename, "r", encoding="utf-8") as f: - output = [line.strip() for line in f] - return output - - -def make_batches(args, src, hyp, task, max_positions, encode_fn): - assert len(src) * args.beam == len( - hyp - ), f"Expect {len(src) * args.beam} hypotheses for {len(src)} source sentences with beam size {args.beam}. Got {len(hyp)} hypotheses intead." 
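    # Hypotheses arrive grouped by source sentence in beam order, so
    # hypothesis i below is paired with source sentence i // args.beam.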
- hyp_encode = [ - task.source_dictionary.encode_line(encode_fn(h), add_if_not_exist=False).long() - for h in hyp - ] - if task.cfg.include_src: - src_encode = [ - task.source_dictionary.encode_line( - encode_fn(s), add_if_not_exist=False - ).long() - for s in src - ] - tokens = [(src_encode[i // args.beam], h) for i, h in enumerate(hyp_encode)] - lengths = [(t1.numel(), t2.numel()) for t1, t2 in tokens] - else: - tokens = [(h,) for h in hyp_encode] - lengths = [(h.numel(),) for h in hyp_encode] - - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference(tokens, lengths), - max_tokens=args.max_tokens, - max_sentences=args.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - ).next_epoch_itr(shuffle=False) - - for batch in itr: - yield Batch( - ids=batch["id"], - src_tokens=batch["net_input"]["src_tokens"], - src_lengths=batch["net_input"]["src_lengths"], - ) - - -def decode_rerank_scores(args): - if args.max_tokens is None and args.batch_size is None: - args.batch_size = 1 - - logger.info(args) - - use_cuda = torch.cuda.is_available() and not args.cpu - - # Load ensemble - logger.info("loading model(s) from {}".format(args.path)) - models, _model_args, task = checkpoint_utils.load_model_ensemble_and_task( - [args.path], arg_overrides=eval(args.model_overrides), - ) - - for model in models: - if args.fp16: - model.half() - if use_cuda: - model.cuda() - - # Initialize generator - generator = task.build_generator(args) - - # Handle tokenization and BPE - tokenizer = task.build_tokenizer(args) - bpe = task.build_bpe(args) - - def encode_fn(x): - if tokenizer is not None: - x = tokenizer.encode(x) - if bpe is not None: - x = bpe.encode(x) - return x - - max_positions = utils.resolve_max_positions( - task.max_positions(), *[model.max_positions() for model in models] - ) - - src, hyp, mt_scores = parse_fairseq_gen(args.in_text, task) - model_scores = {} - logger.info("decode reranker score") - for batch in make_batches(args, src, hyp, task, max_positions, encode_fn): - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - if use_cuda: - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - - sample = { - "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths}, - } - scores = task.inference_step(generator, models, sample) - - for id, sc in zip(batch.ids.tolist(), scores.tolist()): - model_scores[id] = sc[0] - - model_scores = [model_scores[i] for i in range(len(model_scores))] - - return src, hyp, mt_scores, model_scores - - -def get_score(mt_s, md_s, w1, lp, tgt_len): - return mt_s / (tgt_len ** lp) * w1 + md_s - - -def get_best_hyps(mt_scores, md_scores, hypos, fw_weight, lenpen, beam): - assert len(mt_scores) == len(md_scores) and len(mt_scores) == len(hypos) - hypo_scores = [] - best_hypos = [] - best_scores = [] - offset = 0 - for i in range(len(hypos)): - tgt_len = len(hypos[i].split()) - hypo_scores.append( - get_score(mt_scores[i], md_scores[i], fw_weight, lenpen, tgt_len) - ) - - if (i + 1) % beam == 0: - max_i = np.argmax(hypo_scores) - best_hypos.append(hypos[offset + max_i]) - best_scores.append(hypo_scores[max_i]) - hypo_scores = [] - offset += beam - return best_hypos, best_scores - - -def eval_metric(args, hypos, ref): - if args.metric == "bleu": - score = sacrebleu.corpus_bleu(hypos, [ref]).score - else: - score = sacrebleu.corpus_ter(hypos, [ref]).score - - return score - - -def score_target_hypo(args, fw_weight, lp): - mt_scores = pool_init_variables["mt_scores"] - 
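    # These were stored in module-level state by init_loaded_scores(), the
    # multiprocessing.Pool initializer, so each worker reuses them across
    # trials instead of receiving them as pickled arguments on every call.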
model_scores = pool_init_variables["model_scores"] - hyp = pool_init_variables["hyp"] - ref = pool_init_variables["ref"] - best_hypos, _ = get_best_hyps( - mt_scores, model_scores, hyp, fw_weight, lp, args.beam - ) - rerank_eval = None - if ref: - rerank_eval = eval_metric(args, best_hypos, ref) - print(f"fw_weight {fw_weight}, lenpen {lp}, eval {rerank_eval}") - - return rerank_eval - - -def print_result(best_scores, best_hypos, output_file): - for i, (s, h) in enumerate(zip(best_scores, best_hypos)): - print(f"{i}\t{s}\t{h}", file=output_file) - - -def main(args): - utils.import_user_module(args) - - src, hyp, mt_scores, model_scores = decode_rerank_scores(args) - - assert ( - not args.tune or args.target_text is not None - ), "--target-text has to be set when tuning weights" - if args.target_text: - ref = read_target(args.target_text) - assert len(src) == len( - ref - ), f"different numbers of source and target sentences ({len(src)} vs. {len(ref)})" - - orig_best_hypos = [hyp[i] for i in range(0, len(hyp), args.beam)] - orig_eval = eval_metric(args, orig_best_hypos, ref) - - if args.tune: - logger.info("tune weights for reranking") - - random_params = np.array( - [ - [ - random.uniform( - args.lower_bound_fw_weight, args.upper_bound_fw_weight - ), - random.uniform(args.lower_bound_lenpen, args.upper_bound_lenpen), - ] - for k in range(args.num_trials) - ] - ) - - logger.info("launching pool") - with Pool( - 32, - initializer=init_loaded_scores, - initargs=(mt_scores, model_scores, hyp, ref), - ) as p: - rerank_scores = p.starmap( - score_target_hypo, - [ - (args, random_params[i][0], random_params[i][1],) - for i in range(args.num_trials) - ], - ) - if args.metric == "bleu": - best_index = np.argmax(rerank_scores) - else: - best_index = np.argmin(rerank_scores) - best_fw_weight = random_params[best_index][0] - best_lenpen = random_params[best_index][1] - else: - assert ( - args.lenpen is not None and args.fw_weight is not None - ), "--lenpen and --fw-weight should be set" - best_fw_weight, best_lenpen = args.fw_weight, args.lenpen - - best_hypos, best_scores = get_best_hyps( - mt_scores, model_scores, hyp, best_fw_weight, best_lenpen, args.beam - ) - - if args.results_path is not None: - os.makedirs(args.results_path, exist_ok=True) - output_path = os.path.join( - args.results_path, "generate-{}.txt".format(args.gen_subset), - ) - with open(output_path, "w", buffering=1, encoding="utf-8") as o: - print_result(best_scores, best_hypos, o) - else: - print_result(best_scores, best_hypos, sys.stdout) - - if args.target_text: - rerank_eval = eval_metric(args, best_hypos, ref) - print(f"before reranking, {args.metric.upper()}:", orig_eval) - print( - f"after reranking with fw_weight={best_fw_weight}, lenpen={best_lenpen}, {args.metric.upper()}:", - rerank_eval, - ) - - -def cli_main(): - parser = options.get_generation_parser(interactive=True) - - parser.add_argument( - "--in-text", - default=None, - required=True, - help="text from fairseq-interactive output, containing source sentences and hypotheses", - ) - parser.add_argument("--target-text", default=None, help="reference text") - parser.add_argument("--metric", type=str, choices=["bleu", "ter"], default="bleu") - parser.add_argument( - "--tune", - action="store_true", - help="if set, tune weights on fw scores and lenpen instead of applying fixed weights for reranking", - ) - parser.add_argument( - "--lower-bound-fw-weight", - default=0.0, - type=float, - help="lower bound of search space", - ) - parser.add_argument( - 
"--upper-bound-fw-weight", - default=3, - type=float, - help="upper bound of search space", - ) - parser.add_argument( - "--lower-bound-lenpen", - default=0.0, - type=float, - help="lower bound of search space", - ) - parser.add_argument( - "--upper-bound-lenpen", - default=3, - type=float, - help="upper bound of search space", - ) - parser.add_argument( - "--fw-weight", type=float, default=None, help="weight on the fw model score" - ) - parser.add_argument( - "--num-trials", - default=1000, - type=int, - help="number of trials to do for random search", - ) - - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/legacy_masked_lm.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/legacy_masked_lm.py deleted file mode 100644 index c70608c5a143b7b4fbd8c58dfcf9f873639d379c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/legacy_masked_lm.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -def compute_cross_entropy_loss(logits, targets, ignore_index=-100): - """ - Function to compute the cross entropy loss. The default value of - ignore_index is the same as the default value for F.cross_entropy in - pytorch. - """ - assert logits.size(0) == targets.size( - -1 - ), "Logits and Targets tensor shapes don't match up" - - loss = F.nll_loss( - F.log_softmax(logits, -1, dtype=torch.float32), - targets, - reduction="sum", - ignore_index=ignore_index, - ) - return loss - - -@register_criterion("legacy_masked_lm_loss") -class LegacyMaskedLmLoss(FairseqCriterion): - """ - Implementation for the loss used in masked language model (MLM) training. - This optionally also computes the next sentence prediction (NSP) loss and - adds it to the overall loss based on the specified args. There are three - cases to consider: - 1) Generic MLM training without NSP loss. In this case sentence_targets - and sentence_logits are both None. - 2) BERT training without NSP loss. In this case sentence_targets is - not None but sentence_logits is None and we should not be computing - a sentence level loss. - 3) BERT training with NSP loss. In this case both sentence_targets and - sentence_logits are not None and we should be computing a sentence - level loss. The weight of the sentence level loss is specified as - an argument. - """ - - def __init__(self, task, masked_lm_only, nsp_loss_weight): - super().__init__(task) - self.masked_lm_only = masked_lm_only - self.nsp_loss_weight = nsp_loss_weight - - @staticmethod - def add_args(parser): - """Args for MaskedLM Loss""" - # Default for masked_lm_only is False so as to not break BERT training - parser.add_argument( - "--masked-lm-only", - default=False, - action="store_true", - help="compute MLM loss only", - ) - parser.add_argument( - "--nsp-loss-weight", - default=1.0, - type=float, - help="weight for next sentence prediction" " loss (default 1)", - ) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - lm_logits, output_metadata = model(**sample["net_input"]) - - # reshape lm_logits from (N,T,C) to (N*T,C) - lm_logits = lm_logits.view(-1, lm_logits.size(-1)) - lm_targets = sample["lm_target"].view(-1) - lm_loss = compute_cross_entropy_loss(lm_logits, lm_targets, self.padding_idx) - - # compute the number of tokens for which loss is computed. This is used - # to normalize the loss - ntokens = utils.strip_pad(lm_targets, self.padding_idx).numel() - loss = lm_loss / ntokens - nsentences = sample["nsentences"] - # nsentences = 0 - - # Compute sentence loss if masked_lm_only is False - sentence_loss = None - if not self.masked_lm_only: - sentence_logits = output_metadata["sentence_logits"] - sentence_targets = sample["sentence_target"].view(-1) - # This needs to be recomputed due to some differences between - # TokenBlock and BlockPair dataset. This can be resolved with a - # refactor of BERTModel which we will do in the future. - # TODO: Remove this after refactor of BERTModel - nsentences = sentence_targets.size(0) - - # Check for logits being none which can happen when remove_heads - # is set to true in the BERT model. Ideally we should set - # masked_lm_only to true in this case, but that requires some - # refactor in the BERT model. - if sentence_logits is not None: - sentence_loss = compute_cross_entropy_loss( - sentence_logits, sentence_targets - ) - - loss += self.nsp_loss_weight * (sentence_loss / nsentences) - - # NOTE: as we are summing up per token mlm loss and per sentence nsp loss - # we don't need to use sample_size as denominator for the gradient - # here sample_size is just used for logging - sample_size = 1 - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "lm_loss": utils.item(lm_loss.data) if reduce else lm_loss.data, - # sentence loss is not always computed - "sentence_loss": ( - (utils.item(sentence_loss.data) if reduce else sentence_loss.data) - if sentence_loss is not None - else 0.0 - ), - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - lm_loss_sum = sum(log.get("lm_loss", 0) for log in logging_outputs) - sentence_loss_sum = sum(log.get("sentence_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - agg_loss = sum(log.get("loss", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", - agg_loss / sample_size / math.log(2) if sample_size > 0 else 0.0, - sample_size, - round=3, - ) - metrics.log_scalar( - "lm_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - metrics.log_scalar( - "sentence_loss", - sentence_loss_sum / nsentences / math.log(2) if nsentences > 0 else 0.0, - nsentences, - round=3, - ) - metrics.log_scalar( - "nll_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. 
Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py deleted file mode 100644 index 3279dae89a8bca95178bbe1285d3cb334890b12f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py +++ /dev/null @@ -1,287 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import mmap -import os -import shutil -import struct -import typing as tp -from functools import lru_cache - -import numpy as np -import torch -from fairseq.data import indexed_dataset -from fairseq.data.huffman import HuffmanCoder -from fairseq.file_io import PathManager - - -class HuffmanMMapIndex: - """ - keep an index of the offsets in the huffman binary file. - First a header, then the list of sizes (num tokens) for each instance and finally - the addresses of each instance. - """ - - _HDR_MAGIC = b"HUFFIDX\x00\x00" - _VERSION = 1 - - @classmethod - def writer(cls, path: str, data_len: int): - class _Writer: - def __enter__(self): - self._file = open(path, "wb") - - # write header (magic + version) - self._file.write(cls._HDR_MAGIC) - self._file.write(struct.pack(" None: - self._path_prefix = path_prefix - self._coder = coder - self._sizes = [] - self._ptrs = [] - self._data_len = 0 - - def open(self): - self._coder.to_file(vocab_file_path(self._path_prefix)) - self._data_file = open(indexed_dataset.data_file_path(self._path_prefix), "wb") - - def __enter__(self) -> "HuffmanMMapIndexedDatasetBuilder": - self.open() - return self - - def add_item(self, tokens: tp.List[str]) -> None: - """ - add a list of tokens to the dataset, they will compressed with the - provided coder before being written to file. - """ - encoded = self._coder.encode(tokens) - code_len = len(encoded) - last_ptr = 0 - if len(self._ptrs) > 0: - last_ptr = self._ptrs[-1] - self._sizes.append(len(tokens)) - self._ptrs.append(last_ptr + code_len) - self._data_len += code_len - self._data_file.write(encoded) - - def append(self, other_dataset_path_prefix: str) -> None: - """ - append an existing dataset. - Beware, if it wasn't built with the same coder, you are in trouble. - """ - other_index = HuffmanMMapIndex( - indexed_dataset.index_file_path(other_dataset_path_prefix) - ) - for (ptr, size) in other_index: - self._ptrs.append(ptr + self._data_len) - self._sizes.append(size) - - # Concatenate data - with open(indexed_dataset.data_file_path(other_dataset_path_prefix), "rb") as f: - shutil.copyfileobj(f, self._data_file) - - self._data_len += other_index.data_len - - def close(self): - self._data_file.close() - with HuffmanMMapIndex.writer( - indexed_dataset.index_file_path(self._path_prefix), self._data_len - ) as index: - index.write(self._sizes, self._ptrs) - - def __exit__(self, exc_type, exc_val, exc_tb) -> None: - self.close() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/same_pad.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/same_pad.py deleted file mode 100644 index 4c04990ea6fdb291f162ee8ac3d17a92483daf8e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/same_pad.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from torch import nn - - -class SamePad(nn.Module): - def __init__(self, kernel_size, causal=False): - super().__init__() - if causal: - self.remove = kernel_size - 1 - else: - self.remove = 1 if kernel_size % 2 == 0 else 0 - - def forward(self, x): - if self.remove > 0: - x = x[:, :, : -self.remove] - return x diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_constraints.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_constraints.py deleted file mode 100644 index 1c37f7e1fb26d8ea5349fedd3a60f566d09cf598..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_constraints.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys -import unittest - -import torch -from fairseq.token_generation_constraints import * - - -def tensorize(constraints: List[List[int]]) -> torch.Tensor: - return [torch.tensor(x) for x in constraints] - - -class TestHelperRoutines(unittest.TestCase): - def setUp(self): - self.examples = [ - ([[]], torch.tensor([[0]])), - ([[], []], torch.tensor([[0], [0]])), - ([[torch.tensor([1, 2])], []], torch.tensor([[1, 1, 2, 0], [0, 0, 0, 0]])), - ( - [ - [ - torch.tensor([3, 1, 2]), - torch.tensor([3]), - torch.tensor([4, 5, 6, 7]), - ], - [], - [torch.tensor([1, 8, 9, 10, 1, 4, 11, 12])], - ], - torch.tensor( - [ - [3, 3, 1, 2, 0, 3, 0, 4, 5, 6, 7, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 1, 8, 9, 10, 1, 4, 11, 12, 0, 0, 0], - ] - ), - ), - ] - - def test_packing(self): - """Ensures the list of lists of tensors gets packed correctly.""" - for batch_constraints, expected_tensor in self.examples: - packed = pack_constraints(batch_constraints) - assert torch.equal(packed, expected_tensor) - - -class TestUnorderedConstraintState(unittest.TestCase): - def setUp(self): - # Tuples of (contraint set, expected printed graph, token counts per node) - self.examples = [ - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - "([None].False#6 ([1].True#4 ([2].False#1 [3].True#1) [3].True#1 [4].True#1) ([4].False#2 ([5].True#2 ([6].False#1 [7].True#1))))", - {1: 4, 2: 1, 3: 2, 4: 3, 5: 2, 6: 1, 7: 1}, - ), - ([], "[None].False#0", {}), - (tensorize([[0]]), "([None].False#1 [0].True#1)", {0: 1}), - ( - tensorize([[100000, 1, 2, 3, 4, 5]]), - "([None].False#1 ([100000].False#1 ([1].False#1 ([2].False#1 ([3].False#1 ([4].False#1 [5].True#1))))))", - {100000: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}, - ), - ( - tensorize([[1, 2], [1, 2]]), - "([None].False#2 ([1].False#2 [2].True#2))", - {1: 2, 2: 2}, - ), - ( - tensorize([[1, 2], [3, 4]]), - "([None].False#2 ([1].False#1 [2].True#1) ([3].False#1 [4].True#1))", - {1: 1, 2: 1, 3: 1, 4: 1}, - ), - ] - - self.sequences = [ - ( - self.examples[0][0], - [], - {"bank": 0, "num_completed": 0, "finished": False, "is_root": True}, - ), - ( - self.examples[0][0], - [1, 2], - {"bank": 2, "num_completed": 0, "finished": False, "is_root": False}, - ), - ( - self.examples[0][0], - [1, 2, 94], - {"bank": 1, "num_completed": 1, "finished": False, "is_root": True}, - ), - ( - self.examples[0][0], - [1, 3, 999, 1, 4], - {"bank": 4, "num_completed": 2, "finished": False, "is_root": False}, - ), - ( - self.examples[0][0], - [1, 3, 999, 1, 4, 999], - {"bank": 4, "num_completed": 2, "finished": False, 
"is_root": True}, - ), - ( - self.examples[0][0], - [4, 5, 6, 8], - {"bank": 2, "num_completed": 1, "finished": False, "is_root": True}, - ), - ( - self.examples[0][0], - # Tricky, because in last three, goes down [1->4] branch, could miss [1] and [4->5] - # [[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]], - [1, 2, 3, 1, 3, 1, 4, 4, 5, 6, 7, 1, 4, 5], - {"bank": 14, "num_completed": 6, "finished": True, "is_root": False}, - ), - ( - self.examples[0][0], - [1, 2, 3, 999, 1, 3, 1, 4, 4, 5, 6, 7, 1, 4, 5, 117], - {"bank": 14, "num_completed": 6, "finished": True, "is_root": True}, - ), - ( - tensorize([[1], [2, 3]]), - # Should not be able to get credit for entering 1 a second time - [1, 1], - {"bank": 1, "num_completed": 1, "finished": False, "is_root": True}, - ), - ( - self.examples[4][0], - [1, 2, 1, 2], - {"bank": 4, "num_completed": 2, "finished": True, "is_root": False}, - ), - ( - self.examples[4][0], - [1, 2, 1, 2, 1], - {"bank": 4, "num_completed": 2, "finished": True, "is_root": True}, - ), - ( - self.examples[5][0], - [1, 2, 3, 4, 5], - {"bank": 4, "num_completed": 2, "finished": True, "is_root": True}, - ), - ] - - def test_graphs(self): - """ - Test whether unordered graph systems are created correctly. - """ - for example in self.examples: - constraints, expected, gold_counts = example - c = ConstraintNode.create(constraints) - assert ( - ConstraintNode.print_graph(c) == expected - ), f"got {ConstraintNode.print_graph(c)}, expected {expected}" - assert ( - c.token_counts() == gold_counts - ), f"{c} got {c.token_counts()} wanted {gold_counts}" - - def test_next_tokens(self): - """ - Tests that the set of next tokens is correct. - """ - for example in self.examples: - constraints, expected, gold_counts = example - root = ConstraintNode.create(constraints) - - root_tokens = set(root.children.keys()) - for sequence in constraints: - state = UnorderedConstraintState(root) - for token in sequence: - all_tokens = root_tokens.union(state.node.children.keys()) - assert ( - all_tokens == state.next_tokens() - ), f"ALL {all_tokens} NEXT {state.next_tokens()}" - state = state.advance(token) - - def test_sequences(self): - for constraints, tokens, expected in self.sequences: - state = UnorderedConstraintState.create(pack_constraints([constraints])[0]) - for token in tokens: - state = state.advance(token) - result = {} - for attr in expected.keys(): - result[attr] = getattr(state, attr) - - assert ( - result == expected - ), f"TEST({tokens}) GOT: {result} WANTED: {expected}" - - -class TestOrderedConstraintState(unittest.TestCase): - def setUp(self): - self.sequences = [ - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - [], - {"bank": 0, "num_completed": 0, "finished": False, "is_root": True}, - ), - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - [1, 2], - {"bank": 2, "num_completed": 0, "finished": False, "is_root": False}, - ), - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - [1, 2, 94], - {"bank": 0, "num_completed": 0, "finished": False, "is_root": True}, - ), - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - [1, 3, 999, 1, 4], - {"bank": 0, "num_completed": 0, "finished": False, "is_root": True}, - ), - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - [1, 2, 3, 999, 999], - {"bank": 3, "num_completed": 1, "finished": False, "is_root": False}, - ), - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - [1, 2, 3, 77, 1, 3, 1], - {"bank": 
6, "num_completed": 2, "finished": False, "is_root": False}, - ), - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - [1, 2, 3, 1, 3, 1, 4, 4, 5, 6, 7, 1, 4, 5], - {"bank": 14, "num_completed": 6, "finished": True, "is_root": False}, - ), - ( - tensorize([[1, 2, 3], [1, 3], [1, 4], [4, 5, 6, 7], [1], [4, 5]]), - [1, 2, 999, 1, 2, 3, 999, 1, 3, 1, 4, 4, 5, 6, 7, 1, 4, 5, 117], - {"bank": 14, "num_completed": 6, "finished": True, "is_root": False}, - ), - ( - tensorize([[1], [2, 3]]), - [1, 1], - {"bank": 1, "num_completed": 1, "finished": False, "is_root": False}, - ), - ( - tensorize([[1, 2], [1, 2]]), - [1, 2, 1, 2], - {"bank": 4, "num_completed": 2, "finished": True, "is_root": False}, - ), - ( - tensorize([[1, 2], [1, 2]]), - [1, 2, 1, 2, 1], - {"bank": 4, "num_completed": 2, "finished": True, "is_root": False}, - ), - ( - tensorize([[1, 2], [3, 4]]), - [1, 2, 3, 4, 5], - {"bank": 4, "num_completed": 2, "finished": True, "is_root": False}, - ), - ] - - def test_sequences(self): - for i, (constraints, tokens, expected) in enumerate(self.sequences): - state = OrderedConstraintState.create(pack_constraints([constraints])[0]) - for token in tokens: - state = state.advance(token) - result = {} - for attr in expected.keys(): - result[attr] = getattr(state, attr) - assert ( - result == expected - ), f"TEST({tokens}) GOT: {result} WANTED: {expected}" - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-vqa/tasks/__init__.py b/spaces/OFA-Sys/OFA-vqa/tasks/__init__.py deleted file mode 100644 index 6a7fcab34c0736c74aae787a4082ddaa9cafa591..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/tasks/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .mm_tasks import * -from .ofa_task import OFATask \ No newline at end of file diff --git a/spaces/ORI-Muchim/BlueArchiveTTS/export_model.py b/spaces/ORI-Muchim/BlueArchiveTTS/export_model.py deleted file mode 100644 index 98a49835df5a7a2486e76ddf94fbbb4444b52203..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/BlueArchiveTTS/export_model.py +++ /dev/null @@ -1,13 +0,0 @@ -import torch - -if __name__ == '__main__': - model_path = "saved_model/11/model.pth" - output_path = "saved_model/11/model1.pth" - checkpoint_dict = torch.load(model_path, map_location='cpu') - checkpoint_dict_new = {} - for k, v in checkpoint_dict.items(): - if k == "optimizer": - print("remove optimizer") - continue - checkpoint_dict_new[k] = v - torch.save(checkpoint_dict_new, output_path) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py deleted file mode 100644 index 0604feaaf42ffd072e3cb91f395204f818fa709a..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py +++ /dev/null @@ -1,380 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import copy -import itertools -import json -import logging -import os -import pickle -from collections import OrderedDict -import torch - -import detectron2.utils.comm as comm -from detectron2.config import CfgNode -from detectron2.data import MetadataCatalog -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import create_small_table - -from .coco_evaluation import instances_to_coco_json -from .evaluator import DatasetEvaluator - - -class LVISEvaluator(DatasetEvaluator): - """ - Evaluate object proposal and instance detection/segmentation outputs using - LVIS's metrics and evaluation API. - """ - - def __init__( - self, - dataset_name, - tasks=None, - distributed=True, - output_dir=None, - *, - max_dets_per_image=None, - ): - """ - Args: - dataset_name (str): name of the dataset to be evaluated. - It must have the following corresponding metadata: - "json_file": the path to the LVIS format annotation - tasks (tuple[str]): tasks that can be evaluated under the given - configuration. A task is one of "bbox", "segm". - By default, will infer this automatically from predictions. - distributed (True): if True, will collect results from all ranks for evaluation. - Otherwise, will evaluate the results in the current process. - output_dir (str): optional, an output directory to dump results. - max_dets_per_image (None or int): limit on maximum detections per image in evaluating AP - This limit, by default of the LVIS dataset, is 300. - """ - from lvis import LVIS - - self._logger = logging.getLogger(__name__) - - if tasks is not None and isinstance(tasks, CfgNode): - self._logger.warn( - "COCO Evaluator instantiated using config, this is deprecated behavior." - " Please pass in explicit arguments instead." - ) - self._tasks = None # Infering it from predictions should be better - else: - self._tasks = tasks - - self._distributed = distributed - self._output_dir = output_dir - self._max_dets_per_image = max_dets_per_image - - self._cpu_device = torch.device("cpu") - - self._metadata = MetadataCatalog.get(dataset_name) - json_file = PathManager.get_local_path(self._metadata.json_file) - self._lvis_api = LVIS(json_file) - # Test set json files do not contain annotations (evaluation must be - # performed using the LVIS evaluation server). - self._do_evaluation = len(self._lvis_api.get_ann_ids()) > 0 - - def reset(self): - self._predictions = [] - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a LVIS model (e.g., GeneralizedRCNN). - It is a list of dict. Each dict corresponds to an image and - contains keys like "height", "width", "file_name", "image_id". - outputs: the outputs of a LVIS model. It is a list of dicts with key - "instances" that contains :class:`Instances`. 
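            Only the "image_id", "instances" and "proposals" fields are used
            here; results are moved to the CPU and buffered in
            self._predictions until evaluate() is called.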
- """ - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - prediction["instances"] = instances_to_coco_json(instances, input["image_id"]) - if "proposals" in output: - prediction["proposals"] = output["proposals"].to(self._cpu_device) - self._predictions.append(prediction) - - def evaluate(self): - if self._distributed: - comm.synchronize() - predictions = comm.gather(self._predictions, dst=0) - predictions = list(itertools.chain(*predictions)) - - if not comm.is_main_process(): - return - else: - predictions = self._predictions - - if len(predictions) == 0: - self._logger.warning("[LVISEvaluator] Did not receive valid predictions.") - return {} - - if self._output_dir: - PathManager.mkdirs(self._output_dir) - file_path = os.path.join(self._output_dir, "instances_predictions.pth") - with PathManager.open(file_path, "wb") as f: - torch.save(predictions, f) - - self._results = OrderedDict() - if "proposals" in predictions[0]: - self._eval_box_proposals(predictions) - if "instances" in predictions[0]: - self._eval_predictions(predictions) - # Copy so the caller can do whatever with results - return copy.deepcopy(self._results) - - def _tasks_from_predictions(self, predictions): - for pred in predictions: - if "segmentation" in pred: - return ("bbox", "segm") - return ("bbox",) - - def _eval_predictions(self, predictions): - """ - Evaluate predictions. Fill self._results with the metrics of the tasks. - - Args: - predictions (list[dict]): list of outputs from the model - """ - self._logger.info("Preparing results in the LVIS format ...") - lvis_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(lvis_results) - - # LVIS evaluator can be used to evaluate results for COCO dataset categories. - # In this case `_metadata` variable will have a field with COCO-specific category mapping. - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - reverse_id_mapping = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - for result in lvis_results: - result["category_id"] = reverse_id_mapping[result["category_id"]] - else: - # unmap the category ids for LVIS (from 0-indexed to 1-indexed) - for result in lvis_results: - result["category_id"] += 1 - - if self._output_dir: - file_path = os.path.join(self._output_dir, "lvis_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(lvis_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating predictions ...") - for task in sorted(tasks): - res = _evaluate_predictions_on_lvis( - self._lvis_api, - lvis_results, - task, - max_dets_per_image=self._max_dets_per_image, - class_names=self._metadata.get("thing_classes"), - ) - self._results[task] = res - - def _eval_box_proposals(self, predictions): - """ - Evaluate the box proposals in predictions. - Fill self._results with the metrics for "box_proposals" task. - """ - if self._output_dir: - # Saving generated box proposals to file. - # Predicted box_proposals are in XYXY_ABS mode. 
- bbox_mode = BoxMode.XYXY_ABS.value - ids, boxes, objectness_logits = [], [], [] - for prediction in predictions: - ids.append(prediction["image_id"]) - boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy()) - objectness_logits.append(prediction["proposals"].objectness_logits.numpy()) - - proposal_data = { - "boxes": boxes, - "objectness_logits": objectness_logits, - "ids": ids, - "bbox_mode": bbox_mode, - } - with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f: - pickle.dump(proposal_data, f) - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating bbox proposals ...") - res = {} - areas = {"all": "", "small": "s", "medium": "m", "large": "l"} - for limit in [100, 1000]: - for area, suffix in areas.items(): - stats = _evaluate_box_proposals(predictions, self._lvis_api, area=area, limit=limit) - key = "AR{}@{:d}".format(suffix, limit) - res[key] = float(stats["ar"].item() * 100) - self._logger.info("Proposal metrics: \n" + create_small_table(res)) - self._results["box_proposals"] = res - - -# inspired from Detectron: -# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa -def _evaluate_box_proposals(dataset_predictions, lvis_api, thresholds=None, area="all", limit=None): - """ - Evaluate detection proposal recall metrics. This function is a much - faster alternative to the official LVIS API recall evaluation code. However, - it produces slightly different results. - """ - # Record max overlap value for each gt box - # Return vector of overlap values - areas = { - "all": 0, - "small": 1, - "medium": 2, - "large": 3, - "96-128": 4, - "128-256": 5, - "256-512": 6, - "512-inf": 7, - } - area_ranges = [ - [0 ** 2, 1e5 ** 2], # all - [0 ** 2, 32 ** 2], # small - [32 ** 2, 96 ** 2], # medium - [96 ** 2, 1e5 ** 2], # large - [96 ** 2, 128 ** 2], # 96-128 - [128 ** 2, 256 ** 2], # 128-256 - [256 ** 2, 512 ** 2], # 256-512 - [512 ** 2, 1e5 ** 2], - ] # 512-inf - assert area in areas, "Unknown area range: {}".format(area) - area_range = area_ranges[areas[area]] - gt_overlaps = [] - num_pos = 0 - - for prediction_dict in dataset_predictions: - predictions = prediction_dict["proposals"] - - # sort predictions in descending order - # TODO maybe remove this and make it explicit in the documentation - inds = predictions.objectness_logits.sort(descending=True)[1] - predictions = predictions[inds] - - ann_ids = lvis_api.get_ann_ids(img_ids=[prediction_dict["image_id"]]) - anno = lvis_api.load_anns(ann_ids) - gt_boxes = [ - BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS) for obj in anno - ] - gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes - gt_boxes = Boxes(gt_boxes) - gt_areas = torch.as_tensor([obj["area"] for obj in anno]) - - if len(gt_boxes) == 0 or len(predictions) == 0: - continue - - valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1]) - gt_boxes = gt_boxes[valid_gt_inds] - - num_pos += len(gt_boxes) - - if len(gt_boxes) == 0: - continue - - if limit is not None and len(predictions) > limit: - predictions = predictions[:limit] - - overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes) - - _gt_overlaps = torch.zeros(len(gt_boxes)) - for j in range(min(len(predictions), len(gt_boxes))): - # find which proposal box maximally covers each gt box - # and get the iou amount of coverage for each gt box - 
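            # overlaps has shape (num proposals, num gt boxes); each iteration
            # greedily picks the gt box whose best proposal IoU is highest,
            # records that IoU, then sets the matched proposal row and gt
            # column to -1 so neither can be matched again.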
max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # find which gt box is 'best' covered (i.e. 'best' = most iou) - gt_ovr, gt_ind = max_overlaps.max(dim=0) - assert gt_ovr >= 0 - # find the proposal box that covers the best covered gt box - box_ind = argmax_overlaps[gt_ind] - # record the iou coverage of this gt box - _gt_overlaps[j] = overlaps[box_ind, gt_ind] - assert _gt_overlaps[j] == gt_ovr - # mark the proposal box and the gt box as used - overlaps[box_ind, :] = -1 - overlaps[:, gt_ind] = -1 - - # append recorded iou coverage level - gt_overlaps.append(_gt_overlaps) - gt_overlaps = ( - torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32) - ) - gt_overlaps, _ = torch.sort(gt_overlaps) - - if thresholds is None: - step = 0.05 - thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32) - recalls = torch.zeros_like(thresholds) - # compute recall for each iou threshold - for i, t in enumerate(thresholds): - recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos) - # ar = 2 * np.trapz(recalls, thresholds) - ar = recalls.mean() - return { - "ar": ar, - "recalls": recalls, - "thresholds": thresholds, - "gt_overlaps": gt_overlaps, - "num_pos": num_pos, - } - - -def _evaluate_predictions_on_lvis( - lvis_gt, lvis_results, iou_type, max_dets_per_image=None, class_names=None -): - """ - Args: - iou_type (str): - max_dets_per_image (None or int): limit on maximum detections per image in evaluating AP - This limit, by default of the LVIS dataset, is 300. - class_names (None or list[str]): if provided, will use it to predict - per-category AP. - - Returns: - a dict of {metric name: score} - """ - metrics = { - "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"], - "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"], - }[iou_type] - - logger = logging.getLogger(__name__) - - if len(lvis_results) == 0: # TODO: check if needed - logger.warn("No predictions from the model!") - return {metric: float("nan") for metric in metrics} - - if iou_type == "segm": - lvis_results = copy.deepcopy(lvis_results) - # When evaluating mask AP, if the results contain bbox, LVIS API will - # use the box area as the area of the instance, instead of the mask area. - # This leads to a different definition of small/medium/large. - # We remove the bbox field to let mask AP use mask area. 
- for c in lvis_results: - c.pop("bbox", None) - - if max_dets_per_image is None: - max_dets_per_image = 300 # Default for LVIS dataset - - from lvis import LVISEval, LVISResults - - logger.info(f"Evaluating with max detections per image = {max_dets_per_image}") - lvis_results = LVISResults(lvis_gt, lvis_results, max_dets=max_dets_per_image) - lvis_eval = LVISEval(lvis_gt, lvis_results, iou_type) - lvis_eval.run() - lvis_eval.print_results() - - # Pull the standard metrics from the LVIS results - results = lvis_eval.get_results() - results = {metric: float(results[metric] * 100) for metric in metrics} - logger.info("Evaluation results for {}: \n".format(iou_type) + create_small_table(results)) - return results diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/rots2joints/__init__.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/rots2joints/__init__.py deleted file mode 100644 index 7719c7018469a7c97a944d8d6d2113ef21ad01ab..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/rots2joints/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .base import Rots2Joints -from .smplh import SMPLH -from .smplx import SMPLX diff --git a/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/platforms/__init__.py b/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/platforms/__init__.py deleted file mode 100644 index 7837fd5fdeccab5e48c85e41d20b238ea7396599..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/platforms/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -"""Platforms for generating offscreen OpenGL contexts for rendering. - -Author: Matthew Matl -""" - -from .base import Platform diff --git a/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/viewer.py b/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/viewer.py deleted file mode 100644 index d2326c38205c6eaddb4f567e3b088329187af258..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/viewer.py +++ /dev/null @@ -1,1160 +0,0 @@ -"""A pyglet-based interactive 3D scene viewer. -""" -import copy -import os -import sys -from threading import Thread, RLock -import time - -import imageio -import numpy as np -import OpenGL -import trimesh - -try: - from Tkinter import Tk, tkFileDialog as filedialog -except Exception: - try: - from tkinter import Tk, filedialog as filedialog - except Exception: - pass - -from .constants import (TARGET_OPEN_GL_MAJOR, TARGET_OPEN_GL_MINOR, - MIN_OPEN_GL_MAJOR, MIN_OPEN_GL_MINOR, - TEXT_PADDING, DEFAULT_SCENE_SCALE, - DEFAULT_Z_FAR, DEFAULT_Z_NEAR, RenderFlags, TextAlign) -from .light import DirectionalLight -from .node import Node -from .camera import PerspectiveCamera, OrthographicCamera, IntrinsicsCamera -from .trackball import Trackball -from .renderer import Renderer -from .mesh import Mesh - -import pyglet -from pyglet import clock -pyglet.options['shadow_window'] = False - - -class Viewer(pyglet.window.Window): - """An interactive viewer for 3D scenes. - - The viewer's camera is separate from the scene's, but will take on - the parameters of the scene's main view camera and start in the same pose. - If the scene does not have a camera, a suitable default will be provided. - - Parameters - ---------- - scene : :class:`Scene` - The scene to visualize. - viewport_size : (2,) int - The width and height of the initial viewing window. - render_flags : dict - A set of flags for rendering the scene. Described in the note below. 
- viewer_flags : dict - A set of flags for controlling the viewer's behavior. - Described in the note below. - registered_keys : dict - A map from ASCII key characters to tuples containing: - - - A function to be called whenever the key is pressed, - whose first argument will be the viewer itself. - - (Optionally) A list of additional positional arguments - to be passed to the function. - - (Optionally) A dict of keyword arguments to be passed - to the function. - - kwargs : dict - Any keyword arguments left over will be interpreted as belonging to - either the :attr:`.Viewer.render_flags` or :attr:`.Viewer.viewer_flags` - dictionaries. Those flag sets will be updated appropriately. - - Note - ---- - The basic commands for moving about the scene are given as follows: - - - **Rotating about the scene**: Hold the left mouse button and - drag the cursor. - - **Rotating about the view axis**: Hold ``CTRL`` and the left mouse - button and drag the cursor. - - **Panning**: - - - Hold SHIFT, then hold the left mouse button and drag the cursor, or - - Hold the middle mouse button and drag the cursor. - - - **Zooming**: - - - Scroll the mouse wheel, or - - Hold the right mouse button and drag the cursor. - - Other keyboard commands are as follows: - - - ``a``: Toggles rotational animation mode. - - ``c``: Toggles backface culling. - - ``f``: Toggles fullscreen mode. - - ``h``: Toggles shadow rendering. - - ``i``: Toggles axis display mode - (no axes, world axis, mesh axes, all axes). - - ``l``: Toggles lighting mode - (scene lighting, Raymond lighting, or direct lighting). - - ``m``: Toggles face normal visualization. - - ``n``: Toggles vertex normal visualization. - - ``o``: Toggles orthographic mode. - - ``q``: Quits the viewer. - - ``r``: Starts recording a GIF, and pressing again stops recording - and opens a file dialog. - - ``s``: Opens a file dialog to save the current view as an image. - - ``w``: Toggles wireframe mode - (scene default, flip wireframes, all wireframe, or all solid). - - ``z``: Resets the camera to the initial view. - - Note - ---- - The valid keys for ``render_flags`` are as follows: - - - ``flip_wireframe``: `bool`, If `True`, all objects will have their - wireframe modes flipped from what their material indicates. - Defaults to `False`. - - ``all_wireframe``: `bool`, If `True`, all objects will be rendered - in wireframe mode. Defaults to `False`. - - ``all_solid``: `bool`, If `True`, all objects will be rendered in - solid mode. Defaults to `False`. - - ``shadows``: `bool`, If `True`, shadows will be rendered. - Defaults to `False`. - - ``vertex_normals``: `bool`, If `True`, vertex normals will be - rendered as blue lines. Defaults to `False`. - - ``face_normals``: `bool`, If `True`, face normals will be rendered as - blue lines. Defaults to `False`. - - ``cull_faces``: `bool`, If `True`, backfaces will be culled. - Defaults to `True`. - - ``point_size`` : float, The point size in pixels. Defaults to 1px. - - Note - ---- - The valid keys for ``viewer_flags`` are as follows: - - - ``rotate``: `bool`, If `True`, the scene's camera will rotate - about an axis. Defaults to `False`. - - ``rotate_rate``: `float`, The rate of rotation in radians per second. - Defaults to `PI / 3.0`. - - ``rotate_axis``: `(3,) float`, The axis in world coordinates to rotate - about. Defaults to ``[0,0,1]``. - - ``view_center``: `(3,) float`, The position to rotate the scene about. - Defaults to the scene's centroid. 
- - ``use_raymond_lighting``: `bool`, If `True`, an additional set of three - directional lights that move with the camera will be added to the scene. - Defaults to `False`. - - ``use_direct_lighting``: `bool`, If `True`, an additional directional - light that moves with the camera and points out of it will be added to - the scene. Defaults to `False`. - - ``lighting_intensity``: `float`, The overall intensity of the - viewer's additional lights (when they're in use). Defaults to 3.0. - - ``use_perspective_cam``: `bool`, If `True`, a perspective camera will - be used. Otherwise, an orthographic camera is used. Defaults to `True`. - - ``save_directory``: `str`, A directory to open the file dialogs in. - Defaults to `None`. - - ``window_title``: `str`, A title for the viewer's application window. - Defaults to `"Scene Viewer"`. - - ``refresh_rate``: `float`, A refresh rate for rendering, in Hertz. - Defaults to `30.0`. - - ``fullscreen``: `bool`, Whether to make viewer fullscreen. - Defaults to `False`. - - ``show_world_axis``: `bool`, Whether to show the world axis. - Defaults to `False`. - - ``show_mesh_axes``: `bool`, Whether to show the individual mesh axes. - Defaults to `False`. - - ``caption``: `list of dict`, Text caption(s) to display on the viewer. - Defaults to `None`. - - Note - ---- - Animation can be accomplished by running the viewer with ``run_in_thread`` - enabled. Then, just run a loop in your main thread, updating the scene as - needed. Before updating the scene, be sure to acquire the - :attr:`.Viewer.render_lock`, and release it when your update is done. - """ - - def __init__(self, scene, viewport_size=None, - render_flags=None, viewer_flags=None, - registered_keys=None, run_in_thread=False, - auto_start=True, - **kwargs): - - ####################################################################### - # Save attributes and flags - ####################################################################### - if viewport_size is None: - viewport_size = (640, 480) - self._scene = scene - self._viewport_size = viewport_size - self._render_lock = RLock() - self._is_active = False - self._should_close = False - self._run_in_thread = run_in_thread - self._auto_start = auto_start - - self._default_render_flags = { - 'flip_wireframe': False, - 'all_wireframe': False, - 'all_solid': False, - 'shadows': False, - 'vertex_normals': False, - 'face_normals': False, - 'cull_faces': True, - 'point_size': 1.0, - } - self._default_viewer_flags = { - 'mouse_pressed': False, - 'rotate': False, - 'rotate_rate': np.pi / 3.0, - 'rotate_axis': np.array([0.0, 0.0, 1.0]), - 'view_center': None, - 'record': False, - 'use_raymond_lighting': False, - 'use_direct_lighting': False, - 'lighting_intensity': 3.0, - 'use_perspective_cam': True, - 'save_directory': None, - 'window_title': 'Scene Viewer', - 'refresh_rate': 30.0, - 'fullscreen': False, - 'show_world_axis': False, - 'show_mesh_axes': False, - 'caption': None - } - self._render_flags = self._default_render_flags.copy() - self._viewer_flags = self._default_viewer_flags.copy() - self._viewer_flags['rotate_axis'] = ( - self._default_viewer_flags['rotate_axis'].copy() - ) - - if render_flags is not None: - self._render_flags.update(render_flags) - if viewer_flags is not None: - self._viewer_flags.update(viewer_flags) - - for key in kwargs: - if key in self.render_flags: - self._render_flags[key] = kwargs[key] - elif key in self.viewer_flags: - self._viewer_flags[key] = kwargs[key] - - # TODO MAC OS BUG FOR SHADOWS - if sys.platform == 'darwin': - 
self._render_flags['shadows'] = False - - self._registered_keys = {} - if registered_keys is not None: - self._registered_keys = { - ord(k.lower()): registered_keys[k] for k in registered_keys - } - - ####################################################################### - # Save internal settings - ####################################################################### - - # Set up caption stuff - self._message_text = None - self._ticks_till_fade = 2.0 / 3.0 * self.viewer_flags['refresh_rate'] - self._message_opac = 1.0 + self._ticks_till_fade - - # Set up raymond lights and direct lights - self._raymond_lights = self._create_raymond_lights() - self._direct_light = self._create_direct_light() - - # Set up axes - self._axes = {} - self._axis_mesh = Mesh.from_trimesh( - trimesh.creation.axis(origin_size=0.1, axis_radius=0.05, - axis_length=1.0), smooth=False) - if self.viewer_flags['show_world_axis']: - self._set_axes(world=self.viewer_flags['show_world_axis'], - mesh=self.viewer_flags['show_mesh_axes']) - - ####################################################################### - # Set up camera node - ####################################################################### - self._camera_node = None - self._prior_main_camera_node = None - self._default_camera_pose = None - self._default_persp_cam = None - self._default_orth_cam = None - self._trackball = None - self._saved_frames = [] - - # Extract main camera from scene and set up our mirrored copy - znear = None - zfar = None - if scene.main_camera_node is not None: - n = scene.main_camera_node - camera = copy.copy(n.camera) - if isinstance(camera, (PerspectiveCamera, IntrinsicsCamera)): - self._default_persp_cam = camera - znear = camera.znear - zfar = camera.zfar - elif isinstance(camera, OrthographicCamera): - self._default_orth_cam = camera - znear = camera.znear - zfar = camera.zfar - self._default_camera_pose = scene.get_pose(scene.main_camera_node) - self._prior_main_camera_node = n - - # Set defaults as needed - if zfar is None: - zfar = max(scene.scale * 10.0, DEFAULT_Z_FAR) - if znear is None or znear == 0: - if scene.scale == 0: - znear = DEFAULT_Z_NEAR - else: - znear = min(scene.scale / 10.0, DEFAULT_Z_NEAR) - - if self._default_persp_cam is None: - self._default_persp_cam = PerspectiveCamera( - yfov=np.pi / 3.0, znear=znear, zfar=zfar - ) - if self._default_orth_cam is None: - xmag = ymag = scene.scale - if scene.scale == 0: - xmag = ymag = 1.0 - self._default_orth_cam = OrthographicCamera( - xmag=xmag, ymag=ymag, - znear=znear, - zfar=zfar - ) - if self._default_camera_pose is None: - self._default_camera_pose = self._compute_initial_camera_pose() - - # Pick camera - if self.viewer_flags['use_perspective_cam']: - camera = self._default_persp_cam - else: - camera = self._default_orth_cam - - self._camera_node = Node( - matrix=self._default_camera_pose, camera=camera - ) - scene.add_node(self._camera_node) - scene.main_camera_node = self._camera_node - self._reset_view() - - ####################################################################### - # Initialize OpenGL context and renderer - ####################################################################### - self._renderer = Renderer( - self._viewport_size[0], self._viewport_size[1], - self.render_flags['point_size'] - ) - self._is_active = True - - if self.run_in_thread: - self._thread = Thread(target=self._init_and_start_app) - self._thread.start() - else: - if auto_start: - self._init_and_start_app() - - def start(self): - self._init_and_start_app() - - @property - 
def scene(self): - """:class:`.Scene` : The scene being visualized. - """ - return self._scene - - @property - def viewport_size(self): - """(2,) int : The width and height of the viewing window. - """ - return self._viewport_size - - @property - def render_lock(self): - """:class:`threading.RLock` : If acquired, prevents the viewer from - rendering until released. - - Run :meth:`.Viewer.render_lock.acquire` before making updates to - the scene in a different thread, and run - :meth:`.Viewer.render_lock.release` once you're done to let the viewer - continue. - """ - return self._render_lock - - @property - def is_active(self): - """bool : `True` if the viewer is active, or `False` if it has - been closed. - """ - return self._is_active - - @property - def run_in_thread(self): - """bool : Whether the viewer was run in a separate thread. - """ - return self._run_in_thread - - @property - def render_flags(self): - """dict : Flags for controlling the renderer's behavior. - - - ``flip_wireframe``: `bool`, If `True`, all objects will have their - wireframe modes flipped from what their material indicates. - Defaults to `False`. - - ``all_wireframe``: `bool`, If `True`, all objects will be rendered - in wireframe mode. Defaults to `False`. - - ``all_solid``: `bool`, If `True`, all objects will be rendered in - solid mode. Defaults to `False`. - - ``shadows``: `bool`, If `True`, shadows will be rendered. - Defaults to `False`. - - ``vertex_normals``: `bool`, If `True`, vertex normals will be - rendered as blue lines. Defaults to `False`. - - ``face_normals``: `bool`, If `True`, face normals will be rendered as - blue lines. Defaults to `False`. - - ``cull_faces``: `bool`, If `True`, backfaces will be culled. - Defaults to `True`. - - ``point_size`` : float, The point size in pixels. Defaults to 1px. - - """ - return self._render_flags - - @render_flags.setter - def render_flags(self, value): - self._render_flags = value - - @property - def viewer_flags(self): - """dict : Flags for controlling the viewer's behavior. - - The valid keys for ``viewer_flags`` are as follows: - - - ``rotate``: `bool`, If `True`, the scene's camera will rotate - about an axis. Defaults to `False`. - - ``rotate_rate``: `float`, The rate of rotation in radians per second. - Defaults to `PI / 3.0`. - - ``rotate_axis``: `(3,) float`, The axis in world coordinates to - rotate about. Defaults to ``[0,0,1]``. - - ``view_center``: `(3,) float`, The position to rotate the scene - about. Defaults to the scene's centroid. - - ``use_raymond_lighting``: `bool`, If `True`, an additional set of - three directional lights that move with the camera will be added to - the scene. Defaults to `False`. - - ``use_direct_lighting``: `bool`, If `True`, an additional directional - light that moves with the camera and points out of it will be - added to the scene. Defaults to `False`. - - ``lighting_intensity``: `float`, The overall intensity of the - viewer's additional lights (when they're in use). Defaults to 3.0. - - ``use_perspective_cam``: `bool`, If `True`, a perspective camera will - be used. Otherwise, an orthographic camera is used. Defaults to - `True`. - - ``save_directory``: `str`, A directory to open the file dialogs in. - Defaults to `None`. - - ``window_title``: `str`, A title for the viewer's application window. - Defaults to `"Scene Viewer"`. - - ``refresh_rate``: `float`, A refresh rate for rendering, in Hertz. - Defaults to `30.0`. - - ``fullscreen``: `bool`, Whether to make viewer fullscreen. - Defaults to `False`. 
- - ``show_world_axis``: `bool`, Whether to show the world axis. - Defaults to `False`. - - ``show_mesh_axes``: `bool`, Whether to show the individual mesh axes. - Defaults to `False`. - - ``caption``: `list of dict`, Text caption(s) to display on - the viewer. Defaults to `None`. - - """ - return self._viewer_flags - - @viewer_flags.setter - def viewer_flags(self, value): - self._viewer_flags = value - - @property - def registered_keys(self): - """dict : Map from ASCII key character to a handler function. - - This is a map from ASCII key characters to tuples containing: - - - A function to be called whenever the key is pressed, - whose first argument will be the viewer itself. - - (Optionally) A list of additional positional arguments - to be passed to the function. - - (Optionally) A dict of keyword arguments to be passed - to the function. - - """ - return self._registered_keys - - @registered_keys.setter - def registered_keys(self, value): - self._registered_keys = value - - def close_external(self): - """Close the viewer from another thread. - - This function will wait for the actual close, so you immediately - manipulate the scene afterwards. - """ - self._should_close = True - while self.is_active: - time.sleep(1.0 / self.viewer_flags['refresh_rate']) - - def save_gif(self, filename=None): - """Save the stored GIF frames to a file. - - To use this asynchronously, run the viewer with the ``record`` - flag and the ``run_in_thread`` flags set. - Kill the viewer after your desired time with - :meth:`.Viewer.close_external`, and then call :meth:`.Viewer.save_gif`. - - Parameters - ---------- - filename : str - The file to save the GIF to. If not specified, - a file dialog will be opened to ask the user where - to save the GIF file. - """ - if filename is None: - filename = self._get_save_filename(['gif', 'all']) - if filename is not None: - self.viewer_flags['save_directory'] = os.path.dirname(filename) - imageio.mimwrite(filename, self._saved_frames, - fps=self.viewer_flags['refresh_rate'], - palettesize=128, subrectangles=True) - self._saved_frames = [] - - def on_close(self): - """Exit the event loop when the window is closed. - """ - # Remove our camera and restore the prior one - if self._camera_node is not None: - self.scene.remove_node(self._camera_node) - if self._prior_main_camera_node is not None: - self.scene.main_camera_node = self._prior_main_camera_node - - # Delete any lighting nodes that we've attached - if self.viewer_flags['use_raymond_lighting']: - for n in self._raymond_lights: - if self.scene.has_node(n): - self.scene.remove_node(n) - if self.viewer_flags['use_direct_lighting']: - if self.scene.has_node(self._direct_light): - self.scene.remove_node(self._direct_light) - - # Delete any axis nodes that we've attached - self._remove_axes() - - # Delete renderer - if self._renderer is not None: - self._renderer.delete() - self._renderer = None - - # Force clean-up of OpenGL context data - try: - OpenGL.contextdata.cleanupContext() - self.close() - except Exception: - pass - finally: - self._is_active = False - super(Viewer, self).on_close() - pyglet.app.exit() - - def on_draw(self): - """Redraw the scene into the viewing window. 
- """ - if self._renderer is None: - return - - if self.run_in_thread or not self._auto_start: - self.render_lock.acquire() - - # Make OpenGL context current - self.switch_to() - - # Render the scene - self.clear() - self._render() - - if self._message_text is not None: - self._renderer.render_text( - self._message_text, - self.viewport_size[0] - TEXT_PADDING, - TEXT_PADDING, - font_pt=20, - color=np.array([0.1, 0.7, 0.2, - np.clip(self._message_opac, 0.0, 1.0)]), - align=TextAlign.BOTTOM_RIGHT - ) - - if self.viewer_flags['caption'] is not None: - for caption in self.viewer_flags['caption']: - xpos, ypos = self._location_to_x_y(caption['location']) - self._renderer.render_text( - caption['text'], - xpos, - ypos, - font_name=caption['font_name'], - font_pt=caption['font_pt'], - color=caption['color'], - scale=caption['scale'], - align=caption['location'] - ) - - if self.run_in_thread or not self._auto_start: - self.render_lock.release() - - def on_resize(self, width, height): - """Resize the camera and trackball when the window is resized. - """ - if self._renderer is None: - return - - self._viewport_size = (width, height) - self._trackball.resize(self._viewport_size) - self._renderer.viewport_width = self._viewport_size[0] - self._renderer.viewport_height = self._viewport_size[1] - self.on_draw() - - def on_mouse_press(self, x, y, buttons, modifiers): - """Record an initial mouse press. - """ - self._trackball.set_state(Trackball.STATE_ROTATE) - if (buttons == pyglet.window.mouse.LEFT): - ctrl = (modifiers & pyglet.window.key.MOD_CTRL) - shift = (modifiers & pyglet.window.key.MOD_SHIFT) - if (ctrl and shift): - self._trackball.set_state(Trackball.STATE_ZOOM) - elif ctrl: - self._trackball.set_state(Trackball.STATE_ROLL) - elif shift: - self._trackball.set_state(Trackball.STATE_PAN) - elif (buttons == pyglet.window.mouse.MIDDLE): - self._trackball.set_state(Trackball.STATE_PAN) - elif (buttons == pyglet.window.mouse.RIGHT): - self._trackball.set_state(Trackball.STATE_ZOOM) - - self._trackball.down(np.array([x, y])) - - # Stop animating while using the mouse - self.viewer_flags['mouse_pressed'] = True - - def on_mouse_drag(self, x, y, dx, dy, buttons, modifiers): - """Record a mouse drag. - """ - self._trackball.drag(np.array([x, y])) - - def on_mouse_release(self, x, y, button, modifiers): - """Record a mouse release. - """ - self.viewer_flags['mouse_pressed'] = False - - def on_mouse_scroll(self, x, y, dx, dy): - """Record a mouse scroll. - """ - if self.viewer_flags['use_perspective_cam']: - self._trackball.scroll(dy) - else: - spfc = 0.95 - spbc = 1.0 / 0.95 - sf = 1.0 - if dy > 0: - sf = spfc * dy - elif dy < 0: - sf = - spbc * dy - - c = self._camera_node.camera - xmag = max(c.xmag * sf, 1e-8) - ymag = max(c.ymag * sf, 1e-8 * c.ymag / c.xmag) - c.xmag = xmag - c.ymag = ymag - - def on_key_press(self, symbol, modifiers): - """Record a key press. 
- """ - # First, check for registered key callbacks - if symbol in self.registered_keys: - tup = self.registered_keys[symbol] - callback = None - args = [] - kwargs = {} - if not isinstance(tup, (list, tuple, np.ndarray)): - callback = tup - else: - callback = tup[0] - if len(tup) == 2: - args = tup[1] - if len(tup) == 3: - kwargs = tup[2] - callback(self, *args, **kwargs) - return - - # Otherwise, use default key functions - - # A causes the frame to rotate - self._message_text = None - if symbol == pyglet.window.key.A: - self.viewer_flags['rotate'] = not self.viewer_flags['rotate'] - if self.viewer_flags['rotate']: - self._message_text = 'Rotation On' - else: - self._message_text = 'Rotation Off' - - # C toggles backface culling - elif symbol == pyglet.window.key.C: - self.render_flags['cull_faces'] = ( - not self.render_flags['cull_faces'] - ) - if self.render_flags['cull_faces']: - self._message_text = 'Cull Faces On' - else: - self._message_text = 'Cull Faces Off' - - # F toggles face normals - elif symbol == pyglet.window.key.F: - self.viewer_flags['fullscreen'] = ( - not self.viewer_flags['fullscreen'] - ) - self.set_fullscreen(self.viewer_flags['fullscreen']) - self.activate() - if self.viewer_flags['fullscreen']: - self._message_text = 'Fullscreen On' - else: - self._message_text = 'Fullscreen Off' - - # S toggles shadows - elif symbol == pyglet.window.key.H and sys.platform != 'darwin': - self.render_flags['shadows'] = not self.render_flags['shadows'] - if self.render_flags['shadows']: - self._message_text = 'Shadows On' - else: - self._message_text = 'Shadows Off' - - elif symbol == pyglet.window.key.I: - if (self.viewer_flags['show_world_axis'] and not - self.viewer_flags['show_mesh_axes']): - self.viewer_flags['show_world_axis'] = False - self.viewer_flags['show_mesh_axes'] = True - self._set_axes(False, True) - self._message_text = 'Mesh Axes On' - elif (not self.viewer_flags['show_world_axis'] and - self.viewer_flags['show_mesh_axes']): - self.viewer_flags['show_world_axis'] = True - self.viewer_flags['show_mesh_axes'] = True - self._set_axes(True, True) - self._message_text = 'All Axes On' - elif (self.viewer_flags['show_world_axis'] and - self.viewer_flags['show_mesh_axes']): - self.viewer_flags['show_world_axis'] = False - self.viewer_flags['show_mesh_axes'] = False - self._set_axes(False, False) - self._message_text = 'All Axes Off' - else: - self.viewer_flags['show_world_axis'] = True - self.viewer_flags['show_mesh_axes'] = False - self._set_axes(True, False) - self._message_text = 'World Axis On' - - # L toggles the lighting mode - elif symbol == pyglet.window.key.L: - if self.viewer_flags['use_raymond_lighting']: - self.viewer_flags['use_raymond_lighting'] = False - self.viewer_flags['use_direct_lighting'] = True - self._message_text = 'Direct Lighting' - elif self.viewer_flags['use_direct_lighting']: - self.viewer_flags['use_raymond_lighting'] = False - self.viewer_flags['use_direct_lighting'] = False - self._message_text = 'Default Lighting' - else: - self.viewer_flags['use_raymond_lighting'] = True - self.viewer_flags['use_direct_lighting'] = False - self._message_text = 'Raymond Lighting' - - # M toggles face normals - elif symbol == pyglet.window.key.M: - self.render_flags['face_normals'] = ( - not self.render_flags['face_normals'] - ) - if self.render_flags['face_normals']: - self._message_text = 'Face Normals On' - else: - self._message_text = 'Face Normals Off' - - # N toggles vertex normals - elif symbol == pyglet.window.key.N: - 
self.render_flags['vertex_normals'] = ( - not self.render_flags['vertex_normals'] - ) - if self.render_flags['vertex_normals']: - self._message_text = 'Vert Normals On' - else: - self._message_text = 'Vert Normals Off' - - # O toggles orthographic camera mode - elif symbol == pyglet.window.key.O: - self.viewer_flags['use_perspective_cam'] = ( - not self.viewer_flags['use_perspective_cam'] - ) - if self.viewer_flags['use_perspective_cam']: - camera = self._default_persp_cam - self._message_text = 'Perspective View' - else: - camera = self._default_orth_cam - self._message_text = 'Orthographic View' - - cam_pose = self._camera_node.matrix.copy() - cam_node = Node(matrix=cam_pose, camera=camera) - self.scene.remove_node(self._camera_node) - self.scene.add_node(cam_node) - self.scene.main_camera_node = cam_node - self._camera_node = cam_node - - # Q quits the viewer - elif symbol == pyglet.window.key.Q: - self.on_close() - - # R starts recording frames - elif symbol == pyglet.window.key.R: - if self.viewer_flags['record']: - self.save_gif() - self.set_caption(self.viewer_flags['window_title']) - else: - self.set_caption( - '{} (RECORDING)'.format(self.viewer_flags['window_title']) - ) - self.viewer_flags['record'] = not self.viewer_flags['record'] - - # S saves the current frame as an image - elif symbol == pyglet.window.key.S: - self._save_image() - - # W toggles through wireframe modes - elif symbol == pyglet.window.key.W: - if self.render_flags['flip_wireframe']: - self.render_flags['flip_wireframe'] = False - self.render_flags['all_wireframe'] = True - self.render_flags['all_solid'] = False - self._message_text = 'All Wireframe' - elif self.render_flags['all_wireframe']: - self.render_flags['flip_wireframe'] = False - self.render_flags['all_wireframe'] = False - self.render_flags['all_solid'] = True - self._message_text = 'All Solid' - elif self.render_flags['all_solid']: - self.render_flags['flip_wireframe'] = False - self.render_flags['all_wireframe'] = False - self.render_flags['all_solid'] = False - self._message_text = 'Default Wireframe' - else: - self.render_flags['flip_wireframe'] = True - self.render_flags['all_wireframe'] = False - self.render_flags['all_solid'] = False - self._message_text = 'Flip Wireframe' - - # Z resets the camera viewpoint - elif symbol == pyglet.window.key.Z: - self._reset_view() - - if self._message_text is not None: - self._message_opac = 1.0 + self._ticks_till_fade - - @staticmethod - def _time_event(dt, self): - """The timer callback. - """ - # Don't run old dead events after we've already closed - if not self._is_active: - return - - if self.viewer_flags['record']: - self._record() - if (self.viewer_flags['rotate'] and not - self.viewer_flags['mouse_pressed']): - self._rotate() - - # Manage message opacity - if self._message_text is not None: - if self._message_opac > 1.0: - self._message_opac -= 1.0 - else: - self._message_opac *= 0.90 - if self._message_opac < 0.05: - self._message_opac = 1.0 + self._ticks_till_fade - self._message_text = None - - if self._should_close: - self.on_close() - else: - self.on_draw() - - def _reset_view(self): - """Reset the view to a good initial state. - - The view is initially along the positive x-axis at a - sufficient distance from the scene. 
- """ - scale = self.scene.scale - if scale == 0.0: - scale = DEFAULT_SCENE_SCALE - centroid = self.scene.centroid - - if self.viewer_flags['view_center'] is not None: - centroid = self.viewer_flags['view_center'] - - self._camera_node.matrix = self._default_camera_pose - self._trackball = Trackball( - self._default_camera_pose, self.viewport_size, scale, centroid - ) - - def _get_save_filename(self, file_exts): - file_types = { - 'png': ('png files', '*.png'), - 'jpg': ('jpeg files', '*.jpg'), - 'gif': ('gif files', '*.gif'), - 'all': ('all files', '*'), - } - filetypes = [file_types[x] for x in file_exts] - try: - root = Tk() - save_dir = self.viewer_flags['save_directory'] - if save_dir is None: - save_dir = os.getcwd() - filename = filedialog.asksaveasfilename( - initialdir=save_dir, title='Select file save location', - filetypes=filetypes - ) - except Exception: - return None - - root.destroy() - if filename == (): - return None - return filename - - def _save_image(self): - filename = self._get_save_filename(['png', 'jpg', 'gif', 'all']) - if filename is not None: - self.viewer_flags['save_directory'] = os.path.dirname(filename) - imageio.imwrite(filename, self._renderer.read_color_buf()) - - def _record(self): - """Save another frame for the GIF. - """ - data = self._renderer.read_color_buf() - if not np.all(data == 0.0): - self._saved_frames.append(data) - - def _rotate(self): - """Animate the scene by rotating the camera. - """ - az = (self.viewer_flags['rotate_rate'] / - self.viewer_flags['refresh_rate']) - self._trackball.rotate(az, self.viewer_flags['rotate_axis']) - - def _render(self): - """Render the scene into the framebuffer and flip. - """ - scene = self.scene - self._camera_node.matrix = self._trackball.pose.copy() - - # Set lighting - vli = self.viewer_flags['lighting_intensity'] - if self.viewer_flags['use_raymond_lighting']: - for n in self._raymond_lights: - n.light.intensity = vli / 3.0 - if not self.scene.has_node(n): - scene.add_node(n, parent_node=self._camera_node) - else: - self._direct_light.light.intensity = vli - for n in self._raymond_lights: - if self.scene.has_node(n): - self.scene.remove_node(n) - - if self.viewer_flags['use_direct_lighting']: - if not self.scene.has_node(self._direct_light): - scene.add_node( - self._direct_light, parent_node=self._camera_node - ) - elif self.scene.has_node(self._direct_light): - self.scene.remove_node(self._direct_light) - - flags = RenderFlags.NONE - if self.render_flags['flip_wireframe']: - flags |= RenderFlags.FLIP_WIREFRAME - elif self.render_flags['all_wireframe']: - flags |= RenderFlags.ALL_WIREFRAME - elif self.render_flags['all_solid']: - flags |= RenderFlags.ALL_SOLID - - if self.render_flags['shadows']: - flags |= RenderFlags.SHADOWS_DIRECTIONAL | RenderFlags.SHADOWS_SPOT - if self.render_flags['vertex_normals']: - flags |= RenderFlags.VERTEX_NORMALS - if self.render_flags['face_normals']: - flags |= RenderFlags.FACE_NORMALS - if not self.render_flags['cull_faces']: - flags |= RenderFlags.SKIP_CULL_FACES - - self._renderer.render(self.scene, flags) - - def _init_and_start_app(self): - # Try multiple configs starting with target OpenGL version - # and multisampling and removing these options if exception - # Note: multisampling not available on all hardware - from pyglet.gl import Config - confs = [Config(sample_buffers=1, samples=4, - depth_size=24, - double_buffer=True, - major_version=TARGET_OPEN_GL_MAJOR, - minor_version=TARGET_OPEN_GL_MINOR), - Config(depth_size=24, - double_buffer=True, - 
major_version=TARGET_OPEN_GL_MAJOR, - minor_version=TARGET_OPEN_GL_MINOR), - Config(sample_buffers=1, samples=4, - depth_size=24, - double_buffer=True, - major_version=MIN_OPEN_GL_MAJOR, - minor_version=MIN_OPEN_GL_MINOR), - Config(depth_size=24, - double_buffer=True, - major_version=MIN_OPEN_GL_MAJOR, - minor_version=MIN_OPEN_GL_MINOR)] - for conf in confs: - try: - super(Viewer, self).__init__(config=conf, resizable=True, - width=self._viewport_size[0], - height=self._viewport_size[1]) - break - except pyglet.window.NoSuchConfigException: - pass - - if not self.context: - raise ValueError('Unable to initialize an OpenGL 3+ context') - clock.schedule_interval( - Viewer._time_event, 1.0 / self.viewer_flags['refresh_rate'], self - ) - self.switch_to() - self.set_caption(self.viewer_flags['window_title']) - pyglet.app.run() - - def _compute_initial_camera_pose(self): - centroid = self.scene.centroid - if self.viewer_flags['view_center'] is not None: - centroid = self.viewer_flags['view_center'] - scale = self.scene.scale - if scale == 0.0: - scale = DEFAULT_SCENE_SCALE - - s2 = 1.0 / np.sqrt(2.0) - cp = np.eye(4) - cp[:3,:3] = np.array([ - [0.0, -s2, s2], - [1.0, 0.0, 0.0], - [0.0, s2, s2] - ]) - hfov = np.pi / 6.0 - dist = scale / (2.0 * np.tan(hfov)) - cp[:3,3] = dist * np.array([1.0, 0.0, 1.0]) + centroid - - return cp - - def _create_raymond_lights(self): - thetas = np.pi * np.array([1.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0]) - phis = np.pi * np.array([0.0, 2.0 / 3.0, 4.0 / 3.0]) - - nodes = [] - - for phi, theta in zip(phis, thetas): - xp = np.sin(theta) * np.cos(phi) - yp = np.sin(theta) * np.sin(phi) - zp = np.cos(theta) - - z = np.array([xp, yp, zp]) - z = z / np.linalg.norm(z) - x = np.array([-z[1], z[0], 0.0]) - if np.linalg.norm(x) == 0: - x = np.array([1.0, 0.0, 0.0]) - x = x / np.linalg.norm(x) - y = np.cross(z, x) - - matrix = np.eye(4) - matrix[:3,:3] = np.c_[x,y,z] - nodes.append(Node( - light=DirectionalLight(color=np.ones(3), intensity=1.0), - matrix=matrix - )) - - return nodes - - def _create_direct_light(self): - light = DirectionalLight(color=np.ones(3), intensity=1.0) - n = Node(light=light, matrix=np.eye(4)) - return n - - def _set_axes(self, world, mesh): - scale = self.scene.scale - if world: - if 'scene' not in self._axes: - n = Node(mesh=self._axis_mesh, scale=np.ones(3) * scale * 0.3) - self.scene.add_node(n) - self._axes['scene'] = n - else: - if 'scene' in self._axes: - self.scene.remove_node(self._axes['scene']) - self._axes.pop('scene') - - if mesh: - old_nodes = [] - existing_axes = set([self._axes[k] for k in self._axes]) - for node in self.scene.mesh_nodes: - if node not in existing_axes: - old_nodes.append(node) - - for node in old_nodes: - if node in self._axes: - continue - n = Node( - mesh=self._axis_mesh, - scale=np.ones(3) * node.mesh.scale * 0.5 - ) - self.scene.add_node(n, parent_node=node) - self._axes[node] = n - else: - to_remove = set() - for main_node in self._axes: - if main_node in self.scene.mesh_nodes: - self.scene.remove_node(self._axes[main_node]) - to_remove.add(main_node) - for main_node in to_remove: - self._axes.pop(main_node) - - def _remove_axes(self): - for main_node in self._axes: - axis_node = self._axes[main_node] - self.scene.remove_node(axis_node) - self._axes = {} - - def _location_to_x_y(self, location): - if location == TextAlign.CENTER: - return (self.viewport_size[0] / 2.0, self.viewport_size[1] / 2.0) - elif location == TextAlign.CENTER_LEFT: - return (TEXT_PADDING, self.viewport_size[1] / 2.0) - elif location == 
TextAlign.CENTER_RIGHT: - return (self.viewport_size[0] - TEXT_PADDING, - self.viewport_size[1] / 2.0) - elif location == TextAlign.BOTTOM_LEFT: - return (TEXT_PADDING, TEXT_PADDING) - elif location == TextAlign.BOTTOM_RIGHT: - return (self.viewport_size[0] - TEXT_PADDING, TEXT_PADDING) - elif location == TextAlign.BOTTOM_CENTER: - return (self.viewport_size[0] / 2.0, TEXT_PADDING) - elif location == TextAlign.TOP_LEFT: - return (TEXT_PADDING, self.viewport_size[1] - TEXT_PADDING) - elif location == TextAlign.TOP_RIGHT: - return (self.viewport_size[0] - TEXT_PADDING, - self.viewport_size[1] - TEXT_PADDING) - elif location == TextAlign.TOP_CENTER: - return (self.viewport_size[0] / 2.0, - self.viewport_size[1] - TEXT_PADDING) - - -__all__ = ['Viewer'] diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/common-list.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/common-list.go deleted file mode 100644 index d85d4e9009ccb29cef5bcf0ba1b5865b0d9af860..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/common-list.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/roipoint_pool3d.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/roipoint_pool3d.py deleted file mode 100644 index 0a21412c0728431c04b84245bc2e3109eea9aefc..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/roipoint_pool3d.py +++ /dev/null @@ -1,77 +0,0 @@ -from torch import nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['roipoint_pool3d_forward']) - - -class RoIPointPool3d(nn.Module): - """Encode the geometry-specific features of each 3D proposal. - - Please refer to `Paper of PartA2 `_ - for more details. - - Args: - num_sampled_points (int, optional): Number of samples in each roi. - Default: 512. - """ - - def __init__(self, num_sampled_points=512): - super().__init__() - self.num_sampled_points = num_sampled_points - - def forward(self, points, point_features, boxes3d): - """ - Args: - points (torch.Tensor): Input points whose shape is (B, N, C). - point_features (torch.Tensor): Features of input points whose shape - is (B, N, C). - boxes3d (B, M, 7), Input bounding boxes whose shape is (B, M, 7). - - Returns: - pooled_features (torch.Tensor): The output pooled features whose - shape is (B, M, 512, 3 + C). - pooled_empty_flag (torch.Tensor): Empty flag whose shape is (B, M). - """ - return RoIPointPool3dFunction.apply(points, point_features, boxes3d, - self.num_sampled_points) - - -class RoIPointPool3dFunction(Function): - - @staticmethod - def forward(ctx, points, point_features, boxes3d, num_sampled_points=512): - """ - Args: - points (torch.Tensor): Input points whose shape is (B, N, C). - point_features (torch.Tensor): Features of input points whose shape - is (B, N, C). - boxes3d (B, M, 7), Input bounding boxes whose shape is (B, M, 7). - num_sampled_points (int, optional): The num of sampled points. - Default: 512. - - Returns: - pooled_features (torch.Tensor): The output pooled features whose - shape is (B, M, 512, 3 + C). - pooled_empty_flag (torch.Tensor): Empty flag whose shape is (B, M). 
- """ - assert len(points.shape) == 3 and points.shape[2] == 3 - batch_size, boxes_num, feature_len = points.shape[0], boxes3d.shape[ - 1], point_features.shape[2] - pooled_boxes3d = boxes3d.view(batch_size, -1, 7) - pooled_features = point_features.new_zeros( - (batch_size, boxes_num, num_sampled_points, 3 + feature_len)) - pooled_empty_flag = point_features.new_zeros( - (batch_size, boxes_num)).int() - - ext_module.roipoint_pool3d_forward(points.contiguous(), - pooled_boxes3d.contiguous(), - point_features.contiguous(), - pooled_features, pooled_empty_flag) - - return pooled_features, pooled_empty_flag - - @staticmethod - def backward(ctx, grad_out): - raise NotImplementedError diff --git a/spaces/PixelistStudio/3dart-Models/app.py b/spaces/PixelistStudio/3dart-Models/app.py deleted file mode 100644 index d8b18abedf000e2e0e89c5ea297d71fd2dfe9054..0000000000000000000000000000000000000000 --- a/spaces/PixelistStudio/3dart-Models/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path - -models = [ - {"name": "Art", "url": "DucHaiten/DucHaitenAIart"}, - {"name": "Classic Anime", "url": "DucHaiten/DH_ClassicAnime"}, - {"name": "Dream World", "url": "DucHaiten/DucHaitenDreamWorld"}, - {"name": "Journey", "url": "DucHaiten/DucHaitenJourney"}, - {"name": "Style Like Me", "url": "DucHaiten/DucHaiten-StyleLikeMe"}, - {"name": "Super Cute", "url": "DucHaiten/DucHaitenSuperCute"}, - {"name": "Dark Side", "url": "DucHaiten/DucHaitenDarkside"}, - {"name": "Animated", "url": "DucHaiten/DucHaitenAnimated"}, - {"name": "RedShift 765", "url": "nitrosocke/redshift-diffusion-768"}, - {"name": "RedShift", "url": "nitrosocke/redshift-diffusion"}, - {"name": "Ghibli Studio", "url": "nitrosocke/Ghibli-Diffusion"}, - {"name": "Nitro", "url": "nitrosocke/Nitro-Diffusion"}, - {"name": "Classic Animation", "url": "nitrosocke/classic-anim-diffusion"}, - {"name": "Nitro Socke", "url": "nitrosocke/mo-di-diffusion"}, - {"name": "Archer", "url": "nitrosocke/archer-diffusion"}, - {"name": "Spider Verse", "url": "nitrosocke/spider-verse-diffusion"}, - {"name": "Elden Ring", "url": "nitrosocke/elden-ring-diffusion"}, - {"name": "Arcane", "url": "nitrosocke/Arcane-Diffusion"}, - {"name": "Future", "url": "nitrosocke/Future-Diffusion"}, - {"name": "Pixar", "url": "ainz/diseny-pixar"}, -] - -current_model = models[0] - -text_gen = gr.Interface.load("spaces/PixelistStudio/prompt-extend") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(value=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -with gr.Blocks() as myface: - gr.HTML( - - ) - - with gr.Row(): - with gr.Row(): - input_text = gr.Textbox(label="ایده درخواستی", placeholder="", lines=1) - # Model selection dropdown - model_name1 = gr.Dropdown( - label="نوع رندر", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - ) - with gr.Row(): - see_prompts = gr.Button("تولید خودکار شرح درخواست") - run = gr.Button("تولید تصاویر", variant="primary") - - with gr.Row(): - output1 = gr.Image(label="") - output2 = gr.Image(label="") - output3 = gr.Image(label="") - with 
gr.Row(): - magic1 = gr.Textbox(label="شرح درخواست ۳", lines=2) - magic2 = gr.Textbox(label="شرح درخواست ۲", lines=2) - magic3 = gr.Textbox(label="شرح درخواست ۱", lines=2) - with gr.Row(): - output4 = gr.Image(label="") - output5 = gr.Image(label="") - output6 = gr.Image(label="") - with gr.Row(): - magic4 = gr.Textbox(label="شرح درخواست ۶", lines=2) - magic5 = gr.Textbox(label="شرح درخواست ۵", lines=2) - magic6 = gr.Textbox(label="شرح درخواست ۴", lines=2) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - run.click(send_it, inputs=[magic4, model_name1], outputs=[output4]) - run.click(send_it, inputs=[magic5, model_name1], outputs=[output5]) - run.click(send_it, inputs=[magic6, model_name1], outputs=[output6]) - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic4]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic5]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic6]) - -myface.queue(concurrency_count=200) -myface.launch(inline=True, show_api=False, max_threads=400) \ No newline at end of file diff --git a/spaces/Pranjal-y/data_scraping_analysis/data_info.py b/spaces/Pranjal-y/data_scraping_analysis/data_info.py deleted file mode 100644 index 53b2a6b2a34f9ef112f908410de977da3779a10c..0000000000000000000000000000000000000000 --- a/spaces/Pranjal-y/data_scraping_analysis/data_info.py +++ /dev/null @@ -1,57 +0,0 @@ -import scrapy -import csv -from urllib.parse import urlparse - -class DataInfoSpider(scrapy.Spider): - name = 'data_info' - allowed_domains = [] # We will set this dynamically - - def __init__(self, url=None, tags=None, num_columns=None, column_headings=None, *args, **kwargs): - super(DataInfoSpider, self).__init__(*args, **kwargs) - if url: - self.start_urls = [url] - parsed_url = urlparse(url) - self.allowed_domains = [parsed_url.netloc] - if parsed_url.scheme: - self.allowed_domains.append(parsed_url.scheme) # Add the scheme as an allowed domain - - self.tags = tags.split(',') if tags else [] # Convert to a list - self.num_columns = int(num_columns) - self.column_headings = column_headings.split(',') if column_headings else [] # Convert to a list - - - def parse(self, response): - css_selectors = [f'{tag}::text' for tag in self.tags] - - # Extract column headings - column_headings = self.column_headings - - # Extract data items - data_items = response.css(','.join(css_selectors)).getall() - - csv_file_path = 'data_ret.csv' - - with open(csv_file_path, 'w', newline='', encoding='utf-8') as csvfile: - writer = csv.writer(csvfile) - writer.writerow(column_headings) # Write column headings - - for i in range(0, len(data_items), self.num_columns): - row_data = data_items[i:i + self.num_columns] - writer.writerow(row_data) - - return {'message': 'CSV file generated successfully.'} - - - - - - - - - - - - - - - diff --git "a/spaces/Quickturtle005/mothership_hca/pages/M\303\245nedsrapport.py" "b/spaces/Quickturtle005/mothership_hca/pages/M\303\245nedsrapport.py" deleted file mode 100644 index 
726f0d0a55bbad68cd32b110a87812f40774aa5d..0000000000000000000000000000000000000000 --- "a/spaces/Quickturtle005/mothership_hca/pages/M\303\245nedsrapport.py" +++ /dev/null @@ -1,11 +0,0 @@ -import streamlit as st -import os -st.text(f'Running in {os.getcwd()}') - -st.title("Månedsrapport") -st.markdown("This program is developed for the HCA organization. It is meant to help Data & Analytcs, to generate the monthly data slide - included in the monthly report, or as we say in DK 'Månedsrapporten'. Legends say that this is the most important sldie of the whole presentation. Follow the instructions below to get started.") - -date_event = st.date_input('Select month for report') -benchmarks = st.multiselect('Select elements for data-report', ['Linkedin', 'Youtube', 'Twitter', 'Facebook', 'Agillic'], ['Linkedin', 'Youtube', 'Twitter', 'Facebook', 'Agillic']) -if benchmarks: - button_first = st.button('Generate report') \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537227KB.py deleted file mode 100644 index a1bb530e006482704f234c2e739a695174142941..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537227KB.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -import numpy as np -from torch import nn -import torch.nn.functional as F - -from . import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - 
mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/RahulSinghPundir/Sentiment-Analysis/README.md b/spaces/RahulSinghPundir/Sentiment-Analysis/README.md deleted file mode 100644 index 574dbacffd0e8795537ef09875d640be2ff6325e..0000000000000000000000000000000000000000 --- a/spaces/RahulSinghPundir/Sentiment-Analysis/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sentiment Analysis -emoji: 🏆 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RamAnanth1/REaLTabFormer/README.md b/spaces/RamAnanth1/REaLTabFormer/README.md deleted file mode 100644 index 2463b612c06a81b06bb06038be473826c8343509..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/REaLTabFormer/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: REaLTabFormer -emoji: 🌖 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -tags: -- making-demos ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RamAnanth1/REaLTabFormer/app.py b/spaces/RamAnanth1/REaLTabFormer/app.py deleted file mode 100644 index 17d22d0d9b744eec08d216b4a63f44776fbc5cd1..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/REaLTabFormer/app.py +++ /dev/null @@ -1,113 +0,0 @@ -import gradio as gr -import pandas as pd -from realtabformer import REaLTabFormer -from scipy.io import arff -import os - -rtf_model = REaLTabFormer( - model_type="tabular", - epochs=25, # Default is 200 - gradient_accumulation_steps=4) - - -def generate_data(file, num_samples): - if '.arff' in file.name: - data = arff.loadarff(open(file.name,'rt')) - df = pd.DataFrame(data[0]) - elif '.csv' in file.name: - df = pd.read_csv(file.name) - rtf_model.fit(df, num_bootstrap=10) # Default is 500 - # Generate synthetic data - samples = rtf_model.sample(n_samples=num_samples) - - return samples - -def generate_relational_data(parent_file, child_file, join_on): - parent_df = pd.read_csv(parent_file.name) - child_df = pd.read_csv(child_file.name) - - #Make sure join_on column exists in both - assert ((join_on in parent_df.columns) and - (join_on in child_df.columns)) - - rtf_model.fit(parent_df.drop(join_on, axis=1), num_bootstrap=100) - - pdir = Path("rtf_parent/") - rtf_model.save(pdir) - - # # Get the most recently saved parent model, - # # or a specify some other saved model. 
- # parent_model_path = pdir / "idXXX" - parent_model_path = sorted([ - p for p in pdir.glob("id*") if p.is_dir()], - key=os.path.getmtime)[-1] - - child_model = REaLTabFormer( - model_type="relational", - parent_realtabformer_path=parent_model_path, - epochs = 25, - output_max_length=None, - train_size=0.8) - - child_model.fit( - df=child_df, - in_df=parent_df, - join_on=join_on, - num_bootstrap=10) - - # Generate parent samples. - parent_samples = rtf_model.sample(5) - - # Create the unique ids based on the index. - parent_samples.index.name = join_on - parent_samples = parent_samples.reset_index() - - # Generate the relational observations. - child_samples = child_model.sample( - input_unique_ids=parent_samples[join_on], - input_df=parent_samples.drop(join_on, axis=1), - gen_batch=5) - - return parent_samples, child_samples, gr.update(visible = True) - - -with gr.Blocks() as demo: - gr.Markdown(""" - ## REaLTabFormer: Generating Realistic Relational and Tabular Data using Transformers - """) - gr.HTML(''' -

      - This is an unofficial demo for REaLTabFormer, an approach that can be used to generate realistic synthetic data from a single tabular dataset using GPT. The demo is based on the GitHub implementation provided by the authors. -

      - ''') - gr.HTML(''' -

      - ''') - - with gr.Column(): - - with gr.Tab("Upload Data as File: Tabular Data"): - data_input_u = gr.File(label = 'Upload Data File (Currently supports CSV and ARFF)', file_types=[".csv", ".arff"]) - num_samples = gr.Slider(label="Number of Samples", minimum=5, maximum=100, value=5, step=10) - generate_data_btn = gr.Button('Generate Synthetic Data') - - with gr.Tab("Upload Data as File: Relational Data"): - data_input_parent = gr.File(label = 'Upload Data File for Parent Dataset', file_types=[ ".csv"]) - data_input_child = gr.File(label = 'Upload Data File for Child Dataset', file_types=[ ".csv"]) - join_on = gr.Textbox(label = 'Column name to join on') - - generate_data_btn_relational = gr.Button('Generate Synthetic Data') - - with gr.Row(): - #data_sample = gr.Dataframe(label = "Original Data") - data_output = gr.Dataframe(label = "Synthetic Data") - with gr.Row(visible = False) as child_sample: - data_output_child = gr.Dataframe(label = "Synthetic Data for Child Dataset") - - - generate_data_btn.click(generate_data, inputs = [data_input_u,num_samples], outputs = [data_output]) - generate_data_btn_relational.click(generate_relational_data, inputs = [data_input_parent,data_input_child,join_on], outputs = [data_output, data_output_child, child_sample]) - examples = gr.Examples(examples=[['diabetes.arff',5], ["titanic.csv", 15]],inputs = [data_input_u,num_samples], outputs = [data_output], cache_examples = True, fn = generate_data) - - -demo.launch() \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_dists.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_dists.py deleted file mode 100644 index 65c043c87eff27e9405316fdbc0c695f2b347441..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_dists.py +++ /dev/null @@ -1,224 +0,0 @@ -import email.message -import importlib.metadata -import os -import pathlib -import zipfile -from typing import ( - Collection, - Dict, - Iterable, - Iterator, - Mapping, - Optional, - Sequence, - cast, -) - -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.exceptions import InvalidWheel, UnsupportedWheel -from pip._internal.metadata.base import ( - BaseDistribution, - BaseEntryPoint, - DistributionVersion, - InfoPath, - Wheel, -) -from pip._internal.utils.misc import normalize_path -from pip._internal.utils.packaging import safe_extra -from pip._internal.utils.temp_dir import TempDirectory -from pip._internal.utils.wheel import parse_wheel, read_wheel_metadata_file - -from ._compat import BasePath, get_dist_name - - -class WheelDistribution(importlib.metadata.Distribution): - """An ``importlib.metadata.Distribution`` read from a wheel. - - Although ``importlib.metadata.PathDistribution`` accepts ``zipfile.Path``, - its implementation is too "lazy" for pip's needs (we can't keep the ZipFile - handle open for the entire lifetime of the distribution object). - - This implementation eagerly reads the entire metadata directory into the - memory instead, and operates from that. 
- """ - - def __init__( - self, - files: Mapping[pathlib.PurePosixPath, bytes], - info_location: pathlib.PurePosixPath, - ) -> None: - self._files = files - self.info_location = info_location - - @classmethod - def from_zipfile( - cls, - zf: zipfile.ZipFile, - name: str, - location: str, - ) -> "WheelDistribution": - info_dir, _ = parse_wheel(zf, name) - paths = ( - (name, pathlib.PurePosixPath(name.split("/", 1)[-1])) - for name in zf.namelist() - if name.startswith(f"{info_dir}/") - ) - files = { - relpath: read_wheel_metadata_file(zf, fullpath) - for fullpath, relpath in paths - } - info_location = pathlib.PurePosixPath(location, info_dir) - return cls(files, info_location) - - def iterdir(self, path: InfoPath) -> Iterator[pathlib.PurePosixPath]: - # Only allow iterating through the metadata directory. - if pathlib.PurePosixPath(str(path)) in self._files: - return iter(self._files) - raise FileNotFoundError(path) - - def read_text(self, filename: str) -> Optional[str]: - try: - data = self._files[pathlib.PurePosixPath(filename)] - except KeyError: - return None - try: - text = data.decode("utf-8") - except UnicodeDecodeError as e: - wheel = self.info_location.parent - error = f"Error decoding metadata for {wheel}: {e} in {filename} file" - raise UnsupportedWheel(error) - return text - - -class Distribution(BaseDistribution): - def __init__( - self, - dist: importlib.metadata.Distribution, - info_location: Optional[BasePath], - installed_location: Optional[BasePath], - ) -> None: - self._dist = dist - self._info_location = info_location - self._installed_location = installed_location - - @classmethod - def from_directory(cls, directory: str) -> BaseDistribution: - info_location = pathlib.Path(directory) - dist = importlib.metadata.Distribution.at(info_location) - return cls(dist, info_location, info_location.parent) - - @classmethod - def from_metadata_file_contents( - cls, - metadata_contents: bytes, - filename: str, - project_name: str, - ) -> BaseDistribution: - # Generate temp dir to contain the metadata file, and write the file contents. - temp_dir = pathlib.Path( - TempDirectory(kind="metadata", globally_managed=True).path - ) - metadata_path = temp_dir / "METADATA" - metadata_path.write_bytes(metadata_contents) - # Construct dist pointing to the newly created directory. - dist = importlib.metadata.Distribution.at(metadata_path.parent) - return cls(dist, metadata_path.parent, None) - - @classmethod - def from_wheel(cls, wheel: Wheel, name: str) -> BaseDistribution: - try: - with wheel.as_zipfile() as zf: - dist = WheelDistribution.from_zipfile(zf, name, wheel.location) - except zipfile.BadZipFile as e: - raise InvalidWheel(wheel.location, name) from e - except UnsupportedWheel as e: - raise UnsupportedWheel(f"{name} has an invalid wheel, {e}") - return cls(dist, dist.info_location, pathlib.PurePosixPath(wheel.location)) - - @property - def location(self) -> Optional[str]: - if self._info_location is None: - return None - return str(self._info_location.parent) - - @property - def info_location(self) -> Optional[str]: - if self._info_location is None: - return None - return str(self._info_location) - - @property - def installed_location(self) -> Optional[str]: - if self._installed_location is None: - return None - return normalize_path(str(self._installed_location)) - - def _get_dist_name_from_location(self) -> Optional[str]: - """Try to get the name from the metadata directory name. - - This is much faster than reading metadata. 
- """ - if self._info_location is None: - return None - stem, suffix = os.path.splitext(self._info_location.name) - if suffix not in (".dist-info", ".egg-info"): - return None - return stem.split("-", 1)[0] - - @property - def canonical_name(self) -> NormalizedName: - name = self._get_dist_name_from_location() or get_dist_name(self._dist) - return canonicalize_name(name) - - @property - def version(self) -> DistributionVersion: - return parse_version(self._dist.version) - - def is_file(self, path: InfoPath) -> bool: - return self._dist.read_text(str(path)) is not None - - def iter_distutils_script_names(self) -> Iterator[str]: - # A distutils installation is always "flat" (not in e.g. egg form), so - # if this distribution's info location is NOT a pathlib.Path (but e.g. - # zipfile.Path), it can never contain any distutils scripts. - if not isinstance(self._info_location, pathlib.Path): - return - for child in self._info_location.joinpath("scripts").iterdir(): - yield child.name - - def read_text(self, path: InfoPath) -> str: - content = self._dist.read_text(str(path)) - if content is None: - raise FileNotFoundError(path) - return content - - def iter_entry_points(self) -> Iterable[BaseEntryPoint]: - # importlib.metadata's EntryPoint structure sasitfies BaseEntryPoint. - return self._dist.entry_points - - def _metadata_impl(self) -> email.message.Message: - # From Python 3.10+, importlib.metadata declares PackageMetadata as the - # return type. This protocol is unfortunately a disaster now and misses - # a ton of fields that we need, including get() and get_payload(). We - # rely on the implementation that the object is actually a Message now, - # until upstream can improve the protocol. (python/cpython#94952) - return cast(email.message.Message, self._dist.metadata) - - def iter_provided_extras(self) -> Iterable[str]: - return ( - safe_extra(extra) for extra in self.metadata.get_all("Provides-Extra", []) - ) - - def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]: - contexts: Sequence[Dict[str, str]] = [{"extra": safe_extra(e)} for e in extras] - for req_string in self.metadata.get_all("Requires-Dist", []): - req = Requirement(req_string) - if not req.marker: - yield req - elif not extras and req.marker.evaluate({"extra": ""}): - yield req - elif any(req.marker.evaluate(context) for context in contexts): - yield req diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/packaging/_manylinux.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/packaging/_manylinux.py deleted file mode 100644 index 4c379aa6f69ff56c8f19612002c6e3e939ea6012..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/packaging/_manylinux.py +++ /dev/null @@ -1,301 +0,0 @@ -import collections -import functools -import os -import re -import struct -import sys -import warnings -from typing import IO, Dict, Iterator, NamedTuple, Optional, Tuple - - -# Python does not provide platform information at sufficient granularity to -# identify the architecture of the running executable in some cases, so we -# determine it dynamically by reading the information from the running -# process. This only applies on Linux, which uses the ELF format. -class _ELFFileHeader: - # https://en.wikipedia.org/wiki/Executable_and_Linkable_Format#File_header - class _InvalidELFFileHeader(ValueError): - """ - An invalid ELF file header was found. 
- """ - - ELF_MAGIC_NUMBER = 0x7F454C46 - ELFCLASS32 = 1 - ELFCLASS64 = 2 - ELFDATA2LSB = 1 - ELFDATA2MSB = 2 - EM_386 = 3 - EM_S390 = 22 - EM_ARM = 40 - EM_X86_64 = 62 - EF_ARM_ABIMASK = 0xFF000000 - EF_ARM_ABI_VER5 = 0x05000000 - EF_ARM_ABI_FLOAT_HARD = 0x00000400 - - def __init__(self, file: IO[bytes]) -> None: - def unpack(fmt: str) -> int: - try: - data = file.read(struct.calcsize(fmt)) - result: Tuple[int, ...] = struct.unpack(fmt, data) - except struct.error: - raise _ELFFileHeader._InvalidELFFileHeader() - return result[0] - - self.e_ident_magic = unpack(">I") - if self.e_ident_magic != self.ELF_MAGIC_NUMBER: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_class = unpack("B") - if self.e_ident_class not in {self.ELFCLASS32, self.ELFCLASS64}: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_data = unpack("B") - if self.e_ident_data not in {self.ELFDATA2LSB, self.ELFDATA2MSB}: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_version = unpack("B") - self.e_ident_osabi = unpack("B") - self.e_ident_abiversion = unpack("B") - self.e_ident_pad = file.read(7) - format_h = "H" - format_i = "I" - format_q = "Q" - format_p = format_i if self.e_ident_class == self.ELFCLASS32 else format_q - self.e_type = unpack(format_h) - self.e_machine = unpack(format_h) - self.e_version = unpack(format_i) - self.e_entry = unpack(format_p) - self.e_phoff = unpack(format_p) - self.e_shoff = unpack(format_p) - self.e_flags = unpack(format_i) - self.e_ehsize = unpack(format_h) - self.e_phentsize = unpack(format_h) - self.e_phnum = unpack(format_h) - self.e_shentsize = unpack(format_h) - self.e_shnum = unpack(format_h) - self.e_shstrndx = unpack(format_h) - - -def _get_elf_header() -> Optional[_ELFFileHeader]: - try: - with open(sys.executable, "rb") as f: - elf_header = _ELFFileHeader(f) - except (OSError, TypeError, _ELFFileHeader._InvalidELFFileHeader): - return None - return elf_header - - -def _is_linux_armhf() -> bool: - # hard-float ABI can be detected from the ELF header of the running - # process - # https://static.docs.arm.com/ihi0044/g/aaelf32.pdf - elf_header = _get_elf_header() - if elf_header is None: - return False - result = elf_header.e_ident_class == elf_header.ELFCLASS32 - result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB - result &= elf_header.e_machine == elf_header.EM_ARM - result &= ( - elf_header.e_flags & elf_header.EF_ARM_ABIMASK - ) == elf_header.EF_ARM_ABI_VER5 - result &= ( - elf_header.e_flags & elf_header.EF_ARM_ABI_FLOAT_HARD - ) == elf_header.EF_ARM_ABI_FLOAT_HARD - return result - - -def _is_linux_i686() -> bool: - elf_header = _get_elf_header() - if elf_header is None: - return False - result = elf_header.e_ident_class == elf_header.ELFCLASS32 - result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB - result &= elf_header.e_machine == elf_header.EM_386 - return result - - -def _have_compatible_abi(arch: str) -> bool: - if arch == "armv7l": - return _is_linux_armhf() - if arch == "i686": - return _is_linux_i686() - return arch in {"x86_64", "aarch64", "ppc64", "ppc64le", "s390x"} - - -# If glibc ever changes its major version, we need to know what the last -# minor version was, so we can build the complete list of all versions. -# For now, guess what the highest minor version might be, assume it will -# be 50 for testing. Once this actually happens, update the dictionary -# with the actual value. 
-_LAST_GLIBC_MINOR: Dict[int, int] = collections.defaultdict(lambda: 50) - - -class _GLibCVersion(NamedTuple): - major: int - minor: int - - -def _glibc_version_string_confstr() -> Optional[str]: - """ - Primary implementation of glibc_version_string using os.confstr. - """ - # os.confstr is quite a bit faster than ctypes.DLL. It's also less likely - # to be broken or missing. This strategy is used in the standard library - # platform module. - # https://github.com/python/cpython/blob/fcf1d003bf4f0100c/Lib/platform.py#L175-L183 - try: - # os.confstr("CS_GNU_LIBC_VERSION") returns a string like "glibc 2.17". - version_string = os.confstr("CS_GNU_LIBC_VERSION") - assert version_string is not None - _, version = version_string.split() - except (AssertionError, AttributeError, OSError, ValueError): - # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)... - return None - return version - - -def _glibc_version_string_ctypes() -> Optional[str]: - """ - Fallback implementation of glibc_version_string using ctypes. - """ - try: - import ctypes - except ImportError: - return None - - # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen - # manpage says, "If filename is NULL, then the returned handle is for the - # main program". This way we can let the linker do the work to figure out - # which libc our process is actually using. - # - # We must also handle the special case where the executable is not a - # dynamically linked executable. This can occur when using musl libc, - # for example. In this situation, dlopen() will error, leading to an - # OSError. Interestingly, at least in the case of musl, there is no - # errno set on the OSError. The single string argument used to construct - # OSError comes from libc itself and is therefore not portable to - # hard code here. In any case, failure to call dlopen() means we - # can proceed, so we bail on our attempt. - try: - process_namespace = ctypes.CDLL(None) - except OSError: - return None - - try: - gnu_get_libc_version = process_namespace.gnu_get_libc_version - except AttributeError: - # Symbol doesn't exist -> therefore, we are not linked to - # glibc. - return None - - # Call gnu_get_libc_version, which returns a string like "2.5" - gnu_get_libc_version.restype = ctypes.c_char_p - version_str: str = gnu_get_libc_version() - # py2 / py3 compatibility: - if not isinstance(version_str, str): - version_str = version_str.decode("ascii") - - return version_str - - -def _glibc_version_string() -> Optional[str]: - """Returns glibc version string, or None if not using glibc.""" - return _glibc_version_string_confstr() or _glibc_version_string_ctypes() - - -def _parse_glibc_version(version_str: str) -> Tuple[int, int]: - """Parse glibc version. - - We use a regexp instead of str.split because we want to discard any - random junk that might come after the minor version -- this might happen - in patched/forked versions of glibc (e.g. Linaro's version of glibc - uses version strings like "2.20-2014.11"). See gh-3588. 
- """ - m = re.match(r"(?P[0-9]+)\.(?P[0-9]+)", version_str) - if not m: - warnings.warn( - "Expected glibc version with 2 components major.minor," - " got: %s" % version_str, - RuntimeWarning, - ) - return -1, -1 - return int(m.group("major")), int(m.group("minor")) - - -@functools.lru_cache() -def _get_glibc_version() -> Tuple[int, int]: - version_str = _glibc_version_string() - if version_str is None: - return (-1, -1) - return _parse_glibc_version(version_str) - - -# From PEP 513, PEP 600 -def _is_compatible(name: str, arch: str, version: _GLibCVersion) -> bool: - sys_glibc = _get_glibc_version() - if sys_glibc < version: - return False - # Check for presence of _manylinux module. - try: - import _manylinux # noqa - except ImportError: - return True - if hasattr(_manylinux, "manylinux_compatible"): - result = _manylinux.manylinux_compatible(version[0], version[1], arch) - if result is not None: - return bool(result) - return True - if version == _GLibCVersion(2, 5): - if hasattr(_manylinux, "manylinux1_compatible"): - return bool(_manylinux.manylinux1_compatible) - if version == _GLibCVersion(2, 12): - if hasattr(_manylinux, "manylinux2010_compatible"): - return bool(_manylinux.manylinux2010_compatible) - if version == _GLibCVersion(2, 17): - if hasattr(_manylinux, "manylinux2014_compatible"): - return bool(_manylinux.manylinux2014_compatible) - return True - - -_LEGACY_MANYLINUX_MAP = { - # CentOS 7 w/ glibc 2.17 (PEP 599) - (2, 17): "manylinux2014", - # CentOS 6 w/ glibc 2.12 (PEP 571) - (2, 12): "manylinux2010", - # CentOS 5 w/ glibc 2.5 (PEP 513) - (2, 5): "manylinux1", -} - - -def platform_tags(linux: str, arch: str) -> Iterator[str]: - if not _have_compatible_abi(arch): - return - # Oldest glibc to be supported regardless of architecture is (2, 17). - too_old_glibc2 = _GLibCVersion(2, 16) - if arch in {"x86_64", "i686"}: - # On x86/i686 also oldest glibc to be supported is (2, 5). - too_old_glibc2 = _GLibCVersion(2, 4) - current_glibc = _GLibCVersion(*_get_glibc_version()) - glibc_max_list = [current_glibc] - # We can assume compatibility across glibc major versions. - # https://sourceware.org/bugzilla/show_bug.cgi?id=24636 - # - # Build a list of maximum glibc versions so that we can - # output the canonical list of all glibc from current_glibc - # down to too_old_glibc2, including all intermediary versions. - for glibc_major in range(current_glibc.major - 1, 1, -1): - glibc_minor = _LAST_GLIBC_MINOR[glibc_major] - glibc_max_list.append(_GLibCVersion(glibc_major, glibc_minor)) - for glibc_max in glibc_max_list: - if glibc_max.major == too_old_glibc2.major: - min_minor = too_old_glibc2.minor - else: - # For other glibc major versions oldest supported is (x, 0). - min_minor = -1 - for glibc_minor in range(glibc_max.minor, min_minor, -1): - glibc_version = _GLibCVersion(glibc_max.major, glibc_minor) - tag = "manylinux_{}_{}".format(*glibc_version) - if _is_compatible(tag, arch, glibc_version): - yield linux.replace("linux", tag) - # Handle the legacy manylinux1, manylinux2010, manylinux2014 tags. 
- if glibc_version in _LEGACY_MANYLINUX_MAP: - legacy_tag = _LEGACY_MANYLINUX_MAP[glibc_version] - if _is_compatible(legacy_tag, arch, glibc_version): - yield linux.replace("linux", legacy_tag) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/log.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/log.py deleted file mode 100644 index be25f6cabd839af772dd74399c57991c222d3da8..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/log.py +++ /dev/null @@ -1,80 +0,0 @@ -"""A simple log mechanism styled after PEP 282.""" - -# The class here is styled after PEP 282 so that it could later be -# replaced with a standard Python logging implementation. - -import sys - -DEBUG = 1 -INFO = 2 -WARN = 3 -ERROR = 4 -FATAL = 5 - - -class Log: - def __init__(self, threshold=WARN): - self.threshold = threshold - - def _log(self, level, msg, args): - if level not in (DEBUG, INFO, WARN, ERROR, FATAL): - raise ValueError('%s wrong log level' % str(level)) - - if level >= self.threshold: - if args: - msg = msg % args - if level in (WARN, ERROR, FATAL): - stream = sys.stderr - else: - stream = sys.stdout - try: - stream.write('%s\n' % msg) - except UnicodeEncodeError: - # emulate backslashreplace error handler - encoding = stream.encoding - msg = msg.encode(encoding, "backslashreplace").decode(encoding) - stream.write('%s\n' % msg) - stream.flush() - - def log(self, level, msg, *args): - self._log(level, msg, args) - - def debug(self, msg, *args): - self._log(DEBUG, msg, args) - - def info(self, msg, *args): - self._log(INFO, msg, args) - - def warn(self, msg, *args): - self._log(WARN, msg, args) - - def error(self, msg, *args): - self._log(ERROR, msg, args) - - def fatal(self, msg, *args): - self._log(FATAL, msg, args) - - -_global_log = Log() -log = _global_log.log -debug = _global_log.debug -info = _global_log.info -warn = _global_log.warn -error = _global_log.error -fatal = _global_log.fatal - - -def set_threshold(level): - # return the old threshold for use from tests - old = _global_log.threshold - _global_log.threshold = level - return old - - -def set_verbosity(v): - if v <= 0: - set_threshold(WARN) - elif v == 1: - set_threshold(INFO) - elif v >= 2: - set_threshold(DEBUG) diff --git a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/utils/JPEG_utils.py b/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/utils/JPEG_utils.py deleted file mode 100644 index 4ef225505d21728f63d34cec55e5335a50130e17..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/utils/JPEG_utils.py +++ /dev/null @@ -1,82 +0,0 @@ -# Standard libraries -import numpy as np - -# PyTorch -import torch -import torch.nn as nn -import math - -y_table = np.array( - [ - [16, 11, 10, 16, 24, 40, 51, 61], - [12, 12, 14, 19, 26, 58, 60, 55], - [14, 13, 16, 24, 40, 57, 69, 56], - [14, 17, 22, 29, 51, 87, 80, 62], - [18, 22, 37, 56, 68, 109, 103, 77], - [24, 35, 55, 64, 81, 104, 113, 92], - [49, 64, 78, 87, 103, 121, 120, 101], - [72, 92, 95, 98, 112, 100, 103, 99], - ], - dtype=np.float32, -).T - -y_table = nn.Parameter(torch.from_numpy(y_table)) -# -c_table = np.empty((8, 8), dtype=np.float32) -c_table.fill(99) -c_table[:4, :4] = np.array( - [[17, 18, 24, 47], [18, 21, 26, 66], [24, 26, 56, 99], [47, 66, 99, 99]] -).T -c_table = nn.Parameter(torch.from_numpy(c_table)) - - -def 
diff_round_back(x): - """Differentiable rounding function - Input: - x(tensor) - Output: - x(tensor) - """ - return torch.round(x) + (x - torch.round(x)) ** 3 - - -def diff_round(input_tensor): - test = 0 - for n in range(1, 10): - test += math.pow(-1, n + 1) / n * torch.sin(2 * math.pi * n * input_tensor) - final_tensor = input_tensor - 1 / math.pi * test - return final_tensor - - -class Quant(torch.autograd.Function): - @staticmethod - def forward(ctx, input): - input = torch.clamp(input, 0, 1) - output = (input * 255.0).round() / 255.0 - return output - - @staticmethod - def backward(ctx, grad_output): - return grad_output - - -class Quantization(nn.Module): - def __init__(self): - super(Quantization, self).__init__() - - def forward(self, input): - return Quant.apply(input) - - -def quality_to_factor(quality): - """Calculate factor corresponding to quality - Input: - quality(float): Quality for jpeg compression - Output: - factor(float): Compression factor - """ - if quality < 50: - quality = 5000.0 / quality - else: - quality = 200.0 - quality * 2 - return quality / 100.0 diff --git a/spaces/Realcat/image-matching-webui/third_party/SGMNet/superglue/match_model.py b/spaces/Realcat/image-matching-webui/third_party/SGMNet/superglue/match_model.py deleted file mode 100644 index 4a0270dce45a1882397374615156b5310fd181d1..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SGMNet/superglue/match_model.py +++ /dev/null @@ -1,167 +0,0 @@ -import torch -import torch.nn as nn -import time - - -eps = 1e-8 - - -def sinkhorn(M, r, c, iteration): - p = torch.softmax(M, dim=-1) - u = torch.ones_like(r) - v = torch.ones_like(c) - for _ in range(iteration): - u = r / ((p * v.unsqueeze(-2)).sum(-1) + eps) - v = c / ((p * u.unsqueeze(-1)).sum(-2) + eps) - p = p * u.unsqueeze(-1) * v.unsqueeze(-2) - return p - - -def sink_algorithm(M, dustbin, iteration): - M = torch.cat([M, dustbin.expand([M.shape[0], M.shape[1], 1])], dim=-1) - M = torch.cat([M, dustbin.expand([M.shape[0], 1, M.shape[2]])], dim=-2) - r = torch.ones([M.shape[0], M.shape[1] - 1], device="cuda") - r = torch.cat([r, torch.ones([M.shape[0], 1], device="cuda") * M.shape[1]], dim=-1) - c = torch.ones([M.shape[0], M.shape[2] - 1], device="cuda") - c = torch.cat([c, torch.ones([M.shape[0], 1], device="cuda") * M.shape[2]], dim=-1) - p = sinkhorn(M, r, c, iteration) - return p - - -class attention_block(nn.Module): - def __init__(self, channels, head, type): - assert type == "self" or type == "cross", "invalid attention type" - nn.Module.__init__(self) - self.head = head - self.type = type - self.head_dim = channels // head - self.query_filter = nn.Conv1d(channels, channels, kernel_size=1) - self.key_filter = nn.Conv1d(channels, channels, kernel_size=1) - self.value_filter = nn.Conv1d(channels, channels, kernel_size=1) - self.attention_filter = nn.Sequential( - nn.Conv1d(2 * channels, 2 * channels, kernel_size=1), - nn.SyncBatchNorm(2 * channels), - nn.ReLU(), - nn.Conv1d(2 * channels, channels, kernel_size=1), - ) - self.mh_filter = nn.Conv1d(channels, channels, kernel_size=1) - - def forward(self, fea1, fea2): - batch_size, n, m = fea1.shape[0], fea1.shape[2], fea2.shape[2] - query1, key1, value1 = ( - self.query_filter(fea1).view(batch_size, self.head_dim, self.head, -1), - self.key_filter(fea1).view(batch_size, self.head_dim, self.head, -1), - self.value_filter(fea1).view(batch_size, self.head_dim, self.head, -1), - ) - query2, key2, value2 = ( - self.query_filter(fea2).view(batch_size, 
self.head_dim, self.head, -1), - self.key_filter(fea2).view(batch_size, self.head_dim, self.head, -1), - self.value_filter(fea2).view(batch_size, self.head_dim, self.head, -1), - ) - if self.type == "self": - score1, score2 = torch.softmax( - torch.einsum("bdhn,bdhm->bhnm", query1, key1) / self.head_dim**0.5, - dim=-1, - ), torch.softmax( - torch.einsum("bdhn,bdhm->bhnm", query2, key2) / self.head_dim**0.5, - dim=-1, - ) - add_value1, add_value2 = torch.einsum( - "bhnm,bdhm->bdhn", score1, value1 - ), torch.einsum("bhnm,bdhm->bdhn", score2, value2) - else: - score1, score2 = torch.softmax( - torch.einsum("bdhn,bdhm->bhnm", query1, key2) / self.head_dim**0.5, - dim=-1, - ), torch.softmax( - torch.einsum("bdhn,bdhm->bhnm", query2, key1) / self.head_dim**0.5, - dim=-1, - ) - add_value1, add_value2 = torch.einsum( - "bhnm,bdhm->bdhn", score1, value2 - ), torch.einsum("bhnm,bdhm->bdhn", score2, value1) - add_value1, add_value2 = self.mh_filter( - add_value1.contiguous().view(batch_size, self.head * self.head_dim, n) - ), self.mh_filter( - add_value2.contiguous().view(batch_size, self.head * self.head_dim, m) - ) - fea11, fea22 = torch.cat([fea1, add_value1], dim=1), torch.cat( - [fea2, add_value2], dim=1 - ) - fea1, fea2 = fea1 + self.attention_filter(fea11), fea2 + self.attention_filter( - fea22 - ) - - return fea1, fea2 - - -class matcher(nn.Module): - def __init__(self, config): - nn.Module.__init__(self) - self.use_score_encoding = config.use_score_encoding - self.layer_num = config.layer_num - self.sink_iter = config.sink_iter - self.position_encoder = nn.Sequential( - nn.Conv1d(3, 32, kernel_size=1) - if config.use_score_encoding - else nn.Conv1d(2, 32, kernel_size=1), - nn.SyncBatchNorm(32), - nn.ReLU(), - nn.Conv1d(32, 64, kernel_size=1), - nn.SyncBatchNorm(64), - nn.ReLU(), - nn.Conv1d(64, 128, kernel_size=1), - nn.SyncBatchNorm(128), - nn.ReLU(), - nn.Conv1d(128, 256, kernel_size=1), - nn.SyncBatchNorm(256), - nn.ReLU(), - nn.Conv1d(256, config.net_channels, kernel_size=1), - ) - - self.dustbin = nn.Parameter(torch.tensor(1, dtype=torch.float32, device="cuda")) - self.self_attention_block = nn.Sequential( - *[ - attention_block(config.net_channels, config.head, "self") - for _ in range(config.layer_num) - ] - ) - self.cross_attention_block = nn.Sequential( - *[ - attention_block(config.net_channels, config.head, "cross") - for _ in range(config.layer_num) - ] - ) - self.final_project = nn.Conv1d( - config.net_channels, config.net_channels, kernel_size=1 - ) - - def forward(self, data, test_mode=True): - desc1, desc2 = data["desc1"], data["desc2"] - desc1, desc2 = torch.nn.functional.normalize( - desc1, dim=-1 - ), torch.nn.functional.normalize(desc2, dim=-1) - desc1, desc2 = desc1.transpose(1, 2), desc2.transpose(1, 2) - if test_mode: - encode_x1, encode_x2 = data["x1"], data["x2"] - else: - encode_x1, encode_x2 = data["aug_x1"], data["aug_x2"] - if not self.use_score_encoding: - encode_x1, encode_x2 = encode_x1[:, :, :2], encode_x2[:, :, :2] - - encode_x1, encode_x2 = encode_x1.transpose(1, 2), encode_x2.transpose(1, 2) - - x1_pos_embedding, x2_pos_embedding = self.position_encoder( - encode_x1 - ), self.position_encoder(encode_x2) - aug_desc1, aug_desc2 = x1_pos_embedding + desc1, x2_pos_embedding + desc2 - for i in range(self.layer_num): - aug_desc1, aug_desc2 = self.self_attention_block[i](aug_desc1, aug_desc2) - aug_desc1, aug_desc2 = self.cross_attention_block[i](aug_desc1, aug_desc2) - - aug_desc1, aug_desc2 = self.final_project(aug_desc1), self.final_project( - aug_desc2 - ) - 
desc_mat = torch.matmul(aug_desc1.transpose(1, 2), aug_desc2) - p = sink_algorithm(desc_mat, self.dustbin, self.sink_iter[0]) - return {"p": p} diff --git a/spaces/Reself/StableVideo/ldm/data/__init__.py b/spaces/Reself/StableVideo/ldm/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ricecake123/RVC-demo/vc_infer_pipeline.py b/spaces/Ricecake123/RVC-demo/vc_infer_pipeline.py deleted file mode 100644 index c6be666c8d980fc6da24bd5e16ac9909d9204a46..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/vc_infer_pipeline.py +++ /dev/null @@ -1,431 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: 
- f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * 
pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if 
t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/swish.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/swish.py deleted file mode 100644 index e2ca8ed7b749413f011ae54aac0cab27e6f0b51f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/swish.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class Swish(nn.Module): - """Swish Module. - - This module applies the swish function: - - .. math:: - Swish(x) = x * Sigmoid(x) - - Returns: - Tensor: The output tensor. - """ - - def __init__(self): - super(Swish, self).__init__() - - def forward(self, x): - return x * torch.sigmoid(x) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/self_attention_block.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/self_attention_block.py deleted file mode 100644 index 440c7b73ee4706fde555595926d63a18d7574acc..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/self_attention_block.py +++ /dev/null @@ -1,159 +0,0 @@ -import torch -from annotator.uniformer.mmcv.cnn import ConvModule, constant_init -from torch import nn as nn -from torch.nn import functional as F - - -class SelfAttentionBlock(nn.Module): - """General self-attention block/non-local block. - - Please refer to https://arxiv.org/abs/1706.03762 for details about key, - query and value. - - Args: - key_in_channels (int): Input channels of key feature. - query_in_channels (int): Input channels of query feature. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - share_key_query (bool): Whether share projection weight between key - and query projection. - query_downsample (nn.Module): Query downsample module. - key_downsample (nn.Module): Key downsample module. - key_query_num_convs (int): Number of convs for key/query projection. - value_num_convs (int): Number of convs for value projection. - matmul_norm (bool): Whether normalize attention map with sqrt of - channels - with_out (bool): Whether use out projection. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, key_in_channels, query_in_channels, channels, - out_channels, share_key_query, query_downsample, - key_downsample, key_query_num_convs, value_out_num_convs, - key_query_norm, value_out_norm, matmul_norm, with_out, - conv_cfg, norm_cfg, act_cfg): - super(SelfAttentionBlock, self).__init__() - if share_key_query: - assert key_in_channels == query_in_channels - self.key_in_channels = key_in_channels - self.query_in_channels = query_in_channels - self.out_channels = out_channels - self.channels = channels - self.share_key_query = share_key_query - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.key_project = self.build_project( - key_in_channels, - channels, - num_convs=key_query_num_convs, - use_conv_module=key_query_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - if share_key_query: - self.query_project = self.key_project - else: - self.query_project = self.build_project( - query_in_channels, - channels, - num_convs=key_query_num_convs, - use_conv_module=key_query_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.value_project = self.build_project( - key_in_channels, - channels if with_out else out_channels, - num_convs=value_out_num_convs, - use_conv_module=value_out_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - if with_out: - self.out_project = self.build_project( - channels, - out_channels, - num_convs=value_out_num_convs, - use_conv_module=value_out_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.out_project = None - - self.query_downsample = query_downsample - self.key_downsample = key_downsample - self.matmul_norm = matmul_norm - - self.init_weights() - - def init_weights(self): - """Initialize weight of later layer.""" - if self.out_project is not None: - if not isinstance(self.out_project, ConvModule): - constant_init(self.out_project, 0) - - def build_project(self, in_channels, channels, num_convs, use_conv_module, - conv_cfg, norm_cfg, act_cfg): - """Build projection layer for key/query/value/out.""" - if use_conv_module: - convs = [ - ConvModule( - in_channels, - channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - ] - for _ in range(num_convs - 1): - convs.append( - ConvModule( - channels, - channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - else: - convs = [nn.Conv2d(in_channels, channels, 1)] - for _ in range(num_convs - 1): - convs.append(nn.Conv2d(channels, channels, 1)) - if len(convs) > 1: - convs = nn.Sequential(*convs) - else: - convs = convs[0] - return convs - - def forward(self, query_feats, key_feats): - """Forward function.""" - batch_size = query_feats.size(0) - query = self.query_project(query_feats) - if self.query_downsample is not None: - query = self.query_downsample(query) - query = query.reshape(*query.shape[:2], -1) - query = query.permute(0, 2, 1).contiguous() - - key = self.key_project(key_feats) - value = self.value_project(key_feats) - if self.key_downsample is not None: - key = self.key_downsample(key) - value = self.key_downsample(value) - key = key.reshape(*key.shape[:2], -1) - value = value.reshape(*value.shape[:2], -1) - value = value.permute(0, 2, 1).contiguous() - - sim_map = torch.matmul(query, key) - if self.matmul_norm: - sim_map = (self.channels**-.5) * sim_map - sim_map = F.softmax(sim_map, dim=-1) - - context = torch.matmul(sim_map, value) - context = context.permute(0, 2, 1).contiguous() - context = 
context.reshape(batch_size, -1, *query_feats.shape[2:]) - if self.out_project is not None: - context = self.out_project(context) - return context diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/dmnet_r50-d8.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/dmnet_r50-d8.py deleted file mode 100644 index d22ba52640bebd805b3b8d07025e276dfb023759..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/dmnet_r50-d8.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DMHead', - in_channels=2048, - in_index=3, - channels=512, - filter_sizes=(1, 3, 5, 7), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=dict(type='SyncBN', requires_grad=True), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/__init__.py deleted file mode 100644 index a0b6b345640a895368ac8a647afef6f24333d90e..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base import LoggerHook -from .dvclive import DvcliveLoggerHook -from .mlflow import MlflowLoggerHook -from .neptune import NeptuneLoggerHook -from .pavi import PaviLoggerHook -from .tensorboard import TensorboardLoggerHook -from .text import TextLoggerHook -from .wandb import WandbLoggerHook - -__all__ = [ - 'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook', - 'TensorboardLoggerHook', 'TextLoggerHook', 'WandbLoggerHook', - 'NeptuneLoggerHook', 'DvcliveLoggerHook' -] diff --git a/spaces/SIGGRAPH2022/DCT-Net/source/facelib/face_landmark.py b/spaces/SIGGRAPH2022/DCT-Net/source/facelib/face_landmark.py deleted file mode 100644 index 063d40c3ae87a362d1ce186b9f177f9c04754b30..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/DCT-Net/source/facelib/face_landmark.py +++ /dev/null @@ -1,154 +0,0 @@ -import cv2 -import numpy as np -import tensorflow as tf - -from .config import config as cfg - -if tf.__version__ >= '2.0': - tf = tf.compat.v1 - - -class FaceLandmark: - - def __init__(self, dir): - self.model_path = dir + '/keypoints.pb' - self.min_face = 60 - self.keypoint_num = cfg.KEYPOINTS.p_num * 2 - - self._graph = tf.Graph() - - with self._graph.as_default(): - - self._graph, self._sess = self.init_model(self.model_path) - self.img_input = tf.get_default_graph().get_tensor_by_name( - 'tower_0/images:0') - self.embeddings = tf.get_default_graph().get_tensor_by_name( - 'tower_0/prediction:0') - self.training = tf.get_default_graph().get_tensor_by_name( - 'training_flag:0') - - self.landmark = self.embeddings[:, :self.keypoint_num] - self.headpose = self.embeddings[:, -7:-4] * 90. - self.state = tf.nn.sigmoid(self.embeddings[:, -4:]) - - def __call__(self, img, bboxes): - landmark_result = [] - state_result = [] - for i, bbox in enumerate(bboxes): - landmark, state = self._one_shot_run(img, bbox, i) - if landmark is not None: - landmark_result.append(landmark) - state_result.append(state) - return np.array(landmark_result), np.array(state_result) - - def simple_run(self, cropped_img): - with self._graph.as_default(): - - cropped_img = np.expand_dims(cropped_img, axis=0) - landmark, p, states = self._sess.run( - [self.landmark, self.headpose, self.state], - feed_dict={ - self.img_input: cropped_img, - self.training: False - }) - - return landmark, states - - def _one_shot_run(self, image, bbox, i): - - bbox_width = bbox[2] - bbox[0] - bbox_height = bbox[3] - bbox[1] - if (bbox_width <= self.min_face and bbox_height <= self.min_face): - return None, None - add = int(max(bbox_width, bbox_height)) - bimg = cv2.copyMakeBorder( - image, - add, - add, - add, - add, - borderType=cv2.BORDER_CONSTANT, - value=cfg.DATA.pixel_means) - bbox += add - - one_edge = (1 + 2 * cfg.KEYPOINTS.base_extend_range[0]) * bbox_width - center = [(bbox[0] + bbox[2]) // 2, (bbox[1] + bbox[3]) // 2] - - bbox[0] = center[0] - one_edge // 2 - bbox[1] = center[1] - one_edge // 2 - bbox[2] = center[0] + one_edge // 2 - bbox[3] = center[1] + one_edge // 2 - - bbox = bbox.astype(np.int) - crop_image = bimg[bbox[1]:bbox[3], bbox[0]:bbox[2], :] - h, w, _ = crop_image.shape - crop_image = cv2.resize( - crop_image, - (cfg.KEYPOINTS.input_shape[1], cfg.KEYPOINTS.input_shape[0])) - crop_image = crop_image.astype(np.float32) - - keypoints, state = self.simple_run(crop_image) - - res = keypoints[0][:self.keypoint_num].reshape((-1, 2)) - res[:, 0] = res[:, 0] * w / cfg.KEYPOINTS.input_shape[1] - res[:, 1] = res[:, 1] * h / cfg.KEYPOINTS.input_shape[0] - - landmark = [] - for _index in range(res.shape[0]): - x_y = 
res[_index] - landmark.append([ - int(x_y[0] * cfg.KEYPOINTS.input_shape[0] + bbox[0] - add), - int(x_y[1] * cfg.KEYPOINTS.input_shape[1] + bbox[1] - add) - ]) - - landmark = np.array(landmark, np.float32) - - return landmark, state - - def init_model(self, *args): - - if len(args) == 1: - use_pb = True - pb_path = args[0] - else: - use_pb = False - meta_path = args[0] - restore_model_path = args[1] - - def ini_ckpt(): - graph = tf.Graph() - graph.as_default() - configProto = tf.ConfigProto() - configProto.gpu_options.allow_growth = True - sess = tf.Session(config=configProto) - # load_model(model_path, sess) - saver = tf.train.import_meta_graph(meta_path) - saver.restore(sess, restore_model_path) - - print('Model restred!') - return (graph, sess) - - def init_pb(model_path): - config = tf.ConfigProto() - config.gpu_options.per_process_gpu_memory_fraction = 0.2 - compute_graph = tf.Graph() - compute_graph.as_default() - sess = tf.Session(config=config) - with tf.gfile.GFile(model_path, 'rb') as fid: - graph_def = tf.GraphDef() - graph_def.ParseFromString(fid.read()) - tf.import_graph_def(graph_def, name='') - - # saver = tf.train.Saver(tf.global_variables()) - # saver.save(sess, save_path='./tmp.ckpt') - return (compute_graph, sess) - - if use_pb: - model = init_pb(pb_path) - else: - model = ini_ckpt() - - graph = model[0] - sess = model[1] - - return graph, sess diff --git a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/__init__.py b/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/__init__.py deleted file mode 100644 index 84952a8167bc2975913a6def6b4f027d566552a9..0000000000000000000000000000000000000000 --- a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# init \ No newline at end of file diff --git a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/add_nms.py b/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/add_nms.py deleted file mode 100644 index 0a1f7976a2051d07bb028f9fd68eb52f45234f43..0000000000000000000000000000000000000000 --- a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/add_nms.py +++ /dev/null @@ -1,155 +0,0 @@ -import numpy as np -import onnx -from onnx import shape_inference -try: - import onnx_graphsurgeon as gs -except Exception as e: - print('Import onnx_graphsurgeon failure: %s' % e) - -import logging - -LOGGER = logging.getLogger(__name__) - -class RegisterNMS(object): - def __init__( - self, - onnx_model_path: str, - precision: str = "fp32", - ): - - self.graph = gs.import_onnx(onnx.load(onnx_model_path)) - assert self.graph - LOGGER.info("ONNX graph created successfully") - # Fold constants via ONNX-GS that PyTorch2ONNX may have missed - self.graph.fold_constants() - self.precision = precision - self.batch_size = 1 - def infer(self): - """ - Sanitize the graph by cleaning any unconnected nodes, do a topological resort, - and fold constant inputs values. When possible, run shape inference on the - ONNX graph to determine tensor shapes. 
- """ - for _ in range(3): - count_before = len(self.graph.nodes) - - self.graph.cleanup().toposort() - try: - for node in self.graph.nodes: - for o in node.outputs: - o.shape = None - model = gs.export_onnx(self.graph) - model = shape_inference.infer_shapes(model) - self.graph = gs.import_onnx(model) - except Exception as e: - LOGGER.info(f"Shape inference could not be performed at this time:\n{e}") - try: - self.graph.fold_constants(fold_shapes=True) - except TypeError as e: - LOGGER.error( - "This version of ONNX GraphSurgeon does not support folding shapes, " - f"please upgrade your onnx_graphsurgeon module. Error:\n{e}" - ) - raise - - count_after = len(self.graph.nodes) - if count_before == count_after: - # No new folding occurred in this iteration, so we can stop for now. - break - - def save(self, output_path): - """ - Save the ONNX model to the given location. - Args: - output_path: Path pointing to the location where to write - out the updated ONNX model. - """ - self.graph.cleanup().toposort() - model = gs.export_onnx(self.graph) - onnx.save(model, output_path) - LOGGER.info(f"Saved ONNX model to {output_path}") - - def register_nms( - self, - *, - score_thresh: float = 0.25, - nms_thresh: float = 0.45, - detections_per_img: int = 100, - ): - """ - Register the ``EfficientNMS_TRT`` plugin node. - NMS expects these shapes for its input tensors: - - box_net: [batch_size, number_boxes, 4] - - class_net: [batch_size, number_boxes, number_labels] - Args: - score_thresh (float): The scalar threshold for score (low scoring boxes are removed). - nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU - overlap with previously selected boxes are removed). - detections_per_img (int): Number of best detections to keep after NMS. - """ - - self.infer() - # Find the concat node at the end of the network - op_inputs = self.graph.outputs - op = "EfficientNMS_TRT" - attrs = { - "plugin_version": "1", - "background_class": -1, # no background class - "max_output_boxes": detections_per_img, - "score_threshold": score_thresh, - "iou_threshold": nms_thresh, - "score_activation": False, - "box_coding": 0, - } - - if self.precision == "fp32": - dtype_output = np.float32 - elif self.precision == "fp16": - dtype_output = np.float16 - else: - raise NotImplementedError(f"Currently not supports precision: {self.precision}") - - # NMS Outputs - output_num_detections = gs.Variable( - name="num_dets", - dtype=np.int32, - shape=[self.batch_size, 1], - ) # A scalar indicating the number of valid detections per batch image. - output_boxes = gs.Variable( - name="det_boxes", - dtype=dtype_output, - shape=[self.batch_size, detections_per_img, 4], - ) - output_scores = gs.Variable( - name="det_scores", - dtype=dtype_output, - shape=[self.batch_size, detections_per_img], - ) - output_labels = gs.Variable( - name="det_classes", - dtype=np.int32, - shape=[self.batch_size, detections_per_img], - ) - - op_outputs = [output_num_detections, output_boxes, output_scores, output_labels] - - # Create the NMS Plugin node with the selected inputs. The outputs of the node will also - # become the final outputs of the graph. - self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs) - LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}") - - self.graph.outputs = op_outputs - - self.infer() - - def save(self, output_path): - """ - Save the ONNX model to the given location. 
- Args: - output_path: Path pointing to the location where to write - out the updated ONNX model. - """ - self.graph.cleanup().toposort() - model = gs.export_onnx(self.graph) - onnx.save(model, output_path) - LOGGER.info(f"Saved ONNX model to {output_path}") diff --git a/spaces/Salesforce/BLIP/data/pretrain_dataset.py b/spaces/Salesforce/BLIP/data/pretrain_dataset.py deleted file mode 100644 index 703d543ab5267fdc6fe2b7c84ef6a631d8af90ad..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/BLIP/data/pretrain_dataset.py +++ /dev/null @@ -1,59 +0,0 @@ -import json -import os -import random - -from torch.utils.data import Dataset - -from PIL import Image -from PIL import ImageFile -ImageFile.LOAD_TRUNCATED_IMAGES = True -Image.MAX_IMAGE_PIXELS = None - -from data.utils import pre_caption -import os,glob - -class pretrain_dataset(Dataset): - def __init__(self, ann_file, laion_path, transform): - - self.ann_pretrain = [] - for f in ann_file: - print('loading '+f) - ann = json.load(open(f,'r')) - self.ann_pretrain += ann - - self.laion_path = laion_path - if self.laion_path: - self.laion_files = glob.glob(os.path.join(laion_path,'*.json')) - - print('loading '+self.laion_files[0]) - with open(self.laion_files[0],'r') as f: - self.ann_laion = json.load(f) - - self.annotation = self.ann_pretrain + self.ann_laion - else: - self.annotation = self.ann_pretrain - - self.transform = transform - - - def reload_laion(self, epoch): - n = epoch%len(self.laion_files) - print('loading '+self.laion_files[n]) - with open(self.laion_files[n],'r') as f: - self.ann_laion = json.load(f) - - self.annotation = self.ann_pretrain + self.ann_laion - - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - image = Image.open(ann['image']).convert('RGB') - image = self.transform(image) - caption = pre_caption(ann['caption'],30) - - return image, caption \ No newline at end of file diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/DUC.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/DUC.py deleted file mode 100644 index 86811b6fd629c18d1556fef844a52ff96ef47b87..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/DUC.py +++ /dev/null @@ -1,28 +0,0 @@ -# ----------------------------------------------------- -# Copyright (c) Shanghai Jiao Tong University. All rights reserved. 
-# Written by Jiefeng Li (jeff.lee.sjtu@gmail.com) -# ----------------------------------------------------- - -import torch.nn as nn - - -class DUC(nn.Module): - ''' - Initialize: inplanes, planes, upscale_factor - OUTPUT: (planes // upscale_factor^2) * ht * wd - ''' - - def __init__(self, inplanes, planes, upscale_factor=2): - super(DUC, self).__init__() - self.conv = nn.Conv2d( - inplanes, planes, kernel_size=3, padding=1, bias=False) - self.bn = nn.BatchNorm2d(planes, momentum=0.1) - self.relu = nn.ReLU(inplace=True) - self.pixel_shuffle = nn.PixelShuffle(upscale_factor) - - def forward(self, x): - x = self.conv(x) - x = self.bn(x) - x = self.relu(x) - x = self.pixel_shuffle(x) - return x diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/trichomoniasis.md b/spaces/SarthakSidhant/Go-Cattle/diseases/trichomoniasis.md deleted file mode 100644 index 4ec893f1d8f3621822885aa3b780aa29b53275dd..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/trichomoniasis.md +++ /dev/null @@ -1,42 +0,0 @@ -## Trichomoniasis - -**Information:** Trichomoniasis is a venereal disease that affects cattle. It is caused by a protozoan parasite called Tritrichomonas foetus. Trichomoniasis can cause a variety of symptoms in affected animals, including infertility, abortion, and early embryonic death. - -**Symptoms:** - -* Infertility -* Abortion -* Early embryonic death -* Vaginal discharge -* Inflammation of the uterus -* In some cases, no symptoms may be present - -**Remedies:** - -* There is no specific cure for trichomoniasis. -* Treatment is usually supportive and may include: - * Antibiotics to treat secondary infections - * Treatment of other reproductive problems -* Animals with trichomoniasis should be isolated from other animals to prevent the spread of the disease. - -**Causes:** - -* Trichomoniasis is caused by a protozoan parasite called Tritrichomonas foetus. -* This parasite is transmitted through sexual contact between bulls and cows. -* Once inside the animal's reproductive tract, the parasite can cause inflammation and infection. -* This can lead to infertility, abortion, and early embryonic death. - -**Prevention:** - -* The best way to prevent trichomoniasis is to practice good breeding practices. -* This includes: - * Separating bulls from cows that are not pregnant - * Testing bulls for trichomoniasis before breeding -* Vaccinations are available for some types of trichomoniasis. - -**Outlook:** - -* The outlook for animals with trichomoniasis depends on the severity of the infection. -* Animals with mild infections may recover with treatment. -* Animals with severe infections may never be able to reproduce. -* Animals that recover from trichomoniasis may be immune to future infection. diff --git a/spaces/SarthakSidhant/Go-Cattle/support.py b/spaces/SarthakSidhant/Go-Cattle/support.py deleted file mode 100644 index 1531eb38398e4ae004d2b68a8c8654a35d272449..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/support.py +++ /dev/null @@ -1,22 +0,0 @@ -import datetime - - -def highlight_list_elements(lst): - highlighted_list = "" - for item,n in enumerate(lst): - highlighted_list += f'{item+1}). 
{n} \n' - return highlighted_list - -def save_feedback(name,email,feedback): - # Generate a unique filename using the current timestamp - timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S") - filename = f"./feedbacks/feedback_{name}_{email}_{timestamp}.txt" - - # Save the feedback to a file - with open(filename, 'w') as file: - file.write(f'name : {name}\nemail : {email}\nfeedback : {feedback}') - - return filename - -if __name__ == "__main__": - print("This is the main module.") \ No newline at end of file diff --git a/spaces/Savethecats/README/README.md b/spaces/Savethecats/README/README.md deleted file mode 100644 index 7516a7a97705cabc97ad313880cdd97e93a53365..0000000000000000000000000000000000000000 --- a/spaces/Savethecats/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🐠 -colorFrom: indigo -colorTo: pink -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/Silentlin/DiffSinger/modules/diffsinger_midi/fs2.py b/spaces/Silentlin/DiffSinger/modules/diffsinger_midi/fs2.py deleted file mode 100644 index 94a9ec3c51cd749576a5f60caec5e5b08d2a7d02..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/modules/diffsinger_midi/fs2.py +++ /dev/null @@ -1,119 +0,0 @@ -from modules.commons.common_layers import * -from modules.commons.common_layers import Embedding -from modules.fastspeech.tts_modules import FastspeechDecoder, DurationPredictor, LengthRegulator, PitchPredictor, \ - EnergyPredictor, FastspeechEncoder -from utils.cwt import cwt2f0 -from utils.hparams import hparams -from utils.pitch_utils import f0_to_coarse, denorm_f0, norm_f0 -from modules.fastspeech.fs2 import FastSpeech2 - - -class FastspeechMIDIEncoder(FastspeechEncoder): - def forward_embedding(self, txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding): - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(txt_tokens) - x = x + midi_embedding + midi_dur_embedding + slur_embedding - if hparams['use_pos_embed']: - if hparams.get('rel_pos') is not None and hparams['rel_pos']: - x = self.embed_positions(x) - else: - positions = self.embed_positions(txt_tokens) - x = x + positions - x = F.dropout(x, p=self.dropout, training=self.training) - return x - - def forward(self, txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding): - """ - - :param txt_tokens: [B, T] - :return: { - 'encoder_out': [T x B x C] - } - """ - encoder_padding_mask = txt_tokens.eq(self.padding_idx).data - x = self.forward_embedding(txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding) # [B, T, H] - x = super(FastspeechEncoder, self).forward(x, encoder_padding_mask) - return x - - -FS_ENCODERS = { - 'fft': lambda hp, embed_tokens, d: FastspeechMIDIEncoder( - embed_tokens, hp['hidden_size'], hp['enc_layers'], hp['enc_ffn_kernel_size'], - num_heads=hp['num_heads']), -} - - -class FastSpeech2MIDI(FastSpeech2): - def __init__(self, dictionary, out_dims=None): - super().__init__(dictionary, out_dims) - del self.encoder - self.encoder = FS_ENCODERS[hparams['encoder_type']](hparams, self.encoder_embed_tokens, self.dictionary) - self.midi_embed = Embedding(300, self.hidden_size, self.padding_idx) - self.midi_dur_layer = Linear(1, self.hidden_size) - self.is_slur_embed = Embedding(2, self.hidden_size) - - def forward(self, txt_tokens, mel2ph=None, spk_embed=None, - ref_mels=None, f0=None, uv=None, energy=None, skip_decoder=False, - spk_embed_dur_id=None, spk_embed_f0_id=None, infer=False, 
**kwargs): - ret = {} - - midi_embedding = self.midi_embed(kwargs['pitch_midi']) - midi_dur_embedding, slur_embedding = 0, 0 - if kwargs.get('midi_dur') is not None: - midi_dur_embedding = self.midi_dur_layer(kwargs['midi_dur'][:, :, None]) # [B, T, 1] -> [B, T, H] - if kwargs.get('is_slur') is not None: - slur_embedding = self.is_slur_embed(kwargs['is_slur']) - encoder_out = self.encoder(txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding) # [B, T, C] - src_nonpadding = (txt_tokens > 0).float()[:, :, None] - - # add ref style embed - # Not implemented - # variance encoder - var_embed = 0 - - # encoder_out_dur denotes encoder outputs for duration predictor - # in speech adaptation, duration predictor use old speaker embedding - if hparams['use_spk_embed']: - spk_embed_dur = spk_embed_f0 = spk_embed = self.spk_embed_proj(spk_embed)[:, None, :] - elif hparams['use_spk_id']: - spk_embed_id = spk_embed - if spk_embed_dur_id is None: - spk_embed_dur_id = spk_embed_id - if spk_embed_f0_id is None: - spk_embed_f0_id = spk_embed_id - spk_embed = self.spk_embed_proj(spk_embed_id)[:, None, :] - spk_embed_dur = spk_embed_f0 = spk_embed - if hparams['use_split_spk_id']: - spk_embed_dur = self.spk_embed_dur(spk_embed_dur_id)[:, None, :] - spk_embed_f0 = self.spk_embed_f0(spk_embed_f0_id)[:, None, :] - else: - spk_embed_dur = spk_embed_f0 = spk_embed = 0 - - # add dur - dur_inp = (encoder_out + var_embed + spk_embed_dur) * src_nonpadding - - mel2ph = self.add_dur(dur_inp, mel2ph, txt_tokens, ret) - - decoder_inp = F.pad(encoder_out, [0, 0, 1, 0]) - - mel2ph_ = mel2ph[..., None].repeat([1, 1, encoder_out.shape[-1]]) - decoder_inp_origin = decoder_inp = torch.gather(decoder_inp, 1, mel2ph_) # [B, T, H] - - tgt_nonpadding = (mel2ph > 0).float()[:, :, None] - - # add pitch and energy embed - pitch_inp = (decoder_inp_origin + var_embed + spk_embed_f0) * tgt_nonpadding - if hparams['use_pitch_embed']: - pitch_inp_ph = (encoder_out + var_embed + spk_embed_f0) * src_nonpadding - decoder_inp = decoder_inp + self.add_pitch(pitch_inp, f0, uv, mel2ph, ret, encoder_out=pitch_inp_ph) - if hparams['use_energy_embed']: - decoder_inp = decoder_inp + self.add_energy(pitch_inp, energy, ret) - - ret['decoder_inp'] = decoder_inp = (decoder_inp + spk_embed) * tgt_nonpadding - - if skip_decoder: - return ret - ret['mel_out'] = self.run_decoder(decoder_inp, tgt_nonpadding, ret, infer=infer, **kwargs) - - return ret - diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/lm.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/lm.py deleted file mode 100644 index 8cefd2c58c3a337378579d6cd6469fd038cbb1ee..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/lm.py +++ /dev/null @@ -1,531 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass -from functools import partial -import logging -import math -import typing as tp - -import torch -from torch import nn - -from ..utils import utils -from ..modules.streaming import StreamingModule, State -from ..modules.transformer import StreamingTransformer, create_norm_fn -from ..modules.conditioners import ( - ConditionFuser, - ClassifierFreeGuidanceDropout, - AttributeDropout, - ConditioningProvider, - ConditioningAttributes, - ConditionType, -) -from ..modules.codebooks_patterns import CodebooksPatternProvider -from ..modules.activations import get_activation_fn - - -logger = logging.getLogger(__name__) -ConditionTensors = tp.Dict[str, ConditionType] -CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]] - - -def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None): - """LM layer initialization. - Inspired from xlformers: https://github.com/fairinternal/xlformers - - Args: - method (str): Method name for init function. Valid options are: - 'gaussian', 'uniform'. - input_dim (int): Input dimension of the initialized module. - init_depth (int, optional): Optional init depth value used to rescale - the standard deviation if defined. - """ - # Compute std - std = 1 / math.sqrt(input_dim) - # Rescale with depth - if init_depth is not None: - std = std / math.sqrt(2 * init_depth) - - if method == 'gaussian': - return partial( - torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std - ) - elif method == 'uniform': - bound = math.sqrt(3) * std # ensure the standard deviation is `std` - return partial(torch.nn.init.uniform_, a=-bound, b=bound) - else: - raise ValueError("Unsupported layer initialization method") - - -def init_layer(m: nn.Module, - method: str, - init_depth: tp.Optional[int] = None, - zero_bias_init: bool = False): - """Wrapper around ``get_init_fn`` for proper initialization of LM modules. - - Args: - m (nn.Module): Module to initialize. - method (str): Method name for the init function. - init_depth (int, optional): Optional init depth value used to rescale - the standard deviation if defined. - zero_bias_init (bool): Whether to initialize the bias to 0 or not. - """ - if isinstance(m, nn.Linear): - init_fn = get_init_fn(method, m.in_features, init_depth=init_depth) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - if zero_bias_init and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Embedding): - init_fn = get_init_fn(method, m.embedding_dim, init_depth=None) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - - -class ScaledEmbedding(nn.Embedding): - """Boost learning rate for embeddings (with `scale`). - """ - def __init__(self, *args, lr=None, **kwargs): - super().__init__(*args, **kwargs) - self.lr = lr - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - return group - - -@dataclass -class LMOutput: - # The logits are already re-aligned with the input codes - # hence no extra shift is required, e.g. when computing CE - logits: torch.Tensor # [B, K, T, card] - mask: torch.Tensor # [B, K, T] - - -class LMModel(StreamingModule): - """Transformer-based language model on multiple streams of codes. 
- - Args: - pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving. - condition_provider (MusicConditioningProvider): Conditioning provider from metadata. - fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input. - n_q (int): Number of parallel streams to model. - card (int): Cardinality, vocabulary size. - dim (int): Dimension of the transformer encoder. - num_heads (int): Number of heads for the transformer encoder. - hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder. - norm (str): Normalization method. - norm_first (bool): Use pre-norm instead of post-norm. - emb_lr (float, optional): Embedding-specific learning rate. - bias_proj (bool): Use bias for output projections. - weight_init (str, optional): Method for weight initialization. - depthwise_init (str, optional): Method for depthwise weight initialization. - zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros. - cfg_dropout (float): Classifier-free guidance dropout. - cfg_coef (float): Classifier-free guidance coefficient. - attribute_dropout (dict): Attribute dropout probabilities. - two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps. - **kwargs: Additional parameters for the transformer encoder. - """ - def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider, - fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8, - hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False, - emb_lr: tp.Optional[float] = None, bias_proj: bool = True, - weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None, - zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0, - attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False, - **kwargs): - super().__init__() - self.cfg_coef = cfg_coef - self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout) - self.att_dropout = AttributeDropout(p=attribute_dropout) - self.condition_provider = condition_provider - self.fuser = fuser - self.card = card - embed_dim = self.card + 1 - self.n_q = n_q - self.dim = dim - self.pattern_provider = pattern_provider - self.two_step_cfg = two_step_cfg - self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)]) - if 'activation' in kwargs: - kwargs['activation'] = get_activation_fn(kwargs['activation']) - self.transformer = StreamingTransformer( - d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim), - norm=norm, norm_first=norm_first, **kwargs) - self.out_norm: tp.Optional[nn.Module] = None - if norm_first: - self.out_norm = create_norm_fn(norm, dim) - self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)]) - self._init_weights(weight_init, depthwise_init, zero_bias_init) - self._fsdp: tp.Optional[nn.Module] - self.__dict__['_fsdp'] = None - - def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool): - """Initialization of the transformer module weights. - - Args: - weight_init (str, optional): Weight initialization strategy. See ``get_init_fn`` for valid options. - depthwise_init (str, optional): Depthwise initialization strategy. The following options are valid: - 'current' where the depth corresponds to the current layer index or 'global' where the total number - of layer is used as depth. 
If not set, no depthwise initialization strategy is used. - zero_bias_init (bool): Whether to initialize bias to zero or not. - """ - assert depthwise_init is None or depthwise_init in ['current', 'global'] - assert depthwise_init is None or weight_init is not None, \ - "If 'depthwise_init' is defined, a 'weight_init' method should be provided." - assert not zero_bias_init or weight_init is not None, \ - "If 'zero_bias_init', a 'weight_init' method should be provided" - - if weight_init is None: - return - - for emb_layer in self.emb: - init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - for layer_idx, tr_layer in enumerate(self.transformer.layers): - depth = None - if depthwise_init == 'current': - depth = layer_idx + 1 - elif depthwise_init == 'global': - depth = len(self.transformer.layers) - init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init) - tr_layer.apply(init_fn) - - for linear in self.linears: - init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - @property - def special_token_id(self) -> int: - return self.card - - @property - def num_codebooks(self) -> int: - return self.n_q - - def forward(self, sequence: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor: - """Apply language model on sequence and conditions. - Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and - S the sequence steps, return the logits with shape [B, card, K, S]. - - Args: - indices (torch.Tensor): Indices of the codes to model. - conditions (list of ConditioningAttributes): Conditions to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType], optional): Pre-computed conditioning - tensors, see `conditions`. - Returns: - torch.Tensor: Logits. - """ - B, K, S = sequence.shape - assert K == self.num_codebooks, "Sequence shape must match the specified number of codebooks" - input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)]) - if condition_tensors is None: - assert not self._is_streaming, "Conditions tensors should be precomputed when streaming." - # apply dropout modules - conditions = self.cfg_dropout(conditions) - conditions = self.att_dropout(conditions) - tokenized = self.condition_provider.tokenize(conditions) - # encode conditions and fuse, both have a streaming cache to not recompute when generating. - condition_tensors = self.condition_provider(tokenized) - else: - assert not conditions, "Shouldn't pass both conditions and condition_tensors." - - input_, cross_attention_input = self.fuser(input_, condition_tensors) - - out = self.transformer(input_, cross_attention_src=cross_attention_input) - if self.out_norm: - out = self.out_norm(out) - logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card] - - # remove the prefix from the model outputs - if len(self.fuser.fuse2cond['prepend']) > 0: - logits = logits[:, :, -S:] - - return logits # [B, K, S, card] - - def compute_predictions( - self, codes: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput: - """Given an input tensor of codes [B, K, T] and list of conditions, runs the model - forward using the specified codes interleaving pattern. 
- - Args: - codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size, - K the number of codebooks and T the number of timesteps. - conditions (list of ConditioningAttributes): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType], optional): pre-computed conditioning - tensors, see `conditions`. - Returns: - LMOutput: Language model outputs - logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes, - i.e. the first item corresponds to logits to predict the first code, meaning that - no additional shifting of codes and logits is required. - mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions. - Given the specified interleaving strategies, parts of the logits and codes should - not be considered as valid predictions because of invalid context. - """ - B, K, T = codes.shape - codes = codes.contiguous() - # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens - pattern = self.pattern_provider.get_pattern(T) - sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence( - codes, self.special_token_id, keep_only_valid_steps=True - ) - # apply model on pattern sequence - model = self if self._fsdp is None else self._fsdp - logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card] - # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card] - # and provide the corresponding mask over invalid positions of tokens - logits = logits.permute(0, 3, 1, 2) # [B, card, K, S] - # note: we use nans as special token to make it obvious if we feed unexpected logits - logits, logits_indexes, logits_mask = pattern.revert_pattern_logits( - logits, float('nan'), keep_only_valid_steps=True - ) - logits = logits.permute(0, 2, 3, 1) # [B, K, T, card] - logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T] - return LMOutput(logits, logits_mask) - - def _sample_next_token(self, - sequence: torch.Tensor, - cfg_conditions: CFGConditions, - unconditional_state: State, - use_sampling: bool = False, - temp: float = 1.0, - top_k: int = 0, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None) -> torch.Tensor: - """Sample next token from the model given a sequence and a set of conditions. The model supports - multiple sampling strategies (greedy sampling, softmax, top-k, top-p...). - - Args: - sequence (torch.Tensor): Current sequence of shape [B, K, S] - with K corresponding to the number of codebooks and S the number of sequence steps. - S = 1 in streaming mode, except for the first step that contains a bigger prompt. - condition_tensors (dict[str, ConditionType): Set of conditions. If CFG is used, - should be twice the batch size, being the concatenation of the conditions + null conditions. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coef (float, optional): classifier free guidance coefficient - Returns: - next_token (torch.Tensor): Next token tensor of shape [B, K, 1]. 
- """ - B = sequence.shape[0] - cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef - model = self if self._fsdp is None else self._fsdp - if self.two_step_cfg and cfg_conditions != {}: - assert isinstance(cfg_conditions, tuple), type(cfg_conditions) - condition_tensors, null_condition_tensors = cfg_conditions - cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors) - state = self.get_streaming_state() - self.set_streaming_state(unconditional_state) - uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors) - unconditional_state.update(self.get_streaming_state()) - self.set_streaming_state(state) - logits = uncond_logits + (cond_logits - uncond_logits) * self.cfg_coef - else: - assert isinstance(cfg_conditions, dict) - condition_tensors = cfg_conditions - if condition_tensors: - # Preparing for CFG, predicting both conditional and unconditional logits. - sequence = torch.cat([sequence, sequence], dim=0) - all_logits = model( - sequence, - conditions=[], condition_tensors=condition_tensors) - if condition_tensors: - cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card] - logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef - else: - logits = all_logits - - logits = logits.permute(0, 1, 3, 2) # [B, K, card, T] - logits = logits[..., -1] # [B x K x card] - - # Apply softmax for sampling if temp > 0. Else, do greedy sampling to avoid zero division error. - if use_sampling and temp > 0.0: - probs = torch.softmax(logits / temp, dim=-1) - if top_p > 0.0: - next_token = utils.sample_top_p(probs, p=top_p) - elif top_k > 0: - next_token = utils.sample_top_k(probs, k=top_k) - else: - next_token = utils.multinomial(probs, num_samples=1) - else: - next_token = torch.argmax(logits, dim=-1, keepdim=True) - - return next_token - - @torch.no_grad() - def generate(self, - prompt: tp.Optional[torch.Tensor] = None, - conditions: tp.List[ConditioningAttributes] = [], - num_samples: tp.Optional[int] = None, - max_gen_len: int = 256, - use_sampling: bool = True, - temp: float = 1.0, - top_k: int = 250, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None, - two_step_cfg: tp.Optional[bool] = None, - remove_prompts: bool = False, - check: bool = False, - callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor: - """Generate tokens sampling from the model given a prompt or unconditionally. Generation can - be perform in a greedy fashion or using sampling with top K and top P strategies. - - Args: - prompt (torch.Tensor, optional): Prompt tokens of shape [B, K, T]. - conditions_tensors (list of ConditioningAttributes, optional): List of conditions. - num_samples (int, optional): Number of samples to generate when no prompt and no conditions are given. - max_gen_len (int): Maximum generation length. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coeff (float, optional): Classifier-free guidance coefficient. - two_step_cfg (bool, optional): Whether to perform classifier-free guidance with two steps generation. - remove_prompts (bool): Whether to remove prompts from generation or not. - check (bool): Whether to apply further checks on generated sequence. - callback (Callback, optional): Callback function to report generation progress. - Returns: - torch.Tensor: Generated tokens. 
- """ - assert not self.training, "generation shouldn't be used in training mode." - first_param = next(iter(self.parameters())) - device = first_param.device - - # Checking all input shapes are consistent. - possible_num_samples = [] - if num_samples is not None: - possible_num_samples.append(num_samples) - elif prompt is not None: - possible_num_samples.append(prompt.shape[0]) - elif conditions: - possible_num_samples.append(len(conditions)) - else: - possible_num_samples.append(1) - assert [x == possible_num_samples[0] for x in possible_num_samples], "Inconsistent inputs shapes" - num_samples = possible_num_samples[0] - - # below we create set of conditions: one conditional and one unconditional - # to do that we merge the regular condition together with the null condition - # we then do 1 forward pass instead of 2. - # the reason for that is two-fold: - # 1. it is about x2 faster than doing 2 forward passes - # 2. avoid the streaming API treating the 2 passes as part of different time steps - # We also support doing two different passes, in particular to ensure that - # the padding structure is exactly the same between train and test. - # With a batch size of 1, this can be slower though. - cfg_conditions: CFGConditions - two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg - if conditions: - null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions) - if two_step_cfg: - cfg_conditions = ( - self.condition_provider(self.condition_provider.tokenize(conditions)), - self.condition_provider(self.condition_provider.tokenize(null_conditions)), - ) - else: - conditions = conditions + null_conditions - tokenized = self.condition_provider.tokenize(conditions) - cfg_conditions = self.condition_provider(tokenized) - else: - cfg_conditions = {} - - if prompt is None: - assert num_samples > 0 - prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device) - - B, K, T = prompt.shape - start_offset = T - assert start_offset < max_gen_len - - pattern = self.pattern_provider.get_pattern(max_gen_len) - # this token is used as default value for codes that are not generated yet - unknown_token = -1 - - # we generate codes up to the max_gen_len that will be mapped to the pattern sequence - gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device) - # filling the gen_codes with the prompt if needed - gen_codes[..., :start_offset] = prompt - # create the gen_sequence with proper interleaving from the pattern: [B, K, S] - gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id) - # retrieve the start_offset in the sequence: - # it is the first sequence step that contains the `start_offset` timestep - start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset) - assert start_offset_sequence is not None - - with self.streaming(): - unconditional_state = self.get_streaming_state() - prev_offset = 0 - gen_sequence_len = gen_sequence.shape[-1] # gen_sequence shape is [B, K, S] - for offset in range(start_offset_sequence, gen_sequence_len): - # get current sequence (note that the streaming API is providing the caching over previous offsets) - curr_sequence = gen_sequence[..., prev_offset:offset] - curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1) - if check: - # check coherence between mask and sequence - assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all() - # should never happen as gen_sequence is filled 
progressively - assert not (curr_sequence == unknown_token).any() - # sample next token from the model, next token shape is [B, K, 1] - next_token = self._sample_next_token( - curr_sequence, cfg_conditions, unconditional_state, use_sampling, temp, top_k, top_p, - cfg_coef=cfg_coef) - # ensure the tokens that should be masked are properly set to special_token_id - # as the model never output special_token_id - valid_mask = mask[..., offset:offset+1].expand(B, -1, -1) - next_token[~valid_mask] = self.special_token_id - # ensure we don't overwrite prompt tokens, we only write over unknown tokens - # (then mask tokens should be left as is as well, which is correct) - gen_sequence[..., offset:offset+1] = torch.where( - gen_sequence[..., offset:offset+1] == unknown_token, - next_token, gen_sequence[..., offset:offset+1] - ) - prev_offset = offset - if callback is not None: - callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence) - unconditional_state.clear() - - # ensure sequence has been entirely filled - assert not (gen_sequence == unknown_token).any() - # ensure gen_sequence pattern and mask are matching - # which means the gen_sequence is valid according to the pattern - assert ( - gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id) - ).all() - # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps - out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token) - - # sanity checks over the returned codes and corresponding masks - assert (out_codes[..., :max_gen_len] != unknown_token).all() - assert (out_mask[..., :max_gen_len] == 1).all() - - out_start_offset = start_offset if remove_prompts else 0 - out_codes = out_codes[..., out_start_offset:max_gen_len] - - # ensure the returned codes are all valid - assert (out_codes >= 0).all() and (out_codes <= self.card).all() - return out_codes diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_magic_arguments.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_magic_arguments.py deleted file mode 100644 index 8b263b25d6f0eacd456f99f1480c2491f0f23224..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_magic_arguments.py +++ /dev/null @@ -1,140 +0,0 @@ -#----------------------------------------------------------------------------- -# Copyright (C) 2010-2011, IPython Development Team. -# -# Distributed under the terms of the Modified BSD License. -# -# The full license is in the file COPYING.txt, distributed with this software. -#----------------------------------------------------------------------------- - -import argparse -import sys - -from IPython.core.magic_arguments import (argument, argument_group, kwds, - magic_arguments, parse_argstring, real_name) - - -@magic_arguments() -@argument('-f', '--foo', help="an argument") -def magic_foo1(self, args): - """ A docstring. - """ - return parse_argstring(magic_foo1, args) - - -@magic_arguments() -def magic_foo2(self, args): - """ A docstring. - """ - return parse_argstring(magic_foo2, args) - - -@magic_arguments() -@argument('-f', '--foo', help="an argument") -@argument_group('Group') -@argument('-b', '--bar', help="a grouped argument") -@argument_group('Second Group') -@argument('-z', '--baz', help="another grouped argument") -def magic_foo3(self, args): - """ A docstring. 
- """ - return parse_argstring(magic_foo3, args) - - -@magic_arguments() -@kwds(argument_default=argparse.SUPPRESS) -@argument('-f', '--foo', help="an argument") -def magic_foo4(self, args): - """ A docstring. - """ - return parse_argstring(magic_foo4, args) - - -@magic_arguments('frobnicate') -@argument('-f', '--foo', help="an argument") -def magic_foo5(self, args): - """ A docstring. - """ - return parse_argstring(magic_foo5, args) - - -@magic_arguments() -@argument('-f', '--foo', help="an argument") -def magic_magic_foo(self, args): - """ A docstring. - """ - return parse_argstring(magic_magic_foo, args) - - -@magic_arguments() -@argument('-f', '--foo', help="an argument") -def foo(self, args): - """ A docstring. - """ - return parse_argstring(foo, args) - - -def test_magic_arguments(): - # “optional arguments” was replaced with “options” in argparse help - # https://docs.python.org/3/whatsnew/3.10.html#argparse - # https://bugs.python.org/issue9694 - options = "optional arguments" if sys.version_info < (3, 10) else "options" - - assert ( - magic_foo1.__doc__ - == f"::\n\n %foo1 [-f FOO]\n\n A docstring.\n\n{options}:\n -f FOO, --foo FOO an argument\n" - ) - assert getattr(magic_foo1, "argcmd_name", None) == None - assert real_name(magic_foo1) == "foo1" - assert magic_foo1(None, "") == argparse.Namespace(foo=None) - assert hasattr(magic_foo1, "has_arguments") - - assert magic_foo2.__doc__ == "::\n\n %foo2\n\n A docstring.\n" - assert getattr(magic_foo2, "argcmd_name", None) == None - assert real_name(magic_foo2) == "foo2" - assert magic_foo2(None, "") == argparse.Namespace() - assert hasattr(magic_foo2, "has_arguments") - - assert ( - magic_foo3.__doc__ - == f"::\n\n %foo3 [-f FOO] [-b BAR] [-z BAZ]\n\n A docstring.\n\n{options}:\n -f FOO, --foo FOO an argument\n\nGroup:\n -b BAR, --bar BAR a grouped argument\n\nSecond Group:\n -z BAZ, --baz BAZ another grouped argument\n" - ) - assert getattr(magic_foo3, "argcmd_name", None) == None - assert real_name(magic_foo3) == "foo3" - assert magic_foo3(None, "") == argparse.Namespace(bar=None, baz=None, foo=None) - assert hasattr(magic_foo3, "has_arguments") - - assert ( - magic_foo4.__doc__ - == f"::\n\n %foo4 [-f FOO]\n\n A docstring.\n\n{options}:\n -f FOO, --foo FOO an argument\n" - ) - assert getattr(magic_foo4, "argcmd_name", None) == None - assert real_name(magic_foo4) == "foo4" - assert magic_foo4(None, "") == argparse.Namespace() - assert hasattr(magic_foo4, "has_arguments") - - assert ( - magic_foo5.__doc__ - == f"::\n\n %frobnicate [-f FOO]\n\n A docstring.\n\n{options}:\n -f FOO, --foo FOO an argument\n" - ) - assert getattr(magic_foo5, "argcmd_name", None) == "frobnicate" - assert real_name(magic_foo5) == "frobnicate" - assert magic_foo5(None, "") == argparse.Namespace(foo=None) - assert hasattr(magic_foo5, "has_arguments") - - assert ( - magic_magic_foo.__doc__ - == f"::\n\n %magic_foo [-f FOO]\n\n A docstring.\n\n{options}:\n -f FOO, --foo FOO an argument\n" - ) - assert getattr(magic_magic_foo, "argcmd_name", None) == None - assert real_name(magic_magic_foo) == "magic_foo" - assert magic_magic_foo(None, "") == argparse.Namespace(foo=None) - assert hasattr(magic_magic_foo, "has_arguments") - - assert ( - foo.__doc__ - == f"::\n\n %foo [-f FOO]\n\n A docstring.\n\n{options}:\n -f FOO, --foo FOO an argument\n" - ) - assert getattr(foo, "argcmd_name", None) == None - assert real_name(foo) == "foo" - assert foo(None, "") == argparse.Namespace(foo=None) - assert hasattr(foo, "has_arguments") diff --git 
a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/py_utils.hpp b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/py_utils.hpp deleted file mode 100644 index 97163a4e24ad18ea84c302952058cf5de4a289cf..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/py_utils.hpp +++ /dev/null @@ -1,84 +0,0 @@ -#ifndef _PY_UTILS_HPP_ -#define _PY_UTILS_HPP_ - -typedef int (Py_IsInitialized)(); -typedef PyInterpreterState* (PyInterpreterState_Head)(); -typedef enum { PyGILState_LOCKED, PyGILState_UNLOCKED } PyGILState_STATE; -typedef PyGILState_STATE(PyGILState_Ensure)(); -typedef void (PyGILState_Release)(PyGILState_STATE); -typedef int (PyRun_SimpleString)(const char *command); -typedef PyThreadState* (PyInterpreterState_ThreadHead)(PyInterpreterState* interp); -typedef PyThreadState* (PyThreadState_Next)(PyThreadState *tstate); -typedef PyThreadState* (PyThreadState_Swap)(PyThreadState *tstate); -typedef PyThreadState* (_PyThreadState_UncheckedGet)(); -typedef PyObject* (PyObject_CallFunctionObjArgs)(PyObject *callable, ...); // call w/ varargs, last arg should be nullptr -typedef PyObject* (PyInt_FromLong)(long); -typedef PyObject* (PyErr_Occurred)(); -typedef void (PyErr_Fetch)(PyObject **ptype, PyObject **pvalue, PyObject **ptraceback); -typedef void (PyErr_Restore)(PyObject *type, PyObject *value, PyObject *traceback); -typedef PyObject* (PyImport_ImportModule) (const char *name); -typedef PyObject* (PyImport_ImportModuleNoBlock) (const char *name); -typedef PyObject* (PyObject_GetAttrString)(PyObject *o, const char *attr_name); -typedef PyObject* (PyObject_HasAttrString)(PyObject *o, const char *attr_name); -typedef void* (PyThread_get_key_value)(int); -typedef int (PyThread_set_key_value)(int, void*); -typedef void (PyThread_delete_key_value)(int); -typedef int (PyObject_Not) (PyObject *o); -typedef PyObject* (PyDict_New)(); -typedef PyObject* (PyUnicode_InternFromString)(const char *u); -typedef PyObject * (_PyObject_FastCallDict)( - PyObject *callable, PyObject *const *args, Py_ssize_t nargs, PyObject *kwargs); -typedef int (PyTraceBack_Here)(PyFrameObject *frame); - -typedef PyObject* PyTuple_New(Py_ssize_t len); -typedef PyObject* PyEval_CallObjectWithKeywords(PyObject *callable, PyObject *args, PyObject *kwargs); - -typedef void (PyEval_SetTrace)(Py_tracefunc, PyObject *); -typedef int (*Py_tracefunc)(PyObject *, PyFrameObject *frame, int, PyObject *); -typedef int (_PyEval_SetTrace)(PyThreadState *tstate, Py_tracefunc func, PyObject *arg); - -typedef PyObject* PyObject_Repr(PyObject *); -typedef const char* PyUnicode_AsUTF8(PyObject *unicode); - -// holder to ensure we release the GIL even in error conditions -class GilHolder { - PyGILState_STATE _gilState; - PyGILState_Release* _release; -public: - GilHolder(PyGILState_Ensure* acquire, PyGILState_Release* release) { - _gilState = acquire(); - _release = release; - } - - ~GilHolder() { - _release(_gilState); - } -}; - -#ifdef _WIN32 - -#define PRINT(msg) {std::cout << msg << std::endl << std::flush;} - -#define DEFINE_PROC_NO_CHECK(func, funcType, funcNameStr, errorCode) \ - funcType func=reinterpret_cast(GetProcAddress(module, funcNameStr)); - -#define DEFINE_PROC(func, funcType, funcNameStr, errorCode) \ - DEFINE_PROC_NO_CHECK(func, funcType, funcNameStr, errorCode); \ - if(func == nullptr){std::cout << 
funcNameStr << " not found." << std::endl << std::flush; return errorCode;}; - -#else // LINUX ----------------------------------------------------------------- - -#define PRINT(msg) {printf(msg); printf("\n");} - -#define CHECK_NULL(ptr, msg, errorCode) if(ptr == nullptr){printf(msg); return errorCode;} - -#define DEFINE_PROC_NO_CHECK(func, funcType, funcNameStr, errorCode) \ - funcType func; *(void**)(&func) = dlsym(module, funcNameStr); - -#define DEFINE_PROC(func, funcType, funcNameStr, errorCode) \ - DEFINE_PROC_NO_CHECK(func, funcType, funcNameStr, errorCode); \ - if(func == nullptr){printf(funcNameStr); printf(" not found.\n"); return errorCode;}; - -#endif //_WIN32 - -#endif //_PY_UTILS_HPP_ \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/gen_efficientnet.py b/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/gen_efficientnet.py deleted file mode 100644 index cd170d4cc5bed6ca82b61539902b470d3320c691..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/gen_efficientnet.py +++ /dev/null @@ -1,1450 +0,0 @@ -""" Generic Efficient Networks - -A generic MobileNet class with building blocks to support a variety of models: - -* EfficientNet (B0-B8, L2 + Tensorflow pretrained AutoAug/RandAug/AdvProp/NoisyStudent ports) - - EfficientNet: Rethinking Model Scaling for CNNs - https://arxiv.org/abs/1905.11946 - - CondConv: Conditionally Parameterized Convolutions for Efficient Inference - https://arxiv.org/abs/1904.04971 - - Adversarial Examples Improve Image Recognition - https://arxiv.org/abs/1911.09665 - - Self-training with Noisy Student improves ImageNet classification - https://arxiv.org/abs/1911.04252 - -* EfficientNet-Lite - -* MixNet (Small, Medium, and Large) - - MixConv: Mixed Depthwise Convolutional Kernels - https://arxiv.org/abs/1907.09595 - -* MNasNet B1, A1 (SE), Small - - MnasNet: Platform-Aware Neural Architecture Search for Mobile - https://arxiv.org/abs/1807.11626 - -* FBNet-C - - FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable NAS - https://arxiv.org/abs/1812.03443 - -* Single-Path NAS Pixel1 - - Single-Path NAS: Designing Hardware-Efficient ConvNets - https://arxiv.org/abs/1904.02877 - -* And likely more... 
- -Hacked together by / Copyright 2020 Ross Wightman -""" -import torch.nn as nn -import torch.nn.functional as F - -from .config import layer_config_kwargs, is_scriptable -from .conv2d_layers import select_conv2d -from .helpers import load_pretrained -from .efficientnet_builder import * - -__all__ = ['GenEfficientNet', 'mnasnet_050', 'mnasnet_075', 'mnasnet_100', 'mnasnet_b1', 'mnasnet_140', - 'semnasnet_050', 'semnasnet_075', 'semnasnet_100', 'mnasnet_a1', 'semnasnet_140', 'mnasnet_small', - 'mobilenetv2_100', 'mobilenetv2_140', 'mobilenetv2_110d', 'mobilenetv2_120d', - 'fbnetc_100', 'spnasnet_100', 'efficientnet_b0', 'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3', - 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6', 'efficientnet_b7', 'efficientnet_b8', - 'efficientnet_l2', 'efficientnet_es', 'efficientnet_em', 'efficientnet_el', - 'efficientnet_cc_b0_4e', 'efficientnet_cc_b0_8e', 'efficientnet_cc_b1_8e', - 'efficientnet_lite0', 'efficientnet_lite1', 'efficientnet_lite2', 'efficientnet_lite3', 'efficientnet_lite4', - 'tf_efficientnet_b0', 'tf_efficientnet_b1', 'tf_efficientnet_b2', 'tf_efficientnet_b3', - 'tf_efficientnet_b4', 'tf_efficientnet_b5', 'tf_efficientnet_b6', 'tf_efficientnet_b7', 'tf_efficientnet_b8', - 'tf_efficientnet_b0_ap', 'tf_efficientnet_b1_ap', 'tf_efficientnet_b2_ap', 'tf_efficientnet_b3_ap', - 'tf_efficientnet_b4_ap', 'tf_efficientnet_b5_ap', 'tf_efficientnet_b6_ap', 'tf_efficientnet_b7_ap', - 'tf_efficientnet_b8_ap', 'tf_efficientnet_b0_ns', 'tf_efficientnet_b1_ns', 'tf_efficientnet_b2_ns', - 'tf_efficientnet_b3_ns', 'tf_efficientnet_b4_ns', 'tf_efficientnet_b5_ns', 'tf_efficientnet_b6_ns', - 'tf_efficientnet_b7_ns', 'tf_efficientnet_l2_ns', 'tf_efficientnet_l2_ns_475', - 'tf_efficientnet_es', 'tf_efficientnet_em', 'tf_efficientnet_el', - 'tf_efficientnet_cc_b0_4e', 'tf_efficientnet_cc_b0_8e', 'tf_efficientnet_cc_b1_8e', - 'tf_efficientnet_lite0', 'tf_efficientnet_lite1', 'tf_efficientnet_lite2', 'tf_efficientnet_lite3', - 'tf_efficientnet_lite4', - 'mixnet_s', 'mixnet_m', 'mixnet_l', 'mixnet_xl', 'tf_mixnet_s', 'tf_mixnet_m', 'tf_mixnet_l'] - - -model_urls = { - 'mnasnet_050': None, - 'mnasnet_075': None, - 'mnasnet_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mnasnet_b1-74cb7081.pth', - 'mnasnet_140': None, - 'mnasnet_small': None, - - 'semnasnet_050': None, - 'semnasnet_075': None, - 'semnasnet_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mnasnet_a1-d9418771.pth', - 'semnasnet_140': None, - - 'mobilenetv2_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_100_ra-b33bc2c4.pth', - 'mobilenetv2_110d': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_110d_ra-77090ade.pth', - 'mobilenetv2_120d': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_120d_ra-5987e2ed.pth', - 'mobilenetv2_140': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_140_ra-21a4e913.pth', - - 'fbnetc_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/fbnetc_100-c345b898.pth', - 'spnasnet_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/spnasnet_100-048bc3f4.pth', - - 'efficientnet_b0': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b0_ra-3dd342df.pth', - 'efficientnet_b1': - 
'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b1-533bc792.pth', - 'efficientnet_b2': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b2_ra-bcdf34b7.pth', - 'efficientnet_b3': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b3_ra2-cf984f9c.pth', - 'efficientnet_b4': None, - 'efficientnet_b5': None, - 'efficientnet_b6': None, - 'efficientnet_b7': None, - 'efficientnet_b8': None, - 'efficientnet_l2': None, - - 'efficientnet_es': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_es_ra-f111e99c.pth', - 'efficientnet_em': None, - 'efficientnet_el': None, - - 'efficientnet_cc_b0_4e': None, - 'efficientnet_cc_b0_8e': None, - 'efficientnet_cc_b1_8e': None, - - 'efficientnet_lite0': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_lite0_ra-37913777.pth', - 'efficientnet_lite1': None, - 'efficientnet_lite2': None, - 'efficientnet_lite3': None, - 'efficientnet_lite4': None, - - 'tf_efficientnet_b0': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b0_aa-827b6e33.pth', - 'tf_efficientnet_b1': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b1_aa-ea7a6ee0.pth', - 'tf_efficientnet_b2': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b2_aa-60c94f97.pth', - 'tf_efficientnet_b3': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b3_aa-84b4657e.pth', - 'tf_efficientnet_b4': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b4_aa-818f208c.pth', - 'tf_efficientnet_b5': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b5_ra-9a3e5369.pth', - 'tf_efficientnet_b6': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b6_aa-80ba17e4.pth', - 'tf_efficientnet_b7': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_ra-6c08e654.pth', - 'tf_efficientnet_b8': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b8_ra-572d5dd9.pth', - - 'tf_efficientnet_b0_ap': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b0_ap-f262efe1.pth', - 'tf_efficientnet_b1_ap': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b1_ap-44ef0a3d.pth', - 'tf_efficientnet_b2_ap': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b2_ap-2f8e7636.pth', - 'tf_efficientnet_b3_ap': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b3_ap-aad25bdd.pth', - 'tf_efficientnet_b4_ap': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b4_ap-dedb23e6.pth', - 'tf_efficientnet_b5_ap': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b5_ap-9e82fae8.pth', - 'tf_efficientnet_b6_ap': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b6_ap-4ffb161f.pth', - 'tf_efficientnet_b7_ap': - 
'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_ap-ddb28fec.pth', - 'tf_efficientnet_b8_ap': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b8_ap-00e169fa.pth', - - 'tf_efficientnet_b0_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b0_ns-c0e6a31c.pth', - 'tf_efficientnet_b1_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b1_ns-99dd0c41.pth', - 'tf_efficientnet_b2_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b2_ns-00306e48.pth', - 'tf_efficientnet_b3_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b3_ns-9d44bf68.pth', - 'tf_efficientnet_b4_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b4_ns-d6313a46.pth', - 'tf_efficientnet_b5_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b5_ns-6f26d0cf.pth', - 'tf_efficientnet_b6_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b6_ns-51548356.pth', - 'tf_efficientnet_b7_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_ns-1dbc32de.pth', - 'tf_efficientnet_l2_ns_475': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_l2_ns_475-bebbd00a.pth', - 'tf_efficientnet_l2_ns': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_l2_ns-df73bb44.pth', - - 'tf_efficientnet_es': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_es-ca1afbfe.pth', - 'tf_efficientnet_em': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_em-e78cfe58.pth', - 'tf_efficientnet_el': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_el-5143854e.pth', - - 'tf_efficientnet_cc_b0_4e': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b0_4e-4362b6b2.pth', - 'tf_efficientnet_cc_b0_8e': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b0_8e-66184a25.pth', - 'tf_efficientnet_cc_b1_8e': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b1_8e-f7c79ae1.pth', - - 'tf_efficientnet_lite0': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite0-0aa007d2.pth', - 'tf_efficientnet_lite1': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite1-bde8b488.pth', - 'tf_efficientnet_lite2': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite2-dcccb7df.pth', - 'tf_efficientnet_lite3': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite3-b733e338.pth', - 'tf_efficientnet_lite4': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite4-741542c3.pth', - - 'mixnet_s': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_s-a907afbc.pth', - 'mixnet_m': 
'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_m-4647fc68.pth', - 'mixnet_l': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_l-5a9a2ed8.pth', - 'mixnet_xl': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_xl_ra-aac3c00c.pth', - - 'tf_mixnet_s': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mixnet_s-89d3354b.pth', - 'tf_mixnet_m': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mixnet_m-0f4d8805.pth', - 'tf_mixnet_l': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mixnet_l-6c92e0c8.pth', -} - - -class GenEfficientNet(nn.Module): - """ Generic EfficientNets - - An implementation of mobile optimized networks that covers: - * EfficientNet (B0-B8, L2, CondConv, EdgeTPU) - * MixNet (Small, Medium, and Large, XL) - * MNASNet A1, B1, and small - * FBNet C - * Single-Path NAS Pixel1 - """ - - def __init__(self, block_args, num_classes=1000, in_chans=3, num_features=1280, stem_size=32, fix_stem=False, - channel_multiplier=1.0, channel_divisor=8, channel_min=None, - pad_type='', act_layer=nn.ReLU, drop_rate=0., drop_connect_rate=0., - se_kwargs=None, norm_layer=nn.BatchNorm2d, norm_kwargs=None, - weight_init='goog'): - super(GenEfficientNet, self).__init__() - self.drop_rate = drop_rate - - if not fix_stem: - stem_size = round_channels(stem_size, channel_multiplier, channel_divisor, channel_min) - self.conv_stem = select_conv2d(in_chans, stem_size, 3, stride=2, padding=pad_type) - self.bn1 = norm_layer(stem_size, **norm_kwargs) - self.act1 = act_layer(inplace=True) - in_chs = stem_size - - builder = EfficientNetBuilder( - channel_multiplier, channel_divisor, channel_min, - pad_type, act_layer, se_kwargs, norm_layer, norm_kwargs, drop_connect_rate) - self.blocks = nn.Sequential(*builder(in_chs, block_args)) - in_chs = builder.in_chs - - self.conv_head = select_conv2d(in_chs, num_features, 1, padding=pad_type) - self.bn2 = norm_layer(num_features, **norm_kwargs) - self.act2 = act_layer(inplace=True) - self.global_pool = nn.AdaptiveAvgPool2d(1) - self.classifier = nn.Linear(num_features, num_classes) - - for n, m in self.named_modules(): - if weight_init == 'goog': - initialize_weight_goog(m, n) - else: - initialize_weight_default(m, n) - - def features(self, x): - x = self.conv_stem(x) - x = self.bn1(x) - x = self.act1(x) - x = self.blocks(x) - x = self.conv_head(x) - x = self.bn2(x) - x = self.act2(x) - return x - - def as_sequential(self): - layers = [self.conv_stem, self.bn1, self.act1] - layers.extend(self.blocks) - layers.extend([ - self.conv_head, self.bn2, self.act2, - self.global_pool, nn.Flatten(), nn.Dropout(self.drop_rate), self.classifier]) - return nn.Sequential(*layers) - - def forward(self, x): - x = self.features(x) - x = self.global_pool(x) - x = x.flatten(1) - if self.drop_rate > 0.: - x = F.dropout(x, p=self.drop_rate, training=self.training) - return self.classifier(x) - - -def _create_model(model_kwargs, variant, pretrained=False): - as_sequential = model_kwargs.pop('as_sequential', False) - model = GenEfficientNet(**model_kwargs) - if pretrained: - load_pretrained(model, model_urls[variant]) - if as_sequential: - model = model.as_sequential() - return model - - -def _gen_mnasnet_a1(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates a mnasnet-a1 model. 
- - Ref impl: https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet - Paper: https://arxiv.org/pdf/1807.11626.pdf. - - Args: - channel_multiplier: multiplier to number of channels per layer. - """ - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c16_noskip'], - # stage 1, 112x112 in - ['ir_r2_k3_s2_e6_c24'], - # stage 2, 56x56 in - ['ir_r3_k5_s2_e3_c40_se0.25'], - # stage 3, 28x28 in - ['ir_r4_k3_s2_e6_c80'], - # stage 4, 14x14in - ['ir_r2_k3_s1_e6_c112_se0.25'], - # stage 5, 14x14in - ['ir_r3_k5_s2_e6_c160_se0.25'], - # stage 6, 7x7 in - ['ir_r1_k3_s1_e6_c320'], - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - stem_size=32, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'relu'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_mnasnet_b1(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates a mnasnet-b1 model. - - Ref impl: https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet - Paper: https://arxiv.org/pdf/1807.11626.pdf. - - Args: - channel_multiplier: multiplier to number of channels per layer. - """ - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_c16_noskip'], - # stage 1, 112x112 in - ['ir_r3_k3_s2_e3_c24'], - # stage 2, 56x56 in - ['ir_r3_k5_s2_e3_c40'], - # stage 3, 28x28 in - ['ir_r3_k5_s2_e6_c80'], - # stage 4, 14x14in - ['ir_r2_k3_s1_e6_c96'], - # stage 5, 14x14in - ['ir_r4_k5_s2_e6_c192'], - # stage 6, 7x7 in - ['ir_r1_k3_s1_e6_c320_noskip'] - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - stem_size=32, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'relu'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_mnasnet_small(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates a mnasnet-small model. - - Ref impl: https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet - Paper: https://arxiv.org/pdf/1807.11626.pdf. - - Args: - channel_multiplier: multiplier to number of channels per layer. 
- """ - arch_def = [ - ['ds_r1_k3_s1_c8'], - ['ir_r1_k3_s2_e3_c16'], - ['ir_r2_k3_s2_e6_c16'], - ['ir_r4_k5_s2_e6_c32_se0.25'], - ['ir_r3_k3_s1_e6_c32_se0.25'], - ['ir_r3_k5_s2_e6_c88_se0.25'], - ['ir_r1_k3_s1_e6_c144'] - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - stem_size=8, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'relu'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_mobilenet_v2( - variant, channel_multiplier=1.0, depth_multiplier=1.0, fix_stem_head=False, pretrained=False, **kwargs): - """ Generate MobileNet-V2 network - Ref impl: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v2.py - Paper: https://arxiv.org/abs/1801.04381 - """ - arch_def = [ - ['ds_r1_k3_s1_c16'], - ['ir_r2_k3_s2_e6_c24'], - ['ir_r3_k3_s2_e6_c32'], - ['ir_r4_k3_s2_e6_c64'], - ['ir_r3_k3_s1_e6_c96'], - ['ir_r3_k3_s2_e6_c160'], - ['ir_r1_k3_s1_e6_c320'], - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def, depth_multiplier=depth_multiplier, fix_first_last=fix_stem_head), - num_features=1280 if fix_stem_head else round_channels(1280, channel_multiplier, 8, None), - stem_size=32, - fix_stem=fix_stem_head, - channel_multiplier=channel_multiplier, - norm_kwargs=resolve_bn_args(kwargs), - act_layer=nn.ReLU6, - **kwargs - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_fbnetc(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """ FBNet-C - - Paper: https://arxiv.org/abs/1812.03443 - Ref Impl: https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/maskrcnn_benchmark/modeling/backbone/fbnet_modeldef.py - - NOTE: the impl above does not relate to the 'C' variant here, that was derived from paper, - it was used to confirm some building block details - """ - arch_def = [ - ['ir_r1_k3_s1_e1_c16'], - ['ir_r1_k3_s2_e6_c24', 'ir_r2_k3_s1_e1_c24'], - ['ir_r1_k5_s2_e6_c32', 'ir_r1_k5_s1_e3_c32', 'ir_r1_k5_s1_e6_c32', 'ir_r1_k3_s1_e6_c32'], - ['ir_r1_k5_s2_e6_c64', 'ir_r1_k5_s1_e3_c64', 'ir_r2_k5_s1_e6_c64'], - ['ir_r3_k5_s1_e6_c112', 'ir_r1_k5_s1_e3_c112'], - ['ir_r4_k5_s2_e6_c184'], - ['ir_r1_k3_s1_e6_c352'], - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - stem_size=16, - num_features=1984, # paper suggests this, but is not 100% clear - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'relu'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_spnasnet(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates the Single-Path NAS model from search targeted for Pixel1 phone. - - Paper: https://arxiv.org/abs/1904.02877 - - Args: - channel_multiplier: multiplier to number of channels per layer. 
- """ - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_c16_noskip'], - # stage 1, 112x112 in - ['ir_r3_k3_s2_e3_c24'], - # stage 2, 56x56 in - ['ir_r1_k5_s2_e6_c40', 'ir_r3_k3_s1_e3_c40'], - # stage 3, 28x28 in - ['ir_r1_k5_s2_e6_c80', 'ir_r3_k3_s1_e3_c80'], - # stage 4, 14x14in - ['ir_r1_k5_s1_e6_c96', 'ir_r3_k5_s1_e3_c96'], - # stage 5, 14x14in - ['ir_r4_k5_s2_e6_c192'], - # stage 6, 7x7 in - ['ir_r1_k3_s1_e6_c320_noskip'] - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - stem_size=32, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'relu'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_efficientnet(variant, channel_multiplier=1.0, depth_multiplier=1.0, pretrained=False, **kwargs): - """Creates an EfficientNet model. - - Ref impl: https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py - Paper: https://arxiv.org/abs/1905.11946 - - EfficientNet params - name: (channel_multiplier, depth_multiplier, resolution, dropout_rate) - 'efficientnet-b0': (1.0, 1.0, 224, 0.2), - 'efficientnet-b1': (1.0, 1.1, 240, 0.2), - 'efficientnet-b2': (1.1, 1.2, 260, 0.3), - 'efficientnet-b3': (1.2, 1.4, 300, 0.3), - 'efficientnet-b4': (1.4, 1.8, 380, 0.4), - 'efficientnet-b5': (1.6, 2.2, 456, 0.4), - 'efficientnet-b6': (1.8, 2.6, 528, 0.5), - 'efficientnet-b7': (2.0, 3.1, 600, 0.5), - 'efficientnet-b8': (2.2, 3.6, 672, 0.5), - - Args: - channel_multiplier: multiplier to number of channels per layer - depth_multiplier: multiplier to number of repeats per stage - - """ - arch_def = [ - ['ds_r1_k3_s1_e1_c16_se0.25'], - ['ir_r2_k3_s2_e6_c24_se0.25'], - ['ir_r2_k5_s2_e6_c40_se0.25'], - ['ir_r3_k3_s2_e6_c80_se0.25'], - ['ir_r3_k5_s1_e6_c112_se0.25'], - ['ir_r4_k5_s2_e6_c192_se0.25'], - ['ir_r1_k3_s1_e6_c320_se0.25'], - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def, depth_multiplier), - num_features=round_channels(1280, channel_multiplier, 8, None), - stem_size=32, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'swish'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs, - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_efficientnet_edge(variant, channel_multiplier=1.0, depth_multiplier=1.0, pretrained=False, **kwargs): - arch_def = [ - # NOTE `fc` is present to override a mismatch between stem channels and in chs not - # present in other models - ['er_r1_k3_s1_e4_c24_fc24_noskip'], - ['er_r2_k3_s2_e8_c32'], - ['er_r4_k3_s2_e8_c48'], - ['ir_r5_k5_s2_e8_c96'], - ['ir_r4_k5_s1_e8_c144'], - ['ir_r2_k5_s2_e8_c192'], - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def, depth_multiplier), - num_features=round_channels(1280, channel_multiplier, 8, None), - stem_size=32, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'relu'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs, - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_efficientnet_condconv( - variant, channel_multiplier=1.0, depth_multiplier=1.0, experts_multiplier=1, pretrained=False, **kwargs): - """Creates an efficientnet-condconv model.""" - arch_def = [ - ['ds_r1_k3_s1_e1_c16_se0.25'], - ['ir_r2_k3_s2_e6_c24_se0.25'], - ['ir_r2_k5_s2_e6_c40_se0.25'], - ['ir_r3_k3_s2_e6_c80_se0.25'], - 
['ir_r3_k5_s1_e6_c112_se0.25_cc4'], - ['ir_r4_k5_s2_e6_c192_se0.25_cc4'], - ['ir_r1_k3_s1_e6_c320_se0.25_cc4'], - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def, depth_multiplier, experts_multiplier=experts_multiplier), - num_features=round_channels(1280, channel_multiplier, 8, None), - stem_size=32, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'swish'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs, - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_efficientnet_lite(variant, channel_multiplier=1.0, depth_multiplier=1.0, pretrained=False, **kwargs): - """Creates an EfficientNet-Lite model. - - Ref impl: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite - Paper: https://arxiv.org/abs/1905.11946 - - EfficientNet params - name: (channel_multiplier, depth_multiplier, resolution, dropout_rate) - 'efficientnet-lite0': (1.0, 1.0, 224, 0.2), - 'efficientnet-lite1': (1.0, 1.1, 240, 0.2), - 'efficientnet-lite2': (1.1, 1.2, 260, 0.3), - 'efficientnet-lite3': (1.2, 1.4, 280, 0.3), - 'efficientnet-lite4': (1.4, 1.8, 300, 0.3), - - Args: - channel_multiplier: multiplier to number of channels per layer - depth_multiplier: multiplier to number of repeats per stage - """ - arch_def = [ - ['ds_r1_k3_s1_e1_c16'], - ['ir_r2_k3_s2_e6_c24'], - ['ir_r2_k5_s2_e6_c40'], - ['ir_r3_k3_s2_e6_c80'], - ['ir_r3_k5_s1_e6_c112'], - ['ir_r4_k5_s2_e6_c192'], - ['ir_r1_k3_s1_e6_c320'], - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def, depth_multiplier, fix_first_last=True), - num_features=1280, - stem_size=32, - fix_stem=True, - channel_multiplier=channel_multiplier, - act_layer=nn.ReLU6, - norm_kwargs=resolve_bn_args(kwargs), - **kwargs, - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_mixnet_s(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates a MixNet Small model. - - Ref impl: https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet/mixnet - Paper: https://arxiv.org/abs/1907.09595 - """ - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c16'], # relu - # stage 1, 112x112 in - ['ir_r1_k3_a1.1_p1.1_s2_e6_c24', 'ir_r1_k3_a1.1_p1.1_s1_e3_c24'], # relu - # stage 2, 56x56 in - ['ir_r1_k3.5.7_s2_e6_c40_se0.5_nsw', 'ir_r3_k3.5_a1.1_p1.1_s1_e6_c40_se0.5_nsw'], # swish - # stage 3, 28x28 in - ['ir_r1_k3.5.7_p1.1_s2_e6_c80_se0.25_nsw', 'ir_r2_k3.5_p1.1_s1_e6_c80_se0.25_nsw'], # swish - # stage 4, 14x14in - ['ir_r1_k3.5.7_a1.1_p1.1_s1_e6_c120_se0.5_nsw', 'ir_r2_k3.5.7.9_a1.1_p1.1_s1_e3_c120_se0.5_nsw'], # swish - # stage 5, 14x14in - ['ir_r1_k3.5.7.9.11_s2_e6_c200_se0.5_nsw', 'ir_r2_k3.5.7.9_p1.1_s1_e6_c200_se0.5_nsw'], # swish - # 7x7 - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - num_features=1536, - stem_size=16, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'relu'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_mixnet_m(variant, channel_multiplier=1.0, depth_multiplier=1.0, pretrained=False, **kwargs): - """Creates a MixNet Medium-Large model. 
- - Ref impl: https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet/mixnet - Paper: https://arxiv.org/abs/1907.09595 - """ - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c24'], # relu - # stage 1, 112x112 in - ['ir_r1_k3.5.7_a1.1_p1.1_s2_e6_c32', 'ir_r1_k3_a1.1_p1.1_s1_e3_c32'], # relu - # stage 2, 56x56 in - ['ir_r1_k3.5.7.9_s2_e6_c40_se0.5_nsw', 'ir_r3_k3.5_a1.1_p1.1_s1_e6_c40_se0.5_nsw'], # swish - # stage 3, 28x28 in - ['ir_r1_k3.5.7_s2_e6_c80_se0.25_nsw', 'ir_r3_k3.5.7.9_a1.1_p1.1_s1_e6_c80_se0.25_nsw'], # swish - # stage 4, 14x14in - ['ir_r1_k3_s1_e6_c120_se0.5_nsw', 'ir_r3_k3.5.7.9_a1.1_p1.1_s1_e3_c120_se0.5_nsw'], # swish - # stage 5, 14x14in - ['ir_r1_k3.5.7.9_s2_e6_c200_se0.5_nsw', 'ir_r3_k3.5.7.9_p1.1_s1_e6_c200_se0.5_nsw'], # swish - # 7x7 - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def, depth_multiplier, depth_trunc='round'), - num_features=1536, - stem_size=24, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'relu'), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def mnasnet_050(pretrained=False, **kwargs): - """ MNASNet B1, depth multiplier of 0.5. """ - model = _gen_mnasnet_b1('mnasnet_050', 0.5, pretrained=pretrained, **kwargs) - return model - - -def mnasnet_075(pretrained=False, **kwargs): - """ MNASNet B1, depth multiplier of 0.75. """ - model = _gen_mnasnet_b1('mnasnet_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -def mnasnet_100(pretrained=False, **kwargs): - """ MNASNet B1, depth multiplier of 1.0. """ - model = _gen_mnasnet_b1('mnasnet_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def mnasnet_b1(pretrained=False, **kwargs): - """ MNASNet B1, depth multiplier of 1.0. """ - return mnasnet_100(pretrained, **kwargs) - - -def mnasnet_140(pretrained=False, **kwargs): - """ MNASNet B1, depth multiplier of 1.4 """ - model = _gen_mnasnet_b1('mnasnet_140', 1.4, pretrained=pretrained, **kwargs) - return model - - -def semnasnet_050(pretrained=False, **kwargs): - """ MNASNet A1 (w/ SE), depth multiplier of 0.5 """ - model = _gen_mnasnet_a1('semnasnet_050', 0.5, pretrained=pretrained, **kwargs) - return model - - -def semnasnet_075(pretrained=False, **kwargs): - """ MNASNet A1 (w/ SE), depth multiplier of 0.75. """ - model = _gen_mnasnet_a1('semnasnet_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -def semnasnet_100(pretrained=False, **kwargs): - """ MNASNet A1 (w/ SE), depth multiplier of 1.0. """ - model = _gen_mnasnet_a1('semnasnet_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def mnasnet_a1(pretrained=False, **kwargs): - """ MNASNet A1 (w/ SE), depth multiplier of 1.0. """ - return semnasnet_100(pretrained, **kwargs) - - -def semnasnet_140(pretrained=False, **kwargs): - """ MNASNet A1 (w/ SE), depth multiplier of 1.4. """ - model = _gen_mnasnet_a1('semnasnet_140', 1.4, pretrained=pretrained, **kwargs) - return model - - -def mnasnet_small(pretrained=False, **kwargs): - """ MNASNet Small, depth multiplier of 1.0. 
""" - model = _gen_mnasnet_small('mnasnet_small', 1.0, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv2_100(pretrained=False, **kwargs): - """ MobileNet V2 w/ 1.0 channel multiplier """ - model = _gen_mobilenet_v2('mobilenetv2_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv2_140(pretrained=False, **kwargs): - """ MobileNet V2 w/ 1.4 channel multiplier """ - model = _gen_mobilenet_v2('mobilenetv2_140', 1.4, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv2_110d(pretrained=False, **kwargs): - """ MobileNet V2 w/ 1.1 channel, 1.2 depth multipliers""" - model = _gen_mobilenet_v2( - 'mobilenetv2_110d', 1.1, depth_multiplier=1.2, fix_stem_head=True, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv2_120d(pretrained=False, **kwargs): - """ MobileNet V2 w/ 1.2 channel, 1.4 depth multipliers """ - model = _gen_mobilenet_v2( - 'mobilenetv2_120d', 1.2, depth_multiplier=1.4, fix_stem_head=True, pretrained=pretrained, **kwargs) - return model - - -def fbnetc_100(pretrained=False, **kwargs): - """ FBNet-C """ - if pretrained: - # pretrained model trained with non-default BN epsilon - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - model = _gen_fbnetc('fbnetc_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def spnasnet_100(pretrained=False, **kwargs): - """ Single-Path NAS Pixel1""" - model = _gen_spnasnet('spnasnet_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b0(pretrained=False, **kwargs): - """ EfficientNet-B0 """ - # NOTE for train set drop_rate=0.2, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b0', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b1(pretrained=False, **kwargs): - """ EfficientNet-B1 """ - # NOTE for train set drop_rate=0.2, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b1', channel_multiplier=1.0, depth_multiplier=1.1, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b2(pretrained=False, **kwargs): - """ EfficientNet-B2 """ - # NOTE for train set drop_rate=0.3, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b2', channel_multiplier=1.1, depth_multiplier=1.2, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b3(pretrained=False, **kwargs): - """ EfficientNet-B3 """ - # NOTE for train set drop_rate=0.3, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b3', channel_multiplier=1.2, depth_multiplier=1.4, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b4(pretrained=False, **kwargs): - """ EfficientNet-B4 """ - # NOTE for train set drop_rate=0.4, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b4', channel_multiplier=1.4, depth_multiplier=1.8, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b5(pretrained=False, **kwargs): - """ EfficientNet-B5 """ - # NOTE for train set drop_rate=0.4, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b5', channel_multiplier=1.6, depth_multiplier=2.2, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b6(pretrained=False, **kwargs): - """ EfficientNet-B6 """ - # NOTE for train set drop_rate=0.5, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b6', channel_multiplier=1.8, depth_multiplier=2.6, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b7(pretrained=False, **kwargs): - """ EfficientNet-B7 """ - # NOTE 
for train set drop_rate=0.5, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b7', channel_multiplier=2.0, depth_multiplier=3.1, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_b8(pretrained=False, **kwargs): - """ EfficientNet-B8 """ - # NOTE for train set drop_rate=0.5, drop_connect_rate=0.2 - model = _gen_efficientnet( - 'efficientnet_b8', channel_multiplier=2.2, depth_multiplier=3.6, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_l2(pretrained=False, **kwargs): - """ EfficientNet-L2. """ - # NOTE for train, drop_rate should be 0.5 - model = _gen_efficientnet( - 'efficientnet_l2', channel_multiplier=4.3, depth_multiplier=5.3, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_es(pretrained=False, **kwargs): - """ EfficientNet-Edge Small. """ - model = _gen_efficientnet_edge( - 'efficientnet_es', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_em(pretrained=False, **kwargs): - """ EfficientNet-Edge-Medium. """ - model = _gen_efficientnet_edge( - 'efficientnet_em', channel_multiplier=1.0, depth_multiplier=1.1, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_el(pretrained=False, **kwargs): - """ EfficientNet-Edge-Large. """ - model = _gen_efficientnet_edge( - 'efficientnet_el', channel_multiplier=1.2, depth_multiplier=1.4, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_cc_b0_4e(pretrained=False, **kwargs): - """ EfficientNet-CondConv-B0 w/ 4 Experts """ - # NOTE for train set drop_rate=0.25, drop_connect_rate=0.2 - model = _gen_efficientnet_condconv( - 'efficientnet_cc_b0_4e', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_cc_b0_8e(pretrained=False, **kwargs): - """ EfficientNet-CondConv-B0 w/ 8 Experts """ - # NOTE for train set drop_rate=0.25, drop_connect_rate=0.2 - model = _gen_efficientnet_condconv( - 'efficientnet_cc_b0_8e', channel_multiplier=1.0, depth_multiplier=1.0, experts_multiplier=2, - pretrained=pretrained, **kwargs) - return model - - -def efficientnet_cc_b1_8e(pretrained=False, **kwargs): - """ EfficientNet-CondConv-B1 w/ 8 Experts """ - # NOTE for train set drop_rate=0.25, drop_connect_rate=0.2 - model = _gen_efficientnet_condconv( - 'efficientnet_cc_b1_8e', channel_multiplier=1.0, depth_multiplier=1.1, experts_multiplier=2, - pretrained=pretrained, **kwargs) - return model - - -def efficientnet_lite0(pretrained=False, **kwargs): - """ EfficientNet-Lite0 """ - model = _gen_efficientnet_lite( - 'efficientnet_lite0', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_lite1(pretrained=False, **kwargs): - """ EfficientNet-Lite1 """ - model = _gen_efficientnet_lite( - 'efficientnet_lite1', channel_multiplier=1.0, depth_multiplier=1.1, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_lite2(pretrained=False, **kwargs): - """ EfficientNet-Lite2 """ - model = _gen_efficientnet_lite( - 'efficientnet_lite2', channel_multiplier=1.1, depth_multiplier=1.2, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_lite3(pretrained=False, **kwargs): - """ EfficientNet-Lite3 """ - model = _gen_efficientnet_lite( - 'efficientnet_lite3', channel_multiplier=1.2, depth_multiplier=1.4, pretrained=pretrained, **kwargs) - return model - - -def efficientnet_lite4(pretrained=False, **kwargs): - """ EfficientNet-Lite4 """ - model = 
_gen_efficientnet_lite( - 'efficientnet_lite4', channel_multiplier=1.4, depth_multiplier=1.8, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b0(pretrained=False, **kwargs): - """ EfficientNet-B0 AutoAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b0', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b1(pretrained=False, **kwargs): - """ EfficientNet-B1 AutoAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b1', channel_multiplier=1.0, depth_multiplier=1.1, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b2(pretrained=False, **kwargs): - """ EfficientNet-B2 AutoAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b2', channel_multiplier=1.1, depth_multiplier=1.2, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b3(pretrained=False, **kwargs): - """ EfficientNet-B3 AutoAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b3', channel_multiplier=1.2, depth_multiplier=1.4, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b4(pretrained=False, **kwargs): - """ EfficientNet-B4 AutoAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b4', channel_multiplier=1.4, depth_multiplier=1.8, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b5(pretrained=False, **kwargs): - """ EfficientNet-B5 RandAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b5', channel_multiplier=1.6, depth_multiplier=2.2, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b6(pretrained=False, **kwargs): - """ EfficientNet-B6 AutoAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b6', channel_multiplier=1.8, depth_multiplier=2.6, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b7(pretrained=False, **kwargs): - """ EfficientNet-B7 RandAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b7', channel_multiplier=2.0, depth_multiplier=3.1, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b8(pretrained=False, **kwargs): - """ EfficientNet-B8 RandAug. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b8', channel_multiplier=2.2, depth_multiplier=3.6, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b0_ap(pretrained=False, **kwargs): - """ EfficientNet-B0 AdvProp. 
Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b0_ap', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b1_ap(pretrained=False, **kwargs): - """ EfficientNet-B1 AdvProp. Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b1_ap', channel_multiplier=1.0, depth_multiplier=1.1, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b2_ap(pretrained=False, **kwargs): - """ EfficientNet-B2 AdvProp. Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b2_ap', channel_multiplier=1.1, depth_multiplier=1.2, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b3_ap(pretrained=False, **kwargs): - """ EfficientNet-B3 AdvProp. Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b3_ap', channel_multiplier=1.2, depth_multiplier=1.4, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b4_ap(pretrained=False, **kwargs): - """ EfficientNet-B4 AdvProp. Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b4_ap', channel_multiplier=1.4, depth_multiplier=1.8, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b5_ap(pretrained=False, **kwargs): - """ EfficientNet-B5 AdvProp. Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b5_ap', channel_multiplier=1.6, depth_multiplier=2.2, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b6_ap(pretrained=False, **kwargs): - """ EfficientNet-B6 AdvProp. Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - # NOTE for train, drop_rate should be 0.5 - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b6_ap', channel_multiplier=1.8, depth_multiplier=2.6, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b7_ap(pretrained=False, **kwargs): - """ EfficientNet-B7 AdvProp. 
Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - # NOTE for train, drop_rate should be 0.5 - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b7_ap', channel_multiplier=2.0, depth_multiplier=3.1, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b8_ap(pretrained=False, **kwargs): - """ EfficientNet-B8 AdvProp. Tensorflow compatible variant - Paper: Adversarial Examples Improve Image Recognition (https://arxiv.org/abs/1911.09665) - """ - # NOTE for train, drop_rate should be 0.5 - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b8_ap', channel_multiplier=2.2, depth_multiplier=3.6, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b0_ns(pretrained=False, **kwargs): - """ EfficientNet-B0 NoisyStudent. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b0_ns', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b1_ns(pretrained=False, **kwargs): - """ EfficientNet-B1 NoisyStudent. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b1_ns', channel_multiplier=1.0, depth_multiplier=1.1, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b2_ns(pretrained=False, **kwargs): - """ EfficientNet-B2 NoisyStudent. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b2_ns', channel_multiplier=1.1, depth_multiplier=1.2, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b3_ns(pretrained=False, **kwargs): - """ EfficientNet-B3 NoisyStudent. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b3_ns', channel_multiplier=1.2, depth_multiplier=1.4, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b4_ns(pretrained=False, **kwargs): - """ EfficientNet-B4 NoisyStudent. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b4_ns', channel_multiplier=1.4, depth_multiplier=1.8, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b5_ns(pretrained=False, **kwargs): - """ EfficientNet-B5 NoisyStudent. 
Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b5_ns', channel_multiplier=1.6, depth_multiplier=2.2, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b6_ns(pretrained=False, **kwargs): - """ EfficientNet-B6 NoisyStudent. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - # NOTE for train, drop_rate should be 0.5 - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b6_ns', channel_multiplier=1.8, depth_multiplier=2.6, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_b7_ns(pretrained=False, **kwargs): - """ EfficientNet-B7 NoisyStudent. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - # NOTE for train, drop_rate should be 0.5 - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_b7_ns', channel_multiplier=2.0, depth_multiplier=3.1, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_l2_ns_475(pretrained=False, **kwargs): - """ EfficientNet-L2 NoisyStudent @ 475x475. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - # NOTE for train, drop_rate should be 0.5 - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_l2_ns_475', channel_multiplier=4.3, depth_multiplier=5.3, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_l2_ns(pretrained=False, **kwargs): - """ EfficientNet-L2 NoisyStudent. Tensorflow compatible variant - Paper: Self-training with Noisy Student improves ImageNet classification (https://arxiv.org/abs/1911.04252) - """ - # NOTE for train, drop_rate should be 0.5 - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet( - 'tf_efficientnet_l2_ns', channel_multiplier=4.3, depth_multiplier=5.3, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_es(pretrained=False, **kwargs): - """ EfficientNet-Edge Small. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_edge( - 'tf_efficientnet_es', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_em(pretrained=False, **kwargs): - """ EfficientNet-Edge-Medium. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_edge( - 'tf_efficientnet_em', channel_multiplier=1.0, depth_multiplier=1.1, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_el(pretrained=False, **kwargs): - """ EfficientNet-Edge-Large. 
Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_edge( - 'tf_efficientnet_el', channel_multiplier=1.2, depth_multiplier=1.4, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_cc_b0_4e(pretrained=False, **kwargs): - """ EfficientNet-CondConv-B0 w/ 4 Experts """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_condconv( - 'tf_efficientnet_cc_b0_4e', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_cc_b0_8e(pretrained=False, **kwargs): - """ EfficientNet-CondConv-B0 w/ 8 Experts """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_condconv( - 'tf_efficientnet_cc_b0_8e', channel_multiplier=1.0, depth_multiplier=1.0, experts_multiplier=2, - pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_cc_b1_8e(pretrained=False, **kwargs): - """ EfficientNet-CondConv-B1 w/ 8 Experts """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_condconv( - 'tf_efficientnet_cc_b1_8e', channel_multiplier=1.0, depth_multiplier=1.1, experts_multiplier=2, - pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_lite0(pretrained=False, **kwargs): - """ EfficientNet-Lite0. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_lite( - 'tf_efficientnet_lite0', channel_multiplier=1.0, depth_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_lite1(pretrained=False, **kwargs): - """ EfficientNet-Lite1. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_lite( - 'tf_efficientnet_lite1', channel_multiplier=1.0, depth_multiplier=1.1, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_lite2(pretrained=False, **kwargs): - """ EfficientNet-Lite2. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_lite( - 'tf_efficientnet_lite2', channel_multiplier=1.1, depth_multiplier=1.2, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_lite3(pretrained=False, **kwargs): - """ EfficientNet-Lite3. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_lite( - 'tf_efficientnet_lite3', channel_multiplier=1.2, depth_multiplier=1.4, pretrained=pretrained, **kwargs) - return model - - -def tf_efficientnet_lite4(pretrained=False, **kwargs): - """ EfficientNet-Lite4. Tensorflow compatible variant """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_efficientnet_lite( - 'tf_efficientnet_lite4', channel_multiplier=1.4, depth_multiplier=1.8, pretrained=pretrained, **kwargs) - return model - - -def mixnet_s(pretrained=False, **kwargs): - """Creates a MixNet Small model. - """ - # NOTE for train set drop_rate=0.2 - model = _gen_mixnet_s( - 'mixnet_s', channel_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def mixnet_m(pretrained=False, **kwargs): - """Creates a MixNet Medium model. 
- """ - # NOTE for train set drop_rate=0.25 - model = _gen_mixnet_m( - 'mixnet_m', channel_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def mixnet_l(pretrained=False, **kwargs): - """Creates a MixNet Large model. - """ - # NOTE for train set drop_rate=0.25 - model = _gen_mixnet_m( - 'mixnet_l', channel_multiplier=1.3, pretrained=pretrained, **kwargs) - return model - - -def mixnet_xl(pretrained=False, **kwargs): - """Creates a MixNet Extra-Large model. - Not a paper spec, experimental def by RW w/ depth scaling. - """ - # NOTE for train set drop_rate=0.25, drop_connect_rate=0.2 - model = _gen_mixnet_m( - 'mixnet_xl', channel_multiplier=1.6, depth_multiplier=1.2, pretrained=pretrained, **kwargs) - return model - - -def mixnet_xxl(pretrained=False, **kwargs): - """Creates a MixNet Double Extra Large model. - Not a paper spec, experimental def by RW w/ depth scaling. - """ - # NOTE for train set drop_rate=0.3, drop_connect_rate=0.2 - model = _gen_mixnet_m( - 'mixnet_xxl', channel_multiplier=2.4, depth_multiplier=1.3, pretrained=pretrained, **kwargs) - return model - - -def tf_mixnet_s(pretrained=False, **kwargs): - """Creates a MixNet Small model. Tensorflow compatible variant - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mixnet_s( - 'tf_mixnet_s', channel_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_mixnet_m(pretrained=False, **kwargs): - """Creates a MixNet Medium model. Tensorflow compatible variant - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mixnet_m( - 'tf_mixnet_m', channel_multiplier=1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_mixnet_l(pretrained=False, **kwargs): - """Creates a MixNet Large model. Tensorflow compatible variant - """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mixnet_m( - 'tf_mixnet_l', channel_multiplier=1.3, pretrained=pretrained, **kwargs) - return model diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/dataset_mappers/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/dataset_mappers/__init__.py deleted file mode 100644 index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/dataset_mappers/__init__.py +++ /dev/null @@ -1 +0,0 @@ - diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/checkpoint.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/checkpoint.py deleted file mode 100644 index 6af3fae43ac4b35532641a81eb13557edfc7dfba..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/checkpoint.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -from annotator.uniformer.mmcv.fileio import FileClient -from ..dist_utils import allreduce_params, master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class CheckpointHook(Hook): - """Save checkpoints periodically. - - Args: - interval (int): The saving period. If ``by_epoch=True``, interval - indicates epochs, otherwise it indicates iterations. - Default: -1, which means "never". - by_epoch (bool): Saving checkpoints by epoch or by iteration. - Default: True. - save_optimizer (bool): Whether to save optimizer state_dict in the - checkpoint. 
It is usually used for resuming experiments. - Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, ``runner.work_dir`` will be used by default. If - specified, the ``out_dir`` will be the concatenation of ``out_dir`` - and the last level directory of ``runner.work_dir``. - `Changed in version 1.3.16.` - max_keep_ckpts (int, optional): The maximum checkpoints to keep. - In some cases we want only the latest few checkpoints and would - like to delete old ones to save the disk space. - Default: -1, which means unlimited. - save_last (bool, optional): Whether to force the last checkpoint to be - saved regardless of interval. Default: True. - sync_buffer (bool, optional): Whether to synchronize buffers in - different gpus. Default: False. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - - .. warning:: - Before v1.3.16, the ``out_dir`` argument indicates the path where the - checkpoint is stored. However, since v1.3.16, ``out_dir`` indicates the - root directory and the final path to save checkpoint is the - concatenation of ``out_dir`` and the last level directory of - ``runner.work_dir``. Suppose the value of ``out_dir`` is "/path/of/A" - and the value of ``runner.work_dir`` is "/path/of/B", then the final - path will be "/path/of/A/B". - """ - - def __init__(self, - interval=-1, - by_epoch=True, - save_optimizer=True, - out_dir=None, - max_keep_ckpts=-1, - save_last=True, - sync_buffer=False, - file_client_args=None, - **kwargs): - self.interval = interval - self.by_epoch = by_epoch - self.save_optimizer = save_optimizer - self.out_dir = out_dir - self.max_keep_ckpts = max_keep_ckpts - self.save_last = save_last - self.args = kwargs - self.sync_buffer = sync_buffer - self.file_client_args = file_client_args - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - - runner.logger.info((f'Checkpoints will be saved to {self.out_dir} by ' - f'{self.file_client.name}.')) - - # disable the create_symlink option because some file backends do not - # allow to create a symlink - if 'create_symlink' in self.args: - if self.args[ - 'create_symlink'] and not self.file_client.allow_symlink: - self.args['create_symlink'] = False - warnings.warn( - ('create_symlink is set as True by the user but is changed' - 'to be False because creating symbolic link is not ' - f'allowed in {self.file_client.name}')) - else: - self.args['create_symlink'] = self.file_client.allow_symlink - - def after_train_epoch(self, runner): - if not self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` epochs - # 2. 
reach the last epoch of training - if self.every_n_epochs( - runner, self.interval) or (self.save_last - and self.is_last_epoch(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.epoch + 1} epochs') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) - - @master_only - def _save_checkpoint(self, runner): - """Save the current checkpoint and delete unwanted checkpoint.""" - runner.save_checkpoint( - self.out_dir, save_optimizer=self.save_optimizer, **self.args) - if runner.meta is not None: - if self.by_epoch: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'epoch_{}.pth').format(runner.epoch + 1) - else: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'iter_{}.pth').format(runner.iter + 1) - runner.meta.setdefault('hook_msgs', dict()) - runner.meta['hook_msgs']['last_ckpt'] = self.file_client.join_path( - self.out_dir, cur_ckpt_filename) - # remove other checkpoints - if self.max_keep_ckpts > 0: - if self.by_epoch: - name = 'epoch_{}.pth' - current_ckpt = runner.epoch + 1 - else: - name = 'iter_{}.pth' - current_ckpt = runner.iter + 1 - redundant_ckpts = range( - current_ckpt - self.max_keep_ckpts * self.interval, 0, - -self.interval) - filename_tmpl = self.args.get('filename_tmpl', name) - for _step in redundant_ckpts: - ckpt_path = self.file_client.join_path( - self.out_dir, filename_tmpl.format(_step)) - if self.file_client.isfile(ckpt_path): - self.file_client.remove(ckpt_path) - else: - break - - def after_train_iter(self, runner): - if self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` iterations - # 2. reach the last iteration of training - if self.every_n_iters( - runner, self.interval) or (self.save_last - and self.is_last_iter(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.iter + 1} iterations') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/losses/utils.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/losses/utils.py deleted file mode 100644 index 85aec9f3045240c3de96a928324ae8f5c3aebe8b..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/losses/utils.py +++ /dev/null @@ -1,121 +0,0 @@ -import functools - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch.nn.functional as F - - -def get_class_weight(class_weight): - """Get class weight for loss function. - - Args: - class_weight (list[float] | str | None): If class_weight is a str, - take it as a file name and read from it. - """ - if isinstance(class_weight, str): - # take it as a file path - if class_weight.endswith('.npy'): - class_weight = np.load(class_weight) - else: - # pkl, json or yaml - class_weight = mmcv.load(class_weight) - - return class_weight - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are "none", "mean" and "sum". - - Return: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - elif reduction_enum == 2: - return loss.sum() - - -def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None): - """Apply element-wise weight and reduce loss. 
- - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. - reduction (str): Same as built-in losses of PyTorch. - avg_factor (float): Avarage factor when computing the mean of losses. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - assert weight.dim() == loss.dim() - if weight.dim() > 1: - assert weight.size(1) == 1 or weight.size(1) == loss.size(1) - loss = loss * weight - - # if avg_factor is not specified, just reduce the loss - if avg_factor is None: - loss = reduce_loss(loss, reduction) - else: - # if reduction is mean, then average the loss by avg_factor - if reduction == 'mean': - loss = loss.sum() / avg_factor - # if reduction is 'none', then do nothing, otherwise raise an error - elif reduction != 'none': - raise ValueError('avg_factor can not be used with reduction="sum"') - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.) - >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, avg_factor=2) - tensor(1.5000) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - avg_factor=None, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - return wrapper diff --git a/spaces/Superying/vits-uma-genshin-honkai/models.py b/spaces/Superying/vits-uma-genshin-honkai/models.py deleted file mode 100644 index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000 --- a/spaces/Superying/vits-uma-genshin-honkai/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/control.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/control.py deleted file mode 100644 index 88fcb9295164f4e18827ef61fff6723e94ef7381..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/control.py +++ /dev/null @@ -1,225 +0,0 @@ -import sys -import time -from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Union - -if sys.version_info >= (3, 8): - from typing import Final -else: - from pip._vendor.typing_extensions import Final # pragma: no cover - -from .segment import ControlCode, ControlType, Segment - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderResult - -STRIP_CONTROL_CODES: Final = [ - 7, # Bell - 8, # Backspace - 11, # Vertical tab - 12, # Form feed - 13, # Carriage return -] -_CONTROL_STRIP_TRANSLATE: Final = { - _codepoint: None for _codepoint in STRIP_CONTROL_CODES -} - -CONTROL_ESCAPE: Final = { - 7: "\\a", - 8: "\\b", - 11: "\\v", - 12: "\\f", - 13: "\\r", -} - -CONTROL_CODES_FORMAT: Dict[int, Callable[..., str]] = { - ControlType.BELL: lambda: "\x07", - ControlType.CARRIAGE_RETURN: lambda: "\r", - ControlType.HOME: lambda: "\x1b[H", - ControlType.CLEAR: lambda: "\x1b[2J", - ControlType.ENABLE_ALT_SCREEN: lambda: "\x1b[?1049h", - ControlType.DISABLE_ALT_SCREEN: lambda: "\x1b[?1049l", - ControlType.SHOW_CURSOR: lambda: "\x1b[?25h", - ControlType.HIDE_CURSOR: lambda: "\x1b[?25l", - ControlType.CURSOR_UP: lambda param: f"\x1b[{param}A", - ControlType.CURSOR_DOWN: lambda param: f"\x1b[{param}B", - ControlType.CURSOR_FORWARD: lambda param: f"\x1b[{param}C", - ControlType.CURSOR_BACKWARD: lambda param: f"\x1b[{param}D", - ControlType.CURSOR_MOVE_TO_COLUMN: lambda param: f"\x1b[{param+1}G", - ControlType.ERASE_IN_LINE: lambda param: f"\x1b[{param}K", - ControlType.CURSOR_MOVE_TO: lambda x, y: f"\x1b[{y+1};{x+1}H", - ControlType.SET_WINDOW_TITLE: lambda title: f"\x1b]0;{title}\x07", -} - - -class Control: - """A renderable that inserts a control code (non printable but may move cursor). - - Args: - *codes (str): Positional arguments are either a :class:`~rich.segment.ControlType` enum or a - tuple of ControlType and an integer parameter - """ - - __slots__ = ["segment"] - - def __init__(self, *codes: Union[ControlType, ControlCode]) -> None: - control_codes: List[ControlCode] = [ - (code,) if isinstance(code, ControlType) else code for code in codes - ] - _format_map = CONTROL_CODES_FORMAT - rendered_codes = "".join( - _format_map[code](*parameters) for code, *parameters in control_codes - ) - self.segment = Segment(rendered_codes, None, control_codes) - - @classmethod - def bell(cls) -> "Control": - """Ring the 'bell'.""" - return cls(ControlType.BELL) - - @classmethod - def home(cls) -> "Control": - """Move cursor to 'home' position.""" - return cls(ControlType.HOME) - - @classmethod - def move(cls, x: int = 0, y: int = 0) -> "Control": - """Move cursor relative to current position. - - Args: - x (int): X offset. - y (int): Y offset. - - Returns: - ~Control: Control object. 
- - """ - - def get_codes() -> Iterable[ControlCode]: - control = ControlType - if x: - yield ( - control.CURSOR_FORWARD if x > 0 else control.CURSOR_BACKWARD, - abs(x), - ) - if y: - yield ( - control.CURSOR_DOWN if y > 0 else control.CURSOR_UP, - abs(y), - ) - - control = cls(*get_codes()) - return control - - @classmethod - def move_to_column(cls, x: int, y: int = 0) -> "Control": - """Move to the given column, optionally add offset to row. - - Returns: - x (int): absolute x (column) - y (int): optional y offset (row) - - Returns: - ~Control: Control object. - """ - - return ( - cls( - (ControlType.CURSOR_MOVE_TO_COLUMN, x), - ( - ControlType.CURSOR_DOWN if y > 0 else ControlType.CURSOR_UP, - abs(y), - ), - ) - if y - else cls((ControlType.CURSOR_MOVE_TO_COLUMN, x)) - ) - - @classmethod - def move_to(cls, x: int, y: int) -> "Control": - """Move cursor to absolute position. - - Args: - x (int): x offset (column) - y (int): y offset (row) - - Returns: - ~Control: Control object. - """ - return cls((ControlType.CURSOR_MOVE_TO, x, y)) - - @classmethod - def clear(cls) -> "Control": - """Clear the screen.""" - return cls(ControlType.CLEAR) - - @classmethod - def show_cursor(cls, show: bool) -> "Control": - """Show or hide the cursor.""" - return cls(ControlType.SHOW_CURSOR if show else ControlType.HIDE_CURSOR) - - @classmethod - def alt_screen(cls, enable: bool) -> "Control": - """Enable or disable alt screen.""" - if enable: - return cls(ControlType.ENABLE_ALT_SCREEN, ControlType.HOME) - else: - return cls(ControlType.DISABLE_ALT_SCREEN) - - @classmethod - def title(cls, title: str) -> "Control": - """Set the terminal window title - - Args: - title (str): The new terminal window title - """ - return cls((ControlType.SET_WINDOW_TITLE, title)) - - def __str__(self) -> str: - return self.segment.text - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - if self.segment.text: - yield self.segment - - -def strip_control_codes( - text: str, _translate_table: Dict[int, None] = _CONTROL_STRIP_TRANSLATE -) -> str: - """Remove control codes from text. - - Args: - text (str): A string possibly contain control codes. - - Returns: - str: String with control codes removed. - """ - return text.translate(_translate_table) - - -def escape_control_codes( - text: str, - _translate_table: Dict[int, str] = CONTROL_ESCAPE, -) -> str: - """Replace control codes with their "escaped" equivalent in the given text. - (e.g. "\b" becomes "\\b") - - Args: - text (str): A string possibly containing control codes. - - Returns: - str: String with control codes replaced with their escaped version. - """ - return text.translate(_translate_table) - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - - console = Console() - console.print("Look at the title of your terminal window ^") - # console.print(Control((ControlType.SET_WINDOW_TITLE, "Hello, world!"))) - for i in range(10): - console.set_window_title("🚀 Loading" + "." 
* i) - time.sleep(0.5) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/retry.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/retry.py deleted file mode 100644 index 38988739d6406aeb5e3be903c0ea6fb82752f328..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/retry.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright 2016–2021 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import abc -import re -import typing - -if typing.TYPE_CHECKING: - from pip._vendor.tenacity import RetryCallState - - -class retry_base(abc.ABC): - """Abstract base class for retry strategies.""" - - @abc.abstractmethod - def __call__(self, retry_state: "RetryCallState") -> bool: - pass - - def __and__(self, other: "retry_base") -> "retry_all": - return retry_all(self, other) - - def __or__(self, other: "retry_base") -> "retry_any": - return retry_any(self, other) - - -RetryBaseT = typing.Union[retry_base, typing.Callable[["RetryCallState"], bool]] - - -class _retry_never(retry_base): - """Retry strategy that never rejects any result.""" - - def __call__(self, retry_state: "RetryCallState") -> bool: - return False - - -retry_never = _retry_never() - - -class _retry_always(retry_base): - """Retry strategy that always rejects any result.""" - - def __call__(self, retry_state: "RetryCallState") -> bool: - return True - - -retry_always = _retry_always() - - -class retry_if_exception(retry_base): - """Retry strategy that retries if an exception verifies a predicate.""" - - def __init__(self, predicate: typing.Callable[[BaseException], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - if retry_state.outcome.failed: - exception = retry_state.outcome.exception() - if exception is None: - raise RuntimeError("outcome failed but the exception is None") - return self.predicate(exception) - else: - return False - - -class retry_if_exception_type(retry_if_exception): - """Retries if an exception has been raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: isinstance(e, exception_types)) - - -class retry_if_not_exception_type(retry_if_exception): - """Retries except an exception has been raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: not isinstance(e, exception_types)) - - -class 
retry_unless_exception_type(retry_if_exception): - """Retries until an exception is raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: not isinstance(e, exception_types)) - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - # always retry if no exception was raised - if not retry_state.outcome.failed: - return True - - exception = retry_state.outcome.exception() - if exception is None: - raise RuntimeError("outcome failed but the exception is None") - return self.predicate(exception) - - -class retry_if_exception_cause_type(retry_base): - """Retries if any of the causes of the raised exception is of one or more types. - - The check on the type of the cause of the exception is done recursively (until finding - an exception in the chain that has no `__cause__`) - """ - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_cause_types = exception_types - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__ called before outcome was set") - - if retry_state.outcome.failed: - exc = retry_state.outcome.exception() - while exc is not None: - if isinstance(exc.__cause__, self.exception_cause_types): - return True - exc = exc.__cause__ - - return False - - -class retry_if_result(retry_base): - """Retries if the result verifies a predicate.""" - - def __init__(self, predicate: typing.Callable[[typing.Any], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - if not retry_state.outcome.failed: - return self.predicate(retry_state.outcome.result()) - else: - return False - - -class retry_if_not_result(retry_base): - """Retries if the result refutes a predicate.""" - - def __init__(self, predicate: typing.Callable[[typing.Any], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - if not retry_state.outcome.failed: - return not self.predicate(retry_state.outcome.result()) - else: - return False - - -class retry_if_exception_message(retry_if_exception): - """Retries if an exception message equals or matches.""" - - def __init__( - self, - message: typing.Optional[str] = None, - match: typing.Optional[str] = None, - ) -> None: - if message and match: - raise TypeError(f"{self.__class__.__name__}() takes either 'message' or 'match', not both") - - # set predicate - if message: - - def message_fnc(exception: BaseException) -> bool: - return message == str(exception) - - predicate = message_fnc - elif match: - prog = re.compile(match) - - def match_fnc(exception: BaseException) -> bool: - return bool(prog.match(str(exception))) - - predicate = match_fnc - else: - raise TypeError(f"{self.__class__.__name__}() missing 1 required argument 'message' or 'match'") - - super().__init__(predicate) - - -class retry_if_not_exception_message(retry_if_exception_message): - """Retries 
until an exception message equals or matches.""" - - def __init__( - self, - message: typing.Optional[str] = None, - match: typing.Optional[str] = None, - ) -> None: - super().__init__(message, match) - # invert predicate - if_predicate = self.predicate - self.predicate = lambda *args_, **kwargs_: not if_predicate(*args_, **kwargs_) - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - if not retry_state.outcome.failed: - return True - - exception = retry_state.outcome.exception() - if exception is None: - raise RuntimeError("outcome failed but the exception is None") - return self.predicate(exception) - - -class retry_any(retry_base): - """Retries if any of the retries condition is valid.""" - - def __init__(self, *retries: retry_base) -> None: - self.retries = retries - - def __call__(self, retry_state: "RetryCallState") -> bool: - return any(r(retry_state) for r in self.retries) - - -class retry_all(retry_base): - """Retries if all the retries condition are valid.""" - - def __init__(self, *retries: retry_base) -> None: - self.retries = retries - - def __call__(self, retry_state: "RetryCallState") -> bool: - return all(r(retry_state) for r in self.retries) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/webencodings/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/webencodings/__init__.py deleted file mode 100644 index d21d697c887bed1f8ab7f36d10185e986d9f1e54..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/webencodings/__init__.py +++ /dev/null @@ -1,342 +0,0 @@ -# coding: utf-8 -""" - - webencodings - ~~~~~~~~~~~~ - - This is a Python implementation of the `WHATWG Encoding standard - `. See README for details. - - :copyright: Copyright 2012 by Simon Sapin - :license: BSD, see LICENSE for details. - -""" - -from __future__ import unicode_literals - -import codecs - -from .labels import LABELS - - -VERSION = '0.5.1' - - -# Some names in Encoding are not valid Python aliases. Remap these. -PYTHON_NAMES = { - 'iso-8859-8-i': 'iso-8859-8', - 'x-mac-cyrillic': 'mac-cyrillic', - 'macintosh': 'mac-roman', - 'windows-874': 'cp874'} - -CACHE = {} - - -def ascii_lower(string): - r"""Transform (only) ASCII letters to lower case: A-Z is mapped to a-z. - - :param string: An Unicode string. - :returns: A new Unicode string. - - This is used for `ASCII case-insensitive - `_ - matching of encoding labels. - The same matching is also used, among other things, - for `CSS keywords `_. - - This is different from the :meth:`~py:str.lower` method of Unicode strings - which also affect non-ASCII characters, - sometimes mapping them into the ASCII range: - - >>> keyword = u'Bac\N{KELVIN SIGN}ground' - >>> assert keyword.lower() == u'background' - >>> assert ascii_lower(keyword) != keyword.lower() - >>> assert ascii_lower(keyword) == u'bac\N{KELVIN SIGN}ground' - - """ - # This turns out to be faster than unicode.translate() - return string.encode('utf8').lower().decode('utf8') - - -def lookup(label): - """ - Look for an encoding by its label. - This is the spec’s `get an encoding - `_ algorithm. - Supported labels are listed there. - - :param label: A string. - :returns: - An :class:`Encoding` object, or :obj:`None` for an unknown label. - - """ - # Only strip ASCII whitespace: U+0009, U+000A, U+000C, U+000D, and U+0020. 
- label = ascii_lower(label.strip('\t\n\f\r ')) - name = LABELS.get(label) - if name is None: - return None - encoding = CACHE.get(name) - if encoding is None: - if name == 'x-user-defined': - from .x_user_defined import codec_info - else: - python_name = PYTHON_NAMES.get(name, name) - # Any python_name value that gets to here should be valid. - codec_info = codecs.lookup(python_name) - encoding = Encoding(name, codec_info) - CACHE[name] = encoding - return encoding - - -def _get_encoding(encoding_or_label): - """ - Accept either an encoding object or label. - - :param encoding: An :class:`Encoding` object or a label string. - :returns: An :class:`Encoding` object. - :raises: :exc:`~exceptions.LookupError` for an unknown label. - - """ - if hasattr(encoding_or_label, 'codec_info'): - return encoding_or_label - - encoding = lookup(encoding_or_label) - if encoding is None: - raise LookupError('Unknown encoding label: %r' % encoding_or_label) - return encoding - - -class Encoding(object): - """Reresents a character encoding such as UTF-8, - that can be used for decoding or encoding. - - .. attribute:: name - - Canonical name of the encoding - - .. attribute:: codec_info - - The actual implementation of the encoding, - a stdlib :class:`~codecs.CodecInfo` object. - See :func:`codecs.register`. - - """ - def __init__(self, name, codec_info): - self.name = name - self.codec_info = codec_info - - def __repr__(self): - return '' % self.name - - -#: The UTF-8 encoding. Should be used for new content and formats. -UTF8 = lookup('utf-8') - -_UTF16LE = lookup('utf-16le') -_UTF16BE = lookup('utf-16be') - - -def decode(input, fallback_encoding, errors='replace'): - """ - Decode a single string. - - :param input: A byte string - :param fallback_encoding: - An :class:`Encoding` object or a label string. - The encoding to use if :obj:`input` does note have a BOM. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - :return: - A ``(output, encoding)`` tuple of an Unicode string - and an :obj:`Encoding`. - - """ - # Fail early if `encoding` is an invalid label. - fallback_encoding = _get_encoding(fallback_encoding) - bom_encoding, input = _detect_bom(input) - encoding = bom_encoding or fallback_encoding - return encoding.codec_info.decode(input, errors)[0], encoding - - -def _detect_bom(input): - """Return (bom_encoding, input), with any BOM removed from the input.""" - if input.startswith(b'\xFF\xFE'): - return _UTF16LE, input[2:] - if input.startswith(b'\xFE\xFF'): - return _UTF16BE, input[2:] - if input.startswith(b'\xEF\xBB\xBF'): - return UTF8, input[3:] - return None, input - - -def encode(input, encoding=UTF8, errors='strict'): - """ - Encode a single string. - - :param input: An Unicode string. - :param encoding: An :class:`Encoding` object or a label string. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - :return: A byte string. - - """ - return _get_encoding(encoding).codec_info.encode(input, errors)[0] - - -def iter_decode(input, fallback_encoding, errors='replace'): - """ - "Pull"-based decoder. - - :param input: - An iterable of byte strings. - - The input is first consumed just enough to determine the encoding - based on the precense of a BOM, - then consumed on demand when the return value is. - :param fallback_encoding: - An :class:`Encoding` object or a label string. 
- The encoding to use if :obj:`input` does note have a BOM. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - :returns: - An ``(output, encoding)`` tuple. - :obj:`output` is an iterable of Unicode strings, - :obj:`encoding` is the :obj:`Encoding` that is being used. - - """ - - decoder = IncrementalDecoder(fallback_encoding, errors) - generator = _iter_decode_generator(input, decoder) - encoding = next(generator) - return generator, encoding - - -def _iter_decode_generator(input, decoder): - """Return a generator that first yields the :obj:`Encoding`, - then yields output chukns as Unicode strings. - - """ - decode = decoder.decode - input = iter(input) - for chunck in input: - output = decode(chunck) - if output: - assert decoder.encoding is not None - yield decoder.encoding - yield output - break - else: - # Input exhausted without determining the encoding - output = decode(b'', final=True) - assert decoder.encoding is not None - yield decoder.encoding - if output: - yield output - return - - for chunck in input: - output = decode(chunck) - if output: - yield output - output = decode(b'', final=True) - if output: - yield output - - -def iter_encode(input, encoding=UTF8, errors='strict'): - """ - “Pull”-based encoder. - - :param input: An iterable of Unicode strings. - :param encoding: An :class:`Encoding` object or a label string. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - :returns: An iterable of byte strings. - - """ - # Fail early if `encoding` is an invalid label. - encode = IncrementalEncoder(encoding, errors).encode - return _iter_encode_generator(input, encode) - - -def _iter_encode_generator(input, encode): - for chunck in input: - output = encode(chunck) - if output: - yield output - output = encode('', final=True) - if output: - yield output - - -class IncrementalDecoder(object): - """ - “Push”-based decoder. - - :param fallback_encoding: - An :class:`Encoding` object or a label string. - The encoding to use if :obj:`input` does note have a BOM. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - - """ - def __init__(self, fallback_encoding, errors='replace'): - # Fail early if `encoding` is an invalid label. - self._fallback_encoding = _get_encoding(fallback_encoding) - self._errors = errors - self._buffer = b'' - self._decoder = None - #: The actual :class:`Encoding` that is being used, - #: or :obj:`None` if that is not determined yet. - #: (Ie. if there is not enough input yet to determine - #: if there is a BOM.) - self.encoding = None # Not known yet. - - def decode(self, input, final=False): - """Decode one chunk of the input. - - :param input: A byte string. - :param final: - Indicate that no more input is available. - Must be :obj:`True` if this is the last call. - :returns: An Unicode string. - - """ - decoder = self._decoder - if decoder is not None: - return decoder(input, final) - - input = self._buffer + input - encoding, input = _detect_bom(input) - if encoding is None: - if len(input) < 3 and not final: # Not enough data yet. 
- self._buffer = input - return '' - else: # No BOM - encoding = self._fallback_encoding - decoder = encoding.codec_info.incrementaldecoder(self._errors).decode - self._decoder = decoder - self.encoding = encoding - return decoder(input, final) - - -class IncrementalEncoder(object): - """ - “Push”-based encoder. - - :param encoding: An :class:`Encoding` object or a label string. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - - .. method:: encode(input, final=False) - - :param input: An Unicode string. - :param final: - Indicate that no more input is available. - Must be :obj:`True` if this is the last call. - :returns: A byte string. - - """ - def __init__(self, encoding=UTF8, errors='strict'): - encoding = _get_encoding(encoding) - self.encode = encoding.codec_info.incrementalencoder(errors).encode diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/tutorials/README.md b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/tutorials/README.md deleted file mode 100644 index 1ca9c94d042ef838143a45490fe6b4556c19f3c9..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/tutorials/README.md +++ /dev/null @@ -1,4 +0,0 @@ -# Read the docs: - -The latest documentation built from this directory is available at [detectron2.readthedocs.io](https://detectron2.readthedocs.io/). -Documents in this directory are not meant to be read on github. diff --git a/spaces/Tetel/chat/EdgeGPT/__init__.py b/spaces/Tetel/chat/EdgeGPT/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Thaweewat/ControlNet-Architecture/util.py b/spaces/Thaweewat/ControlNet-Architecture/util.py deleted file mode 100644 index 4632173761308f7701c68184b152f700b10edb8a..0000000000000000000000000000000000000000 --- a/spaces/Thaweewat/ControlNet-Architecture/util.py +++ /dev/null @@ -1,37 +0,0 @@ -import numpy as np -import cv2 - - -def HWC3(x): - assert x.dtype == np.uint8 - if x.ndim == 2: - x = x[:, :, None] - assert x.ndim == 3 - H, W, C = x.shape - assert C == 1 or C == 3 or C == 4 - if C == 3: - return x - if C == 1: - return np.concatenate([x, x, x], axis=2) - if C == 4: - color = x[:, :, 0:3].astype(np.float32) - alpha = x[:, :, 3:4].astype(np.float32) / 255.0 - y = color * alpha + 255.0 * (1.0 - alpha) - y = y.clip(0, 255).astype(np.uint8) - return y - - -def resize_image(input_image, resolution): - H, W, C = input_image.shape - H = float(H) - W = float(W) - k = float(resolution) / min(H, W) - H *= k - W *= k - H = int(np.round(H / 64.0)) * 64 - W = int(np.round(W / 64.0)) * 64 - img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA) - return img - -def apply_canny(img, low_threshold, high_threshold): - return cv2.Canny(img, low_threshold, high_threshold) diff --git a/spaces/Truym/rvc-pendu/vc_infer_pipeline.py b/spaces/Truym/rvc-pendu/vc_infer_pipeline.py deleted file mode 100644 index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000 --- a/spaces/Truym/rvc-pendu/vc_infer_pipeline.py +++ /dev/null @@ -1,306 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -from config import x_pad, x_query, x_center, x_max -import scipy.signal as signal -import pyworld, os, traceback, faiss -from scipy import 
signal - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - - -class VC(object): - def __init__(self, tgt_sr, device, is_half): - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * x_query # 查询切点前后查询时间 - self.t_center = self.sr * x_center # 查询切点位置 - self.t_max = self.sr * x_max # 免查询时长阈值 - self.device = device - self.is_half = is_half - - def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None): - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9, # layer 9 - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - _, I = index.search(npy, 1) - npy = big_npy[I.squeeze()] - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = 
feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - f0_file=None, - ): - if ( - file_big_npy != "" - and file_index != "" - and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - big_npy = np.load(file_big_npy) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - print("Feature retrieval library doesn't exist or ratio is 0") - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - 
)[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/UmairMirza/Face-Attendance/app.py b/spaces/UmairMirza/Face-Attendance/app.py deleted file mode 100644 index 41c2f2b28a4316000de06256092fc5ecdf7a55ae..0000000000000000000000000000000000000000 --- a/spaces/UmairMirza/Face-Attendance/app.py +++ /dev/null @@ -1,101 +0,0 @@ -import cv2 -import numpy as np -import face_recognition -import os -from datetime import datetime -import gradio as gr - - - - -def faceEncodings(images): - encodeList = [] - for img in images: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - encode = face_recognition.face_encodings(img)[0] - encodeList.append(encode) - return encodeList - -def Attandance(text,video,image): - data=cv2.VideoCapture(video) - totalframescount = data.get(cv2.CAP_PROP_FRAME_COUNT) - framecount=0 - names=[] - path = text - images = [] - personNames = [] - myList = os.listdir(path) - unkownEncodings=[] - - print(myList) - for cu_img in myList: - current_Img = cv2.imread(f'{path}/{cu_img}') - images.append(current_Img) - personNames.append(os.path.splitext(cu_img)[0]) - print(personNames) - encodeListKnown = faceEncodings(images) - print('All Encodings Complete!!!') - if video is not None: - cap = cv2.VideoCapture(video) - index=1 - while True: - try: - if framecount>totalframescount: - break - elif framecount%15==0: - - ret, frame = cap.read() - #faces = cv2.resize(frame, (0, 0), None, 0.25, 0.25) - faces = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - - facesCurrentFrame = face_recognition.face_locations(faces) - encodesCurrentFrame = face_recognition.face_encodings(faces, facesCurrentFrame) - - for encodeFace, faceLoc in zip(encodesCurrentFrame, facesCurrentFrame): - matches = face_recognition.compare_faces(encodeListKnown, encodeFace) - faceDis = face_recognition.face_distance(encodeListKnown, encodeFace) - # print(faceDis) - matchIndex = np.argmin(faceDis) - - if matches[matchIndex]: - name = personNames[matchIndex].upper() - if names.count(name) == 0: - names.append(name) - framecount=framecount+1 - - cv2.waitKey(1) - except: - break - return ' '.join(names) - else: - try: - #faces = cv2.resize(frame, (0, 0), None, 0.25, 0.25) - faces = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - - facesCurrentFrame = face_recognition.face_locations(faces) - encodesCurrentFrame = face_recognition.face_encodings(faces, facesCurrentFrame) - - for encodeFace, faceLoc in zip(encodesCurrentFrame, facesCurrentFrame): - matches = face_recognition.compare_faces(encodeListKnown, encodeFace) - faceDis = face_recognition.face_distance(encodeListKnown, encodeFace) - # print(faceDis) - matchIndex = np.argmin(faceDis) - - if matches[matchIndex]: - name = personNames[matchIndex].upper() - if names.count(name) == 0: - names.append(name) - - cv2.waitKey(1) - except: - pass - return ' '.join(names) - -demo=gr.Interface(fn=Attandance, - inputs=["text","video","image"], - outputs="text", - title="Face Attendance", - -) -demo.launch(debug=True) - diff --git a/spaces/Valerina128503/U_1/README.md b/spaces/Valerina128503/U_1/README.md deleted file mode 100644 index dcd60ef47fae82555d34b9d4b9564d5b3737590d..0000000000000000000000000000000000000000 --- a/spaces/Valerina128503/U_1/README.md +++ /dev/null @@ -1,10 +0,0 
@@ ---- -title: U 1 -emoji: 📉 -colorFrom: gray -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Xixeo/Text-to-Music/constants.py b/spaces/Xixeo/Text-to-Music/constants.py deleted file mode 100644 index 62633e107d6ff9e39e65843c9ac805dcb194a965..0000000000000000000000000000000000000000 --- a/spaces/Xixeo/Text-to-Music/constants.py +++ /dev/null @@ -1,7 +0,0 @@ -import numpy as np - -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road trip,celebration,electro,disco house,electronic' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) -MUBERT_LICENSE = "ttmmubertlicense#f0acYBenRcfeFpNT4wpYGaTQIyDI4mJGv5MfIhBFz97NXDwDNFHmMRsBSzmGsJwbTpP1A6i07AXcIeAHo5" -MUBERT_MODE = "loop" -MUBERT_TOKEN = "4951f6428e83172a4f39de05d5b3ab10d58560b8" diff --git a/spaces/XzJosh/Carol-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/Carol-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Carol-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. 
- # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/XzJosh/Taffy-Bert-VITS2/commons.py b/spaces/XzJosh/Taffy-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Taffy-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def 
subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. 
/ norm_type) - return total_norm diff --git a/spaces/YONG627/456123/yolov5-code-main/models/__init__.py b/spaces/YONG627/456123/yolov5-code-main/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-Detection/fcos_R_50_FPN_1x.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-Detection/fcos_R_50_FPN_1x.py deleted file mode 100644 index 86f83c68786f5995c462ade5f3067072d69f047e..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-Detection/fcos_R_50_FPN_1x.py +++ /dev/null @@ -1,11 +0,0 @@ -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco import dataloader -from ..common.models.fcos import model -from ..common.train import train - -dataloader.train.mapper.use_instance_mask = False -optimizer.lr = 0.01 - -model.backbone.bottom_up.freeze_at = 2 -train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl" diff --git a/spaces/YuanMio/vits-uma-genshin-honkai/README.md b/spaces/YuanMio/vits-uma-genshin-honkai/README.md deleted file mode 100644 index 1c0aa069bfd980b6b45bb2bf62ff74bd9b0b61c2..0000000000000000000000000000000000000000 --- a/spaces/YuanMio/vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: ikechan8370/vits-uma-genshin-honkai ---- diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/modules/__init__.py b/spaces/Yudha515/Rvc-Models/audiocraft/modules/__init__.py deleted file mode 100644 index 81ba30f6466ff91b90490a4fb92f7d3d0d00144d..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/audiocraft/modules/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# flake8: noqa -from .conv import ( - NormConv1d, - NormConv2d, - NormConvTranspose1d, - NormConvTranspose2d, - StreamableConv1d, - StreamableConvTranspose1d, - pad_for_conv1d, - pad1d, - unpad1d, -) -from .lstm import StreamableLSTM -from .seanet import SEANetEncoder, SEANetDecoder diff --git a/spaces/ZilliaxOfficial/nyaru-svc-3.0/spec_gen.py b/spaces/ZilliaxOfficial/nyaru-svc-3.0/spec_gen.py deleted file mode 100644 index 85ad3188ac93aaef7b1b1d7dbbe47d358f4b0da6..0000000000000000000000000000000000000000 --- a/spaces/ZilliaxOfficial/nyaru-svc-3.0/spec_gen.py +++ /dev/null @@ -1,22 +0,0 @@ -from data_utils import TextAudioSpeakerLoader, EvalDataLoader -import json -from tqdm import tqdm - -from utils import HParams - -config_path = 'configs/config.json' -with open(config_path, "r") as f: - data = f.read() -config = json.loads(data) -hps = HParams(**config) - -train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps) -test_dataset = TextAudioSpeakerLoader("filelists/test.txt", hps) -eval_dataset = TextAudioSpeakerLoader("filelists/val.txt", hps) - -for _ in tqdm(train_dataset): - pass -for _ in tqdm(eval_dataset): - pass -for _ in tqdm(test_dataset): - pass \ No newline at end of file diff --git a/spaces/a-v-bely/spanish-task-generator/utilities/utils.py b/spaces/a-v-bely/spanish-task-generator/utilities/utils.py deleted file mode 100644 index 58fb296ef9f4e5bc343cf6f81e52784fc89cf375..0000000000000000000000000000000000000000 --- a/spaces/a-v-bely/spanish-task-generator/utilities/utils.py +++ /dev/null @@ -1,29 +0,0 @@ -import uuid - - -def points_to_mark(good, total): - percents = good / total * 100 - if percents < 50: - return 2 - elif percents < 66: - return 3 - elif percents < 90: - return 4 - else: - return 5 - - -def answer_letter(answer, variants): - answer = answer.lower() - for var in variants: - letter, var = var.split(') ') - if var == answer: - return letter + ') ' + answer - - -def is_valid_uuid(value): - try: - uuid.UUID(str(value)) - return True - except ValueError: - return False diff --git a/spaces/abdvl/datahub_qa_bot/docs/dev-guides/timeline.md b/spaces/abdvl/datahub_qa_bot/docs/dev-guides/timeline.md deleted file mode 100644 index a45caa013698b29851a8790d902aa7acc333d043..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/dev-guides/timeline.md +++ /dev/null @@ -1,241 +0,0 @@ ---- -title: "Timeline API" ---- - -The Timeline API supports viewing version history of schemas, documentation, tags, glossary terms, and other updates -to entities. At present, the API only supports Datasets and Glossary Terms. - -## Compatibility - -The Timeline API is available in server versions `0.8.28` and higher. The `cli` timeline command is available in [pypi](https://pypi.org/project/acryl-datahub/) versions `0.8.27.1` onwards. - -# Concepts - -## Entity Timeline Conceptually -For the visually inclined, here is a conceptual diagram that illustrates how to think about the entity timeline with categorical changes overlaid on it. - -![../imgs/timeline/timeline-conceptually.png](../imgs/timeline/timeline-conceptually.png) - -## Change Event -Each modification is modeled as a -[ChangeEvent](../../metadata-io/src/main/java/com/linkedin/metadata/timeline/data/ChangeEvent.java) -which are grouped under [ChangeTransactions](../../metadata-io/src/main/java/com/linkedin/metadata/timeline/data/ChangeTransaction.java) -based on timestamp. 
A `ChangeEvent` consists of: - -- `changeType`: An operational type for the change, either `ADD`, `MODIFY`, or `REMOVE` -- `semVerChange`: A [semver](https://semver.org/) change type based on the compatibility of the change. This gets utilized in the computation of the transaction level version. Options are `NONE`, `PATCH`, `MINOR`, `MAJOR`, and `EXCEPTIONAL` for cases where an exception occurred during processing, but we do not fail the entire change calculation -- `target`: The high level target of the change. This is usually an `urn`, but can differ depending on the type of change. -- `category`: The category a change falls under, specific aspects are mapped to each category depending on the entity -- `elementId`: Optional, the ID of the element being applied to the target -- `description`: A human readable description of the change produced by the `Differ` type computing the diff -- `changeDetails`: A loose property map of additional details about the change - -### Change Event Examples -- A tag was applied to a *field* of a dataset through the UI: - - `changeType`: `ADD` - - `target`: `urn:li:schemaField:(urn:li:dataset:(urn:li:dataPlatform:,,),)` -> The field the tag is being added to - - `category`: `TAG` - - `elementId`: `urn:li:tag:` -> The ID of the tag being added - - `semVerChange`: `MINOR` -- A tag was added directly at the top-level to a dataset through the UI: - - `changeType`: `ADD` - - `target`: `urn:li:dataset:(urn:li:dataPlatform:,,)` -> The dataset the tag is being added to - - `category`: `TAG` - - `elementId`: `urn:li:tag:` -> The ID of the tag being added - - `semVerChange`: `MINOR` - -Note the `target` and `elementId` fields in the examples above to familiarize yourself with the semantics. - -## Change Transaction -Each `ChangeTransaction` is assigned a computed semantic version based on the `ChangeEvents` that occurred within it, -starting at `0.0.0` and updating based on whether the most significant change in the transaction is a `MAJOR`, `MINOR`, or -`PATCH` change. The logic for what changes constitute a Major, Minor or Patch change are encoded in the category specific `Differ` implementation. -For example, the [SchemaMetadataDiffer](../../metadata-io/src/main/java/com/linkedin/metadata/timeline/eventgenerator/SchemaMetadataChangeEventGenerator.java) has baked-in logic for determining what level of semantic change an event is based on backwards and forwards incompatibility. Read on to learn about the different categories of changes, and how semantic changes are interpreted in each. - -# Categories -ChangeTransactions contain a `category` that represents a kind of change that happened. The `Timeline API` allows the caller to specify which categories of changes they are interested in. Categories allow us to abstract away the low-level technical change that happened in the metadata (e.g. the `schemaMetadata` aspect changed) to a high-level semantic change that happened in the metadata (e.g. the `Technical Schema` of the dataset changed). Read on to learn about the different categories that are supported today. - -The Dataset entity currently supports the following categories: - -## Technical Schema - -- Any structural changes in the technical schema of the dataset, such as adding, dropping, renaming columns. -- Driven by the `schemaMetadata` aspect. -- Changes are marked with the appropriate semantic version marker based on well-understood rules for backwards and forwards compatibility. 
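-
-For orientation, the REST endpoint that the example scripts below exercise can also be called directly. This is a minimal sketch, assuming a locally running DataHub server at `localhost:8080` (as in the console output below); it simply reuses the URL-encoded demo dataset URN and the epoch-millisecond `start`/`end` values printed in the first example:
-
-```console
-% curl "http://localhost:8080/openapi/timeline/v1/urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Ahive%2CtestTimelineDataset%2CPROD%29?categories=TECHNICAL_SCHEMA&start=1644874316591&end=2682397800000"
-```
-
-The response should be the raw change transactions (JSON) behind the formatted output shown in the Example Usage sections.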
- -**_NOTE_**: Changes in field descriptions are not communicated via this category, use the Documentation category for that. - -### Example Usage - -We have provided some example scripts that demonstrate making changes to an aspect within each category and use then use the Timeline API to query the result. -All examples can be found in [smoke-test/test_resources/timeline](../../smoke-test/test_resources/timeline) and should be executed from that directory. -```console -% ./test_timeline_schema.sh -[2022-02-24 15:31:52,617] INFO {datahub.cli.delete_cli:130} - DataHub configured with http://localhost:8080 -Successfully deleted urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD). 6 rows deleted -Took 1.077 seconds to hard delete 6 rows for 1 entities -Update succeeded with status 200 -Update succeeded with status 200 -Update succeeded with status 200 -http://localhost:8080/openapi/timeline/v1/urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Ahive%2CtestTimelineDataset%2CPROD%29?categories=TECHNICAL_SCHEMA&start=1644874316591&end=2682397800000 -2022-02-24 15:31:53 - 0.0.0-computed - ADD TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:property_id): A forwards & backwards compatible change due to the newly added field 'property_id'. - ADD TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service): A forwards & backwards compatible change due to the newly added field 'service'. - ADD TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service.type): A forwards & backwards compatible change due to the newly added field 'service.type'. - ADD TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service.provider): A forwards & backwards compatible change due to the newly added field 'service.provider'. - ADD TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service.provider.name): A forwards & backwards compatible change due to the newly added field 'service.provider.name'. - ADD TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service.provider.id): A forwards & backwards compatible change due to the newly added field 'service.provider.id'. -2022-02-24 15:31:55 - 1.0.0-computed - MODIFY TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service.provider.name): A backwards incompatible change due to native datatype of the field 'service.provider.id' changed from 'varchar(50)' to 'tinyint'. - MODIFY TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service.provider.id): A forwards compatible change due to field name changed from 'service.provider.id' to 'service.provider.id2' -2022-02-24 15:31:55 - 2.0.0-computed - MODIFY TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service.provider.id): A backwards incompatible change due to native datatype of the field 'service.provider.name' changed from 'tinyint' to 'varchar(50)'. - MODIFY TECHNICAL_SCHEMA dataset:hive:testTimelineDataset (field:service.provider.id2): A forwards compatible change due to field name changed from 'service.provider.id2' to 'service.provider.id' -``` - -## Ownership - -- Any changes in ownership of the dataset, adding an owner, or changing the type of the owner. -- Driven by the `ownership` aspect. -- All changes are currently marked as `MINOR`. - -### Example Usage - -We have provided some example scripts that demonstrate making changes to an aspect within each category and use then use the Timeline API to query the result. 
-All examples can be found in [smoke-test/test_resources/timeline](../../smoke-test/test_resources/timeline) and should be executed from that directory. -```console -% ./test_timeline_ownership.sh -[2022-02-24 15:40:25,367] INFO {datahub.cli.delete_cli:130} - DataHub configured with http://localhost:8080 -Successfully deleted urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD). 6 rows deleted -Took 1.087 seconds to hard delete 6 rows for 1 entities -Update succeeded with status 200 -Update succeeded with status 200 -Update succeeded with status 200 -http://localhost:8080/openapi/timeline/v1/urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Ahive%2CtestTimelineDataset%2CPROD%29?categories=OWNERSHIP&start=1644874829027&end=2682397800000 -2022-02-24 15:40:26 - 0.0.0-computed - ADD OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:datahub): A new owner 'datahub' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. - ADD OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:jdoe): A new owner 'jdoe' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. -2022-02-24 15:40:27 - 0.1.0-computed - REMOVE OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:datahub): Owner 'datahub' of the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed. -2022-02-24 15:40:28 - 0.2.0-computed - ADD OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:datahub): A new owner 'datahub' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. - REMOVE OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:jdoe): Owner 'jdoe' of the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed. -Update succeeded with status 200 -Update succeeded with status 200 -Update succeeded with status 200 -http://localhost:8080/openapi/timeline/v1/urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Ahive%2CtestTimelineDataset%2CPROD%29?categories=OWNERSHIP&start=1644874831456&end=2682397800000 -2022-02-24 15:40:26 - 0.0.0-computed - ADD OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:datahub): A new owner 'datahub' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. - ADD OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:jdoe): A new owner 'jdoe' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. -2022-02-24 15:40:27 - 0.1.0-computed - REMOVE OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:datahub): Owner 'datahub' of the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed. -2022-02-24 15:40:28 - 0.2.0-computed - ADD OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:datahub): A new owner 'datahub' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. - REMOVE OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:jdoe): Owner 'jdoe' of the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed. -2022-02-24 15:40:29 - 0.2.0-computed -2022-02-24 15:40:30 - 0.3.0-computed - ADD OWNERSHIP dataset:hive:testTimelineDataset (urn:li:corpuser:jdoe): A new owner 'jdoe' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. 
-2022-02-24 15:40:30 - 0.4.0-computed - MODIFY OWNERSHIP urn:li:corpuser:jdoe (DEVELOPER): The ownership type of the owner 'jdoe' changed from 'DATAOWNER' to 'DEVELOPER'. -``` - -## Tags - -- Any changes in tags applied to the dataset or to fields of the dataset. -- Driven by the `schemaMetadata`, `editableSchemaMetadata` and `globalTags` aspects. -- All changes are currently marked as `MINOR`. - -### Example Usage - -We have provided some example scripts that demonstrate making changes to an aspect within each category and use then use the Timeline API to query the result. -All examples can be found in [smoke-test/test_resources/timeline](../../smoke-test/test_resources/timeline) and should be executed from that directory. -```console -% ./test_timeline_tags.sh -[2022-02-24 15:44:04,279] INFO {datahub.cli.delete_cli:130} - DataHub configured with http://localhost:8080 -Successfully deleted urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD). 9 rows deleted -Took 0.626 seconds to hard delete 9 rows for 1 entities -Update succeeded with status 200 -Update succeeded with status 200 -Update succeeded with status 200 -http://localhost:8080/openapi/timeline/v1/urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Ahive%2CtestTimelineDataset%2CPROD%29?categories=TAG&start=1644875047911&end=2682397800000 -2022-02-24 15:44:05 - 0.0.0-computed - ADD TAG dataset:hive:testTimelineDataset (urn:li:tag:Legacy): A new tag 'Legacy' for the entity 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. -2022-02-24 15:44:06 - 0.1.0-computed - ADD TAG dataset:hive:testTimelineDataset (urn:li:tag:NeedsDocumentation): A new tag 'NeedsDocumentation' for the entity 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. -2022-02-24 15:44:07 - 0.2.0-computed - REMOVE TAG dataset:hive:testTimelineDataset (urn:li:tag:Legacy): Tag 'Legacy' of the entity 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed. - REMOVE TAG dataset:hive:testTimelineDataset (urn:li:tag:NeedsDocumentation): Tag 'NeedsDocumentation' of the entity 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed. -``` - -## Documentation - -- Any changes to documentation at the dataset level or at the field level. -- Driven by the `datasetProperties`, `institutionalMemory`, `schemaMetadata` and `editableSchemaMetadata`. -- Addition or removal of documentation or links is marked as `MINOR` while edits to existing documentation are marked as `PATCH` changes. - -### Example Usage - -We have provided some example scripts that demonstrate making changes to an aspect within each category and use then use the Timeline API to query the result. -All examples can be found in [smoke-test/test_resources/timeline](../../smoke-test/test_resources/timeline) and should be executed from that directory. -```console -% ./test_timeline_documentation.sh -[2022-02-24 15:45:53,950] INFO {datahub.cli.delete_cli:130} - DataHub configured with http://localhost:8080 -Successfully deleted urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD). 
6 rows deleted -Took 0.578 seconds to hard delete 6 rows for 1 entities -Update succeeded with status 200 -Update succeeded with status 200 -Update succeeded with status 200 -http://localhost:8080/openapi/timeline/v1/urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Ahive%2CtestTimelineDataset%2CPROD%29?categories=DOCUMENTATION&start=1644875157616&end=2682397800000 -2022-02-24 15:45:55 - 0.0.0-computed - ADD DOCUMENTATION dataset:hive:testTimelineDataset (https://www.linkedin.com): The institutionalMemory 'https://www.linkedin.com' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. -2022-02-24 15:45:56 - 0.1.0-computed - ADD DOCUMENTATION dataset:hive:testTimelineDataset (https://www.google.com): The institutionalMemory 'https://www.google.com' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. -2022-02-24 15:45:56 - 0.2.0-computed - ADD DOCUMENTATION dataset:hive:testTimelineDataset (https://datahubproject.io/docs): The institutionalMemory 'https://datahubproject.io/docs' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. - ADD DOCUMENTATION dataset:hive:testTimelineDataset (https://datahubproject.io/docs): The institutionalMemory 'https://datahubproject.io/docs' for the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. - REMOVE DOCUMENTATION dataset:hive:testTimelineDataset (https://www.linkedin.com): The institutionalMemory 'https://www.linkedin.com' of the dataset 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed. -``` - -## Glossary Terms - -- Any changes to applied glossary terms to the dataset or to fields in the dataset. -- Driven by the `schemaMetadata`, `editableSchemaMetadata`, `glossaryTerms` aspects. -- All changes are currently marked as `MINOR`. - -### Example Usage - -We have provided some example scripts that demonstrate making changes to an aspect within each category and use then use the Timeline API to query the result. -All examples can be found in [smoke-test/test_resources/timeline](../../smoke-test/test_resources/timeline) and should be executed from that directory. -```console -% ./test_timeline_glossary.sh -[2022-02-24 15:44:56,152] INFO {datahub.cli.delete_cli:130} - DataHub configured with http://localhost:8080 -Successfully deleted urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD). 6 rows deleted -Took 0.443 seconds to hard delete 6 rows for 1 entities -Update succeeded with status 200 -Update succeeded with status 200 -Update succeeded with status 200 -http://localhost:8080/openapi/timeline/v1/urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Ahive%2CtestTimelineDataset%2CPROD%29?categories=GLOSSARY_TERM&start=1644875100605&end=2682397800000 -1969-12-31 18:00:00 - 0.0.0-computed - None None : java.lang.NullPointerException:null -2022-02-24 15:44:58 - 0.1.0-computed - ADD GLOSSARY_TERM dataset:hive:testTimelineDataset (urn:li:glossaryTerm:SavingsAccount): The GlossaryTerm 'SavingsAccount' for the entity 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been added. -2022-02-24 15:44:59 - 0.2.0-computed - REMOVE GLOSSARY_TERM dataset:hive:testTimelineDataset (urn:li:glossaryTerm:CustomerAccount): The GlossaryTerm 'CustomerAccount' for the entity 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed. 
- REMOVE GLOSSARY_TERM dataset:hive:testTimelineDataset (urn:li:glossaryTerm:SavingsAccount): The GlossaryTerm 'SavingsAccount' for the entity 'urn:li:dataset:(urn:li:dataPlatform:hive,testTimelineDataset,PROD)' has been removed.
-```
-
-# Explore the API
-
-The API is browsable via the UI through the dropdown.
-Here are a few screenshots showing how to navigate to it. You can try out the API and send example requests.
-![../imgs/timeline/dropdown-apis.png](../imgs/timeline/dropdown-apis.png)
-![../imgs/timeline/swagger-ui.png](../imgs/timeline/swagger-ui.png)
-
-# Future Work
-
-- Supporting versions as start and end parameters as part of the call to the timeline API
-- Supporting entities beyond Datasets
-- Adding GraphQL API support
-- Supporting materialization of computed versions for entity categories (compared to the current read-time version computation)
-- Support in the UI to visualize the timeline in various places (e.g. schema history, etc.)
-
diff --git a/spaces/abhishek/first-order-motion-model/LICENSE.md b/spaces/abhishek/first-order-motion-model/LICENSE.md
deleted file mode 100644
index 93e69b7a6a1b9c94ee30ac3eaf90af64d003f11e..0000000000000000000000000000000000000000
--- a/spaces/abhishek/first-order-motion-model/LICENSE.md
+++ /dev/null
@@ -1,185 +0,0 @@
-## creative commons
-
-# Attribution-NonCommercial 4.0 International
-
-Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
-
-### Using Creative Commons Public Licenses
-
-Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
-
-* __Considerations for licensors:__ Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. [More considerations for licensors](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensors).
-
-* __Considerations for the public:__ By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. 
If the licensor’s permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. [More considerations for the public](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensees). - -## Creative Commons Attribution-NonCommercial 4.0 International Public License - -By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. - -### Section 1 – Definitions. - -a. __Adapted Material__ means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. - -b. __Adapter's License__ means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. - -c. __Copyright and Similar Rights__ means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. - -d. __Effective Technological Measures__ means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. - -e. __Exceptions and Limitations__ means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. - -f. __Licensed Material__ means the artistic or literary work, database, or other material to which the Licensor applied this Public License. - -g. __Licensed Rights__ means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. - -h. 
__Licensor__ means the individual(s) or entity(ies) granting rights under this Public License. - -i. __NonCommercial__ means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange. - -j. __Share__ means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. - -k. __Sui Generis Database Rights__ means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. - -l. __You__ means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. - -### Section 2 – Scope. - -a. ___License grant.___ - - 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: - - A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and - - B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only. - - 2. __Exceptions and Limitations.__ For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. - - 3. __Term.__ The term of this Public License is specified in Section 6(a). - - 4. __Media and formats; technical modifications allowed.__ The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material. - - 5. __Downstream recipients.__ - - A. __Offer from the Licensor – Licensed Material.__ Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. - - B. __No downstream restrictions.__ You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. - - 6. 
__No endorsement.__ Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). - -b. ___Other rights.___ - - 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. - - 2. Patent and trademark rights are not licensed under this Public License. - - 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes. - -### Section 3 – License Conditions. - -Your exercise of the Licensed Rights is expressly made subject to the following conditions. - -a. ___Attribution.___ - - 1. If You Share the Licensed Material (including in modified form), You must: - - A. retain the following if it is supplied by the Licensor with the Licensed Material: - - i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); - - ii. a copyright notice; - - iii. a notice that refers to this Public License; - - iv. a notice that refers to the disclaimer of warranties; - - v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; - - B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and - - C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. - - 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. - - 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. - - 4. If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License. - -### Section 4 – Sui Generis Database Rights. - -Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: - -a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only; - -b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and - -c. 
You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. - -For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. - -### Section 5 – Disclaimer of Warranties and Limitation of Liability. - -a. __Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.__ - -b. __To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.__ - -c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. - -### Section 6 – Term and Termination. - -a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. - -b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: - - 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or - - 2. upon express reinstatement by the Licensor. - - For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. - -c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. - -d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. - -### Section 7 – Other Terms and Conditions. - -a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. - -b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. - -### Section 8 – Interpretation. - -a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. - -b. 
To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. - -c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. - -d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. - -> Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at [creativecommons.org/policies](http://creativecommons.org/policies), Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses. -> -> Creative Commons may be contacted at creativecommons.org - ---------------------------- LICENSE FOR Synchronized-BatchNorm-PyTorch -------------------------------- - -MIT License - -Copyright (c) 2018 Jiayuan MAO - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/video/io.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/video/io.py deleted file mode 100644 index 9879154227f640c262853b92c219461c6f67ee8e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/video/io.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -from collections import OrderedDict - -import cv2 -from cv2 import (CAP_PROP_FOURCC, CAP_PROP_FPS, CAP_PROP_FRAME_COUNT, - CAP_PROP_FRAME_HEIGHT, CAP_PROP_FRAME_WIDTH, - CAP_PROP_POS_FRAMES, VideoWriter_fourcc) - -from annotator.uniformer.mmcv.utils import (check_file_exist, mkdir_or_exist, scandir, - track_progress) - - -class Cache: - - def __init__(self, capacity): - self._cache = OrderedDict() - self._capacity = int(capacity) - if capacity <= 0: - raise ValueError('capacity must be a positive integer') - - @property - def capacity(self): - return self._capacity - - @property - def size(self): - return len(self._cache) - - def put(self, key, val): - if key in self._cache: - return - if len(self._cache) >= self.capacity: - self._cache.popitem(last=False) - self._cache[key] = val - - def get(self, key, default=None): - val = self._cache[key] if key in self._cache else default - return val - - -class VideoReader: - """Video class with similar usage to a list object. - - This video warpper class provides convenient apis to access frames. - There exists an issue of OpenCV's VideoCapture class that jumping to a - certain frame may be inaccurate. It is fixed in this class by checking - the position after jumping each time. - Cache is used when decoding videos. So if the same frame is visited for - the second time, there is no need to decode again if it is stored in the - cache. - - :Example: - - >>> import annotator.uniformer.mmcv as mmcv - >>> v = mmcv.VideoReader('sample.mp4') - >>> len(v) # get the total frame number with `len()` - 120 - >>> for img in v: # v is iterable - >>> mmcv.imshow(img) - >>> v[5] # get the 6th frame - """ - - def __init__(self, filename, cache_capacity=10): - # Check whether the video path is a url - if not filename.startswith(('https://', 'http://')): - check_file_exist(filename, 'Video file not found: ' + filename) - self._vcap = cv2.VideoCapture(filename) - assert cache_capacity > 0 - self._cache = Cache(cache_capacity) - self._position = 0 - # get basic info - self._width = int(self._vcap.get(CAP_PROP_FRAME_WIDTH)) - self._height = int(self._vcap.get(CAP_PROP_FRAME_HEIGHT)) - self._fps = self._vcap.get(CAP_PROP_FPS) - self._frame_cnt = int(self._vcap.get(CAP_PROP_FRAME_COUNT)) - self._fourcc = self._vcap.get(CAP_PROP_FOURCC) - - @property - def vcap(self): - """:obj:`cv2.VideoCapture`: The raw VideoCapture object.""" - return self._vcap - - @property - def opened(self): - """bool: Indicate whether the video is opened.""" - return self._vcap.isOpened() - - @property - def width(self): - """int: Width of video frames.""" - return self._width - - @property - def height(self): - """int: Height of video frames.""" - return self._height - - @property - def resolution(self): - """tuple: Video resolution (width, height).""" - return (self._width, self._height) - - @property - def fps(self): - """float: FPS of the video.""" - return self._fps - - @property - def frame_cnt(self): - """int: Total frames of the video.""" - return self._frame_cnt - - @property - def fourcc(self): - """str: "Four character code" of the video.""" - return self._fourcc - - @property - def position(self): - """int: Current cursor position, indicating frame decoded.""" - return self._position - - def _get_real_position(self): - return int(round(self._vcap.get(CAP_PROP_POS_FRAMES))) - - def _set_real_position(self, frame_id): - self._vcap.set(CAP_PROP_POS_FRAMES, frame_id) - pos = self._get_real_position() - for _ in range(frame_id - pos): - self._vcap.read() - 
self._position = frame_id - - def read(self): - """Read the next frame. - - If the next frame have been decoded before and in the cache, then - return it directly, otherwise decode, cache and return it. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - # pos = self._position - if self._cache: - img = self._cache.get(self._position) - if img is not None: - ret = True - else: - if self._position != self._get_real_position(): - self._set_real_position(self._position) - ret, img = self._vcap.read() - if ret: - self._cache.put(self._position, img) - else: - ret, img = self._vcap.read() - if ret: - self._position += 1 - return img - - def get_frame(self, frame_id): - """Get frame by index. - - Args: - frame_id (int): Index of the expected frame, 0-based. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - if frame_id < 0 or frame_id >= self._frame_cnt: - raise IndexError( - f'"frame_id" must be between 0 and {self._frame_cnt - 1}') - if frame_id == self._position: - return self.read() - if self._cache: - img = self._cache.get(frame_id) - if img is not None: - self._position = frame_id + 1 - return img - self._set_real_position(frame_id) - ret, img = self._vcap.read() - if ret: - if self._cache: - self._cache.put(self._position, img) - self._position += 1 - return img - - def current_frame(self): - """Get the current frame (frame that is just visited). - - Returns: - ndarray or None: If the video is fresh, return None, otherwise - return the frame. - """ - if self._position == 0: - return None - return self._cache.get(self._position - 1) - - def cvt2frames(self, - frame_dir, - file_start=0, - filename_tmpl='{:06d}.jpg', - start=0, - max_num=0, - show_progress=True): - """Convert a video to frame images. - - Args: - frame_dir (str): Output directory to store all the frame images. - file_start (int): Filenames will start from the specified number. - filename_tmpl (str): Filename template with the index as the - placeholder. - start (int): The starting frame index. - max_num (int): Maximum number of frames to be written. - show_progress (bool): Whether to show a progress bar. 
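        Example (a minimal usage sketch; the paths are placeholders):

            >>> import annotator.uniformer.mmcv as mmcv
            >>> v = mmcv.VideoReader('sample.mp4')
            >>> # writes 000000.jpg, 000001.jpg, ... into ./frames
            >>> v.cvt2frames('./frames')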
- """ - mkdir_or_exist(frame_dir) - if max_num == 0: - task_num = self.frame_cnt - start - else: - task_num = min(self.frame_cnt - start, max_num) - if task_num <= 0: - raise ValueError('start must be less than total frame number') - if start > 0: - self._set_real_position(start) - - def write_frame(file_idx): - img = self.read() - if img is None: - return - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - cv2.imwrite(filename, img) - - if show_progress: - track_progress(write_frame, range(file_start, - file_start + task_num)) - else: - for i in range(task_num): - write_frame(file_start + i) - - def __len__(self): - return self.frame_cnt - - def __getitem__(self, index): - if isinstance(index, slice): - return [ - self.get_frame(i) - for i in range(*index.indices(self.frame_cnt)) - ] - # support negative indexing - if index < 0: - index += self.frame_cnt - if index < 0: - raise IndexError('index out of range') - return self.get_frame(index) - - def __iter__(self): - self._set_real_position(0) - return self - - def __next__(self): - img = self.read() - if img is not None: - return img - else: - raise StopIteration - - next = __next__ - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - self._vcap.release() - - -def frames2video(frame_dir, - video_file, - fps=30, - fourcc='XVID', - filename_tmpl='{:06d}.jpg', - start=0, - end=0, - show_progress=True): - """Read the frame images from a directory and join them as a video. - - Args: - frame_dir (str): The directory containing video frames. - video_file (str): Output filename. - fps (float): FPS of the output video. - fourcc (str): Fourcc of the output video, this should be compatible - with the output file type. - filename_tmpl (str): Filename template with the index as the variable. - start (int): Starting frame index. - end (int): Ending frame index. - show_progress (bool): Whether to show a progress bar. - """ - if end == 0: - ext = filename_tmpl.split('.')[-1] - end = len([name for name in scandir(frame_dir, ext)]) - first_file = osp.join(frame_dir, filename_tmpl.format(start)) - check_file_exist(first_file, 'The start frame not found: ' + first_file) - img = cv2.imread(first_file) - height, width = img.shape[:2] - resolution = (width, height) - vwriter = cv2.VideoWriter(video_file, VideoWriter_fourcc(*fourcc), fps, - resolution) - - def write_frame(file_idx): - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - img = cv2.imread(filename) - vwriter.write(img) - - if show_progress: - track_progress(write_frame, range(start, end)) - else: - for i in range(start, end): - write_frame(i) - vwriter.release() diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/necks/channel_mapper.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/necks/channel_mapper.py deleted file mode 100644 index a4f5ed44caefb1612df67785b1f4f0d9ec46ee93..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/necks/channel_mapper.py +++ /dev/null @@ -1,74 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, xavier_init - -from ..builder import NECKS - - -@NECKS.register_module() -class ChannelMapper(nn.Module): - r"""Channel Mapper to reduce/increase channels of backbone features. - - This is used to reduce/increase channels of backbone features. - - Args: - in_channels (List[int]): Number of input channels per scale. 
- out_channels (int): Number of output channels (used at each scale). - kernel_size (int, optional): kernel_size for reducing channels (used - at each scale). Default: 3. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - act_cfg (dict, optional): Config dict for activation layer in - ConvModule. Default: dict(type='ReLU'). - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... for c, s in zip(in_channels, scales)] - >>> self = ChannelMapper(in_channels, 11, 3).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU')): - super(ChannelMapper, self).__init__() - assert isinstance(in_channels, list) - - self.convs = nn.ModuleList() - for in_channel in in_channels: - self.convs.append( - ConvModule( - in_channel, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - """Initialize the weights of ChannelMapper module.""" - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.convs) - outs = [self.convs[i](inputs[i]) for i in range(len(inputs))] - return tuple(outs) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/exp/upernet_global_small/test_config_h32.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/exp/upernet_global_small/test_config_h32.py deleted file mode 100644 index b2ce6e6a7be0e42c6c2915f3dfe56addb8c0e1ef..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/exp/upernet_global_small/test_config_h32.py +++ /dev/null @@ -1,50 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from UniFormer repo: From https://github.com/Sense-X/UniFormer - * Apache-2.0 license -''' -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.25, - windows=False, - hybrid=True, - window_size=32 - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/abidlabs/call-sentiment-blocks-2/app.py b/spaces/abidlabs/call-sentiment-blocks-2/app.py deleted file mode 100644 index 70dd1be88e6b0d7b2b7e69a2b10bfaed56bc3bc8..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/call-sentiment-blocks-2/app.py +++ /dev/null @@ -1,95 +0,0 @@ -import gradio as gr -from transformers import pipeline, Wav2Vec2ProcessorWithLM -from pyannote.audio import Pipeline -from librosa import load, resample -from rpunct import RestorePuncts - -# Audio components -asr_model = 'patrickvonplaten/wav2vec2-base-960h-4-gram' -processor = Wav2Vec2ProcessorWithLM.from_pretrained(asr_model) -asr = pipeline('automatic-speech-recognition', model=asr_model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, decoder=processor.decoder) -speaker_segmentation = Pipeline.from_pretrained("pyannote/speaker-segmentation") -rpunct = RestorePuncts() - -# Text components -sentiment_pipeline = pipeline('text-classification', model="distilbert-base-uncased-finetuned-sst-2-english") -sentiment_threshold = 0.75 - -EXAMPLES = ["example_audio.wav"] - -def speech_to_text(speech): - speaker_output = speaker_segmentation(speech) - speech, sampling_rate = load(speech) - if sampling_rate != 16000: - speech = resample(speech, sampling_rate, 16000) - text = asr(speech, return_timestamps="word") - - full_text = text['text'].lower() - chunks = text['chunks'] - - diarized_output = [] - i = 0 - speaker_counter = 0 - - # New iteration every time the speaker changes - for turn, _, _ in speaker_output.itertracks(yield_label=True): - speaker = "Speaker 0" if speaker_counter % 2 == 0 else "Speaker 1" - diarized = "" - while i < len(chunks) and chunks[i]['timestamp'][1] <= turn.end: - diarized += chunks[i]['text'].lower() + ' ' - i += 1 - - if diarized != "": - diarized = rpunct.punctuate(diarized) - diarized_output.extend([(diarized, speaker), ('from {:.2f}-{:.2f}'.format(turn.start, turn.end), None)]) - speaker_counter += 1 - return diarized_output, full_text - -def 
sentiment(checked_options, diarized): - customer_id = checked_options - customer_sentiments = [] - - for transcript in diarized: - speaker_speech, speaker_id = transcript - if speaker_id == customer_id: - output = sentiment_pipeline(speaker_speech)[0] - if output["label"] != "neutral" and output["score"] > sentiment_threshold: - customer_sentiments.append((speaker_speech, output["label"])) - else: - customer_sentiments.append(speaker_speech, None) - return customer_sentiments - -demo = gr.Blocks(enable_queue=True) -demo.encrypt = False - -with demo: - with gr.Row(): - with gr.Column(): - audio = gr.Audio(label="Audio file", type='filepath') - with gr.Row(): - btn = gr.Button("Transcribe") - with gr.Row(): - examples = gr.components.Dataset(components=[audio], samples=[EXAMPLES], type="index") - with gr.Column(): - gr.Markdown("**Diarized Output:**") - diarized = gr.HighlightedText(lines=5, label="Diarized Output") - full = gr.Textbox(lines=4, label="Full Transcript") - check = gr.Radio(["Speaker 0", "Speaker 1"], label='Choose speaker for sentiment analysis') - analyzed = gr.HighlightedText(label="Customer Sentiment") - - btn.click(speech_to_text, audio, [diarized, full]) - check.change(sentiment, [check, diarized], analyzed) - - def cache_example(example): - processed_examples = audio.preprocess_example(example) - diarized_output, full_text = speech_to_text(example) - return processed_examples, diarized_output, full_text - - cache = [cache_example(e) for e in EXAMPLES] - - def load_example(example_id): - return cache[example_id] - - examples._click_no_postprocess(load_example, inputs=[examples], outputs=[audio, diarized, full], queue=False) - - demo.launch() \ No newline at end of file diff --git a/spaces/adorp/ControlNet-v1-1-duplicate/app_shuffle.py b/spaces/adorp/ControlNet-v1-1-duplicate/app_shuffle.py deleted file mode 100644 index 62c18b3578e53103c4851a253f8c3217f0bf30c2..0000000000000000000000000000000000000000 --- a/spaces/adorp/ControlNet-v1-1-duplicate/app_shuffle.py +++ /dev/null @@ -1,100 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio( - label='Preprocessor', - choices=['ContentShuffle', 'None'], - type='value', - value='ContentShuffle') - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, 
object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='content-shuffle', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='shuffle') - demo = create_demo(model.process_shuffle) - demo.queue().launch() diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/models/unet.py b/spaces/akhaliq/Music_Source_Separation/bytesep/models/unet.py deleted file mode 100644 index f1ffb5f0879b13aa37f7a14c2b1ce3a90271fb96..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Music_Source_Separation/bytesep/models/unet.py +++ /dev/null @@ -1,532 +0,0 @@ -import math -from typing import Dict, List, NoReturn, Tuple - -import matplotlib.pyplot as plt -import numpy as np -import pytorch_lightning as pl -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.optim as optim -from torch.optim.lr_scheduler import LambdaLR -from torchlibrosa.stft import ISTFT, STFT, magphase - -from bytesep.models.pytorch_modules import Base, Subband, act, init_bn, init_layer - - -class ConvBlock(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: Tuple, - activation: str, - momentum: float, - ): - r"""Convolutional block.""" - super(ConvBlock, self).__init__() - - self.activation = activation - padding = (kernel_size[0] // 2, kernel_size[1] // 2) - - self.conv1 = nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=(1, 1), - dilation=(1, 1), - padding=padding, - bias=False, - ) - - self.bn1 = nn.BatchNorm2d(out_channels, momentum=momentum) - - self.conv2 = nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=(1, 1), - dilation=(1, 1), - padding=padding, - bias=False, - ) - - self.bn2 = nn.BatchNorm2d(out_channels, momentum=momentum) - - self.init_weights() - - def init_weights(self) -> NoReturn: - r"""Initialize weights.""" - init_layer(self.conv1) - init_layer(self.conv2) - init_bn(self.bn1) - init_bn(self.bn2) - - def forward(self, input_tensor: torch.Tensor) -> torch.Tensor: - r"""Forward data into the module. - - Args: - input_tensor: (batch_size, in_feature_maps, time_steps, freq_bins) - - Returns: - output_tensor: (batch_size, out_feature_maps, time_steps, freq_bins) - """ - x = act(self.bn1(self.conv1(input_tensor)), self.activation) - x = act(self.bn2(self.conv2(x)), self.activation) - output_tensor = x - - return output_tensor - - -class EncoderBlock(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: Tuple, - downsample: Tuple, - activation: str, - momentum: float, - ): - r"""Encoder block.""" - super(EncoderBlock, self).__init__() - - self.conv_block = ConvBlock( - in_channels, out_channels, kernel_size, activation, momentum - ) - self.downsample = downsample - - def forward(self, input_tensor: torch.Tensor) -> torch.Tensor: - r"""Forward data into the module. 
- - Args: - input_tensor: (batch_size, in_feature_maps, time_steps, freq_bins) - - Returns: - encoder_pool: (batch_size, out_feature_maps, downsampled_time_steps, downsampled_freq_bins) - encoder: (batch_size, out_feature_maps, time_steps, freq_bins) - """ - encoder_tensor = self.conv_block(input_tensor) - # encoder: (batch_size, out_feature_maps, time_steps, freq_bins) - - encoder_pool = F.avg_pool2d(encoder_tensor, kernel_size=self.downsample) - # encoder_pool: (batch_size, out_feature_maps, downsampled_time_steps, downsampled_freq_bins) - - return encoder_pool, encoder_tensor - - -class DecoderBlock(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: Tuple, - upsample: Tuple, - activation: str, - momentum: float, - ): - r"""Decoder block.""" - super(DecoderBlock, self).__init__() - - self.kernel_size = kernel_size - self.stride = upsample - self.activation = activation - - self.conv1 = torch.nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=self.stride, - stride=self.stride, - padding=(0, 0), - bias=False, - dilation=(1, 1), - ) - - self.bn1 = nn.BatchNorm2d(out_channels, momentum=momentum) - - self.conv_block2 = ConvBlock( - out_channels * 2, out_channels, kernel_size, activation, momentum - ) - - self.init_weights() - - def init_weights(self): - r"""Initialize weights.""" - init_layer(self.conv1) - init_bn(self.bn1) - - def forward( - self, input_tensor: torch.Tensor, concat_tensor: torch.Tensor - ) -> torch.Tensor: - r"""Forward data into the module. - - Args: - torch_tensor: (batch_size, in_feature_maps, downsampled_time_steps, downsampled_freq_bins) - concat_tensor: (batch_size, in_feature_maps, time_steps, freq_bins) - - Returns: - output_tensor: (batch_size, out_feature_maps, time_steps, freq_bins) - """ - x = act(self.bn1(self.conv1(input_tensor)), self.activation) - # (batch_size, in_feature_maps, time_steps, freq_bins) - - x = torch.cat((x, concat_tensor), dim=1) - # (batch_size, in_feature_maps * 2, time_steps, freq_bins) - - output_tensor = self.conv_block2(x) - # output_tensor: (batch_size, out_feature_maps, time_steps, freq_bins) - - return output_tensor - - -class UNet(nn.Module, Base): - def __init__(self, input_channels: int, target_sources_num: int): - r"""UNet.""" - super(UNet, self).__init__() - - self.input_channels = input_channels - self.target_sources_num = target_sources_num - - window_size = 2048 - hop_size = 441 - center = True - pad_mode = "reflect" - window = "hann" - activation = "leaky_relu" - momentum = 0.01 - - self.subbands_num = 1 - - assert ( - self.subbands_num == 1 - ), "Using subbands_num > 1 on spectrogram \ - will lead to unexpected performance sometimes. Suggest to use \ - subband method on waveform." 
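        # Rough data flow of this model: the mixture waveform is turned into a
        # magnitude/phase spectrogram by the STFT defined below, the
        # encoder/decoder stack estimates K = 3 mask components
        # (|M|, cos∠M, sin∠M) per target source and input channel, and
        # feature_maps_to_wav() applies that complex mask and inverts it back
        # to waveforms with the ISTFT.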
- - self.K = 3 # outputs: |M|, cos∠M, sin∠M - self.downsample_ratio = 2 ** 6 # This number equals 2^{#encoder_blcoks} - - self.stft = STFT( - n_fft=window_size, - hop_length=hop_size, - win_length=window_size, - window=window, - center=center, - pad_mode=pad_mode, - freeze_parameters=True, - ) - - self.istft = ISTFT( - n_fft=window_size, - hop_length=hop_size, - win_length=window_size, - window=window, - center=center, - pad_mode=pad_mode, - freeze_parameters=True, - ) - - self.bn0 = nn.BatchNorm2d(window_size // 2 + 1, momentum=momentum) - - self.subband = Subband(subbands_num=self.subbands_num) - - self.encoder_block1 = EncoderBlock( - in_channels=input_channels * self.subbands_num, - out_channels=32, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block2 = EncoderBlock( - in_channels=32, - out_channels=64, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block3 = EncoderBlock( - in_channels=64, - out_channels=128, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block4 = EncoderBlock( - in_channels=128, - out_channels=256, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block5 = EncoderBlock( - in_channels=256, - out_channels=384, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block6 = EncoderBlock( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.conv_block7 = ConvBlock( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - activation=activation, - momentum=momentum, - ) - self.decoder_block1 = DecoderBlock( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block2 = DecoderBlock( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block3 = DecoderBlock( - in_channels=384, - out_channels=256, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block4 = DecoderBlock( - in_channels=256, - out_channels=128, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block5 = DecoderBlock( - in_channels=128, - out_channels=64, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - - self.decoder_block6 = DecoderBlock( - in_channels=64, - out_channels=32, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - - self.after_conv_block1 = ConvBlock( - in_channels=32, - out_channels=32, - kernel_size=(3, 3), - activation=activation, - momentum=momentum, - ) - - self.after_conv2 = nn.Conv2d( - in_channels=32, - out_channels=target_sources_num - * input_channels - * self.K - * self.subbands_num, - kernel_size=(1, 1), - stride=(1, 1), - padding=(0, 0), - bias=True, - ) - - self.init_weights() - - def init_weights(self): - r"""Initialize weights.""" - init_bn(self.bn0) - init_layer(self.after_conv2) - - def feature_maps_to_wav( - self, - input_tensor: torch.Tensor, - sp: torch.Tensor, - sin_in: torch.Tensor, - cos_in: torch.Tensor, - audio_length: int, - ) -> torch.Tensor: - r"""Convert feature maps to waveform. 
- - Args: - input_tensor: (batch_size, target_sources_num * input_channels * self.K, time_steps, freq_bins) - sp: (batch_size, target_sources_num * input_channels, time_steps, freq_bins) - sin_in: (batch_size, target_sources_num * input_channels, time_steps, freq_bins) - cos_in: (batch_size, target_sources_num * input_channels, time_steps, freq_bins) - - Outputs: - waveform: (batch_size, target_sources_num * input_channels, segment_samples) - """ - batch_size, _, time_steps, freq_bins = input_tensor.shape - - x = input_tensor.reshape( - batch_size, - self.target_sources_num, - self.input_channels, - self.K, - time_steps, - freq_bins, - ) - # x: (batch_size, target_sources_num, input_channles, K, time_steps, freq_bins) - - mask_mag = torch.sigmoid(x[:, :, :, 0, :, :]) - _mask_real = torch.tanh(x[:, :, :, 1, :, :]) - _mask_imag = torch.tanh(x[:, :, :, 2, :, :]) - _, mask_cos, mask_sin = magphase(_mask_real, _mask_imag) - # mask_cos, mask_sin: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - - # Y = |Y|cos∠Y + j|Y|sin∠Y - # = |Y|cos(∠X + ∠M) + j|Y|sin(∠X + ∠M) - # = |Y|(cos∠X cos∠M - sin∠X sin∠M) + j|Y|(sin∠X cos∠M + cos∠X sin∠M) - out_cos = ( - cos_in[:, None, :, :, :] * mask_cos - sin_in[:, None, :, :, :] * mask_sin - ) - out_sin = ( - sin_in[:, None, :, :, :] * mask_cos + cos_in[:, None, :, :, :] * mask_sin - ) - # out_cos: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - # out_sin: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - - # Calculate |Y|. - out_mag = F.relu_(sp[:, None, :, :, :] * mask_mag) - # out_mag: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - - # Calculate Y_{real} and Y_{imag} for ISTFT. - out_real = out_mag * out_cos - out_imag = out_mag * out_sin - # out_real, out_imag: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - - # Reformat shape to (n, 1, time_steps, freq_bins) for ISTFT. - shape = ( - batch_size * self.target_sources_num * self.input_channels, - 1, - time_steps, - freq_bins, - ) - out_real = out_real.reshape(shape) - out_imag = out_imag.reshape(shape) - - # ISTFT. - x = self.istft(out_real, out_imag, audio_length) - # (batch_size * target_sources_num * input_channels, segments_num) - - # Reshape. - waveform = x.reshape( - batch_size, self.target_sources_num * self.input_channels, audio_length - ) - # (batch_size, target_sources_num * input_channels, segments_num) - - return waveform - - def forward(self, input_dict: Dict) -> Dict: - r"""Forward data into the module. - - Args: - input_dict: dict, e.g., { - waveform: (batch_size, input_channels, segment_samples), - ..., - } - - Outputs: - output_dict: dict, e.g., { - 'waveform': (batch_size, input_channels, segment_samples), - ..., - } - """ - mixtures = input_dict['waveform'] - # (batch_size, input_channels, segment_samples) - - mag, cos_in, sin_in = self.wav_to_spectrogram_phase(mixtures) - # mag, cos_in, sin_in: (batch_size, input_channels, time_steps, freq_bins) - - # Batch normalize on individual frequency bins. - x = mag.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - # x: (batch_size, input_channels, time_steps, freq_bins) - - # Pad spectrogram to be evenly divided by downsample ratio. 
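        # Worked example: with downsample_ratio = 2 ** 6 = 64 and 101 STFT
        # frames, pad_len = ceil(101 / 64) * 64 - 101 = 128 - 101 = 27, so the
        # padded spectrogram has 128 frames and can be halved cleanly by all
        # six encoder blocks.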
- origin_len = x.shape[2] - pad_len = ( - int(np.ceil(x.shape[2] / self.downsample_ratio)) * self.downsample_ratio - - origin_len - ) - x = F.pad(x, pad=(0, 0, 0, pad_len)) - # x: (batch_size, input_channels, padded_time_steps, freq_bins) - - # Let frequency bins be evenly divided by 2, e.g., 1025 -> 1024 - x = x[..., 0 : x.shape[-1] - 1] # (bs, input_channels, T, F) - - if self.subbands_num > 1: - x = self.subband.analysis(x) - # (bs, input_channels, T, F'), where F' = F // subbands_num - - # UNet - (x1_pool, x1) = self.encoder_block1(x) # x1_pool: (bs, 32, T / 2, F' / 2) - (x2_pool, x2) = self.encoder_block2(x1_pool) # x2_pool: (bs, 64, T / 4, F' / 4) - (x3_pool, x3) = self.encoder_block3( - x2_pool - ) # x3_pool: (bs, 128, T / 8, F' / 8) - (x4_pool, x4) = self.encoder_block4( - x3_pool - ) # x4_pool: (bs, 256, T / 16, F' / 16) - (x5_pool, x5) = self.encoder_block5( - x4_pool - ) # x5_pool: (bs, 384, T / 32, F' / 32) - (x6_pool, x6) = self.encoder_block6( - x5_pool - ) # x6_pool: (bs, 384, T / 64, F' / 64) - x_center = self.conv_block7(x6_pool) # (bs, 384, T / 64, F' / 64) - x7 = self.decoder_block1(x_center, x6) # (bs, 384, T / 32, F' / 32) - x8 = self.decoder_block2(x7, x5) # (bs, 384, T / 16, F' / 16) - x9 = self.decoder_block3(x8, x4) # (bs, 256, T / 8, F' / 8) - x10 = self.decoder_block4(x9, x3) # (bs, 128, T / 4, F' / 4) - x11 = self.decoder_block5(x10, x2) # (bs, 64, T / 2, F' / 2) - x12 = self.decoder_block6(x11, x1) # (bs, 32, T, F') - x = self.after_conv_block1(x12) # (bs, 32, T, F') - - x = self.after_conv2(x) - # (batch_size, target_sources_num * input_channles * self.K * subbands_num, T, F') - - if self.subbands_num > 1: - x = self.subband.synthesis(x) - # (batch_size, target_sources_num * input_channles * self.K, T, F) - - # Recover shape - x = F.pad(x, pad=(0, 1)) # Pad frequency, e.g., 1024 -> 1025. 
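        # Undo the time-axis padding added above: keep only the original
        # origin_len frames before converting the masks back to waveforms.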
- - x = x[:, :, 0:origin_len, :] - # (batch_size, target_sources_num * input_channles * self.K, T, F) - - audio_length = mixtures.shape[2] - - separated_audio = self.feature_maps_to_wav(x, mag, sin_in, cos_in, audio_length) - # separated_audio: (batch_size, target_sources_num * input_channels, segments_num) - - output_dict = {'waveform': separated_audio} - - return output_dict diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/synthesize.py b/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/synthesize.py deleted file mode 100644 index ffc7dc2678e85006b9f66d910fcae3e307c521a8..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/synthesize.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from synthesizer.hparams import hparams_debug_string -from synthesizer.synthesizer_dataset import SynthesizerDataset, collate_synthesizer -from synthesizer.models.tacotron import Tacotron -from synthesizer.utils.text import text_to_sequence -from synthesizer.utils.symbols import symbols -import numpy as np -from pathlib import Path -from tqdm import tqdm -import platform - -def run_synthesis(in_dir, out_dir, model_dir, hparams): - # This generates ground truth-aligned mels for vocoder training - synth_dir = Path(out_dir).joinpath("mels_gta") - synth_dir.mkdir(exist_ok=True) - print(hparams_debug_string()) - - # Check for GPU - if torch.cuda.is_available(): - device = torch.device("cuda") - if hparams.synthesis_batch_size % torch.cuda.device_count() != 0: - raise ValueError("`hparams.synthesis_batch_size` must be evenly divisible by n_gpus!") - else: - device = torch.device("cpu") - print("Synthesizer using device:", device) - - # Instantiate Tacotron model - model = Tacotron(embed_dims=hparams.tts_embed_dims, - num_chars=len(symbols), - encoder_dims=hparams.tts_encoder_dims, - decoder_dims=hparams.tts_decoder_dims, - n_mels=hparams.num_mels, - fft_bins=hparams.num_mels, - postnet_dims=hparams.tts_postnet_dims, - encoder_K=hparams.tts_encoder_K, - lstm_dims=hparams.tts_lstm_dims, - postnet_K=hparams.tts_postnet_K, - num_highways=hparams.tts_num_highways, - dropout=0., # Use zero dropout for gta mels - stop_threshold=hparams.tts_stop_threshold, - speaker_embedding_size=hparams.speaker_embedding_size).to(device) - - # Load the weights - model_dir = Path(model_dir) - model_fpath = model_dir.joinpath(model_dir.stem).with_suffix(".pt") - print("\nLoading weights at %s" % model_fpath) - model.load(model_fpath) - print("Tacotron weights loaded from step %d" % model.step) - - # Synthesize using same reduction factor as the model is currently trained - r = np.int32(model.r) - - # Set model to eval mode (disable gradient and zoneout) - model.eval() - - # Initialize the dataset - in_dir = Path(in_dir) - metadata_fpath = in_dir.joinpath("train.txt") - mel_dir = in_dir.joinpath("mels") - embed_dir = in_dir.joinpath("embeds") - - dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams) - data_loader = DataLoader(dataset, - collate_fn=lambda batch: collate_synthesizer(batch, r, hparams), - batch_size=hparams.synthesis_batch_size, - num_workers=2 if platform.system() != "Windows" else 0, - shuffle=False, - pin_memory=True) - - # Generate GTA mels - meta_out_fpath = Path(out_dir).joinpath("synthesized.txt") - with open(meta_out_fpath, "w") as file: - for i, (texts, mels, embeds, idx) in tqdm(enumerate(data_loader), total=len(data_loader)): - texts = texts.to(device) - mels = mels.to(device) - embeds = 
embeds.to(device) - - # Parallelize model onto GPUS using workaround due to python bug - if device.type == "cuda" and torch.cuda.device_count() > 1: - _, mels_out, _ = data_parallel_workaround(model, texts, mels, embeds) - else: - _, mels_out, _, _ = model(texts, mels, embeds) - - for j, k in enumerate(idx): - # Note: outputs mel-spectrogram files and target ones have same names, just different folders - mel_filename = Path(synth_dir).joinpath(dataset.metadata[k][1]) - mel_out = mels_out[j].detach().cpu().numpy().T - - # Use the length of the ground truth mel to remove padding from the generated mels - mel_out = mel_out[:int(dataset.metadata[k][4])] - - # Write the spectrogram to disk - np.save(mel_filename, mel_out, allow_pickle=False) - - # Write metadata into the synthesized file - file.write("|".join(dataset.metadata[k])) diff --git a/spaces/akhaliq/coqui-ai-tts/README.md b/spaces/akhaliq/coqui-ai-tts/README.md deleted file mode 100644 index fde5177a062807f566ad2ce7c112dfaf552ea960..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/coqui-ai-tts/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Coqui.ai TTS -emoji: 🐸 -colorFrom: green -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/akhaliq/demucs/README.md b/spaces/akhaliq/demucs/README.md deleted file mode 100644 index abe569b7c29f3356fd6830156ab69885ad96b781..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/demucs/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Demucs -emoji: ⚡ -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/sbcsgroupprober.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/sbcsgroupprober.py deleted file mode 100644 index bdeef4e15b0dc5a68220b14c9dcec1a019401106..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/sbcsgroupprober.py +++ /dev/null @@ -1,83 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Universal charset detector code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 2001 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# Shy Shalom - original C code -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .charsetgroupprober import CharSetGroupProber -from .hebrewprober import HebrewProber -from .langbulgarianmodel import (ISO_8859_5_BULGARIAN_MODEL, - WINDOWS_1251_BULGARIAN_MODEL) -from .langgreekmodel import ISO_8859_7_GREEK_MODEL, WINDOWS_1253_GREEK_MODEL -from .langhebrewmodel import WINDOWS_1255_HEBREW_MODEL -# from .langhungarianmodel import (ISO_8859_2_HUNGARIAN_MODEL, -# WINDOWS_1250_HUNGARIAN_MODEL) -from .langrussianmodel import (IBM855_RUSSIAN_MODEL, IBM866_RUSSIAN_MODEL, - ISO_8859_5_RUSSIAN_MODEL, KOI8_R_RUSSIAN_MODEL, - MACCYRILLIC_RUSSIAN_MODEL, - WINDOWS_1251_RUSSIAN_MODEL) -from .langthaimodel import TIS_620_THAI_MODEL -from .langturkishmodel import ISO_8859_9_TURKISH_MODEL -from .sbcharsetprober import SingleByteCharSetProber - - -class SBCSGroupProber(CharSetGroupProber): - def __init__(self): - super(SBCSGroupProber, self).__init__() - hebrew_prober = HebrewProber() - logical_hebrew_prober = SingleByteCharSetProber(WINDOWS_1255_HEBREW_MODEL, - False, hebrew_prober) - # TODO: See if using ISO-8859-8 Hebrew model works better here, since - # it's actually the visual one - visual_hebrew_prober = SingleByteCharSetProber(WINDOWS_1255_HEBREW_MODEL, - True, hebrew_prober) - hebrew_prober.set_model_probers(logical_hebrew_prober, - visual_hebrew_prober) - # TODO: ORDER MATTERS HERE. I changed the order vs what was in master - # and several tests failed that did not before. Some thought - # should be put into the ordering, and we should consider making - # order not matter here, because that is very counter-intuitive. 
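        # Each prober below wraps one single-byte charset model; the parent
        # CharSetGroupProber runs them all and reports the most confident one,
        # while the shared HebrewProber above arbitrates between the logical
        # and visual Hebrew probers it was wired to.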
- self.probers = [ - SingleByteCharSetProber(WINDOWS_1251_RUSSIAN_MODEL), - SingleByteCharSetProber(KOI8_R_RUSSIAN_MODEL), - SingleByteCharSetProber(ISO_8859_5_RUSSIAN_MODEL), - SingleByteCharSetProber(MACCYRILLIC_RUSSIAN_MODEL), - SingleByteCharSetProber(IBM866_RUSSIAN_MODEL), - SingleByteCharSetProber(IBM855_RUSSIAN_MODEL), - SingleByteCharSetProber(ISO_8859_7_GREEK_MODEL), - SingleByteCharSetProber(WINDOWS_1253_GREEK_MODEL), - SingleByteCharSetProber(ISO_8859_5_BULGARIAN_MODEL), - SingleByteCharSetProber(WINDOWS_1251_BULGARIAN_MODEL), - # TODO: Restore Hungarian encodings (iso-8859-2 and windows-1250) - # after we retrain model. - # SingleByteCharSetProber(ISO_8859_2_HUNGARIAN_MODEL), - # SingleByteCharSetProber(WINDOWS_1250_HUNGARIAN_MODEL), - SingleByteCharSetProber(TIS_620_THAI_MODEL), - SingleByteCharSetProber(ISO_8859_9_TURKISH_MODEL), - hebrew_prober, - logical_hebrew_prober, - visual_hebrew_prober, - ] - self.reset() diff --git a/spaces/ali-ghamdan/image-colors-corrector/demo.py b/spaces/ali-ghamdan/image-colors-corrector/demo.py deleted file mode 100644 index 34b55d4603b3674e05624064ca32a39da7ce1b47..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/image-colors-corrector/demo.py +++ /dev/null @@ -1,59 +0,0 @@ -## Demo: White balancing a single image -# -# Copyright (c) 2018-present, Mahmoud Afifi -# York University, Canada -# mafifi@eecs.yorku.ca | m.3afifi@gmail.com -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# All rights reserved. -# -# Please cite the following work if this program is used: -# Mahmoud Afifi, Brian Price, Scott Cohen, and Michael S. Brown, -# "When color constancy goes wrong: Correcting improperly white-balanced -# images", CVPR 2019. -# -########################################################################## - -import cv2 -from classes import WBsRGB as wb_srgb -import argparse - -def parsed_args(): - parser = argparse.ArgumentParser(description="correct the colors of images") - parser.add_argument('-i','--input', type=str, required=True) - parser.add_argument('-o','--output', type=str, required=True) - parser.add_argument('-m','--model_id', type=int, default=1, help="available models (1, 0) default is 1 (its new)") - parser.add_argument('-g', '--gamut_mapping', type=int, default=2, help=""" - use gamut_mapping = 1 for scaling, 2 for clipping (our paper's results -reported using clipping). 
If the image is over-saturated, scaling is -recommended.""") - return parser.parse_args() - -def ResizeWithAspectRatio(image, width=None, height=None, inter=cv2.INTER_AREA): - (h, w) = image.shape[:2] - - if width is None and height is None: - return image - if width is None: - r = height / float(h) - dim = (int(w * r), height) - else: - r = width / float(w) - dim = (width, int(h * r)) - - return cv2.resize(image, dim, interpolation=inter) - -args = parsed_args() -image = args.input -output_dir = args.output -upgraded_model = args.model_id -gamut_mapping = args.gamut_mapping - -wbModel = wb_srgb.WBsRGB(gamut_mapping=gamut_mapping, upgraded=upgraded_model) - -gamut_mapping = 2 - -I = cv2.imread(image) -outImg = wbModel.correctImage(I) -cv2.imwrite(output_dir, outImg * 255) \ No newline at end of file diff --git a/spaces/ali-ghamdan/realesrgan-models/docs/ncnn_conversion.md b/spaces/ali-ghamdan/realesrgan-models/docs/ncnn_conversion.md deleted file mode 100644 index e1785cd079ccbb6f0a5ddefe24f63bfe81ce9b21..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/realesrgan-models/docs/ncnn_conversion.md +++ /dev/null @@ -1,11 +0,0 @@ -# Instructions on converting to NCNN models - -1. Convert to onnx model with `scripts/pytorch2onnx.py`. Remember to modify codes accordingly -1. Convert onnx model to ncnn model - 1. `cd ncnn-master\ncnn\build\tools\onnx` - 1. `onnx2ncnn.exe realesrgan-x4.onnx realesrgan-x4-raw.param realesrgan-x4-raw.bin` -1. Optimize ncnn model - 1. fp16 mode - 1. `cd ncnn-master\ncnn\build\tools` - 1. `ncnnoptimize.exe realesrgan-x4-raw.param realesrgan-x4-raw.bin realesrgan-x4.param realesrgan-x4.bin 1` -1. Modify the blob name in `realesrgan-x4.param`: `data` and `output` diff --git a/spaces/amarax/cowtopia/app.py b/spaces/amarax/cowtopia/app.py deleted file mode 100644 index 8e69f441f49ebb76b7b3c3447ca73f36ecb04488..0000000000000000000000000000000000000000 --- a/spaces/amarax/cowtopia/app.py +++ /dev/null @@ -1,229 +0,0 @@ -VISUAL_CHATGPT_PREFIX = """Visual ChatGPT is designed to be able to assist with a wide range of text and visual related tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. Visual ChatGPT is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. - -Visual ChatGPT is able to process and understand large amounts of text and image. As a language model, Visual ChatGPT can not directly read images, but it has a list of tools to finish different visual tasks. Each image will have a file name formed as "image/xxx.png", and Visual ChatGPT can invoke different tools to indirectly understand pictures. When talking about images, Visual ChatGPT is very strict to the file name and will never fabricate nonexistent files. When using tools to generate new image files, Visual ChatGPT is also known that the image may not be the same as user's demand, and will use other visual question answering tools or description tools to observe the real image. Visual ChatGPT is able to use tools in a sequence, and is loyal to the tool observation outputs rather than faking the image content and image file name. It will remember to provide the file name from the last tool observation, if a new image is generated. - -Human may provide new figures to Visual ChatGPT with a description. 
The description helps Visual ChatGPT to understand this image, but Visual ChatGPT should use tools to finish following tasks, rather than directly imagine from the description. - -Overall, Visual ChatGPT is a powerful visual dialogue assistant tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. - - -TOOLS: ------- - -Visual ChatGPT has access to the following tools:""" - -VISUAL_CHATGPT_FORMAT_INSTRUCTIONS = """To use a tool, please use the following format: - -``` -Thought: Do I need to use a tool? Yes -Action: the action to take, should be one of [{tool_names}] -Action Input: the input to the action -Observation: the result of the action -``` - -When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format: - -``` -Thought: Do I need to use a tool? No -{ai_prefix}: [your response here] -``` -""" - -VISUAL_CHATGPT_SUFFIX = """You are very strict to the filename correctness and will never fake a file name if not exists. -You will remember to provide the image file name loyally if it's provided in the last tool observation. - -Begin! - -Previous conversation history: -{chat_history} - -New input: {input} -Since Visual ChatGPT is a text language model, Visual ChatGPT must use tools to observe images rather than imagination. -The thoughts and observations are only visible for Visual ChatGPT, Visual ChatGPT should remember to repeat important information in the final response for Human. -Thought: Do I need to use a tool? {agent_scratchpad}""" - -from visual_foundation_models import * -from langchain.agents.initialize import initialize_agent -from langchain.agents.tools import Tool -from langchain.chains.conversation.memory import ConversationBufferMemory -from langchain.llms.openai import OpenAI -import re -import gradio as gr - -import os - -OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY') - -def cut_dialogue_history(history_memory, keep_last_n_words=400): - if history_memory is None or len(history_memory) == 0: - return history_memory - tokens = history_memory.split() - n_tokens = len(tokens) - print(f"history_memory:{history_memory}, n_tokens: {n_tokens}") - if n_tokens < keep_last_n_words: - return history_memory - paragraphs = history_memory.split('\n') - last_n_tokens = n_tokens - while last_n_tokens >= keep_last_n_words: - last_n_tokens -= len(paragraphs[0].split(' ')) - paragraphs = paragraphs[1:] - return '\n' + '\n'.join(paragraphs) - - -class ConversationBot: - def __init__(self, load_dict): - print(f"Initializing VisualChatGPT, load_dict={load_dict}") - if 'ImageCaptioning' not in load_dict: - raise ValueError("You have to load ImageCaptioning as a basic function for VisualChatGPT") - - self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output') - self.models = dict() - for class_name, device in load_dict.items(): - self.models[class_name] = globals()[class_name](device=device) - - self.tools = [] - for class_name, instance in self.models.items(): - for e in dir(instance): - if e.startswith('inference'): - func = getattr(instance, e) - self.tools.append(Tool(name=func.name, description=func.description, func=func)) - - def run_text(self, text, state): - try: - self.agent.memory.buffer = cut_dialogue_history(self.agent.memory.buffer, keep_last_n_words=500) - res = self.agent({"input": text}) - res['output'] = res['output'].replace("\\", "/") - response = re.sub('(image/\S*png)', lambda m: f'![](/file={m.group(0)})*{m.group(0)}*', res['output']) 
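            # Rewrite every generated "image/xxx.png" path in the agent output
            # into ![](/file=image/xxx.png)*image/xxx.png* so the Gradio
            # chatbot renders the image inline and still shows its file name.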
- except Exception as e: - print(e) - response = f"Oops, an error occurred while generating the response.\n\nTry asking your question in another way." - - state = state + [(text, response)] - - print(f"\nProcessed run_text, Input text: {text}\nCurrent state: {state}\n" - f"Current Memory: {self.agent.memory.buffer}") - return state, state - - def run_image(self, image, state, txt): - image_filename = os.path.join('image', f"{str(uuid.uuid4())[:8]}.png") - print("======>Auto Resize Image...") - img = Image.open(image.name) - width, height = img.size - ratio = min(512 / width, 512 / height) - width_new, height_new = (round(width * ratio), round(height * ratio)) - width_new = int(np.round(width_new / 64.0)) * 64 - height_new = int(np.round(height_new / 64.0)) * 64 - img = img.resize((width_new, height_new)) - img = img.convert('RGB') - img.save(image_filename, "PNG") - print(f"Resize image form {width}x{height} to {width_new}x{height_new}") - - description = self.models['ImageCaptioning'].inference(image_filename) - Human_prompt = f'\nHuman: provide a figure named {image_filename}. The description is: {description}. This information helps you to understand this image, but you should use tools to finish following tasks, rather than directly imagine from my description. If you understand, say \"Received\". \n' - AI_prompt = "Received. " - - self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt - state = state + [(f"![](/file={image_filename})*{image_filename}*", AI_prompt)] - print(f"\nProcessed run_image, Input image: {image_filename}\nCurrent state: {state}\n" - f"Current Memory: {self.agent.memory.buffer}") - - return state, state, f'{txt} {image_filename} ' - - def init_agent(self, openai_api_key): - self.llm = OpenAI(temperature=0, max_tokens=512, openai_api_key=openai_api_key) - self.agent = initialize_agent( - self.tools, - self.llm, - agent="conversational-react-description", - verbose=True, - memory=self.memory, - return_intermediate_steps=True, - agent_kwargs={'prefix': VISUAL_CHATGPT_PREFIX, 'format_instructions': VISUAL_CHATGPT_FORMAT_INSTRUCTIONS, 'suffix': VISUAL_CHATGPT_SUFFIX}, ) - - print("Agent initialized.") - - return gr.update(visible = True) - -bot = ConversationBot({'Text2Image': 'cuda:0', - 'ImageCaptioning': 'cuda:0', - # 'ImageEditing': 'cuda:0', - 'VisualQuestionAnswering': 'cuda:0', - # 'Image2Canny': 'cpu', - # 'CannyText2Image': 'cuda:0', - 'InstructPix2Pix': 'cuda:0', - # 'Image2Depth': 'cpu', - # 'DepthText2Image': 'cuda:0', - }) - -if OPENAI_API_KEY: - print("OPENAI_API_KEY found in environment variables. Starting agent.") - bot.init_agent(OPENAI_API_KEY) - -with gr.Blocks(css="#chatbot {overflow:auto; height:500px;} .message img {max-width:90% !important; max-height:initial !important;}") as demo: - gr.Markdown("

Visual ChatGPT
      ") - gr.Markdown( - """This is based on the [demo](https://huggingface.co/spaces/microsoft/visual_chatgpt) to the work [Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models](https://github.com/microsoft/visual-chatgpt).
      - """ - ) - - if not OPENAI_API_KEY: - with gr.Row(): - openai_api_key_textbox = gr.Textbox( - placeholder="Paste your OpenAI API key here to start Visual ChatGPT(sk-...) and press Enter ↵️", - show_label=False, - lines=1, - type="password", - ) - - chatbot = gr.Chatbot(elem_id="chatbot", label="Visual ChatGPT") - state = gr.State([]) - - with gr.Row(visible=getattr(bot, 'agent', False)) as input_raws: - with gr.Column(scale=0.7): - txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter, or upload an image").style(container=False) - with gr.Column(scale=0.10, min_width=100): - run = gr.Button("🏃‍♂️Run") - with gr.Column(scale=0.10, min_width=100): - clear = gr.Button("🔄Clear️") - with gr.Column(scale=0.10, min_width=100): - btn = gr.UploadButton("🖼️Upload", file_types=["image"]) - - gr.Examples( - examples=["Generate a figure of a cat running in the garden", - "Replace the cat with a dog", - "Remove the dog in this image", - # "Can you detect the canny edge of this image?", - # "Can you use this canny image to generate an oil painting of a dog", - "Make it like water-color painting", - "What is the background color", - "Describe this image", - # "please detect the depth of this image", - # "Can you use this depth image to generate a cute dog", - ], - inputs=txt - ) - - if not OPENAI_API_KEY: - openai_api_key_textbox.submit(bot.init_agent, [openai_api_key_textbox], [input_raws]) - - def update_text(text, state): - chat = state + [(text, None)] - return chat - - txt.submit(update_text, [txt, state], [chatbot]) - txt.submit(bot.run_text, [txt, state], [chatbot, state]) - txt.submit(lambda: "", None, txt) - - run.click(update_text, [txt, state], [chatbot]) - run.click(bot.run_text, [txt, state], [chatbot, state]) - run.click(lambda: "", None, txt) - - btn.upload(bot.run_image, [btn, state, txt], [chatbot, state, txt]) - - clear.click(bot.memory.clear) - clear.click(lambda: [], None, chatbot) - clear.click(lambda: [], None, state) - - # demo.queue(concurrency_count=10).launch(server_name="0.0.0.0", server_port=7860) - if __name__ == "__main__": - demo.launch() diff --git a/spaces/anaclaudia13ct/insect_detection/utils/plots.py b/spaces/anaclaudia13ct/insect_detection/utils/plots.py deleted file mode 100644 index d2f232de0e973ece246f2110fdfc9c2f8cfa8416..0000000000000000000000000000000000000000 --- a/spaces/anaclaudia13ct/insect_detection/utils/plots.py +++ /dev/null @@ -1,559 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Plotting utils -""" - -import contextlib -import math -import os -from copy import copy -from pathlib import Path -from urllib.error import URLError - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as sn -import torch -from PIL import Image, ImageDraw, ImageFont - -from utils import TryExcept, threaded -from utils.general import (CONFIG_DIR, FONT, LOGGER, check_font, check_requirements, clip_boxes, increment_path, - is_ascii, xywh2xyxy, xyxy2xywh) -from utils.metrics import fitness -from utils.segment.general import scale_image - -# Settings -RANK = int(os.getenv('RANK', -1)) -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - - -class Colors: - # Ultralytics color palette https://ultralytics.com/ - def __init__(self): - # hex = matplotlib.colors.TABLEAU_COLORS.values() - hexs = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB', - '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', 
'8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7') - self.palette = [self.hex2rgb(f'#{c}') for c in hexs] - self.n = len(self.palette) - - def __call__(self, i, bgr=False): - c = self.palette[int(i) % self.n] - return (c[2], c[1], c[0]) if bgr else c - - @staticmethod - def hex2rgb(h): # rgb order (PIL) - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - -colors = Colors() # create instance for 'from utils.plots import colors' - - -def check_pil_font(font=FONT, size=10): - # Return a PIL TrueType Font, downloading to CONFIG_DIR if necessary - font = Path(font) - font = font if font.exists() else (CONFIG_DIR / font.name) - try: - return ImageFont.truetype(str(font) if font.exists() else font.name, size) - except Exception: # download if missing - try: - check_font(font) - return ImageFont.truetype(str(font), size) - except TypeError: - check_requirements('Pillow>=8.4.0') # known issue https://github.com/ultralytics/yolov5/issues/5374 - except URLError: # not online - return ImageFont.load_default() - - -class Annotator: - # YOLOv5 Annotator for train/val mosaics and jpgs and detect/hub inference annotations - def __init__(self, im, line_width=None, font_size=None, font='Arial.ttf', pil=False, example='abc'): - assert im.data.contiguous, 'Image not contiguous. Apply np.ascontiguousarray(im) to Annotator() input images.' - non_ascii = not is_ascii(example) # non-latin labels, i.e. asian, arabic, cyrillic - self.pil = pil or non_ascii - if self.pil: # use PIL - self.im = im if isinstance(im, Image.Image) else Image.fromarray(im) - self.draw = ImageDraw.Draw(self.im) - self.font = check_pil_font(font='Arial.Unicode.ttf' if non_ascii else font, - size=font_size or max(round(sum(self.im.size) / 2 * 0.035), 12)) - else: # use cv2 - self.im = im - self.lw = line_width or max(round(sum(im.shape) / 2 * 0.003), 2) # line width - - def box_label(self, box, label='', color=(128, 128, 128), txt_color=(255, 255, 255)): - # Add one xyxy box to image with label - if self.pil or not is_ascii(label): - self.draw.rectangle(box, width=self.lw, outline=color) # box - if label: - w, h = self.font.getsize(label) # text width, height - outside = box[1] - h >= 0 # label fits outside box - self.draw.rectangle( - (box[0], box[1] - h if outside else box[1], box[0] + w + 1, - box[1] + 1 if outside else box[1] + h + 1), - fill=color, - ) - # self.draw.text((box[0], box[1]), label, fill=txt_color, font=self.font, anchor='ls') # for PIL>8.0 - self.draw.text((box[0], box[1] - h if outside else box[1]), label, fill=txt_color, font=self.font) - else: # cv2 - p1, p2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3])) - cv2.rectangle(self.im, p1, p2, color, thickness=self.lw, lineType=cv2.LINE_AA) - if label: - tf = max(self.lw - 1, 1) # font thickness - w, h = cv2.getTextSize(label, 0, fontScale=self.lw / 3, thickness=tf)[0] # text width, height - outside = p1[1] - h >= 3 - p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3 - cv2.rectangle(self.im, p1, p2, color, -1, cv2.LINE_AA) # filled - cv2.putText(self.im, - label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), - 0, - self.lw / 3, - txt_color, - thickness=tf, - lineType=cv2.LINE_AA) - - def masks(self, masks, colors, im_gpu, alpha=0.5, retina_masks=False): - """Plot masks at once. 
- Args: - masks (tensor): predicted masks on cuda, shape: [n, h, w] - colors (List[List[Int]]): colors for predicted masks, [[r, g, b] * n] - im_gpu (tensor): img is in cuda, shape: [3, h, w], range: [0, 1] - alpha (float): mask transparency: 0.0 fully transparent, 1.0 opaque - """ - if self.pil: - # convert to numpy first - self.im = np.asarray(self.im).copy() - if len(masks) == 0: - self.im[:] = im_gpu.permute(1, 2, 0).contiguous().cpu().numpy() * 255 - colors = torch.tensor(colors, device=im_gpu.device, dtype=torch.float32) / 255.0 - colors = colors[:, None, None] # shape(n,1,1,3) - masks = masks.unsqueeze(3) # shape(n,h,w,1) - masks_color = masks * (colors * alpha) # shape(n,h,w,3) - - inv_alph_masks = (1 - masks * alpha).cumprod(0) # shape(n,h,w,1) - mcs = (masks_color * inv_alph_masks).sum(0) * 2 # mask color summand shape(n,h,w,3) - - im_gpu = im_gpu.flip(dims=[0]) # flip channel - im_gpu = im_gpu.permute(1, 2, 0).contiguous() # shape(h,w,3) - im_gpu = im_gpu * inv_alph_masks[-1] + mcs - im_mask = (im_gpu * 255).byte().cpu().numpy() - self.im[:] = im_mask if retina_masks else scale_image(im_gpu.shape, im_mask, self.im.shape) - if self.pil: - # convert im back to PIL and update draw - self.fromarray(self.im) - - def rectangle(self, xy, fill=None, outline=None, width=1): - # Add rectangle to image (PIL-only) - self.draw.rectangle(xy, fill, outline, width) - - def text(self, xy, text, txt_color=(255, 255, 255), anchor='top'): - # Add text to image (PIL-only) - if anchor == 'bottom': # start y from font bottom - w, h = self.font.getsize(text) # text width, height - xy[1] += 1 - h - self.draw.text(xy, text, fill=txt_color, font=self.font) - - def fromarray(self, im): - # Update self.im from a numpy array - self.im = im if isinstance(im, Image.Image) else Image.fromarray(im) - self.draw = ImageDraw.Draw(self.im) - - def result(self): - # Return annotated image as array - return np.asarray(self.im) - - -def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detect/exp')): - """ - x: Features to be visualized - module_type: Module type - stage: Module stage within model - n: Maximum number of feature maps to plot - save_dir: Directory to save results - """ - if 'Detect' not in module_type: - batch, channels, height, width = x.shape # batch, channels, height, width - if height > 1 and width > 1: - f = save_dir / f"stage{stage}_{module_type.split('.')[-1]}_features.png" # filename - - blocks = torch.chunk(x[0].cpu(), channels, dim=0) # select batch index 0, block by channels - n = min(n, channels) # number of plots - fig, ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True) # 8 rows x n/8 cols - ax = ax.ravel() - plt.subplots_adjust(wspace=0.05, hspace=0.05) - for i in range(n): - ax[i].imshow(blocks[i].squeeze()) # cmap='gray' - ax[i].axis('off') - - LOGGER.info(f'Saving {f}... 
({n}/{channels})') - plt.savefig(f, dpi=300, bbox_inches='tight') - plt.close() - np.save(str(f.with_suffix('.npy')), x[0].cpu().numpy()) # npy save - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - from scipy.signal import butter, filtfilt - - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def output_to_target(output, max_det=300): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] for plotting - targets = [] - for i, o in enumerate(output): - box, conf, cls = o[:max_det, :6].cpu().split((4, 1, 1), 1) - j = torch.full((conf.shape[0], 1), i) - targets.append(torch.cat((j, cls, xyxy2xywh(box), conf), 1)) - return torch.cat(targets, 0).numpy() - - -@threaded -def plot_images(images, targets, paths=None, fname='images.jpg', names=None): - # Plot image grid with labels - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - - max_size = 1920 # max image size - max_subplots = 16 # max image subplots, i.e. 4x4 - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - if np.max(images[0]) <= 1: - images *= 255 # de-normalise (optional) - - # Build Image - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, im in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin - im = im.transpose(1, 2, 0) - mosaic[y:y + h, x:x + w, :] = im - - # Resize (optional) - scale = max_size / ns / max(h, w) - if scale < 1: - h = math.ceil(scale * h) - w = math.ceil(scale * w) - mosaic = cv2.resize(mosaic, tuple(int(x * ns) for x in (w, h))) - - # Annotate - fs = int((h + w) * ns * 0.01) # font size - annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True, example=names) - for i in range(i + 1): - x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin - annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2) # borders - if paths: - annotator.text((x + 5, y + 5), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames - if len(targets) > 0: - ti = targets[targets[:, 0] == i] # image targets - boxes = xywh2xyxy(ti[:, 2:6]).T - classes = ti[:, 1].astype('int') - labels = ti.shape[1] == 6 # labels if no conf column - conf = None if labels else ti[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale < 1: # absolute coords need scale if image scales - boxes *= scale - boxes[[0, 2]] += x - boxes[[1, 3]] += y - for j, box in 
enumerate(boxes.T.tolist()): - cls = classes[j] - color = colors(cls) - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 0.25 conf thresh - label = f'{cls}' if labels else f'{cls} {conf[j]:.1f}' - annotator.box_label(box, label, color=color) - annotator.im.save(fname) # save - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_val_txt(): # from utils.plots import *; plot_val() - # Plot val.txt histograms - x = np.loadtxt('val.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label=f'{x[i].mean():.3g} +/- {x[i].std():.3g}') - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_val_study() - # Plot file=study.txt generated by val.py (or plot all study*.txt in dir) - save_dir = Path(file).parent if file else Path(dir) - plot2 = False # plot additional results - if plot2: - ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - # for f in [save_dir / f'study_coco_{x}.txt' for x in ['yolov5n6', 'yolov5s6', 'yolov5m6', 'yolov5l6', 'yolov5x6']]: - for f in sorted(save_dir.glob('study*.txt')): - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - if plot2: - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_preprocess (ms/img)', 't_inference (ms/img)', 't_NMS (ms/img)'] - for i in range(7): - ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[5, 1:j], - y[3, 1:j] * 1E2, - '.-', - linewidth=2, - markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', - linewidth=2, - markersize=8, - alpha=.25, - label='EfficientDet') - - ax2.grid(alpha=0.2) - ax2.set_yticks(np.arange(20, 60, 5)) - ax2.set_xlim(0, 57) - ax2.set_ylim(25, 55) - ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - f = save_dir / 'study.png' - print(f'Saving {f}...') - plt.savefig(f, dpi=300) - - -@TryExcept() # known issue https://github.com/ultralytics/yolov5/issues/5395 -def plot_labels(labels, names=(), save_dir=Path('')): - # plot dataset 
labels - LOGGER.info(f"Plotting labels to {save_dir / 'labels.jpg'}... ") - c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sn.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - y = ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - with contextlib.suppress(Exception): # color histogram bars by class - [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)] # known issue #3195 - ax[0].set_ylabel('instances') - if 0 < len(names) < 30: - ax[0].set_xticks(range(len(names))) - ax[0].set_xticklabels(list(names.values()), rotation=90, fontsize=10) - else: - ax[0].set_xlabel('classes') - sn.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sn.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - for cls, *box in labels[:1000]: - ImageDraw.Draw(img).rectangle(box, width=1, outline=colors(cls)) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - -def imshow_cls(im, labels=None, pred=None, names=None, nmax=25, verbose=False, f=Path('images.jpg')): - # Show classification image grid with labels (optional) and predictions (optional) - from utils.augmentations import denormalize - - names = names or [f'class{i}' for i in range(1000)] - blocks = torch.chunk(denormalize(im.clone()).cpu().float(), len(im), - dim=0) # select batch index 0, block by channels - n = min(len(blocks), nmax) # number of plots - m = min(8, round(n ** 0.5)) # 8 x 8 default - fig, ax = plt.subplots(math.ceil(n / m), m) # 8 rows x n/8 cols - ax = ax.ravel() if m > 1 else [ax] - # plt.subplots_adjust(wspace=0.05, hspace=0.05) - for i in range(n): - ax[i].imshow(blocks[i].squeeze().permute((1, 2, 0)).numpy().clip(0.0, 1.0)) - ax[i].axis('off') - if labels is not None: - s = names[labels[i]] + (f'—{names[pred[i]]}' if pred is not None else '') - ax[i].set_title(s, fontsize=8, verticalalignment='top') - plt.savefig(f, dpi=300, bbox_inches='tight') - plt.close() - if verbose: - LOGGER.info(f"Saving {f}") - if labels is not None: - LOGGER.info('True: ' + ' '.join(f'{names[i]:3s}' for i in labels[:nmax])) - if pred is not None: - LOGGER.info('Predicted:' + ' '.join(f'{names[i]:3s}' for i in pred[:nmax])) - return f - - -def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots import *; plot_evolve() - # Plot evolve.csv hyp evolution results - evolve_csv = Path(evolve_csv) - data = pd.read_csv(evolve_csv) - keys = [x.strip() for x in data.columns] - x = data.values - f = fitness(x) - j = np.argmax(f) # max fitness index - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - print(f'Best results from row {j} of {evolve_csv}:') - for i, k in enumerate(keys[7:]): - v = x[:, 7 + i] - mu = v[j] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(v, f, 
c=hist2d(v, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title(f'{k} = {mu:.3g}', fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print(f'{k:>15}: {mu:.3g}') - f = evolve_csv.with_suffix('.png') # filename - plt.savefig(f, dpi=200) - plt.close() - print(f'Saved {f}') - - -def plot_results(file='path/to/results.csv', dir=''): - # Plot training results.csv. Usage: from utils.plots import *; plot_results('path/to/results.csv') - save_dir = Path(file).parent if file else Path(dir) - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - files = list(save_dir.glob('results*.csv')) - assert len(files), f'No results.csv files found in {save_dir.resolve()}, nothing to plot.' - for f in files: - try: - data = pd.read_csv(f) - s = [x.strip() for x in data.columns] - x = data.values[:, 0] - for i, j in enumerate([1, 2, 3, 4, 5, 8, 9, 10, 6, 7]): - y = data.values[:, j].astype('float') - # y[y == 0] = np.nan # don't show zero values - ax[i].plot(x, y, marker='.', label=f.stem, linewidth=2, markersize=8) - ax[i].set_title(s[j], fontsize=12) - # if j in [8, 9, 10]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - LOGGER.info(f'Warning: Plotting error for {f}: {e}') - ax[1].legend() - fig.savefig(save_dir / 'results.png', dpi=200) - plt.close() - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print(f'Warning: Plotting error for {f}; {e}') - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def save_one_box(xyxy, im, file=Path('im.jpg'), gain=1.02, pad=10, square=False, BGR=False, save=True): - # Save image crop as {file} with crop size multiple {gain} and {pad} pixels. 
Save and/or return crop - xyxy = torch.tensor(xyxy).view(-1, 4) - b = xyxy2xywh(xyxy) # boxes - if square: - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # attempt rectangle to square - b[:, 2:] = b[:, 2:] * gain + pad # box wh * gain + pad - xyxy = xywh2xyxy(b).long() - clip_boxes(xyxy, im.shape) - crop = im[int(xyxy[0, 1]):int(xyxy[0, 3]), int(xyxy[0, 0]):int(xyxy[0, 2]), ::(1 if BGR else -1)] - if save: - file.parent.mkdir(parents=True, exist_ok=True) # make directory - f = str(increment_path(file).with_suffix('.jpg')) - # cv2.imwrite(f, crop) # save BGR, https://github.com/ultralytics/yolov5/issues/7007 chroma subsampling issue - Image.fromarray(crop[..., ::-1]).save(f, quality=95, subsampling=0) # save RGB - return crop diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/phonemizers/zh_cn_phonemizer.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/phonemizers/zh_cn_phonemizer.py deleted file mode 100644 index 41480c417356fd941e71e3eff0099eb38ac7296a..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/phonemizers/zh_cn_phonemizer.py +++ /dev/null @@ -1,62 +0,0 @@ -from typing import Dict - -from TTS.tts.utils.text.chinese_mandarin.phonemizer import chinese_text_to_phonemes -from TTS.tts.utils.text.phonemizers.base import BasePhonemizer - -_DEF_ZH_PUNCS = "、.,[]()?!〽~『』「」【】" - - -class ZH_CN_Phonemizer(BasePhonemizer): - """🐸TTS Zh-Cn phonemizer using functions in `TTS.tts.utils.text.chinese_mandarin.phonemizer` - - Args: - punctuations (str): - Set of characters to be treated as punctuation. Defaults to `_DEF_ZH_PUNCS`. - - keep_puncs (bool): - If True, keep the punctuations after phonemization. Defaults to False. - - Example :: - - "这是,样本中文。" -> `d|ʒ|ø|4| |ʂ|ʏ|4| |,| |i|ɑ|ŋ|4|b|œ|n|3| |d|ʒ|o|ŋ|1|w|œ|n|2| |。` - - TODO: someone with Mandarin knowledge should check this implementation - """ - - language = "zh-cn" - - def __init__(self, punctuations=_DEF_ZH_PUNCS, keep_puncs=False, **kwargs): # pylint: disable=unused-argument - super().__init__(self.language, punctuations=punctuations, keep_puncs=keep_puncs) - - @staticmethod - def name(): - return "zh_cn_phonemizer" - - @staticmethod - def phonemize_zh_cn(text: str, separator: str = "|") -> str: - ph = chinese_text_to_phonemes(text, separator) - return ph - - def _phonemize(self, text, separator): - return self.phonemize_zh_cn(text, separator) - - @staticmethod - def supported_languages() -> Dict: - return {"zh-cn": "Chinese (China)"} - - def version(self) -> str: - return "0.0.1" - - def is_available(self) -> bool: - return True - - -# if __name__ == "__main__": -# text = "这是,样本中文。" -# e = ZH_CN_Phonemizer() -# print(e.supported_languages()) -# print(e.version()) -# print(e.language) -# print(e.name()) -# print(e.is_available()) -# print("`" + e.phonemize(text) + "`") diff --git a/spaces/arxify/RVC-beta-v2-0618/infer/infer-pm-index256.py b/spaces/arxify/RVC-beta-v2-0618/infer/infer-pm-index256.py deleted file mode 100644 index 66e38d49071994e9c850f7d75d0a3b2e5c79b0da..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/infer/infer-pm-index256.py +++ /dev/null @@ -1,199 +0,0 @@ -""" - -对源特征进行检索 -""" -import torch, pdb, os, parselmouth - -os.environ["CUDA_VISIBLE_DEVICES"] = "0" -import numpy as np -import soundfile as sf - -# from models import SynthesizerTrn256#hifigan_nonsf -# from infer_pack.models import SynthesizerTrn256NSF as SynthesizerTrn256#hifigan_nsf -from infer_pack.models import ( - SynthesizerTrnMs256NSFsid 
as SynthesizerTrn256, -) # hifigan_nsf - -# from infer_pack.models import SynthesizerTrnMs256NSFsid_sim as SynthesizerTrn256#hifigan_nsf -# from models import SynthesizerTrn256NSFsim as SynthesizerTrn256#hifigan_nsf -# from models import SynthesizerTrn256NSFsimFlow as SynthesizerTrn256#hifigan_nsf - - -from scipy.io import wavfile -from fairseq import checkpoint_utils - -# import pyworld -import librosa -import torch.nn.functional as F -import scipy.signal as signal - -# import torchcrepe -from time import time as ttime - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model_path = r"E:\codes\py39\vits_vc_gpu_train\hubert_base.pt" # -print("load model(s) from {}".format(model_path)) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", -) -model = models[0] -model = model.to(device) -model = model.half() -model.eval() - -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],183,256,is_half=True)#hifigan#512#256 -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],109,256,is_half=True)#hifigan#512#256 -net_g = SynthesizerTrn256( - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 183, - 256, - is_half=True, -) # hifigan#512#256#no_dropout -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,3,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],0)#ts3 -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2],512,[16,16,4],0)#hifigan-ps-sr -# -# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [5,5], 512, [15,15], 0)#ms -# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,10], 512, [16,16], 0)#idwt2 - -# weights=torch.load("infer/ft-mi_1k-noD.pt") -# weights=torch.load("infer/ft-mi-freeze-vocoder-flow-enc_q_1k.pt") -# weights=torch.load("infer/ft-mi-freeze-vocoder_true_1k.pt") -# weights=torch.load("infer/ft-mi-sim1k.pt") -weights = torch.load("infer/ft-mi-no_opt-no_dropout.pt") -print(net_g.load_state_dict(weights, strict=True)) - -net_g.eval().to(device) -net_g.half() - - -def get_f0(x, p_len, f0_up_key=0): - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = ( - parselmouth.Sound(x, 16000) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0 *= pow(2, f0_up_key / 12) - f0bak = f0.copy() - - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - # f0_mel[f0_mel > 188] = 188 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak - - -import faiss - -index = faiss.read_index("infer/added_IVF512_Flat_mi_baseline_src_feat.index") -big_npy = np.load("infer/big_src_feature_mi.npy") -ta0 = ta1 = ta2 = 0 -for idx, name in enumerate( 
- [ - "冬之花clip1.wav", - ] -): ## - wav_path = "todo-songs/%s" % name # - f0_up_key = -2 # - audio, sampling_rate = sf.read(wav_path) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - - feats = torch.from_numpy(audio).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.half().to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9, # layer 9 - } - if torch.cuda.is_available(): - torch.cuda.synchronize() - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - ####索引优化 - npy = feats[0].cpu().numpy().astype("float32") - D, I = index.search(npy, 1) - feats = ( - torch.from_numpy(big_npy[I.squeeze()].astype("float16")).unsqueeze(0).to(device) - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if torch.cuda.is_available(): - torch.cuda.synchronize() - t1 = ttime() - # p_len = min(feats.shape[1],10000,pitch.shape[0])#太大了爆显存 - p_len = min(feats.shape[1], 10000) # - pitch, pitchf = get_f0(audio, p_len, f0_up_key) - p_len = min(feats.shape[1], 10000, pitch.shape[0]) # 太大了爆显存 - if torch.cuda.is_available(): - torch.cuda.synchronize() - t2 = ttime() - feats = feats[:, :p_len, :] - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - p_len = torch.LongTensor([p_len]).to(device) - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - sid = torch.LongTensor([0]).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - with torch.no_grad(): - audio = ( - net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - .numpy() - ) # nsf - if torch.cuda.is_available(): - torch.cuda.synchronize() - t3 = ttime() - ta0 += t1 - t0 - ta1 += t2 - t1 - ta2 += t3 - t2 - # wavfile.write("ft-mi_1k-index256-noD-%s.wav"%name, 40000, audio)## - # wavfile.write("ft-mi-freeze-vocoder-flow-enc_q_1k-%s.wav"%name, 40000, audio)## - # wavfile.write("ft-mi-sim1k-%s.wav"%name, 40000, audio)## - wavfile.write("ft-mi-no_opt-no_dropout-%s.wav" % name, 40000, audio) ## - - -print(ta0, ta1, ta2) # diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/test/tryconnection.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/test/tryconnection.py deleted file mode 100644 index 9d3901a8c0449fcb3a2e560d7917643db25e0f31..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/test/tryconnection.py +++ /dev/null @@ -1,33 +0,0 @@ -remote = False # automatic testing of remote access has been removed here - - -def try_connection(verbose, *args, **kwargs): - import adodbapi - - dbconnect = adodbapi.connect - try: - s = dbconnect(*args, **kwargs) # connect to server - if verbose: - print("Connected to:", s.connection_string) - print("which has tables:", s.get_table_names()) - s.close() # thanks, it worked, goodbye - except adodbapi.DatabaseError as inst: - print(inst.args[0]) # should be the error message - print("***Failed getting connection using=", repr(args), repr(kwargs)) - return False, (args, kwargs), None - - print(" (successful)") - - return True, (args, kwargs, remote), dbconnect - - -def try_operation_with_expected_exception( - expected_exception_list, some_function, *args, 
**kwargs -): - try: - some_function(*args, **kwargs) - except expected_exception_list as e: - return True, e - except: - raise # an exception other than the expected occurred - return False, "The expected exception did not occur" diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_display.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_display.py deleted file mode 100644 index 9035ead7cdfaf5f3efb401d1f1485f5d7bc19e84..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_display.py +++ /dev/null @@ -1,69 +0,0 @@ -from contextlib import contextmanager - -import pytest - -import altair.vegalite.v3 as alt - - -@contextmanager -def check_render_options(**options): - """ - Context manager that will assert that alt.renderers.options are equivalent - to the given options in the IPython.display.display call - """ - import IPython.display - - def check_options(obj): - assert alt.renderers.options == options - - _display = IPython.display.display - IPython.display.display = check_options - try: - yield - finally: - IPython.display.display = _display - - -def test_check_renderer_options(): - # this test should pass - with check_render_options(): - from IPython.display import display - - display(None) - - # check that an error is appropriately raised if the test fails - with pytest.raises(AssertionError): - with check_render_options(foo="bar"): - from IPython.display import display - - display(None) - - -def test_display_options(): - chart = alt.Chart("data.csv").mark_point().encode(x="foo:Q") - - # check that there are no options by default - with check_render_options(): - chart.display() - - # check that display options are passed - with check_render_options(embed_options={"tooltip": False, "renderer": "canvas"}): - chart.display("canvas", tooltip=False) - - # check that above options do not persist - with check_render_options(): - chart.display() - - # check that display options augment rather than overwrite pre-set options - with alt.renderers.enable(embed_options={"tooltip": True, "renderer": "svg"}): - with check_render_options(embed_options={"tooltip": True, "renderer": "svg"}): - chart.display() - - with check_render_options( - embed_options={"tooltip": True, "renderer": "canvas"} - ): - chart.display("canvas") - - # check that above options do not persist - with check_render_options(): - chart.display() diff --git a/spaces/ashercn97/AsherTesting/extensions/api/blocking_api.py b/spaces/ashercn97/AsherTesting/extensions/api/blocking_api.py deleted file mode 100644 index edc6d8f41f3a742de550724a0403924e2753f001..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/extensions/api/blocking_api.py +++ /dev/null @@ -1,219 +0,0 @@ -import json -from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer -from threading import Thread - -from extensions.api.util import build_parameters, try_start_cloudflared -from modules import shared -from modules.chat import generate_chat_reply -from modules.LoRA import add_lora_to_model -from modules.models import load_model, unload_model -from modules.models_settings import (get_model_settings_from_yamls, - update_model_parameters) -from modules.text_generation import (encode, generate_reply, - stop_everything_event) -from modules.utils import get_available_models - - -def get_model_info(): - return { - 'model_name': shared.model_name, - 'lora_names': 
shared.lora_names, - # dump - 'shared.settings': shared.settings, - 'shared.args': vars(shared.args), - } - - -class Handler(BaseHTTPRequestHandler): - def do_GET(self): - if self.path == '/api/v1/model': - self.send_response(200) - self.end_headers() - response = json.dumps({ - 'result': shared.model_name - }) - - self.wfile.write(response.encode('utf-8')) - else: - self.send_error(404) - - def do_POST(self): - content_length = int(self.headers['Content-Length']) - body = json.loads(self.rfile.read(content_length).decode('utf-8')) - - if self.path == '/api/v1/generate': - self.send_response(200) - self.send_header('Content-Type', 'application/json') - self.end_headers() - - prompt = body['prompt'] - generate_params = build_parameters(body) - stopping_strings = generate_params.pop('stopping_strings') - generate_params['stream'] = False - - generator = generate_reply( - prompt, generate_params, stopping_strings=stopping_strings, is_chat=False) - - answer = '' - for a in generator: - answer = a - - response = json.dumps({ - 'results': [{ - 'text': answer - }] - }) - - self.wfile.write(response.encode('utf-8')) - - elif self.path == '/api/v1/chat': - self.send_response(200) - self.send_header('Content-Type', 'application/json') - self.end_headers() - - user_input = body['user_input'] - regenerate = body.get('regenerate', False) - _continue = body.get('_continue', False) - - generate_params = build_parameters(body, chat=True) - generate_params['stream'] = False - - generator = generate_chat_reply( - user_input, generate_params, regenerate=regenerate, _continue=_continue, loading_message=False) - - answer = generate_params['history'] - for a in generator: - answer = a - - response = json.dumps({ - 'results': [{ - 'history': answer - }] - }) - - self.wfile.write(response.encode('utf-8')) - - elif self.path == '/api/v1/stop-stream': - self.send_response(200) - self.send_header('Content-Type', 'application/json') - self.end_headers() - - stop_everything_event() - - response = json.dumps({ - 'results': 'success' - }) - - self.wfile.write(response.encode('utf-8')) - - elif self.path == '/api/v1/model': - self.send_response(200) - self.send_header('Content-Type', 'application/json') - self.end_headers() - - # by default return the same as the GET interface - result = shared.model_name - - # Actions: info, load, list, unload - action = body.get('action', '') - - if action == 'load': - model_name = body['model_name'] - args = body.get('args', {}) - print('args', args) - for k in args: - setattr(shared.args, k, args[k]) - - shared.model_name = model_name - unload_model() - - model_settings = get_model_settings_from_yamls(shared.model_name) - shared.settings.update(model_settings) - update_model_parameters(model_settings, initial=True) - - if shared.settings['mode'] != 'instruct': - shared.settings['instruction_template'] = None - - try: - shared.model, shared.tokenizer = load_model(shared.model_name) - if shared.args.lora: - add_lora_to_model(shared.args.lora) # list - - except Exception as e: - response = json.dumps({'error': {'message': repr(e)}}) - - self.wfile.write(response.encode('utf-8')) - raise e - - shared.args.model = shared.model_name - - result = get_model_info() - - elif action == 'unload': - unload_model() - shared.model_name = None - shared.args.model = None - result = get_model_info() - - elif action == 'list': - result = get_available_models() - - elif action == 'info': - result = get_model_info() - - response = json.dumps({ - 'result': result, - }) - - 
self.wfile.write(response.encode('utf-8')) - - elif self.path == '/api/v1/token-count': - self.send_response(200) - self.send_header('Content-Type', 'application/json') - self.end_headers() - - tokens = encode(body['prompt'])[0] - response = json.dumps({ - 'results': [{ - 'tokens': len(tokens) - }] - }) - - self.wfile.write(response.encode('utf-8')) - else: - self.send_error(404) - - def do_OPTIONS(self): - self.send_response(200) - self.end_headers() - - def end_headers(self): - self.send_header('Access-Control-Allow-Origin', '*') - self.send_header('Access-Control-Allow-Methods', '*') - self.send_header('Access-Control-Allow-Headers', '*') - self.send_header('Cache-Control', 'no-store, no-cache, must-revalidate') - super().end_headers() - - -def _run_server(port: int, share: bool = False): - address = '0.0.0.0' if shared.args.listen else '127.0.0.1' - - server = ThreadingHTTPServer((address, port), Handler) - - def on_start(public_url: str): - print(f'Starting non-streaming server at public url {public_url}/api') - - if share: - try: - try_start_cloudflared(port, max_attempts=3, on_start=on_start) - except Exception: - pass - else: - print( - f'Starting API at http://{address}:{port}/api') - - server.serve_forever() - - -def start_server(port: int, share: bool = False): - Thread(target=_run_server, args=[port, share], daemon=True).start() diff --git a/spaces/ashishabraham22/WATCHA-READIN/app.py b/spaces/ashishabraham22/WATCHA-READIN/app.py deleted file mode 100644 index 33cff5304a3edee3adb465128bce00d581efd39a..0000000000000000000000000000000000000000 --- a/spaces/ashishabraham22/WATCHA-READIN/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import gradio as gr -import tensorflow -from tensorflow.keras.models import load_model -import prepro -import numpy as np -import nltk - - -def classify(text): - nltk.download('stopwords') - model= load_model('nlp3.h5') - X= prepro.preprocess(text) - prediction = model.predict(np.array(X)) - # return prediction - if(prediction<=0.4): - return "Looks like you are reading negative content. Some words sound negative in context." - elif(prediction>0.4 and prediction<=0.6): - return "Sounds Neutral. Speaks generally and not biased towards any value." - else : - return "Sounds Positive. Giving a good impression to start reading this stuff. 
" - - -iface= gr.Interface( - inputs=[gr.inputs.Textbox(lines=5, label="Context", placeholder="Type a sentence or paragraph here.")], - outputs=[gr.outputs.Textbox(label="Prediction")], - fn=classify, - title='WATCHA-READIN', - theme='dark-peach' -) - -iface.launch() \ No newline at end of file diff --git a/spaces/augmentedimaginationhackathon/paperstocode/app.py b/spaces/augmentedimaginationhackathon/paperstocode/app.py deleted file mode 100644 index bb29e1b9f0c1602c7100c36a19c9e2d2496aca0f..0000000000000000000000000000000000000000 --- a/spaces/augmentedimaginationhackathon/paperstocode/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import streamlit as st -from io import StringIO -import os - -from retrieval.single_prompt import generate_code - - -st.title("Papers with Code") - -uploaded_file = st.file_uploader("Choose a file") - -if uploaded_file is not None: - col1, col2 = st.columns(2) - # To convert to a string based IO: - stringio = StringIO(uploaded_file.getvalue().decode("utf-8")) - # st.write(stringio) - - # To read file as string: - string_data = stringio.read() - # col1.header(len(string_data)) - - with st.expander("Show LaTeX"): - st.header("Paper Contents") - st.code(rf"""{string_data} """, language="latex") - - - bar = st.progress(0, "Generating Code") - code = "import torch" - for complete in range(5): - code += generate_code(string_data, model_name=os.environ["OPENAI_MODEL_NAME"], code=code) - bar.progress((complete + 1) * 20) - - with st.expander("Show Generated Code"): - st.header("Generated Code") - st.code(code) \ No newline at end of file diff --git a/spaces/awacke1/Generative-AI-Provider/README.md b/spaces/awacke1/Generative-AI-Provider/README.md deleted file mode 100644 index 25b12ac26637dcbd530a13bc13e138e613ed099a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Generative-AI-Provider/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: AIProvider-ChatGPT -emoji: ⚕️AIEPro👩‍⚕️ -colorFrom: gray -colorTo: red -sdk: static -pinned: false -license: mit -duplicated_from: awacke1/Generative-AI-EACN ---- -| No. 
| Service Type | CPT Code Range | Rules for Required Evidence of Medical Necessity | -|-----|-----------------------------|----------------------|--------------------------------------------------------| -| 1 | 🫀 Organ Transplant | 50300-50380 | Diagnosis📄, waiting list📃, physician referral👩‍⚕️ | -| 2 | 🦴 Spinal Fusion Surgery | 22532-22812 | Diagnosis📄, conservative treatment history📚, physician referral👩‍⚕️ | -| 3 | 🍔 Bariatric Surgery | 43644-43775 | BMI🏋️, documented weight loss attempts📉, physician referral👩‍⚕️, psychological evaluation🧠 | -| 4 | 🦵 Joint Replacement Surgery | 27130-27447 | Diagnosis📄, conservative treatment history📚, physician referral👩‍⚕️ | -| 5 | 💉 Chemotherapy | 96401-96549 | Cancer diagnosis🦠, treatment plan💊, medication💊, dosage💊, frequency💊 | -| 6 | ☢️ Radiation Therapy | 77261-77799 | Cancer diagnosis🦠, treatment plan💊, physician referral👩‍⚕️ | -| 7 | ❤️ Cardiac Surgery | 33010-33999 | Diagnosis📄, conservative treatment history📚, physician referral👩‍⚕️ | -| 8 | 🧊 Dialysis | 90935-90999 | Diagnosis of kidney disease🩸, treatment plan💊, physician referral👩‍⚕️ | -| 9 | 🫁 Gastrointestinal Surgery | 43620-44979 | Diagnosis📄, conservative treatment history📚, physician referral👩‍⚕️ | -| 10 | 🖼️ Advanced Imaging Services | 70450-72159 (CT), 70540-72198 (MRI) | Clinical history📚, prior relevant imaging📸, symptoms justification😷 | -| 11 | 🎯 Interventional Radiology | 37220-37235 | Diagnosis📄, conservative treatment history📚, physician referral👩‍⚕️ | -| 12 | 🛌 Sleep Study | 95800-95811 | Documented sleep disorder symptoms😴, sleep diary📘, physician referral👩‍⚕️ | -| 13 | 💉 Infusion Therapy | 96360-96549 | Diagnosis📄, medication💊, dosage💊, frequency💊, duration⏳ | -| 14 | 💊 Pain Management | 64400-64530 | Diagnosis📄, conservative treatment history📚, treatment plan💊 | -| 15 | ❤️ Cardiac Stress Test | 93015-93018 | Documented symptoms😷, cardiac risk factors❤️, physician referral👩‍⚕️ | -| 16 | 🫁 Pulmonary Function Test | 94010-94799 | Documented respiratory issues😷, physician referral👩‍⚕️ | diff --git a/spaces/awacke1/MTBenchmarkForChatGPTMetricsScoring/app.py b/spaces/awacke1/MTBenchmarkForChatGPTMetricsScoring/app.py deleted file mode 100644 index 9a2a8f07c6de2b71a7e19c2af61a1ca1b9d9594a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/MTBenchmarkForChatGPTMetricsScoring/app.py +++ /dev/null @@ -1,430 +0,0 @@ -""" -Usage: -python3 qa_browser.py --share -""" - -import argparse -from collections import defaultdict -import re - -import gradio as gr - -from common import ( - load_questions, - load_model_answers, - load_single_model_judgments, - load_pairwise_model_judgments, - resolve_single_judgment_dict, - resolve_pairwise_judgment_dict, - get_single_judge_explanation, - get_pairwise_judge_explanation, -) - - -questions = [] -model_answers = {} - -model_judgments_normal_single = {} -model_judgments_math_single = {} - -model_judgments_normal_pairwise = {} -model_judgments_math_pairwise = {} - -question_selector_map = {} -category_selector_map = defaultdict(list) - - -def display_question(category_selector, request: gr.Request): - choices = category_selector_map[category_selector] - return gr.Dropdown.update( - value=choices[0], - choices=choices, - ) - - -def display_pairwise_answer( - question_selector, model_selector1, model_selector2, request: gr.Request -): - q = question_selector_map[question_selector] - qid = q["question_id"] - - ans1 = model_answers[model_selector1][qid] - ans2 = model_answers[model_selector2][qid] - - chat_mds = 
pairwise_to_gradio_chat_mds(q, ans1, ans2) - gamekey = (qid, model_selector1, model_selector2) - - judgment_dict = resolve_pairwise_judgment_dict( - q, - model_judgments_normal_pairwise, - model_judgments_math_pairwise, - multi_turn=False, - ) - - explanation = ( - "##### Model Judgment (first turn)\n" - + get_pairwise_judge_explanation(gamekey, judgment_dict) - ) - - judgment_dict_turn2 = resolve_pairwise_judgment_dict( - q, - model_judgments_normal_pairwise, - model_judgments_math_pairwise, - multi_turn=True, - ) - - explanation_turn2 = ( - "##### Model Judgment (second turn)\n" - + get_pairwise_judge_explanation(gamekey, judgment_dict_turn2) - ) - - return chat_mds + [explanation] + [explanation_turn2] - - -def display_single_answer(question_selector, model_selector1, request: gr.Request): - q = question_selector_map[question_selector] - qid = q["question_id"] - - ans1 = model_answers[model_selector1][qid] - - chat_mds = single_to_gradio_chat_mds(q, ans1) - gamekey = (qid, model_selector1) - - judgment_dict = resolve_single_judgment_dict( - q, model_judgments_normal_single, model_judgments_math_single, multi_turn=False - ) - - explanation = "##### Model Judgment (first turn)\n" + get_single_judge_explanation( - gamekey, judgment_dict - ) - - judgment_dict_turn2 = resolve_single_judgment_dict( - q, model_judgments_normal_single, model_judgments_math_single, multi_turn=True - ) - - explanation_turn2 = ( - "##### Model Judgment (second turn)\n" - + get_single_judge_explanation(gamekey, judgment_dict_turn2) - ) - - return chat_mds + [explanation] + [explanation_turn2] - - -newline_pattern1 = re.compile("\n\n(\d+\. )") -newline_pattern2 = re.compile("\n\n(- )") - - -def post_process_answer(x): - """Fix Markdown rendering problems.""" - x = x.replace("\u2022", "- ") - x = re.sub(newline_pattern1, "\n\g<1>", x) - x = re.sub(newline_pattern2, "\n\g<1>", x) - return x - - -def pairwise_to_gradio_chat_mds(question, ans_a, ans_b, turn=None): - end = len(question["turns"]) if turn is None else turn + 1 - - mds = ["", "", "", "", "", "", ""] - for i in range(end): - base = i * 3 - if i == 0: - mds[base + 0] = "##### User\n" + question["turns"][i] - else: - mds[base + 0] = "##### User's follow-up question \n" + question["turns"][i] - mds[base + 1] = "##### Assistant A\n" + post_process_answer( - ans_a["choices"][0]["turns"][i].strip() - ) - mds[base + 2] = "##### Assistant B\n" + post_process_answer( - ans_b["choices"][0]["turns"][i].strip() - ) - - ref = question.get("reference", ["", ""]) - - ref_md = "" - if turn is None: - if ref[0] != "" or ref[1] != "": - mds[6] = f"##### Reference Solution\nQ1. {ref[0]}\nQ2. {ref[1]}" - else: - x = ref[turn] if turn < len(ref) else "" - if x: - mds[6] = f"##### Reference Solution\n{ref[turn]}" - else: - mds[6] = "" - return mds - - -def single_to_gradio_chat_mds(question, ans, turn=None): - end = len(question["turns"]) if turn is None else turn + 1 - - mds = ["", "", "", "", ""] - for i in range(end): - base = i * 2 - if i == 0: - mds[base + 0] = "##### User\n" + question["turns"][i] - else: - mds[base + 0] = "##### User's follow-up question \n" + question["turns"][i] - mds[base + 1] = "##### Assistant A\n" + post_process_answer( - ans["choices"][0]["turns"][i].strip() - ) - - ref = question.get("reference", ["", ""]) - - ref_md = "" - if turn is None: - if ref[0] != "" or ref[1] != "": - mds[4] = f"##### Reference Solution\nQ1. {ref[0]}\nQ2. 
{ref[1]}" - else: - x = ref[turn] if turn < len(ref) else "" - if x: - mds[4] = f"##### Reference Solution\n{ref[turn]}" - else: - mds[4] = "" - return mds - - -def build_question_selector_map(): - global question_selector_map, category_selector_map - - # Build question selector map - for q in questions: - preview = f"{q['question_id']}: " + q["turns"][0][:128] + "..." - question_selector_map[preview] = q - category_selector_map[q["category"]].append(preview) - - -def sort_models(models): - priority = { - "Llama-2-70b-chat": "aaaa", - "Llama-2-13b-chat": "aaab", - "Llama-2-7b-chat": "aaac", - } - - models = list(models) - models.sort(key=lambda x: priority.get(x, x)) - return models - - -def build_pairwise_browser_tab(): - global question_selector_map, category_selector_map - - models = sort_models(list(model_answers.keys())) - num_sides = 2 - num_turns = 2 - side_names = ["A", "B"] - - question_selector_choices = list(question_selector_map.keys()) - category_selector_choices = list(category_selector_map.keys()) - - # Selectors - with gr.Row(): - with gr.Column(scale=1, min_width=200): - category_selector = gr.Dropdown( - choices=category_selector_choices, label="Category", container=False - ) - with gr.Column(scale=100): - question_selector = gr.Dropdown( - choices=question_selector_choices, label="Question", container=False - ) - - model_selectors = [None] * num_sides - with gr.Row(): - for i in range(num_sides): - with gr.Column(): - if i == 0: - value = models[0] - else: - value = "gpt-3.5-turbo" - model_selectors[i] = gr.Dropdown( - choices=models, - value=value, - label=f"Model {side_names[i]}", - container=False, - ) - - # Conversation - chat_mds = [] - for i in range(num_turns): - chat_mds.append(gr.Markdown(elem_id=f"user_question_{i+1}")) - with gr.Row(): - for j in range(num_sides): - with gr.Column(scale=100): - chat_mds.append(gr.Markdown()) - - if j == 0: - with gr.Column(scale=1, min_width=8): - gr.Markdown() - reference = gr.Markdown(elem_id=f"reference") - chat_mds.append(reference) - - model_explanation = gr.Markdown(elem_id="model_explanation") - model_explanation2 = gr.Markdown(elem_id="model_explanation") - - # Callbacks - category_selector.change(display_question, [category_selector], [question_selector]) - question_selector.change( - display_pairwise_answer, - [question_selector] + model_selectors, - chat_mds + [model_explanation] + [model_explanation2], - ) - - for i in range(num_sides): - model_selectors[i].change( - display_pairwise_answer, - [question_selector] + model_selectors, - chat_mds + [model_explanation] + [model_explanation2], - ) - - return (category_selector,) - - -def build_single_answer_browser_tab(): - global question_selector_map, category_selector_map - - models = sort_models(list(model_answers.keys())) - num_sides = 1 - num_turns = 2 - side_names = ["A"] - - question_selector_choices = list(question_selector_map.keys()) - category_selector_choices = list(category_selector_map.keys()) - - # Selectors - with gr.Row(): - with gr.Column(scale=1, min_width=200): - category_selector = gr.Dropdown( - choices=category_selector_choices, label="Category", container=False - ) - with gr.Column(scale=100): - question_selector = gr.Dropdown( - choices=question_selector_choices, label="Question", container=False - ) - - model_selectors = [None] * num_sides - with gr.Row(): - for i in range(num_sides): - with gr.Column(): - model_selectors[i] = gr.Dropdown( - choices=models, - value=models[i] if len(models) > i else "", - label=f"Model {side_names[i]}", - 
container=False, - ) - - # Conversation - chat_mds = [] - for i in range(num_turns): - chat_mds.append(gr.Markdown(elem_id=f"user_question_{i+1}")) - with gr.Row(): - for j in range(num_sides): - with gr.Column(scale=100): - chat_mds.append(gr.Markdown()) - - if j == 0: - with gr.Column(scale=1, min_width=8): - gr.Markdown() - - reference = gr.Markdown(elem_id=f"reference") - chat_mds.append(reference) - - model_explanation = gr.Markdown(elem_id="model_explanation") - model_explanation2 = gr.Markdown(elem_id="model_explanation") - - # Callbacks - category_selector.change(display_question, [category_selector], [question_selector]) - question_selector.change( - display_single_answer, - [question_selector] + model_selectors, - chat_mds + [model_explanation] + [model_explanation2], - ) - - for i in range(num_sides): - model_selectors[i].change( - display_single_answer, - [question_selector] + model_selectors, - chat_mds + [model_explanation] + [model_explanation2], - ) - - return (category_selector,) - - -block_css = """ -#user_question_1 { - background-color: #DEEBF7; -} -#user_question_2 { - background-color: #E2F0D9; -} -#reference { - background-color: #FFF2CC; -} -#model_explanation { - background-color: #FBE5D6; -} -""" - - -def load_demo(): - dropdown_update = gr.Dropdown.update(value=list(category_selector_map.keys())[0]) - return dropdown_update, dropdown_update - - -def build_demo(): - build_question_selector_map() - - with gr.Blocks( - title="MT-Bench Browser", - theme=gr.themes.Base(text_size=gr.themes.sizes.text_lg), - css=block_css, - ) as demo: - gr.Markdown( - """ -# MT-Bench Browser -| [Paper](https://arxiv.org/abs/2306.05685) | [Code](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) | [Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) | -""" - ) - with gr.Tab("Single Answer Grading"): - (category_selector,) = build_single_answer_browser_tab() - with gr.Tab("Pairwise Comparison"): - (category_selector2,) = build_pairwise_browser_tab() - demo.load(load_demo, [], [category_selector, category_selector2]) - - return demo - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--host", type=str, default="0.0.0.0") - parser.add_argument("--port", type=int) - parser.add_argument("--share", action="store_true") - parser.add_argument("--bench-name", type=str, default="mt_bench") - args = parser.parse_args() - print(args) - - question_file = f"data/{args.bench_name}/question.jsonl" - answer_dir = f"data/{args.bench_name}/model_answer" - pairwise_model_judgment_file = ( - f"data/{args.bench_name}/model_judgment/gpt-4_pair.jsonl" - ) - single_model_judgment_file = ( - f"data/{args.bench_name}/model_judgment/gpt-4_single.jsonl" - ) - - # Load questions - questions = load_questions(question_file, None, None) - - # Load answers - model_answers = load_model_answers(answer_dir) - - # Load model judgments - model_judgments_normal_single = ( - model_judgments_math_single - ) = load_single_model_judgments(single_model_judgment_file) - model_judgments_normal_pairwise = ( - model_judgments_math_pairwise - ) = load_pairwise_model_judgments(pairwise_model_judgment_file) - - demo = build_demo() - demo.launch( - server_name=args.host, server_port=args.port, share=args.share, max_threads=200 - ) \ No newline at end of file diff --git a/spaces/awinml/api-instructor-xl-1/README.md b/spaces/awinml/api-instructor-xl-1/README.md deleted file mode 100644 index 
c9e384d7d512e6d982316be7b07fa374e7363d07..0000000000000000000000000000000000000000 --- a/spaces/awinml/api-instructor-xl-1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Api Instructor Xl 1 -emoji: 🚀 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/VertexNormalsHelper.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/helpers/VertexNormalsHelper.d.ts deleted file mode 100644 index 6757419a6fc741401c98e02c1b6a4ecc6a6a25be..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/VertexNormalsHelper.d.ts +++ /dev/null @@ -1,16 +0,0 @@ -import { Object3D } from './../core/Object3D'; -import { LineSegments } from './../objects/LineSegments'; - -export class VertexNormalsHelper extends LineSegments { - constructor( - object: Object3D, - size?: number, - hex?: number, - linewidth?: number - ); - - object: Object3D; - size: number; - - update(object?: Object3D): void; -} diff --git a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/sort/__init__.py b/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/sort/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/bielalpha/pixelparty-pixel-party-xl/README.md b/spaces/bielalpha/pixelparty-pixel-party-xl/README.md deleted file mode 100644 index 044579be5d3180250c2b5e8bc9e8f928a6e9e6e7..0000000000000000000000000000000000000000 --- a/spaces/bielalpha/pixelparty-pixel-party-xl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Pixelparty Pixel Party Xl -emoji: 💻 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/Bhool Bhulaiyaa movie 5 full movie download in hindi and mp4 The best way to enjoy the fifth installment of the franchise.md b/spaces/bioriAsaeru/text-to-voice/Bhool Bhulaiyaa movie 5 full movie download in hindi and mp4 The best way to enjoy the fifth installment of the franchise.md deleted file mode 100644 index e0b721277d78e5bf7aff9042f12b568b3129d148..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Bhool Bhulaiyaa movie 5 full movie download in hindi and mp4 The best way to enjoy the fifth installment of the franchise.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Bhool Bhulaiyaa movie 5 full movie download in hindi and mp4


      Download Zip » https://urloso.com/2uyOEl



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/bioriAsaeru/text-to-voice/Box Hako Save Gamel.md b/spaces/bioriAsaeru/text-to-voice/Box Hako Save Gamel.md deleted file mode 100644 index e6bcc28ff60c0c9c6d2079695fc545bd701eb657..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Box Hako Save Gamel.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Box Hako Save Gamel


      DOWNLOADhttps://urloso.com/2uyQDO



      -
      -... Works Needle Felted Rabbit Artist Chocolat Box Collection Of Art Works Japanese ... Hako 850b Repair Service Manual User · Repertory Of The Homeopathic Materia ... Why Save The Bankers And Other Essays On Our Economic And Political ... Mobile Computer Usability Wiredu Gamel · The Healthy Thyroid What You ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/bioriAsaeru/text-to-voice/Chatroulette Premium Token Generator V2.rar [UPDATED].md b/spaces/bioriAsaeru/text-to-voice/Chatroulette Premium Token Generator V2.rar [UPDATED].md deleted file mode 100644 index 27760abfde89d11218d18fa87ba22604440e9eba..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Chatroulette Premium Token Generator V2.rar [UPDATED].md +++ /dev/null @@ -1,43 +0,0 @@ - -

      Chatroulette Premium Token Generator v2: A Must-Have Tool for Chatroulette Lovers

      -

      Do you love using Chatroulette, the online platform that connects you with random strangers from all over the world? Do you want to enjoy the premium features of Chatroulette, such as choosing the gender and location of your chat partners, skipping ads, and getting more tokens? If yes, then you need Chatroulette Premium Token Generator v2, a tool that can generate unlimited tokens for your Chatroulette account. In this article, we will tell you what Chatroulette Premium Token Generator v2 is, how it works, and how you can download it for free.

      -

      What is Chatroulette Premium Token Generator v2?

      -

      Chatroulette Premium Token Generator v2 is a software program that can generate tokens for your Chatroulette account. Tokens are the currency used on Chatroulette to access the premium features and services. Normally, you have to buy tokens with real money or earn them by completing surveys and offers. However, with Chatroulette Premium Token Generator v2, you can get as many tokens as you want without spending a dime or wasting your time.

      -

      Chatroulette Premium Token Generator v2.rar


      Download ✑ ✑ ✑ https://urloso.com/2uyQvA



      -

      How does Chatroulette Premium Token Generator v2 work?

      -

      Chatroulette Premium Token Generator v2 works by exploiting a loophole in the Chatroulette system that allows it to generate valid tokens for any account. The program is safe, secure, and easy to use. All you have to do is follow these simple steps:

      -
1. Download Chatroulette Premium Token Generator v2 from the link provided below.
2. Extract the file using WinRAR or any other program that can open .rar files.
3. Run the program and enter your Chatroulette username or email.
4. Select the amount of tokens you want to generate (from 100 to 10,000).
5. Click on the "Generate" button and wait for a few seconds.
6. Check your Chatroulette account and enjoy your free tokens!
-

      How can you download Chatroulette Premium Token Generator v2 for free?

      -

If you want to download Chatroulette Premium Token Generator v2 for free, you can do so from various online sources. However, be careful of fake or malicious links that may harm your computer or steal your personal information. We recommend downloading Chatroulette Premium Token Generator v2 from the following link:

      -
• Easy-Game: This is a website that provides free hacks and cheats for various online games and platforms. You can download Chatroulette Premium Token Generator v2 from this website without any surveys or passwords.
-

      So what are you waiting for? Download Chatroulette Premium Token Generator v2 today and enjoy the premium features of Chatroulette!

      -

      What are the advantages of using Chatroulette Premium Token Generator v2?

      -

      Using Chatroulette Premium Token Generator v2 can have many advantages for you as a Chatroulette user. Some of them are:

      -
• You can save your money and time. You don't have to spend any money or waste any time to get tokens for your Chatroulette account. You can get them for free and instantly with Chatroulette Premium Token Generator v2.
• You can enjoy the premium features of Chatroulette. You can choose the gender and location of your chat partners, skip ads, and get more tokens. You can have more fun and satisfaction on Chatroulette with these features.
• You can improve your chances of finding a match. You can increase your chances of finding someone who shares your interests, preferences, and goals on Chatroulette. You can have more meaningful and enjoyable conversations with your chat partners.
• You can be safe and secure. You don't have to worry about any viruses, malware, or spyware that may harm your computer or steal your personal information. Chatroulette Premium Token Generator v2 is tested and verified by many users and experts.
-

      What are the disadvantages of using Chatroulette Premium Token Generator v2?

      -

      Using Chatroulette Premium Token Generator v2 can also have some disadvantages for you as a Chatroulette user. Some of them are:

      -

      -
• You may violate the terms and conditions of Chatroulette. You may break the rules and regulations of Chatroulette by using Chatroulette Premium Token Generator v2. You may risk getting banned or suspended from Chatroulette if you are caught using this tool.
• You may encounter some technical issues or errors. You may face some problems or glitches while using Chatroulette Premium Token Generator v2. You may need to update or reinstall the program if it stops working or crashes.
• You may lose the thrill and excitement of Chatroulette. You may lose the sense of adventure and spontaneity of Chatroulette by using Chatroulette Premium Token Generator v2. You may miss the fun and challenge of earning tokens by completing surveys and offers.
• You may become addicted to Chatroulette. You may become obsessed with using Chatroulette and its premium features by using Chatroulette Premium Token Generator v2. You may neglect your other responsibilities and activities in life.
-

      Conclusion

      -

      Chatroulette Premium Token Generator v2 is a tool that can generate unlimited tokens for your Chatroulette account. You can use these tokens to access the premium features and services of Chatroulette, such as choosing the gender and location of your chat partners, skipping ads, and getting more tokens. You can download this tool for free from various online sources and use it at your own risk. We hope that this article has helped you learn more about Chatroulette Premium Token Generator v2 and how you can use it to enhance your Chatroulette experience!

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Crazy Frog Axel F 1080p Torrent Get Ready to Laugh and Groove with this HD Download.md b/spaces/bioriAsaeru/text-to-voice/Crazy Frog Axel F 1080p Torrent Get Ready to Laugh and Groove with this HD Download.md deleted file mode 100644 index 89491c006cb84330a27fa21cec56d270cd9ce23d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Crazy Frog Axel F 1080p Torrent Get Ready to Laugh and Groove with this HD Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Crazy Frog Axel F 1080p Torrent


      Download File ———>>> https://urloso.com/2uyOT9



-
- aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/bioriAsaeru/text-to-voice/Dhoom 1 Tamil Dubbed Movie Free Downloadl Experience the Adrenaline Rush of the First Installment.md b/spaces/bioriAsaeru/text-to-voice/Dhoom 1 Tamil Dubbed Movie Free Downloadl Experience the Adrenaline Rush of the First Installment.md deleted file mode 100644 index ae9ae9854966951e93efa07adbdf03b296c61abd..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Dhoom 1 Tamil Dubbed Movie Free Downloadl Experience the Adrenaline Rush of the First Installment.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Dhoom 1 Tamil Dubbed Movie Free Downloadl


      Download Zip »»» https://urloso.com/2uyPaz



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/__init__.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/__init__.py deleted file mode 100644 index e1e1a5ba99e56a56ecaa14f7d4fa41777789c0cf..0000000000000000000000000000000000000000 --- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/bradarrML/magic-diffusion/share_btn.py b/spaces/bradarrML/magic-diffusion/share_btn.py deleted file mode 100644 index 1382fb25a5ef50e843598187e1e660e86ea8dd05..0000000000000000000000000000000000000000 --- a/spaces/bradarrML/magic-diffusion/share_btn.py +++ /dev/null @@ -1,88 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `magic-prompt-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `magic-prompt-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const imgEls = gradioEl.querySelectorAll('#generated-gallery img'); - const promptTxt = gradioEl.querySelector('#translated textarea').value; - let titleTxt = promptTxt; - if(titleTxt.length > 100){ - titleTxt = titleTxt.slice(0, 100) + ' ...'; - } - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls.length){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - }) - ); - const inputFile = await getInputImgFile(inputImgEl); - files.push(inputFile); - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const urlInputImg = urls.pop(); - const htmlImgs = urls.map(url => ``); - const htmlImgsMd = htmlImgs.join(`\n`); - const descriptionMd = `#### Input img: - -#### Caption: -${promptTxt} -#### Generations: -
      -${htmlImgsMd} -
      `; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/huggingface-projects/magic-diffusion/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/Makefile b/spaces/brainblow/AudioCreator_Music-Audio_Generation/Makefile deleted file mode 100644 index be8a8b03aa984ac5ed95c98e05887fe108dce073..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/Makefile +++ /dev/null @@ -1,40 +0,0 @@ -INTEG=AUDIOCRAFT_DORA_DIR="/tmp/magma_$(USER)" python3 -m dora -v run --clear device=cpu dataset.num_workers=0 optim.epochs=1 \ - dataset.train.num_samples=10 dataset.valid.num_samples=10 \ - dataset.evaluate.num_samples=10 dataset.generate.num_samples=2 sample_rate=16000 \ - logging.level=DEBUG -INTEG_COMPRESSION = $(INTEG) solver=compression/debug rvq.n_q=2 rvq.bins=48 checkpoint.save_last=true # SIG is 616d7b3c -INTEG_MUSICGEN = $(INTEG) solver=musicgen/debug dset=audio/example compression_model_checkpoint=//sig/5091833e \ - transformer_lm.n_q=2 transformer_lm.card=48 transformer_lm.dim=16 checkpoint.save_last=false # Using compression model from 616d7b3c -INTEG_AUDIOGEN = $(INTEG) solver=audiogen/debug dset=audio/example compression_model_checkpoint=//sig/5091833e \ - transformer_lm.n_q=2 transformer_lm.card=48 transformer_lm.dim=16 checkpoint.save_last=false # Using compression model from 616d7b3c -INTEG_MBD = $(INTEG) solver=diffusion/debug dset=audio/example \ - checkpoint.save_last=false # Using compression model from 616d7b3c - -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report - -tests_integ: - $(INTEG_COMPRESSION) - $(INTEG_MBD) - $(INTEG_MUSICGEN) - $(INTEG_AUDIOGEN) - - -api_docs: - pdoc3 --html -o api_docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests api_docs dist diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/light.py b/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/light.py deleted file mode 100644 index 333d9e4e553a245c259251a89b69cb46b73b1278..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/light.py +++ /dev/null @@ -1,385 +0,0 @@ -"""Punctual light sources as defined by the glTF 2.0 KHR extension at -https://github.com/KhronosGroup/glTF/tree/master/extensions/2.0/Khronos/KHR_lights_punctual - -Author: Matthew Matl -""" -import abc -import numpy as np -import six - -from OpenGL.GL import * - -from .utils import format_color_vector -from .texture import Texture -from .constants import SHADOW_TEX_SZ -from .camera import OrthographicCamera, PerspectiveCamera - - - -@six.add_metaclass(abc.ABCMeta) -class Light(object): - """Base class for all light objects. - - Parameters - ---------- - color : (3,) float - RGB value for the light's color in linear space. - intensity : float - Brightness of light. The units that this is defined in depend on the - type of light. Point and spot lights use luminous intensity in candela - (lm/sr), while directional lights use illuminance in lux (lm/m2). - name : str, optional - Name of the light. 
- """ - def __init__(self, - color=None, - intensity=None, - name=None): - - if color is None: - color = np.ones(3) - if intensity is None: - intensity = 1.0 - - self.name = name - self.color = color - self.intensity = intensity - self._shadow_camera = None - self._shadow_texture = None - - @property - def name(self): - """str : The user-defined name of this object. - """ - return self._name - - @name.setter - def name(self, value): - if value is not None: - value = str(value) - self._name = value - - @property - def color(self): - """(3,) float : The light's color. - """ - return self._color - - @color.setter - def color(self, value): - self._color = format_color_vector(value, 3) - - @property - def intensity(self): - """float : The light's intensity in candela or lux. - """ - return self._intensity - - @intensity.setter - def intensity(self, value): - self._intensity = float(value) - - @property - def shadow_texture(self): - """:class:`.Texture` : A texture used to hold shadow maps for this light. - """ - return self._shadow_texture - - @shadow_texture.setter - def shadow_texture(self, value): - if self._shadow_texture is not None: - if self._shadow_texture._in_context(): - self._shadow_texture.delete() - self._shadow_texture = value - - @abc.abstractmethod - def _generate_shadow_texture(self, size=None): - """Generate a shadow texture for this light. - - Parameters - ---------- - size : int, optional - Size of texture map. Must be a positive power of two. - """ - pass - - @abc.abstractmethod - def _get_shadow_camera(self, scene_scale): - """Generate and return a shadow mapping camera for this light. - - Parameters - ---------- - scene_scale : float - Length of scene's bounding box diagonal. - - Returns - ------- - camera : :class:`.Camera` - The camera used to render shadowmaps for this light. - """ - pass - - -class DirectionalLight(Light): - """Directional lights are light sources that act as though they are - infinitely far away and emit light in the direction of the local -z axis. - This light type inherits the orientation of the node that it belongs to; - position and scale are ignored except for their effect on the inherited - node orientation. Because it is at an infinite distance, the light is - not attenuated. Its intensity is defined in lumens per metre squared, - or lux (lm/m2). - - Parameters - ---------- - color : (3,) float, optional - RGB value for the light's color in linear space. Defaults to white - (i.e. [1.0, 1.0, 1.0]). - intensity : float, optional - Brightness of light, in lux (lm/m^2). Defaults to 1.0 - name : str, optional - Name of the light. - """ - - def __init__(self, - color=None, - intensity=None, - name=None): - super(DirectionalLight, self).__init__( - color=color, - intensity=intensity, - name=name, - ) - - def _generate_shadow_texture(self, size=None): - """Generate a shadow texture for this light. - - Parameters - ---------- - size : int, optional - Size of texture map. Must be a positive power of two. - """ - if size is None: - size = SHADOW_TEX_SZ - self.shadow_texture = Texture(width=size, height=size, - source_channels='D', data_format=GL_FLOAT) - - def _get_shadow_camera(self, scene_scale): - """Generate and return a shadow mapping camera for this light. - - Parameters - ---------- - scene_scale : float - Length of scene's bounding box diagonal. - - Returns - ------- - camera : :class:`.Camera` - The camera used to render shadowmaps for this light. 
- """ - return OrthographicCamera( - znear=0.01 * scene_scale, - zfar=10 * scene_scale, - xmag=scene_scale, - ymag=scene_scale - ) - - -class PointLight(Light): - """Point lights emit light in all directions from their position in space; - rotation and scale are ignored except for their effect on the inherited - node position. The brightness of the light attenuates in a physically - correct manner as distance increases from the light's position (i.e. - brightness goes like the inverse square of the distance). Point light - intensity is defined in candela, which is lumens per square radian (lm/sr). - - Parameters - ---------- - color : (3,) float - RGB value for the light's color in linear space. - intensity : float - Brightness of light in candela (lm/sr). - range : float - Cutoff distance at which light's intensity may be considered to - have reached zero. If None, the range is assumed to be infinite. - name : str, optional - Name of the light. - """ - - def __init__(self, - color=None, - intensity=None, - range=None, - name=None): - super(PointLight, self).__init__( - color=color, - intensity=intensity, - name=name, - ) - self.range = range - - @property - def range(self): - """float : The cutoff distance for the light. - """ - return self._range - - @range.setter - def range(self, value): - if value is not None: - value = float(value) - if value <= 0: - raise ValueError('Range must be > 0') - self._range = value - self._range = value - - def _generate_shadow_texture(self, size=None): - """Generate a shadow texture for this light. - - Parameters - ---------- - size : int, optional - Size of texture map. Must be a positive power of two. - """ - raise NotImplementedError('Shadows not implemented for point lights') - - def _get_shadow_camera(self, scene_scale): - """Generate and return a shadow mapping camera for this light. - - Parameters - ---------- - scene_scale : float - Length of scene's bounding box diagonal. - - Returns - ------- - camera : :class:`.Camera` - The camera used to render shadowmaps for this light. - """ - raise NotImplementedError('Shadows not implemented for point lights') - - -class SpotLight(Light): - """Spot lights emit light in a cone in the direction of the local -z axis. - The angle and falloff of the cone is defined using two numbers, the - ``innerConeAngle`` and ``outerConeAngle``. - As with point lights, the brightness - also attenuates in a physically correct manner as distance increases from - the light's position (i.e. brightness goes like the inverse square of the - distance). Spot light intensity refers to the brightness inside the - ``innerConeAngle`` (and at the location of the light) and is defined in - candela, which is lumens per square radian (lm/sr). A spot light's position - and orientation are inherited from its node transform. Inherited scale does - not affect cone shape, and is ignored except for its effect on position - and orientation. - - Parameters - ---------- - color : (3,) float - RGB value for the light's color in linear space. - intensity : float - Brightness of light in candela (lm/sr). - range : float - Cutoff distance at which light's intensity may be considered to - have reached zero. If None, the range is assumed to be infinite. - innerConeAngle : float - Angle, in radians, from centre of spotlight where falloff begins. - Must be greater than or equal to ``0`` and less - than ``outerConeAngle``. Defaults to ``0``. - outerConeAngle : float - Angle, in radians, from centre of spotlight where falloff ends. 
- Must be greater than ``innerConeAngle`` and less than or equal to - ``PI / 2.0``. Defaults to ``PI / 4.0``. - name : str, optional - Name of the light. - """ - - def __init__(self, - color=None, - intensity=None, - range=None, - innerConeAngle=0.0, - outerConeAngle=(np.pi / 4.0), - name=None): - super(SpotLight, self).__init__( - name=name, - color=color, - intensity=intensity, - ) - self.outerConeAngle = outerConeAngle - self.innerConeAngle = innerConeAngle - self.range = range - - @property - def innerConeAngle(self): - """float : The inner cone angle in radians. - """ - return self._innerConeAngle - - @innerConeAngle.setter - def innerConeAngle(self, value): - if value < 0.0 or value > self.outerConeAngle: - raise ValueError('Invalid value for inner cone angle') - self._innerConeAngle = float(value) - - @property - def outerConeAngle(self): - """float : The outer cone angle in radians. - """ - return self._outerConeAngle - - @outerConeAngle.setter - def outerConeAngle(self, value): - if value < 0.0 or value > np.pi / 2.0 + 1e-9: - raise ValueError('Invalid value for outer cone angle') - self._outerConeAngle = float(value) - - @property - def range(self): - """float : The cutoff distance for the light. - """ - return self._range - - @range.setter - def range(self, value): - if value is not None: - value = float(value) - if value <= 0: - raise ValueError('Range must be > 0') - self._range = value - self._range = value - - def _generate_shadow_texture(self, size=None): - """Generate a shadow texture for this light. - - Parameters - ---------- - size : int, optional - Size of texture map. Must be a positive power of two. - """ - if size is None: - size = SHADOW_TEX_SZ - self.shadow_texture = Texture(width=size, height=size, - source_channels='D', data_format=GL_FLOAT) - - def _get_shadow_camera(self, scene_scale): - """Generate and return a shadow mapping camera for this light. - - Parameters - ---------- - scene_scale : float - Length of scene's bounding box diagonal. - - Returns - ------- - camera : :class:`.Camera` - The camera used to render shadowmaps for this light. 
- """ - return PerspectiveCamera( - znear=0.01 * scene_scale, - zfar=10 * scene_scale, - yfov=np.clip(2 * self.outerConeAngle + np.pi / 16.0, 0.0, np.pi), - aspectRatio=1.0 - ) - - -__all__ = ['Light', 'DirectionalLight', 'SpotLight', 'PointLight'] diff --git a/spaces/caffeinum/VToonify/vtoonify/model/encoder/criteria/id_loss.py b/spaces/caffeinum/VToonify/vtoonify/model/encoder/criteria/id_loss.py deleted file mode 100644 index 37c71d3047be01ae7b301e0a96f14e2df88a143f..0000000000000000000000000000000000000000 --- a/spaces/caffeinum/VToonify/vtoonify/model/encoder/criteria/id_loss.py +++ /dev/null @@ -1,33 +0,0 @@ -import torch -from torch import nn -from model.encoder.encoders.model_irse import Backbone - - -class IDLoss(nn.Module): - def __init__(self, model_paths): - super(IDLoss, self).__init__() - print('Loading ResNet ArcFace') - self.facenet = Backbone(input_size=112, num_layers=50, drop_ratio=0.6, mode='ir_se') - self.facenet.load_state_dict(torch.load(model_paths)) - self.face_pool = torch.nn.AdaptiveAvgPool2d((112, 112)) - self.facenet.eval() - - def extract_feats(self, x): - x = x[:, :, 35:223, 32:220] # Crop interesting region - x = self.face_pool(x) - x_feats = self.facenet(x) - return x_feats - - def forward(self, y_hat, y): - n_samples = y_hat.shape[0] - y_feats = self.extract_feats(y) # Otherwise use the feature from there - y_hat_feats = self.extract_feats(y_hat) - y_feats = y_feats.detach() - loss = 0 - count = 0 - for i in range(n_samples): - diff_target = y_hat_feats[i].dot(y_feats[i]) - loss += 1 - diff_target - count += 1 - - return loss / count \ No newline at end of file diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/_deprecate.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/_deprecate.py deleted file mode 100644 index 2f2a3df13e312aed847e482a067c2c10e4fd5632..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/_deprecate.py +++ /dev/null @@ -1,69 +0,0 @@ -from __future__ import annotations - -import warnings - -from . import __version__ - - -def deprecate( - deprecated: str, - when: int | None, - replacement: str | None = None, - *, - action: str | None = None, - plural: bool = False, -) -> None: - """ - Deprecations helper. - - :param deprecated: Name of thing to be deprecated. - :param when: Pillow major version to be removed in. - :param replacement: Name of replacement. - :param action: Instead of "replacement", give a custom call to action - e.g. "Upgrade to new thing". - :param plural: if the deprecated thing is plural, needing "are" instead of "is". - - Usually of the form: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd). - Use [replacement] instead." - - You can leave out the replacement sentence: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd)" - - Or with another call to action: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd). - [action]." - """ - - is_ = "are" if plural else "is" - - if when is None: - removed = "a future version" - elif when <= int(__version__.split(".")[0]): - msg = f"{deprecated} {is_} deprecated and should be removed." - raise RuntimeError(msg) - elif when == 11: - removed = "Pillow 11 (2024-10-15)" - else: - msg = f"Unknown removal version: {when}. Update {__name__}?" 
- raise ValueError(msg) - - if replacement and action: - msg = "Use only one of 'replacement' and 'action'" - raise ValueError(msg) - - if replacement: - action = f". Use {replacement} instead." - elif action: - action = f". {action.rstrip('.')}." - else: - action = "" - - warnings.warn( - f"{deprecated} {is_} deprecated and will be removed in {removed}{action}", - DeprecationWarning, - stacklevel=3, - ) diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/multipart.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/multipart.py deleted file mode 100644 index 73801f459aa274ca6aae7bf28a2c5bb3bf075d11..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/multipart.py +++ /dev/null @@ -1,961 +0,0 @@ -import base64 -import binascii -import json -import re -import uuid -import warnings -import zlib -from collections import deque -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - AsyncIterator, - Deque, - Dict, - Iterator, - List, - Mapping, - Optional, - Sequence, - Tuple, - Type, - Union, - cast, -) -from urllib.parse import parse_qsl, unquote, urlencode - -from multidict import CIMultiDict, CIMultiDictProxy, MultiMapping - -from .hdrs import ( - CONTENT_DISPOSITION, - CONTENT_ENCODING, - CONTENT_LENGTH, - CONTENT_TRANSFER_ENCODING, - CONTENT_TYPE, -) -from .helpers import CHAR, TOKEN, parse_mimetype, reify -from .http import HeadersParser -from .payload import ( - JsonPayload, - LookupError, - Order, - Payload, - StringPayload, - get_payload, - payload_type, -) -from .streams import StreamReader - -__all__ = ( - "MultipartReader", - "MultipartWriter", - "BodyPartReader", - "BadContentDispositionHeader", - "BadContentDispositionParam", - "parse_content_disposition", - "content_disposition_filename", -) - - -if TYPE_CHECKING: # pragma: no cover - from .client_reqrep import ClientResponse - - -class BadContentDispositionHeader(RuntimeWarning): - pass - - -class BadContentDispositionParam(RuntimeWarning): - pass - - -def parse_content_disposition( - header: Optional[str], -) -> Tuple[Optional[str], Dict[str, str]]: - def is_token(string: str) -> bool: - return bool(string) and TOKEN >= set(string) - - def is_quoted(string: str) -> bool: - return string[0] == string[-1] == '"' - - def is_rfc5987(string: str) -> bool: - return is_token(string) and string.count("'") == 2 - - def is_extended_param(string: str) -> bool: - return string.endswith("*") - - def is_continuous_param(string: str) -> bool: - pos = string.find("*") + 1 - if not pos: - return False - substring = string[pos:-1] if string.endswith("*") else string[pos:] - return substring.isdigit() - - def unescape(text: str, *, chars: str = "".join(map(re.escape, CHAR))) -> str: - return re.sub(f"\\\\([{chars}])", "\\1", text) - - if not header: - return None, {} - - disptype, *parts = header.split(";") - if not is_token(disptype): - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - params: Dict[str, str] = {} - while parts: - item = parts.pop(0) - - if "=" not in item: - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - key, value = item.split("=", 1) - key = key.lower().strip() - value = value.lstrip() - - if key in params: - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - if not is_token(key): - warnings.warn(BadContentDispositionParam(item)) - continue - - elif is_continuous_param(key): - if is_quoted(value): - value = 
unescape(value[1:-1]) - elif not is_token(value): - warnings.warn(BadContentDispositionParam(item)) - continue - - elif is_extended_param(key): - if is_rfc5987(value): - encoding, _, value = value.split("'", 2) - encoding = encoding or "utf-8" - else: - warnings.warn(BadContentDispositionParam(item)) - continue - - try: - value = unquote(value, encoding, "strict") - except UnicodeDecodeError: # pragma: nocover - warnings.warn(BadContentDispositionParam(item)) - continue - - else: - failed = True - if is_quoted(value): - failed = False - value = unescape(value[1:-1].lstrip("\\/")) - elif is_token(value): - failed = False - elif parts: - # maybe just ; in filename, in any case this is just - # one case fix, for proper fix we need to redesign parser - _value = f"{value};{parts[0]}" - if is_quoted(_value): - parts.pop(0) - value = unescape(_value[1:-1].lstrip("\\/")) - failed = False - - if failed: - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - params[key] = value - - return disptype.lower(), params - - -def content_disposition_filename( - params: Mapping[str, str], name: str = "filename" -) -> Optional[str]: - name_suf = "%s*" % name - if not params: - return None - elif name_suf in params: - return params[name_suf] - elif name in params: - return params[name] - else: - parts = [] - fnparams = sorted( - (key, value) for key, value in params.items() if key.startswith(name_suf) - ) - for num, (key, value) in enumerate(fnparams): - _, tail = key.split("*", 1) - if tail.endswith("*"): - tail = tail[:-1] - if tail == str(num): - parts.append(value) - else: - break - if not parts: - return None - value = "".join(parts) - if "'" in value: - encoding, _, value = value.split("'", 2) - encoding = encoding or "utf-8" - return unquote(value, encoding, "strict") - return value - - -class MultipartResponseWrapper: - """Wrapper around the MultipartReader. - - It takes care about - underlying connection and close it when it needs in. - """ - - def __init__( - self, - resp: "ClientResponse", - stream: "MultipartReader", - ) -> None: - self.resp = resp - self.stream = stream - - def __aiter__(self) -> "MultipartResponseWrapper": - return self - - async def __anext__( - self, - ) -> Union["MultipartReader", "BodyPartReader"]: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - def at_eof(self) -> bool: - """Returns True when all response data had been read.""" - return self.resp.content.at_eof() - - async def next( - self, - ) -> Optional[Union["MultipartReader", "BodyPartReader"]]: - """Emits next multipart reader object.""" - item = await self.stream.next() - if self.stream.at_eof(): - await self.release() - return item - - async def release(self) -> None: - """Release the connection gracefully. - - All remaining content is read to the void. 
- """ - await self.resp.release() - - -class BodyPartReader: - """Multipart reader for single body part.""" - - chunk_size = 8192 - - def __init__( - self, boundary: bytes, headers: "CIMultiDictProxy[str]", content: StreamReader - ) -> None: - self.headers = headers - self._boundary = boundary - self._content = content - self._at_eof = False - length = self.headers.get(CONTENT_LENGTH, None) - self._length = int(length) if length is not None else None - self._read_bytes = 0 - # TODO: typeing.Deque is not supported by Python 3.5 - self._unread: Deque[bytes] = deque() - self._prev_chunk: Optional[bytes] = None - self._content_eof = 0 - self._cache: Dict[str, Any] = {} - - def __aiter__(self) -> AsyncIterator["BodyPartReader"]: - return self # type: ignore[return-value] - - async def __anext__(self) -> bytes: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - async def next(self) -> Optional[bytes]: - item = await self.read() - if not item: - return None - return item - - async def read(self, *, decode: bool = False) -> bytes: - """Reads body part data. - - decode: Decodes data following by encoding - method from Content-Encoding header. If it missed - data remains untouched - """ - if self._at_eof: - return b"" - data = bytearray() - while not self._at_eof: - data.extend(await self.read_chunk(self.chunk_size)) - if decode: - return self.decode(data) - return data - - async def read_chunk(self, size: int = chunk_size) -> bytes: - """Reads body part content chunk of the specified size. - - size: chunk size - """ - if self._at_eof: - return b"" - if self._length: - chunk = await self._read_chunk_from_length(size) - else: - chunk = await self._read_chunk_from_stream(size) - - self._read_bytes += len(chunk) - if self._read_bytes == self._length: - self._at_eof = True - if self._at_eof: - clrf = await self._content.readline() - assert ( - b"\r\n" == clrf - ), "reader did not read all the data or it is malformed" - return chunk - - async def _read_chunk_from_length(self, size: int) -> bytes: - # Reads body part content chunk of the specified size. - # The body part must has Content-Length header with proper value. - assert self._length is not None, "Content-Length required for chunked read" - chunk_size = min(size, self._length - self._read_bytes) - chunk = await self._content.read(chunk_size) - return chunk - - async def _read_chunk_from_stream(self, size: int) -> bytes: - # Reads content chunk of body part with unknown length. - # The Content-Length header for body part is not necessary. 
- assert ( - size >= len(self._boundary) + 2 - ), "Chunk size must be greater or equal than boundary length + 2" - first_chunk = self._prev_chunk is None - if first_chunk: - self._prev_chunk = await self._content.read(size) - - chunk = await self._content.read(size) - self._content_eof += int(self._content.at_eof()) - assert self._content_eof < 3, "Reading after EOF" - assert self._prev_chunk is not None - window = self._prev_chunk + chunk - sub = b"\r\n" + self._boundary - if first_chunk: - idx = window.find(sub) - else: - idx = window.find(sub, max(0, len(self._prev_chunk) - len(sub))) - if idx >= 0: - # pushing boundary back to content - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", category=DeprecationWarning) - self._content.unread_data(window[idx:]) - if size > idx: - self._prev_chunk = self._prev_chunk[:idx] - chunk = window[len(self._prev_chunk) : idx] - if not chunk: - self._at_eof = True - result = self._prev_chunk - self._prev_chunk = chunk - return result - - async def readline(self) -> bytes: - """Reads body part by line by line.""" - if self._at_eof: - return b"" - - if self._unread: - line = self._unread.popleft() - else: - line = await self._content.readline() - - if line.startswith(self._boundary): - # the very last boundary may not come with \r\n, - # so set single rules for everyone - sline = line.rstrip(b"\r\n") - boundary = self._boundary - last_boundary = self._boundary + b"--" - # ensure that we read exactly the boundary, not something alike - if sline == boundary or sline == last_boundary: - self._at_eof = True - self._unread.append(line) - return b"" - else: - next_line = await self._content.readline() - if next_line.startswith(self._boundary): - line = line[:-2] # strip CRLF but only once - self._unread.append(next_line) - - return line - - async def release(self) -> None: - """Like read(), but reads all the data to the void.""" - if self._at_eof: - return - while not self._at_eof: - await self.read_chunk(self.chunk_size) - - async def text(self, *, encoding: Optional[str] = None) -> str: - """Like read(), but assumes that body part contains text data.""" - data = await self.read(decode=True) - # see https://www.w3.org/TR/html5/forms.html#multipart/form-data-encoding-algorithm # NOQA - # and https://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#dom-xmlhttprequest-send # NOQA - encoding = encoding or self.get_charset(default="utf-8") - return data.decode(encoding) - - async def json(self, *, encoding: Optional[str] = None) -> Optional[Dict[str, Any]]: - """Like read(), but assumes that body parts contains JSON data.""" - data = await self.read(decode=True) - if not data: - return None - encoding = encoding or self.get_charset(default="utf-8") - return cast(Dict[str, Any], json.loads(data.decode(encoding))) - - async def form(self, *, encoding: Optional[str] = None) -> List[Tuple[str, str]]: - """Like read(), but assumes that body parts contain form urlencoded data.""" - data = await self.read(decode=True) - if not data: - return [] - if encoding is not None: - real_encoding = encoding - else: - real_encoding = self.get_charset(default="utf-8") - return parse_qsl( - data.rstrip().decode(real_encoding), - keep_blank_values=True, - encoding=real_encoding, - ) - - def at_eof(self) -> bool: - """Returns True if the boundary was reached or False otherwise.""" - return self._at_eof - - def decode(self, data: bytes) -> bytes: - """Decodes data. - - Decoding is done according the specified Content-Encoding - or Content-Transfer-Encoding headers value. 
- """ - if CONTENT_TRANSFER_ENCODING in self.headers: - data = self._decode_content_transfer(data) - if CONTENT_ENCODING in self.headers: - return self._decode_content(data) - return data - - def _decode_content(self, data: bytes) -> bytes: - encoding = self.headers.get(CONTENT_ENCODING, "").lower() - - if encoding == "deflate": - return zlib.decompress(data, -zlib.MAX_WBITS) - elif encoding == "gzip": - return zlib.decompress(data, 16 + zlib.MAX_WBITS) - elif encoding == "identity": - return data - else: - raise RuntimeError(f"unknown content encoding: {encoding}") - - def _decode_content_transfer(self, data: bytes) -> bytes: - encoding = self.headers.get(CONTENT_TRANSFER_ENCODING, "").lower() - - if encoding == "base64": - return base64.b64decode(data) - elif encoding == "quoted-printable": - return binascii.a2b_qp(data) - elif encoding in ("binary", "8bit", "7bit"): - return data - else: - raise RuntimeError( - "unknown content transfer encoding: {}" "".format(encoding) - ) - - def get_charset(self, default: str) -> str: - """Returns charset parameter from Content-Type header or default.""" - ctype = self.headers.get(CONTENT_TYPE, "") - mimetype = parse_mimetype(ctype) - return mimetype.parameters.get("charset", default) - - @reify - def name(self) -> Optional[str]: - """Returns name specified in Content-Disposition header. - - If the header is missing or malformed, returns None. - """ - _, params = parse_content_disposition(self.headers.get(CONTENT_DISPOSITION)) - return content_disposition_filename(params, "name") - - @reify - def filename(self) -> Optional[str]: - """Returns filename specified in Content-Disposition header. - - Returns None if the header is missing or malformed. - """ - _, params = parse_content_disposition(self.headers.get(CONTENT_DISPOSITION)) - return content_disposition_filename(params, "filename") - - -@payload_type(BodyPartReader, order=Order.try_first) -class BodyPartReaderPayload(Payload): - def __init__(self, value: BodyPartReader, *args: Any, **kwargs: Any) -> None: - super().__init__(value, *args, **kwargs) - - params: Dict[str, str] = {} - if value.name is not None: - params["name"] = value.name - if value.filename is not None: - params["filename"] = value.filename - - if params: - self.set_content_disposition("attachment", True, **params) - - async def write(self, writer: Any) -> None: - field = self._value - chunk = await field.read_chunk(size=2**16) - while chunk: - await writer.write(field.decode(chunk)) - chunk = await field.read_chunk(size=2**16) - - -class MultipartReader: - """Multipart body reader.""" - - #: Response wrapper, used when multipart readers constructs from response. - response_wrapper_cls = MultipartResponseWrapper - #: Multipart reader class, used to handle multipart/* body parts. - #: None points to type(self) - multipart_reader_cls = None - #: Body part reader class for non multipart/* content types. 
- part_reader_cls = BodyPartReader - - def __init__(self, headers: Mapping[str, str], content: StreamReader) -> None: - self.headers = headers - self._boundary = ("--" + self._get_boundary()).encode() - self._content = content - self._last_part: Optional[Union["MultipartReader", BodyPartReader]] = None - self._at_eof = False - self._at_bof = True - self._unread: List[bytes] = [] - - def __aiter__( - self, - ) -> AsyncIterator["BodyPartReader"]: - return self # type: ignore[return-value] - - async def __anext__( - self, - ) -> Optional[Union["MultipartReader", BodyPartReader]]: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - @classmethod - def from_response( - cls, - response: "ClientResponse", - ) -> MultipartResponseWrapper: - """Constructs reader instance from HTTP response. - - :param response: :class:`~aiohttp.client.ClientResponse` instance - """ - obj = cls.response_wrapper_cls( - response, cls(response.headers, response.content) - ) - return obj - - def at_eof(self) -> bool: - """Returns True if the final boundary was reached, false otherwise.""" - return self._at_eof - - async def next( - self, - ) -> Optional[Union["MultipartReader", BodyPartReader]]: - """Emits the next multipart body part.""" - # So, if we're at BOF, we need to skip till the boundary. - if self._at_eof: - return None - await self._maybe_release_last_part() - if self._at_bof: - await self._read_until_first_boundary() - self._at_bof = False - else: - await self._read_boundary() - if self._at_eof: # we just read the last boundary, nothing to do there - return None - self._last_part = await self.fetch_next_part() - return self._last_part - - async def release(self) -> None: - """Reads all the body parts to the void till the final boundary.""" - while not self._at_eof: - item = await self.next() - if item is None: - break - await item.release() - - async def fetch_next_part( - self, - ) -> Union["MultipartReader", BodyPartReader]: - """Returns the next body part reader.""" - headers = await self._read_headers() - return self._get_part_reader(headers) - - def _get_part_reader( - self, - headers: "CIMultiDictProxy[str]", - ) -> Union["MultipartReader", BodyPartReader]: - """Dispatches the response by the `Content-Type` header. - - Returns a suitable reader instance. 
- - :param dict headers: Response headers - """ - ctype = headers.get(CONTENT_TYPE, "") - mimetype = parse_mimetype(ctype) - - if mimetype.type == "multipart": - if self.multipart_reader_cls is None: - return type(self)(headers, self._content) - return self.multipart_reader_cls(headers, self._content) - else: - return self.part_reader_cls(self._boundary, headers, self._content) - - def _get_boundary(self) -> str: - mimetype = parse_mimetype(self.headers[CONTENT_TYPE]) - - assert mimetype.type == "multipart", "multipart/* content type expected" - - if "boundary" not in mimetype.parameters: - raise ValueError( - "boundary missed for Content-Type: %s" % self.headers[CONTENT_TYPE] - ) - - boundary = mimetype.parameters["boundary"] - if len(boundary) > 70: - raise ValueError("boundary %r is too long (70 chars max)" % boundary) - - return boundary - - async def _readline(self) -> bytes: - if self._unread: - return self._unread.pop() - return await self._content.readline() - - async def _read_until_first_boundary(self) -> None: - while True: - chunk = await self._readline() - if chunk == b"": - raise ValueError( - "Could not find starting boundary %r" % (self._boundary) - ) - chunk = chunk.rstrip() - if chunk == self._boundary: - return - elif chunk == self._boundary + b"--": - self._at_eof = True - return - - async def _read_boundary(self) -> None: - chunk = (await self._readline()).rstrip() - if chunk == self._boundary: - pass - elif chunk == self._boundary + b"--": - self._at_eof = True - epilogue = await self._readline() - next_line = await self._readline() - - # the epilogue is expected and then either the end of input or the - # parent multipart boundary, if the parent boundary is found then - # it should be marked as unread and handed to the parent for - # processing - if next_line[:2] == b"--": - self._unread.append(next_line) - # otherwise the request is likely missing an epilogue and both - # lines should be passed to the parent for processing - # (this handles the old behavior gracefully) - else: - self._unread.extend([next_line, epilogue]) - else: - raise ValueError(f"Invalid boundary {chunk!r}, expected {self._boundary!r}") - - async def _read_headers(self) -> "CIMultiDictProxy[str]": - lines = [b""] - while True: - chunk = await self._content.readline() - chunk = chunk.strip() - lines.append(chunk) - if not chunk: - break - parser = HeadersParser() - headers, raw_headers = parser.parse_headers(lines) - return headers - - async def _maybe_release_last_part(self) -> None: - """Ensures that the last read body part is read completely.""" - if self._last_part is not None: - if not self._last_part.at_eof(): - await self._last_part.release() - self._unread.extend(self._last_part._unread) - self._last_part = None - - -_Part = Tuple[Payload, str, str] - - -class MultipartWriter(Payload): - """Multipart body writer.""" - - def __init__(self, subtype: str = "mixed", boundary: Optional[str] = None) -> None: - boundary = boundary if boundary is not None else uuid.uuid4().hex - # The underlying Payload API demands a str (utf-8), not bytes, - # so we need to ensure we don't lose anything during conversion. - # As a result, require the boundary to be ASCII only. - # In both situations. 
- - try: - self._boundary = boundary.encode("ascii") - except UnicodeEncodeError: - raise ValueError("boundary should contain ASCII only chars") from None - ctype = f"multipart/{subtype}; boundary={self._boundary_value}" - - super().__init__(None, content_type=ctype) - - self._parts: List[_Part] = [] - - def __enter__(self) -> "MultipartWriter": - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - pass - - def __iter__(self) -> Iterator[_Part]: - return iter(self._parts) - - def __len__(self) -> int: - return len(self._parts) - - def __bool__(self) -> bool: - return True - - _valid_tchar_regex = re.compile(rb"\A[!#$%&'*+\-.^_`|~\w]+\Z") - _invalid_qdtext_char_regex = re.compile(rb"[\x00-\x08\x0A-\x1F\x7F]") - - @property - def _boundary_value(self) -> str: - """Wrap boundary parameter value in quotes, if necessary. - - Reads self.boundary and returns a unicode sting. - """ - # Refer to RFCs 7231, 7230, 5234. - # - # parameter = token "=" ( token / quoted-string ) - # token = 1*tchar - # quoted-string = DQUOTE *( qdtext / quoted-pair ) DQUOTE - # qdtext = HTAB / SP / %x21 / %x23-5B / %x5D-7E / obs-text - # obs-text = %x80-FF - # quoted-pair = "\" ( HTAB / SP / VCHAR / obs-text ) - # tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" - # / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" - # / DIGIT / ALPHA - # ; any VCHAR, except delimiters - # VCHAR = %x21-7E - value = self._boundary - if re.match(self._valid_tchar_regex, value): - return value.decode("ascii") # cannot fail - - if re.search(self._invalid_qdtext_char_regex, value): - raise ValueError("boundary value contains invalid characters") - - # escape %x5C and %x22 - quoted_value_content = value.replace(b"\\", b"\\\\") - quoted_value_content = quoted_value_content.replace(b'"', b'\\"') - - return '"' + quoted_value_content.decode("ascii") + '"' - - @property - def boundary(self) -> str: - return self._boundary.decode("ascii") - - def append(self, obj: Any, headers: Optional[MultiMapping[str]] = None) -> Payload: - if headers is None: - headers = CIMultiDict() - - if isinstance(obj, Payload): - obj.headers.update(headers) - return self.append_payload(obj) - else: - try: - payload = get_payload(obj, headers=headers) - except LookupError: - raise TypeError("Cannot create payload from %r" % obj) - else: - return self.append_payload(payload) - - def append_payload(self, payload: Payload) -> Payload: - """Adds a new body part to multipart writer.""" - # compression - encoding: Optional[str] = payload.headers.get( - CONTENT_ENCODING, - "", - ).lower() - if encoding and encoding not in ("deflate", "gzip", "identity"): - raise RuntimeError(f"unknown content encoding: {encoding}") - if encoding == "identity": - encoding = None - - # te encoding - te_encoding: Optional[str] = payload.headers.get( - CONTENT_TRANSFER_ENCODING, - "", - ).lower() - if te_encoding not in ("", "base64", "quoted-printable", "binary"): - raise RuntimeError( - "unknown content transfer encoding: {}" "".format(te_encoding) - ) - if te_encoding == "binary": - te_encoding = None - - # size - size = payload.size - if size is not None and not (encoding or te_encoding): - payload.headers[CONTENT_LENGTH] = str(size) - - self._parts.append((payload, encoding, te_encoding)) # type: ignore[arg-type] - return payload - - def append_json( - self, obj: Any, headers: Optional[MultiMapping[str]] = None - ) -> Payload: - """Helper to append JSON part.""" - if headers is None: - 
headers = CIMultiDict() - - return self.append_payload(JsonPayload(obj, headers=headers)) - - def append_form( - self, - obj: Union[Sequence[Tuple[str, str]], Mapping[str, str]], - headers: Optional[MultiMapping[str]] = None, - ) -> Payload: - """Helper to append form urlencoded part.""" - assert isinstance(obj, (Sequence, Mapping)) - - if headers is None: - headers = CIMultiDict() - - if isinstance(obj, Mapping): - obj = list(obj.items()) - data = urlencode(obj, doseq=True) - - return self.append_payload( - StringPayload( - data, headers=headers, content_type="application/x-www-form-urlencoded" - ) - ) - - @property - def size(self) -> Optional[int]: - """Size of the payload.""" - total = 0 - for part, encoding, te_encoding in self._parts: - if encoding or te_encoding or part.size is None: - return None - - total += int( - 2 - + len(self._boundary) - + 2 - + part.size # b'--'+self._boundary+b'\r\n' - + len(part._binary_headers) - + 2 # b'\r\n' - ) - - total += 2 + len(self._boundary) + 4 # b'--'+self._boundary+b'--\r\n' - return total - - async def write(self, writer: Any, close_boundary: bool = True) -> None: - """Write body.""" - for part, encoding, te_encoding in self._parts: - await writer.write(b"--" + self._boundary + b"\r\n") - await writer.write(part._binary_headers) - - if encoding or te_encoding: - w = MultipartPayloadWriter(writer) - if encoding: - w.enable_compression(encoding) - if te_encoding: - w.enable_encoding(te_encoding) - await part.write(w) # type: ignore[arg-type] - await w.write_eof() - else: - await part.write(writer) - - await writer.write(b"\r\n") - - if close_boundary: - await writer.write(b"--" + self._boundary + b"--\r\n") - - -class MultipartPayloadWriter: - def __init__(self, writer: Any) -> None: - self._writer = writer - self._encoding: Optional[str] = None - self._compress: Any = None - self._encoding_buffer: Optional[bytearray] = None - - def enable_encoding(self, encoding: str) -> None: - if encoding == "base64": - self._encoding = encoding - self._encoding_buffer = bytearray() - elif encoding == "quoted-printable": - self._encoding = "quoted-printable" - - def enable_compression( - self, encoding: str = "deflate", strategy: int = zlib.Z_DEFAULT_STRATEGY - ) -> None: - zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else -zlib.MAX_WBITS - self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy) - - async def write_eof(self) -> None: - if self._compress is not None: - chunk = self._compress.flush() - if chunk: - self._compress = None - await self.write(chunk) - - if self._encoding == "base64": - if self._encoding_buffer: - await self._writer.write(base64.b64encode(self._encoding_buffer)) - - async def write(self, chunk: bytes) -> None: - if self._compress is not None: - if chunk: - chunk = self._compress.compress(chunk) - if not chunk: - return - - if self._encoding == "base64": - buf = self._encoding_buffer - assert buf is not None - buf.extend(chunk) - - if buf: - div, mod = divmod(len(buf), 3) - enc_chunk, self._encoding_buffer = (buf[: div * 3], buf[div * 3 :]) - if enc_chunk: - b64chunk = base64.b64encode(enc_chunk) - await self._writer.write(b64chunk) - elif self._encoding == "quoted-printable": - await self._writer.write(binascii.b2a_qp(chunk)) - else: - await self._writer.write(chunk) diff --git a/spaces/captchaboy/FAST-ABINet-OCR/losses.py b/spaces/captchaboy/FAST-ABINet-OCR/losses.py deleted file mode 100644 index 1b718a9ce2dd125ccd2c45f112fb278a299f4a99..0000000000000000000000000000000000000000 --- 
a/spaces/captchaboy/FAST-ABINet-OCR/losses.py +++ /dev/null @@ -1,72 +0,0 @@ -from fastai.vision import * - -from modules.model import Model - - -class MultiLosses(nn.Module): - def __init__(self, one_hot=True): - super().__init__() - self.ce = SoftCrossEntropyLoss() if one_hot else torch.nn.CrossEntropyLoss() - self.bce = torch.nn.BCELoss() - - @property - def last_losses(self): - return self.losses - - def _flatten(self, sources, lengths): - return torch.cat([t[:l] for t, l in zip(sources, lengths)]) - - def _merge_list(self, all_res): - if not isinstance(all_res, (list, tuple)): - return all_res - def merge(items): - if isinstance(items[0], torch.Tensor): return torch.cat(items, dim=0) - else: return items[0] - res = dict() - for key in all_res[0].keys(): - items = [r[key] for r in all_res] - res[key] = merge(items) - return res - - def _ce_loss(self, output, gt_labels, gt_lengths, idx=None, record=True): - loss_name = output.get('name') - pt_logits, weight = output['logits'], output['loss_weight'] - - assert pt_logits.shape[0] % gt_labels.shape[0] == 0 - iter_size = pt_logits.shape[0] // gt_labels.shape[0] - if iter_size > 1: - gt_labels = gt_labels.repeat(3, 1, 1) - gt_lengths = gt_lengths.repeat(3) - flat_gt_labels = self._flatten(gt_labels, gt_lengths) - flat_pt_logits = self._flatten(pt_logits, gt_lengths) - - nll = output.get('nll') - if nll is not None: - loss = self.ce(flat_pt_logits, flat_gt_labels, softmax=False) * weight - else: - loss = self.ce(flat_pt_logits, flat_gt_labels) * weight - if record and loss_name is not None: self.losses[f'{loss_name}_loss'] = loss - - return loss - - def forward(self, outputs, *args): - self.losses = {} - if isinstance(outputs, (tuple, list)): - outputs = [self._merge_list(o) for o in outputs] - return sum([self._ce_loss(o, *args) for o in outputs if o['loss_weight'] > 0.]) - else: - return self._ce_loss(outputs, *args, record=False) - - -class SoftCrossEntropyLoss(nn.Module): - def __init__(self, reduction="mean"): - super().__init__() - self.reduction = reduction - - def forward(self, input, target, softmax=True): - if softmax: log_prob = F.log_softmax(input, dim=-1) - else: log_prob = torch.log(input) - loss = -(target * log_prob).sum(dim=-1) - if self.reduction == "mean": return loss.mean() - elif self.reduction == "sum": return loss.sum() - else: return loss diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/losses/registry.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/losses/registry.py deleted file mode 100644 index d9c8817a743e42b2aec382818f0cc1bb39a66004..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/losses/registry.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -from detectron2.utils.registry import Registry - -DENSEPOSE_LOSS_REGISTRY = Registry("DENSEPOSE_LOSS") diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_t_3x.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_t_3x.py deleted file mode 100644 index 51327dd9379b011c2d6cdc8299515b6df8112f4e..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_t_3x.py +++ /dev/null @@ -1,48 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.roi_heads import FastRCNNOutputLayers, FastRCNNConvFCHead, CascadeROIHeads -from detectron2.layers.batch_norm import NaiveSyncBatchNorm - -from .mask_rcnn_mvitv2_t_3x import model, dataloader, optimizer, lr_multiplier, train - - -# arguments that don't exist for Cascade R-CNN -[model.roi_heads.pop(k) for k in ["box_head", "box_predictor", "proposal_matcher"]] - -model.roi_heads.update( - _target_=CascadeROIHeads, - box_heads=[ - L(FastRCNNConvFCHead)( - input_shape=ShapeSpec(channels=256, height=7, width=7), - conv_dims=[256, 256, 256, 256], - fc_dims=[1024], - conv_norm=lambda c: NaiveSyncBatchNorm(c, stats_mode="N"), - ) - for _ in range(3) - ], - box_predictors=[ - L(FastRCNNOutputLayers)( - input_shape=ShapeSpec(channels=1024), - test_score_thresh=0.05, - box2box_transform=L(Box2BoxTransform)(weights=(w1, w1, w2, w2)), - cls_agnostic_bbox_reg=True, - num_classes="${...num_classes}", - ) - for (w1, w2) in [(10, 5), (20, 10), (30, 15)] - ], - proposal_matchers=[ - L(Matcher)(thresholds=[th], labels=[0, 1], allow_low_quality_matches=False) - for th in [0.5, 0.6, 0.7] - ], -) - -# Using NaiveSyncBatchNorm becase heads may have empty input. That is not supported by -# torch.nn.SyncBatchNorm. We can remove this after -# https://github.com/pytorch/pytorch/issues/36530 is fixed. 
-model.roi_heads.mask_head.conv_norm = lambda c: NaiveSyncBatchNorm(c, stats_mode="N") - -# 2conv in RPN: -# https://github.com/tensorflow/tpu/blob/b24729de804fdb751b06467d3dce0637fa652060/models/official/detection/modeling/architecture/heads.py#L95-L97 # noqa: E501, B950 -model.proposal_generator.head.conv_dims = [-1, -1] diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/VC_inference.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/VC_inference.py deleted file mode 100644 index a75676039114144a06fe57d19f6b20c8ec774668..0000000000000000000000000000000000000000 --- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/VC_inference.py +++ /dev/null @@ -1,139 +0,0 @@ -import os -import numpy as np -import torch -from torch import no_grad, LongTensor -import argparse -import commons -from mel_processing import spectrogram_torch -import utils -from models import SynthesizerTrn -import gradio as gr -import librosa -import webbrowser - -from text import text_to_sequence, _clean_text -device = "cuda:0" if torch.cuda.is_available() else "cpu" -language_marks = { - "Japanese": "", - "日本語": "[JA]", - "简体中文": "[ZH]", - "English": "[EN]", - "Mix": "", -} -lang = ['日本語', '简体中文', 'English', 'Mix'] -def get_text(text, hps, is_symbol): - text_norm = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - -def create_tts_fn(model, hps, speaker_ids): - def tts_fn(text, speaker, language, speed): - if language is not None: - text = language_marks[language] + text + language_marks[language] - speaker_id = speaker_ids[speaker] - stn_tst = get_text(text, hps, False) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - sid = LongTensor([speaker_id]).to(device) - audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, - length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy() - del stn_tst, x_tst, x_tst_lengths, sid - return "Success", (hps.data.sampling_rate, audio) - - return tts_fn - -def create_vc_fn(model, hps, speaker_ids): - def vc_fn(original_speaker, target_speaker, record_audio, upload_audio): - input_audio = record_audio if record_audio is not None else upload_audio - if input_audio is None: - return "You need to record or upload an audio", None - sampling_rate, audio = input_audio - original_speaker_id = speaker_ids[original_speaker] - target_speaker_id = speaker_ids[target_speaker] - - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != hps.data.sampling_rate: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=hps.data.sampling_rate) - with no_grad(): - y = torch.FloatTensor(audio) - y = y / max(-y.min(), y.max()) / 0.99 - y = y.to(device) - y = y.unsqueeze(0) - spec = spectrogram_torch(y, hps.data.filter_length, - hps.data.sampling_rate, hps.data.hop_length, hps.data.win_length, - center=False).to(device) - spec_lengths = LongTensor([spec.size(-1)]).to(device) - sid_src = LongTensor([original_speaker_id]).to(device) - sid_tgt = LongTensor([target_speaker_id]).to(device) - audio = model.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][ - 0, 0].data.cpu().float().numpy() - del y, spec, spec_lengths, sid_src, sid_tgt - return "Success", (hps.data.sampling_rate, audio) - - return vc_fn -if 
__name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./G_latest.pth", help="directory to your fine-tuned model") - parser.add_argument("--config_dir", default="./finetune_speaker.json", help="directory to your model config file") - parser.add_argument("--share", default=False, help="make link public (used in colab)") - - args = parser.parse_args() - hps = utils.get_hparams_from_file(args.config_dir) - - - net_g = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None) - speaker_ids = hps.speakers - speakers = list(hps.speakers.keys()) - tts_fn = create_tts_fn(net_g, hps, speaker_ids) - vc_fn = create_vc_fn(net_g, hps, speaker_ids) - app = gr.Blocks() - with app: - with gr.Tab("Text-to-Speech"): - with gr.Row(): - with gr.Column(): - textbox = gr.TextArea(label="Text", - placeholder="Type your sentence here", - value="こんにちわ。", elem_id=f"tts-input") - # select character - char_dropdown = gr.Dropdown(choices=speakers, value=speakers[0], label='character') - language_dropdown = gr.Dropdown(choices=lang, value=lang[0], label='language') - duration_slider = gr.Slider(minimum=0.1, maximum=5, value=1, step=0.1, - label='速度 Speed') - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio", elem_id="tts-audio") - btn = gr.Button("Generate!") - btn.click(tts_fn, - inputs=[textbox, char_dropdown, language_dropdown, duration_slider,], - outputs=[text_output, audio_output]) - with gr.Tab("Voice Conversion"): - gr.Markdown(""" - 录制或上传声音,并选择要转换的音色。 - """) - with gr.Column(): - record_audio = gr.Audio(label="record your voice", source="microphone") - upload_audio = gr.Audio(label="or upload audio here", source="upload") - source_speaker = gr.Dropdown(choices=speakers, value=speakers[0], label="source speaker") - target_speaker = gr.Dropdown(choices=speakers, value=speakers[0], label="target speaker") - with gr.Column(): - message_box = gr.Textbox(label="Message") - converted_audio = gr.Audio(label='converted audio') - btn = gr.Button("Convert!") - btn.click(vc_fn, inputs=[source_speaker, target_speaker, record_audio, upload_audio], - outputs=[message_box, converted_audio]) - webbrowser.open("http://127.0.0.1:7860") - app.launch(share=args.share) - diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/monotonic_align/__init__.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/cenji1109285052/img-to-music/app.py b/spaces/cenji1109285052/img-to-music/app.py deleted file mode 100644 index a325b27b8177f9bca294439724ec16c2da2f0169..0000000000000000000000000000000000000000 --- a/spaces/cenji1109285052/img-to-music/app.py +++ /dev/null @@ -1,163 +0,0 @@ -import time -import base64 -import gradio as gr -from sentence_transformers import SentenceTransformer - -import httpx -import json - -import os -import requests -import urllib - -from os import path -from pydub import AudioSegment - -#img_to_text = gr.Blocks.load(name="spaces/pharma/CLIP-Interrogator") -img_to_text = gr.Blocks.load(name="spaces/fffiloni/CLIP-Interrogator-2") - -from share_btn import community_icon_html, loading_icon_html, share_js - -def get_prompts(uploaded_image, track_duration, gen_intensity, gen_mode): - print("calling clip interrogator") - #prompt = img_to_text(uploaded_image, "ViT-L (best for Stable Diffusion 1.*)", "fast", fn_index=1)[0] - prompt = img_to_text(uploaded_image, 'fast', 4, fn_index=1)[0] - print(prompt) - music_result = generate_track_by_prompt(prompt, track_duration, gen_intensity, gen_mode) - print(music_result) - return music_result[0], gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -from utils import get_tags_for_prompts, get_mubert_tags_embeddings, get_pat - -minilm = SentenceTransformer('all-MiniLM-L6-v2') -mubert_tags_embeddings = get_mubert_tags_embeddings(minilm) - - -def get_track_by_tags(tags, pat, duration, gen_intensity, gen_mode, maxit=20): - - r = httpx.post('https://api-b2b.mubert.com/v2/RecordTrackTTM', - json={ - "method": "RecordTrackTTM", - "params": { - "pat": pat, - "duration": duration, - "format": "wav", - "intensity":gen_intensity, - "tags": tags, - "mode": gen_mode - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, rdata['error']['text'] - trackurl = rdata['data']['tasks'][0]['download_link'] - - print('Generating track ', end='') - for i in range(maxit): - r = httpx.get(trackurl) - if r.status_code == 200: - return trackurl - time.sleep(1) - - -def generate_track_by_prompt(prompt, duration, gen_intensity, gen_mode): - try: - pat = get_pat("prodia@prodia.com") - _, tags = get_tags_for_prompts(minilm, mubert_tags_embeddings, [prompt, ])[0] - result = get_track_by_tags(tags, pat, int(duration), gen_intensity, gen_mode) - print(result) - return result, ",".join(tags), "Success" - except Exception as e: - return None, "", str(e) - -def convert_mp3_to_wav(mp3_filepath): - - url = mp3_filepath - save_as = "file.mp3" - - data = urllib.request.urlopen(url) - - f = open(save_as,'wb') - f.write(data.read()) - f.close() - - wave_file="file.wav" - - sound = AudioSegment.from_mp3(save_as) - sound.export(wave_file, format="wav") - - return wave_file - -article = """ - - - -
      -

      You may also like:

      -
      - - - - - - - - - - -
      -
      - - -""" - -with gr.Blocks(css="style.css") as demo: - with gr.Column(elem_id="col-container"): - - gr.HTML("""
      -
      -

      - Image to Music -

      -
      -

      - Sends an image in to CLIP Interrogator - to generate a text prompt which is then run through - Mubert text-to-music to generate music from the input image! -

      -
      """) - - input_img = gr.Image(type="filepath", elem_id="input-img") - music_output = gr.Audio(label="Result", type="filepath", elem_id="music-output").style(height="5rem") - - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - - with gr.Accordion(label="Music Generation Options", open=False): - track_duration = gr.Slider(minimum=20, maximum=120, value=30, step=5, label="Track duration", elem_id="duration-inp") - with gr.Row(): - gen_intensity = gr.Dropdown(choices=["low", "medium", "high"], value="medium", label="Intensity") - gen_mode = gr.Radio(label="mode", choices=["track", "loop"], value="track") - - generate = gr.Button("Generate Music from Image") - - gr.HTML(article) - - generate.click(get_prompts, inputs=[input_img,track_duration,gen_intensity,gen_mode], outputs=[music_output, share_button, community_icon, loading_icon], api_name="i2m") - share_button.click(None, [], [], _js=share_js) - -demo.queue(max_size=32, concurrency_count=20).launch() \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/train/train_utils.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/train/train_utils.py deleted file mode 100644 index 8ccdaf049ba5092933a6c01bc28019cb55174b30..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/open_flamingo/train/train_utils.py +++ /dev/null @@ -1,387 +0,0 @@ -import time -from contextlib import suppress -import numpy as np - -import torch -from tqdm import tqdm -import datetime -import os -import gc -from torch.distributed.fsdp import ( - FullyShardedDataParallel as FSDP, - MixedPrecision, - BackwardPrefetch, - ShardingStrategy, - FullStateDictConfig, - StateDictType, -) -from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler -from torch.distributed.fsdp.wrap import ( - transformer_auto_wrap_policy, - enable_wrap, - wrap, -) - -from torch.utils.tensorboard import SummaryWriter -import logging -logging.basicConfig( - level=logging.INFO, - format='%(asctime)s %(message)s', - datefmt='%m/%d %I:%M:%S', -) - -def get_cast_dtype(precision: str): - cast_dtype = None - if precision == "bf16": - cast_dtype = torch.bfloat16 - elif precision == "fp16": - cast_dtype = torch.float16 - return cast_dtype - - -def get_autocast(precision): - if precision == "amp_fp16": - return lambda: torch.cuda.amp.autocast(dtype=torch.float16) - elif precision == "amp_bfloat16" or precision == "amp_bf16": - # amp_bfloat16 is more stable than amp float16 for clip training - return lambda: torch.cuda.amp.autocast(dtype=torch.bfloat16) - else: - return suppress - - -def get_sync(model, flag): - if flag: - return suppress - else: - return lambda: model.no_sync() - - -def train_one_epoch( - args, - model, - laion_loader, - pile_loader, - tokenizer, - optimizer, - lr_scheduler, - device_id, - writer: SummaryWriter, - optim_groups, - scaler, - total_laion_token: int, - total_pile_token: int, - total_laion_sample: int, - total_step: int, -): - world_size = torch.distributed.get_world_size() - autocast = get_autocast(args.precision) - cast_dtype = get_cast_dtype(args.precision) - - media_token_id = tokenizer("<|#image#|>", add_special_tokens=False)["input_ids"][-1] - endofmedia_token_id = tokenizer("<|#endofimage#|>", add_special_tokens=False)["input_ids"][-1] - visual_token_id = 
tokenizer("<|#visual#|>", add_special_tokens=False)["input_ids"][-1] - if args.add_box: - box_token_id = tokenizer("<|#box#|>", add_special_tokens=False)["input_ids"][-1] - endofobject_token_id = tokenizer("<|#endofobject#|>", add_special_tokens=False)["input_ids"][-1] - endofattr_token_id = tokenizer("<|#endofattr#|>", add_special_tokens=False)["input_ids"][-1] - if args.use_format_v2: - prebox_token_id = tokenizer("<|#prebox#|>", add_special_tokens=False)["input_ids"][-1] - previsual_token_id = tokenizer("<|#previsual#|>", add_special_tokens=False)["input_ids"][-1] - if args.rank == 0: - logging.info(f"train from: {total_step} step") - model.train() - # loop through dataloader - last_logging_step = total_step - last_save_step = total_step - for num_steps, (batch_laion, batch_pile) in tqdm( - enumerate(zip(laion_loader, pile_loader)), - disable=args.rank != 0 or "SLURM_PROCID" in os.environ, - total=args.num_steps * args.gradient_accumulation_steps, - initial=total_step * args.gradient_accumulation_steps, - ): - #### LAION FORWARD PASS #### - images = ( - batch_laion[0] - .to(device_id, dtype=cast_dtype, non_blocking=True) - .unsqueeze(1) - .unsqueeze(1) - ) - image_nums = batch_laion[1] - image_start_index_list = batch_laion[2] - - # TODO: OPT model: input_ids is not started with
      while input_ids2 is? - input_ids = batch_laion[3].to(device_id, non_blocking=True).long() - attention_mask = batch_laion[4].to(device_id, dtype=cast_dtype, non_blocking=True) - added_bbox_list = [x.to(device_id) for x in batch_laion[5]] # list object - total_laion_token += int(attention_mask.sum().long()) * world_size - total_laion_sample += sum(image_nums) * world_size - - labels = input_ids.clone() - if args.add_box: - labels[input_ids == visual_token_id] = -100 - labels[input_ids == box_token_id] = -100 - labels[input_ids == endofattr_token_id] = -100 - if args.use_format_v2: - labels[input_ids == previsual_token_id] = -100 - labels[input_ids == prebox_token_id] = -100 - labels[torch.roll(input_ids == prebox_token_id, 1)] = -100 - labels[torch.roll(input_ids == box_token_id, 1)] = -100 - labels[:, 0] = -100 - labels[input_ids == tokenizer.pad_token_id] = -100 - labels[input_ids == media_token_id] = -100 - labels[input_ids == endofmedia_token_id] = -100 - labels.to(device_id) - current_laion_num = input_ids.shape[0] - - #### PILE FORWARD PASS #### - if batch_pile is not None and batch_pile[0] is not None and batch_pile[1] is not None: - input_ids2 = batch_pile[0].to(device_id, non_blocking=True).long() - attention_mask2 = batch_pile[1].to(device_id, dtype=cast_dtype, non_blocking=True) - input_length = input_ids.shape[-1] - - input_ids2 = torch.cat([input_ids2, torch.ones((input_ids2.shape[0], input_length - input_ids2.shape[1]), device=input_ids2.device, dtype=input_ids2.dtype) * tokenizer.pad_token_id], dim=-1) - attention_mask2 = torch.cat([attention_mask2, torch.zeros((attention_mask2.shape[0], input_length - attention_mask2.shape[1]), device=attention_mask2.device, dtype=attention_mask2.dtype)], dim=-1) - - labels2 = input_ids2.clone() - labels2[labels2 == tokenizer.pad_token_id] = -100 - labels2[:, 0] = -100 - labels2.to(device_id) - - if (num_steps != 0 and num_steps % args.pile_freq == 0) or args.pile_freq == 1: - image_nums = image_nums + [0] * len(input_ids2) - image_start_index_list = image_start_index_list + [[]] * len(input_ids2) - input_ids = torch.cat([input_ids, input_ids2], dim=0) - attention_mask = torch.cat([attention_mask, attention_mask2], dim=0) - labels = torch.cat([labels, labels2], dim=0) - total_pile_token += int(attention_mask2.sum().long()) * world_size - else: - del input_ids2 - del attention_mask2 - del labels2 - - if args.instruct: - answer_token_id = tokenizer(" Answer").input_ids[0] - answer_token_loc = (input_ids == answer_token_id).nonzero() - for batch_idx, idx in answer_token_loc: - labels[batch_idx][:idx+2] = -100 - - if args.relation and not args.instruct: - relations = batch_laion[6] - else: - relations = None - if len(added_bbox_list) == 0: - added_bbox_list = None - update_flag = (num_steps != 0 and num_steps % args.gradient_accumulation_steps == 0) or args.gradient_accumulation_steps == 1 - # do_sync = get_sync(model, update_flag) - with autocast(): - # modify: - # /gpfs/u/home/LMCG/LMCGljnn/scratch/miniconda3-ppc64le/envs/unified/lib/python3.9/site-packages/transformers/models/codegen/modeling_codegen.py - # /gpfs/u/home/LMCG/LMCGljnn/scratch/miniconda3-ppc64le/envs/unified/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py - # CrossEntropyLoss(reduction="none") - outputs = model( - vision_x=images, - lang_x=input_ids, - attention_mask=attention_mask, - labels=labels, - image_nums=image_nums, - image_start_index_list=image_start_index_list, - added_bbox_list=added_bbox_list, - add_box=args.add_box, - relations=relations, 
- ) - loss_total = outputs.loss.reshape(labels.shape[0], -1) - loss_sample = loss_total.sum(-1) / (loss_total != 0).sum(-1) - loss_sample_for_laion = loss_sample[:current_laion_num] - nan_mask = torch.isnan(loss_sample_for_laion) - if nan_mask.sum() > 0: - logging.warning(f"caption NaN: {nan_mask}") - if nan_mask.sum() == len(loss_sample_for_laion) or not model.valid: - logging.info("WARNING: skip this caption loss due to some error") - loss_laion = torch.tensor(0.0).cuda() - else: - loss_laion = loss_sample_for_laion[~nan_mask].mean() - loss_caption = loss_laion - divided_loss_laion = loss_laion / args.gradient_accumulation_steps - if current_laion_num != loss_sample.shape[0]: - loss_pile = loss_sample[current_laion_num:].mean() - else: - loss_pile = torch.tensor(0.0).cuda() - divided_loss_pile = loss_pile / args.gradient_accumulation_steps - - if "detection_losses" in outputs: - loss_det = outputs["detection_losses"]["loss"] - loss_iou = outputs["detection_losses"]["loss_iou"] - loss_obj = outputs["detection_losses"]["loss_obj"] - loss_cls = outputs["detection_losses"]["loss_cls"] - else: - loss_det = torch.tensor(0.0).cuda() - loss_iou = torch.tensor(0.0).cuda() - loss_obj = torch.tensor(0.0).cuda() - loss_cls = torch.tensor(0.0).cuda() - - if "loss_dict" in outputs: - visual_loss_iou = outputs["loss_dict"][0]["loss_iou"] - previsual_loss_iou = outputs["loss_dict"][1]["loss_iou"] - visual_loss_obj = outputs["loss_dict"][0]["loss_obj"] - previsual_loss_obj = outputs["loss_dict"][1]["loss_obj"] - else: - visual_loss_iou = torch.tensor(0.0).cuda() - previsual_loss_iou = torch.tensor(0.0).cuda() - visual_loss_obj = torch.tensor(0.0).cuda() - previsual_loss_obj = torch.tensor(0.0).cuda() - - divided_loss_det = loss_det / args.gradient_accumulation_steps - loss_rel = outputs.get("rel_loss", torch.tensor(0.0).cuda()) - divided_loss_rel = loss_rel / args.gradient_accumulation_steps - loss = ( - divided_loss_laion * args.loss_multiplier_laion + - divided_loss_pile * args.loss_multiplier_pile + - divided_loss_det * args.loss_multiplier_det + - divided_loss_rel * args.loss_multiplier_rel - ) - - scaler.scale(loss).backward() - - # for logging only - loss = ( - loss_laion * args.loss_multiplier_laion - + loss_pile * args.loss_multiplier_pile - + loss_det * args.loss_multiplier_det - + loss_rel * args.loss_multiplier_rel - ).detach() - - # step optimizer and log - if update_flag: - #### MASK GRADIENTS FOR EMBEDDINGS #### - # Note (anas): Do not apply weight decay to embeddings as it will break this function. - # ! 
not an important point - # if args.ddp: - # def mask_embedding(m): - # if isinstance(m, torch.nn.Embedding) and m.weight.requires_grad: - # zero_mask = torch.zeros_like(m.weight.grad) - # zero_mask[media_token_id] = torch.ones_like(zero_mask[media_token_id]) - # zero_mask[endofmedia_token_id] = torch.ones_like(zero_mask[endofmedia_token_id]) - # m.weight.grad = m.weight.grad * zero_mask - # model.apply(mask_embedding) - total_step += 1 - scaler.unscale_(optimizer) - if args.ddp: - torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) - else: - model.clip_grad_norm_(1.0) - scaler.step(optimizer) - scaler.update() - lr_scheduler.step() - optimizer.zero_grad() - # https://github.com/facebookresearch/fairscale/issues/627 - model.zero_grad(set_to_none=True) - - if args.rank == 0 and total_step % args.logging_steps == 0 and total_step != last_logging_step: - last_logging_step = total_step - global_step = total_step - lr = optimizer.param_groups[0]["lr"] - writer.add_scalar("lr", lr, global_step) - writer.add_scalar("scale", scaler.get_scale(), global_step) - writer.add_scalar("loss_groundcaption", loss_laion.item(), global_step) - writer.add_scalar("loss_laion", loss_caption.item(), global_step) - writer.add_scalar("loss_pile", loss_pile.item(), global_step) - writer.add_scalar("loss", loss.item(), global_step) - writer.add_scalar("loss_det", loss_det.item(), global_step) - writer.add_scalar("loss_iou", loss_iou.item(), global_step) - writer.add_scalar("loss_obj", loss_obj.item(), global_step) - writer.add_scalar("loss_cls", loss_cls.item(), global_step) - if loss_rel.item() != 0: - writer.add_scalar("loss_rel", loss_rel.item(), global_step) - if args.use_format_v2: - writer.add_scalar("loss_iou_visual", visual_loss_iou.item(), global_step) - writer.add_scalar("loss_obj_visual", visual_loss_obj.item(), global_step) - writer.add_scalar("loss_iou_previsual", previsual_loss_iou.item(), global_step) - writer.add_scalar("loss_obj_previsual", previsual_loss_obj.item(), global_step) - - global_sample_num = total_laion_sample - writer.add_scalar("loss_groundcaption_vs_sample_num", loss_laion.item(), global_sample_num) - writer.add_scalar("loss_laion_vs_sample_num", loss_caption.item(), global_sample_num) - writer.add_scalar("loss_pile_vs_sample_num", loss_pile.item(), global_sample_num) - writer.add_scalar("loss_vs_sample_num", loss.item(), global_sample_num) - writer.add_scalar("loss_det_vs_sample_num", loss_det.item(), global_sample_num) - writer.add_scalar("loss_iou_vs_sample_num", loss_iou.item(), global_sample_num) - writer.add_scalar("loss_obj_vs_sample_num", loss_obj.item(), global_sample_num) - if loss_rel.item() != 0: - writer.add_scalar("loss_rel_vs_sample_num", loss_rel.item(), global_sample_num) - writer.add_scalar("lr_vs_sample_num", optimizer.param_groups[0]["lr"], global_sample_num) - - writer.add_scalar("loss_groundcaption_vs_token", loss_laion.item(), total_laion_token) - writer.add_scalar("loss_laion_vs_token", loss_caption.item(), total_laion_token) - writer.add_scalar("loss_pile_vs_token", loss_pile.item(), total_pile_token) - writer.add_scalar("loss_det_vs_token", loss_det.item(), total_laion_token) - writer.add_scalar("loss_iou_vs_token", loss_iou.item(), total_laion_token) - writer.add_scalar("loss_obj_vs_token", loss_obj.item(), total_laion_token) - writer.add_scalar("loss_cls_vs_token", loss_cls.item(), total_laion_token) - if loss_rel.item() != 0: - writer.add_scalar("loss_rel_vs_token", loss_rel.item(), total_laion_token) - - total_token = total_laion_token + 
total_pile_token - writer.add_scalar("sample_num", global_sample_num, global_step) - writer.add_scalar("total_laion_token", total_laion_token, global_step) - writer.add_scalar("total_pile_token", total_pile_token, global_step) - writer.add_scalar("total_token", total_token, global_step) - logging.info( - f"[{global_step}][{total_laion_sample}][{total_token}]. total: {loss.item():.3f} // laion: {loss_caption.item():.3f} // pile: {loss_pile.item():.3f} // iou: {loss_iou.item():.4f} // obj: {loss_obj.item():.4f} // previsual_obj: {previsual_loss_obj.item():.4f} // visual_obj: {visual_loss_obj.item():.4f} // previsual_iou: {previsual_loss_iou.item():.4f} // visual_iou: {visual_loss_iou.item():.4f} // lr: {lr:.2e} // scale: {scaler.get_scale()}" - ) - - if total_step % args.save_interval == 0 and total_step != last_save_step: - last_save_step = total_step - torch.distributed.barrier() - if args.ddp: - cpu_state = model.state_dict() - # if args.rank == 0: - # optimizer_state = optimizer.state_dict() - else: - save_policy = FullStateDictConfig(offload_to_cpu=True, rank0_only=True) - with FSDP.state_dict_type( - model, StateDictType.FULL_STATE_DICT, save_policy - ): - cpu_state = model.state_dict() - torch.distributed.barrier() - # https://pytorch.org/docs/1.12/fsdp.html - # need to pass optim_groups as optim_input - # optimizer_state = FSDP.full_optim_state_dict(model, optimizer, optim_input=optim_groups) - if args.rank == 0: - checkpoint_dict = { - "model_state_dict": cpu_state, - # "optimizer_state_dict": optimizer_state, - "lr_scheduler_state_dict": lr_scheduler.state_dict(), - "scaler_state_dict": scaler.state_dict(), - "total_pile_token": total_pile_token, - "total_laion_token": total_laion_token, - "total_laion_sample": total_laion_sample, - "total_step": total_step, - } - logging.info(f"Saving checkpoint to {args.run_name}/checkpoint_{total_step}.pt") - torch.save(checkpoint_dict, f"{args.run_name}/checkpoint_{total_step}.pt") - del checkpoint_dict - if args.delete_previous_checkpoint and total_step-args.save_interval > 0 and (total_step-args.save_interval) % args.skip_delete_pattern != 0: - try: - os.remove(f"{args.run_name}/checkpoint_{total_step-args.save_interval}.pt") - except: - pass - torch.distributed.barrier() - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count diff --git a/spaces/cihyFjudo/fairness-paper-search/Events in place for 31st annual International Week Explore the world without leaving your home.md b/spaces/cihyFjudo/fairness-paper-search/Events in place for 31st annual International Week Explore the world without leaving your home.md deleted file mode 100644 index 56a1dac28163bfd1483c9aca08af14c80b4de82c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Events in place for 31st annual International Week Explore the world without leaving your home.md +++ /dev/null @@ -1,24 +0,0 @@ - -

      Artists work on finishing their sculptures during the 31st annual International Snow Sculpture Championships in Breckenridge, Colo., on Thursday, Jan. 27, 2022. This year, nine teams from around the world are crafting 12-foot, 25-ton blocks of solid snow into intricate sculptures. Viewing begins on Friday, Jan. 28 and runs through Feb. 2, 2022. (Chancey Bush /The Gazette)

      -

In short, the live Law of Product Distribution & Franchise Seminar is back in 2022, hosting its 31st annual event in three Foley office locations, from 8:30 a.m. to 4:30 p.m. local time (Central):

      -

      Events in place for 31st annual International Week


      Download ►►►►► https://tinurli.com/2uwkRY



      -

      ORLANDO, Fla., Oct. 16, 2022 /PRNewswire/ -- Honeywell's (NASDAQ: HON) 31st annual Global Business Aviation Outlook forecasts up to 8,500 new business jet deliveries worth $274 billion from 2023 to 2032, which is up 15% in both deliveries and expenditures from the same 10-year forecast a year ago. This year, surveyed operators reported new jet purchase plans on par with 2019 levels, with fleet addition rates doubling from last year's reported intentions. Respondents' feedback in this year's survey aligns with industry reports of sold-out business jet production lines for the next several years.

      -

      Cruise on down to Ocean City Maryland for the 32nd Annual Cruisin Ocean City, May 18-19-20-21, 2023. This event will feature over 3,000 hot rods, customs, classics and trucks. While the main events will take place at the beautiful beachside Inlet Parking lot, which is located along the historic OC boardwalk, and at the Ocean City Convention Center, there will be various car shows citywide. There will also be entertainment, boardwalk parades, special guests, featured cars, live music, celebrities and more.

      -

      Earth Day/Week
      Celebrate the greenest time of the year in New York City by attending one of the many events that go on all week. Encouraging locals and visitors to be Earth friendly in every part of their lives, the City hosts art exhibitions, educational forums, entertainment and outdoor events in the parks.

      -

      Cherry Blossom Festival
      Each spring, more than 200 cherry trees at the Brooklyn Botanic Garden are in full bloom. To celebrate, the garden hosts the weekend-long Cherry Blossom Festival, known by its Japanese name Sakura Matsuri. During the festival, visitors enjoy scores of events celebrating Japanese culture including J-pop concerts, traditional Japanese music and dance, taiko drumming, martial arts, bonsai-pruning workshops, tea ceremonies and manga art.

      -

      Rockefeller Center Tree Lighting
      The Christmas tree lighting at Rockefeller Center, which takes place the first Wednesday after Thanksgiving, heralds the holiday season in New York City. Brave the cold in the weeks afterward to see the giant tree adorned with tens of thousands of multicolored lights. The tree remains lit through the first week of the new year.

      -

      Curious about when your favorite event is taking place? Search our event calendar by the type of event for a complete listing. If you've already scheduled your Houston excursion, see what events will be happening by searching our calendar of events. With so many awesome events throughout the year, there's always something special happening in Houston, Texas!

      -

      The 31st International Conference on Antiviral Research (ICAR), hosted by the International Society for Antiviral Research (ISAR), took place at the Alfândega Congress Centre in Porto, Portugal. The conference started on Monday, June 11, 2018, and concluded on Friday, June 15, 2018.

      -

The International Society for Antiviral Research (ISAR) is an internationally recognized organization for scientists involved in basic, applied, and clinical aspects of antiviral research. The Society's main event is the annual International Conference on Antiviral Research (ICAR), a truly interdisciplinary meeting which attracts the interest of chemists, biologists, and clinicians.

      -

      -

From festivals every weekend in the summer months to light displays throughout the city in the winter, there's always something happening in Columbus! Check out these highlights, or search our interactive events calendar for more.

      -

      Columbus Greek Festival | Summer 2023, Annunciation Greek Orthodox Cathedral
      This annual celebration of Greek culture and heritage takes place over Labor Day Weekend. Become immersed in Greek culture through live performances, Cathedral tours, authentic food, and more. Learn how to Live Greek for a weekend here.

      -

      All-American Quarter Horse Congress | Fall 2023, Ohio Expo Center
      The largest single-breed horse show in America takes place in Columbus each year, stretching for three weeks and welcoming more than 725,000 guests. Find the best riding gear and watch fun competitions like barrel racing.

      -

      In 1979, the General Assembly adopted a programme of activities to be undertaken during the second half of the Decade for Action to Combat Racism and Racial Discrimination. On that occasion, the General Assembly decided that a week of solidarity with the peoples struggling against racism and racial discrimination, beginning on 21 March, would be organized annually in all States.

      -

      The United Nations has been concerned with this issue since its foundation and the prohibition of racial discrimination is enshrined in all core international human rights instruments. It places obligations on States and tasks them with eradicating discrimination in the public and private spheres. The principle of equality also requires States to adopt special measures to eliminate conditions that cause or help to perpetuate racial discrimination.

      -

      International days and weeks are occasions to educate the public on issues of concern, to mobilize political will and resources to address global problems, and to celebrate and reinforce achievements of humanity. The existence of international days predates the establishment of the United Nations, but the UN has embraced them as a powerful advocacy tool. We also mark other UN observances.

      -

The 4th Year 5K is a longstanding UVA tradition presented by the Peer Health Educators to promote a community of care. Given this week's tragic events, this mission is more important than ever. Whether you choose to register or just show up, all members of the UVA and Charlottesville community are invited to join us on the South Lawn on November 19th at our new start time of 9:00am.

      -

The 4th Year 5K is a beloved UVA tradition that has historically occurred on the morning of the last home football game. The race is held in memory of Leslie Baltz, a UVA student who passed away due to high-risk drinking. The race aims to create a positive tradition where students can safely come together and make lasting memories. We are excited to honor Leslie Baltz, and a portion of the proceeds will be donated to the UVA Leslie Baltz Art Study Fund. This year we are also honoring the students, families, and entire University community impacted by this week's tragic events.

      -
      -
      \ No newline at end of file diff --git a/spaces/cncn102/bingo1/src/components/chat-header.tsx b/spaces/cncn102/bingo1/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
      - logo -
      欢迎使用新必应
      -
      由 AI 支持的网页版 Copilot
      -
      - ) -} diff --git a/spaces/codebox/diffuse-flood/build/_app/immutable/chunks/index-a207c28c.js b/spaces/codebox/diffuse-flood/build/_app/immutable/chunks/index-a207c28c.js deleted file mode 100644 index 611187bf3614d76b63d3bb9dd81303184d23411d..0000000000000000000000000000000000000000 --- a/spaces/codebox/diffuse-flood/build/_app/immutable/chunks/index-a207c28c.js +++ /dev/null @@ -1 +0,0 @@ -function N(){}function F(t,n){for(const e in n)t[e]=n[e];return t}function k(t){return t()}function C(){return Object.create(null)}function p(t){t.forEach(k)}function H(t){return typeof t=="function"}function ct(t,n){return t!=t?n==n:t!==n||t&&typeof t=="object"||typeof t=="function"}let g;function ut(t,n){return g||(g=document.createElement("a")),g.href=n,t===g.href}function I(t){return Object.keys(t).length===0}function G(t,...n){if(t==null)return N;const e=t.subscribe(...n);return e.unsubscribe?()=>e.unsubscribe():e}function ot(t,n,e){t.$$.on_destroy.push(G(n,e))}function st(t,n,e,i){if(t){const r=P(t,n,e,i);return t[0](r)}}function P(t,n,e,i){return t[1]&&i?F(e.ctx.slice(),t[1](i(n))):e.ctx}function at(t,n,e,i){if(t[2]&&i){const r=t[2](i(e));if(n.dirty===void 0)return r;if(typeof r=="object"){const s=[],l=Math.max(n.dirty.length,r.length);for(let o=0;o32){const n=[],e=t.ctx.length/32;for(let i=0;i>1);e(r)<=i?t=r+1:n=r}return t}function Q(t){if(t.hydrate_init)return;t.hydrate_init=!0;let n=t.childNodes;if(t.nodeName==="HEAD"){const c=[];for(let u=0;u0&&n[e[r]].claim_order<=u?r+1:W(1,r,y=>n[e[y]].claim_order,u))-1;i[c]=e[f]+1;const a=f+1;e[a]=c,r=Math.max(a,r)}const s=[],l=[];let o=n.length-1;for(let c=e[r]+1;c!=0;c=i[c-1]){for(s.push(n[c-1]);o>=c;o--)l.push(n[o]);o--}for(;o>=0;o--)l.push(n[o]);s.reverse(),l.sort((c,u)=>c.claim_order-u.claim_order);for(let c=0,u=0;c=s[u].claim_order;)u++;const f=ut.removeEventListener(n,e,i)}function yt(t){return function(n){return n.preventDefault(),t.call(this,n)}}function gt(t){return function(n){return n.stopPropagation(),t.call(this,n)}}function bt(t,n,e){e==null?t.removeAttribute(n):t.getAttribute(n)!==e&&t.setAttribute(n,e)}function X(t){return Array.from(t.childNodes)}function Y(t){t.claim_info===void 0&&(t.claim_info={last_index:0,total_claimed:0})}function B(t,n,e,i,r=!1){Y(t);const s=(()=>{for(let l=t.claim_info.last_index;l=0;l--){const o=t[l];if(n(o)){const c=e(o);return c===void 0?t.splice(l,1):t[l]=c,r?c===void 0&&t.claim_info.last_index--:t.claim_info.last_index=l,o}}return i()})();return s.claim_order=t.claim_info.total_claimed,t.claim_info.total_claimed+=1,s}function Z(t,n,e,i){return B(t,r=>r.nodeName===n,r=>{const s=[];for(let l=0;lr.removeAttribute(l))},()=>i(n))}function xt(t,n,e){return Z(t,n,e,V)}function tt(t,n){return B(t,e=>e.nodeType===3,e=>{const i=""+n;if(e.data.startsWith(i)){if(e.data.length!==i.length)return e.splitText(i.length)}else e.data=i},()=>S(n),!0)}function wt(t){return tt(t," ")}function $t(t,n){n=""+n,t.wholeText!==n&&(t.data=n)}function Et(t,n,e,i){e===null?t.style.removeProperty(n):t.style.setProperty(n,e,i?"important":"")}function vt(t,n=document.body){return Array.from(n.querySelectorAll(t))}let m;function h(t){m=t}function L(){if(!m)throw new Error("Function called outside component initialization");return m}function At(t){L().$$.on_mount.push(t)}function Nt(t){L().$$.after_update.push(t)}const _=[],M=[],x=[],T=[],O=Promise.resolve();let v=!1;function D(){v||(v=!0,O.then(z))}function St(){return D(),O}function A(t){x.push(t)}const E=new Set;let b=0;function z(){const t=m;do{for(;b<_.length;){const 
n=_[b];b++,h(n),nt(n.$$)}for(h(null),_.length=0,b=0;M.length;)M.pop()();for(let n=0;n{w.delete(t),i&&(e&&t.d(1),i())}),t.o(n)}else i&&i()}const Mt=typeof window<"u"?window:typeof globalThis<"u"?globalThis:global;function Tt(t){t&&t.c()}function kt(t,n){t&&t.l(n)}function it(t,n,e,i){const{fragment:r,on_mount:s,on_destroy:l,after_update:o}=t.$$;r&&r.m(n,e),i||A(()=>{const c=s.map(k).filter(H);l?l.push(...c):p(c),t.$$.on_mount=[]}),o.forEach(A)}function rt(t,n){const e=t.$$;e.fragment!==null&&(p(e.on_destroy),e.fragment&&e.fragment.d(n),e.on_destroy=e.fragment=null,e.ctx=[])}function lt(t,n){t.$$.dirty[0]===-1&&(_.push(t),D(),t.$$.dirty.fill(0)),t.$$.dirty[n/31|0]|=1<{const q=j.length?j[0]:y;return u.ctx&&r(u.ctx[a],u.ctx[a]=q)&&(!u.skip_bound&&u.bound[a]&&u.bound[a](q),f&<(t,a)),y}):[],u.update(),f=!0,p(u.before_update),u.fragment=i?i(u.ctx):!1,n.target){if(n.hydrate){J();const a=X(n.target);u.fragment&&u.fragment.l(a),a.forEach(U)}else u.fragment&&u.fragment.c();n.intro&&et(t.$$.fragment),it(t,n.target,n.anchor,n.customElement),K(),z()}h(c)}class Bt{$destroy(){rt(this,1),this.$destroy=N}$on(n,e){const i=this.$$.callbacks[n]||(this.$$.callbacks[n]=[]);return i.push(e),()=>{const r=i.indexOf(e);r!==-1&&i.splice(r,1)}}$set(n){this.$$set&&!I(n)&&(this.$$.skip_bound=!0,this.$$set(n),this.$$.skip_bound=!1)}}export{N as A,st as B,ft as C,dt as D,at as E,R as F,ot as G,vt as H,ut as I,pt as J,gt as K,yt as L,p as M,Mt as N,A as O,M as P,Bt as S,ht as a,_t as b,wt as c,qt as d,mt as e,et as f,jt as g,U as h,Pt as i,Nt as j,V as k,xt as l,X as m,bt as n,At as o,Et as p,S as q,tt as r,ct as s,Ct as t,$t as u,Tt as v,kt as w,it as x,rt as y,St as z}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideo_armv5te.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideo_armv5te.c deleted file mode 100644 index e20bb4c6456a60e723ac32e33a8d54086e510d2c..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideo_armv5te.c +++ /dev/null @@ -1,102 +0,0 @@ -/* - * Optimization of some functions from mpegvideo.c for armv5te - * Copyright (c) 2007 Siarhei Siamashka - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/attributes.h" -#include "libavutil/avassert.h" -#include "libavcodec/avcodec.h" -#include "libavcodec/mpegvideo.h" -#include "mpegvideo_arm.h" - -void ff_dct_unquantize_h263_armv5te(int16_t *block, int qmul, int qadd, int count); - -#ifdef ENABLE_ARM_TESTS -/** - * H.263 dequantizer supplementary function, it is performance critical and needs to - * have optimized implementations for each architecture. 
Is also used as a reference - * implementation in regression tests - */ -static inline void dct_unquantize_h263_helper_c(int16_t *block, int qmul, int qadd, int count) -{ - int i, level; - for (i = 0; i < count; i++) { - level = block[i]; - if (level) { - if (level < 0) { - level = level * qmul - qadd; - } else { - level = level * qmul + qadd; - } - block[i] = level; - } - } -} -#endif - -static void dct_unquantize_h263_intra_armv5te(MpegEncContext *s, - int16_t *block, int n, int qscale) -{ - int level, qmul, qadd; - int nCoeffs; - - av_assert2(s->block_last_index[n]>=0); - - qmul = qscale << 1; - - if (!s->h263_aic) { - if (n < 4) - level = block[0] * s->y_dc_scale; - else - level = block[0] * s->c_dc_scale; - qadd = (qscale - 1) | 1; - }else{ - qadd = 0; - level = block[0]; - } - if(s->ac_pred) - nCoeffs=63; - else - nCoeffs= s->inter_scantable.raster_end[ s->block_last_index[n] ]; - - ff_dct_unquantize_h263_armv5te(block, qmul, qadd, nCoeffs + 1); - block[0] = level; -} - -static void dct_unquantize_h263_inter_armv5te(MpegEncContext *s, - int16_t *block, int n, int qscale) -{ - int qmul, qadd; - int nCoeffs; - - av_assert2(s->block_last_index[n]>=0); - - qadd = (qscale - 1) | 1; - qmul = qscale << 1; - - nCoeffs= s->inter_scantable.raster_end[ s->block_last_index[n] ]; - - ff_dct_unquantize_h263_armv5te(block, qmul, qadd, nCoeffs + 1); -} - -av_cold void ff_mpv_common_init_armv5te(MpegEncContext *s) -{ - s->dct_unquantize_h263_intra = dct_unquantize_h263_intra_armv5te; - s->dct_unquantize_h263_inter = dct_unquantize_h263_inter_armv5te; -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Animals and Their Babies A Worksheet Bundle for Preschool and Kindergarten.md b/spaces/congsaPfin/Manga-OCR/logs/Animals and Their Babies A Worksheet Bundle for Preschool and Kindergarten.md deleted file mode 100644 index 5f1bb7efb2c15e428137d136ec37da3849e4a5c6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Animals and Their Babies A Worksheet Bundle for Preschool and Kindergarten.md +++ /dev/null @@ -1,139 +0,0 @@ -
      -

      Animals and Their Babies Worksheet: A Fun and Educational Activity for Kids

      -

      Do you want to help your kids learn more about animals and their babies in a fun and engaging way? If so, you will love this animals and their babies worksheet that we have created for you. This worksheet is filled with adorable illustrations of different animals and their offspring, as well as various games and activities that will keep your kids entertained and curious.

      -

      animals and their babies worksheet


      Download File ⚹⚹⚹ https://urlca.com/2uOdoq



      -

In this article, we will explain why it is important for kids to learn about animals and their babies, show you how to use the worksheet effectively, share some interesting facts about different animals and their babies, and suggest ways to extend the learning beyond the worksheet. By the end of this article, you will have everything you need to make this a fun and educational experience for your kids.

      -

      What are the benefits of learning about animals and their babies?

      -

      Learning about animals and their babies can have many benefits for kids of all ages. Here are some of them:

      -
        -
      • It develops empathy. By learning about how animals care for their babies, kids can develop a sense of empathy and compassion for other living beings. They can also learn to respect the diversity of life on Earth and appreciate the similarities and differences between humans and animals.
      • -
      • It sparks curiosity. By learning about how animals live, grow, and reproduce, kids can spark their natural curiosity and wonder about the world around them. They can also develop a scientific mindset by asking questions, making observations, and finding answers.
      • -
      • It enriches vocabulary. By learning about the names of different animals and their babies, as well as their body parts and characteristics, kids can enrich their vocabulary and improve their communication skills. They can also learn to use descriptive words and adjectives to express their thoughts and opinions.
      • -
      • It enhances cognitive skills. By learning about how animals adapt to their environments, survive predators, find food, and communicate with each other, kids can enhance their cognitive skills, such as memory, logic, reasoning, and problem-solving. They can also learn to classify, compare, and contrast different animals and their babies.
      • -
      -

      As you can see, learning about animals and their babies can be a great way to stimulate your kids' minds and hearts. But how can you make this learning fun and easy? That's where our worksheet comes in handy.

      -

      animals and their babies matching worksheet
      -animals and their offspring worksheet
      -animals and their young worksheet
      -farm animals and their babies worksheet
      -wild animals and their babies worksheet
      -zoo animals and their babies worksheet
      -animals and their babies cut and paste worksheet
      -animals and their babies free printable worksheet
      -animals and their babies name worksheet
      -animals and their babies pdf worksheet
      -sea animals and their babies worksheet
      -forest animals and their babies worksheet
      -african animals and their babies worksheet
      -arctic animals and their babies worksheet
      -australian animals and their babies worksheet
      -desert animals and their babies worksheet
      -rainforest animals and their babies worksheet
      -polar animals and their babies worksheet
      -domestic animals and their babies worksheet
      -nocturnal animals and their babies worksheet
      -pet animals and their babies worksheet
      -birds and their babies worksheet
      -reptiles and their babies worksheet
      -insects and their babies worksheet
      -mammals and their babies worksheet
      -amphibians and their babies worksheet
      -fish and their babies worksheet
      -dinosaurs and their babies worksheet
      -animal families and their names worksheet
      -animal parents and their children worksheet
      -animal mothers and fathers worksheet
      -animal baby shower game worksheet
      -animal baby memory game worksheet
      -animal baby bingo game worksheet
      -animal baby trivia quiz worksheet
      -animal baby word scramble worksheet
      -animal baby crossword puzzle worksheet
      -animal baby word search puzzle worksheet
      -animal baby coloring pages worksheet
      -animal baby dot to dot worksheet
      -animal baby hidden pictures worksheet
      -animal baby maze puzzle worksheet
      -animal baby spot the difference worksheet
      -animal baby matching cards worksheet
      -animal baby flashcards worksheet
      -animal baby label it worksheets
      -animal baby trace it worksheets
      -animal baby write it worksheets

      -

      How to use the animals and their babies worksheet?

      -

Our worksheet is designed to be simple, colorful, and interactive. It consists of two pages: one with the animal cards and one with the labeling game. You can download the worksheet for free from our website and print it out on regular paper or cardstock. You will also need a pair of scissors, a pencil, and some glue or tape.

      -

      Matching game

      -

      The first page of the worksheet has 12 animal cards, each with a picture of an animal and its baby. You can cut out the cards along the dotted lines and shuffle them. Then, you can ask your kids to find the pairs of mother and baby animals and match them together. You can also ask them to name the animals and their babies, such as cow and calf, horse and foal, chicken and chick, etc. You can make this game more challenging by adding more cards from other sources or by mixing up the cards with different categories of animals, such as farm animals, wild animals, pets, etc.

      -

      Labeling game

      -

      The second page of the worksheet has four pictures of different animals: a dog, a cat, a duck, and a deer. Each picture has some blank spaces for labeling the body parts of the animal, such as ears, eyes, nose, mouth, legs, tail, etc. You can ask your kids to identify the body parts of each animal and trace their names with a pencil. You can also ask them to spell out the names or write them in uppercase or lowercase letters. You can make this game more fun by using different colors or stickers to label the body parts.

      -

      What are some fun facts about animals and their babies?

      -

      Learning about animals and their babies is not only fun but also fascinating. There are so many amazing facts about how animals give birth, nurture their young ones, teach them survival skills, and protect them from danger. Here are some examples of fun facts about animals and their babies that you can share with your kids:

      -

      Farm animals

      -
        -
      • Cows have a gestation period of about nine months, just like humans. They usually give birth to one calf at a time, but sometimes they can have twins or even triplets. A newborn calf can stand up and walk within an hour of being born. A calf stays with its mother for about eight months before being weaned.
      • -
      • Sheep have a gestation period of about five months. They usually give birth to one or two lambs at a time, but sometimes they can have three or four. A newborn lamb can stand up and nurse within minutes of being born. A lamb stays with its mother for about four months before being weaned.
      • -
      • Horses have a gestation period of about 11 months. They usually give birth to one foal at a time, but sometimes they can have twins or even triplets. A newborn foal can stand up and run within an hour of being born. A foal stays with its mother for about six months before being weaned.
      • -
      • Chickens lay eggs that hatch after about 21 days of incubation. They usually lay one egg per day, but sometimes they can lay two or more. A newborn chick is covered with soft down feathers and can see and hear well. A chick stays with its mother for about six weeks before becoming independent.
      • -
      • Pigs have a gestation period of about four months. They usually give birth to six to 12 piglets at a time, but sometimes they can have more than 20. A newborn piglet is born with teeth and can walk within minutes of being born. A piglet stays with its mother for about two months before being weaned.
      • -
      • Dogs have a gestation period of about two months. They usually give birth to four to six puppies at a time, but sometimes they can have more than 10. A newborn puppy is born blind, deaf, and toothless. A puppy stays with its mother for about eight weeks before being weaned.
      • -
      • Cats have a gestation period of about two months. They usually give birth to three to five kittens at a time, but sometimes they can have more than 10. A newborn kitten is born blind, deaf, and toothless. A kitten stays with its mother for about eight weeks before being weaned.
      • -
      • Ducks lay eggs that hatch after about 28 days of incubation. They usually lay one egg per day, but sometimes they can lay two or more. A newborn duckling is covered with soft down feathers and can swim and dive within hours of being born. A duckling stays with its mother for about two months before becoming independent.
      • -
      • Deer have a gestation period of about seven months. They usually give birth to one fawn at a time, but sometimes they can have twins or triplets. A newborn fawn is born with white spots on its coat that help it camouflage in the grass. A fawn stays with its mother for about six months before being weaned.
      • -
      • Rabbits have a gestation period of about one month. They usually give birth to four to eight kits at a time, but sometimes they can have more than 10. A newborn kit is born hairless, blind, and deaf. A kit stays with its mother for about four weeks before being weaned.
      • -
      -

      Wild animals

      -
        -
      • Lions have a gestation period of about four months. They usually give birth to two to four cubs at a time, but sometimes they can have more than six. A newborn cub is born with dark spots on its coat that fade as it grows older. A cub stays with its mother for about two years before becoming independent.
      • -
      • Tigers have a gestation period of about three and a half months. They usually give birth to two to four cubs at a time, but sometimes they can have more than six. A newborn cub is born with dark stripes on its coat that remain throughout its life. A cub stays with its mother for about two years before becoming independent.
      • -
      • Elephants have a gestation period of about 22 months, the longest among mammals. They usually give birth to one calf at a time, but sometimes they can have twins. A newborn calf can weigh up to 120 kg and stand up within minutes of being born. A calf stays with its mother for about four years before being weaned.
      • -
      • Giraffes have a gestation period of about 15 months. They usually give birth to one calf at a time, but sometimes they can have twins. A newborn calf can weigh up to 70 kg and stand up within an hour of being born. A calf stays with its mother for about one year before being weaned.
      • -
      • Pandas have a gestation period of about five months. They usually give birth to one cub at a time, but sometimes they can have twins. A newborn cub is born pink, hairless, and blind. It weighs only about 100 g and is about the size of a stick of butter. A cub stays with its mother for about two years before becoming independent.
      • -
      • Koalas have a gestation period of about one month. They usually give birth to one joey at a time, but sometimes they can have twins. A newborn joey is born hairless, blind, and earless. It weighs only about 0.5 g and is about the size of a jelly bean. It crawls into its mother's pouch and stays there for about six months before emerging.
      • -
      • Polar bears have a gestation period of about eight months. They usually give birth to two cubs at a time, but sometimes they can have one or three. A newborn cub is born white, fluffy, and blind. It weighs only about 0.6 kg and is about the size of a guinea pig. A cub stays with its mother for about two and a half years before becoming independent.
      • -
      • Turtles lay eggs that hatch after about two months of incubation. They usually lay dozens of eggs at a time, but sometimes they can lay hundreds. A newborn hatchling is born with a hard shell and can crawl and swim immediately after hatching. A hatchling stays on its own from the moment it hatches and does not receive any parental care.
      • -
      • Frogs lay eggs that hatch after a few days or weeks of incubation, depending on the species and the temperature. They usually lay hundreds or thousands of eggs at a time, but sometimes they can lay more or less. A newborn tadpole is born with gills and a tail and lives in water until it metamorphoses into an adult frog with lungs and legs.
      • -
      • Penguins lay eggs that hatch after about two months of incubation. They usually lay one or two eggs at a time, but sometimes they can lay more or less. A newborn chick is born with down feathers and a beak and depends on its parents for food and warmth. A chick stays with its parents for about two to six months before becoming independent.
      • -
      -

      How to extend the learning beyond the worksheet?

      -

      Our worksheet is a great way to introduce your kids to the wonderful world of animals and their babies, but it is not the only way. There are many other resources and activities that you can use to enhance your kids' learning and enjoyment. Here are some suggestions:

      -

      Books and videos

      -

      There are many books and videos that feature animals and their babies, either in fiction or non-fiction format. You can read or watch them with your kids and discuss the stories, facts, and messages that they convey. Some examples of books and videos that you can check out are:

      -
        -
      • Are You My Mother? by P.D. Eastman. This is a classic children's book that tells the story of a baby bird who goes in search of his mother after she leaves the nest to find food. Along the way, he meets different animals and asks them if they are his mother, until he finally finds her.
      • -
      • Baby Animals by DK Publishing. This is a colorful and informative book that introduces young readers to more than 100 baby animals from around the world. It includes stunning photographs, simple facts, and fun quizzes that will engage your kids' curiosity and imagination.
      • -
      • Animal Babies by National Geographic Kids. This is a video series that showcases the lives of different animal babies in their natural habitats. It features amazing footage, narration, and music that will captivate your kids' attention and emotions.
      • -
      -

      Crafts and activities

      -

      There are many crafts and activities that involve animals and their babies, either in real or imaginary ways. You can make them with your kids and have fun while developing their creativity and skills. Some examples of crafts and activities that you can try are:

      -
        -
      • Making animal masks, puppets, or collages. You can use paper plates, cardboard, felt, glue, scissors, crayons, markers, googly eyes, yarn, etc. to create your own animal masks, puppets, or collages. You can then use them to act out stories, sing songs, or play games with your kids.
      • -
      • Playing animal charades or bingo. You can use the animal cards from our worksheet or make your own to play animal charades or bingo with your kids. For animal charades, you can take turns acting out different animals and their babies and guessing what they are. For animal bingo, you can make a bingo card with different animals and their babies and mark them off as you call them out or show them.
      • -
      • Visiting a zoo or a farm. You can take your kids to a zoo or a farm where they can see real animals and their babies up close. You can also learn more about how they live, what they eat, how they communicate, etc. You can also interact with some of the animals if possible and take pictures or videos as souvenirs.
      • -
      -

      Conclusion

      -

We hope you enjoyed this article about our animals and their babies worksheet, learned something new, and had fun with your kids along the way. Learning about animals and their babies can be a rewarding experience for both you and your kids, as it can foster empathy, curiosity, vocabulary, and cognitive skills.

      -

      If you want to download our worksheet for free, just click on the link below and print it out. You can also share it with your friends and family who might be interested in this topic. Have fun with your kids and let us know how it goes!

      -

      FAQs

      -

      Here are some frequently asked questions related to animals and their babies worksheet:

      -
        -
      1. Where can I download the worksheet?
      2. -

        You can download the worksheet for free from our website by clicking on this link: [Download Animals And Their Babies Worksheet].

        -
      3. How can I print the worksheet?
      4. -

        You can print the worksheet on a regular paper or a cardstock using any printer that supports color printing. You will need two pages for the worksheet: one for the animal cards and one for the labeling game.

        -
      5. What age group is the worksheet suitable for?
      6. -

        The worksheet is suitable for kids of any age who are interested in learning about animals and their babies. However, it is especially designed for preschoolers and kindergarteners who are learning to recognize, name, and match different animals and their babies, as well as to label their body parts.

        -
      7. What materials do I need for the worksheet?
      8. -

        Besides the worksheet itself, you will need a pair of scissors, a pencil, and some glue or tape. You will use the scissors to cut out the animal cards from the first page of the worksheet. You will use the pencil to trace the names of the body parts on the second page of the worksheet. You will use the glue or tape to stick the animal cards together or to make them more durable.

        -
      9. How can I assess the learning outcomes of the worksheet?
      10. -

        You can assess the learning outcomes of the worksheet by observing how your kids perform the games and activities on the worksheet, as well as by asking them some questions related to the topic. For example, you can ask them to:

        -
          -
        • Identify and name different animals and their babies.
        • -
        • Match the mother and baby animals correctly.
        • -
        • Label the body parts of different animals correctly.
        • -
        • Recall some fun facts about animals and their babies.
        • -
        • Express their opinions and feelings about animals and their babies.
        • -
        -

        You can also give them feedback and praise for their efforts and achievements.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/CarX Drift Racing Mod APK v1.16.2 Unlimited Money and More.md b/spaces/congsaPfin/Manga-OCR/logs/CarX Drift Racing Mod APK v1.16.2 Unlimited Money and More.md deleted file mode 100644 index 439abcbecf6bea9f9588564a27149d69a4ad53ff..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/CarX Drift Racing Mod APK v1.16.2 Unlimited Money and More.md +++ /dev/null @@ -1,133 +0,0 @@ -
      -

      CarX Drift Racing Mod APK v1.16.2: The Ultimate Drifting Experience on Android

      -

      If you are a fan of racing games, especially drifting games, you must have heard of CarX Drift Racing. It is one of the most popular and realistic drifting games on Android devices.

      -

      carx drift racing mod apk v1.16.2


      Download Zip ✏ ✏ ✏ https://urlca.com/2uO87i



      -

      CarX Drift Racing lets you experience the thrill of drifting with amazing graphics, physics, and sound effects. You can choose from over 50 different cars and customize them with various parts and paint jobs.

      -

      But what if you want to enjoy CarX Drift Racing without any limitations? What if you want to have unlimited money, gold, cars, tracks, and upgrades? Well, that's where CarX Drift Racing Mod APK v1.16.2 comes in.

      -

      CarX Drift Racing Mod APK v1.16.2 is a modified version of CarX Drift Racing that gives you access to all the features and content of the game for free.

      -

      With CarX Drift Racing Mod APK v1.16.2, you can:

      -
        -
• Get unlimited coins and gold to buy anything you want in the game
      • -
      • Unlock all cars, tracks, and upgrades and use them right away
      • -
      • Enjoy every mode and race without any restrictions
      • -

        How to Download and Install CarX Drift Racing Mod APK v1.16.2

        -

        Downloading and installing CarX Drift Racing Mod APK v1.16.2 is very easy and fast. You just need to follow these simple steps:

        -
          -
        1. Click on this link to download CarX Drift Racing Mod APK v1.16.2 from a trusted and secure source.
        2. -
        3. Once the download is complete, go to your device's settings and enable the installation of apps from unknown sources.
        4. -
        5. Locate the downloaded file in your device's storage and tap on it to start the installation process.
        6. -
        7. Follow the instructions on the screen and wait for the installation to finish.
        8. -
        9. Launch the game and enjoy CarX Drift Racing Mod APK v1.16.2 with all its features and content.
        10. -
        -

        Note: You don't need to uninstall the original version of CarX Drift Racing if you have it installed on your device. CarX Drift Racing Mod APK v1.16.2 will overwrite it and use the same data.
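
      If you want to be extra careful before tapping "Install", you can compare the downloaded file's SHA-256 checksum with the one published by the site you downloaded it from (if it publishes one). The sketch below is a minimal, hypothetical example run on a computer; the file name and expected hash are placeholders, not values from this article.

```python
# Minimal sketch: verify a downloaded APK's SHA-256 checksum before sideloading it.
# The file name and expected hash below are placeholders -- substitute the values
# published by the site you downloaded the APK from.
import hashlib
from pathlib import Path

APK_PATH = Path("carx_drift_racing_mod_v1.16.2.apk")  # hypothetical file name
EXPECTED_SHA256 = "0" * 64                            # placeholder, not a real hash

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MB chunks so large APKs don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print(f"SHA-256: {actual}")
    if actual.lower() == EXPECTED_SHA256.lower():
        print("Checksum matches -- the file is the one the site published.")
    else:
        print("Checksum mismatch -- do not install this file.")
```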

        -

        How to Play CarX Drift Racing Mod APK v1.16.2

        -

        Playing CarX Drift Racing Mod APK v1.16.2 is very fun and addictive. You can play it in different modes and tracks, and challenge yourself and other players with your drifting skills.

        -

        Here are some tips on how to play CarX Drift Racing Mod APK v1.16.2:

        -

        How to choose your car and customize it

        -

        In CarX Drift Racing Mod APK v1.16.2, you can choose from over 50 different cars, ranging from sports cars, muscle cars, supercars, and more.

        -


        -

        You can also customize your car with various parts and paint jobs, such as spoilers, bumpers, wheels, decals, colors, etc.

        -

        To choose and customize your car, you need to go to the garage menu and select the car you want to use or buy with coins or gold.

        -

        You can also upgrade your car's performance by improving its engine, turbo, brakes, suspension, tires, etc.

        -

        How to control your car and perform drifts

        -

        In CarX Drift Racing Mod APK v1.16.2, you can control your car with different options, such as tilt, buttons, or steering wheel.

        -

        You can also adjust the sensitivity and feedback of the controls in the settings menu.

        -

        To perform drifts, you need to use the accelerator, brake, handbrake, and steering buttons on the screen.

        -

        You need to balance the speed and angle of your car while drifting, and avoid hitting the walls or obstacles.

        -

        The more you drift, the more points you earn. You can also earn bonus points by performing combos, near misses, or hitting cones.

        -

        How to compete in different modes and tracks

        -

        In CarX Drift Racing Mod APK v1.16.2, you can compete in different modes and tracks, such as career mode, single mode, multiplayer mode, time attack mode, etc.

        -

        You can also choose from different difficulty levels, such as beginner, amateur, professional, or master.

        -

        In career mode, you need to complete various missions and challenges in different tracks and earn stars and coins.

        -

        In single mode, you can practice your drifting skills in any track you want without any time or score limit.

        -

        In multiplayer mode, you can race against other players online in real time and show off your drifting skills.

        -

        In time attack mode, you need to complete a lap in the shortest time possible while drifting as much as you can.

        -

        Tips and Tricks for CarX Drift Racing Mod APK v1.16.2

        -

        If you want to master CarX Drift Racing Mod APK v1.16.2 and become a drifting legend, you need to know some tips and tricks that will help you improve your game.

        -

        Here are some tips and tricks for CarX Drift Racing Mod APK v1.16.2:

        -

        How to earn more coins and gold

In CarX Drift Racing Mod APK v1.16.2, coins and gold are the two currencies you need to buy cars, upgrades, and customization. You can earn coins and gold by playing the game and completing missions, challenges, and achievements. You can also watch ads or use real money to buy more coins and gold.

        -

        However, with CarX Drift Racing Mod APK v1.16.2, you don't need to worry about coins and gold anymore. You can get unlimited coins and gold for free and spend them as much as you want without any restrictions.

        -

        How to unlock more cars and upgrades

        -

        In CarX Drift Racing Mod APK v1.16.2, you can unlock more cars and upgrades by earning stars and coins in career mode, or by using gold in the shop.

        -

        You can also unlock some cars and upgrades by completing certain achievements or events in the game.

        -

        However, with CarX Drift Racing Mod APK v1.16.2, you don't need to wait or work hard to unlock more cars and upgrades. You can unlock all cars and upgrades for free and use them right away without any limitations.

        -

        How to improve your drifting skills

        -

        In CarX Drift Racing Mod APK v1.16.2, you can improve your drifting skills by practicing in single mode, watching tutorials and tips in the game, or following some online guides and videos.

        -

        You can also improve your drifting skills by adjusting the settings of your car, such as the steering angle, the tire pressure, the suspension stiffness, etc.

        -

        However, the best way to improve your drifting skills is by playing the game regularly and learning from your mistakes and feedback. You can also learn from other players by watching their replays or racing against them in multiplayer mode.

        -

        Conclusion

        -

        CarX Drift Racing Mod APK v1.16.2 is a great game for anyone who loves racing and drifting games. It offers a realistic and immersive drifting experience with amazing graphics, physics, and sound effects.

        -

        It also offers a lot of features and content that will keep you entertained for hours. You can choose from over 50 different cars and customize them with various parts and paint jobs. You can also compete in different modes and tracks, and challenge yourself and other players with your drifting skills.

        -

        But what makes CarX Drift Racing Mod APK v1.16.2 even better is that it gives you access to all the features and content of the game for free. You can get unlimited coins and gold, unlock all cars and upgrades, and enjoy the game without any limitations.

        -

        So what are you waiting for? Download CarX Drift Racing Mod APK v1.16.2 now and enjoy the ultimate drifting experience on Android.

        -

        Click here to download CarX Drift Racing Mod APK v1.16.2 from a trusted and secure source.

        -

        FAQs

        -

        What is the difference between CarX Drift Racing and CarX Drift Racing 2?

        -

        CarX Drift Racing 2 is the sequel to CarX Drift Racing that was released in 2018. It has improved graphics, physics, sound effects, gameplay, features, content, etc.

        -

        However, some players still prefer CarX Drift Racing because it is simpler, faster, smoother, and more fun to play.

        -

        Is CarX Drift Racing Mod APK v1.16.2 safe to use?

        -

        Yes, CarX Drift Racing Mod APK v1.16.2 is safe to use as long as you download it from a trusted and secure source like this one.

        -

        You don't need to root your device or use any third-party apps to install or run CarX Drift Racing Mod APK v1.16.2.

        -

        You also don't need to worry about viruses, malware, spyware, or any other harmful threats that might harm your device or data.

        -

        Can I play CarX Drift Racing Mod APK v1.16.2 offline?

        -

        Yes, you can play CarX Drift Racing Mod APK v1.16.2 offline without any internet connection.

        -

        You can play single mode or career mode offline without any problem.

        -

        However, you need an internet connection to play multiplayer mode or access some online features such as leaderboards, achievements, events, etc.

        -

        Can I play CarX Drift Racing Mod APK v1.16.2 with friends?

Yes, you can play CarX Drift Racing Mod APK v1.16.2 with friends online or locally.

        -

        You can play multiplayer mode online with other players from around the world and compete in real time.

        -

        You can also play local multiplayer mode with your friends using Wi-Fi or Bluetooth and race on the same device or different devices.

        -

        How can I contact the developers of CarX Drift Racing Mod APK v1.16.2?

        -

        If you have any questions, feedback, suggestions, or issues regarding CarX Drift Racing Mod APK v1.16.2, you can contact the developers of the game by using the following methods:

        -
          -
        • Email: support@carx-tech.com
        • -
        • Facebook: https://www.facebook.com/carxdriftracing
        • -
        • Instagram: https://www.instagram.com/carxdriftracing/
        • -
        • Twitter: https://twitter.com/carx_technology
        • -
        -

        The developers of CarX Drift Racing Mod APK v1.16.2 are very responsive and friendly, and they will try to help you as soon as possible.

        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install Garena HoN on Your PC.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install Garena HoN on Your PC.md deleted file mode 100644 index f68107ff2d8f9955f7308c2b6da61d19a4586342..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install Garena HoN on Your PC.md +++ /dev/null @@ -1,116 +0,0 @@ - -

        How to Download Garena HoN

        -

        If you are looking for a fast-paced, action-packed, and competitive multiplayer online battle arena (MOBA) game, then you should try Garena HoN. Garena HoN is a free-to-play game that features over 100 heroes, various items, modes, maps, and a vibrant community of players from all over Southeast Asia. In this article, we will show you how to download Garena HoN and enjoy this exciting game.

        -

        download garena hon


        DOWNLOAD > https://urlca.com/2uO5J1



        -

        Step 1: Visit the official website

        -

The first step to download Garena HoN is to visit the official website at http://hon.garena.com. Here you can find all the information you need about the game, such as news, updates, events, guides, forums, support, etc. You can also watch live streams of other players or check out the leaderboards and statistics.

        -

        Step 2: Register an account

        -

The next step is to register an account. You can do this by clicking on the "Register" button at the top right corner of the website. You will need to provide a valid email address and create a username and password. You will also need to verify your email by clicking on the link that will be sent to you. Once you have verified your email address, you can log in to the website and access all the features.

        -

        Step 3: Download the game client

        -

        The third step is to download the game client. You can do this by clicking on the "Download" button at the top right corner of the website. You will be redirected to a page where you can choose your operating system (Windows or Mac) and your download method (direct or torrent). You will also see the system requirements and the file size of the game client. The game client is about 2.5 GB, so make sure you have enough space and a stable internet connection. Once you have downloaded the game client, you can save it to your preferred location.
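
      Since the client is roughly 2.5 GB, it is worth confirming you actually have the room before you start. A minimal sketch like the one below, run on the computer you are downloading to, does the check; the download folder and safety margin are assumptions you can adjust.

```python
# Minimal sketch: check that the download folder has room for the ~2.5 GB
# Garena HoN client before starting the download. The path and margin are
# assumptions -- adjust them to your own setup.
import shutil
from pathlib import Path

DOWNLOAD_DIR = Path.home() / "Downloads"  # assumed download location
CLIENT_SIZE_GB = 2.5                      # approximate size quoted above
SAFETY_MARGIN_GB = 1.0                    # extra headroom for temporary files

free_gb = shutil.disk_usage(DOWNLOAD_DIR).free / (1024 ** 3)
needed_gb = CLIENT_SIZE_GB + SAFETY_MARGIN_GB

print(f"Free space in {DOWNLOAD_DIR}: {free_gb:.1f} GB (need about {needed_gb:.1f} GB)")
if free_gb < needed_gb:
    print("Not enough space -- free some disk space before downloading the client.")
```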

        -

        Step 4: Install the game

        -

        The fourth step is to install the game. You can do this by double-clicking on the game client file that you have downloaded. You will see a pop-up window that will guide you through the installation process. You will need to agree to the terms and conditions, choose your installation folder, and create a desktop shortcut. The installation process may take a few minutes, depending on your system and internet speed. Once the installation is complete, you can click on the "Finish" button.

        -


        -

        Step 5: Launch the game and log in

        -

        The final step is to launch the game and log in. You can do this by clicking on the desktop shortcut that you have created or by finding the game in your start menu or applications folder. You will see a launcher window that will check for updates and patches. You may need to wait for a while until the game is fully updated. Once the game is ready, you can click on the "Play" button. You will then see a login screen where you can enter your username and password that you have registered earlier. After logging in, you can choose your server (Singapore, Malaysia, Philippines, Thailand, or Indonesia) and start playing Garena HoN.

        -

        Features of Garena HoN

        -

        Now that you have downloaded Garena HoN, you may be wondering what are the features of this game and how they make it different from other MOBA games. In this section, we will give you an overview of the main features of Garena HoN and how they can enhance your gaming experience.

        -

        Heroes

        -

        Choose from over 100 heroes with unique abilities and roles

        -

        One of the most important features of Garena HoN is the heroes. Heroes are the characters that you control in the game and they have different abilities and roles that affect your gameplay and strategy. There are over 100 heroes to choose from, each with their own strengths, weaknesses, skills, and personalities. You can select a hero before each match or during the picking phase. You can also customize your hero with different skins, avatars, taunts, announcers, etc.

        -

        There are three types of heroes in Garena HoN: strength, agility, and intelligence. Strength heroes are usually tanky and durable, agility heroes are usually fast and agile, and intelligence heroes are usually smart and powerful. Each hero also has a primary attribute that determines their damage output and scaling. Within each type of hero, there are different roles that define their function in the team. Some of the common roles are carry, support, ganker, initiator, pusher, etc.

        To choose a hero, you need to consider several factors, such as your team composition, your enemy's picks, your personal preference, your skill level, etc. You also need to learn how to use your hero's abilities effectively and how to synergize with your teammates. You can find more information about each hero on the website or in the game.

        -

        Items

        -

        Customize your hero with various items that enhance your stats and skills

        -

        Another feature of Garena HoN is the items. Items are objects that you can buy in the game and equip on your hero to enhance your stats and skills. There are various items to choose from, each with their own effects and costs. You can buy items from the shop in the base or from the secret shop in the map. You can also sell items back to the shop for a reduced price.

        -

        There are five categories of items in Garena HoN: consumables, components, basic, intermediate, and advanced. Consumables are items that have a one-time use and provide temporary benefits, such as healing, mana regeneration, vision, etc. Components are items that have no effects on their own but can be combined with other components to form more powerful items. Basic items are items that have simple effects and low costs, such as boots, gloves, rings, etc. Intermediate items are items that have more complex effects and higher costs, such as swords, staffs, shields, etc. Advanced items are items that have the most powerful effects and the highest costs, such as relics, artifacts, crowns, etc.

        -

        To buy items, you need to consider several factors, such as your hero's attributes, role, skills, etc. You also need to learn how to use your items effectively and how to adapt to different situations. You can find more information about each item on the website or in the game.

        -

        Modes

        -

        Play different modes that suit your preference and skill level

        -

        A third feature of Garena HoN is the modes. Modes are variations of the game that have different rules and objectives. There are different modes to choose from, each with their own advantages and disadvantages. You can choose a mode before each match or during the picking phase.

        -

        There are five modes available in Garena HoN: casual, ranked, mid wars, public games, and custom games. Casual mode is a mode that is more relaxed and forgiving than ranked mode. It has lower penalties for dying and leaving the game, higher gold and experience gain, and easier mechanics. Ranked mode is a mode that is more competitive and challenging than casual mode. It has higher penalties for dying and leaving the game, lower gold and experience gain, and harder mechanics. It also affects your rating and rank in the leaderboards. Mid wars mode is a mode that is more fun and chaotic than other modes. It has only one lane and one base for each team, faster gameplay, more kills and team fights, and random heroes. Public games mode is a mode that allows you to join or create a game with other players or bots. You can set your own rules and preferences for the game. Custom games mode is a mode that allows you to play user-made maps and modes with other players or bots. You can find or create your own custom games using the map editor.

        -

        To choose a mode, you need to consider several factors, such as your mood, time limit, skill level, etc. You also need to learn how to play each mode effectively and how to cooperate with your team. You can find more information about each mode on the website or in the game.

        Maps

        -

        Explore different maps that offer different challenges and strategies

        -

        A fourth feature of Garena HoN is the maps. Maps are the environments where the game takes place and they have different layouts and objectives. There are different maps to choose from, each with their own advantages and disadvantages. You can choose a map before each match or during the picking phase.

        -

        There are four maps available in Garena HoN: forest of caldavar, rift wars, capture the flag, and hero defense. Forest of caldavar is the default and most popular map in Garena HoN. It has three lanes and two bases for each team, as well as neutral creeps, runes, towers, barracks, and ancients. The objective is to destroy the enemy's base while defending your own. Rift wars is a map that is more random and unpredictable than other maps. It has one lane and one base for each team, as well as random heroes, skills, items, and events. The objective is to kill the enemy team as many times as possible while surviving their attacks. Capture the flag is a map that is more objective-oriented and team-based than other maps. It has two flags and two bases for each team, as well as neutral creeps, runes, towers, and shrines. The objective is to capture the enemy's flag and bring it back to your base while preventing them from doing the same. Hero defense is a map that is more cooperative and defensive than other maps. It has one base for each team, as well as waves of enemy creeps, bosses, and towers. The objective is to defend your base from the enemy's onslaught while destroying their towers.

        -

        To choose a map, you need to consider several factors, such as your preference, playstyle, skill level, etc. You also need to learn how to play each map effectively and how to adapt to different scenarios. You can find more information about each map on the website or in the game.

        -

        Community

        -

        Join a vibrant community of players from all over Southeast Asia

        -

        A fifth feature of Garena HoN is the community. Community is the collective term for the players who play Garena HoN and interact with each other. There are millions of players from all over Southeast Asia who play Garena HoN every day and form a vibrant community. You can join this community and enjoy various benefits.

        -

        Some of the benefits of joining the community are: chat, clans, tournaments, events, and rewards. Chat is a feature that allows you to communicate with other players using voice chat or text chat. You can chat with your friends, teammates, opponents, or anyone else in the game. Clans are groups of players who share a common interest or goal in the game. You can join or create a clan and invite other players to join you. You can also participate in clan wars or clan events with your clan members. Tournaments are competitions that pit players or teams against each other for prizes and glory. You can join or create a tournament and compete with other players or teams in various modes and maps. Events are special occasions that offer unique gameplay or rewards for players. You can join or create an event and enjoy different challenges or benefits in the game. Rewards are incentives that reward players for playing the game or completing certain tasks. You can earn rewards such as gold coins, silver coins, gems, skins, avatars, etc.

        -

        To join the community, you need to be friendly, respectful, and cooperative with other players. You also need to follow the rules and regulations of the game and avoid any misconduct or abuse. You can find more information about the community on the website or in the game.

        -

        Tips and Tricks for Garena HoN

        -

        In this section, we will give you some tips and tricks that can help you improve your gameplay and enjoy Garena HoN more.

        -

        Tip 1: Learn the basics of the game

        -

        The first tip is to learn the basics of the game such as controls, objectives, mechanics, roles, etc. These are essential knowledge that you need to have before playing any match or mode in Garena HoN. You can learn these by reading guides, watching tutorials, or asking other players.

        -

        Tip 2: Practice with bots or friends

        -

        The second tip is to practice with bots or friends to improve your skills and confidence. Bots are artificial intelligence that simulate real players in the game. You can play with bots in any mode or map to test your abilities or try new strategies. Friends are real players that you know personally or online in the game. You can play with friends in any mode or map to have fun or cooperate with each other.

        -

        Tip 3: Watch replays or streams of other players

        -

        The third tip is to watch replays or streams of other players to learn from their mistakes and strategies. Replays are recordings of past matches that you can watch in the game. You can watch replays of your own matches or other players' matches to analyze your performance or observe their moves. Streams are live broadcasts of current matches that you can watch online. You can watch streams of professional players or popular streamers to get tips or inspiration from their gameplay.

        -

        Tip 4: Communicate with your team

        -

        The fourth tip is to communicate with your team using voice chat or text chat to coordinate your actions and plan your moves. Communication is key to winning any match or mode in Garena HoN, especially when you are playing with strangers or in a competitive setting. You can communicate with your team by using voice chat or text chat in the game. You can also use pings, signals, or emoticons to convey your messages more quickly and easily.

        -

        Tip 5: Have fun and be respectful

        -

        The fifth and final tip is to have fun and be respectful to other players by following the rules, avoiding toxicity, and being a good sport. Garena HoN is a game that is meant to be enjoyed and shared with others, not a source of stress or anger. You can have fun and be respectful by following the rules and regulations of the game and avoiding any misconduct or abuse. You can also avoid toxicity and be a good sport by not flaming, trolling, feeding, afking, or griefing other players. You can also compliment, encourage, thank, or apologize to other players when appropriate.

        -

        Conclusion

        -

        Garena HoN is a free-to-play MOBA game that features over 100 heroes, various items, modes, maps, and a vibrant community of players from all over Southeast Asia. In this article, we have shown you how to download Garena HoN and enjoy this exciting game. We have also given you an overview of the main features of Garena HoN and how they can enhance your gaming experience. We have also given you some tips and tricks that can help you improve your gameplay and enjoy Garena HoN more. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

        -

        FAQs

        -

        Here are some frequently asked questions about Garena HoN:

        -

        Q: Is Garena HoN free to play?

        -

        A: Yes, Garena HoN is free to play. You do not need to pay anything to download or play the game. However, you can buy optional items such as gold coins, gems, skins, avatars, etc. using real money if you want to.

        -

        Q: Is Garena HoN available in other regions?

        -

        A: Yes, Garena HoN is available in other regions such as North America, Europe, China, etc. However, you may need to download a different version of the game client or use a different server depending on your region.

        -

        Q: Is Garena HoN compatible with my device?

        -

        A: Garena HoN is compatible with Windows and Mac devices. However, you need to make sure that your device meets the minimum system requirements for the game. You can check the system requirements on the website or in the game.

        -

        Q: How can I report a bug or a problem in the game?

        -

        A: You can report a bug or a problem in the game by using the report function in the game or by contacting the support team on the website or in the game. You can also check the forums or the FAQ section on the website for possible solutions.

        -

        Q: How can I get better at Garena HoN?

        -

        A: You can get better at Garena HoN by practicing regularly, learning from other players, communicating with your team, and having fun. You can also check out our tips and tricks section in this article for more advice.

        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/PES 2018 APK OBB Data How to Get It on Your Android Phone.md b/spaces/congsaPfin/Manga-OCR/logs/PES 2018 APK OBB Data How to Get It on Your Android Phone.md deleted file mode 100644 index adac30444ea2ecf41e3a369aef01c5c8a39dc044..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/PES 2018 APK OBB Data How to Get It on Your Android Phone.md +++ /dev/null @@ -1,210 +0,0 @@ - -

        How to Download PES 2018 on Android

        -

        If you are a fan of football games, you might have heard of PES 2018, one of the most popular and realistic football games ever made. But did you know that you can also play it on your Android device? In this article, we will show you how to download PES 2018 on Android from the official site or other sources, as well as how to play it and enjoy its amazing features. Let's get started!

        -

        how to download pes 2018 on android


        Download »»» https://urlca.com/2uOfbj



        -

        What is PES 2018?

        -

PES 2018 is a football simulation game developed by Konami, the same company behind other famous games like Metal Gear Solid and Silent Hill. It is the latest installment in the Pro Evolution Soccer series, which has been running since 1995. PES 2018 was released in September 2017 for various platforms, including Windows, PlayStation, Xbox, iOS, and Android.

        -

        PES 2018 offers a realistic and immersive football experience, with over 10,000 players from more than 100 countries, licensed teams and leagues, authentic stadiums and kits, and stunning graphics and animations. You can build your dream team from scratch, compete with players from around the world online or locally, or manage your own club in various modes. You can also play with legendary players like Maradona, Beckham, Zico, or Ronaldinho.

        -

        Why Download PES 2018 on Android?

        -

        There are many reasons why you should download PES 2018 on your Android device. Here are some of them:

        -
          -
        • You can play it anytime, anywhere. You don't need a console or a PC to enjoy PES 2018. You can just grab your phone or tablet and start playing whenever you want.
        • -
        • You can save space and money. You don't need to buy a physical copy or download a large file to play PES 2018. The game is free to download from the official site or other sources (more on that later), and it only takes up about 1.5 GB of storage on your device.
        • -
        • You can customize it to your liking. You can adjust the settings, controls, graphics, and sound to suit your preferences and device performance. You can also download additional data, such as commentary languages, team logos, or player faces.
        • -
        • You can have fun with your friends. You can play PES 2018 with your friends online or offline, using Bluetooth or Wi-Fi. You can also chat with them, send them messages, or invite them to join your team.
        • -
        -

        As you can see, downloading PES 2018 on Android is a great idea if you love football and want to have a portable and versatile game that you can enjoy anytime, anywhere.

        -

        How to Download PES 2018 on Android from the Official Site

        -

        The easiest and safest way to download PES 2018 on Android is from the official site of Konami. Here are the steps you need to follow:

        -

        Requirements and Compatibility

        -

        Before you download PES 2018 on Android, you need to make sure that your device meets the minimum and recommended requirements and is compatible with the game. Here is a table that shows the requirements and compatibility for PES 2018 on Android:

        -


        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        RequirementMinimumRecommended
        Operating SystemAndroid 5.0 (Lollipop)Android 8.0 (Oreo) or higher
        Processor1.5 GHz quad-core2.0 GHz octa-core or higher
        RAM1 GB2 GB or higher
        Storage1.5 GB free space2 GB free space or higher
        Screen Resolution800 x 480 pixels1280 x 720 pixels or higher
        Internet ConnectionRequired for some features and modesRequired for some features and modes
        Compatible Devices: Most devices from Samsung, Huawei, LG, Sony, Motorola, Xiaomi, OnePlus, Google, etc.
        Incompatible Devices: Some devices from HTC, Lenovo, Asus, Acer, Alcatel, ZTE, etc.
        Note: You can check the compatibility of your device by visiting the Google Play Store page of PES 2018 and seeing if it says "This app is compatible with your device" or "This app is incompatible with your device".
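
        If your phone is connected to a computer with USB debugging enabled, you can read the two numbers that matter most in this table (Android version and free storage) over adb instead of digging through the settings app. This is only a rough sketch under those assumptions, not an official compatibility check; the df parsing in particular varies between devices.

```python
# Minimal sketch: read a connected phone's Android version and free storage over
# adb and compare them with the table above. Assumes adb is installed, USB
# debugging is enabled, and exactly one device is connected.
import subprocess

def adb(*args: str) -> str:
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout.strip()

android_version = adb("shell", "getprop", "ro.build.version.release")
df_output = adb("shell", "df", "/data")      # last line holds the /data numbers
fields = df_output.splitlines()[-1].split()
free_gb = int(fields[3]) / (1024 ** 2)       # "Available" column, in 1K blocks on most devices

print(f"Android version: {android_version} (minimum 5.0, recommended 8.0 or higher)")
print(f"Free storage on /data: {free_gb:.1f} GB (minimum 1.5 GB, recommended 2 GB or more)")
```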
        -

        Download and Installation Process

        -

        If your device meets the requirements and is compatible with PES 2018, you can proceed to download and install the game from the official site of Konami. Here are the steps you need to follow:

        -
          -
        1. Go to the official site of Konami and click on the "Download" button.
        2. -
        3. You will be redirected to the Google Play Store page of PES 2018. Click on the "Install" button.
        4. -
        5. The game will start downloading on your device. You can see the progress and size of the download on the screen.
        6. -
        7. Once the download is complete, the game will start installing automatically. You can see the progress and status of the installation on the screen.
        8. -
        9. When the installation is complete, you will see a "Open" button. Click on it to launch the game.
        10. -
        11. The game will ask you to accept the terms of service and privacy policy. Read them carefully and click on "Agree" if you agree with them.
        12. -
        13. The game will ask you to download additional data for optimal performance. You can choose to download it now or later. We recommend downloading it now if you have a stable and fast internet connection.
        14. -
        15. The game will start downloading the additional data on your device. You can see the progress and size of the download on the screen.
        16. -
        17. Once the download is complete, the game will start initializing and loading. You can see the progress and status of the initialization and loading on the screen.
        18. -
        19. The game will ask you to choose your region and language. Select them according to your preference and click on "OK".
        20. -
        21. The game will ask you to create or link your Konami ID. A Konami ID is a free account that allows you to access various features and services of Konami games, such as saving your progress, transferring your data, or receiving rewards. You can create a new Konami ID or link an existing one. You can also skip this step and do it later.
        22. -
        23. The game will ask you to choose your team name, emblem, and uniform. You can customize them according to your preference or use the default ones. You can also change them later.
        24. -
        25. The game will ask you to choose your initial players. You can select them from a random pool of players with different ratings and positions. You can also trade them later.
        26. -
        27. The game will ask you to choose your manager. You can select them from a list of real-life managers with different tactics and formations. You can also change them later.
        28. -
        29. The game will ask you to choose your difficulty level. You can select from beginner, amateur, regular, professional, or superstar. You can also change it later.
        30. -
        31. The game will give you a tutorial on how to play PES 2018 on Android. You can learn the basics of the game, such as passing, shooting, dribbling, defending, and scoring. You can also skip the tutorial and start playing right away.
        32. -
        -

        Congratulations! You have successfully downloaded and installed PES 2018 on Android from the official site of Konami. Now you can enjoy the game and have fun!
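
        If your phone happens to be connected to a computer, you can also confirm the installation over adb with a quick sketch like the one below. The "konami" filter and the launch command are assumptions for illustration; the real package name may differ, so read the printed list first.

```python
# Minimal sketch: confirm over adb that the game is installed, then fire its launcher
# intent. The "konami" keyword filter is an assumption -- inspect the printed list
# and adjust it if no package matches.
import subprocess

def adb(*args: str) -> str:
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout

packages = [line.removeprefix("package:").strip()
            for line in adb("shell", "pm", "list", "packages").splitlines()
            if "konami" in line.lower()]
print("Konami packages found:", packages or "none")

if packages:
    # Send one launcher event to the first match to start the game.
    adb("shell", "monkey", "-p", packages[0], "-c", "android.intent.category.LAUNCHER", "1")
```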

        -

        How to Download PES 2018 on Android from Other Sources

        -

        Another way to download PES 2018 on Android is from other sources, such as unofficial or third-party sites or apps. However, this method is not recommended for several reasons. Here are some of them:

        -
          -
        • You may download a fake or corrupted file that may harm your device or steal your data.
        • -
        • You may download a modified or hacked version of the game that may not work properly or cause errors.
        • -
        • You may download an outdated or incompatible version of the game that may not run smoothly or support all the features.
        • -
        • You may violate the terms of service and privacy policy of Konami and risk losing your account or facing legal action.
        • -
        • You may miss out on the updates, patches, and events that Konami provides for the official version of the game.
        • -
        -

        As you can see, downloading PES 2018 on Android from other sources is risky and disadvantageous. We advise you to avoid this method and stick to the official site of Konami.

        -

        Alternative Sources

        -

        If you still want to download PES 2018 on Android from other sources, despite the warnings and drawbacks, here are some of the alternative sources that claim to offer PES 2018 for Android, with their pros and cons:

        - - - - - - - - - - - - - - - - - - - - - -
| Source | Pros | Cons |
| --- | --- | --- |
| Aptoide | A popular alternative app store that offers various apps and games for free. | Many apps and games are fake, modified, or infected with malware. The quality and security of the apps and games are not guaranteed. |
| APKPure | A reliable source of APK files that are verified by SHA1 signatures. It also offers updates and patches for some apps and games. | Some APK files may not be compatible with your device or region. Some APK files may not include additional data or resources that are required for some apps and games. |
| Ocean of APK | A website that offers various APK files for free download. It also provides screenshots and descriptions for some apps and games. | Some APK files may be outdated or corrupted. Some APK files may contain ads or viruses that may interfere with your device performance or security. |
        -

        Precautions and Safety Measures

        -

        If you decide to download PES 2018 on Android from other sources, you need to take some precautions and safety measures before doing so. Here are some of them:

        -
          -
        • Check the reviews and ratings of the source and the app or game. Look for positive feedback from other users who have downloaded it successfully and safely.
        • -
        • Check the permissions and access rights of the app or game. Look for suspicious or unnecessary permissions that may compromise your device functionality or privacy.
        • -
        • Check the antivirus scan results of the app or game. Look for any signs of malware or viruses that may harm your device or data.
        • -
        • Check the size and version of the app or game. Look for any discrepancies or inconsistencies that may indicate a fake or modified file.
        • -
        • Backup your device data and settings before downloading and installing the app or game. In case something goes wrong, you can restore your device to its previous state.
        • -
        • Use a VPN or proxy service to hide your IP address and location when downloading and installing the app or game. This can help you avoid any regional restrictions or legal issues.
        • -
        -

        By following these precautions and safety measures, you can reduce the risks and dangers of downloading PES 2018 on Android from other sources. However, you should still be careful and vigilant, as there is no guarantee that the app or game will work properly or safely.
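        If you are comfortable running a little Python on your computer, the size-and-checksum check from the list above can be automated before you copy the APK to your phone. The sketch below is only an illustration: the filename, expected size, and SHA-256 value are placeholders, and you would substitute whatever the download source actually publishes.

```python
import hashlib
from pathlib import Path

# All three values are placeholders: use the filename you saved and the size
# and SHA-256 checksum published by the source you downloaded the APK from.
APK_PATH = Path("pes2018.apk")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"
EXPECTED_SIZE_MB = 45.0      # size advertised by the download page
SIZE_TOLERANCE_MB = 2.0      # allow a small difference

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

size_mb = APK_PATH.stat().st_size / (1024 * 1024)
checksum = sha256_of(APK_PATH)
print(f"Size: {size_mb:.1f} MB")
print(f"SHA-256: {checksum}")

if abs(size_mb - EXPECTED_SIZE_MB) > SIZE_TOLERANCE_MB:
    print("WARNING: the file size differs noticeably from the advertised size.")
if checksum.lower() != EXPECTED_SHA256.lower():
    print("WARNING: the checksum does not match the published value - do not install this file.")
```

        A matching checksum only proves the file arrived intact and unchanged; it does not tell you anything about whether the source itself is trustworthy.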

        -

        How to Play PES 2018 on Android

        -

        Now that you have downloaded and installed PES 2018 on Android, you are ready to play it and have fun. Here are some tips and tricks on how to play PES 2018 on Android:

        -

        Gameplay Modes

        -

        PES 2018 offers various gameplay modes that suit different preferences and styles. Here are some of them:

        -
          -
        • Online: You can play online matches with players from around the world, using your own team or a random team. You can also join online tournaments and events, and earn rewards and rankings.
        • -
        • Local: You can play local matches with your friends, using Bluetooth or Wi-Fi. You can also create your own tournaments and leagues, and customize the rules and settings.
        • -
        • Campaign: You can play a series of matches against different teams, with increasing difficulty and challenge. You can also earn coins and GP (game points) that you can use to buy players, managers, or items.
        • -
        • Manager Mode: You can manage your own club, from signing players and staff, to setting tactics and formations, to managing finances and facilities. You can also compete with other clubs in various competitions and leagues.
        • -
        -

        You can switch between different modes by tapping on the menu icon on the top left corner of the screen, and selecting the mode you want to play.

        -

        Controls

        -

        PES 2018 offers two types of controls that you can use to play the game: classic and advanced. Here are the differences between them:

        -
          -
        • Classic: This is the traditional control scheme that uses a virtual joystick and buttons on the screen. You can move your players with the joystick, and perform actions like passing, shooting, dribbling, or tackling with the buttons.
        • -
        • Advanced: This is the new control scheme that uses gestures and taps on the screen. You can move your players by tapping on the screen, and perform actions like passing, shooting, dribbling, or tackling by swiping or flicking on the screen.
        • -
        -

        You can switch between different controls by tapping on the settings icon on the top right corner of the screen, and selecting the control type you want to use. You can also adjust the sensitivity, position, size, and transparency of the controls according to your preference.

        -

        Features

        -

        PES 2018 has many features that make it stand out from other football games. Here are some of them:

        -
          -
        • Realistic Graphics: PES 2018 uses the Unreal Engine 4 to create stunning graphics that simulate real-life football. You can see the details of the players, stadiums, kits, weather, lighting, shadows, and animations.
        • -
        • Sound Effects: PES 2018 uses high-quality sound effects that enhance the atmosphere of the game. You can hear the cheers of the crowd, the whistles of the referee, the kicks of the ball, and the collisions of the players.
        • -
        • Commentary: PES 2018 has commentary from famous commentators like Peter Drury and Jim Beglin. They provide insightful and entertaining commentary that matches the situation and mood of the game.
        • -
        • Licensed Teams and Players: PES 2018 has over 10,000 players from more than 100 countries, licensed teams and leagues, authentic stadiums and kits, and official partnerships with clubs such as Barcelona, Liverpool, Borussia Dortmund, and Inter Milan.
        • -
        -

        These features make PES 2018 a realistic and immersive football game that you will love to play.

        -

        Conclusion

        -

        In this article, we have shown you how to download PES 2018 on Android from the official site or other sources, as well as how to play it and enjoy its amazing features. We hope that you have found this article helpful and informative. Now you can download PES 2018 on Android and have fun with one of the best football games ever made. You will not regret it!

        FAQs

        -

        Here are some frequently asked questions about PES 2018 on Android, with brief answers:

        -
          -
        1. Q: How much data does PES 2018 use?
        2. -
        3. A: PES 2018 uses about 50 MB of data per hour when playing online matches. You can reduce the data usage by lowering the graphics quality or playing offline modes.
        4. -
        5. Q: How can I update PES 2018 on Android?
        6. -
        7. A: You can update PES 2018 on Android by going to the Google Play Store and tapping on the "Update" button. You can also enable the auto-update feature to get the latest updates automatically.
        8. -
        9. Q: How can I transfer my PES 2018 data to another device?
        10. -
        11. A: You can transfer your PES 2018 data to another device by linking your Konami ID to your game account. Then, you can log in with your Konami ID on the other device and restore your data.
        12. -
        13. Q: How can I get more coins and GP in PES 2018?
        14. -
        15. A: You can get more coins and GP in PES 2018 by playing various modes and events, completing achievements and challenges, or buying them with real money.
        16. -
        17. Q: How can I contact Konami for support or feedback?
        18. -
        19. A: You can contact Konami for support or feedback by going to the settings menu and tapping on the "Contact" button. You can also visit their official website or social media pages for more information.
        20. -

        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/SAKURA School Simulator China A Unique Simulation Game with Various Missions and Characters.md b/spaces/congsaPfin/Manga-OCR/logs/SAKURA School Simulator China A Unique Simulation Game with Various Missions and Characters.md deleted file mode 100644 index fabd23b71bd7dcfc170fab09b609837266afb650..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/SAKURA School Simulator China A Unique Simulation Game with Various Missions and Characters.md +++ /dev/null @@ -1,141 +0,0 @@ - -

        Download Sakura School Simulator Versi China APK: A Guide for Beginners

        -

        If you are a fan of simulation games that let you experience the life of a Japanese high school student, you might have heard of Sakura School Simulator. This game, developed by Garusoft Development Inc., has been downloaded over 100 million times on the Google Play Store and has received mostly positive reviews from players. However, there is a catch: the game only supports single-player mode, meaning you cannot interact with other players online.

        -

        That is why some players have turned to the Chinese version of the game, which offers a multiplayer mode that allows you to play with your friends or strangers online. This version is not available on the official platforms, but can be downloaded from third-party sources as an APK file. But before you do that, there are some things you need to know about this modded version of the game, such as its features, risks, and alternatives. In this article, we will provide you with a comprehensive guide on how to download and play Sakura School Simulator Versi China APK safely and legally.

        -

        download sakura school simulator versi china apk


        Download File ->>> https://urlca.com/2uOc5X



        -

        What is Sakura School Simulator?

        -

        Sakura School Simulator is a game that simulates the daily life of a high school student in a fictional town called Sakura. You can choose from four different characters (two male and two female) and customize their appearance, clothes, and accessories. You can also control and switch between them at any time.

        -

        The game gives you a lot of freedom to explore the town and interact with various NPCs (non-player characters). You can make friends, enemies, or lovers with them, depending on your actions and choices. You can also join clubs, attend classes, take exams, go shopping, eat at restaurants, visit temples, and more. There are also missions and quests that you can complete to earn money and rewards.

        -

        However, the game is not all about realism and normalcy. There are also many bizarre and hilarious elements that make the game more fun and exciting. For example, you can borrow weapons from the Yakuza office and go on a rampage, or use jetpacks and shrink rays to fly or become tiny. You can also encounter giant monsters like Big Alice or UFOs that will attack the town. The game has no end or death concept, so you can do whatever you want without any consequences.

        -

        Features of the game

        -

        Some of the features that make Sakura School Simulator stand out from other simulation games are:

        -
          -
        • Simple and interactive controls: You can easily move around and perform actions using the virtual joystick and buttons on the screen. You can also adjust the camera angle and zoom in or out using gestures.
        • -
        • Help option: If you have any questions or problems while playing the game, you can access the help menu that provides detailed explanations and tips for various aspects of the game.
        • -
        • Exciting school life: You can enjoy a realistic and immersive school life experience with various activities and events. You can also influence the story and outcome of the game with your decisions and interactions.
        • -
        • Epic rampage experiences: You can unleash your wild side and cause chaos and destruction in the town with various weapons and gadgets. You can also fight against enemies or other players using martial arts or firearms.
        • -
        • Choices and customization: You can personalize your character's appearance, clothes, accessories, hairstyle, eye color, etc. You can also choose your own name and gender.
        • -
        • Multi-character gameplay: You can play as up to four different characters in the same stage and switch between them anytime. Each character has their own personality, preferences, skills, and relationships.
        • -
        How to download and install the Chinese version

          -

          If you want to play the multiplayer mode of Sakura School Simulator, you will need to download and install the Chinese version of the game, which is also known as Sakura School Simulator Versi China APK. This version is not available on the official platforms like Google Play Store or App Store, but can be found on some third-party websites that offer modded APKs.

          -

          An APK file is an Android application package file that contains all the files and data needed to run an app on an Android device. However, not all APK files are safe and legal to use, especially those that are modified or hacked by unauthorized developers. Therefore, you should be careful and cautious when downloading and installing any APK file from unknown sources.

          -
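          Because an APK is really just a ZIP archive, you can peek inside one on a computer before installing it. Here is a minimal Python sketch, assuming a hypothetical filename for the downloaded file; it only lists the archive contents and checks that the usual pieces of an Android app are present, so treat it as a quick sanity check rather than a real malware scan.

```python
import zipfile

# Hypothetical filename: point this at the APK you actually downloaded.
APK_PATH = "sakura_school_simulator_china.apk"

with zipfile.ZipFile(APK_PATH) as apk:
    names = apk.namelist()
    print(f"{len(names)} entries in the archive")

    # A normal Android app package contains a binary manifest and compiled code.
    for required in ("AndroidManifest.xml", "classes.dex"):
        print(required, "-", "present" if required in names else "MISSING")

    # v1 (JAR) signature files live under META-INF/; newer APKs may rely on the
    # v2/v3 signing block instead, so a missing META-INF entry alone is not proof of tampering.
    has_meta_inf = any(name.startswith("META-INF/") for name in names)
    print("META-INF signature files:", "present" if has_meta_inf else "not found")

    # testzip() returns the name of the first corrupted entry, or None if the archive is intact.
    bad_entry = apk.testzip()
    print("Archive integrity:", "OK" if bad_entry is None else f"corrupted entry: {bad_entry}")
```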

          Here are the steps to download and install Sakura School Simulator Versi China APK on your Android device:

          -
            -
          1. First, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the official platforms.
          2. -
          3. Next, you need to find a reliable and trustworthy website that offers the Sakura School Simulator Versi China APK file. You can search for it on Google or use some of the links below:
          4. - -
          5. Once you have found the website, click on the download button and wait for the APK file to be downloaded to your device.
          6. -
          7. After the download is complete, locate the APK file in your device's file manager and tap on it to start the installation process.
          8. -
          9. Follow the instructions on the screen and grant the necessary permissions to the app.
          10. -
          11. When the installation is done, you can launch the app from your home screen or app drawer and enjoy playing Sakura School Simulator Versi China APK.
          12. -
          -

          Risks and dangers of using modded APKs

          -

          While downloading and installing Sakura School Simulator Versi China APK might seem tempting and exciting, you should also be aware of the potential risks and dangers that come with using modded APKs. These include:

          -

          download sakura school simulator china multiplayer version
          -sakura school simulator chinese version mod apk
          -how to install sakura school simulator china terbaru 2023
          -sakura school simulator china tutorial youtube
          -sakura school simulator china jalantikus gaming
          -sakura school simulator china garusoft development inc
          -sakura school simulator china mod features
          -sakura school simulator china apk download link
          -sakura school simulator china gameplay review
          -sakura school simulator china latest update 2023
          -sakura school simulator china vs original version
          -sakura school simulator china server status
          -sakura school simulator china online mode
          -sakura school simulator china android 6.0 ke atas
          -sakura school simulator china rating 4.5/5.0 google play
          -sakura school simulator china 178mb size
          -sakura school simulator china daystar music tracks
          -sakura school simulator china sugar cookie lemon cake
          -sakura school simulator china disclaimer and risks
          -sakura school simulator china simulation game anak sekolah jepang
          -download sakura town multiplayer mod apk
          -sakura town chinese version free download
          -how to play sakura town with friends online
          -sakura town mod apk unlimited money and gems
          -sakura town gameplay walkthrough part 1
          -sakura town garusoft development inc official website
          -sakura town new scientist physics article
          -sakura town nuclear fusion reactor experiment
          -sakura town 100 million degrees celsius for 30 seconds
          -sakura town holy grail fusion experiment mini sun
          -download game simulasi sekolah jepang multiplayer mod apk
          -game simulasi sekolah jepang multiplayer online gratis
          -cara main game simulasi sekolah jepang multiplayer bersama teman
          -game simulasi sekolah jepang multiplayer fitur unggulan dan kelebihan
          -game simulasi sekolah jepang multiplayer review dan rating pengguna
          -game simulasi sekolah jepang multiplayer link download terbaru 2023
          -game simulasi sekolah jepang multiplayer developer garusoft development inc
          -game simulasi sekolah jepang multiplayer ukuran dan spesifikasi minimal
          -game simulasi sekolah jepang multiplayer mode cerita dan misi
          -game simulasi sekolah jepang multiplayer soundtrack dan efek suara

          -

          Legal issues

          -

          Modded APKs are usually created by unauthorized developers who do not have permission or a license from the original developers or publishers of the app. This means that they violate the intellectual property rights and terms of service of the app. Therefore, using modded APKs can be considered piracy or illegal activity, which can result in legal action or penalties from the authorities or the app owners.

          -

          Security threats

          -

          Modded APKs are also prone to malware, viruses, spyware, adware, or other malicious programs that can harm your device or steal your personal information. These programs can be hidden or embedded in the APK file or in the app itself, and can run in the background without your knowledge or consent. They can also access your camera, microphone, contacts, messages, location, or other sensitive data on your device. Therefore, using modded APKs can compromise your device's security and privacy.

          -

          Data privacy concerns

          -

          Modded APKs can also collect your data and send it to third-party servers or advertisers without your permission or awareness. This data can include your browsing history, online behavior, preferences, interests, or other personal information. This data can be used for various purposes such as targeted advertising, marketing research, analytics, or even identity theft. Therefore, using modded APKs can expose your data to unknown and untrusted parties.

          -

          Alternatives to Sakura School Simulator Versi China APK

          -

          If you want to avoid the risks and dangers of using modded APKs, you should consider some alternatives to Sakura School Simulator Versi China APK. These include:

          -

          Original version from Google Play Store or App Store

          -

          The safest and most legal way to play Sakura School Simulator is to download and install the original version from the official platforms like Google Play Store or App Store. This way, you can enjoy the game without worrying about malware, viruses, legal issues, or data privacy concerns. You can also get regular updates and support from the developers. However, you will not be able to play the multiplayer mode of the game, as it is only available in the Chinese version.

          -

          Other similar games

          -

          If you are looking for other simulation games that offer a multiplayer mode and a similar gameplay to Sakura School Simulator, you can try some of these games:

          | Game | Description |
          | --- | --- |
          | Yandere School Simulator | This game is inspired by the popular Yandere Simulator game, where you play as a girl who is obsessed with her crush and will do anything to eliminate her rivals. You can explore the school, interact with other students, and use various weapons and tactics to eliminate your enemies. You can also play online with other players and compete or cooperate with them. |
          | High School Simulator 2021 | This game is similar to Sakura School Simulator, but with more realistic graphics and physics. You can choose from different characters and customize their appearance and clothes. You can also enjoy various activities and events in the school and the town. You can also play online with other players and chat with them. |
          | School Girls Simulator | This game is also similar to Sakura School Simulator, but with more focus on the female characters. You can choose from 10 different girls and customize their appearance and clothes. You can also enjoy various activities and events in the school and the town. You can also play online with other players and chat with them. |

          Conclusion

          -

          Sakura School Simulator is a fun and entertaining simulation game that lets you experience the life of a Japanese high school student. However, if you want to play the multiplayer mode of the game, you will need to download and install the Chinese version of the game, which is also known as Sakura School Simulator Versi China APK. This version is not available on the official platforms, but can be downloaded from third-party sources as an APK file.

          -

          However, before you do that, you should be aware of the potential risks and dangers that come with using modded APKs. These include legal issues, security threats, and data privacy concerns. Therefore, you should be careful and cautious when downloading and installing any APK file from unknown sources.

          -

          If you want to avoid these risks and dangers, you should consider some alternatives to Sakura School Simulator Versi China APK. These include the original version from Google Play Store or App Store, or other similar games that offer a multiplayer mode and a similar gameplay to Sakura School Simulator.

          -

          FAQs

          -
            -
          • Q: Is Sakura School Simulator Versi China APK safe to use?
          • -
          • A: Sakura School Simulator Versi China APK is not safe to use, as it is a modded APK file that can contain malware, viruses, or other malicious programs that can harm your device or steal your personal information. It can also violate the intellectual property rights and terms of service of the original app, which can result in legal actions or penalties from the authorities or the app owners.
          • -
          • Q: How can I play Sakura School Simulator online with other players?
          • -
          • A: The only way to play Sakura School Simulator online with other players is to download and install the Chinese version of the game, which offers a multiplayer mode that allows you to play with your friends or strangers online. However, this version is not available on the official platforms, but can be downloaded from third-party sources as an APK file.
          • -
          • Q: What are some of the differences between Sakura School Simulator Versi China APK and the original version?
          • -
          • A: Some of the differences between Sakura School Simulator Versi China APK and the original version are:
          • -
              -
            • The Chinese version has a multiplayer mode that allows you to play online with other players, while the original version only supports single-player mode.
            • -
            • The Chinese version has more characters, clothes, accessories, weapons, gadgets, vehicles, missions, quests, events, and features than the original version.
            • -
            • The Chinese version has different graphics, sounds, languages, currencies, ads, and servers than the original version.
            • -
            -
          • Q: What are some of the benefits of using Sakura School Simulator Versi China APK?
          • -
          • A: Some of the benefits of using Sakura School Simulator Versi China APK are:
          • -
              -
            • You can enjoy a more fun and exciting gameplay experience with more options and possibilities than the original version.
            • -
            • You can interact with other players online and make friends or enemies with them.
            • -
            • You can access new content and updates that are not available in the original version.
            • -
            -
          • Q: What are some of the drawbacks of using Sakura School Simulator Versi China APK?
          • -
          • A: Some of the drawbacks of using Sakura School Simulator Versi China APK are:
          • -
              -
            • You can expose your device and data to malware, viruses, or other malicious programs that can harm your device or steal your personal information.
            • -
            • You can violate the intellectual property rights and terms of service of the original app, which can result in legal actions or penalties from the authorities or the app owners.
            • -
            • You can compromise your data privacy and security, as the app can collect and send your data to third-party servers or advertisers without your permission or awareness.
            • -
            -

          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Scary Teacher 3D APK How to Download and Play on Android.md b/spaces/congsaPfin/Manga-OCR/logs/Scary Teacher 3D APK How to Download and Play on Android.md deleted file mode 100644 index 9fbe38e09ac148cf34b22ddaf4175b156457f223..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Scary Teacher 3D APK How to Download and Play on Android.md +++ /dev/null @@ -1,126 +0,0 @@ -
          -

          Scary Teacher 3D Installer APK: How to Download and Play the Game on Your Android Device

          -

          Do you enjoy horror games with a twist of humor? Do you want to take revenge on a cruel teacher who tortures her students? If yes, then you should try Scary Teacher 3D, a free Android-based adventure game with scary themes. In this article, we will tell you what Scary Teacher 3D is, how to download and install it on your Android device, and how to play it with some tips and tricks.

          -

          scary teacher 3d installer apk


          Download Zip ---> https://urlca.com/2uO9eN



          -

          What is Scary Teacher 3D?

          -

          Scary Teacher 3D is a game developed by Z & K Games, a studio that specializes in creating fun and engaging games for mobile platforms. The game was released in 2020 and has received over 100 million downloads and positive reviews from players around the world. The game is rated for ages 12 and up, as it contains moderate violence, horror, and crude humor.

          -

          The plot and gameplay of Scary Teacher 3D

          -

          The game revolves around the story of Miss T, a sadistic teacher who loves to torture her students in various ways. She has moved into a new house near your school, and you have decided to teach her a lesson by playing some cruel pranks on her. You will have to sneak into her house, find clues, set up traps, and execute your plan without getting caught by Miss T.

          -

          The game has over 20 levels, each with a different prank and a different room to explore. You will have to use your creativity, logic, and stealth skills to complete each level successfully. You will also have to face Miss T's pet dog, her creepy boyfriend, and other obstacles along the way.

          -

          The features and graphics of Scary Teacher 3D

          -

          Scary Teacher 3D is a game that combines horror, comedy, and adventure elements. The game has many features that make it fun and exciting to play, such as:

          -
            -
          • A realistic and interactive 3D environment with high-quality graphics and sound effects
          • -
          • A variety of items and tools to use for your pranks
          • -
          • A dynamic AI system that makes Miss T react differently depending on your actions
          • -
          • A user-friendly interface and easy controls
          • -
          • A reward system that lets you collect coins and stars to unlock new levels and items
          • -
          • A customization option that lets you change your character's appearance
          • -
          • A multiplayer mode that lets you play with your friends online
          • -
          -

          How to download and install Scary Teacher 3D APK on your Android device?

          -

          If you want to install Scary Teacher 3D on your Android device without going through an app store, you will need to download and install the APK file of the game. An APK file is an application package file that contains all the data and resources needed to run an app on an Android device. However, because the APK is not coming from the Google Play Store, you will need to follow some steps to download and install it safely and correctly. Here are the steps you need to follow:

          -

          Step 1: Enable unknown sources on your device

          -

          Before you can install any APK file on your device, you need to enable the option to allow installation from unknown sources. This option is disabled by default for security reasons, but you can easily enable it by following these steps:

          -

          scary teacher 3d game download apk
          -scary teacher 3d mod apk unlimited money
          -scary teacher 3d apk for pc
          -scary teacher 3d apk latest version
          -scary teacher 3d apk obb download
          -scary teacher 3d apk pure
          -scary teacher 3d apk hack
          -scary teacher 3d apk offline
          -scary teacher 3d apk revdl
          -scary teacher 3d apk uptodown
          -scary teacher 3d apk android 1
          -scary teacher 3d apk mod menu
          -scary teacher 3d apk all unlocked
          -scary teacher 3d apk free shopping
          -scary teacher 3d apk rexdl
          -scary teacher 3d apk no ads
          -scary teacher 3d apk old version
          -scary teacher 3d apk full version
          -scary teacher 3d apk and data
          -scary teacher 3d apk unlimited coins
          -scary teacher 3d apk mod download
          -scary teacher 3d apk online play
          -scary teacher 3d apk new update
          -scary teacher 3d apk xapk
          -scary teacher 3d apk everything unlocked
          -scary teacher 3d apk unlimited stars
          -scary teacher 3d apk mod android oyun club
          -scary teacher 3d apk without obb
          -scary teacher 3d apk happymod
          -scary teacher 3d apk mod unlimited everything
          -scary teacher 3d apk for android tv
          -scary teacher 3d apk with obb file
          -scary teacher 3d apk low mb
          -scary teacher 3d apk mod all chapters unlocked
          -scary teacher 3d apk for ios
          -scary teacher 3d apk mod free download
          -scary teacher 3d apk no verification
          -scary teacher 3d apk latest update download
          -scary teacher 3d apk mod zippyshare
          -scary teacher 3d apk for windows phone

          -
            -
          1. Go to your device's settings and tap on security or privacy
          2. -
          3. Find the option that says unknown sources or install unknown apps and toggle it on
          4. -
          5. A warning message will pop up, telling you the risks of installing apps from unknown sources. Tap on OK to confirm
          6. -
          -

          Once you have enabled this option, you can proceed to the next step.
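          If you have a computer with adb (Android Debug Bridge) set up and USB debugging enabled on your phone, you can also confirm that the device is connected and see which Android version it runs, since Android 8.0 and later handle the unknown-sources permission per app rather than through a single global switch. This Python sketch simply wraps two standard adb commands and assumes adb is already on your PATH.

```python
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its trimmed output."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Your phone should show up in this list with the state "device".
print(adb("devices"))

# Android 8.0 (API 26) and later grant the unknown-sources permission per app,
# so the exact menu path on your phone depends on which version it runs.
print("Android version:", adb("shell", "getprop", "ro.build.version.release"))
```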

          -

          Step 2: Download the Scary Teacher 3D APK file from a trusted source

          -

          The next step is to download the Scary Teacher 3D APK file from a reliable and trustworthy source. There are many websites that offer APK files for free, but not all of them are safe and secure. Some of them may contain malware, viruses, or other harmful content that can damage your device or compromise your privacy. Therefore, you should always do some research before downloading any APK file from the internet.

          -

          One of the best sources to download Scary Teacher 3D APK is [APKPure], a website that provides original and pure APK files for various Android apps and games. APKPure verifies the authenticity and integrity of every APK file it hosts, and ensures that they are free of any malicious code or content. To download Scary Teacher 3D APK from APKPure, follow these steps:

          -
            -
          1. Open your browser and go to [APKPure]
          2. -
          3. In the search bar, type Scary Teacher 3D and hit enter
          4. -
          5. From the results, select the Scary Teacher 3D app and tap on download
          6. -
          7. A pop-up window will appear, asking you to choose a download location. Select a folder where you want to save the APK file and tap on OK
          8. -
          9. The download will start automatically and may take a few minutes depending on your internet speed
          10. -
          -

          Once the download is complete, you can move on to the next step.
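          As a side note, if you would rather fetch the file on a computer first and copy it to your phone afterwards, the download itself can be scripted with nothing but the Python standard library. The URL below is a placeholder, not a real APKPure link, so replace it with the address the site actually gives you.

```python
import shutil
import urllib.request
from pathlib import Path

# Placeholder URL: replace it with the download link the site actually shows you.
APK_URL = "https://example.com/downloads/scary-teacher-3d.apk"
DEST = Path("scary-teacher-3d.apk")

# Stream the response straight to disk so a large APK never has to fit in memory.
with urllib.request.urlopen(APK_URL) as response, DEST.open("wb") as out_file:
    shutil.copyfileobj(response, out_file)

print(f"Saved {DEST.name} ({DEST.stat().st_size / (1024 * 1024):.1f} MB)")
```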

          -

          Step 3: Locate and install the Scary Teacher 3D APK file on your device

          -

          The final step is to locate and install the Scary Teacher 3D APK file on your device. To do this, follow these steps:

          -
            -
          1. Go to your device's file manager and find the folder where you saved the APK file
          2. -
          3. Tap on the APK file to open it. A prompt will appear, asking you to confirm the installation. Tap on install
          4. -
          5. The installation will begin and may take a few seconds or minutes depending on your device's performance
          6. -
          7. Once the installation is done, you will see a message that says app installed. Tap on open to launch the game or done to exit the installer
          8. -
          -

          Congratulations! You have successfully downloaded and installed Scary Teacher 3D APK on your Android device. You can now enjoy playing this game anytime and anywhere.
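          As an alternative to tapping the APK on the phone, you can also sideload it from a computer with adb. The sketch below assumes adb is installed, USB debugging is enabled on the device, and the filename matches whatever you saved in Step 2; it is only a convenience wrapper around the standard adb install command.

```python
import subprocess
from pathlib import Path

# Hypothetical filename: point this at the APK you saved in Step 2.
APK_PATH = Path("scary-teacher-3d.apk")

# "adb install -r" installs the package, replacing any copy already on the device.
result = subprocess.run(
    ["adb", "install", "-r", str(APK_PATH)],
    capture_output=True,
    text=True,
)

print(result.stdout.strip())
if result.returncode != 0:
    print("Install failed:", result.stderr.strip())
```

          If the command succeeds, the game appears in your app drawer exactly as it would after a manual on-device install.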

          -

          How to play Scary Teacher 3D on your Android device?

          -

          Now that you have installed Scary Teacher 3D on your device, you may be wondering how to play it. Don't worry, we have got you covered. Here are the basic steps you need to follow to play Scary Teacher 3D on your Android device:

          -

          Step 1: Launch the game and choose a level

          -

          To start playing Scary Teacher 3D, you need to launch the game by tapping on its icon on your home screen or app drawer. The game will load and show you its main menu, where you can see different options such as play, settings, shop, etc. Tap on play to enter the game mode.

          -

          The game will show you a map of Miss T's house, where you can see different rooms with different levels. Each level has a different prank and a different difficulty level. You can choose any level you want by tapping on it, but some levels may be locked until you complete certain requirements. You can also see how many coins and stars you have earned by playing each level.

          -

          Step 2: Explore the house and find the clues

          -

          Once you have chosen a level, the game will take you inside Miss T's house, where you will have to explore and find clues for your prank. You will have to use the joystick on the left side of the screen to move around and the buttons on the right side of the screen to interact with objects and items. You will also see a map on the top right corner of the screen that shows you the layout of the house and your location.

          -

          Your goal is to find clues that will help you set up your prank. The clues are usually hidden in drawers, cabinets, closets, or other places. You will have to search carefully and use your logic to find them. Sometimes, you may need to use certain items or tools to access the clues, such as keys, scissors, hammers, etc. You can find these items in the house or buy them from the shop using your coins.

          -

          When you find a clue, you will see a hint on the screen that tells you what to do next. For example, if you find a bottle of glue, the hint may say "use it on Miss T's chair". You will have to follow the hint and prepare your prank accordingly.

          -

          Step 3: Execute the prank and escape from Miss T

          -

          After you have found all the clues and set up your prank, you will have to wait for Miss T to come and fall for it. You will see a timer on the top left corner of the screen that shows you how much time you have left before Miss T arrives. You can use this time to hide somewhere or explore more of the house.

          -

          When Miss T arrives, she will enter the room where you have set up your prank. You will see a cutscene that shows how she reacts to your prank. Depending on the level and the prank, she may scream, cry, faint, or get angry. You will also see how many stars you have earned by completing the level. The more stars you earn, the better your prank is.

          -

          However, your job is not done yet. You still have to escape from Miss T's house without getting caught by her. She will chase you around the house and try to catch you. If she sees you, she will run towards you and try to grab you. You will have to run away from her and find an exit. You can use doors, windows, or other ways to get out of the house.

          -

          If you manage to escape from Miss T's house, you will complete the level and return to the map. You can then choose another level or replay the same level to improve your score. If Miss T catches you, you will fail the level and have to start over.

          -

          Tips and tricks for playing Scary Teacher 3D

          -

          Scary Teacher 3D is a game that requires skill, strategy, and creativity. To help you play better and enjoy more of this game, here are some tips and tricks that you can use:

          -

          Tip 1: Use the map and the hints to find the clues

          -

          The map and the hints are your best friends in this game. They will help you find the clues and set up your pranks faster and easier. The map shows you where each clue is located in each room. The hints tell you what to do with each clue once you find it. You can access both of them by tapping on their icons on the top right corner of the screen.

          -

          You should always check the map and the hints before entering a room or searching for a clue. They will save you time and effort and prevent you from getting lost or confused.

          -

          Tip 2: Be stealthy and avoid Miss T's sight

          -

          Miss T is not someone you want to mess with. She is fast, furious, and ruthless. She will not hesitate to catch you and punish you if she sees you in her house. Therefore, you should always be stealthy and avoid her sight as much as possible.

          -

          You can use various methods to hide from Miss T or distract her attention. For example, you can hide behind furniture, under beds, or in closets. You can also throw objects at her or make noises to divert her attention from your location.

          -

          You should also pay attention to Miss T's mood meter on the screen.

          Download Scary Teacher 3D on your Android device today and enjoy this amazing game. You will not regret it!

          -

          Here are some FAQs that you may have about Scary Teacher 3D:

          -

          Q: Is Scary Teacher 3D safe to play?

          -

          A: Yes, Scary Teacher 3D is safe to play, as long as you download and install it from a trusted source like APKPure. The game does not contain any harmful or inappropriate content that may harm your device or your privacy. However, you should always be careful when installing apps from unknown sources and enable the unknown sources option only when necessary.

          -

          Q: Is Scary Teacher 3D free to play?

          -

          A: Yes, Scary Teacher 3D is free to play, and you can download and install it without paying anything. However, the game does offer some in-app purchases that can enhance your gaming experience, such as buying coins, stars, or items. You can also watch ads to earn some rewards or support the developers.

          -

          Q: How can I play Scary Teacher 3D with my friends?

          -

          A: Scary Teacher 3D has a multiplayer mode that lets you play with your friends online. You can either join a random room or create your own room and invite your friends to join. You can then choose a level and a role, either as a prankster or as Miss T. The prankster's goal is to prank Miss T, while Miss T's goal is to catch the prankster. You can chat with your friends and have fun together in this mode.

          -

          Q: How can I update Scary Teacher 3D on my device?

          -

          A: Scary Teacher 3D is regularly updated by the developers to fix bugs, improve performance, and add new features and levels. To update the game on your device, you need to download and install the latest version of the APK file from APKPure or any other trusted source. You can also enable the auto-update option on APKPure to get notified when a new version is available.

          -

          Q: How can I contact the developers of Scary Teacher 3D?

          -

          A: If you have any questions, feedback, or suggestions for the developers of Scary Teacher 3D, you can contact them through their email address: zkgamesofficial@gmail.com. You can also follow them on their social media accounts: Facebook, Twitter, Instagram, and YouTube.

          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Power of Pride A Movie About Friendship Courage and Social Justice.md b/spaces/congsaPfin/Manga-OCR/logs/The Power of Pride A Movie About Friendship Courage and Social Justice.md deleted file mode 100644 index 419ed215b62bd572502d9dd923989ae0d74dfbab..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/The Power of Pride A Movie About Friendship Courage and Social Justice.md +++ /dev/null @@ -1,138 +0,0 @@ -
          -

          How to Download Movie Pride for Free

          -

          Movie Pride is a 2014 British historical comedy-drama film that tells the story of a group of lesbian and gay activists who supported the miners' strike in 1984. It's a film that celebrates the power of friendship, solidarity, and diversity in the face of adversity. If you're looking for a movie that will make you laugh, cry, and cheer, then Movie Pride is the perfect choice for you.

          -

          But how can you watch this movie without paying anything? Is it possible to download Movie Pride for free? The answer is yes, but you need to be careful and follow some tips to avoid any legal or technical issues. In this article, we'll show you what Movie Pride is about, why you should watch it, where to find it online, and how to download it safely and legally.

          -

          download movie pride


          Download ····· https://urlca.com/2uOcyG



          -

          What is Movie Pride About?

          -

          Movie Pride is based on a true story that happened in the UK in the 1980s. It depicts the unlikely alliance between a group of lesbian and gay activists and a small mining village in Wales. The film shows how they overcome their differences and prejudices to support each other during the miners' strike, which was one of the most significant social movements in British history.

          -

          The Plot

          -

          The film begins with Mark Ashton, a gay activist who realizes that the police have stopped harassing the gay community because they are busy with the miners' strike. He decides to start a bucket collection for the miners during the Gay Pride Parade in London. He forms a group called Lesbians and Gays Support the Miners (LGSM) with his friends and fellow activists.

          -

          download pride 2014 movie online
          -watch pride movie free streaming
          -pride movie download full hd
          -pride 2007 movie download torrent
          -download pride movie based on true story
          -pride movie subtitles download english
          -download pride and prejudice movie 2005
          -watch pride and prejudice movie online free
          -pride and prejudice movie download mp4
          -download pride and prejudice movie with english subtitles
          -download pride and prejudice and zombies movie
          -watch pride and prejudice and zombies movie online free
          -pride and prejudice and zombies movie download in hindi
          -download pride and prejudice and zombies full movie
          -download pride the lioness movie 2004
          -watch pride the lioness movie online free
          -pride the lioness movie download free
          -download pride the lioness full movie
          -download pride the lioness movie in hindi
          -watch pride the lioness movie in english
          -download death at a funeral 2007 movie pride version
          -watch death at a funeral 2007 movie online free
          -death at a funeral 2007 movie download in hindi
          -death at a funeral 2007 movie download mp4
          -death at a funeral 2007 movie subtitles download english
          -download death at a funeral 2010 movie remake of pride version
          -watch death at a funeral 2010 movie online free
          -death at a funeral 2010 movie download in hindi
          -death at a funeral 2010 movie download mp4
          -death at a funeral 2010 movie subtitles download english
          -download sense and sensibility 1995 movie by the writer of pride and prejudice
          -watch sense and sensibility 1995 movie online free
          -sense and sensibility 1995 movie download in hindi
          -sense and sensibility 1995 movie download mp4
          -sense and sensibility 1995 movie subtitles download english
          -download sense and sensibility 2008 tv series by the writer of pride and prejudice
          -watch sense and sensibility 2008 tv series online free
          -sense and sensibility 2008 tv series download in hindi
          -sense and sensibility 2008 tv series download mp4
          -sense and sensibility 2008 tv series subtitles download english

          -

          However, they face opposition from both sides. The mining community does not want to associate with them, and some members of the gay community think that the miners have been homophobic in the past. Mark decides to take their donations directly to a mining village in Wales called Onllwyn. There, he meets Dai Donovan, a spokesperson for the miners, who accepts their help and invites them to visit the village.

          -

          The film follows the journey of LGSM as they travel to Onllwyn and bond with the villagers, especially the women's support group led by Hefina Headon and Maureen Barry. They face various challenges and conflicts along the way, such as homophobic attacks, media backlash, family issues, and personal struggles. They also experience joy and friendship as they share their stories, cultures, and music. The film culminates with a triumphant scene at the Gay Pride Parade in 1985, where hundreds of miners show up to support LGSM.

          -

          The Cast

          -

          Movie Pride features an ensemble cast of talented actors who bring their characters to life with humor and emotion. Some of the main cast members are:

          -
            -
          • Ben Schnetzer as Mark Ashton, the charismatic leader of LGSM
          • -
          • George MacKay as Joe Cooper, a closeted student who joins LGSM
          • -
          • Imelda Staunton as Hefina Headon, a feisty Welsh woman who welcomes LGSM
          • -
          • Paddy Considine as Dai Donovan, a kind-hearted miner who befriends LGSM
          • -
          • Dominic West as Jonathan Blake, an older gay man who owns a bookshop with his partner Gethin
          • -
          • Bill Nighy as Cliff Williams, a quiet miner who supports LGSM
          • -
          • Faye Marsay as Steph Chambers, a lesbian activist who joins LGSM
          • -
          • Andrew Scott as Gethin Roberts, Jonathan's partner who has a strained relationship with his Welsh family

            The Reception

            Movie Pride received critical acclaim and audience praise for its portrayal of a remarkable true story. It has a 92% approval rating on Rotten Tomatoes, based on 165 reviews, with an average rating of 7.6/10. The critics consensus reads: "Earnest without being didactic and uplifting without stooping to sentimentality, Pride is a joyous crowd-pleaser that genuinely works."

            -

            The film also won several awards, including the BAFTA Award for Outstanding Debut by a British Writer, Director or Producer and the British Independent Film Award for Best British Independent Film, and it was nominated for the Golden Globe Award for Best Motion Picture – Musical or Comedy. It was also selected as one of the best films of 2014 by various publications, such as The Guardian, The Telegraph, and Empire.

            -

            Why You Should Watch Movie Pride

            -

            Movie Pride is not just a historical film, but also a relevant and inspiring one. It shows how people from different backgrounds and identities can come together and fight for a common cause. It also celebrates the diversity and solidarity of the LGBTQ+ community and its allies. Here are some of the reasons why you should watch Movie Pride:

            -

            It's Based on a True Story

            -

            One of the most amazing things about Movie Pride is that it's based on real events and people. The film is faithful to the facts and spirit of what happened in 1984-1985, when LGSM raised over £20,000 for the miners and their families. The film also features archival footage and photos of the actual LGSM members and miners, as well as interviews with some of them at the end credits. Watching Movie Pride will make you appreciate the courage and generosity of these people who made history.

            -

            It's Funny and Heartwarming

            -

            Movie Pride is not a dry or depressing film, but a hilarious and uplifting one. It's full of witty dialogue, colorful characters, and hilarious situations that will make you laugh out loud. It also has moments of tenderness and emotion that will touch your heart. The film balances humor and drama perfectly, creating a realistic and engaging tone. You'll find yourself rooting for the characters and their relationships, as they overcome their challenges and grow as individuals and as a group.

            -

            It's a Celebration of Diversity and Solidarity

            -

            Movie Pride is a film that celebrates the diversity and solidarity of the LGBTQ+ community and its allies. It shows how people from different backgrounds, cultures, genders, sexualities, and political views can find common ground and support each other. It also shows how the LGBTQ+ community has contributed to social justice movements throughout history, and how it continues to do so today. Movie Pride is a film that will make you proud of who you are and who you stand with.

            -

            Where to Find Movie Pride Online

            -

            If you're convinced that Movie Pride is a movie worth watching, you might be wondering where to find it online. There are several options available, depending on your preferences and budget. Here are some of the most popular ones:

            -

            Legal Streaming Services

            -

            If you want to watch Movie Pride legally and support the filmmakers, you can use one of the many streaming services that offer it. Some of the most popular ones are:

            | Service | Price | Availability |
            | --- | --- | --- |
            | Netflix | $8.99-$17.99 per month | US, UK, Canada, Australia, etc. |
            | Amazon Prime Video | $8.99 per month or $119 per year | US, UK, Canada, Australia, etc. |
            | Hulu | $5.99-$11.99 per month | US only |
            | YouTube | $3.99-$4.99 per rental or purchase | US, UK, Canada, Australia, etc. |
            | iTunes | $3.99-$4.99 per rental or purchase | US, UK, Canada, Australia, etc. |
            | Google Play | $3.99-$4.99 per rental or purchase | US, UK, Canada, Australia, etc. |
            | Vudu | $3.99-$4.99 per rental or purchase | US only |
            | FandangoNow | $3.99-$4.99 per rental or purchase | US only |

            As you can see, there are plenty of legal streaming services that offer Movie Pride for a reasonable price. You can choose the one that suits your needs and preferences, and enjoy the movie in high quality and without interruptions.

            -

            Free Movie Download Sites

            -

            If you don't want to pay anything to watch Movie Pride, you can also use one of the many free movie download sites that offer it. However, you need to be careful with these sites, as they may contain viruses, malware, pop-ups, or other unwanted content. Some of the most popular free movie download sites are:

            -
              -
            • 123Movies
            • -
            • Putlocker
            • -
            • FMovies
            • -
            • YesMovies
            • -
            • GoMovies
            • -
            -

            These sites allow you to watch or download Movie Pride for free, without registration or sign-up. However, they may not have the best quality, subtitles, or audio options. They may also be blocked or banned in some countries due to legal issues.

            -

            Torrent Sites

            -

            Another option to download Movie Pride for free is to use torrent sites. Torrent sites are platforms that allow users to share files with each other using a peer-to-peer network. Some of the most popular torrent sites are:

            -
              -
            • The Pirate Bay
            • -
            • RARBG
            • -
            • 1337x
            • -
            • LimeTorrents
            • -
            • Torrentz2
            • -
            -

            These sites allow you to download Movie Pride in various formats, sizes, and qualities, depending on the availability and popularity of the file. However, they also have some risks and drawbacks. You need to have a torrent client software to download the files, such as BitTorrent or uTorrent. You also need to be careful with the files you download, as they may contain viruses, malware, or fake content. Moreover, torrenting is illegal in some countries and may result in fines or legal action.

            -

            How to Download Movie Pride Safely and Legally

            -

            As you can see, there are several ways to download Movie Pride for free online, but not all of them are safe and legal. If you want to avoid any problems or issues, you need to follow some tips and precautions. Here are some of them:

            -

            Choose a Reliable and Trustworthy Source

            -

            The first tip is to choose a reliable and trustworthy source for downloading Movie Pride. This means that you should avoid shady or suspicious sites that may harm your device or compromise your privacy. You should also check the reviews and ratings of the site or the file before downloading it, to make sure that it's legitimate and safe.

            -
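            One extra sanity check you can run after any download is to confirm that the file really is a video container and not an executable with a misleading name. The Python sketch below only looks at the first few bytes for a handful of common signatures (MP4, MKV/WebM, AVI), and the filename is a placeholder, so treat it as a rough first filter rather than proof that the file is safe.

```python
from pathlib import Path

# Placeholder filename: point this at the file you actually downloaded.
VIDEO_PATH = Path("pride-2014.mp4")

def looks_like_video(path: Path) -> bool:
    """Check the first bytes against a few common video container signatures."""
    with path.open("rb") as f:
        header = f.read(16)
    if header[4:8] == b"ftyp":                             # MP4 / MOV family
        return True
    if header[:4] == b"\x1aE\xdf\xa3":                     # Matroska (MKV) and WebM
        return True
    if header[:4] == b"RIFF" and header[8:12] == b"AVI ":  # AVI
        return True
    return False

if looks_like_video(VIDEO_PATH):
    print("The header looks like a known video container.")
else:
    print("WARNING: this does not look like a video file - do not open it.")
```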

            Use a VPN and Antivirus Software

            -

            The second tip is to use a VPN and antivirus software when downloading Movie Pride. A VPN is a service that encrypts your internet connection and hides your IP address, making you anonymous and secure online. This way, you can bypass any geo-restrictions or censorship that may prevent you from accessing certain sites or content. You can also avoid any tracking or monitoring by your ISP or other third parties.

            -
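            A quick way to confirm that the VPN is actually doing its job is to compare your public IP address before and after connecting. The Python sketch below asks a public IP-echo service (api.ipify.org is assumed to be reachable) which address your traffic appears to come from; if the answer does not change after you switch the VPN on, your real IP is still exposed.

```python
import urllib.request

def public_ip() -> str:
    """Ask a public IP-echo service which address our traffic appears to come from."""
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as response:
        return response.read().decode("utf-8").strip()

# Run this once before connecting to the VPN and once after:
# if the two addresses are the same, the VPN is not hiding your real IP.
print("Current public IP:", public_ip())
```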

            An antivirus software is a program that protects your device from viruses, malware, spyware, or other malicious content that may infect your device when downloading Movie Pride. It scans and removes any potential threats from your device and alerts you of any suspicious activity.

            -

            Respect the Copyright Laws and Regulations

            -

            The third tip is to respect the copyright laws and regulations when downloading Movie Pride. This means that you should not download or distribute Movie Pride without the permission or consent of the filmmakers or the rights holders. You should also not use Movie Pride for any commercial or illegal purposes.


            Downloading Movie Pride for free may seem tempting, but it may also have some legal consequences. You may be violating the intellectual property rights of the filmmakers or the rights holders, who have invested time, money, and effort into creating Movie Pride. You may also be depriving them of their fair share of revenue and recognition.

            Conclusion

            Movie Pride is a wonderful film that tells the story of a group of lesbian and gay activists who supported the miners' strike in 1984. It's a film that celebrates the diversity and solidarity of the LGBTQ+ community and its allies. It's also a film that will make you laugh, cry, and cheer.


            If you want to watch this film without paying anything, you can download it for free online using one of the methods we've discussed in this article. However, you need to be careful and follow some tips to avoid any legal or technical issues. You should also respect the filmmakers and the rights holders, and support them by watching Movie Pride legally and ethically.


            We hope that this article has helped you learn more about Movie Pride and how to download it for free. If you have any questions or comments, feel free to leave them below. Thank you for reading and enjoy the movie!

            FAQs

            Here are some of the most frequently asked questions about Movie Pride and how to download it for free:

            1. Is Movie Pride a true story?

            Yes, Movie Pride is based on a true story that happened in the UK in 1984-1985. It depicts the unlikely alliance between a group of lesbian and gay activists and a small mining village in Wales. The film is faithful to the facts and spirit of what happened, and features archival footage and photos of the actual people involved.

            2. Who are the main actors in Movie Pride?

            Movie Pride features an ensemble cast of talented actors who bring their characters to life with humor and emotion. Some of the main actors are Ben Schnetzer, George MacKay, Imelda Staunton, Paddy Considine, Dominic West, Bill Nighy, Faye Marsay, and Andrew Scott.

            3. What are the best legal streaming services to watch Movie Pride?

            There are many legal streaming services that offer Movie Pride for a reasonable price. Some of the most popular ones are Netflix, Amazon Prime Video, Hulu, YouTube, iTunes, Google Play, Vudu, and Fandango Now. You can choose the one that suits your needs and preferences, and enjoy the movie in high quality and without interruptions.

            4. What are the best free movie download sites to watch Movie Pride?

            There are also many free movie download sites that offer Movie Pride for free. However, you need to be careful with these sites, as they may contain viruses, malware, pop-ups, or other unwanted content. Some of the most popular free movie download sites are 123Movies, Putlocker, FMovies, YesMovies, and GoMovies.

            5. What are the best torrent sites to download Movie Pride?

            Another option to download Movie Pride for free is to use torrent sites. Torrent sites are platforms that allow users to share files with each other using a peer-to-peer network. However, they also have some risks and drawbacks. You need a torrent client, such as BitTorrent or uTorrent, to download the files, and the files may contain viruses, malware, or fake content. Moreover, torrenting copyrighted material is illegal in some countries and may result in fines or legal action. Some of the most popular torrent sites are The Pirate Bay, RARBG, 1337x, LimeTorrents, and Torrentz2.

            \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Ccvision Car Special V18 Download Torrent Download Torrent 35 les meilleures astuces et conseils pour utiliser le logiciel de prsentation 3D des vhicules.md b/spaces/contluForse/HuggingGPT/assets/Ccvision Car Special V18 Download Torrent Download Torrent 35 les meilleures astuces et conseils pour utiliser le logiciel de prsentation 3D des vhicules.md deleted file mode 100644 index f3b9e330f26b77bd3c06d32670c60e6f1e4bf2ba..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Ccvision Car Special V18 Download Torrent Download Torrent 35 les meilleures astuces et conseils pour utiliser le logiciel de prsentation 3D des vhicules.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Ccvision Car Special V18 Download Torrent Download Torrent 35


            DOWNLOAD https://ssurll.com/2uzvV0



            - - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/contluForse/HuggingGPT/assets/Copyto Manager 5.1.1.3 Serial.md b/spaces/contluForse/HuggingGPT/assets/Copyto Manager 5.1.1.3 Serial.md deleted file mode 100644 index d54dc86ff35fa425b4db4eaf18a19c05bcbbeb8e..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Copyto Manager 5.1.1.3 Serial.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Copyto Manager 5.1.1.3 Serial


            Download File ✫✫✫ https://ssurll.com/2uzy5E



            -
            - 3cee63e6c2
            -
            -
            -

            diff --git a/spaces/contluForse/HuggingGPT/assets/Download book Ericksonian psychotherapy v.1 The Art and Science of Hypnosis and Healing in DJVU EPUB FB2 AZW.md b/spaces/contluForse/HuggingGPT/assets/Download book Ericksonian psychotherapy v.1 The Art and Science of Hypnosis and Healing in DJVU EPUB FB2 AZW.md deleted file mode 100644 index ef6ccf5f7990bb750de932a7f80a4f9295b099da..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download book Ericksonian psychotherapy v.1 The Art and Science of Hypnosis and Healing in DJVU EPUB FB2 AZW.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Download book Ericksonian psychotherapy v.1 in DJVU, EPUB, FB2, AZW


            Download File ⚙⚙⚙ https://ssurll.com/2uzwsb



            -
            - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/squeeze_excite.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/squeeze_excite.py deleted file mode 100644 index e5da29ef166de27705cc160f729b6e3b45061c59..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/squeeze_excite.py +++ /dev/null @@ -1,74 +0,0 @@ -""" Squeeze-and-Excitation Channel Attention - -An SE implementation originally based on PyTorch SE-Net impl. -Has since evolved with additional functionality / configuration. - -Paper: `Squeeze-and-Excitation Networks` - https://arxiv.org/abs/1709.01507 - -Also included is Effective Squeeze-Excitation (ESE). -Paper: `CenterMask : Real-Time Anchor-Free Instance Segmentation` - https://arxiv.org/abs/1911.06667 - -Hacked together by / Copyright 2021 Ross Wightman -""" -from torch import nn as nn - -from .create_act import create_act_layer -from .helpers import make_divisible - - -class SEModule(nn.Module): - """ SE Module as defined in original SE-Nets with a few additions - Additions include: - * divisor can be specified to keep channels % div == 0 (default: 8) - * reduction channels can be specified directly by arg (if rd_channels is set) - * reduction channels can be specified by float rd_ratio (default: 1/16) - * global max pooling can be added to the squeeze aggregation - * customizable activation, normalization, and gate layer - """ - def __init__( - self, channels, rd_ratio=1. / 16, rd_channels=None, rd_divisor=8, add_maxpool=False, - act_layer=nn.ReLU, norm_layer=None, gate_layer='sigmoid'): - super(SEModule, self).__init__() - self.add_maxpool = add_maxpool - if not rd_channels: - rd_channels = make_divisible(channels * rd_ratio, rd_divisor, round_limit=0.) - self.fc1 = nn.Conv2d(channels, rd_channels, kernel_size=1, bias=True) - self.bn = norm_layer(rd_channels) if norm_layer else nn.Identity() - self.act = create_act_layer(act_layer, inplace=True) - self.fc2 = nn.Conv2d(rd_channels, channels, kernel_size=1, bias=True) - self.gate = create_act_layer(gate_layer) - - def forward(self, x): - x_se = x.mean((2, 3), keepdim=True) - if self.add_maxpool: - # experimental codepath, may remove or change - x_se = 0.5 * x_se + 0.5 * x.amax((2, 3), keepdim=True) - x_se = self.fc1(x_se) - x_se = self.act(self.bn(x_se)) - x_se = self.fc2(x_se) - return x * self.gate(x_se) - - -SqueezeExcite = SEModule # alias - - -class EffectiveSEModule(nn.Module): - """ 'Effective Squeeze-Excitation - From `CenterMask : Real-Time Anchor-Free Instance Segmentation` - https://arxiv.org/abs/1911.06667 - """ - def __init__(self, channels, add_maxpool=False, gate_layer='hard_sigmoid', **_): - super(EffectiveSEModule, self).__init__() - self.add_maxpool = add_maxpool - self.fc = nn.Conv2d(channels, channels, kernel_size=1, padding=0) - self.gate = create_act_layer(gate_layer) - - def forward(self, x): - x_se = x.mean((2, 3), keepdim=True) - if self.add_maxpool: - # experimental codepath, may remove or change - x_se = 0.5 * x_se + 0.5 * x.amax((2, 3), keepdim=True) - x_se = self.fc(x_se) - return x * self.gate(x_se) - - -EffectiveSqueezeExcite = EffectiveSEModule # alias diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations_me.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations_me.py deleted file mode 100644 index 
e91df5a50fdbe40bc386e2541a4fda743ad95e9a..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations_me.py +++ /dev/null @@ -1,174 +0,0 @@ -""" Activations (memory-efficient w/ custom autograd) - -A collection of activations fn and modules with a common interface so that they can -easily be swapped. All have an `inplace` arg even if not used. - -These activations are not compatible with jit scripting or ONNX export of the model, please use either -the JIT or basic versions of the activations. - -Copyright 2020 Ross Wightman -""" - -import torch -from torch import nn as nn -from torch.nn import functional as F - - -__all__ = ['swish_me', 'SwishMe', 'mish_me', 'MishMe', - 'hard_sigmoid_me', 'HardSigmoidMe', 'hard_swish_me', 'HardSwishMe'] - - -@torch.jit.script -def swish_jit_fwd(x): - return x.mul(torch.sigmoid(x)) - - -@torch.jit.script -def swish_jit_bwd(x, grad_output): - x_sigmoid = torch.sigmoid(x) - return grad_output * (x_sigmoid * (1 + x * (1 - x_sigmoid))) - - -class SwishJitAutoFn(torch.autograd.Function): - """ torch.jit.script optimised Swish w/ memory-efficient checkpoint - Inspired by conversation btw Jeremy Howard & Adam Pazske - https://twitter.com/jeremyphoward/status/1188251041835315200 - - Swish - Described originally as SiLU (https://arxiv.org/abs/1702.03118v3) - and also as Swish (https://arxiv.org/abs/1710.05941). - - TODO Rename to SiLU with addition to PyTorch - """ - - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return swish_jit_fwd(x) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - return swish_jit_bwd(x, grad_output) - - -def swish_me(x, inplace=False): - return SwishJitAutoFn.apply(x) - - -class SwishMe(nn.Module): - def __init__(self, inplace: bool = False): - super(SwishMe, self).__init__() - - def forward(self, x): - return SwishJitAutoFn.apply(x) - - -@torch.jit.script -def mish_jit_fwd(x): - return x.mul(torch.tanh(F.softplus(x))) - - -@torch.jit.script -def mish_jit_bwd(x, grad_output): - x_sigmoid = torch.sigmoid(x) - x_tanh_sp = F.softplus(x).tanh() - return grad_output.mul(x_tanh_sp + x * x_sigmoid * (1 - x_tanh_sp * x_tanh_sp)) - - -class MishJitAutoFn(torch.autograd.Function): - """ Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681 - A memory efficient, jit scripted variant of Mish - """ - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return mish_jit_fwd(x) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - return mish_jit_bwd(x, grad_output) - - -def mish_me(x, inplace=False): - return MishJitAutoFn.apply(x) - - -class MishMe(nn.Module): - def __init__(self, inplace: bool = False): - super(MishMe, self).__init__() - - def forward(self, x): - return MishJitAutoFn.apply(x) - - -@torch.jit.script -def hard_sigmoid_jit_fwd(x, inplace: bool = False): - return (x + 3).clamp(min=0, max=6).div(6.) - - -@torch.jit.script -def hard_sigmoid_jit_bwd(x, grad_output): - m = torch.ones_like(x) * ((x >= -3.) & (x <= 3.)) / 6. 
- return grad_output * m - - -class HardSigmoidJitAutoFn(torch.autograd.Function): - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return hard_sigmoid_jit_fwd(x) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - return hard_sigmoid_jit_bwd(x, grad_output) - - -def hard_sigmoid_me(x, inplace: bool = False): - return HardSigmoidJitAutoFn.apply(x) - - -class HardSigmoidMe(nn.Module): - def __init__(self, inplace: bool = False): - super(HardSigmoidMe, self).__init__() - - def forward(self, x): - return HardSigmoidJitAutoFn.apply(x) - - -@torch.jit.script -def hard_swish_jit_fwd(x): - return x * (x + 3).clamp(min=0, max=6).div(6.) - - -@torch.jit.script -def hard_swish_jit_bwd(x, grad_output): - m = torch.ones_like(x) * (x >= 3.) - m = torch.where((x >= -3.) & (x <= 3.), x / 3. + .5, m) - return grad_output * m - - -class HardSwishJitAutoFn(torch.autograd.Function): - """A memory efficient, jit-scripted HardSwish activation""" - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return hard_swish_jit_fwd(x) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - return hard_swish_jit_bwd(x, grad_output) - - -def hard_swish_me(x, inplace=False): - return HardSwishJitAutoFn.apply(x) - - -class HardSwishMe(nn.Module): - def __init__(self, inplace: bool = False): - super(HardSwishMe, self).__init__() - - def forward(self, x): - return HardSwishJitAutoFn.apply(x) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/padding.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/padding.py deleted file mode 100644 index e4ac6b28a1789bd551c613a7d3e7b622433ac7ec..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/padding.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .registry import PADDING_LAYERS - -PADDING_LAYERS.register_module('zero', module=nn.ZeroPad2d) -PADDING_LAYERS.register_module('reflect', module=nn.ReflectionPad2d) -PADDING_LAYERS.register_module('replicate', module=nn.ReplicationPad2d) - - -def build_padding_layer(cfg, *args, **kwargs): - """Build padding layer. - - Args: - cfg (None or dict): The padding layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate a padding layer. - - Returns: - nn.Module: Created padding layer. - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - - cfg_ = cfg.copy() - padding_type = cfg_.pop('type') - if padding_type not in PADDING_LAYERS: - raise KeyError(f'Unrecognized padding type {padding_type}.') - else: - padding_layer = PADDING_LAYERS.get(padding_type) - - layer = padding_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/file_client.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/file_client.py deleted file mode 100644 index 950f0c1aeab14b8e308a7455ccd64a95b5d98add..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/file_client.py +++ /dev/null @@ -1,1148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import inspect -import os -import os.path as osp -import re -import tempfile -import warnings -from abc import ABCMeta, abstractmethod -from contextlib import contextmanager -from pathlib import Path -from typing import Iterable, Iterator, Optional, Tuple, Union -from urllib.request import urlopen - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.utils.misc import has_method -from annotator.uniformer.mmcv.utils.path import is_filepath - - -class BaseStorageBackend(metaclass=ABCMeta): - """Abstract class of storage backends. - - All backends need to implement two apis: ``get()`` and ``get_text()``. - ``get()`` reads the file as a byte stream and ``get_text()`` reads the file - as texts. - """ - - # a flag to indicate whether the backend can create a symlink for a file - _allow_symlink = False - - @property - def name(self): - return self.__class__.__name__ - - @property - def allow_symlink(self): - return self._allow_symlink - - @abstractmethod - def get(self, filepath): - pass - - @abstractmethod - def get_text(self, filepath): - pass - - -class CephBackend(BaseStorageBackend): - """Ceph storage backend (for internal use). - - Args: - path_mapping (dict|None): path mapping dict from local path to Petrel - path. When ``path_mapping={'src': 'dst'}``, ``src`` in ``filepath`` - will be replaced by ``dst``. Default: None. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - """ - - def __init__(self, path_mapping=None): - try: - import ceph - except ImportError: - raise ImportError('Please install ceph to enable CephBackend.') - - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - self._client = ceph.S3Client() - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def get(self, filepath): - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class PetrelBackend(BaseStorageBackend): - """Petrel storage backend (for internal use). - - PetrelBackend supports reading and writing data to multiple clusters. - If the file path contains the cluster name, PetrelBackend will read data - from specified cluster or write data to it. Otherwise, PetrelBackend will - access the default cluster. - - Args: - path_mapping (dict, optional): Path mapping dict from local path to - Petrel path. When ``path_mapping={'src': 'dst'}``, ``src`` in - ``filepath`` will be replaced by ``dst``. Default: None. - enable_mc (bool, optional): Whether to enable memcached support. - Default: True. 
- - Examples: - >>> filepath1 = 's3://path/of/file' - >>> filepath2 = 'cluster-name:s3://path/of/file' - >>> client = PetrelBackend() - >>> client.get(filepath1) # get data from default cluster - >>> client.get(filepath2) # get data from 'cluster-name' cluster - """ - - def __init__(self, - path_mapping: Optional[dict] = None, - enable_mc: bool = True): - try: - from petrel_client import client - except ImportError: - raise ImportError('Please install petrel_client to enable ' - 'PetrelBackend.') - - self._client = client.Client(enable_mc=enable_mc) - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def _map_path(self, filepath: Union[str, Path]) -> str: - """Map ``filepath`` to a string path whose prefix will be replaced by - :attr:`self.path_mapping`. - - Args: - filepath (str): Path to be mapped. - """ - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - return filepath - - def _format_path(self, filepath: str) -> str: - """Convert a ``filepath`` to standard format of petrel oss. - - If the ``filepath`` is concatenated by ``os.path.join``, in a Windows - environment, the ``filepath`` will be the format of - 's3://bucket_name\\image.jpg'. By invoking :meth:`_format_path`, the - above ``filepath`` will be converted to 's3://bucket_name/image.jpg'. - - Args: - filepath (str): Path to be formatted. - """ - return re.sub(r'\\+', '/', filepath) - - def get(self, filepath: Union[str, Path]) -> memoryview: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - memoryview: A memory view of expected bytes object to avoid - copying. The memoryview object can be converted to bytes by - ``value_buf.tobytes()``. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return str(self.get(filepath), encoding=encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Save data to a given ``filepath``. - - Args: - obj (bytes): Data to be saved. - filepath (str or Path): Path to write data. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.put(filepath, obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Save data to a given ``filepath``. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to encode the ``obj``. - Default: 'utf-8'. - """ - self.put(bytes(obj, encoding=encoding), filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. 
- """ - if not has_method(self._client, 'delete'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `delete` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.delete(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - if not (has_method(self._client, 'contains') - and has_method(self._client, 'isdir')): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` and `isdir` methods, please use a higher' - 'version or dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) or self._client.isdir(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - if not has_method(self._client, 'isdir'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `isdir` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - if not has_method(self._client, 'contains'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` method, please use a higher version or ' - 'dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result after concatenation. - """ - filepath = self._format_path(self._map_path(filepath)) - if filepath.endswith('/'): - filepath = filepath[:-1] - formatted_paths = [filepath] - for path in filepaths: - formatted_paths.append(self._format_path(self._map_path(path))) - return '/'.join(formatted_paths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download a file from ``filepath`` and return a temporary path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str | Path): Download a file from ``filepath``. - - Examples: - >>> client = PetrelBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('s3://path/of/your/file') as path: - ... # do something here - - Yields: - Iterable[str]: Only yield one temporary path. 
- """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - assert self.isfile(filepath) - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - Petrel has no concept of directories but it simulates the directory - hierarchy in the filesystem through public prefixes. In addition, - if the returned path ends with '/', it means the path is a public - prefix which is a logical directory. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - In addition, the returned path of directory will not contains the - suffix '/' which is consistent with other backends. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - if not has_method(self._client, 'list'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `list` method, please use a higher version or dev' - ' branch instead.')) - - dir_path = self._map_path(dir_path) - dir_path = self._format_path(dir_path) - if list_dir and suffix is not None: - raise TypeError( - '`list_dir` should be False when `suffix` is not None') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - # Petrel's simulated directory hierarchy assumes that directory paths - # should end with `/` - if not dir_path.endswith('/'): - dir_path += '/' - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for path in self._client.list(dir_path): - # the `self.isdir` is not used here to determine whether path - # is a directory, because `self.isdir` relies on - # `self._client.list` - if path.endswith('/'): # a directory path - next_dir_path = self.join_path(dir_path, path) - if list_dir: - # get the relative path and exclude the last - # character '/' - rel_dir = next_dir_path[len(root):-1] - yield rel_dir - if recursive: - yield from _list_dir_or_file(next_dir_path, list_dir, - list_file, suffix, - recursive) - else: # a file path - absolute_path = self.join_path(dir_path, path) - rel_path = absolute_path[len(root):] - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class MemcachedBackend(BaseStorageBackend): - """Memcached storage backend. - - Attributes: - server_list_cfg (str): Config file for memcached server list. - client_cfg (str): Config file for memcached client. - sys_path (str | None): Additional path to be appended to `sys.path`. - Default: None. 
- """ - - def __init__(self, server_list_cfg, client_cfg, sys_path=None): - if sys_path is not None: - import sys - sys.path.append(sys_path) - try: - import mc - except ImportError: - raise ImportError( - 'Please install memcached to enable MemcachedBackend.') - - self.server_list_cfg = server_list_cfg - self.client_cfg = client_cfg - self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, - self.client_cfg) - # mc.pyvector servers as a point which points to a memory cache - self._mc_buffer = mc.pyvector() - - def get(self, filepath): - filepath = str(filepath) - import mc - self._client.Get(filepath, self._mc_buffer) - value_buf = mc.ConvertBuffer(self._mc_buffer) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class LmdbBackend(BaseStorageBackend): - """Lmdb storage backend. - - Args: - db_path (str): Lmdb database path. - readonly (bool, optional): Lmdb environment parameter. If True, - disallow any write operations. Default: True. - lock (bool, optional): Lmdb environment parameter. If False, when - concurrent access occurs, do not lock the database. Default: False. - readahead (bool, optional): Lmdb environment parameter. If False, - disable the OS filesystem readahead mechanism, which may improve - random read performance when a database is larger than RAM. - Default: False. - - Attributes: - db_path (str): Lmdb database path. - """ - - def __init__(self, - db_path, - readonly=True, - lock=False, - readahead=False, - **kwargs): - try: - import lmdb - except ImportError: - raise ImportError('Please install lmdb to enable LmdbBackend.') - - self.db_path = str(db_path) - self._client = lmdb.open( - self.db_path, - readonly=readonly, - lock=lock, - readahead=readahead, - **kwargs) - - def get(self, filepath): - """Get values according to the filepath. - - Args: - filepath (str | obj:`Path`): Here, filepath is the lmdb key. - """ - filepath = str(filepath) - with self._client.begin(write=False) as txn: - value_buf = txn.get(filepath.encode('ascii')) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class HardDiskBackend(BaseStorageBackend): - """Raw hard disks storage backend.""" - - _allow_symlink = True - - def get(self, filepath: Union[str, Path]) -> bytes: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes: Expected bytes object. - """ - with open(filepath, 'rb') as f: - value_buf = f.read() - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - with open(filepath, 'r', encoding=encoding) as f: - value_buf = f.read() - return value_buf - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` will create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. 
- """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'wb') as f: - f.write(obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` will create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'w', encoding=encoding) as f: - f.write(obj) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - os.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return osp.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return osp.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return osp.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return osp.join(filepath, *filepaths) - - @contextmanager - def get_local_path( - self, filepath: Union[str, Path]) -> Iterable[Union[str, Path]]: - """Only for unified API and do nothing.""" - yield filepath - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. 
- """ - if list_dir and suffix is not None: - raise TypeError('`suffix` should be None when `list_dir` is True') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - elif osp.isdir(entry.path): - if list_dir: - rel_dir = osp.relpath(entry.path, root) - yield rel_dir - if recursive: - yield from _list_dir_or_file(entry.path, list_dir, - list_file, suffix, - recursive) - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class HTTPBackend(BaseStorageBackend): - """HTTP and HTTPS storage bachend.""" - - def get(self, filepath): - value_buf = urlopen(filepath).read() - return value_buf - - def get_text(self, filepath, encoding='utf-8'): - value_buf = urlopen(filepath).read() - return value_buf.decode(encoding) - - @contextmanager - def get_local_path(self, filepath: str) -> Iterable[str]: - """Download a file from ``filepath``. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str): Download a file from ``filepath``. - - Examples: - >>> client = HTTPBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('http://path/of/your/file') as path: - ... # do something here - """ - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - -class FileClient: - """A general file client to access files in different backends. - - The client loads a file or text in a specified backend from its path - and returns it as a binary or text file. There are two ways to choose a - backend, the name of backend and the prefix of path. Although both of them - can be used to choose a storage backend, ``backend`` has a higher priority - that is if they are all set, the storage backend will be chosen by the - backend argument. If they are all `None`, the disk backend will be chosen. - Note that It can also register other backend accessor with a given name, - prefixes, and backend class. In addition, We use the singleton pattern to - avoid repeated object creation. If the arguments are the same, the same - object will be returned. - - Args: - backend (str, optional): The storage backend type. Options are "disk", - "ceph", "memcached", "lmdb", "http" and "petrel". Default: None. - prefix (str, optional): The prefix of the registered storage backend. - Options are "s3", "http", "https". Default: None. - - Examples: - >>> # only set backend - >>> file_client = FileClient(backend='petrel') - >>> # only set prefix - >>> file_client = FileClient(prefix='s3') - >>> # set both backend and prefix but use backend to choose client - >>> file_client = FileClient(backend='petrel', prefix='s3') - >>> # if the arguments are the same, the same object is returned - >>> file_client1 = FileClient(backend='petrel') - >>> file_client1 is file_client - True - - Attributes: - client (:obj:`BaseStorageBackend`): The backend object. 
- """ - - _backends = { - 'disk': HardDiskBackend, - 'ceph': CephBackend, - 'memcached': MemcachedBackend, - 'lmdb': LmdbBackend, - 'petrel': PetrelBackend, - 'http': HTTPBackend, - } - # This collection is used to record the overridden backends, and when a - # backend appears in the collection, the singleton pattern is disabled for - # that backend, because if the singleton pattern is used, then the object - # returned will be the backend before overwriting - _overridden_backends = set() - _prefix_to_backends = { - 's3': PetrelBackend, - 'http': HTTPBackend, - 'https': HTTPBackend, - } - _overridden_prefixes = set() - - _instances = {} - - def __new__(cls, backend=None, prefix=None, **kwargs): - if backend is None and prefix is None: - backend = 'disk' - if backend is not None and backend not in cls._backends: - raise ValueError( - f'Backend {backend} is not supported. Currently supported ones' - f' are {list(cls._backends.keys())}') - if prefix is not None and prefix not in cls._prefix_to_backends: - raise ValueError( - f'prefix {prefix} is not supported. Currently supported ones ' - f'are {list(cls._prefix_to_backends.keys())}') - - # concatenate the arguments to a unique key for determining whether - # objects with the same arguments were created - arg_key = f'{backend}:{prefix}' - for key, value in kwargs.items(): - arg_key += f':{key}:{value}' - - # if a backend was overridden, it will create a new object - if (arg_key in cls._instances - and backend not in cls._overridden_backends - and prefix not in cls._overridden_prefixes): - _instance = cls._instances[arg_key] - else: - # create a new object and put it to _instance - _instance = super().__new__(cls) - if backend is not None: - _instance.client = cls._backends[backend](**kwargs) - else: - _instance.client = cls._prefix_to_backends[prefix](**kwargs) - - cls._instances[arg_key] = _instance - - return _instance - - @property - def name(self): - return self.client.name - - @property - def allow_symlink(self): - return self.client.allow_symlink - - @staticmethod - def parse_uri_prefix(uri: Union[str, Path]) -> Optional[str]: - """Parse the prefix of a uri. - - Args: - uri (str | Path): Uri to be parsed that contains the file prefix. - - Examples: - >>> FileClient.parse_uri_prefix('s3://path/of/your/file') - 's3' - - Returns: - str | None: Return the prefix of uri if the uri contains '://' - else ``None``. - """ - assert is_filepath(uri) - uri = str(uri) - if '://' not in uri: - return None - else: - prefix, _ = uri.split('://') - # In the case of PetrelBackend, the prefix may contains the cluster - # name like clusterName:s3 - if ':' in prefix: - _, prefix = prefix.split(':') - return prefix - - @classmethod - def infer_client(cls, - file_client_args: Optional[dict] = None, - uri: Optional[Union[str, Path]] = None) -> 'FileClient': - """Infer a suitable file client based on the URI and arguments. - - Args: - file_client_args (dict, optional): Arguments to instantiate a - FileClient. Default: None. - uri (str | Path, optional): Uri to be parsed that contains the file - prefix. Default: None. - - Examples: - >>> uri = 's3://path/of/your/file' - >>> file_client = FileClient.infer_client(uri=uri) - >>> file_client_args = {'backend': 'petrel'} - >>> file_client = FileClient.infer_client(file_client_args) - - Returns: - FileClient: Instantiated FileClient object. 
- """ - assert file_client_args is not None or uri is not None - if file_client_args is None: - file_prefix = cls.parse_uri_prefix(uri) # type: ignore - return cls(prefix=file_prefix) - else: - return cls(**file_client_args) - - @classmethod - def _register_backend(cls, name, backend, force=False, prefixes=None): - if not isinstance(name, str): - raise TypeError('the backend name should be a string, ' - f'but got {type(name)}') - if not inspect.isclass(backend): - raise TypeError( - f'backend should be a class but got {type(backend)}') - if not issubclass(backend, BaseStorageBackend): - raise TypeError( - f'backend {backend} is not a subclass of BaseStorageBackend') - if not force and name in cls._backends: - raise KeyError( - f'{name} is already registered as a storage backend, ' - 'add "force=True" if you want to override it') - - if name in cls._backends and force: - cls._overridden_backends.add(name) - cls._backends[name] = backend - - if prefixes is not None: - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if prefix not in cls._prefix_to_backends: - cls._prefix_to_backends[prefix] = backend - elif (prefix in cls._prefix_to_backends) and force: - cls._overridden_prefixes.add(prefix) - cls._prefix_to_backends[prefix] = backend - else: - raise KeyError( - f'{prefix} is already registered as a storage backend,' - ' add "force=True" if you want to override it') - - @classmethod - def register_backend(cls, name, backend=None, force=False, prefixes=None): - """Register a backend to FileClient. - - This method can be used as a normal class method or a decorator. - - .. code-block:: python - - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - FileClient.register_backend('new', NewBackend) - - or - - .. code-block:: python - - @FileClient.register_backend('new') - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - Args: - name (str): The name of the registered backend. - backend (class, optional): The backend class to be registered, - which must be a subclass of :class:`BaseStorageBackend`. - When this method is used as a decorator, backend is None. - Defaults to None. - force (bool, optional): Whether to override the backend if the name - has already been registered. Defaults to False. - prefixes (str or list[str] or tuple[str], optional): The prefixes - of the registered storage backend. Default: None. - `New in version 1.3.15.` - """ - if backend is not None: - cls._register_backend( - name, backend, force=force, prefixes=prefixes) - return - - def _register(backend_cls): - cls._register_backend( - name, backend_cls, force=force, prefixes=prefixes) - return backend_cls - - return _register - - def get(self, filepath: Union[str, Path]) -> Union[bytes, memoryview]: - """Read data from a given ``filepath`` with 'rb' mode. - - Note: - There are two types of return values for ``get``, one is ``bytes`` - and the other is ``memoryview``. The advantage of using memoryview - is that you can avoid copying, and if you want to convert it to - ``bytes``, you can use ``.tobytes()``. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes | memoryview: Expected bytes object or a memory view of the - bytes object. 
- """ - return self.client.get(filepath) - - def get_text(self, filepath: Union[str, Path], encoding='utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return self.client.get_text(filepath, encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` should create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - self.client.put(obj, filepath) - - def put_text(self, obj: str, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` should create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str, optional): The encoding format used to open the - `filepath`. Default: 'utf-8'. - """ - self.client.put_text(obj, filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str, Path): Path to be removed. - """ - self.client.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return self.client.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return self.client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return self.client.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return self.client.join_path(filepath, *filepaths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download data from ``filepath`` and write the data to local path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Note: - If the ``filepath`` is a local path, just return itself. - - .. warning:: - ``get_local_path`` is an experimental interface that may change in - the future. - - Args: - filepath (str or Path): Path to be read data. - - Examples: - >>> file_client = FileClient(prefix='s3') - >>> with file_client.get_local_path('s3://bucket/abc.jpg') as path: - ... 
# do something here - - Yields: - Iterable[str]: Only yield one path. - """ - with self.client.get_local_path(str(filepath)) as local_path: - yield local_path - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - yield from self.client.list_dir_or_file(dir_path, list_dir, list_file, - suffix, recursive) diff --git a/spaces/csuhan/opendet2/opendet2/modeling/backbone/__init__.py b/spaces/csuhan/opendet2/opendet2/modeling/backbone/__init__.py deleted file mode 100644 index f9cf81ceec9d7609b3229aa0a3cc57352f34005a..0000000000000000000000000000000000000000 --- a/spaces/csuhan/opendet2/opendet2/modeling/backbone/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .swin_transformer import SwinTransformer - -__all__ = [k for k in globals().keys() if not k.startswith("_")] \ No newline at end of file diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/swinir_model_arch.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/swinir_model_arch.py deleted file mode 100644 index 461fb354ce5a7614d9ffbfcad4d32a2811134ae4..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/modules/swinir_model_arch.py +++ /dev/null @@ -1,867 +0,0 @@ -# ----------------------------------------------------------------------------------- -# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257 -# Originally Written by Ze Liu, Modified by Jingyun Liang. 
-# ----------------------------------------------------------------------------------- - -import math -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - r""" Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - # qkv = self.qkv(x) - flops += N * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += self.num_heads * N * (self.dim // self.num_heads) * N - # x = (attn @ v) - flops += self.num_heads * N * N * (self.dim // self.num_heads) - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - - -class SwinTransformerBlock(nn.Module): - r""" Swin Transformer Block. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resulotion. 
- num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - if min(self.input_resolution) <= self.window_size: - # if window size is larger than input resolution, we don't partition windows - self.shift_size = 0 - self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - if self.shift_size > 0: - attn_mask = self.calculate_mask(self.input_resolution) - else: - attn_mask = None - - self.register_buffer("attn_mask", attn_mask) - - def calculate_mask(self, x_size): - # calculate attention mask for SW-MSA - H, W = x_size - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x, x_size): - H, W = x_size - B, L, C = x.shape - # assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA (to be compatible for testing on images whose shapes are the multiple of 
window size - if self.input_resolution == x_size: - attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C - else: - attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device)) - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - # W-MSA/SW-MSA - nW = H * W / self.window_size / self.window_size - flops += nW * self.attn.flops(self.window_size * self.window_size) - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - - -class PatchMerging(nn.Module): - r""" Patch Merging Layer. - - Args: - input_resolution (tuple[int]): Resolution of input feature. - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.input_resolution = input_resolution - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x): - """ - x: B, H*W, C - """ - H, W = self.input_resolution - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even." - - x = x.view(B, H, W, C) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - def extra_repr(self) -> str: - return f"input_resolution={self.input_resolution}, dim={self.dim}" - - def flops(self): - H, W = self.input_resolution - flops = H * W * self.dim - flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim - return flops - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. 
Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, dim, input_resolution, depth, num_heads, window_size, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False): - - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock(dim=dim, input_resolution=input_resolution, - num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop, attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, x_size): - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, x_size) - else: - x = blk(x, x_size) - if self.downsample is not None: - x = self.downsample(x) - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" - - def flops(self): - flops = 0 - for blk in self.blocks: - flops += blk.flops() - if self.downsample is not None: - flops += self.downsample.flops() - return flops - - -class RSTB(nn.Module): - """Residual Swin Transformer Block (RSTB). - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - img_size: Input image size. - patch_size: Patch size. - resi_connection: The convolutional block before residual connection. 
- """ - - def __init__(self, dim, input_resolution, depth, num_heads, window_size, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False, - img_size=224, patch_size=4, resi_connection='1conv'): - super(RSTB, self).__init__() - - self.dim = dim - self.input_resolution = input_resolution - - self.residual_group = BasicLayer(dim=dim, - input_resolution=input_resolution, - depth=depth, - num_heads=num_heads, - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop, attn_drop=attn_drop, - drop_path=drop_path, - norm_layer=norm_layer, - downsample=downsample, - use_checkpoint=use_checkpoint) - - if resi_connection == '1conv': - self.conv = nn.Conv2d(dim, dim, 3, 1, 1) - elif resi_connection == '3conv': - # to save parameters and memory - self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(dim // 4, dim // 4, 1, 1, 0), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(dim // 4, dim, 3, 1, 1)) - - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, - norm_layer=None) - - self.patch_unembed = PatchUnEmbed( - img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, - norm_layer=None) - - def forward(self, x, x_size): - return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x - - def flops(self): - flops = 0 - flops += self.residual_group.flops() - H, W = self.input_resolution - flops += H * W * self.dim * self.dim * 9 - flops += self.patch_embed.flops() - flops += self.patch_unembed.flops() - - return flops - - -class PatchEmbed(nn.Module): - r""" Image to Patch Embedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - x = x.flatten(2).transpose(1, 2) # B Ph*Pw C - if self.norm is not None: - x = self.norm(x) - return x - - def flops(self): - flops = 0 - H, W = self.img_size - if self.norm is not None: - flops += H * W * self.embed_dim - return flops - - -class PatchUnEmbed(nn.Module): - r""" Image to Patch Unembedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. 
Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - def forward(self, x, x_size): - B, HW, C = x.shape - x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1]) # B Ph*Pw C - return x - - def flops(self): - flops = 0 - return flops - - -class Upsample(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - - -class UpsampleOneStep(nn.Sequential): - """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle) - Used in lightweight SR to save parameters. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - - """ - - def __init__(self, scale, num_feat, num_out_ch, input_resolution=None): - self.num_feat = num_feat - self.input_resolution = input_resolution - m = [] - m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1)) - m.append(nn.PixelShuffle(scale)) - super(UpsampleOneStep, self).__init__(*m) - - def flops(self): - H, W = self.input_resolution - flops = H * W * self.num_feat * 3 * 9 - return flops - - -class SwinIR(nn.Module): - r""" SwinIR - A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer. - - Args: - img_size (int | tuple(int)): Input image size. Default 64 - patch_size (int | tuple(int)): Patch size. Default: 1 - in_chans (int): Number of input image channels. Default: 3 - embed_dim (int): Patch embedding dimension. Default: 96 - depths (tuple(int)): Depth of each Swin Transformer layer. - num_heads (tuple(int)): Number of attention heads in different layers. - window_size (int): Window size. Default: 7 - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4 - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None - drop_rate (float): Dropout rate. Default: 0 - attn_drop_rate (float): Attention dropout rate. Default: 0 - drop_path_rate (float): Stochastic depth rate. Default: 0.1 - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False - patch_norm (bool): If True, add normalization after patch embedding. Default: True - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False - upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction - img_range: Image range. 1. 
or 255. - upsampler: The reconstruction reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None - resi_connection: The convolutional block before residual connection. '1conv'/'3conv' - """ - - def __init__(self, img_size=64, patch_size=1, in_chans=3, - embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6], - window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1, - norm_layer=nn.LayerNorm, ape=False, patch_norm=True, - use_checkpoint=False, upscale=2, img_range=1., upsampler='', resi_connection='1conv', - **kwargs): - super(SwinIR, self).__init__() - num_in_ch = in_chans - num_out_ch = in_chans - num_feat = 64 - self.img_range = img_range - if in_chans == 3: - rgb_mean = (0.4488, 0.4371, 0.4040) - self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1) - else: - self.mean = torch.zeros(1, 1, 1, 1) - self.upscale = upscale - self.upsampler = upsampler - self.window_size = window_size - - ##################################################################################################### - ################################### 1, shallow feature extraction ################################### - self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1) - - ##################################################################################################### - ################################### 2, deep feature extraction ###################################### - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.num_features = embed_dim - self.mlp_ratio = mlp_ratio - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - num_patches = self.patch_embed.num_patches - patches_resolution = self.patch_embed.patches_resolution - self.patches_resolution = patches_resolution - - # merge non-overlapping patches into image - self.patch_unembed = PatchUnEmbed( - img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim)) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build Residual Swin Transformer blocks (RSTB) - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = RSTB(dim=embed_dim, - input_resolution=(patches_resolution[0], - patches_resolution[1]), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=self.mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results - norm_layer=norm_layer, - downsample=None, - use_checkpoint=use_checkpoint, - img_size=img_size, - patch_size=patch_size, - resi_connection=resi_connection - - ) - self.layers.append(layer) - self.norm = norm_layer(self.num_features) - - # build the last conv layer in deep feature extraction - if resi_connection == '1conv': - self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1) - elif resi_connection == '3conv': - # to save parameters and memory - 
self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1)) - - ##################################################################################################### - ################################ 3, high quality image reconstruction ################################ - if self.upsampler == 'pixelshuffle': - # for classical SR - self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.upsample = Upsample(upscale, num_feat) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - elif self.upsampler == 'pixelshuffledirect': - # for lightweight SR (to save parameters) - self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch, - (patches_resolution[0], patches_resolution[1])) - elif self.upsampler == 'nearest+conv': - # for real-world SR (less artifacts) - self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - if self.upscale == 4: - self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - else: - # for image denoising and JPEG compression artifact reduction - self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1) - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'absolute_pos_embed'} - - @torch.jit.ignore - def no_weight_decay_keywords(self): - return {'relative_position_bias_table'} - - def check_image_size(self, x): - _, _, h, w = x.size() - mod_pad_h = (self.window_size - h % self.window_size) % self.window_size - mod_pad_w = (self.window_size - w % self.window_size) % self.window_size - x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect') - return x - - def forward_features(self, x): - x_size = (x.shape[2], x.shape[3]) - x = self.patch_embed(x) - if self.ape: - x = x + self.absolute_pos_embed - x = self.pos_drop(x) - - for layer in self.layers: - x = layer(x, x_size) - - x = self.norm(x) # B L C - x = self.patch_unembed(x, x_size) - - return x - - def forward(self, x): - H, W = x.shape[2:] - x = self.check_image_size(x) - - self.mean = self.mean.type_as(x) - x = (x - self.mean) * self.img_range - - if self.upsampler == 'pixelshuffle': - # for classical SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.conv_before_upsample(x) - x = self.conv_last(self.upsample(x)) - elif self.upsampler == 'pixelshuffledirect': - # for lightweight SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.upsample(x) - elif self.upsampler == 'nearest+conv': - # for real-world SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.conv_before_upsample(x) - x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) - if 
self.upscale == 4: - x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) - x = self.conv_last(self.lrelu(self.conv_hr(x))) - else: - # for image denoising and JPEG compression artifact reduction - x_first = self.conv_first(x) - res = self.conv_after_body(self.forward_features(x_first)) + x_first - x = x + self.conv_last(res) - - x = x / self.img_range + self.mean - - return x[:, :, :H*self.upscale, :W*self.upscale] - - def flops(self): - flops = 0 - H, W = self.patches_resolution - flops += H * W * 3 * self.embed_dim * 9 - flops += self.patch_embed.flops() - for i, layer in enumerate(self.layers): - flops += layer.flops() - flops += H * W * 3 * self.embed_dim * self.embed_dim - flops += self.upsample.flops() - return flops - - -if __name__ == '__main__': - upscale = 4 - window_size = 8 - height = (1024 // upscale // window_size + 1) * window_size - width = (720 // upscale // window_size + 1) * window_size - model = SwinIR(upscale=2, img_size=(height, width), - window_size=window_size, img_range=1., depths=[6, 6, 6, 6], - embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect') - print(model) - print(height, width, model.flops() / 1e9) - - x = torch.randn((1, 3, height, width)) - x = model(x) - print(x.shape) diff --git a/spaces/danielsapit/JPEG_Artifacts_Removal/README.md b/spaces/danielsapit/JPEG_Artifacts_Removal/README.md deleted file mode 100644 index 841b234edbf493db91e8de98a3128e148da24e9d..0000000000000000000000000000000000000000 --- a/spaces/danielsapit/JPEG_Artifacts_Removal/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: JPEG Artifacts Removal -emoji: 🖼️ -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/danterivers/music-generation-samples/audiocraft/quantization/vq.py b/spaces/danterivers/music-generation-samples/audiocraft/quantization/vq.py deleted file mode 100644 index f67c3a0cd30d4b8993a36c587f00dc8a451d926f..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/quantization/vq.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp - -import torch - -from .base import BaseQuantizer, QuantizedResult -from .core_vq import ResidualVectorQuantization - - -class ResidualVectorQuantizer(BaseQuantizer): - """Residual Vector Quantizer. - - Args: - dimension (int): Dimension of the codebooks. - n_q (int): Number of residual vector quantizers used. - q_dropout (bool): Random quantizer drop out at train time. - bins (int): Codebook size. - decay (float): Decay for exponential moving average over the codebooks. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - orthogonal_reg_weight (float): Orthogonal regularization weights. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. 
- orthogonal_reg_max_codes (optional int): Maximum number of codes to consider. - for orthogonal regulariation. - """ - def __init__( - self, - dimension: int = 256, - n_q: int = 8, - q_dropout: bool = False, - bins: int = 1024, - decay: float = 0.99, - kmeans_init: bool = True, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - self.max_n_q = n_q - self.n_q = n_q - self.q_dropout = q_dropout - self.dimension = dimension - self.bins = bins - self.decay = decay - self.kmeans_init = kmeans_init - self.kmeans_iters = kmeans_iters - self.threshold_ema_dead_code = threshold_ema_dead_code - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - self.vq = ResidualVectorQuantization( - dim=self.dimension, - codebook_size=self.bins, - num_quantizers=self.n_q, - decay=self.decay, - kmeans_init=self.kmeans_init, - kmeans_iters=self.kmeans_iters, - threshold_ema_dead_code=self.threshold_ema_dead_code, - orthogonal_reg_weight=self.orthogonal_reg_weight, - orthogonal_reg_active_codes_only=self.orthogonal_reg_active_codes_only, - orthogonal_reg_max_codes=self.orthogonal_reg_max_codes, - channels_last=False - ) - - def forward(self, x: torch.Tensor, frame_rate: int): - n_q = self.n_q - if self.training and self.q_dropout: - n_q = int(torch.randint(1, self.n_q + 1, (1,)).item()) - bw_per_q = math.log2(self.bins) * frame_rate / 1000 - quantized, codes, commit_loss = self.vq(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - bw = torch.tensor(n_q * bw_per_q).to(x) - return QuantizedResult(quantized, codes, bw, penalty=torch.mean(commit_loss)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified frame rate at the given bandwidth. - The RVQ encode method sets the appropriate number of quantizer to use - and returns indices for each quantizer. - """ - n_q = self.n_q - codes = self.vq.encode(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - return codes - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - """ - # codes is [B, K, T], with T frames, K nb of codebooks, vq.decode expects [K, B, T]. 
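- # Example (shapes only, assuming the default n_q=8 codebooks with a batch of B=2 and T=100 frames):
- # `codes` arrives here as [2, 8, 100], and the transpose below produces the [8, 2, 100] layout
- # that `self.vq.decode` expects.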
- codes = codes.transpose(0, 1) - quantized = self.vq.decode(codes) - return quantized - - @property - def total_codebooks(self): - return self.max_n_q - - @property - def num_codebooks(self): - return self.n_q - - def set_num_codebooks(self, n: int): - assert n > 0 and n <= self.max_n_q - self.n_q = n diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/__init__.py deleted file mode 100644 index 38202d89c0d86a9be7a39d4b189781c43427983e..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# ruff: noqa -from .schema import * -from .api import * - -from ...expr import datum, expr # type: ignore[no-redef] - -from .display import VegaLite, renderers - -from .data import ( - MaxRowsError, - pipe, - curry, - limit_rows, - sample, - to_json, - to_csv, - to_values, - default_data_transformer, - data_transformers, -) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_pdf_ps.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_pdf_ps.py deleted file mode 100644 index 7a4c2e6a396646a4b819cadbb956b0dfc5827563..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_pdf_ps.py +++ /dev/null @@ -1,145 +0,0 @@ -""" -Common functionality between the PDF and PS backends. -""" - -from io import BytesIO -import functools - -from fontTools import subset - -import matplotlib as mpl -from .. import font_manager, ft2font -from .._afm import AFM -from ..backend_bases import RendererBase - - -@functools.lru_cache(50) -def _cached_get_afm_from_fname(fname): - with open(fname, "rb") as fh: - return AFM(fh) - - -def get_glyphs_subset(fontfile, characters): - """ - Subset a TTF font - - Reads the named fontfile and restricts the font to the characters. - Returns a serialization of the subset font as file-like object. - - Parameters - ---------- - symbol : str - Path to the font file - characters : str - Continuous set of characters to include in subset - """ - - options = subset.Options(glyph_names=True, recommended_glyphs=True) - - # Prevent subsetting extra tables. - options.drop_tables += [ - 'FFTM', # FontForge Timestamp. - 'PfEd', # FontForge personal table. - 'BDF', # X11 BDF header. - 'meta', # Metadata stores design/supported languages (meaningless for subsets). - ] - - # if fontfile is a ttc, specify font number - if fontfile.endswith(".ttc"): - options.font_number = 0 - - with subset.load_font(fontfile, options) as font: - subsetter = subset.Subsetter(options=options) - subsetter.populate(text=characters) - subsetter.subset(font) - fh = BytesIO() - font.save(fh, reorderTables=False) - return fh - - -class CharacterTracker: - """ - Helper for font subsetting by the pdf and ps backends. - - Maintains a mapping of font paths to the set of character codepoints that - are being used from that font. 
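-     For example, tracking the string "Hi" typeset with a single TrueType font adds the codepoints
-     {72, 105} (i.e. ord("H") and ord("i")) to the set stored under that font's file path in ``used``.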
- """ - - def __init__(self): - self.used = {} - - def track(self, font, s): - """Record that string *s* is being typeset using font *font*.""" - char_to_font = font._get_fontmap(s) - for _c, _f in char_to_font.items(): - self.used.setdefault(_f.fname, set()).add(ord(_c)) - - def track_glyph(self, font, glyph): - """Record that codepoint *glyph* is being typeset using font *font*.""" - self.used.setdefault(font.fname, set()).add(glyph) - - -class RendererPDFPSBase(RendererBase): - # The following attributes must be defined by the subclasses: - # - _afm_font_dir - # - _use_afm_rc_name - - def __init__(self, width, height): - super().__init__() - self.width = width - self.height = height - - def flipy(self): - # docstring inherited - return False # y increases from bottom to top. - - def option_scale_image(self): - # docstring inherited - return True # PDF and PS support arbitrary image scaling. - - def option_image_nocomposite(self): - # docstring inherited - # Decide whether to composite image based on rcParam value. - return not mpl.rcParams["image.composite_image"] - - def get_canvas_width_height(self): - # docstring inherited - return self.width * 72.0, self.height * 72.0 - - def get_text_width_height_descent(self, s, prop, ismath): - # docstring inherited - if ismath == "TeX": - return super().get_text_width_height_descent(s, prop, ismath) - elif ismath: - parse = self._text2path.mathtext_parser.parse(s, 72, prop) - return parse.width, parse.height, parse.depth - elif mpl.rcParams[self._use_afm_rc_name]: - font = self._get_font_afm(prop) - l, b, w, h, d = font.get_str_bbox_and_descent(s) - scale = prop.get_size_in_points() / 1000 - w *= scale - h *= scale - d *= scale - return w, h, d - else: - font = self._get_font_ttf(prop) - font.set_text(s, 0.0, flags=ft2font.LOAD_NO_HINTING) - w, h = font.get_width_height() - d = font.get_descent() - scale = 1 / 64 - w *= scale - h *= scale - d *= scale - return w, h, d - - def _get_font_afm(self, prop): - fname = font_manager.findfont( - prop, fontext="afm", directory=self._afm_font_dir) - return _cached_get_afm_from_fname(fname) - - def _get_font_ttf(self, prop): - fnames = font_manager.fontManager._find_fonts_by_props(prop) - font = font_manager.get_font(fnames) - font.clear() - font.set_size(prop.get_size_in_points(), 72) - return font diff --git a/spaces/deaf1296/finetuned_diffusion/utils.py b/spaces/deaf1296/finetuned_diffusion/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/deaf1296/finetuned_diffusion/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/models/autoencoder_kl.py b/spaces/declare-lab/tango/diffusers/src/diffusers/models/autoencoder_kl.py deleted file mode 100644 index 8f65c2357cac4c86380451bc794856e0b5f31550..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/models/autoencoder_kl.py +++ /dev/null @@ -1,328 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput, apply_forward_hook -from .modeling_utils import ModelMixin -from .vae import Decoder, DecoderOutput, DiagonalGaussianDistribution, Encoder - - -@dataclass -class AutoencoderKLOutput(BaseOutput): - """ - Output of AutoencoderKL encoding method. - - Args: - latent_dist (`DiagonalGaussianDistribution`): - Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`. - `DiagonalGaussianDistribution` allows for sampling latents from the distribution. - """ - - latent_dist: "DiagonalGaussianDistribution" - - -class AutoencoderKL(ModelMixin, ConfigMixin): - r"""Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma - and Max Welling. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the model (such as downloading or saving, etc.) - - Parameters: - in_channels (int, *optional*, defaults to 3): Number of channels in the input image. - out_channels (int, *optional*, defaults to 3): Number of channels in the output. - down_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("DownEncoderBlock2D",)`): Tuple of downsample block types. - up_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("UpDecoderBlock2D",)`): Tuple of upsample block types. - block_out_channels (`Tuple[int]`, *optional*, defaults to : - obj:`(64,)`): Tuple of block output channels. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - latent_channels (`int`, *optional*, defaults to 4): Number of channels in the latent space. - sample_size (`int`, *optional*, defaults to `32`): TODO - scaling_factor (`float`, *optional*, defaults to 0.18215): - The component-wise standard deviation of the trained latent space computed using the first batch of the - training set. This is used to scale the latent space to have unit variance when training the diffusion - model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the - diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1 - / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image - Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper. 
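-             As a worked example with the default value: an encoded latent z is scaled to 0.18215 * z before it is
-             fed to the diffusion model, and multiplied back by 1 / 0.18215 (about 5.49) before decoding.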
- """ - - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - down_block_types: Tuple[str] = ("DownEncoderBlock2D",), - up_block_types: Tuple[str] = ("UpDecoderBlock2D",), - block_out_channels: Tuple[int] = (64,), - layers_per_block: int = 1, - act_fn: str = "silu", - latent_channels: int = 4, - norm_num_groups: int = 32, - sample_size: int = 32, - scaling_factor: float = 0.18215, - ): - super().__init__() - - # pass init params to Encoder - self.encoder = Encoder( - in_channels=in_channels, - out_channels=latent_channels, - down_block_types=down_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - double_z=True, - ) - - # pass init params to Decoder - self.decoder = Decoder( - in_channels=latent_channels, - out_channels=out_channels, - up_block_types=up_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - norm_num_groups=norm_num_groups, - act_fn=act_fn, - ) - - self.quant_conv = nn.Conv2d(2 * latent_channels, 2 * latent_channels, 1) - self.post_quant_conv = nn.Conv2d(latent_channels, latent_channels, 1) - - self.use_slicing = False - self.use_tiling = False - - # only relevant if vae tiling is enabled - self.tile_sample_min_size = self.config.sample_size - sample_size = ( - self.config.sample_size[0] - if isinstance(self.config.sample_size, (list, tuple)) - else self.config.sample_size - ) - self.tile_latent_min_size = int(sample_size / (2 ** (len(self.block_out_channels) - 1))) - self.tile_overlap_factor = 0.25 - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (Encoder, Decoder)): - module.gradient_checkpointing = value - - def enable_tiling(self, use_tiling: bool = True): - r""" - Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to - compute decoding and encoding in several steps. This is useful to save a large amount of memory and to allow - the processing of larger images. - """ - self.use_tiling = use_tiling - - def disable_tiling(self): - r""" - Disable tiled VAE decoding. If `enable_vae_tiling` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.enable_tiling(False) - - def enable_slicing(self): - r""" - Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to - compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. - """ - self.use_slicing = True - - def disable_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_slicing` was previously invoked, this method will go back to computing - decoding in one step. 
- """ - self.use_slicing = False - - @apply_forward_hook - def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOutput: - if self.use_tiling and (x.shape[-1] > self.tile_sample_min_size or x.shape[-2] > self.tile_sample_min_size): - return self.tiled_encode(x, return_dict=return_dict) - - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - - if not return_dict: - return (posterior,) - - return AutoencoderKLOutput(latent_dist=posterior) - - def _decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]: - if self.use_tiling and (z.shape[-1] > self.tile_latent_min_size or z.shape[-2] > self.tile_latent_min_size): - return self.tiled_decode(z, return_dict=return_dict) - - z = self.post_quant_conv(z) - dec = self.decoder(z) - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) - - @apply_forward_hook - def decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]: - if self.use_slicing and z.shape[0] > 1: - decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)] - decoded = torch.cat(decoded_slices) - else: - decoded = self._decode(z).sample - - if not return_dict: - return (decoded,) - - return DecoderOutput(sample=decoded) - - def blend_v(self, a, b, blend_extent): - for y in range(min(a.shape[2], b.shape[2], blend_extent)): - b[:, :, y, :] = a[:, :, -blend_extent + y, :] * (1 - y / blend_extent) + b[:, :, y, :] * (y / blend_extent) - return b - - def blend_h(self, a, b, blend_extent): - for x in range(min(a.shape[3], b.shape[3], blend_extent)): - b[:, :, :, x] = a[:, :, :, -blend_extent + x] * (1 - x / blend_extent) + b[:, :, :, x] * (x / blend_extent) - return b - - def tiled_encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOutput: - r"""Encode a batch of images using a tiled encoder. - - Args: - When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several - steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is: - different from non-tiled encoding due to each tile using a different encoder. To avoid tiling artifacts, the - tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the - look of the output, but they should be much less noticeable. - x (`torch.FloatTensor`): Input batch of images. return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`AutoencoderKLOutput`] instead of a plain tuple. - """ - overlap_size = int(self.tile_sample_min_size * (1 - self.tile_overlap_factor)) - blend_extent = int(self.tile_latent_min_size * self.tile_overlap_factor) - row_limit = self.tile_latent_min_size - blend_extent - - # Split the image into 512x512 tiles and encode them separately. 
- rows = [] - for i in range(0, x.shape[2], overlap_size): - row = [] - for j in range(0, x.shape[3], overlap_size): - tile = x[:, :, i : i + self.tile_sample_min_size, j : j + self.tile_sample_min_size] - tile = self.encoder(tile) - tile = self.quant_conv(tile) - row.append(tile) - rows.append(row) - result_rows = [] - for i, row in enumerate(rows): - result_row = [] - for j, tile in enumerate(row): - # blend the above tile and the left tile - # to the current tile and add the current tile to the result row - if i > 0: - tile = self.blend_v(rows[i - 1][j], tile, blend_extent) - if j > 0: - tile = self.blend_h(row[j - 1], tile, blend_extent) - result_row.append(tile[:, :, :row_limit, :row_limit]) - result_rows.append(torch.cat(result_row, dim=3)) - - moments = torch.cat(result_rows, dim=2) - posterior = DiagonalGaussianDistribution(moments) - - if not return_dict: - return (posterior,) - - return AutoencoderKLOutput(latent_dist=posterior) - - def tiled_decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]: - r"""Decode a batch of images using a tiled decoder. - - Args: - When this option is enabled, the VAE will split the input tensor into tiles to compute decoding in several - steps. This is useful to keep memory use constant regardless of image size. The end result of tiled decoding is: - different from non-tiled decoding due to each tile using a different decoder. To avoid tiling artifacts, the - tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the - look of the output, but they should be much less noticeable. - z (`torch.FloatTensor`): Input batch of latent vectors. return_dict (`bool`, *optional*, defaults to - `True`): - Whether or not to return a [`DecoderOutput`] instead of a plain tuple. - """ - overlap_size = int(self.tile_latent_min_size * (1 - self.tile_overlap_factor)) - blend_extent = int(self.tile_sample_min_size * self.tile_overlap_factor) - row_limit = self.tile_sample_min_size - blend_extent - - # Split z into overlapping 64x64 tiles and decode them separately. - # The tiles have an overlap to avoid seams between tiles. - rows = [] - for i in range(0, z.shape[2], overlap_size): - row = [] - for j in range(0, z.shape[3], overlap_size): - tile = z[:, :, i : i + self.tile_latent_min_size, j : j + self.tile_latent_min_size] - tile = self.post_quant_conv(tile) - decoded = self.decoder(tile) - row.append(decoded) - rows.append(row) - result_rows = [] - for i, row in enumerate(rows): - result_row = [] - for j, tile in enumerate(row): - # blend the above tile and the left tile - # to the current tile and add the current tile to the result row - if i > 0: - tile = self.blend_v(rows[i - 1][j], tile, blend_extent) - if j > 0: - tile = self.blend_h(row[j - 1], tile, blend_extent) - result_row.append(tile[:, :, :row_limit, :row_limit]) - result_rows.append(torch.cat(result_row, dim=3)) - - dec = torch.cat(result_rows, dim=2) - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) - - def forward( - self, - sample: torch.FloatTensor, - sample_posterior: bool = False, - return_dict: bool = True, - generator: Optional[torch.Generator] = None, - ) -> Union[DecoderOutput, torch.FloatTensor]: - r""" - Args: - sample (`torch.FloatTensor`): Input sample. - sample_posterior (`bool`, *optional*, defaults to `False`): - Whether to sample from the posterior. 
- return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`DecoderOutput`] instead of a plain tuple. - """ - x = sample - posterior = self.encode(x).latent_dist - if sample_posterior: - z = posterior.sample(generator=generator) - else: - z = posterior.mode() - dec = self.decode(z).sample - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py deleted file mode 100644 index fdca625fd99d99ff31d0e0a65a30f52e4b002ce0..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py +++ /dev/null @@ -1,501 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import torch -import torch.utils.checkpoint -from transformers import CLIPImageProcessor, CLIPTextModelWithProjection, CLIPTokenizer - -from ...models import AutoencoderKL, Transformer2DModel, UNet2DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import is_accelerate_available, logging, randn_tensor -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from .modeling_text_unet import UNetFlatConditionModel - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class VersatileDiffusionTextToImagePipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Parameters: - vqvae ([`VQModel`]): - Vector-quantized (VQ) Model to encode and decode images to and from latent representations. - bert ([`LDMBertModel`]): - Text-encoder model based on [BERT](https://huggingface.co/docs/transformers/model_doc/bert) architecture. - tokenizer (`transformers.BertTokenizer`): - Tokenizer of class - [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
- """ - tokenizer: CLIPTokenizer - image_feature_extractor: CLIPImageProcessor - text_encoder: CLIPTextModelWithProjection - image_unet: UNet2DConditionModel - text_unet: UNetFlatConditionModel - vae: AutoencoderKL - scheduler: KarrasDiffusionSchedulers - - _optional_components = ["text_unet"] - - def __init__( - self, - tokenizer: CLIPTokenizer, - text_encoder: CLIPTextModelWithProjection, - image_unet: UNet2DConditionModel, - text_unet: UNetFlatConditionModel, - vae: AutoencoderKL, - scheduler: KarrasDiffusionSchedulers, - ): - super().__init__() - self.register_modules( - tokenizer=tokenizer, - text_encoder=text_encoder, - image_unet=image_unet, - text_unet=text_unet, - vae=vae, - scheduler=scheduler, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - - if self.text_unet is not None: - self._swap_unet_attention_blocks() - - def _swap_unet_attention_blocks(self): - """ - Swap the `Transformer2DModel` blocks between the image and text UNets - """ - for name, module in self.image_unet.named_modules(): - if isinstance(module, Transformer2DModel): - parent_name, index = name.rsplit(".", 1) - index = int(index) - self.image_unet.get_submodule(parent_name)[index], self.text_unet.get_submodule(parent_name)[index] = ( - self.text_unet.get_submodule(parent_name)[index], - self.image_unet.get_submodule(parent_name)[index], - ) - - def remove_unused_weights(self): - self.register_modules(text_unet=None) - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.image_unet, self.text_unet, self.text_encoder, self.vae]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device with unet->image_unet - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if not hasattr(self.image_unet, "_hf_hook"): - return self.device - for module in self.image_unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. 
Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - - def normalize_embeddings(encoder_output): - embeds = self.text_encoder.text_projection(encoder_output.last_hidden_state) - embeds_pooled = encoder_output.text_embeds - embeds = embeds / torch.norm(embeds_pooled.unsqueeze(1), dim=-1, keepdim=True) - return embeds - - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = normalize_embeddings(prompt_embeds) - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = prompt_embeds.shape - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = normalize_embeddings(negative_prompt_embeds) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. 
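- # (one pass over the unconditional embeddings and one over the text embeddings; e.g. with two prompts and
- # num_images_per_prompt=1, `negative_prompt_embeds` and `prompt_embeds` are each [2, seq_len, dim], and the
- # concatenation below yields a single [4, seq_len, dim] batch)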
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs - def check_inputs( - self, - prompt, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." 
- ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.image_unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.image_unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. 
- latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Examples: - - ```py - >>> from diffusers import VersatileDiffusionTextToImagePipeline - >>> import torch - - >>> pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( - ... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 - ... ) - >>> pipe.remove_unused_weights() - >>> pipe = pipe.to("cuda") - - >>> generator = torch.Generator(device="cuda").manual_seed(0) - >>> image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0] - >>> image.save("./astronaut.png") - ``` - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.image_unet.config.sample_size * self.vae_scale_factor - width = width or self.image_unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.image_unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. 
Denoising loop - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.image_unet(latent_model_input, t, encoder_hidden_states=prompt_embeds).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 9. Post-processing - image = self.decode_latents(latents) - - # 10. Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py deleted file mode 100644 index 8db8ec7810068aab4517fe2066e3fab10a52f6f7..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py +++ /dev/null @@ -1,99 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import unittest - -from diffusers import FlaxDPMSolverMultistepScheduler, FlaxStableDiffusionPipeline -from diffusers.utils import is_flax_available, slow -from diffusers.utils.testing_utils import require_flax - - -if is_flax_available(): - import jax - import jax.numpy as jnp - from flax.jax_utils import replicate - from flax.training.common_utils import shard - - -@slow -@require_flax -class FlaxStableDiffusion2PipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - - def test_stable_diffusion_flax(self): - sd_pipe, params = FlaxStableDiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-2", - revision="bf16", - dtype=jnp.bfloat16, - ) - - prompt = "A painting of a squirrel eating a burger" - num_samples = jax.device_count() - prompt = num_samples * [prompt] - prompt_ids = sd_pipe.prepare_inputs(prompt) - - params = replicate(params) - prompt_ids = shard(prompt_ids) - - prng_seed = jax.random.PRNGKey(0) - prng_seed = jax.random.split(prng_seed, jax.device_count()) - - images = sd_pipe(prompt_ids, params, prng_seed, num_inference_steps=25, jit=True)[0] - assert images.shape == (jax.device_count(), 1, 768, 768, 3) - - images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) - image_slice = images[0, 253:256, 253:256, -1] - - output_slice = jnp.asarray(jax.device_get(image_slice.flatten())) - expected_slice = jnp.array([0.4238, 0.4414, 0.4395, 0.4453, 0.4629, 0.4590, 0.4531, 0.45508, 0.4512]) - print(f"output_slice: {output_slice}") - assert jnp.abs(output_slice - expected_slice).max() < 1e-2 - - def test_stable_diffusion_dpm_flax(self): - model_id = "stabilityai/stable-diffusion-2" - scheduler, scheduler_params = FlaxDPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - sd_pipe, params = FlaxStableDiffusionPipeline.from_pretrained( - model_id, - scheduler=scheduler, - revision="bf16", - dtype=jnp.bfloat16, - ) - params["scheduler"] = scheduler_params - - prompt = "A painting of a squirrel eating a burger" - num_samples = jax.device_count() - prompt = num_samples * [prompt] - prompt_ids = sd_pipe.prepare_inputs(prompt) - - params = replicate(params) - prompt_ids = shard(prompt_ids) - - prng_seed = jax.random.PRNGKey(0) - prng_seed = jax.random.split(prng_seed, jax.device_count()) - - images = sd_pipe(prompt_ids, params, prng_seed, num_inference_steps=25, jit=True)[0] - assert images.shape == (jax.device_count(), 1, 768, 768, 3) - - images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) - image_slice = images[0, 253:256, 253:256, -1] - - output_slice = jnp.asarray(jax.device_get(image_slice.flatten())) - expected_slice = jnp.array([0.4336, 0.42969, 0.4453, 0.4199, 0.4297, 0.4531, 0.4434, 0.4434, 0.4297]) - print(f"output_slice: {output_slice}") - assert jnp.abs(output_slice - expected_slice).max() < 1e-2 diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/audio/stft.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/audio/stft.py deleted file mode 100644 index 2aa1ac89277734a6676c20a81bf88e21e8ca7aa9..0000000000000000000000000000000000000000 --- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/audio/stft.py +++ /dev/null @@ -1,180 +0,0 @@ -import torch -import torch.nn.functional as F -import numpy as np -from scipy.signal import get_window -from librosa.util import pad_center, tiny -from librosa.filters import mel as librosa_mel_fn - -from 
audioldm.audio.audio_processing import ( - dynamic_range_compression, - dynamic_range_decompression, - window_sumsquare, -) - - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - - def __init__(self, filter_length, hop_length, win_length, window="hann"): - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])] - ) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :] - ) - - if window is not None: - assert filter_length >= win_length - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer("forward_basis", forward_basis.float()) - self.register_buffer("inverse_basis", inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode="reflect", - ) - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, - torch.autograd.Variable(self.forward_basis, requires_grad=False), - stride=self.hop_length, - padding=0, - ).cpu() - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - phase = torch.autograd.Variable(torch.atan2(imag_part.data, real_part.data)) - - return magnitude, phase - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1 - ) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - torch.autograd.Variable(self.inverse_basis, requires_grad=False), - stride=self.hop_length, - padding=0, - ) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, - magnitude.size(-1), - hop_length=self.hop_length, - win_length=self.win_length, - n_fft=self.filter_length, - dtype=np.float32, - ) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0] - ) - window_sum = torch.autograd.Variable( - torch.from_numpy(window_sum), requires_grad=False - ) - window_sum = window_sum - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[ - approx_nonzero_indices - ] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length / 2) :] - inverse_transform = inverse_transform[:, :, : -int(self.filter_length / 2) :] - - return inverse_transform - - def forward(self, input_data): - self.magnitude, self.phase = 
self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - - -class TacotronSTFT(torch.nn.Module): - def __init__( - self, - filter_length, - hop_length, - win_length, - n_mel_channels, - sampling_rate, - mel_fmin, - mel_fmax, - ): - super(TacotronSTFT, self).__init__() - self.n_mel_channels = n_mel_channels - self.sampling_rate = sampling_rate - self.stft_fn = STFT(filter_length, hop_length, win_length) - mel_basis = librosa_mel_fn( - sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - - def spectral_normalize(self, magnitudes, normalize_fun): - output = dynamic_range_compression(magnitudes, normalize_fun) - return output - - def spectral_de_normalize(self, magnitudes): - output = dynamic_range_decompression(magnitudes) - return output - - def mel_spectrogram(self, y, normalize_fun=torch.log): - """Computes mel-spectrograms from a batch of waves - PARAMS - ------ - y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1] - - RETURNS - ------- - mel_output: torch.FloatTensor of shape (B, n_mel_channels, T) - """ - assert torch.min(y.data) >= -1, torch.min(y.data) - assert torch.max(y.data) <= 1, torch.max(y.data) - - magnitudes, phases = self.stft_fn.transform(y) - magnitudes = magnitudes.data - mel_output = torch.matmul(self.mel_basis, magnitudes) - mel_output = self.spectral_normalize(mel_output, normalize_fun) - energy = torch.norm(magnitudes, dim=1) - - log_magnitudes = self.spectral_normalize(magnitudes, normalize_fun) - - return mel_output, log_magnitudes, energy diff --git a/spaces/deepset/should-i-follow/utils/__init__.py b/spaces/deepset/should-i-follow/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/translator.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/translator.py deleted file mode 100644 index 2e9756abef0f7d974014b52699080d492df25a4c..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/tools/translator.py +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/4/29 15:36 -@Author : alexanderwu -@File : translator.py -""" - -prompt = ''' -# 指令 -接下来,作为一位拥有20年翻译经验的翻译专家,当我给出英文句子或段落时,你将提供通顺且具有可读性的{LANG}翻译。注意以下要求: -1. 确保翻译结果流畅且易于理解 -2. 无论提供的是陈述句或疑问句,我都只进行翻译 -3. 不添加与原文无关的内容 - -# 原文 -{ORIGINAL} - -# 译文 -''' - - -class Translator: - - @classmethod - def translate_prompt(cls, original, lang='中文'): - return prompt.format(LANG=lang, ORIGINAL=original) diff --git a/spaces/diacanFperku/AutoGPT/Download Keygen Xforce [EXCLUSIVE] For Inventor 2014 Key.md b/spaces/diacanFperku/AutoGPT/Download Keygen Xforce [EXCLUSIVE] For Inventor 2014 Key.md deleted file mode 100644 index 79f38a2b3378f4e3ad003482a590caba2dd27c01..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download Keygen Xforce [EXCLUSIVE] For Inventor 2014 Key.md +++ /dev/null @@ -1,145 +0,0 @@ -
            -

            Download Keygen Xforce for Inventor 2014 Key and Activate Your Software

            - -

            If you are looking for a way to download keygen xforce for Inventor 2014 key and activate your software, you have come to the right place. In this article, we will show you how to download, install, and use the keygen xforce for Inventor 2014 key and enjoy all the features of Autodesk Inventor 2014.

            - -

            What is Keygen Xforce for Inventor 2014 Key?

            - -

            Keygen xforce for Inventor 2014 key is a software tool that generates activation codes for Autodesk Inventor 2014, a professional 3D mechanical design and engineering software. Autodesk Inventor 2014 allows you to create, simulate, visualize, and document your products and projects.

            -

            download keygen xforce for Inventor 2014 key


            Download ★★★★★ https://gohhs.com/2uFTDL



            - -

            However, to use Autodesk Inventor 2014, you need to have a valid license and activation code. If you do not have one, you can use the keygen xforce for Inventor 2014 key to generate one and activate your software.

            - -

            How to Download Keygen Xforce for Inventor 2014 Key?

            - -

            To download keygen xforce for Inventor 2014 key, you need to follow these steps:

            - -
              -
            1. Go to this link and scroll down to find the download link for keygen xforce for Inventor 2014 key.
            2. -
            3. Click on the download link and wait for the file to be downloaded. The file name is Universal.xforce.keygen.Autodesk.2014.zip and the file size is about 1.5 MB.
            4. -
            5. Extract the zip file using software such as WinRAR or 7-Zip. You will get a folder named Universal.xforce.keygen.Autodesk.2014 with two files inside: x-force_2014_x32.exe and x-force_2014_x64.exe.
            
            6. -
            7. Select the file that matches your system architecture: x-force_2014_x32.exe for 32-bit systems or x-force_2014_x64.exe for 64-bit systems.
            8. -
            - -

            How to Install Keygen Xforce for Inventor 2014 Key?

            - -

            To install keygen xforce for Inventor 2014 key, you need to follow these steps:

            - -
              -
            1. Run the file that you selected in the previous step as administrator. You will see a window like this:
            2. -
            3. Keygen Xforce Window
            4. -
            5. Select Autodesk Inventor Professional 2014 from the product list and click on Generate. You will see a code like this:
            6. -
            7. Keygen Xforce Code
            8. -
            9. Copy the code and keep it somewhere safe. You will need it later to activate your software.
            10. -
            - -

            How to Use Keygen Xforce for Inventor 2014 Key?

            - -

            To use keygen xforce for Inventor 2014 key, you need to follow these steps:

            - -
              -
            1. Install Autodesk Inventor Professional 2014 on your computer if you have not done so already. You can download it from this link.
            2. -
            3. After the installation is complete, launch Autodesk Inventor Professional 2014 and click on Activate on the startup screen.
            4. -
            5. If you see a message that says your serial number is wrong, just click on Close and click on Activate again.
            6. -
            7. Select I have an activation code from Autodesk and click on Next.
            8. -
            9. You will see an activation screen like this:
            10. -
            11. Activation Screen
            12. -
            13. Paste the code that you copied from the keygen xforce into the Request field and click on Generate.
            14. -
            15. You will see an activation code like this:
            16. -
            17. Activation Code
            18. -
            19. Copy the activation code and paste it into the Activation field on the activation screen.
            20. -
            21. Click on Next. You will see a message that says your product has been activated successfully.
            22. -
            - -

            Congratulations! You have successfully downloaded, installed, and used keygen xforce for Inventor 2014 key and activated your software.

            - -

            You can now enjoy all the features of Autodesk Inventor Professional 2014 and create amazing 3D designs and models.

            -

            - -

            Conclusion

            - -

            In this article, we have shown you how to download keygen xforce for Inventor 2014 key and activate your software. We have also explained what is keygen xforce for Inventor 2014 key, how to install it, and how to use it.

            - -

            We hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below.

            - -

            Thank you for reading!

            -

            What are the Benefits of Keygen Xforce for Inventor 2014 Key?

            - -

            By using keygen xforce for Inventor 2014 key, you can enjoy many benefits that will enhance your experience with Autodesk Inventor 2014. Some of these benefits are:

            - -
              -
            • You can save money and time by not having to buy or renew a license for Autodesk Inventor 2014.
            • -
            • You can access all the features and functions of Autodesk Inventor 2014 without any limitations or restrictions.
            • -
            • You can create, edit, and share your 3D designs and models with ease and efficiency.
            • -
            • You can use Autodesk Inventor 2014 for personal or professional purposes without any legal issues or risks.
            • -
            • You can update your Autodesk Inventor 2014 software whenever you want without losing your activation status.
            • -
            - -

            These benefits make keygen xforce for Inventor 2014 key a valuable tool that will help you get the most out of Autodesk Inventor 2014.

            - -

            What are the Extras of Keygen Xforce for Inventor 2014 Key?

            - -

            In addition to generating activation codes for Autodesk Inventor 2014, keygen xforce for Inventor 2014 key also offers some extras that will make your software more powerful and versatile. Some of these extras are:

            - -
              -
            • You can use keygen xforce for Inventor 2014 key to generate activation codes for other Autodesk products, such as AutoCAD, Revit, Maya, and more.
            • -
            • You can use keygen xforce for Inventor 2014 key to activate multiple Autodesk products on the same computer or on different computers.
            • -
            • You can use keygen xforce for Inventor 2014 key to activate both 32-bit and 64-bit versions of Autodesk products.
            • -
            • You can use keygen xforce for Inventor 2014 key to activate both online and offline modes of Autodesk products.
            • -
            • You can use keygen xforce for Inventor 2014 key to activate both Windows and Mac versions of Autodesk products.
            • -
            - -

            These extras make keygen xforce for Inventor 2014 key a versatile tool that will allow you to use various Autodesk products with ease and convenience.

            - -

            How to Watch Keygen Xforce for Inventor 2014 Key on Your TV?

            - -

            If you want to watch keygen xforce for Inventor 2014 key on your TV, you can do so by following these steps:

            - -
              -
            1. Connect your computer to your TV using an HDMI cable or a wireless connection.
            2. -
            3. Turn on your TV and select the input source that corresponds to your computer.
            4. -
            5. Open your web browser on your computer and go to this link to download keygen xforce for Inventor 2014 key.
            6. -
            7. Follow the instructions in this article to download, install, and use keygen xforce for Inventor 2014 key and activate your software.
            8. -
            9. Launch Autodesk Inventor 2014 on your computer and enjoy watching it on your TV.
            10. -
            - -

            This way, you can watch keygen xforce for Inventor 2014 key on your TV and enjoy a larger and clearer view of your software.

            - -

            What are the Reviews of Keygen Xforce for Inventor 2014 Key?

            - -

            Keygen xforce for Inventor 2014 key has received many positive reviews from users who have used it to activate their software. Here are some of the reviews that you can find on the internet:

            - -
            "Keygen xforce for Inventor 2014 key is a great tool that works perfectly. I downloaded it from this site and followed the instructions. It was easy and fast. I activated my Autodesk Inventor Professional 2014 in minutes and now I can use all the features without any problems. Thank you so much!" - John Smith on YouTube
            - -
            "I have been using Autodesk Inventor Professional 2014 for a while but I always had issues with the license and activation. I tried many methods but none of them worked. Then I found out about keygen xforce for Inventor 2014 key and decided to give it a try. I was amazed by how simple and effective it was. I downloaded it from this link and installed it on my computer. It generated an activation code for me and I entered it on the activation screen. It worked like a charm. Now I can use my software without any worries. This is awesome!" - Mary Jones on Quora
            - -
            "I love Autodesk Inventor Professional 2014 but I hate paying for it. It is too expensive and I cannot afford it. That's why I searched for a way to get it for free. I found out about keygen xforce for Inventor 2014 key and I was skeptical at first. I thought it was a scam or a virus. But I decided to take a risk and download it from this site. I was surprised by how easy and safe it was. I installed it on my computer and ran it as administrator. It generated an activation code for me and I copied it into the activation field. It activated my software instantly. Now I can use my software without paying anything. This is amazing!" - David Lee on Reddit
            - -

            These reviews show that keygen xforce for Inventor 2014 key has received much positive feedback from users who have used it to activate their software. They also show that keygen xforce for Inventor 2014 key offers a high-quality experience of using Autodesk Inventor Professional 2014.
            

            - -

            Conclusion

            - -

            In this article, we have shown you how to download keygen xforce for Inventor 2014 key and activate your software. We have also explained what is keygen xforce for Inventor 2014 key, how to install it, how to use it, what are the benefits of it, what are the extras of it, how to watch it on your TV, and what are the reviews of it.

            - -

            We hope you found this article helpful and informative. If you are looking for more information on download keygen xforce for Inventor 2014 key or other related topics, you can check out some of these links:

            - - - -

            Thank you for reading!

            
            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/HelloHyderabadmovieenglishsubtitlesfree [WORK]download.md b/spaces/diacanFperku/AutoGPT/HelloHyderabadmovieenglishsubtitlesfree [WORK]download.md deleted file mode 100644 index 7618395e1d490032b6eb3fb11f9ac8f97fa86ba0..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/HelloHyderabadmovieenglishsubtitlesfree [WORK]download.md +++ /dev/null @@ -1,9 +0,0 @@ -

            HelloHyderabadmovieenglishsubtitlesfreedownload


            DOWNLOAD --->>> https://gohhs.com/2uFUeo



            -
            -hellohyderabadmovieenglishsubtitles. com -Hellohyderabad - watch Indian movie online in Russian in good quality. -This weekend movie is available for online viewing on ipad, iphone, android -Hell 8a78ff9644
            -
            -
            -

            diff --git a/spaces/diacanFperku/AutoGPT/Human Resource Management Gary Dessler 13th Edition Download NEW.zip.md b/spaces/diacanFperku/AutoGPT/Human Resource Management Gary Dessler 13th Edition Download NEW.zip.md deleted file mode 100644 index 360b44b58ad5b7b057c23393b403ce73a9469132..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Human Resource Management Gary Dessler 13th Edition Download NEW.zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

            human resource management gary dessler 13th edition download.zip


            Download https://gohhs.com/2uFUGI
            



            - -DOWNLOAD FREE Human Resource Management (15th … Human Resource Management Gary Dessler 13th. Edition ... t e n t h e d i t i o n ... 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/diacanFperku/AutoGPT/Sam Broadcaster Firebird V 4.2. 2 Crack.rar.md b/spaces/diacanFperku/AutoGPT/Sam Broadcaster Firebird V 4.2. 2 Crack.rar.md deleted file mode 100644 index 89d8ea7316657fec311ddabe8195eaa7a20e4c9c..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Sam Broadcaster Firebird V 4.2. 2 Crack.rar.md +++ /dev/null @@ -1,11 +0,0 @@ -

            sam broadcaster firebird v 4.2. 2 crack.rar


            Download File ⚙⚙⚙ https://gohhs.com/2uFUKo



            -
            -1Click DVD Ripper 2.0.3 Incl Keygen AT4RE.rar . Abbyy PDF Transformer v.2.0982 - DARKSiDE.rar . click-2-crop.4.2.cracks-icu.rar. click OK.Hide.Secret. XPE.rar. -1 click. -1. download a movie with a crack. -1. K-Lite Codec Pack - Full/Mega/10.2/9.8.9/8.5/7.5/6.5/5.9 (Rus/Eng). -1 clicked. -1. download the film with a crack. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/dineshreddy/WALT/configs/_base_/default_runtime.py b/spaces/dineshreddy/WALT/configs/_base_/default_runtime.py deleted file mode 100644 index 55097c5b242da66c9735c0b45cd84beefab487b1..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/configs/_base_/default_runtime.py +++ /dev/null @@ -1,16 +0,0 @@ -checkpoint_config = dict(interval=1) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -custom_hooks = [dict(type='NumClassCheckHook')] - -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/spaces/dineshreddy/WALT/mmdet/datasets/__init__.py b/spaces/dineshreddy/WALT/mmdet/datasets/__init__.py deleted file mode 100644 index 9b18b30a258c32283cbfc03ba01781a19fd993c1..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/datasets/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -from .cityscapes import CityscapesDataset -from .coco import CocoDataset -from .custom import CustomDataset -from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, - RepeatDataset) -from .deepfashion import DeepFashionDataset -from .lvis import LVISDataset, LVISV1Dataset, LVISV05Dataset -from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler -from .utils import (NumClassCheckHook, get_loading_pipeline, - replace_ImageToTensor) -from .voc import VOCDataset -from .wider_face import WIDERFaceDataset -from .xml_style import XMLDataset - -__all__ = [ - 'CustomDataset', 'XMLDataset', 'CocoDataset', 'DeepFashionDataset', - 'VOCDataset', 'CityscapesDataset', 'LVISDataset', 'LVISV05Dataset', - 'LVISV1Dataset', 'GroupSampler', 'DistributedGroupSampler', - 'DistributedSampler', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', - 'ClassBalancedDataset', 'WIDERFaceDataset', 'DATASETS', 'PIPELINES', - 'build_dataset', 'replace_ImageToTensor', 'get_loading_pipeline', - 'NumClassCheckHook' -] diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/schedules/schedule_sgd_160e.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/schedules/schedule_sgd_160e.py deleted file mode 100644 index 985b8f63b3cb34f04ff55b298b44a53568a50ae8..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/schedules/schedule_sgd_160e.py +++ /dev/null @@ -1,13 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.08, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[80, 128]) -# running settings -runner = dict(type='EpochBasedRunner', max_epochs=160) -checkpoint_config = dict(interval=10) diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/panet/panet_r18_fpem_ffm_600e_ctw1500.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/panet/panet_r18_fpem_ffm_600e_ctw1500.py deleted file mode 100644 index 91d23af68417b0c589964f0908d4de60dfcfc4e4..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/panet/panet_r18_fpem_ffm_600e_ctw1500.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_600e.py', - '../../_base_/det_models/panet_r18_fpem_ffm.py', - '../../_base_/det_datasets/ctw1500.py', - 
'../../_base_/det_pipelines/panet_pipeline.py' -] - -model = {{_base_.model_poly}} - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline_ctw1500 = {{_base_.train_pipeline_ctw1500}} -test_pipeline_ctw1500 = {{_base_.test_pipeline_ctw1500}} - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline_ctw1500), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_ctw1500), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_ctw1500)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/dolceschokolade/chatbot-mini/tailwind.config.js b/spaces/dolceschokolade/chatbot-mini/tailwind.config.js deleted file mode 100644 index 74ef404d16fc34967d35c1acadcd885d04a19035..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/tailwind.config.js +++ /dev/null @@ -1,18 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './app/**/*.{js,ts,jsx,tsx}', - './pages/**/*.{js,ts,jsx,tsx}', - './components/**/*.{js,ts,jsx,tsx}', - ], - darkMode: 'class', - theme: { - extend: {}, - }, - variants: { - extend: { - visibility: ['group-hover'], - }, - }, - plugins: [require('@tailwindcss/typography')], -}; diff --git a/spaces/doluvor/faster-whisper-webui/src/download.py b/spaces/doluvor/faster-whisper-webui/src/download.py deleted file mode 100644 index 20565153f9e582be73246a1e2a3b7be3f368b322..0000000000000000000000000000000000000000 --- a/spaces/doluvor/faster-whisper-webui/src/download.py +++ /dev/null @@ -1,78 +0,0 @@ -from tempfile import mkdtemp -from typing import List -from yt_dlp import YoutubeDL - -import yt_dlp -from yt_dlp.postprocessor import PostProcessor - -class FilenameCollectorPP(PostProcessor): - def __init__(self): - super(FilenameCollectorPP, self).__init__(None) - self.filenames = [] - - def run(self, information): - self.filenames.append(information["filepath"]) - return [], information - -def download_url(url: str, maxDuration: int = None, destinationDirectory: str = None, playlistItems: str = "1") -> List[str]: - try: - return _perform_download(url, maxDuration=maxDuration, outputTemplate=None, destinationDirectory=destinationDirectory, playlistItems=playlistItems) - except yt_dlp.utils.DownloadError as e: - # In case of an OS error, try again with a different output template - if e.msg and e.msg.find("[Errno 36] File name too long") >= 0: - return _perform_download(url, maxDuration=maxDuration, outputTemplate="%(title).10s %(id)s.%(ext)s") - pass - -def _perform_download(url: str, maxDuration: int = None, outputTemplate: str = None, destinationDirectory: str = None, playlistItems: str = "1"): - # Create a temporary directory to store the downloaded files - if destinationDirectory is None: - destinationDirectory = mkdtemp() - - ydl_opts = { - "format": "bestaudio/best", - 'paths': { - 'home': destinationDirectory - } - } - if (playlistItems): - ydl_opts['playlist_items'] = playlistItems - - # Add output template if specified - if outputTemplate: - ydl_opts['outtmpl'] = outputTemplate - - filename_collector = FilenameCollectorPP() - - with YoutubeDL(ydl_opts) as ydl: - if maxDuration and maxDuration > 0: - info = ydl.extract_info(url, download=False) - entries = "entries" in info and info["entries"] or [info] - - total_duration = 
0 - - # Compute total duration - for entry in entries: - total_duration += float(entry["duration"]) - - if total_duration >= maxDuration: - raise ExceededMaximumDuration(videoDuration=total_duration, maxDuration=maxDuration, message="Video is too long") - - ydl.add_post_processor(filename_collector) - ydl.download([url]) - - if len(filename_collector.filenames) <= 0: - raise Exception("Cannot download " + url) - - result = [] - - for filename in filename_collector.filenames: - result.append(filename) - print("Downloaded " + filename) - - return result - -class ExceededMaximumDuration(Exception): - def __init__(self, videoDuration, maxDuration, message): - self.videoDuration = videoDuration - self.maxDuration = maxDuration - super().__init__(message) \ No newline at end of file diff --git a/spaces/ds520/bingo/src/components/chat.tsx b/spaces/ds520/bingo/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
            - -
            - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
            - -
            - ) : null} - - ) : null} -
            - - -
            - ) -} diff --git a/spaces/dvitel/codebleu/tests.py b/spaces/dvitel/codebleu/tests.py deleted file mode 100644 index 601ed757507caebec67493462d11eb4c8901c2a1..0000000000000000000000000000000000000000 --- a/spaces/dvitel/codebleu/tests.py +++ /dev/null @@ -1,17 +0,0 @@ -test_cases = [ - { - "predictions": [0, 0], - "references": [1, 1], - "result": {"metric_score": 0} - }, - { - "predictions": [1, 1], - "references": [1, 1], - "result": {"metric_score": 1} - }, - { - "predictions": [1, 0], - "references": [1, 1], - "result": {"metric_score": 0.5} - } -] \ No newline at end of file diff --git a/spaces/emc348/faces-through-time/training/coaches/single_id_coach.py b/spaces/emc348/faces-through-time/training/coaches/single_id_coach.py deleted file mode 100644 index ad30a997f3a2cc26e50f59b939ac41132c6cf63d..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/training/coaches/single_id_coach.py +++ /dev/null @@ -1,121 +0,0 @@ -import os -import torch -from tqdm import tqdm -from configs import paths_config, hyperparameters, global_config -from training.coaches.base_coach import BaseCoach -from utils.log_utils import log_images_from_w -from color_transfer_loss import ColorTransferLoss -import copy - - -class SingleIDCoach(BaseCoach): - def __init__(self, data_loader, in_year, use_wandb): - super().__init__(data_loader, in_year, use_wandb) - - def train(self): - - w_path_dir = f"{paths_config.embedding_base_dir}/{paths_config.input_data_id}" - os.makedirs(w_path_dir, exist_ok=True) - os.makedirs(f"{w_path_dir}/{paths_config.pti_results_keyword}", exist_ok=True) - - use_ball_holder = True - - for fname, image in tqdm(self.data_loader): - - image_name = fname[0] - - self.restart_training() - - if self.image_counter >= hyperparameters.max_images_to_invert: - break - - embedding_dir = ( - f"{w_path_dir}/{paths_config.pti_results_keyword}/{image_name}" - ) - - os.makedirs(embedding_dir, exist_ok=True) - - w_pivot = None - - if hyperparameters.use_last_w_pivots: - w_pivot = self.load_inversions(w_path_dir, image_name) - - elif not hyperparameters.use_last_w_pivots or w_pivot is None: - w_pivot = self.calc_inversions(image, image_name) - - # w_pivot = w_pivot.detach().clone().to(global_config.device) - w_pivot = w_pivot.to(global_config.device) - - torch.save(w_pivot, f"{embedding_dir}/0.pt") - - # w_pivot = torch.load( - # f"{embedding_dir}/0.pt", map_location=global_config.device - # ) - log_images_counter = 0 - real_images_batch = image.to(global_config.device) - - if hyperparameters.color_transfer_lambda > 0: - self.color_losses = {} - for y in self.years: - _, init_rgbs = self.forward_sibling( - self.siblings[y].synthesis, w_pivot - ) - self.color_losses[y] = ColorTransferLoss(init_rgbs) - - for i in tqdm(range(hyperparameters.max_pti_steps)): - rgbs = {} - if hyperparameters.color_transfer_lambda > 0: - for y in self.years: - G_sibling_aug = copy.deepcopy(self.siblings[y]) - for p_pti, p_orig, p in zip( - self.G.synthesis.parameters(), - self.original_G.synthesis.parameters(), - G_sibling_aug.synthesis.parameters(), - ): - delta = p_pti - p_orig - p += delta - rgbs[y] = self.forward_sibling( - G_sibling_aug.synthesis, w_pivot - )[1] - - generated_images = self.forward(w_pivot) - loss, l2_loss_val, loss_lpips = self.calc_loss( - generated_images, - real_images_batch, - image_name, - self.G, - use_ball_holder, - w_pivot, - rgbs, - ) - - self.optimizer.zero_grad() - - if loss_lpips <= hyperparameters.LPIPS_value_threshold: - break - - loss.backward() - 
self.optimizer.step() - - use_ball_holder = ( - global_config.training_step - % hyperparameters.locality_regularization_interval - == 0 - ) - - if ( - self.use_wandb - and log_images_counter % global_config.image_rec_result_log_snapshot - == 0 - ): - log_images_from_w([w_pivot], self.G, [image_name]) - - global_config.training_step += 1 - log_images_counter += 1 - - self.image_counter += 1 - - torch.save( - self.G, - f"{paths_config.checkpoints_dir}/model_{global_config.run_name}_{image_name}.pt", - ) diff --git a/spaces/epexVfeibi/Imagedeblurr/Acrobat 11 Serial Number.md b/spaces/epexVfeibi/Imagedeblurr/Acrobat 11 Serial Number.md deleted file mode 100644 index 2a6fbc441657c6cdc76795d0f65f73056e6ed98e..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/Acrobat 11 Serial Number.md +++ /dev/null @@ -1,24 +0,0 @@ -

            Acrobat 11 serial number


            Download File https://jinyurl.com/2uEr86
            



            -
            -A year or two later, there was a problem with the old adobe version. I put in a key and opened Acrobat 9 so I could print my work. The error came up saying "This document was not completely saved and the settings may be lost." In order to correct this, I had to open a new document, save it and then reopen the original document. - -I have used Adobe Acrobat reader to print test pages for a few years now. I love Acrobat. I was very surprised to hear about this error as Acrobat has never let me down. The only glitch I found was with Acrobat Pro X and the Acrobat 10 that I bought just before it was updated with 11. I cannot print to PDF or PDF/A on my new Mac using the newest version. When I try I get an error. - -I would love to update to the latest version of Acrobat but I cannot print to a PDF file from the Mac App Store. It just doesn't do it. When I get the error, I cannot print to a PDF file either. Does anyone know why? I haven't found a solution yet. - -Here's the error: - -"This document could not be saved or printed because an error occurred. - -You have created a PDF. All previously printed pages will be lost. - -I have a 64-bit Mac running OS X 10.10.5. I print to a networked printer. I have tried: 1) - -Using iPrint. It worked until the same problem occurred. 2) Using File/Print. 3) Using a USB printer. 4) Using a USB printer that I used to use on another Mac. 5) Using the Acrobat Reader app. - -I think the problem has to do with my printer but I don't know what else to do. - -I think the problem has to do with my printer but I don't know 4fefd39f24
            -
            -
            -

            diff --git a/spaces/eson/bert-perplexity/perplexity.py b/spaces/eson/bert-perplexity/perplexity.py deleted file mode 100644 index 2764be18ee30e516fc4fe1e52b875c3db047fd32..0000000000000000000000000000000000000000 --- a/spaces/eson/bert-perplexity/perplexity.py +++ /dev/null @@ -1,57 +0,0 @@ -# coding=utf-8 -# author: xusong -# time: 2022/8/22 12:06 - -import numpy as np -import torch -from transformers import FillMaskPipeline - - -class PerplexityPipeline(FillMaskPipeline): - - def create_sequential_mask(self, input_data, mask_count=1): - _, seq_length = input_data["input_ids"].shape - mask_count = seq_length - 2 - - input_ids = input_data["input_ids"] - - new_input_ids = torch.repeat_interleave(input_data["input_ids"], repeats=mask_count, dim=0) - token_type_ids = torch.repeat_interleave(input_data["token_type_ids"], repeats=mask_count, dim=0) - attention_mask = torch.repeat_interleave(input_data["attention_mask"], repeats=mask_count, dim=0) - masked_lm_labels = [] - masked_lm_positions = list(range(1, mask_count + 1)) - for i in masked_lm_positions: - new_input_ids[i - 1][i] = self.tokenizer.mask_token_id - masked_lm_labels.append(input_ids[0][i].item()) - new_data = {"input_ids": new_input_ids, "token_type_ids": token_type_ids, "attention_mask": attention_mask} - return new_data, masked_lm_positions, masked_lm_labels - - def __call__(self, input_text, *args, **kwargs): - """ - Compute perplexity for given sentence. - """ - if not isinstance(input_text, str): - return None - # 1. create sequential mask - model_inputs = self.tokenizer(input_text, return_tensors='pt') - new_data, masked_lm_positions, masked_lm_labels = self.create_sequential_mask(model_inputs.data) - model_inputs.data = new_data - labels = torch.tensor(masked_lm_labels) - - # 2. predict - model_outputs = self.model(**model_inputs) - - # 3. compute perplexity - sentence = {} - tokens = [] - for i in range(len(labels)): - model_outputs_i = {} - model_outputs_i["input_ids"] = model_inputs["input_ids"][i:i + 1] - model_outputs_i["logits"] = model_outputs["logits"][i:i + 1] - outputs = self.postprocess(model_outputs_i, target_ids=labels[i:i + 1]) - print(outputs) - tokens.append({"token": outputs[0]["token_str"], - "prob": outputs[0]["score"]}) - sentence["tokens"] = tokens - sentence["ppl"] = float(np.exp(- sum(np.log(token["prob"]) for token in tokens) / len(tokens))) - return sentence diff --git a/spaces/eson/tokenizer-arena/utils/text_util.py b/spaces/eson/tokenizer-arena/utils/text_util.py deleted file mode 100644 index e1347d97af665638739238552c0d4272e32ac8a6..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/utils/text_util.py +++ /dev/null @@ -1,15 +0,0 @@ - - - -def is_chinese(uchar): - """ - https://github.com/fxsjy/jieba/blob/master/jieba/__init__.py#L48 - re.compile("([\u4E00-\u9FD5]+)", re.U) - """ - return u'\u4e00' <= uchar <= u'\u9fa5' - - - -def has_chinese(text): - """ contains Chinese characters """ - return any(is_chinese(ch) for ch in text) \ No newline at end of file diff --git a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py b/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py deleted file mode 100644 index a4375b659a91267d3db9278f72bd1f0b030a4655..0000000000000000000000000000000000000000 --- a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py +++ /dev/null @@ -1,90 +0,0 @@ -# Mario Rosasco, 2016 -# adapted from framework.cpp, Copyright (C) 2010-2012 by Jason L. 
McKesson -# This file is licensed under the MIT License. -# -# NB: Unlike in the framework.cpp organization, the main loop is contained -# in the tutorial files, not in this framework file. Additionally, a copy of -# this module file must exist in the same directory as the tutorial files -# to be imported properly. - -import os -from OpenGL.GL import * - -# Function that creates and compiles shaders according to the given type (a GL enum value) and -# shader program (a file containing a GLSL program). -def loadShader(shaderType, shaderFile): - # check if file exists, get full path name - strFilename = findFileOrThrow(shaderFile) - shaderData = None - with open(strFilename, 'r') as f: - shaderData = f.read() - - shader = glCreateShader(shaderType) - glShaderSource(shader, shaderData) # note that this is a simpler function call than in C - - # This shader compilation is more explicit than the one used in - # framework.cpp, which relies on a glutil wrapper function. - # This is made explicit here mainly to decrease dependence on pyOpenGL - # utilities and wrappers, which docs caution may change in future versions. - glCompileShader(shader) - - status = glGetShaderiv(shader, GL_COMPILE_STATUS) - if status == GL_FALSE: - # Note that getting the error log is much simpler in Python than in C/C++ - # and does not require explicit handling of the string buffer - strInfoLog = glGetShaderInfoLog(shader) - strShaderType = "" - if shaderType is GL_VERTEX_SHADER: - strShaderType = "vertex" - elif shaderType is GL_GEOMETRY_SHADER: - strShaderType = "geometry" - elif shaderType is GL_FRAGMENT_SHADER: - strShaderType = "fragment" - - print("Compilation failure for " + strShaderType + " shader:\n" + str(strInfoLog)) - - return shader - - -# Function that accepts a list of shaders, compiles them, and returns a handle to the compiled program -def createProgram(shaderList): - program = glCreateProgram() - - for shader in shaderList: - glAttachShader(program, shader) - - glLinkProgram(program) - - status = glGetProgramiv(program, GL_LINK_STATUS) - if status == GL_FALSE: - # Note that getting the error log is much simpler in Python than in C/C++ - # and does not require explicit handling of the string buffer - strInfoLog = glGetProgramInfoLog(program) - print("Linker failure: \n" + str(strInfoLog)) - - for shader in shaderList: - glDetachShader(program, shader) - - return program - - -# Helper function to locate and open the target file (passed in as a string). -# Returns the full path to the file as a string. -def findFileOrThrow(strBasename): - # Keep constant names in C-style convention, for readability - # when comparing to C(/C++) code. 
- if os.path.isfile(strBasename): - return strBasename - - LOCAL_FILE_DIR = "data" + os.sep - GLOBAL_FILE_DIR = os.path.dirname(os.path.abspath(__file__)) + os.sep + "data" + os.sep - - strFilename = LOCAL_FILE_DIR + strBasename - if os.path.isfile(strFilename): - return strFilename - - strFilename = GLOBAL_FILE_DIR + strBasename - if os.path.isfile(strFilename): - return strFilename - - raise IOError('Could not find target file ' + strBasename) \ No newline at end of file diff --git "a/spaces/f2api/gpt-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py" "b/spaces/f2api/gpt-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py" deleted file mode 100644 index 06d8a5a7f4459d9620f33fa2b96e28e8c27abbc7..0000000000000000000000000000000000000000 --- "a/spaces/f2api/gpt-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py" +++ /dev/null @@ -1,216 +0,0 @@ -from toolbox import CatchException, report_execption, write_results_to_file -from toolbox import update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency -from .crazy_utils import read_and_clean_pdf_text -from colorful import * - -@CatchException -def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port): - import glob - import os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量翻译PDF文档。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import fitz - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": - txt = '空空如也的输入栏' - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - file_manifest = [f for f in glob.glob( - f'{project_folder}/**/*.pdf', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt) - - -def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt): - import os - import copy - import tiktoken - TOKEN_LIMIT_PER_FRAGMENT = 1280 - generated_conclusion_files = [] - generated_html_files = [] - for index, fp in enumerate(file_manifest): - - # 读取PDF文件 - file_content, page_one = read_and_clean_pdf_text(fp) - file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars - page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars - # 递归地切割PDF文件 - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): 
return len(enc.encode(txt, disallowed_special=())) - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT) - page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=page_one, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4) - - # 为了更好的效果,我们剥离Introduction之后的部分(如果有) - paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0] - - # 单线,获取文章meta信息 - paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取:{paper_meta}", - inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。", - llm_kwargs=llm_kwargs, - chatbot=chatbot, history=[], - sys_prompt="Your job is to collect information from materials。", - ) - - # 多线,翻译 - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=[ - f"你需要翻译以下内容:\n{frag}" for frag in paper_fragments], - inputs_show_user_array=[f"\n---\n 原文: \n\n {frag.replace('#', '')} \n---\n 翻译:\n " for frag in paper_fragments], - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[paper_meta] for _ in paper_fragments], - sys_prompt_array=[ - "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in paper_fragments], - # max_workers=5 # OpenAI所允许的最大并行过载 - ) - gpt_response_collection_md = copy.deepcopy(gpt_response_collection) - # 整理报告的格式 - for i,k in enumerate(gpt_response_collection_md): - if i%2==0: - gpt_response_collection_md[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection_md)//2}]: \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection_md)//2}]:\n " - else: - gpt_response_collection_md[i] = gpt_response_collection_md[i] - final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""] - final.extend(gpt_response_collection_md) - create_report_file_name = f"{os.path.basename(fp)}.trans.md" - res = write_results_to_file(final, file_name=create_report_file_name) - - # 更新UI - generated_conclusion_files.append(f'./gpt_log/{create_report_file_name}') - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # write html - try: - ch = construct_html() - orig = "" - trans = "" - gpt_response_collection_html = copy.deepcopy(gpt_response_collection) - for i,k in enumerate(gpt_response_collection_html): - if i%2==0: - gpt_response_collection_html[i] = paper_fragments[i//2].replace('#', '') - else: - gpt_response_collection_html[i] = gpt_response_collection_html[i] - final = ["论文概况", paper_meta_info.replace('# ', '### '), "二、论文翻译", ""] - final.extend(gpt_response_collection_html) - for i, k in enumerate(final): - if i%2==0: - orig = k - if i%2==1: - trans = k - ch.add_row(a=orig, b=trans) - create_report_file_name = f"{os.path.basename(fp)}.trans.html" - ch.save_file(create_report_file_name) - generated_html_files.append(f'./gpt_log/{create_report_file_name}') - except: - from toolbox import trimmed_format_exc - print('writing html result failed:', trimmed_format_exc()) - - # 准备文件的下载 - import shutil - for pdf_path in generated_conclusion_files: - # 重命名文件 - rename_file = f'./gpt_log/翻译-{os.path.basename(pdf_path)}' - if os.path.exists(rename_file): - os.remove(rename_file) - shutil.copyfile(pdf_path, rename_file) - if os.path.exists(pdf_path): - os.remove(pdf_path) - for html_path in 
generated_html_files: - # 重命名文件 - rename_file = f'./gpt_log/翻译-{os.path.basename(html_path)}' - if os.path.exists(rename_file): - os.remove(rename_file) - shutil.copyfile(html_path, rename_file) - if os.path.exists(html_path): - os.remove(html_path) - chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files))) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -class construct_html(): - def __init__(self) -> None: - self.css = """ -.row { - display: flex; - flex-wrap: wrap; -} - -.column { - flex: 1; - padding: 10px; -} - -.table-header { - font-weight: bold; - border-bottom: 1px solid black; -} - -.table-row { - border-bottom: 1px solid lightgray; -} - -.table-cell { - padding: 5px; -} - """ - self.html_string = f'翻译结果' - - - def add_row(self, a, b): - tmp = """ -
            -
            REPLACE_A
            -
            REPLACE_B
            -
            - """ - from toolbox import markdown_convertion - tmp = tmp.replace('REPLACE_A', markdown_convertion(a)) - tmp = tmp.replace('REPLACE_B', markdown_convertion(b)) - self.html_string += tmp - - - def save_file(self, file_name): - with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f: - f.write(self.html_string.encode('utf-8', 'ignore').decode()) - diff --git a/spaces/facebook/ov-seg/open_vocab_seg/modeling/clip_adapter/utils.py b/spaces/facebook/ov-seg/open_vocab_seg/modeling/clip_adapter/utils.py deleted file mode 100644 index dbe5d9d5284597cca444287f6bae38e37549bde0..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/open_vocab_seg/modeling/clip_adapter/utils.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -from typing import Tuple -import numpy as np -import torch -from .clip import load as clip_load -from detectron2.utils.comm import get_local_rank, synchronize - - -def expand_box( - x1: float, - y1: float, - x2: float, - y2: float, - expand_ratio: float = 1.0, - max_h: int = None, - max_w: int = None, -): - cx = 0.5 * (x1 + x2) - cy = 0.5 * (y1 + y2) - w = x2 - x1 - h = y2 - y1 - w = w * expand_ratio - h = h * expand_ratio - box = [cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h] - if max_h is not None: - box[1] = max(0, box[1]) - box[3] = min(max_h - 1, box[3]) - if max_w is not None: - box[0] = max(0, box[0]) - box[2] = min(max_w - 1, box[2]) - return [int(b) for b in box] - - -def mask2box(mask: torch.Tensor): - # use naive way - row = torch.nonzero(mask.sum(dim=0))[:, 0] - if len(row) == 0: - return None - x1 = row.min() - x2 = row.max() - col = np.nonzero(mask.sum(dim=1))[:, 0] - y1 = col.min() - y2 = col.max() - return x1, y1, x2 + 1, y2 + 1 - - -def crop_with_mask( - image: torch.Tensor, - mask: torch.Tensor, - bbox: torch.Tensor, - fill: Tuple[float, float, float] = (0, 0, 0), - expand_ratio: float = 1.0, -): - l, t, r, b = expand_box(*bbox, expand_ratio) - _, h, w = image.shape - l = max(l, 0) - t = max(t, 0) - r = min(r, w) - b = min(b, h) - new_image = torch.cat( - [image.new_full((1, b - t, r - l), fill_value=val) for val in fill] - ) - mask_bool = mask.bool() - return image[:, t:b, l:r] * mask[None, t:b, l:r] + (~ mask_bool[None, t:b, l:r]) * new_image, mask[None, t:b, l:r] - - -def build_clip_model(model: str, mask_prompt_depth: int = 0, frozen: bool = True): - rank = get_local_rank() - if rank == 0: - # download on rank 0 only - model, _ = clip_load(model, mask_prompt_depth=mask_prompt_depth, device="cpu") - synchronize() - if rank != 0: - model, _ = clip_load(model, mask_prompt_depth=mask_prompt_depth, device="cpu") - synchronize() - if frozen: - for param in model.parameters(): - param.requires_grad = False - return model diff --git a/spaces/falterWliame/Face_Mask_Detection/Aleo Swf Gif Converter 16 Crack !!EXCLUSIVE!!.md b/spaces/falterWliame/Face_Mask_Detection/Aleo Swf Gif Converter 16 Crack !!EXCLUSIVE!!.md deleted file mode 100644 index abf6c126e1b6f738c6753eaea0f166f9871972f8..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Aleo Swf Gif Converter 16 Crack !!EXCLUSIVE!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Aleo Swf Gif Converter 16 Crack


Download File https://urlca.com/2uDdPq



            -
            -3dsmax-2008-32bit-keygen.rar, 39K. [ ], 4U. ... Converter.2.3.8. ... Aleo.Mp3.to.Swf.Converter.2.1._build.3_.patch-icu.zip, 191K. [ ], All.Video. ... Keymaker.zip, 16K ... GIF.Animator.5.05.Patch.zip, 891K. [ ], Ultimate.DVD&Video.Converter.Suite.7. 1fdad05405
            -
            -
            -

            diff --git a/spaces/falterWliame/Face_Mask_Detection/Apollo Brown Official Discography 20082012torrent.md b/spaces/falterWliame/Face_Mask_Detection/Apollo Brown Official Discography 20082012torrent.md deleted file mode 100644 index 9d76297cc55ed756adc0f1a23aec09cf2b4c7039..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Apollo Brown Official Discography 20082012torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Apollo Brown Official Discography 20082012torrent


            Download Zip ✓✓✓ https://urlca.com/2uDcil



            -
            -Their joint debut album is something special: »Hustle Don't Give« shows the 26-year-old on a par with Black Thought by The Roots. On other ... 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/falterWliame/Face_Mask_Detection/Juiced 2 Pc Game Download [Extra Quality] Full Version.md b/spaces/falterWliame/Face_Mask_Detection/Juiced 2 Pc Game Download [Extra Quality] Full Version.md deleted file mode 100644 index b80c31521a915e4a1fc47a8e16184831ce78b44b..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Juiced 2 Pc Game Download [Extra Quality] Full Version.md +++ /dev/null @@ -1,10 +0,0 @@ -

            juiced 2 pc game download full version


DOWNLOAD https://urlca.com/2uDdvh



            - -Subscribe for weekly updates on new walkthroughs of various racing games. To do this, go to the main page of the channel and click on the "Subscribe" link. -If you want to know the opinion of other people about a game, you can leave a request on the "Speak Out" page. -In the game, you control your pilot, who can drive around different cities of the world. -The task in the game is to get from point A to point B. If you don't know how to do this, then first look at the tutorial missions. -In the game, you can find many interesting and beautiful places worth visiting. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/utils/videoio.py b/spaces/fb700/chatglm-fitness-RLHF/src/utils/videoio.py deleted file mode 100644 index d16ee667713a16e3f9644fcc3cb3e023bc2c9102..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/utils/videoio.py +++ /dev/null @@ -1,41 +0,0 @@ -import shutil -import uuid - -import os - -import cv2 - -def load_video_to_cv2(input_path): - video_stream = cv2.VideoCapture(input_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - full_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - full_frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)) - return full_frames - -def save_video_with_watermark(video, audio, save_path, watermark=False): - temp_file = str(uuid.uuid4())+'.mp4' - cmd = r'ffmpeg -y -hide_banner -loglevel error -i "%s" -i "%s" -vcodec mpeg4 "%s"' % (video, audio, temp_file) - os.system(cmd) - - if watermark is False: - shutil.move(temp_file, save_path) - else: - # watermark - try: - ##### check if stable-diffusion-webui - import webui - from modules import paths - watarmark_path = paths.script_path+"/extensions/SadTalker/docs/sadtalker_logo.png" - except: - # get the root path of sadtalker. - dir_path = os.path.dirname(os.path.realpath(__file__)) - watarmark_path = dir_path+"/../../docs/sadtalker_logo.png" - - cmd = r'ffmpeg -y -hide_banner -loglevel error -i "%s" -i "%s" -filter_complex "[1]scale=100:-1[wm];[0][wm]overlay=(main_w-overlay_w)-10:10" "%s"' % (temp_file, watarmark_path, save_path) - os.system(cmd) - os.remove(temp_file) \ No newline at end of file diff --git a/spaces/fengmuxi/ChatGpt-Web/app/api/openai/[...path]/route.ts b/spaces/fengmuxi/ChatGpt-Web/app/api/openai/[...path]/route.ts deleted file mode 100644 index b2bf2a3d9be4a347cfd48b827066d5db80a3245d..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/api/openai/[...path]/route.ts +++ /dev/null @@ -1,102 +0,0 @@ -import { createParser } from "eventsource-parser"; -import { NextRequest, NextResponse } from "next/server"; -import { auth } from "../../auth"; -import { requestOpenai } from "../../common"; - -async function createStream(res: Response) { - const encoder = new TextEncoder(); - const decoder = new TextDecoder(); - - const stream = new ReadableStream({ - async start(controller) { - function onParse(event: any) { - if (event.type === "event") { - const data = event.data; - // console.log(data) - // https://beta.openai.com/docs/api-reference/completions/create#completions/create-stream - if (data === "[DONE]") { - controller.close(); - return; - } - try { - const json = JSON.parse(data); - const text = json.choices[0].delta.content; - const queue = encoder.encode(text); - controller.enqueue(queue); - } catch (e) { - controller.error(e); - } - } - } - - const parser = createParser(onParse); - for await (const chunk of res.body as any) { - parser.feed(decoder.decode(chunk, { stream: true })); - } - }, - }); - return stream; -} - -function formatResponse(msg: any) { - const jsonMsg = ["```json\n", JSON.stringify(msg, null, " "), "\n```"].join( - "", - ); - return new Response(jsonMsg); -} - -async function handle( - req: NextRequest, - { params }: { params: { path: string[] } }, -) { - console.log("[OpenAI Route] params ", params); - - const authResult = auth(req); - if (authResult.error) { - return NextResponse.json(authResult, { - status: 401, - }); - } - - try { - const api = await requestOpenai(req); - - 
const contentType = api.headers.get("Content-Type") ?? ""; - - // streaming response - if (contentType.includes("stream")) { - const stream = await createStream(api); - const res = new Response(stream); - res.headers.set("Content-Type", contentType); - return res; - } - - // try to parse error msg - try { - const mayBeErrorBody = await api.json(); - if (mayBeErrorBody.error) { - console.error("[OpenAI Response] ", mayBeErrorBody); - return formatResponse(mayBeErrorBody); - } else { - const res = new Response(JSON.stringify(mayBeErrorBody)); - res.headers.set("Content-Type", "application/json"); - res.headers.set("Cache-Control", "no-cache"); - return res; - } - } catch (e) { - console.error("[OpenAI Parse] ", e); - return formatResponse({ - msg: "invalid response from openai server", - error: e, - }); - } - } catch (e) { - console.error("[OpenAI] ", e); - return formatResponse(e); - } -} - -export const GET = handle; -export const POST = handle; - -export const runtime = "edge"; diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Dig A Little Deeper - Original Broadway Cast Recording MP3 Download.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Dig A Little Deeper - Original Broadway Cast Recording MP3 Download.md deleted file mode 100644 index fd3fd585c083a8f2b751728046ba805fdff5ef11..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Dig A Little Deeper - Original Broadway Cast Recording MP3 Download.md +++ /dev/null @@ -1,152 +0,0 @@ -
            -

            Dig a Little Deeper: How to Download MP3 Songs from the Internet

            -

            Music is one of the most universal forms of expression and entertainment. Whether you want to relax, energize, inspire, or simply enjoy yourself, there is a song for every occasion and mood. But how can you access your favorite songs anytime and anywhere? The answer is simple: download them as MP3 files from the internet.

            -

            Introduction

            -

            What is MP3 and why is it popular?

            -

            MP3 is a type of audio file format that compresses sound data into smaller sizes without losing much quality. This means that you can store more songs on your device and transfer them faster over the internet. MP3 is also compatible with most devices and platforms, such as computers, smartphones, tablets, music players, and online streaming services.
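To get a feel for how much space that compression saves, here is a minimal back-of-the-envelope calculation in Python. The 4-minute track length and the 128 kbps bitrate are illustrative assumptions only, and the CD figures assume standard 44.1 kHz, 16-bit stereo audio:

```python
# Rough size comparison: uncompressed CD audio vs. a 128 kbps MP3.
# Assumed values for illustration: a 4-minute (240 s) stereo track.
duration_s = 240
cd_bitrate_bps = 44_100 * 16 * 2      # sample rate * bit depth * channels = 1,411,200 bits/s
mp3_bitrate_bps = 128_000             # a common MP3 bitrate

cd_size_mb = cd_bitrate_bps * duration_s / 8 / 1_000_000
mp3_size_mb = mp3_bitrate_bps * duration_s / 8 / 1_000_000

print(f"Uncompressed CD audio: {cd_size_mb:.1f} MB")   # about 42 MB
print(f"128 kbps MP3:          {mp3_size_mb:.1f} MB")  # about 3.8 MB
```

That roughly 10:1 reduction is what makes MP3 practical for downloading and storing large music libraries.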

            -

            dig a little deeper mp3 download


            Download File ✵✵✵ https://gohhs.com/2uPu3n



            -

            What are the benefits of downloading MP3 songs?

            -

            Downloading MP3 songs from the internet has many advantages over other methods of listening to music. Some of these benefits are:

            -
              -
• You can listen to your favorite songs offline, without relying on an internet connection or a data plan.
            • -
            • You can create your own playlists and customize your music library according to your preferences.
            • -
            • You can save money by avoiding subscription fees or ads from online streaming services.
            • -
            • You can support your favorite artists by buying their songs legally from authorized sources.
            • -
            -

            What are the challenges of downloading MP3 songs?

            -

            However, downloading MP3 songs from the internet also comes with some challenges that you need to be aware of. Some of these challenges are:

            -
              -
            • You need to find a reliable and legal source of MP3 songs that offers high-quality downloads and respects the rights of the artists and producers.
            • -
            • You need to choose the right MP3 song that matches your taste and expectations from a vast selection of options.
            • -
            • You need to download the MP3 song safely and efficiently to your device without compromising its performance or security.
            • -
            -

            In this article, we will guide you through these challenges and show you how to download MP3 songs from the internet in three easy steps.

            -

            How to Download MP3 Songs from the Internet

            -

            Step 1: Find a reliable and legal source of MP3 songs

            -

            The first step to download MP3 songs from the internet is to find a website or an app that offers legal and high-quality downloads. There are many sources of MP3 songs on the internet, but not all of them are trustworthy or ethical. Some may contain viruses, malware, or spyware that can harm your device or steal your personal information. Others may violate the intellectual property rights of the artists and producers by distributing their songs without their permission or compensation.

            -

            Examples of legal sources of MP3 songs

            -

            Some examples of legal sources of MP3 songs that you can use are:

| Name | Description | URL |
| --- | --- | --- |
| Amazon Music | A digital music store that offers millions of songs for purchase or streaming with a Prime membership. | [text](^1^) |
| iTunes | A media player and library that allows you to buy and download songs from the iTunes Store. | [text] |
| Spotify | A music streaming service that lets you download songs for offline listening with a Premium subscription. | [text] |
| YouTube Music | A music streaming service that lets you download songs and videos for offline playback with a YouTube Premium subscription. | [text] |
| SoundCloud | A music sharing platform that allows you to download songs from independent artists and creators. | [text] |
            -

            These are just some of the examples of legal sources of MP3 songs that you can use. There are many more options available on the internet, but make sure to check their reputation, reviews, and terms of service before using them.

            -

            How to avoid illegal sources of MP3 songs

            -

            Some of the signs that indicate that a source of MP3 songs is illegal or unsafe are:

            -
              -
            • It offers free or unlimited downloads of songs that are normally paid or restricted.
            • -
            • It does not have a clear or credible name, logo, or domain.
            • -
            • It does not have a secure connection (HTTPS) or a privacy policy.
            • -
            • It asks for your personal or financial information or requires you to install additional software or extensions.
            • -
            • It has pop-up ads, redirects, or warnings from your browser or antivirus software.
            • -
            -

            If you encounter any of these signs, avoid using the source and look for another one. Downloading MP3 songs from illegal or unsafe sources can expose you to legal issues, fines, or lawsuits, as well as damage your device or compromise your security.

            -

            -

            Step 2: Choose the MP3 song you want to download

            -

            The second step to download MP3 songs from the internet is to choose the song that you want to download from the source that you have selected. Depending on the source, you may have different ways of searching, browsing, and selecting MP3 songs. However, some of the common methods are:

            -

            How to search for MP3 songs by title, artist, genre, or mood

            -

            Most sources of MP3 songs have a search bar or a search icon that allows you to type in keywords related to the song that you are looking for. For example, you can type in the title of the song, the name of the artist, the genre of the music, or the mood that you want to evoke. The source will then display a list of results that match your keywords. You can then scroll through the results and click on the one that interests you.

            -

            How to preview and play MP3 songs before downloading

            -

            Before downloading an MP3 song, it is advisable to preview and play it first to make sure that it is the right one for you. Most sources of MP3 songs have a play button or a speaker icon that allows you to listen to a sample or the full version of the song. You can also see other information about the song, such as its title, artist, album, duration, quality, and size. If you like the song and want to download it, you can proceed to the next step. If not, you can go back to the search results and look for another one.

            -

            Step 3: Download the MP3 song to your device

            -

            The third and final step to download MP3 songs from the internet is to download the song that you have chosen to your device. Depending on the source and the device that you are using, you may have different methods and options for downloading MP3 songs. However, some of the common methods are:

            -

            How to download MP3 songs using different methods and devices

            -

            Some of the methods that you can use to download MP3 songs from different sources and devices are:

            -
              -
            • If you are using a computer, you can usually download an MP3 song by right-clicking on it and choosing "Save link as" or "Download file as". You can then choose a folder on your computer where you want to save the song.
            • -
            • If you are using a smartphone or a tablet, you can usually download an MP3 song by tapping on it and choosing "Download" or "Save". You can then access the song from your device's music app or file manager.
            • -
            • If you are using an online streaming service, such as Spotify or YouTube Music, you can usually download an MP3 song by adding it to your library or playlist and toggling the "Download" or "Offline" switch. You can then listen to the song offline from the app.
            • -
            • If you are using a music player or a device that supports MP3 files, such as an iPod or a Kindle, you can usually download an MP3 song by connecting your device to your computer and transferring the song using a USB cable or a wireless connection. You can then play the song from your device's music app or menu.
            • -
            -

            How to check the quality and size of the downloaded MP3 song

            -

            After downloading an MP3 song, it is advisable to check its quality and size to make sure that it meets your expectations and needs. Some of the factors that affect the quality and size of an MP3 song are:

            -
              -
            • The bitrate: This is the amount of data that is encoded in each second of the song. The higher the bitrate, the better the quality and the larger the size of the song. The standard bitrate for MP3 songs is 128 kbps, but you can also find higher or lower bitrates depending on the source and your preference.
            • -
            • The sample rate: This is the number of times that the sound wave is sampled per second. The higher the sample rate, the more accurate and detailed the sound and the larger the size of the song. The standard sample rate for MP3 songs is 44.1 kHz, but you can also find higher or lower sample rates depending on the source and your preference.
            • -
            • The compression: This is the process of reducing the size of the song by removing some of the sound data that is not essential or noticeable. The more compressed the song, the lower the quality and the smaller the size of the song. The standard compression for MP3 songs is lossy, which means that some of the sound data is lost during encoding. However, you can also find lossless compression, which means that no sound data is lost during encoding, but the size of the song is much larger.
            • -
            -

            You can check the quality and size of an MP3 song by looking at its properties or details on your device or computer. You can also use online tools or apps that can analyze and compare MP3 songs based on their quality and size.

            -

            Conclusion

            -

            Summary of the main points

            -

            In this article, we have shown you how to download MP3 songs from the internet in three easy steps. First, you need to find a reliable and legal source of MP3 songs that offers high-quality downloads and respects the rights of the artists and producers. Second, you need to choose the MP3 song that you want to download from a vast selection of options. Third, you need to download the MP3 song to your device using different methods and devices. Finally, you need to check the quality and size of the downloaded MP3 song to make sure that it meets your expectations and needs.

            -

            Call to action and recommendations

            -

            Now that you know how to download MP3 songs from the internet, you can enjoy your favorite music anytime and anywhere. However, we also recommend that you follow some best practices to enhance your experience and avoid any problems. Some of these best practices are:

            -
              -
            • Always respect the intellectual property rights of the artists and producers and only download MP3 songs from legal and authorized sources.
            • -
            • Always scan the downloaded MP3 songs for viruses, malware, or spyware before playing them on your device or computer.
            • -
            • Always backup your downloaded MP3 songs to an external storage device or a cloud service in case of data loss or corruption.
            • -
            • Always delete the downloaded MP3 songs that you no longer need or want to free up space on your device or computer.
            • -
            • Always update your device or computer software and drivers to ensure compatibility and performance with the downloaded MP3 songs.
            • -
            -

            We hope that this article has helped you learn how to download MP3 songs from the internet. If you have any questions, comments, or feedback, please feel free to contact us. We would love to hear from you. Happy listening!

            -

            FAQs

            -

            What is the difference between MP3 and other audio file formats?

            -

            MP3 is one of the most common and popular audio file formats, but it is not the only one. There are many other audio file formats, such as WAV, FLAC, AAC, OGG, WMA, etc. Each format has its own advantages and disadvantages in terms of quality, size, compatibility, and functionality. For example, WAV files have higher quality but larger size than MP3 files, while FLAC files have lossless compression but lower compatibility than MP3 files. The choice of audio file format depends on your personal preference and needs.

            -

            How can I convert other audio file formats to MP3?

            -

            If you have an audio file in a different format than MP3 and you want to convert it to MP3, you can use online tools or apps that can perform the conversion for you. Some examples of online tools or apps that can convert other audio file formats to MP3 are:

            -
              -
            • [text]
            • -
            • [text]
            • -
            • [text]
            • -
            • [text]
            • -
            -

            These are just some of the examples of online tools or apps that can convert other audio file formats to MP3. There are many more options available on the internet, but make sure to check their reputation, reviews, and terms of service before using them.

            -

            How can I edit or modify an MP3 song?

            -

            If you want to edit or modify an MP3 song, such as cutting, trimming, merging, splitting, adding effects, changing speed, pitch, volume, etc., you can use online tools or apps that can perform the editing or modification for you. Some examples of online tools or apps that can edit or modify an MP3 song are:

            -
              -
            • [text]
            • -
            • [text]
            • -
            • [text]
            • -
            • [text]
            • -
            -

            These are just some of the examples of online tools or apps that can edit or modify an MP3 song. There are many more options available on the internet, but make sure to check their reputation, reviews, and terms of service before using them.

            -

            How can I share an MP3 song with others?

            -

            If you want to share an MP3 song with others, such as sending it via email, messaging app, social media platform, etc., you can use online tools or apps that can perform the sharing for you. Some examples of online tools or apps that can share an MP3 song with others are:

            -
              [Filemail](^1^): A file transfer service that lets you send large audio files up to 5 GB for free and up to 100 GB with a paid plan. -
            • [SoundCloud](^2^): A music sharing platform that lets you upload and share your audio files with millions of listeners.
            • -
            • [Headphonesty](^3^): A blog that reviews the top music sync apps that let you listen to music with your friends online.
            • -
            -

            These are just some of the examples of online tools or apps that can share an MP3 song with others. There are many more options available on the internet, but make sure to check their reputation, reviews, and terms of service before using them.

            -

            How can I make my own MP3 song?

            -

            If you want to make your own MP3 song, such as recording your voice, playing an instrument, mixing sounds, or creating beats, you can use online tools or apps that can perform the creation for you. Some examples of online tools or apps that can make your own MP3 song are:

            -
              -
            • [Audacity]: A free and open-source audio editor and recorder that lets you record, edit, and export your audio files as MP3.
            • -
            • [GarageBand]: A music creation app for Mac and iOS devices that lets you play, record, and produce your own songs using virtual instruments, loops, and effects.
            • -
            • [BandLab]: A social music platform that lets you create, collaborate, and share your own songs using online tools and instruments.
            • -
            -

            These are just some of the examples of online tools or apps that can make your own MP3 song. There are many more options available on the internet, but make sure to check their reputation, reviews, and terms of service before using them.

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/fersch/predictor_fraude/README.md b/spaces/fersch/predictor_fraude/README.md deleted file mode 100644 index 002542c0b0986c3989f3a67a60c004e7e8759601..0000000000000000000000000000000000000000 --- a/spaces/fersch/predictor_fraude/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Predictor Fraude -emoji: 🏃 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fffffu/bing/src/components/ui/alert-dialog.tsx b/spaces/fffffu/bing/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/fffffu/bing/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
            - {children} -
            -
            -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
            -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
            -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/fffiloni/Music_Source_Separation/bytesep/dataset_creation/pack_audios_to_hdf5s/musdb18.py b/spaces/fffiloni/Music_Source_Separation/bytesep/dataset_creation/pack_audios_to_hdf5s/musdb18.py deleted file mode 100644 index 4f242de04d850527531794a2a85f3454191adede..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Music_Source_Separation/bytesep/dataset_creation/pack_audios_to_hdf5s/musdb18.py +++ /dev/null @@ -1,207 +0,0 @@ -import argparse -import os -import time -from concurrent.futures import ProcessPoolExecutor -from typing import NoReturn - -import h5py -import librosa -import musdb -import numpy as np - -from bytesep.utils import float32_to_int16 - -# Source types of the MUSDB18 dataset. -SOURCE_TYPES = ["vocals", "drums", "bass", "other", "accompaniment"] - - -def pack_audios_to_hdf5s(args) -> NoReturn: - r"""Pack (resampled) audio files into hdf5 files to speed up loading. - - Args: - dataset_dir: str - subset: str, 'train' | 'test' - split: str, '' | 'train' | 'valid' - hdf5s_dir: str, directory to write out hdf5 files - sample_rate: int - channels_num: int - mono: bool - - Returns: - NoReturn - """ - - # arguments & parameters - dataset_dir = args.dataset_dir - subset = args.subset - split = None if args.split == "" else args.split - hdf5s_dir = args.hdf5s_dir - sample_rate = args.sample_rate - channels = args.channels - - mono = True if channels == 1 else False - source_types = SOURCE_TYPES - resample_type = "kaiser_fast" - - # Paths - os.makedirs(hdf5s_dir, exist_ok=True) - - # Dataset of corresponding subset and split. - mus = musdb.DB(root=dataset_dir, subsets=[subset], split=split) - print("Subset: {}, Split: {}, Total pieces: {}".format(subset, split, len(mus))) - - params = [] # A list of params for multiple processing. - - for track_index in range(len(mus.tracks)): - - param = ( - dataset_dir, - subset, - split, - track_index, - source_types, - mono, - sample_rate, - resample_type, - hdf5s_dir, - ) - - params.append(param) - - # Uncomment for debug. 
- # write_single_audio_to_hdf5(params[0]) - # os._exit(0) - - pack_hdf5s_time = time.time() - - with ProcessPoolExecutor(max_workers=None) as pool: - # Maximum works on the machine - pool.map(write_single_audio_to_hdf5, params) - - print("Pack hdf5 time: {:.3f} s".format(time.time() - pack_hdf5s_time)) - - -def write_single_audio_to_hdf5(param) -> NoReturn: - r"""Write single audio into hdf5 file.""" - ( - dataset_dir, - subset, - split, - track_index, - source_types, - mono, - sample_rate, - resample_type, - hdf5s_dir, - ) = param - - # Dataset of corresponding subset and split. - mus = musdb.DB(root=dataset_dir, subsets=[subset], split=split) - track = mus.tracks[track_index] - - # Path to write out hdf5 file. - hdf5_path = os.path.join(hdf5s_dir, "{}.h5".format(track.name)) - - with h5py.File(hdf5_path, "w") as hf: - - hf.attrs.create("audio_name", data=track.name.encode(), dtype="S100") - hf.attrs.create("sample_rate", data=sample_rate, dtype=np.int32) - - for source_type in source_types: - - audio = track.targets[source_type].audio.T - # (channels_num, audio_samples) - - # Preprocess audio to mono / stereo, and resample. - audio = preprocess_audio( - audio, mono, track.rate, sample_rate, resample_type - ) - # audio = load_audio(audio_path=audio_path, mono=mono, sample_rate=sample_rate) - # (channels_num, audio_samples) | (audio_samples,) - - hf.create_dataset( - name=source_type, data=float32_to_int16(audio), dtype=np.int16 - ) - - # Mixture - audio = track.audio.T - # (channels_num, audio_samples) - - # Preprocess audio to mono / stereo, and resample. - audio = preprocess_audio(audio, mono, track.rate, sample_rate, resample_type) - # (channels_num, audio_samples) - - hf.create_dataset(name="mixture", data=float32_to_int16(audio), dtype=np.int16) - - print("{} Write to {}, {}".format(track_index, hdf5_path, audio.shape)) - - -def preprocess_audio(audio, mono, origin_sr, sr, resample_type) -> np.array: - r"""Preprocess audio to mono / stereo, and resample. - - Args: - audio: (channels_num, audio_samples), input audio - mono: bool - origin_sr: float, original sample rate - sr: float, target sample rate - resample_type: str, e.g., 'kaiser_fast' - - Returns: - output: ndarray, output audio - """ - if mono: - audio = np.mean(audio, axis=0) - # (audio_samples,) - - output = librosa.core.resample( - audio, orig_sr=origin_sr, target_sr=sr, res_type=resample_type - ) - # (audio_samples,) | (channels_num, audio_samples) - - if output.ndim == 1: - output = output[None, :] - # (1, audio_samples,) - - return output - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--dataset_dir", - type=str, - required=True, - help="Directory of the MUSDB18 dataset.", - ) - parser.add_argument( - "--subset", - type=str, - required=True, - choices=["train", "test"], - help="Train subset: 100 pieces; test subset: 50 pieces.", - ) - parser.add_argument( - "--split", - type=str, - required=True, - choices=["", "train", "valid"], - help="Use '' to use all 100 pieces to train. Use 'train' to use 86 \ - pieces for train, and use 'test' to use 14 pieces for valid.", - ) - parser.add_argument( - "--hdf5s_dir", - type=str, - required=True, - help="Directory to write out hdf5 files.", - ) - parser.add_argument("--sample_rate", type=int, required=True, help="Sample rate.") - parser.add_argument( - "--channels", type=int, required=True, help="Use 1 for mono, 2 for stereo." - ) - - # Parse arguments. - args = parser.parse_args() - - # Pack audios into hdf5 files. 
- pack_audios_to_hdf5s(args) diff --git a/spaces/fffiloni/Music_Source_Separation/bytesep/models/resunet_subbandtime.py b/spaces/fffiloni/Music_Source_Separation/bytesep/models/resunet_subbandtime.py deleted file mode 100644 index d5ac3c5cd5aa2e3d49b7513b2e577ba148c80d7e..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Music_Source_Separation/bytesep/models/resunet_subbandtime.py +++ /dev/null @@ -1,545 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchlibrosa.stft import ISTFT, STFT, magphase - -from bytesep.models.pytorch_modules import Base, init_bn, init_layer -from bytesep.models.subband_tools.pqmf import PQMF - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, activation, momentum): - r"""Residual block.""" - super(ConvBlockRes, self).__init__() - - self.activation = activation - padding = [kernel_size[0] // 2, kernel_size[1] // 2] - - self.bn1 = nn.BatchNorm2d(in_channels, momentum=momentum) - self.bn2 = nn.BatchNorm2d(out_channels, momentum=momentum) - - self.conv1 = nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=(1, 1), - dilation=(1, 1), - padding=padding, - bias=False, - ) - - self.conv2 = nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=(1, 1), - dilation=(1, 1), - padding=padding, - bias=False, - ) - - if in_channels != out_channels: - self.shortcut = nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(1, 1), - stride=(1, 1), - padding=(0, 0), - ) - self.is_shortcut = True - else: - self.is_shortcut = False - - self.init_weights() - - def init_weights(self): - init_bn(self.bn1) - init_bn(self.bn2) - init_layer(self.conv1) - init_layer(self.conv2) - - if self.is_shortcut: - init_layer(self.shortcut) - - def forward(self, x): - origin = x - x = self.conv1(F.leaky_relu_(self.bn1(x), negative_slope=0.01)) - x = self.conv2(F.leaky_relu_(self.bn2(x), negative_slope=0.01)) - - if self.is_shortcut: - return self.shortcut(origin) + x - else: - return origin + x - - -class EncoderBlockRes4B(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, downsample, activation, momentum - ): - r"""Encoder block, contains 8 convolutional layers.""" - super(EncoderBlockRes4B, self).__init__() - - self.conv_block1 = ConvBlockRes( - in_channels, out_channels, kernel_size, activation, momentum - ) - self.conv_block2 = ConvBlockRes( - out_channels, out_channels, kernel_size, activation, momentum - ) - self.conv_block3 = ConvBlockRes( - out_channels, out_channels, kernel_size, activation, momentum - ) - self.conv_block4 = ConvBlockRes( - out_channels, out_channels, kernel_size, activation, momentum - ) - self.downsample = downsample - - def forward(self, x): - encoder = self.conv_block1(x) - encoder = self.conv_block2(encoder) - encoder = self.conv_block3(encoder) - encoder = self.conv_block4(encoder) - encoder_pool = F.avg_pool2d(encoder, kernel_size=self.downsample) - return encoder_pool, encoder - - -class DecoderBlockRes4B(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, upsample, activation, momentum - ): - r"""Decoder block, contains 1 transpose convolutional and 8 convolutional layers.""" - super(DecoderBlockRes4B, self).__init__() - self.kernel_size = kernel_size - self.stride = upsample - self.activation = activation - - self.conv1 = torch.nn.ConvTranspose2d( - in_channels=in_channels, - 
out_channels=out_channels, - kernel_size=self.stride, - stride=self.stride, - padding=(0, 0), - bias=False, - dilation=(1, 1), - ) - - self.bn1 = nn.BatchNorm2d(in_channels, momentum=momentum) - self.conv_block2 = ConvBlockRes( - out_channels * 2, out_channels, kernel_size, activation, momentum - ) - self.conv_block3 = ConvBlockRes( - out_channels, out_channels, kernel_size, activation, momentum - ) - self.conv_block4 = ConvBlockRes( - out_channels, out_channels, kernel_size, activation, momentum - ) - self.conv_block5 = ConvBlockRes( - out_channels, out_channels, kernel_size, activation, momentum - ) - - self.init_weights() - - def init_weights(self): - init_bn(self.bn1) - init_layer(self.conv1) - - def forward(self, input_tensor, concat_tensor): - x = self.conv1(F.relu_(self.bn1(input_tensor))) - x = torch.cat((x, concat_tensor), dim=1) - x = self.conv_block2(x) - x = self.conv_block3(x) - x = self.conv_block4(x) - x = self.conv_block5(x) - return x - - -class ResUNet143_Subbandtime(nn.Module, Base): - def __init__(self, input_channels, target_sources_num): - super(ResUNet143_Subbandtime, self).__init__() - - self.input_channels = input_channels - self.target_sources_num = target_sources_num - - window_size = 512 - hop_size = 110 - center = True - pad_mode = "reflect" - window = "hann" - activation = "leaky_relu" - momentum = 0.01 - - self.subbands_num = 4 - self.K = 4 # outputs: |M|, cos∠M, sin∠M, Q - - self.downsample_ratio = 2 ** 5 # This number equals 2^{#encoder_blcoks} - - self.pqmf = PQMF( - N=self.subbands_num, - M=64, - project_root='bytesep/models/subband_tools/filters', - ) - - self.stft = STFT( - n_fft=window_size, - hop_length=hop_size, - win_length=window_size, - window=window, - center=center, - pad_mode=pad_mode, - freeze_parameters=True, - ) - - self.istft = ISTFT( - n_fft=window_size, - hop_length=hop_size, - win_length=window_size, - window=window, - center=center, - pad_mode=pad_mode, - freeze_parameters=True, - ) - - self.bn0 = nn.BatchNorm2d(window_size // 2 + 1, momentum=momentum) - - self.encoder_block1 = EncoderBlockRes4B( - in_channels=input_channels * self.subbands_num, - out_channels=32, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block2 = EncoderBlockRes4B( - in_channels=32, - out_channels=64, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block3 = EncoderBlockRes4B( - in_channels=64, - out_channels=128, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block4 = EncoderBlockRes4B( - in_channels=128, - out_channels=256, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block5 = EncoderBlockRes4B( - in_channels=256, - out_channels=384, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block6 = EncoderBlockRes4B( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - downsample=(1, 2), - activation=activation, - momentum=momentum, - ) - self.conv_block7a = EncoderBlockRes4B( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - downsample=(1, 1), - activation=activation, - momentum=momentum, - ) - self.conv_block7b = EncoderBlockRes4B( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - downsample=(1, 1), - activation=activation, - momentum=momentum, - ) - self.conv_block7c = EncoderBlockRes4B( - in_channels=384, - out_channels=384, - 
kernel_size=(3, 3), - downsample=(1, 1), - activation=activation, - momentum=momentum, - ) - self.conv_block7d = EncoderBlockRes4B( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - downsample=(1, 1), - activation=activation, - momentum=momentum, - ) - self.decoder_block1 = DecoderBlockRes4B( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - upsample=(1, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block2 = DecoderBlockRes4B( - in_channels=384, - out_channels=384, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block3 = DecoderBlockRes4B( - in_channels=384, - out_channels=256, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block4 = DecoderBlockRes4B( - in_channels=256, - out_channels=128, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block5 = DecoderBlockRes4B( - in_channels=128, - out_channels=64, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block6 = DecoderBlockRes4B( - in_channels=64, - out_channels=32, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - - self.after_conv_block1 = EncoderBlockRes4B( - in_channels=32, - out_channels=32, - kernel_size=(3, 3), - downsample=(1, 1), - activation=activation, - momentum=momentum, - ) - - self.after_conv2 = nn.Conv2d( - in_channels=32, - out_channels=target_sources_num - * input_channels - * self.K - * self.subbands_num, - kernel_size=(1, 1), - stride=(1, 1), - padding=(0, 0), - bias=True, - ) - - self.init_weights() - - def init_weights(self): - init_bn(self.bn0) - init_layer(self.after_conv2) - - def feature_maps_to_wav( - self, - input_tensor: torch.Tensor, - sp: torch.Tensor, - sin_in: torch.Tensor, - cos_in: torch.Tensor, - audio_length: int, - ) -> torch.Tensor: - r"""Convert feature maps to waveform. 
- - Args: - input_tensor: (batch_size, target_sources_num * input_channels * self.K, time_steps, freq_bins) - sp: (batch_size, target_sources_num * input_channels, time_steps, freq_bins) - sin_in: (batch_size, target_sources_num * input_channels, time_steps, freq_bins) - cos_in: (batch_size, target_sources_num * input_channels, time_steps, freq_bins) - - Outputs: - waveform: (batch_size, target_sources_num * input_channels, segment_samples) - """ - batch_size, _, time_steps, freq_bins = input_tensor.shape - - x = input_tensor.reshape( - batch_size, - self.target_sources_num, - self.input_channels, - self.K, - time_steps, - freq_bins, - ) - # x: (batch_size, target_sources_num, input_channles, K, time_steps, freq_bins) - - mask_mag = torch.sigmoid(x[:, :, :, 0, :, :]) - _mask_real = torch.tanh(x[:, :, :, 1, :, :]) - _mask_imag = torch.tanh(x[:, :, :, 2, :, :]) - linear_mag = torch.tanh(x[:, :, :, 3, :, :]) - _, mask_cos, mask_sin = magphase(_mask_real, _mask_imag) - # mask_cos, mask_sin: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - - # Y = |Y|cos∠Y + j|Y|sin∠Y - # = |Y|cos(∠X + ∠M) + j|Y|sin(∠X + ∠M) - # = |Y|(cos∠X cos∠M - sin∠X sin∠M) + j|Y|(sin∠X cos∠M + cos∠X sin∠M) - out_cos = ( - cos_in[:, None, :, :, :] * mask_cos - sin_in[:, None, :, :, :] * mask_sin - ) - out_sin = ( - sin_in[:, None, :, :, :] * mask_cos + cos_in[:, None, :, :, :] * mask_sin - ) - # out_cos: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - # out_sin: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - - # Calculate |Y|. - out_mag = F.relu_(sp[:, None, :, :, :] * mask_mag + linear_mag) - # out_mag: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - - # Calculate Y_{real} and Y_{imag} for ISTFT. - out_real = out_mag * out_cos - out_imag = out_mag * out_sin - # out_real, out_imag: (batch_size, target_sources_num, input_channles, time_steps, freq_bins) - - # Reformat shape to (n, 1, time_steps, freq_bins) for ISTFT. - shape = ( - batch_size * self.target_sources_num * self.input_channels, - 1, - time_steps, - freq_bins, - ) - out_real = out_real.reshape(shape) - out_imag = out_imag.reshape(shape) - - # ISTFT. - x = self.istft(out_real, out_imag, audio_length) - # (batch_size * target_sources_num * input_channels, segments_num) - - # Reshape. - waveform = x.reshape( - batch_size, self.target_sources_num * self.input_channels, audio_length - ) - # (batch_size, target_sources_num * input_channels, segments_num) - - return waveform - - def forward(self, input_dict): - r"""Forward data into the module. - - Args: - input_dict: dict, e.g., { - waveform: (batch_size, input_channels, segment_samples), - ..., - } - - Outputs: - output_dict: dict, e.g., { - 'waveform': (batch_size, input_channels, segment_samples), - ..., - } - """ - mixtures = input_dict['waveform'] - # (batch_size, input_channels, segment_samples) - - subband_x = self.pqmf.analysis(mixtures) - # subband_x: (batch_size, input_channels * subbands_num, segment_samples) - - mag, cos_in, sin_in = self.wav_to_spectrogram_phase(subband_x) - # mag, cos_in, sin_in: (batch_size, input_channels * subbands_num, time_steps, freq_bins) - - # Batch normalize on individual frequency bins. - x = mag.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - # (batch_size, input_channels * subbands_num, time_steps, freq_bins) - - # Pad spectrogram to be evenly divided by downsample ratio. 
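The commented derivation inside `feature_maps_to_wav` expands Y = |Y|·e^{j(∠X + ∠M)} with the angle-sum identities, i.e. the separated source keeps the mixture phase ∠X rotated by the estimated mask phase ∠M. A standalone numerical check of that expansion (illustrative only; plain 1-D tensors stand in for the full spectrograms):

```python
import math
import torch

torch.manual_seed(0)
phase_x = torch.rand(5) * 2 * math.pi   # mixture phase, angle of X
phase_m = torch.rand(5) * 2 * math.pi   # estimated mask phase, angle of M

cos_in, sin_in = torch.cos(phase_x), torch.sin(phase_x)
mask_cos, mask_sin = torch.cos(phase_m), torch.sin(phase_m)

# Same expansion as in the model code above.
out_cos = cos_in * mask_cos - sin_in * mask_sin
out_sin = sin_in * mask_cos + cos_in * mask_sin

assert torch.allclose(out_cos, torch.cos(phase_x + phase_m), atol=1e-6)
assert torch.allclose(out_sin, torch.sin(phase_x + phase_m), atol=1e-6)
print("angle-sum expansion matches cos/sin of the summed phase")
```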
- origin_len = x.shape[2] - pad_len = ( - int(np.ceil(x.shape[2] / self.downsample_ratio)) * self.downsample_ratio - - origin_len - ) - x = F.pad(x, pad=(0, 0, 0, pad_len)) - # x: (batch_size, input_channels * subbands_num, padded_time_steps, freq_bins) - - # Let frequency bins be evenly divided by 2, e.g., 257 -> 256 - x = x[..., 0 : x.shape[-1] - 1] # (bs, input_channels, T, F) - # x: (batch_size, input_channels * subbands_num, padded_time_steps, freq_bins) - - # UNet - (x1_pool, x1) = self.encoder_block1(x) # x1_pool: (bs, 32, T / 2, F / 2) - (x2_pool, x2) = self.encoder_block2(x1_pool) # x2_pool: (bs, 64, T / 4, F / 4) - (x3_pool, x3) = self.encoder_block3(x2_pool) # x3_pool: (bs, 128, T / 8, F / 8) - (x4_pool, x4) = self.encoder_block4( - x3_pool - ) # x4_pool: (bs, 256, T / 16, F / 16) - (x5_pool, x5) = self.encoder_block5( - x4_pool - ) # x5_pool: (bs, 384, T / 32, F / 32) - (x6_pool, x6) = self.encoder_block6( - x5_pool - ) # x6_pool: (bs, 384, T / 32, F / 64) - (x_center, _) = self.conv_block7a(x6_pool) # (bs, 384, T / 32, F / 64) - (x_center, _) = self.conv_block7b(x_center) # (bs, 384, T / 32, F / 64) - (x_center, _) = self.conv_block7c(x_center) # (bs, 384, T / 32, F / 64) - (x_center, _) = self.conv_block7d(x_center) # (bs, 384, T / 32, F / 64) - x7 = self.decoder_block1(x_center, x6) # (bs, 384, T / 32, F / 32) - x8 = self.decoder_block2(x7, x5) # (bs, 384, T / 16, F / 16) - x9 = self.decoder_block3(x8, x4) # (bs, 256, T / 8, F / 8) - x10 = self.decoder_block4(x9, x3) # (bs, 128, T / 4, F / 4) - x11 = self.decoder_block5(x10, x2) # (bs, 64, T / 2, F / 2) - x12 = self.decoder_block6(x11, x1) # (bs, 32, T, F) - (x, _) = self.after_conv_block1(x12) # (bs, 32, T, F) - - x = self.after_conv2(x) - # (batch_size, subbands_num * target_sources_num * input_channles * self.K, T, F') - - # Recover shape - x = F.pad(x, pad=(0, 1)) # Pad frequency, e.g., 256 -> 257. - - x = x[:, :, 0:origin_len, :] - # (batch_size, subbands_num * target_sources_num * input_channles * self.K, T, F') - - audio_length = subband_x.shape[2] - - # Recover each subband spectrograms to subband waveforms. Then synthesis - # the subband waveforms to a waveform. - C1 = x.shape[1] // self.subbands_num - C2 = mag.shape[1] // self.subbands_num - - separated_subband_audio = torch.cat( - [ - self.feature_maps_to_wav( - input_tensor=x[:, j * C1 : (j + 1) * C1, :, :], - sp=mag[:, j * C2 : (j + 1) * C2, :, :], - sin_in=sin_in[:, j * C2 : (j + 1) * C2, :, :], - cos_in=cos_in[:, j * C2 : (j + 1) * C2, :, :], - audio_length=audio_length, - ) - for j in range(self.subbands_num) - ], - dim=1, - ) - # (batch_size, subbands_num * target_sources_num * input_channles, segment_samples) - - separated_audio = self.pqmf.synthesis(separated_subband_audio) - # (batch_size, input_channles, segment_samples) - - output_dict = {'waveform': separated_audio} - - return output_dict diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/lstm.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/lstm.py deleted file mode 100644 index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/lstm.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from torch import nn - - -class StreamableLSTM(nn.Module): - """LSTM without worrying about the hidden state, nor the layout of the data. - Expects input as convolutional layout. - """ - def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True): - super().__init__() - self.skip = skip - self.lstm = nn.LSTM(dimension, dimension, num_layers) - - def forward(self, x): - x = x.permute(2, 0, 1) - y, _ = self.lstm(x) - if self.skip: - y = y + x - y = y.permute(1, 2, 0) - return y diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/README.md b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/README.md deleted file mode 100644 index 971b512a7af3911008d233210230f91435e5f2c4..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Audioldm Text To Audio Generation -emoji: 🔊 -colorFrom: indigo -colorTo: red -python_version: 3.10.12 -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: bigscience-openrail-m -duplicated_from: haoheliu/audioldm-text-to-audio-generation ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -## Reference -Part of the code from this repo is borrowed from the following repos. We would like to thank the authors of them for their contribution. - -> https://github.com/LAION-AI/CLAP -> https://github.com/CompVis/stable-diffusion -> https://github.com/v-iashin/SpecVQGAN -> https://github.com/toshas/torch-fidelity \ No newline at end of file diff --git a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/data/masks.py b/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/data/masks.py deleted file mode 100644 index e91fc74913356481065c5f5906acd50fb05f521c..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/data/masks.py +++ /dev/null @@ -1,332 +0,0 @@ -import math -import random -import hashlib -import logging -from enum import Enum - -import cv2 -import numpy as np - -from saicinpainting.evaluation.masks.mask import SegmentationMask -from saicinpainting.utils import LinearRamp - -LOGGER = logging.getLogger(__name__) - - -class DrawMethod(Enum): - LINE = 'line' - CIRCLE = 'circle' - SQUARE = 'square' - - -def make_random_irregular_mask(shape, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10, - draw_method=DrawMethod.LINE): - draw_method = DrawMethod(draw_method) - - height, width = shape - mask = np.zeros((height, width), np.float32) - times = np.random.randint(min_times, max_times + 1) - for i in range(times): - start_x = np.random.randint(width) - start_y = np.random.randint(height) - for j in range(1 + np.random.randint(5)): - angle = 0.01 + np.random.randint(max_angle) - if i % 2 == 0: - angle = 2 * 3.1415926 - angle - length = 10 + np.random.randint(max_len) - brush_w = 5 + np.random.randint(max_width) - end_x = np.clip((start_x + length * np.sin(angle)).astype(np.int32), 0, width) - end_y = np.clip((start_y + length * np.cos(angle)).astype(np.int32), 0, height) - if draw_method == DrawMethod.LINE: - cv2.line(mask, (start_x, start_y), (end_x, end_y), 1.0, brush_w) - elif draw_method == DrawMethod.CIRCLE: - cv2.circle(mask, (start_x, start_y), radius=brush_w, color=1., thickness=-1) - elif draw_method == DrawMethod.SQUARE: - radius = brush_w // 2 - mask[start_y - radius:start_y + radius, start_x - radius:start_x + radius] = 1 - start_x, start_y = 
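The `StreamableLSTM` in `audiocraft/modules/lstm.py` above notes that it "expects input as convolutional layout": concretely, it permutes (batch, channels, time) into the (time, batch, channels) layout that `nn.LSTM` consumes, adds the optional skip connection, and permutes back. A shape walk-through under assumed toy dimensions:

```python
import torch
from torch import nn

dim, batch, time = 8, 4, 50
lstm = nn.LSTM(dim, dim, num_layers=2)     # default layout: (time, batch, features)

x = torch.randn(batch, dim, time)          # convolutional layout (B, C, T)
h = x.permute(2, 0, 1)                     # -> (T, B, C) for nn.LSTM
y, _ = lstm(h)
y = (y + h).permute(1, 2, 0)               # skip connection, back to (B, C, T)
print(y.shape)                             # torch.Size([4, 8, 50])
```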
end_x, end_y - return mask[None, ...] - - -class RandomIrregularMaskGenerator: - def __init__(self, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10, ramp_kwargs=None, - draw_method=DrawMethod.LINE): - self.max_angle = max_angle - self.max_len = max_len - self.max_width = max_width - self.min_times = min_times - self.max_times = max_times - self.draw_method = draw_method - self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None - - def __call__(self, img, iter_i=None, raw_image=None): - coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1 - cur_max_len = int(max(1, self.max_len * coef)) - cur_max_width = int(max(1, self.max_width * coef)) - cur_max_times = int(self.min_times + 1 + (self.max_times - self.min_times) * coef) - return make_random_irregular_mask(img.shape[1:], max_angle=self.max_angle, max_len=cur_max_len, - max_width=cur_max_width, min_times=self.min_times, max_times=cur_max_times, - draw_method=self.draw_method) - - -def make_random_rectangle_mask(shape, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3): - height, width = shape - mask = np.zeros((height, width), np.float32) - bbox_max_size = min(bbox_max_size, height - margin * 2, width - margin * 2) - times = np.random.randint(min_times, max_times + 1) - for i in range(times): - box_width = np.random.randint(bbox_min_size, bbox_max_size) - box_height = np.random.randint(bbox_min_size, bbox_max_size) - start_x = np.random.randint(margin, width - margin - box_width + 1) - start_y = np.random.randint(margin, height - margin - box_height + 1) - mask[start_y:start_y + box_height, start_x:start_x + box_width] = 1 - return mask[None, ...] - - -class RandomRectangleMaskGenerator: - def __init__(self, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3, ramp_kwargs=None): - self.margin = margin - self.bbox_min_size = bbox_min_size - self.bbox_max_size = bbox_max_size - self.min_times = min_times - self.max_times = max_times - self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None - - def __call__(self, img, iter_i=None, raw_image=None): - coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1 - cur_bbox_max_size = int(self.bbox_min_size + 1 + (self.bbox_max_size - self.bbox_min_size) * coef) - cur_max_times = int(self.min_times + (self.max_times - self.min_times) * coef) - return make_random_rectangle_mask(img.shape[1:], margin=self.margin, bbox_min_size=self.bbox_min_size, - bbox_max_size=cur_bbox_max_size, min_times=self.min_times, - max_times=cur_max_times) - - -class RandomSegmentationMaskGenerator: - def __init__(self, **kwargs): - self.impl = None # will be instantiated in first call (effectively in subprocess) - self.kwargs = kwargs - - def __call__(self, img, iter_i=None, raw_image=None): - if self.impl is None: - self.impl = SegmentationMask(**self.kwargs) - - masks = self.impl.get_masks(np.transpose(img, (1, 2, 0))) - masks = [m for m in masks if len(np.unique(m)) > 1] - return np.random.choice(masks) - - -def make_random_superres_mask(shape, min_step=2, max_step=4, min_width=1, max_width=3): - height, width = shape - mask = np.zeros((height, width), np.float32) - step_x = np.random.randint(min_step, max_step + 1) - width_x = np.random.randint(min_width, min(step_x, max_width + 1)) - offset_x = np.random.randint(0, step_x) - - step_y = np.random.randint(min_step, max_step + 1) - width_y = np.random.randint(min_width, min(step_y, max_width + 1)) - offset_y = 
np.random.randint(0, step_y) - - for dy in range(width_y): - mask[offset_y + dy::step_y] = 1 - for dx in range(width_x): - mask[:, offset_x + dx::step_x] = 1 - return mask[None, ...] - - -class RandomSuperresMaskGenerator: - def __init__(self, **kwargs): - self.kwargs = kwargs - - def __call__(self, img, iter_i=None): - return make_random_superres_mask(img.shape[1:], **self.kwargs) - - -class DumbAreaMaskGenerator: - min_ratio = 0.1 - max_ratio = 0.35 - default_ratio = 0.225 - - def __init__(self, is_training): - #Parameters: - # is_training(bool): If true - random rectangular mask, if false - central square mask - self.is_training = is_training - - def _random_vector(self, dimension): - if self.is_training: - lower_limit = math.sqrt(self.min_ratio) - upper_limit = math.sqrt(self.max_ratio) - mask_side = round((random.random() * (upper_limit - lower_limit) + lower_limit) * dimension) - u = random.randint(0, dimension-mask_side-1) - v = u+mask_side - else: - margin = (math.sqrt(self.default_ratio) / 2) * dimension - u = round(dimension/2 - margin) - v = round(dimension/2 + margin) - return u, v - - def __call__(self, img, iter_i=None, raw_image=None): - c, height, width = img.shape - mask = np.zeros((height, width), np.float32) - x1, x2 = self._random_vector(width) - y1, y2 = self._random_vector(height) - mask[x1:x2, y1:y2] = 1 - return mask[None, ...] - - -class OutpaintingMaskGenerator: - def __init__(self, min_padding_percent:float=0.04, max_padding_percent:int=0.25, left_padding_prob:float=0.5, top_padding_prob:float=0.5, - right_padding_prob:float=0.5, bottom_padding_prob:float=0.5, is_fixed_randomness:bool=False): - """ - is_fixed_randomness - get identical paddings for the same image if args are the same - """ - self.min_padding_percent = min_padding_percent - self.max_padding_percent = max_padding_percent - self.probs = [left_padding_prob, top_padding_prob, right_padding_prob, bottom_padding_prob] - self.is_fixed_randomness = is_fixed_randomness - - assert self.min_padding_percent <= self.max_padding_percent - assert self.max_padding_percent > 0 - assert len([x for x in [self.min_padding_percent, self.max_padding_percent] if (x>=0 and x<=1)]) == 2, f"Padding percentage should be in [0,1]" - assert sum(self.probs) > 0, f"At least one of the padding probs should be greater than 0 - {self.probs}" - assert len([x for x in self.probs if (x >= 0) and (x <= 1)]) == 4, f"At least one of padding probs is not in [0,1] - {self.probs}" - if len([x for x in self.probs if x > 0]) == 1: - LOGGER.warning(f"Only one padding prob is greater than zero - {self.probs}. 
That means that the outpainting masks will be always on the same side") - - def apply_padding(self, mask, coord): - mask[int(coord[0][0]*self.img_h):int(coord[1][0]*self.img_h), - int(coord[0][1]*self.img_w):int(coord[1][1]*self.img_w)] = 1 - return mask - - def get_padding(self, size): - n1 = int(self.min_padding_percent*size) - n2 = int(self.max_padding_percent*size) - return self.rnd.randint(n1, n2) / size - - @staticmethod - def _img2rs(img): - arr = np.ascontiguousarray(img.astype(np.uint8)) - str_hash = hashlib.sha1(arr).hexdigest() - res = hash(str_hash)%(2**32) - return res - - def __call__(self, img, iter_i=None, raw_image=None): - c, self.img_h, self.img_w = img.shape - mask = np.zeros((self.img_h, self.img_w), np.float32) - at_least_one_mask_applied = False - - if self.is_fixed_randomness: - assert raw_image is not None, f"Cant calculate hash on raw_image=None" - rs = self._img2rs(raw_image) - self.rnd = np.random.RandomState(rs) - else: - self.rnd = np.random - - coords = [[ - (0,0), - (1,self.get_padding(size=self.img_h)) - ], - [ - (0,0), - (self.get_padding(size=self.img_w),1) - ], - [ - (0,1-self.get_padding(size=self.img_h)), - (1,1) - ], - [ - (1-self.get_padding(size=self.img_w),0), - (1,1) - ]] - - for pp, coord in zip(self.probs, coords): - if self.rnd.random() < pp: - at_least_one_mask_applied = True - mask = self.apply_padding(mask=mask, coord=coord) - - if not at_least_one_mask_applied: - idx = self.rnd.choice(range(len(coords)), p=np.array(self.probs)/sum(self.probs)) - mask = self.apply_padding(mask=mask, coord=coords[idx]) - return mask[None, ...] - - -class MixedMaskGenerator: - def __init__(self, irregular_proba=1/3, irregular_kwargs=None, - box_proba=1/3, box_kwargs=None, - segm_proba=1/3, segm_kwargs=None, - squares_proba=0, squares_kwargs=None, - superres_proba=0, superres_kwargs=None, - outpainting_proba=0, outpainting_kwargs=None, - invert_proba=0): - self.probas = [] - self.gens = [] - - if irregular_proba > 0: - self.probas.append(irregular_proba) - if irregular_kwargs is None: - irregular_kwargs = {} - else: - irregular_kwargs = dict(irregular_kwargs) - irregular_kwargs['draw_method'] = DrawMethod.LINE - self.gens.append(RandomIrregularMaskGenerator(**irregular_kwargs)) - - if box_proba > 0: - self.probas.append(box_proba) - if box_kwargs is None: - box_kwargs = {} - self.gens.append(RandomRectangleMaskGenerator(**box_kwargs)) - - if segm_proba > 0: - self.probas.append(segm_proba) - if segm_kwargs is None: - segm_kwargs = {} - self.gens.append(RandomSegmentationMaskGenerator(**segm_kwargs)) - - if squares_proba > 0: - self.probas.append(squares_proba) - if squares_kwargs is None: - squares_kwargs = {} - else: - squares_kwargs = dict(squares_kwargs) - squares_kwargs['draw_method'] = DrawMethod.SQUARE - self.gens.append(RandomIrregularMaskGenerator(**squares_kwargs)) - - if superres_proba > 0: - self.probas.append(superres_proba) - if superres_kwargs is None: - superres_kwargs = {} - self.gens.append(RandomSuperresMaskGenerator(**superres_kwargs)) - - if outpainting_proba > 0: - self.probas.append(outpainting_proba) - if outpainting_kwargs is None: - outpainting_kwargs = {} - self.gens.append(OutpaintingMaskGenerator(**outpainting_kwargs)) - - self.probas = np.array(self.probas, dtype='float32') - self.probas /= self.probas.sum() - self.invert_proba = invert_proba - - def __call__(self, img, iter_i=None, raw_image=None): - kind = np.random.choice(len(self.probas), p=self.probas) - gen = self.gens[kind] - result = gen(img, iter_i=iter_i, 
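`MixedMaskGenerator` (defined above) normalises the per-kind probabilities and, on each call, samples one of its sub-generators with `np.random.choice`, so training masks are a mixture of stroke, box, segmentation, super-resolution and outpainting patterns. A hedged usage sketch — the keyword values below are made up for illustration and are not taken from any LaMa config:

```python
import numpy as np

gen = MixedMaskGenerator(
    irregular_proba=0.5, irregular_kwargs=dict(max_len=80, max_width=30),
    box_proba=0.3, box_kwargs=dict(bbox_max_size=120),
    segm_proba=0,                     # SegmentationMask needs an extra model, skipped here
    outpainting_proba=0.2, outpainting_kwargs=dict(),
    invert_proba=0.1,
)

img = np.zeros((3, 256, 256), dtype=np.float32)   # (C, H, W), as during training
mask = gen(img)                                   # shape (1, 256, 256), values in {0, 1}
print(mask.shape, mask.min(), mask.max())
```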
raw_image=raw_image) - if self.invert_proba > 0 and random.random() < self.invert_proba: - result = 1 - result - return result - - -def get_mask_generator(kind, kwargs): - if kind is None: - kind = "mixed" - if kwargs is None: - kwargs = {} - - if kind == "mixed": - cl = MixedMaskGenerator - elif kind == "outpainting": - cl = OutpaintingMaskGenerator - elif kind == "dumb": - cl = DumbAreaMaskGenerator - else: - raise NotImplementedError(f"No such generator kind = {kind}") - return cl(**kwargs) diff --git a/spaces/flax-community/clip-reply-demo/model/config.py b/spaces/flax-community/clip-reply-demo/model/config.py deleted file mode 100644 index ce1af96f6f74fa2ac2e146885b65183c9b53f8e0..0000000000000000000000000000000000000000 --- a/spaces/flax-community/clip-reply-demo/model/config.py +++ /dev/null @@ -1,109 +0,0 @@ -import copy - -from transformers.configuration_utils import PretrainedConfig -from transformers.utils import logging - -logger = logging.get_logger(__name__) - - -class HybridCLIPConfig(PretrainedConfig): - r""" - :class:`HybridCLIPConfig` is the configuration class to store the configuration of a - :class:`~HybridCLIPModel`. It is used to instantiate HybridCLIPModel model according to the specified arguments, - defining the text model and vision model configs. - Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model - outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information. - Args: - text_config_dict (:obj:`dict`): - Dictionary of configuration options that defines text model config. - vision_config_dict (:obj:`dict`): - Dictionary of configuration options that defines vison model config. - projection_dim (:obj:`int`, `optional`, defaults to 512): - Dimentionality of text and vision projection layers. - kwargs (`optional`): - Dictionary of keyword arguments. 
- Examples:: - >>> from transformers import BertConfig, CLIPConfig, HybridCLIPConfig, FlaxHybridCLIP - >>> # Initializing a BERT and CLIP configuration - >>> config_text = BertConfig() - >>> config_vision = CLIPConfig() - >>> config = HybridCLIPConfig.from_text_vision_configs(config_text, config_vision, projection_dim=512) - >>> # Initializing a BERT and CLIPVision model - >>> model = EncoderDecoderModel(config=config) - >>> # Accessing the model configuration - >>> config_text = model.config.text_config - >>> config_vision = model.config.vision_config - >>> # Saving the model, including its configuration - >>> model.save_pretrained('my-model') - >>> # loading model and config from pretrained folder - >>> encoder_decoder_config = HybridCLIPConfig.from_pretrained('my-model') - >>> model = FlaxHybridCLIP.from_pretrained('my-model', config=encoder_decoder_config) - """ - - model_type = "hybrid-clip" - is_composition = True - - def __init__(self, projection_dim=512, **kwargs): - super().__init__(**kwargs) - - if "text_config" not in kwargs: - raise ValueError("`text_config` can not be `None`.") - - if "vision_config" not in kwargs: - raise ValueError("`vision_config` can not be `None`.") - - text_config = kwargs.pop("text_config") - vision_config = kwargs.pop("vision_config") - - text_model_type = text_config.pop("model_type") - vision_model_type = vision_config.pop("model_type") - - from transformers import AutoConfig - - self.text_config = AutoConfig.for_model(text_model_type, **text_config) - - if vision_model_type == "clip": - self.vision_config = AutoConfig.for_model( - vision_model_type, **vision_config - ).vision_config - elif vision_model_type == "clip_vision_model": - from transformers import CLIPVisionConfig - - self.vision_config = CLIPVisionConfig(**vision_config) - else: - self.vision_config = AutoConfig.for_model( - vision_model_type, **vision_config - ) - - self.projection_dim = projection_dim - self.initializer_factor = 1.0 - - @classmethod - def from_text_vision_configs( - cls, text_config: PretrainedConfig, vision_config: PretrainedConfig, **kwargs - ): - r""" - Instantiate a :class:`HybridCLIPConfig` (or a derived class) from text model configuration and - vision model configuration. - Returns: - :class:`HybridCLIPConfig`: An instance of a configuration object - """ - - return cls( - text_config=text_config.to_dict(), - vision_config=vision_config.to_dict(), - **kwargs - ) - - def to_dict(self): - """ - Serializes this instance to a Python dictionary. Override the default - :meth:`~transformers.PretrainedConfig.to_dict`. 
- Returns: - :obj:`Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance, - """ - output = copy.deepcopy(self.__dict__) - output["text_config"] = self.text_config.to_dict() - output["vision_config"] = self.vision_config.to_dict() - output["model_type"] = self.__class__.model_type - return output diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/curriculums/__init__.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/curriculums/__init__.py deleted file mode 100644 index 42a356d2ed38ff3d9f69328ff5ae09b6ccfb4c00..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/curriculums/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from gym_minigrid.curriculums.expertcurriculumsocialaiparamenv import * diff --git a/spaces/fun-research/FC-CLIP/fcclip/data/datasets/register_ade20k_panoptic.py b/spaces/fun-research/FC-CLIP/fcclip/data/datasets/register_ade20k_panoptic.py deleted file mode 100644 index 3818fcbee051c09cb0d5129b9b7bbba364a5e178..0000000000000000000000000000000000000000 --- a/spaces/fun-research/FC-CLIP/fcclip/data/datasets/register_ade20k_panoptic.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import json -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.utils.file_io import PathManager -from detectron2.data.datasets.coco import load_sem_seg - - -from . import openseg_classes - -ADE20K_150_CATEGORIES = openseg_classes.get_ade20k_categories_with_prompt_eng() - -ADE20k_COLORS = [k["color"] for k in ADE20K_150_CATEGORIES] - -MetadataCatalog.get("openvocab_ade20k_sem_seg_train").set( - stuff_colors=ADE20k_COLORS[:], -) - -MetadataCatalog.get("openvocab_ade20k_sem_seg_val").set( - stuff_colors=ADE20k_COLORS[:], -) - - -def load_ade20k_panoptic_json(json_file, image_dir, gt_dir, semseg_dir, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/coco/train2017". - gt_dir (str): path to the raw annotations. e.g., "~/coco/panoptic_train2017". - json_file (str): path to the json file. e.g., "~/coco/annotations/panoptic_train2017.json". - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - ret = [] - for ann in json_info["annotations"]: - image_id = ann["image_id"] - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. 
- image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - sem_label_file = os.path.join(semseg_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "sem_seg_file_name": sem_label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - assert PathManager.isfile(ret[0]["sem_seg_file_name"]), ret[0]["sem_seg_file_name"] - return ret - - -def register_ade20k_panoptic( - name, metadata, image_root, panoptic_root, semantic_root, panoptic_json, instances_json=None -): - """ - Register a "standard" version of ADE20k panoptic segmentation dataset named `name`. - The dictionaries in this registered dataset follows detectron2's standard format. - Hence it's called "standard". - Args: - name (str): the name that identifies a dataset, - e.g. "ade20k_panoptic_train" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images in COCO format - panoptic_json (str): path to the json panoptic annotation file in COCO format - sem_seg_root (none): not used, to be consistent with - `register_coco_panoptic_separated`. - instances_json (str): path to the json instance annotation file - """ - panoptic_name = name - DatasetCatalog.register( - panoptic_name, - lambda: load_ade20k_panoptic_json( - panoptic_json, image_root, panoptic_root, semantic_root, metadata - ), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="ade20k_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **metadata, - ) - - -_PREDEFINED_SPLITS_ADE20K_PANOPTIC = { - "openvocab_ade20k_panoptic_train": ( - "ADEChallengeData2016/images/training", - "ADEChallengeData2016/ade20k_panoptic_train", - "ADEChallengeData2016/ade20k_panoptic_train.json", - "ADEChallengeData2016/annotations_detectron2/training", - "ADEChallengeData2016/ade20k_instance_train.json", - ), - "openvocab_ade20k_panoptic_val": ( - "ADEChallengeData2016/images/validation", - "ADEChallengeData2016/ade20k_panoptic_val", - "ADEChallengeData2016/ade20k_panoptic_val.json", - "ADEChallengeData2016/annotations_detectron2/validation", - "ADEChallengeData2016/ade20k_instance_val.json", - ), -} - - -def get_metadata(): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. 
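The comment block above explains why the metadata keeps both original and contiguous category ids and duplicates the class lists under `thing_*` and `stuff_*`. Once `register_all_ade20k_panoptic` (further down in this file) has run at import time, downstream code looks the dataset up by name rather than by path. A hypothetical consumer, assuming the ADE20K files are actually present under `$DETECTRON2_DATASETS`:

```python
from detectron2.data import DatasetCatalog, MetadataCatalog

# Registration is lazy: the loader function only reads the json/images when called.
dicts = DatasetCatalog.get("openvocab_ade20k_panoptic_val")   # list of per-image dicts
meta = MetadataCatalog.get("openvocab_ade20k_panoptic_val")

sample = dicts[0]
print(sample["file_name"], sample["pan_seg_file_name"], len(sample["segments_info"]))

# For ADE20K-150 this is typically 100 thing classes and 150 stuff entries,
# since stuff_classes above intentionally includes every category.
print(len(meta.thing_classes), len(meta.stuff_classes))
```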
- thing_classes = [k["name"] for k in ADE20K_150_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in ADE20K_150_CATEGORIES if k["isthing"] == 1] - stuff_classes = [k["name"] for k in ADE20K_150_CATEGORIES] - stuff_colors = [k["color"] for k in ADE20K_150_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(ADE20K_150_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - # else: - # stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - # in order to use sem_seg evaluator - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - - -def register_all_ade20k_panoptic(root): - metadata = get_metadata() - for ( - prefix, - (image_root, panoptic_root, panoptic_json, semantic_root, instance_json), - ) in _PREDEFINED_SPLITS_ADE20K_PANOPTIC.items(): - # The "standard" version of COCO panoptic segmentation dataset, - # e.g. used by Panoptic-DeepLab - register_ade20k_panoptic( - prefix, - metadata, - os.path.join(root, image_root), - os.path.join(root, panoptic_root), - os.path.join(root, semantic_root), - os.path.join(root, panoptic_json), - os.path.join(root, instance_json), - ) - -def register_all_ade20k_semantic(root): - root = os.path.join(root, "ADEChallengeData2016") - for name, dirname in [("train", "training"), ("val", "validation")]: - image_dir = os.path.join(root, "images", dirname) - gt_dir = os.path.join(root, "annotations_detectron2", dirname) - name = f"openvocab_ade20k_sem_seg_{name}" - DatasetCatalog.register( - name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext="png", image_ext="jpg") - ) - MetadataCatalog.get(name).set( - stuff_classes=[x["name"] for x in ADE20K_150_CATEGORIES], - image_root=image_dir, - sem_seg_root=gt_dir, - evaluator_type="sem_seg", - ignore_label=255, - ) - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_ade20k_panoptic(_root) -register_all_ade20k_semantic(_root) \ No newline at end of file diff --git a/spaces/gstaff/MagicGen/app.py b/spaces/gstaff/MagicGen/app.py deleted file mode 100644 index 8a5a71830f455393dbdad23f64c7e7f915b4faf2..0000000000000000000000000000000000000000 --- a/spaces/gstaff/MagicGen/app.py +++ /dev/null @@ -1,254 +0,0 @@ -import base64 -import re -import os -import pathlib -import random -import time -from io import BytesIO - -from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler -import gradio as gr -import imgkit -from PIL import Image -import torch -from transformers import GPT2LMHeadModel, GPT2TokenizerFast, pipeline - - -gpu = False - -AUTH_TOKEN = os.environ.get('AUTH_TOKEN') -BASE_MODEL = "gpt2" -MERGED_MODEL = "gpt2-magic-card" - -if gpu: - image_pipeline = 
DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, - revision="fp16", use_auth_token=AUTH_TOKEN) - scheduler = EulerAncestralDiscreteScheduler.from_config(image_pipeline.scheduler.config) - image_pipeline.scheduler = scheduler - image_pipeline.to("cuda") -else: - image_pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_auth_token=AUTH_TOKEN) - scheduler = EulerAncestralDiscreteScheduler.from_config(image_pipeline.scheduler.config) - image_pipeline.scheduler = scheduler - -# Huggingface Spaces have 16GB RAM and 8 CPU cores -# See https://huggingface.co/docs/hub/spaces-overview#hardware-resources - -model = GPT2LMHeadModel.from_pretrained(MERGED_MODEL) -tokenizer = GPT2TokenizerFast.from_pretrained(BASE_MODEL) -END_TOKEN = '###' -eos_id = tokenizer.encode(END_TOKEN) -text_pipeline = pipeline('text-generation', model=model, tokenizer=tokenizer) - - -def gen_card_text(name): - if name == '': - prompt = f"Name: {random.choice('ABCDEFGHIJKLMNOPQRSTUVWXYZ')}" - else: - prompt = f"Name: {name}\n" - print(f'GENERATING CARD TEXT with prompt: {prompt}') - output = text_pipeline(prompt, max_length=512, num_return_sequences=1, num_beams=5, temperature=1.5, do_sample=True, - repetition_penalty=1.2, eos_token_id=eos_id) - result = output[0]['generated_text'].split("###")[0].replace(r'\r\n', '\n').replace('\r', '').replace(r'\r', '') - print(f'GENERATING CARD COMPLETE') - print(result) - if name == '': - pattern = re.compile('Name: (.*)') - name = pattern.findall(result)[0] - return name, result - - -pathlib.Path('card_data').mkdir(parents=True, exist_ok=True) -pathlib.Path('card_images').mkdir(parents=True, exist_ok=True) -pathlib.Path('card_html').mkdir(parents=True, exist_ok=True) -pathlib.Path('rendered_cards').mkdir(parents=True, exist_ok=True) - - -def run(name): - start = time.time() - print(f'BEGINNING RUN FOR {name}') - name, text = gen_card_text(name) - save_name = get_savename('card_data', name, 'txt') - pathlib.Path(f'card_data/{save_name}').write_text(text, encoding='utf-8') - - pattern = re.compile('Type: (.*)') - card_type = pattern.findall(text)[0] - prompt_template = f"fantasy illustration of a {card_type} {name}, by Greg Rutkowski" - print(f"GENERATING IMAGE FOR {prompt_template}") - # Regarding sizing see https://huggingface.co/blog/stable_diffusion#:~:text=When%20choosing%20image%20sizes%2C%20we%20advise%20the%20following%3A - images = image_pipeline(prompt_template, width=512, height=368, num_inference_steps=20).images - card_image = None - for image in images: - save_name = get_savename('card_images', name, 'png') - image.save(f"card_images/{save_name}") - card_image = image - - image_data = pil_to_base64(card_image) - html = format_html(text, image_data) - save_name = get_savename('card_html', name, 'html') - pathlib.Path(f'card_html/{save_name}').write_text(html, encoding='utf-8') - rendered = html_to_png(name, html) - - end = time.time() - print(f'RUN COMPLETED IN {int(end - start)} seconds') - return rendered, text, card_image, html - - -def pil_to_base64(image): - print('CONVERTING PIL IMAGE TO BASE64 STRING') - buffered = BytesIO() - image.save(buffered, format="PNG") - img_str = base64.b64encode(buffered.getvalue()) - print('CONVERTING PIL IMAGE TO BASE64 STRING COMPLETE') - return img_str - - -def format_html(text, image_data): - template = pathlib.Path("colab-data-test/card_template.html").read_text(encoding='utf-8') - if "['U']" in text: - template = template.replace("{card_color}", 
'style="background-color:#5a73ab"') - elif "['W']" in text: - template = template.replace("{card_color}", 'style="background-color:#f0e3d0"') - elif "['G']" in text: - template = template.replace("{card_color}", 'style="background-color:#325433"') - elif "['B']" in text: - template = template.replace("{card_color}", 'style="background-color:#1a1b1e"') - elif "['R']" in text: - template = template.replace("{card_color}", 'style="background-color:#c2401c"') - elif "Type: Land" in text: - template = template.replace("{card_color}", 'style="background-color:#aa8c71"') - elif "Type: Artifact" in text: - template = template.replace("{card_color}", 'style="background-color:#9ba7bc"') - else: - template = template.replace("{card_color}", 'style="background-color:#edd99d"') - pattern = re.compile('Name: (.*)') - name = pattern.findall(text)[0] - template = template.replace("{name}", name) - pattern = re.compile('ManaCost: (.*)') - mana_cost = pattern.findall(text)[0] - if mana_cost == "None": - template = template.replace("{mana_cost}", '') - else: - symbols = [] - for c in mana_cost: - if c in {"{", "}"}: - continue - else: - symbols.append(c.lower()) - formatted_symbols = [] - for s in symbols: - formatted_symbols.append(f'') - template = template.replace("{mana_cost}", "\n".join(formatted_symbols[::-1])) - if not isinstance(image_data, (bytes, bytearray)): - template = template.replace('{image_data}', f'{image_data}') - else: - template = template.replace('{image_data}', f'data:image/png;base64,{image_data.decode("utf-8")}') - pattern = re.compile('Type: (.*)') - card_type = pattern.findall(text)[0] - template = template.replace("{card_type}", card_type) - if len(card_type) > 30: - template = template.replace("{type_size}", "16") - else: - template = template.replace("{type_size}", "18") - pattern = re.compile('Rarity: (.*)') - rarity = pattern.findall(text)[0] - template = template.replace("{rarity}", f"ss-{rarity}") - pattern = re.compile('Text: (.*)\nFlavorText', re.MULTILINE | re.DOTALL) - card_text = pattern.findall(text)[0] - text_lines = [] - for line in card_text.splitlines(): - line = line.replace('{T}', '') - line = line.replace('{UT}', '') - line = line.replace('{E}', '') - line = re.sub(r"{(.*?)}", r''.lower(), line) - line = re.sub(r"ms-(.)/(.)", r''.lower(), line) - line = line.replace('(', '(').replace(')', ')') - text_lines.append(f"

            {line}

            ") - template = template.replace("{card_text}", "\n".join(text_lines)) - pattern = re.compile('FlavorText: (.*)\nPower', re.MULTILINE | re.DOTALL) - flavor_text = pattern.findall(text) - if flavor_text: - flavor_text = flavor_text[0] - flavor_text_lines = [] - for line in flavor_text.splitlines(): - flavor_text_lines.append(f"

            {line}

            ") - template = template.replace("{flavor_text}", "
            " + "\n".join(flavor_text_lines) + "
            ") - else: - template = template.replace("{flavor_text}", "") - if len(card_text) + len(flavor_text or '') > 170 or len(text_lines) > 3: - template = template.replace("{text_size}", '16') - template = template.replace('ms-cost" style="top:0px;float:none;height: 18px;width: 18px;font-size: 13px;">', - 'ms-cost" style="top:0px;float:none;height: 16px;width: 16px;font-size: 11px;">') - else: - template = template.replace("{text_size}", '18') - pattern = re.compile('Power: (.*)') - power = pattern.findall(text) - if power: - power = power[0] - if not power: - template = template.replace("{power_toughness}", "") - pattern = re.compile('Toughness: (.*)') - toughness = pattern.findall(text)[0] - template = template.replace("{power_toughness}", f'

            {power}/{toughness}

            ') - else: - template = template.replace("{power_toughness}", "") - pathlib.Path("test.html").write_text(template, encoding='utf-8') - return template - - -def get_savename(directory, name, extension): - save_name = f"{name}.{extension}" - i = 1 - while os.path.exists(os.path.join(directory, save_name)): - save_name = save_name.replace(f'.{extension}', '').split('-')[0] + f"-{i}.{extension}" - i += 1 - return save_name - - -def html_to_png(card_name, html): - save_name = get_savename('rendered_cards', card_name, 'png') - print('CONVERTING HTML CARD TO PNG IMAGE') - - path = os.path.join('rendered_cards', save_name) - try: - css = ['./colab-data-test/css/mana.css', './colab-data-test/css/keyrune.css', './colab-data-test/css/mtg_custom.css'] - imgkit.from_string(html, path, {"xvfb": ""}, css=css) - except: - try: - # For Windows local, requires 'html2image' package from pip. - from html2image import Html2Image - rendered_card_dir = 'rendered_cards' - hti = Html2Image(output_path=rendered_card_dir) - paths = hti.screenshot(html_str=html, - css_file=['./colab-data-test/css/mtg_custom.css', './colab-data-test/css/mana.css', './colab-data-test/css/keyrune.css'], - save_as=save_name, size=(450, 600)) - print(paths) - path = paths[0] - except: - pass - print('OPENING IMAGE FROM FILE') - img = Image.open(path) - print('CROPPING BACKGROUND') - area = (0, 50, 400, 600) - cropped_img = img.crop(area) - cropped_img.resize((400, 550)) - cropped_img.save(os.path.join(path)) - print('CONVERTING HTML CARD TO PNG IMAGE COMPLETE') - return cropped_img.convert('RGB') - - -app_description = ( - """ - # Create your own Magic: The Gathering cards! - Enter a name, click Submit, it may take up to 10 minutes to run on the free CPU only instance. - """).strip() -input_box = gr.Textbox(label="Enter a card name", placeholder="Firebolt") -rendered_card = gr.Image(label="Card", type='pil', value="examples/card.png") -output_text_box = gr.Textbox(label="Card Text", value=pathlib.Path("examples/text.txt").read_text('utf-8')) -output_card_image = gr.Image(label="Card Image", type='pil', value="examples/image.png") -output_card_html = gr.HTML(label="Card HTML", visible=False, show_label=False) -x = gr.components.Textbox() -iface = gr.Interface(title="MagicGen", theme="default", description=app_description, fn=run, inputs=[input_box], - outputs=[rendered_card, output_text_box, output_card_image, output_card_html]) - -iface.launch() diff --git a/spaces/gulabpatel/Real-ESRGAN/realesrgan/data/realesrgan_dataset.py b/spaces/gulabpatel/Real-ESRGAN/realesrgan/data/realesrgan_dataset.py deleted file mode 100644 index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/Real-ESRGAN/realesrgan/data/realesrgan_dataset.py +++ /dev/null @@ -1,192 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import os.path as osp -import random -import time -import torch -from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels -from basicsr.data.transforms import augment -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data - - -@DATASET_REGISTRY.register() -class RealESRGANDataset(data.Dataset): - """Dataset used for Real-ESRGAN model: - Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It loads gt (Ground-Truth) images, and augments them. 
- It also generates blur kernels and sinc kernels for generating low-quality images. - Note that the low-quality images are processed in tensors on GPUS for faster processing. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation). - Please see more options in the codes. - """ - - def __init__(self, opt): - super(RealESRGANDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt['io_backend'] - self.gt_folder = opt['dataroot_gt'] - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = [self.gt_folder] - self.io_backend_opt['client_keys'] = ['gt'] - if not self.gt_folder.endswith('.lmdb'): - raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}") - with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin: - self.paths = [line.split('.')[0] for line in fin] - else: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt['meta_info']) as fin: - paths = [line.strip().split(' ')[0] for line in fin] - self.paths = [os.path.join(self.gt_folder, v) for v in paths] - - # blur settings for the first degradation - self.blur_kernel_size = opt['blur_kernel_size'] - self.kernel_list = opt['kernel_list'] - self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability - self.blur_sigma = opt['blur_sigma'] - self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels - self.betap_range = opt['betap_range'] # betap used in plateau blur kernels - self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters - - # blur settings for the second degradation - self.blur_kernel_size2 = opt['blur_kernel_size2'] - self.kernel_list2 = opt['kernel_list2'] - self.kernel_prob2 = opt['kernel_prob2'] - self.blur_sigma2 = opt['blur_sigma2'] - self.betag_range2 = opt['betag_range2'] - self.betap_range2 = opt['betap_range2'] - self.sinc_prob2 = opt['sinc_prob2'] - - # a final sinc filter - self.final_sinc_prob = opt['final_sinc_prob'] - - self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21 - # TODO: kernel range is now hard-coded, should be in the configure file - self.pulse_tensor = torch.zeros(21, 21).float() # convolving with pulse tensor brings no blurry effect - self.pulse_tensor[10, 10] = 1 - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - # -------------------------------- Load gt images -------------------------------- # - # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32. 
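The hard-coded `kernel_range` above ("kernel size ranges from 7 to 21") and the padding step later in `__getitem__` work together: every sampled blur or sinc kernel is zero-padded to a fixed 21×21 so that kernels of different sizes can be stacked into one batch tensor. A quick arithmetic check of that, under the same hard-coded size of 21:

```python
import numpy as np

kernel_range = [2 * v + 1 for v in range(3, 11)]
print(kernel_range)                     # [7, 9, 11, 13, 15, 17, 19, 21]

kernel_size = 13
kernel = np.ones((kernel_size, kernel_size), dtype=np.float32)
pad = (21 - kernel_size) // 2
padded = np.pad(kernel, ((pad, pad), (pad, pad)))
print(padded.shape)                     # (21, 21)
```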
- gt_path = self.paths[index] - # avoid errors caused by high latency in reading files - retry = 3 - while retry > 0: - try: - img_bytes = self.file_client.get(gt_path, 'gt') - except (IOError, OSError) as e: - logger = get_root_logger() - logger.warn(f'File client error: {e}, remaining retry times: {retry - 1}') - # change another file to read - index = random.randint(0, self.__len__()) - gt_path = self.paths[index] - time.sleep(1) # sleep 1s for occasional server congestion - else: - break - finally: - retry -= 1 - img_gt = imfrombytes(img_bytes, float32=True) - - # -------------------- Do augmentation for training: flip, rotation -------------------- # - img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot']) - - # crop or pad to 400 - # TODO: 400 is hard-coded. You may change it accordingly - h, w = img_gt.shape[0:2] - crop_pad_size = 400 - # pad - if h < crop_pad_size or w < crop_pad_size: - pad_h = max(0, crop_pad_size - h) - pad_w = max(0, crop_pad_size - w) - img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101) - # crop - if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size: - h, w = img_gt.shape[0:2] - # randomly choose top and left coordinates - top = random.randint(0, h - crop_pad_size) - left = random.randint(0, w - crop_pad_size) - img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...] - - # ------------------------ Generate kernels (used in the first degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob']: - # this sinc filter setting is for kernels ranging from [7, 21] - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel = random_mixed_kernels( - self.kernel_list, - self.kernel_prob, - kernel_size, - self.blur_sigma, - self.blur_sigma, [-math.pi, math.pi], - self.betag_range, - self.betap_range, - noise_range=None) - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------ Generate kernels (used in the second degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob2']: - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel2 = random_mixed_kernels( - self.kernel_list2, - self.kernel_prob2, - kernel_size, - self.blur_sigma2, - self.blur_sigma2, [-math.pi, math.pi], - self.betag_range2, - self.betap_range2, - noise_range=None) - - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------------------- the final sinc kernel ------------------------------------- # - if np.random.uniform() < self.opt['final_sinc_prob']: - kernel_size = random.choice(self.kernel_range) - omega_c = np.random.uniform(np.pi / 3, np.pi) - sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21) - sinc_kernel = torch.FloatTensor(sinc_kernel) - else: - sinc_kernel = self.pulse_tensor - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0] - kernel = torch.FloatTensor(kernel) - kernel2 = 
torch.FloatTensor(kernel2) - - return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path} - return return_d - - def __len__(self): - return len(self.paths) diff --git a/spaces/haakohu/deep_privacy2/dp2/utils/__init__.py b/spaces/haakohu/deep_privacy2/dp2/utils/__init__.py deleted file mode 100644 index d4edbacb6e8032ea081839f1a2408d4101868e79..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/utils/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -import pathlib -from tops.config import LazyConfig -from .torch_utils import ( - im2torch, im2numpy, denormalize_img, set_requires_grad, forward_D_fake, - binary_dilation, crop_box, remove_pad, - torch_wasserstein_loss -) -from .ema import EMA -from .utils import init_tops, tqdm_, print_config, config_to_str, trange_ -from .cse import from_E_to_vertex - - -def load_config(config_path): - config_path = pathlib.Path(config_path) - assert config_path.is_file(), config_path - cfg = LazyConfig.load(str(config_path)) - cfg.output_dir = pathlib.Path(str(config_path).replace("configs", str(cfg.common.output_dir)).replace(".py", "")) - if cfg.common.experiment_name is None: - cfg.experiment_name = str(config_path) - else: - cfg.experiment_name = cfg.common.experiment_name - cfg.checkpoint_dir = cfg.output_dir.joinpath("checkpoints") - print("Saving outputs to:", cfg.output_dir) - return cfg diff --git a/spaces/haakohu/deep_privacy2/gradio_demos/modules.py b/spaces/haakohu/deep_privacy2/gradio_demos/modules.py deleted file mode 100644 index 93a4035cd0b2ae11146e130e2e785a0c82e5c0c6..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/gradio_demos/modules.py +++ /dev/null @@ -1,247 +0,0 @@ -from collections import defaultdict -import gradio -import numpy as np -import torch -import cv2 -from PIL import Image -from dp2 import utils -from tops.config import instantiate -import tops -import gradio.inputs -from stylemc import get_and_cache_direction, get_styles -from sg3_torch_utils.ops import grid_sample_gradfix, bias_act, upfirdn2d - -grid_sample_gradfix.enabled = False -bias_act.enabled = False -upfirdn2d.enabled = False - - -class GuidedDemo: - def __init__(self, face_anonymizer, cfg_face, multi_modal_truncation, truncation_value) -> None: - self.anonymizer = face_anonymizer - self.multi_modal_truncation = multi_modal_truncation - self.truncation_value = truncation_value - assert sum([x is not None for x in list(face_anonymizer.generators.values())]) == 1 - self.generator = [x for x in list(face_anonymizer.generators.values()) if x is not None][0] - face_G_cfg = utils.load_config(cfg_face.anonymizer.face_G_cfg) - face_G_cfg.train.batch_size = 1 - self.dl = instantiate(face_G_cfg.data.val.loader) - self.cache_dir = face_G_cfg.output_dir - self.precompute_edits() - - def precompute_edits(self): - self.precomputed_edits = set() - for edit in self.precomputed_edits: - get_and_cache_direction(self.cache_dir, self.dl, self.generator, edit) - if self.cache_dir.joinpath("stylemc_cache").is_dir(): - for path in self.cache_dir.joinpath("stylemc_cache").iterdir(): - text_prompt = path.stem.replace("_", " ") - self.precomputed_edits.add(text_prompt) - print(text_prompt) - self.edits = defaultdict(defaultdict) - - def anonymize(self, img, show_boxes: bool, current_box_idx: int, current_styles, current_boxes, update_identity, edits, cache_id=None): - if not isinstance(img, torch.Tensor): - img, cache_id = pil2torch(img) - img = tops.to_cuda(img) - - current_box_idx = 
current_box_idx % len(current_boxes) - edited_styles = [s.clone() for s in current_styles] - for face_idx, face_edits in edits.items(): - for prompt, strength in face_edits.items(): - direction = get_and_cache_direction(self.cache_dir, self.dl, self.generator, prompt) - edited_styles[int(face_idx)] += direction * strength - update_identity[int(face_idx)] = True - assert img.dtype == torch.uint8 - img = self.anonymizer( - img, truncation_value=self.truncation_value, - multi_modal_truncation=self.multi_modal_truncation, amp=True, - cache_id=cache_id, - all_styles=edited_styles, - update_identity=update_identity) - update_identity = [True for i in range(len(update_identity))] - img = utils.im2numpy(img) - if show_boxes: - x0, y0, x1, y1 = [int(_) for _ in current_boxes[int(current_box_idx)]] - img = cv2.rectangle(img, (x0, y0), (x1, y1), (255, 0, 0), 1) - return img, update_identity - - def update_image(self, img, show_boxes): - img, cache_id = pil2torch(img) - img = tops.to_cuda(img) - det = self.anonymizer.detector.forward_and_cache(img, cache_id, load_cache=True)[0] - current_styles = [] - for i in range(len(det)): - s = get_styles( - np.random.randint(0, 999999), self.generator, - None, truncation_value=self.truncation_value) - current_styles.append(s) - update_identity = [True for i in range(len(det))] - current_boxes = np.array(det.boxes) - edits = defaultdict(defaultdict) - cur_face_idx = -1 % len(current_boxes) - img, update_identity = self.anonymize( - img, show_boxes, cur_face_idx, - current_styles, current_boxes, update_identity, edits, cache_id=cache_id) - return img, current_styles, current_boxes, update_identity, edits, cur_face_idx - - def change_face(self, change, cur_face_idx, current_boxes, input_image, show_boxes, current_styles, update_identity, edits): - cur_face_idx = (cur_face_idx + change) % len(current_boxes) - img, update_identity = self.anonymize( - input_image, show_boxes, cur_face_idx, - current_styles, current_boxes, update_identity, edits) - return img, update_identity, cur_face_idx - - def add_style(self, face_idx: int, prompt: str, strength: float, input_image, show_boxes, current_styles, current_boxes, update_identity, edits): - face_idx = face_idx % len(current_boxes) - edits[face_idx][prompt] = strength - img, update_identity = self.anonymize( - input_image, show_boxes, face_idx, - current_styles, current_boxes, update_identity, edits) - return img, update_identity, edits - - def setup_interface(self): - current_styles = gradio.State() - current_boxes = gradio.State(None) - update_identity = gradio.State([]) - edits = gradio.State([]) - with gradio.Row(): - input_image = gradio.Image( - type="pil", label="Upload your image or try the example below!", source="webcam") - output_image = gradio.Image(type="numpy", label="Output") - with gradio.Row(): - update_btn = gradio.Button("Update Anonymization").style(full_width=True) - with gradio.Row(): - show_boxes = gradio.Checkbox(value=True, label="Show Selected") - cur_face_idx = gradio.Number(value=-1, label="Current", interactive=False) - previous = gradio.Button("Previous Person") - next_ = gradio.Button("Next Person") - with gradio.Row(): - text_prompt = gradio.Textbox( - placeholder=" | ".join(list(self.precomputed_edits)), - label="Text Prompt for Edit") - edit_strength = gradio.Slider(0, 5, step=.01) - add_btn = gradio.Button("Add Edit") - add_btn.click( - self.add_style, - inputs=[cur_face_idx, text_prompt, edit_strength, input_image, show_boxes,current_styles, current_boxes, update_identity, edits], - 
outputs=[output_image, update_identity, edits]) - update_btn.click( - self.update_image, - inputs=[input_image, show_boxes], - outputs=[output_image, current_styles, current_boxes, update_identity, edits, cur_face_idx]) - input_image.change( - self.update_image, - inputs=[input_image, show_boxes], - outputs=[output_image, current_styles, current_boxes, update_identity, edits, cur_face_idx]) - previous.click( - self.change_face, - inputs=[gradio.State(-1), cur_face_idx, current_boxes, input_image, show_boxes, current_styles, update_identity, edits], - outputs=[output_image, update_identity, cur_face_idx]) - next_.click( - self.change_face, - inputs=[gradio.State(1), cur_face_idx, current_boxes, input_image, show_boxes,current_styles, update_identity, edits], - outputs=[output_image, update_identity, cur_face_idx]) - show_boxes.change( - self.anonymize, - inputs=[input_image, show_boxes, cur_face_idx, current_styles, current_boxes, update_identity, edits], - outputs=[output_image, update_identity]) - - -class WebcamDemo: - - def __init__(self, anonymizer) -> None: - self.anonymizer = anonymizer - with gradio.Row(): - input_image = gradio.Image(type="pil", source="webcam", streaming=True) - output_image = gradio.Image(type="numpy", label="Output") - with gradio.Row(): - truncation_value = gradio.Slider(0, 1, value=0, step=0.01) - truncation = gradio.Radio(["Multi-modal truncation", "Unimodal truncation"], value="Unimodal truncation") - with gradio.Row(): - visualize_det = gradio.Checkbox(value=False, label="Show Detections") - track = gradio.Checkbox(value=False, label="Track detections (samples same latent variable per track)") - input_image.stream( - self.anonymize, - inputs=[input_image, visualize_det, truncation_value,truncation, track, gradio.Variable(False)], - outputs=[output_image]) - self.track = True - - def anonymize(self, img: Image, visualize_detection: bool, truncation_value, truncation_type, track, reset_track): - if reset_track: - self.anonymizer.reset_tracker() - mmt = truncation_type == "Multi-modal truncation" - img, cache_id = pil2torch(img) - img = tops.to_cuda(img) - self.anonymizer - if visualize_detection: - img = self.anonymizer.visualize_detection(img, cache_id=cache_id) - else: - img = self.anonymizer( - img, - truncation_value=truncation_value, - multi_modal_truncation=mmt, - amp=True, - cache_id=cache_id, - track=track) - img = utils.im2numpy(img) - return img - - -class ExampleDemo(WebcamDemo): - - def __init__(self, anonymizer, source="webcam") -> None: - self.anonymizer = anonymizer - with gradio.Row(): - input_image = gradio.Image(type="pil", source=source) - output_image = gradio.Image(type="numpy", label="Output") - with gradio.Row(): - update_btn = gradio.Button("Update Anonymization").style(full_width=True) - resample = gradio.Button("Resample Latent Variables").style(full_width=True) - with gradio.Row(): - truncation_value = gradio.Slider(0, 1, value=0, step=0.01) - truncation = gradio.Radio(["Multi-modal truncation", "Unimodal truncation"], value="Unimodal truncation") - visualize_det = gradio.Checkbox(value=False, label="Show Detections") - visualize_det.change( - self.anonymize, - inputs=[input_image, visualize_det, truncation_value, truncation, gradio.Variable(True), gradio.Variable(False)], - outputs=[output_image]) - gradio.Examples( - ["media/erling.jpg", "media/regjeringen.jpg"], inputs=[input_image] - ) - - update_btn.click( - self.anonymize, - inputs=[input_image, visualize_det, truncation_value, truncation, gradio.Variable(True), 
gradio.Variable(False)], - outputs=[output_image]) - resample.click( - self.anonymize, - inputs=[input_image, visualize_det, truncation_value, truncation, gradio.Variable(True), gradio.Variable(True)], - outputs=[output_image]) - input_image.change( - self.anonymize, - inputs=[input_image, visualize_det, truncation_value, truncation, gradio.Variable(False), gradio.Variable(True)], - outputs=[output_image]) - self.track = False - self.truncation_value = truncation_value - - -class Information: - - def __init__(self) -> None: - gradio.Markdown("##
            Face Anonymization Architecture
            ") - gradio.Markdown("---") - gradio.Image(value="media/overall_architecture.png") - gradio.Markdown("##
            Full-Body Anonymization Architecture
            ") - gradio.Markdown("---") - gradio.Image(value="media/full_body.png") - gradio.Markdown("###
            Generative Adversarial Networks
            ") - gradio.Markdown("---") - gradio.Image(value="media/gan_architecture.png") - - -def pil2torch(img: Image.Image): - img = img.convert("RGB") - img = np.array(img) - img = np.rollaxis(img, 2) - return torch.from_numpy(img), None diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/image_gen.py b/spaces/hamelcubsfan/AutoGPT/autogpt/commands/image_gen.py deleted file mode 100644 index 0809fcdd3e38b52a2ce09ca1444f2574813d40f9..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/image_gen.py +++ /dev/null @@ -1,163 +0,0 @@ -""" Image Generation Module for AutoGPT.""" -import io -import os.path -import uuid -from base64 import b64decode - -import openai -import requests -from PIL import Image - -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - -CFG = Config() - - -def generate_image(prompt: str, size: int = 256) -> str: - """Generate an image from a prompt. - - Args: - prompt (str): The prompt to use - size (int, optional): The size of the image. Defaults to 256. (Not supported by HuggingFace) - - Returns: - str: The filename of the image - """ - filename = f"{str(uuid.uuid4())}.jpg" - - # DALL-E - if CFG.image_provider == "dalle": - return generate_image_with_dalle(prompt, filename, size) - # HuggingFace - elif CFG.image_provider == "huggingface": - return generate_image_with_hf(prompt, filename) - # SD WebUI - elif CFG.image_provider == "sdwebui": - return generate_image_with_sd_webui(prompt, filename, size) - return "No Image Provider Set" - - -def generate_image_with_hf(prompt: str, filename: str) -> str: - """Generate an image with HuggingFace's API. - - Args: - prompt (str): The prompt to use - filename (str): The filename to save the image to - - Returns: - str: The filename of the image - """ - API_URL = ( - f"https://api-inference.huggingface.co/models/{CFG.huggingface_image_model}" - ) - if CFG.huggingface_api_token is None: - raise ValueError( - "You need to set your Hugging Face API token in the config file." - ) - headers = { - "Authorization": f"Bearer {CFG.huggingface_api_token}", - "X-Use-Cache": "false", - } - - response = requests.post( - API_URL, - headers=headers, - json={ - "inputs": prompt, - }, - ) - - image = Image.open(io.BytesIO(response.content)) - print(f"Image Generated for prompt:{prompt}") - - image.save(path_in_workspace(filename)) - - return f"Saved to disk:{filename}" - - -def generate_image_with_dalle(prompt: str, filename: str) -> str: - """Generate an image with DALL-E. - - Args: - prompt (str): The prompt to use - filename (str): The filename to save the image to - - Returns: - str: The filename of the image - """ - openai.api_key = CFG.openai_api_key - - # Check for supported image sizes - if size not in [256, 512, 1024]: - closest = min([256, 512, 1024], key=lambda x: abs(x - size)) - print( - f"DALL-E only supports image sizes of 256x256, 512x512, or 1024x1024. Setting to {closest}, was {size}." - ) - size = closest - - response = openai.Image.create( - prompt=prompt, - n=1, - size=f"{size}x{size}", - response_format="b64_json", - ) - - print(f"Image Generated for prompt:{prompt}") - - image_data = b64decode(response["data"][0]["b64_json"]) - - with open(path_in_workspace(filename), mode="wb") as png: - png.write(image_data) - - return f"Saved to disk:{filename}" - - -def generate_image_with_sd_webui( - prompt: str, - filename: str, - size: int = 512, - negative_prompt: str = "", - extra: dict = {}, -) -> str: - """Generate an image with Stable Diffusion webui. 
- Args: - prompt (str): The prompt to use - filename (str): The filename to save the image to - size (int, optional): The size of the image. Defaults to 256. - negative_prompt (str, optional): The negative prompt to use. Defaults to "". - extra (dict, optional): Extra parameters to pass to the API. Defaults to {}. - Returns: - str: The filename of the image - """ - # Create a session and set the basic auth if needed - s = requests.Session() - if CFG.sd_webui_auth: - username, password = CFG.sd_webui_auth.split(":") - s.auth = (username, password or "") - - # Generate the images - response = requests.post( - f"{CFG.sd_webui_url}/sdapi/v1/txt2img", - json={ - "prompt": prompt, - "negative_prompt": negative_prompt, - "sampler_index": "DDIM", - "steps": 20, - "cfg_scale": 7.0, - "width": size, - "height": size, - "n_iter": 1, - **extra, - }, - ) - - print(f"Image Generated for prompt:{prompt}") - - # Save the image to disk - response = response.json() - b64 = b64decode(response["images"][0].split(",", 1)[0]) - image = Image.open(io.BytesIO(b64)) - image.save(path_in_workspace(filename)) - - return f"Saved to disk:{filename}" diff --git a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h deleted file mode 100644 index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h +++ /dev/null @@ -1,172 +0,0 @@ - -// jpge.h - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// Alex Evans: Added RGBA support, linear memory allocator. -#ifndef JPEG_ENCODER_H -#define JPEG_ENCODER_H - -#include - -namespace jpge -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef signed int int32; - typedef unsigned short uint16; - typedef unsigned int uint32; - typedef unsigned int uint; - - // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common. - enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 }; - - // JPEG compression parameters structure. - struct params - { - inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { } - - inline bool check_valid() const - { - if ((m_quality < 1) || (m_quality > 100)) return false; - if ((uint)m_subsampling > (uint)H2V2) return false; - return true; - } - - // Quality: 1-100, higher is better. Typical values are around 50-95. - int m_quality; - - // m_subsampling: - // 0 = Y (grayscale) only - // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU) - // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU) - // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common) - subsampling_t m_subsampling; - - // Disables CbCr discrimination - only intended for testing. - // If true, the Y quantization table is also used for the CbCr channels. - bool m_no_chroma_discrim_flag; - - bool m_two_pass_flag; - }; - - // Writes JPEG image to a file. - // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels. - bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Writes JPEG image to memory buffer. - // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes. 
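- // A minimal illustrative call (the buffer and image variables below are assumed, for example only):
- //   int64_t buf_size = 1024 * 1024;            // generous upper bound for the compressed output
- //   void *pBuf = malloc(buf_size);
- //   bool ok = jpge::compress_image_to_jpeg_file_in_memory(pBuf, buf_size, width, height, 3, rgb_pixels);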
- // If return value is true, buf_size will be set to the size of the compressed data. - bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Output stream abstract class - used by the jpeg_encoder class to write to the output stream. - // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts. - class output_stream - { - public: - virtual ~output_stream() { }; - virtual bool put_buf(const void* Pbuf, int64_t len) = 0; - template inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); } - }; - - // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions. - class jpeg_encoder - { - public: - jpeg_encoder(); - ~jpeg_encoder(); - - // Initializes the compressor. - // pStream: The stream object to use for writing compressed data. - // params - Compression parameters structure, defined above. - // width, height - Image dimensions. - // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data. - // Returns false on out of memory or if a stream write fails. - bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params()); - - const params &get_params() const { return m_params; } - - // Deinitializes the compressor, freeing any allocated memory. May be called at any time. - void deinit(); - - uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; } - inline uint get_cur_pass() { return m_pass_num; } - - // Call this method with each source scanline. - // width * src_channels bytes per scanline is expected (RGB or Y format). - // You must call with NULL after all scanlines are processed to finish compression. - // Returns false on out of memory or if a stream write fails. 
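- // Illustrative driver loop (a sketch; `stream`, `width`, `height` and `get_row(y)` are assumed to exist in the caller):
- //   jpge::jpeg_encoder enc;
- //   if (enc.init(&stream, width, height, 3, jpge::params())) {
- //     for (jpge::uint pass = 0; pass < enc.get_total_passes(); pass++) {
- //       for (int y = 0; y < height; y++)
- //         enc.process_scanline(get_row(y));   // width * 3 bytes of RGB per scanline
- //       enc.process_scanline(NULL);           // NULL finishes the pass / the compressed image
- //     }
- //     enc.deinit();
- //   }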
- bool process_scanline(const void* pScanline); - - private: - jpeg_encoder(const jpeg_encoder &); - jpeg_encoder &operator =(const jpeg_encoder &); - - typedef int32 sample_array_t; - - output_stream *m_pStream; - params m_params; - uint8 m_num_components; - uint8 m_comp_h_samp[3], m_comp_v_samp[3]; - int m_image_x, m_image_y, m_image_bpp, m_image_bpl; - int m_image_x_mcu, m_image_y_mcu; - int m_image_bpl_xlt, m_image_bpl_mcu; - int m_mcus_per_row; - int m_mcu_x, m_mcu_y; - uint8 *m_mcu_lines[16]; - uint8 m_mcu_y_ofs; - sample_array_t m_sample_array[64]; - int16 m_coefficient_array[64]; - int32 m_quantization_tables[2][64]; - uint m_huff_codes[4][256]; - uint8 m_huff_code_sizes[4][256]; - uint8 m_huff_bits[4][17]; - uint8 m_huff_val[4][256]; - uint32 m_huff_count[4][256]; - int m_last_dc_val[3]; - enum { JPGE_OUT_BUF_SIZE = 2048 }; - uint8 m_out_buf[JPGE_OUT_BUF_SIZE]; - uint8 *m_pOut_buf; - uint m_out_buf_left; - uint32 m_bit_buffer; - uint m_bits_in; - uint8 m_pass_num; - bool m_all_stream_writes_succeeded; - - void optimize_huffman_table(int table_num, int table_len); - void emit_byte(uint8 i); - void emit_word(uint i); - void emit_marker(int marker); - void emit_jfif_app0(); - void emit_dqt(); - void emit_sof(); - void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag); - void emit_dhts(); - void emit_sos(); - void emit_markers(); - void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val); - void compute_quant_table(int32 *dst, int16 *src); - void adjust_quant_table(int32 *dst, int32 *src); - void first_pass_init(); - bool second_pass_init(); - bool jpg_open(int p_x_res, int p_y_res, int src_channels); - void load_block_8_8_grey(int x); - void load_block_8_8(int x, int y, int c); - void load_block_16_8(int x, int c); - void load_block_16_8_8(int x, int c); - void load_quantized_coefficients(int component_num); - void flush_output_buffer(); - void put_bits(uint bits, uint len); - void code_coefficients_pass_one(int component_num); - void code_coefficients_pass_two(int component_num); - void code_block(int component_num); - void process_mcu_row(); - bool terminate_pass_one(); - bool terminate_pass_two(); - bool process_end_of_image(); - void load_mcu(const void* src); - void clear(); - void init(); - }; - -} // namespace jpge - -#endif // JPEG_ENCODER \ No newline at end of file diff --git a/spaces/harisansarkhan/Predict_Car_Brand/Gradio.py b/spaces/harisansarkhan/Predict_Car_Brand/Gradio.py deleted file mode 100644 index bfd368b43c6f5f762fd8c39e681d9699004033cf..0000000000000000000000000000000000000000 --- a/spaces/harisansarkhan/Predict_Car_Brand/Gradio.py +++ /dev/null @@ -1,61 +0,0 @@ -import os -import gradio as gr -import numpy as np -import tensorflow as tf -from tensorflow.keras.preprocessing.image import load_img, img_to_array -from tensorflow.keras.models import load_model -import cv2 - -# Load the best model -model_path = "car_classification_model.h5" -best_model = load_model(model_path) - -class_labels = ['Audi', 'Hyundai Creta', 'Mahindra Scorpio', 'Rolls Royce', 'Swift', 'Tata Safari', 'Toyota Innova'] - -# Define a tf.function for prediction -@tf.function -def predict_image(image_array): - prediction = best_model(image_array) - class_index = tf.argmax(prediction, axis=1) - predicted_class = tf.gather(class_labels, class_index) - return predicted_class - -# Predict function with breed name as string -def Predict_Car_Brand(image_upload): - # Convert the PIL image to a NumPy array - image_array = np.array(image_upload) - - # Resize the 
image to (224, 224) - image_resized = cv2.resize(image_array, (224, 224)) - - img_array = img_to_array(image_resized) - img_array = np.expand_dims(img_array, axis=0) - img_array /= 255.0 # Normalize the image - - # Convert to TensorFlow tensor - image_tensor = tf.convert_to_tensor(img_array, dtype=tf.float32) - - # Predict using the tf.function - predicted_brand = predict_image(image_tensor) - label = predicted_brand.numpy()[0].decode() - return label - -# Create and launch the Gradio interface -demo = gr.Interface( - Predict_Car_Brand, - inputs = "image", - outputs="text", - title = "Car Brand Predictor", - description="Upload an image of your Car to predict its brand. (Audi, Hyundai Creta, Mahindra Scorpio, Rolls Royce, Swift, Tata Safari or Toyota Innova )", - cache_examples=True, - theme="default", - allow_flagging="manual", - flagging_options=["Flag as incorrect", "Flag as inaccurate"], - analytics_enabled=True, - batch=False, - max_batch_size=4, - allow_duplication=False -) - -demo.launch() - diff --git a/spaces/harkov000/peft-lora-sd-dreambooth/train_dreambooth.py b/spaces/harkov000/peft-lora-sd-dreambooth/train_dreambooth.py deleted file mode 100644 index 2f5312390975e9aefd0fc8617af3cffeded12fcb..0000000000000000000000000000000000000000 --- a/spaces/harkov000/peft-lora-sd-dreambooth/train_dreambooth.py +++ /dev/null @@ -1,1005 +0,0 @@ -import argparse -import gc -import hashlib -import itertools -import json -import logging -import math -import os -import threading -import warnings -from pathlib import Path -from typing import Optional - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from torch.utils.data import Dataset -from transformers import AutoTokenizer, PretrainedConfig - -import datasets -import diffusers -import psutil -from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version -from diffusers.utils.import_utils import is_xformers_available -from huggingface_hub import HfFolder, Repository, whoami -from peft import LoraConfig, LoraModel, get_peft_model_state_dict -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. 
-check_min_version("0.10.0.dev0") - -logger = get_logger(__name__) - -UNET_TARGET_MODULES = ["to_q", "to_v", "query", "value"] # , "ff.net.0.proj"] -TEXT_ENCODER_TARGET_MODULES = ["q_proj", "v_proj"] - - -def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, - subfolder="text_encoder", - revision=revision, - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "RobertaSeriesModelWithTransformation": - from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation - - return RobertaSeriesModelWithTransformation - else: - raise ValueError(f"{model_class} is not supported.") - - -def parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - required=True, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - - # lora args - parser.add_argument("--use_lora", action="store_true", help="Whether to use Lora for parameter efficient tuning") - parser.add_argument("--lora_r", type=int, default=8, help="Lora rank, only used if use_lora is True") - parser.add_argument("--lora_alpha", type=int, default=32, help="Lora alpha, only used if use_lora is True") - parser.add_argument("--lora_dropout", type=float, default=0.0, help="Lora dropout, only used if use_lora is True") - parser.add_argument( - "--lora_bias", - type=str, - default="none", - help="Bias type for Lora. Can be 'none', 'all' or 'lora_only', only used if use_lora is True", - ) - parser.add_argument( - "--lora_text_encoder_r", - type=int, - default=8, - help="Lora rank for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_alpha", - type=int, - default=32, - help="Lora alpha for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_dropout", - type=float, - default=0.0, - help="Lora dropout for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_bias", - type=str, - default="none", - help="Bias type for Lora. Can be 'none', 'all' or 'lora_only', only used if use_lora and `train_text_encoder` are True", - ) - - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" - " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' 
- ), - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. 
Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--prior_generation_precision", - type=str, - default=None, - choices=["no", "fp32", "fp16", "bf16"], - help=( - "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - # logger is not available yet - if args.class_data_dir is not None: - warnings.warn("You need not use --class_data_dir without --with_prior_preservation.") - if args.class_prompt is not None: - warnings.warn("You need not use --class_prompt without --with_prior_preservation.") - - return args - - -# Converting Bytes to Megabytes -def b2mb(x): - return int(x / 2**20) - - -# This context manager is used to track the peak memory usage of the process -class TorchTracemalloc: - def __enter__(self): - gc.collect() - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero - self.begin = torch.cuda.memory_allocated() - self.process = psutil.Process() - - self.cpu_begin = self.cpu_mem_used() - self.peak_monitoring = True - peak_monitor_thread = threading.Thread(target=self.peak_monitor_func) - peak_monitor_thread.daemon = True - peak_monitor_thread.start() - return self - - def cpu_mem_used(self): - """get resident set size memory for the current process""" - return self.process.memory_info().rss - - def peak_monitor_func(self): - self.cpu_peak = -1 - - while True: - self.cpu_peak = max(self.cpu_mem_used(), self.cpu_peak) - - # can't sleep or will not catch the peak right (this comment is here on purpose) - # time.sleep(0.001) # 1msec - - if not self.peak_monitoring: - break - - def __exit__(self, *exc): - self.peak_monitoring = False - - gc.collect() - torch.cuda.empty_cache() - self.end = torch.cuda.memory_allocated() - self.peak = torch.cuda.max_memory_allocated() - self.used = b2mb(self.end - self.begin) - self.peaked = b2mb(self.peak - self.begin) - - self.cpu_end = self.cpu_mem_used() - self.cpu_used = b2mb(self.cpu_end - self.cpu_begin) - self.cpu_peaked = b2mb(self.cpu_peak - self.cpu_begin) - # print(f"delta used/peak {self.used:4d}/{self.peaked:4d}") - - -def print_trainable_parameters(model): - """ - Prints the number of trainable parameters in the model. 
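-    Output format (values are placeholders):
-        trainable params: <n_trainable> || all params: <n_all> || trainable%: <100 * n_trainable / n_all>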
- """ - trainable_params = 0 - all_param = 0 - for _, param in model.named_parameters(): - all_param += param.numel() - if param.requires_grad: - trainable_params += param.numel() - print( - f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" - ) - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. - """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - return example - - -def collate_fn(examples, with_prior_preservation=False): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. 
- if with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = torch.cat(input_ids, dim=0) - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def main(args): - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." - ) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Generate class images if prior preservation is enabled. 
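-    # Illustrative flags that exercise this branch (paths and prompts are placeholders):
-    #   --with_prior_preservation --prior_loss_weight 1.0 --class_data_dir ./class_images \
-    #   --class_prompt "a photo of a dog" --num_class_images 100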
- if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - if args.prior_generation_precision == "fp32": - torch_dtype = torch.float32 - elif args.prior_generation_precision == "fp16": - torch_dtype = torch.float16 - elif args.prior_generation_precision == "bf16": - torch_dtype = torch.bfloat16 - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - safety_checker=None, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) # noqa: F841 - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) - elif args.pretrained_model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="tokenizer", - revision=args.revision, - use_fast=False, - ) - - # import correct text encoder class - text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision) - - # Load scheduler and models - noise_scheduler = DDPMScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - ) # DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = text_encoder_cls.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - if args.use_lora: - config = LoraConfig( - r=args.lora_r, - lora_alpha=args.lora_alpha, - 
target_modules=UNET_TARGET_MODULES, - lora_dropout=args.lora_dropout, - bias=args.lora_bias, - ) - unet = LoraModel(config, unet) - print_trainable_parameters(unet) - print(unet) - - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - elif args.train_text_encoder and args.use_lora: - config = LoraConfig( - r=args.lora_text_encoder_r, - lora_alpha=args.lora_text_encoder_alpha, - target_modules=TEXT_ENCODER_TARGET_MODULES, - lora_dropout=args.lora_text_encoder_dropout, - bias=args.lora_text_encoder_bias, - ) - text_encoder = LoraModel(config, text_encoder) - print_trainable_parameters(text_encoder) - print(text_encoder) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - # below fails when using lora so commenting it out - if args.train_text_encoder and not args.use_lora: - text_encoder.gradient_checkpointing_enable() - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - # Optimizer creation - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - batch_size=args.train_batch_size, - shuffle=True, - collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), - num_workers=1, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - num_cycles=args.lr_num_cycles, - power=args.lr_power, - ) - - # Prepare everything with our `accelerator`. 
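-    # Only the modules being optimized are passed through accelerator.prepare(); the frozen vae
-    # (and the text encoder when --train_text_encoder is not set) is simply moved to the device
-    # and cast to weight_dtype below.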
- if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move vae and text_encoder to device and cast to weight_dtype - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the mos recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = resume_global_step // num_update_steps_per_epoch - resume_step = resume_global_step % num_update_steps_per_epoch - - # Only show the progress bar once on each machine. 
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - with TorchTracemalloc() as tracemalloc: - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device - ) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
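-                        # Effectively: loss = MSE(pred_instance, target_instance)
-                        #                     + prior_loss_weight * MSE(pred_class, target_class)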
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - # if global_step % args.checkpointing_steps == 0: - # if accelerator.is_main_process: - # save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - # accelerator.save_state(save_path) - # logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - # Printing the GPU memory usage details such as allocated memory, peak memory, and total memory usage - accelerator.print("GPU Memory before entering the train : {}".format(b2mb(tracemalloc.begin))) - accelerator.print("GPU Memory consumed at the end of the train (end-begin): {}".format(tracemalloc.used)) - accelerator.print("GPU Peak Memory consumed during the train (max-begin): {}".format(tracemalloc.peaked)) - accelerator.print( - "GPU Total Peak Memory consumed during the train (max): {}".format( - tracemalloc.peaked + b2mb(tracemalloc.begin) - ) - ) - - accelerator.print("CPU Memory before entering the train : {}".format(b2mb(tracemalloc.cpu_begin))) - accelerator.print("CPU Memory consumed at the end of the train (end-begin): {}".format(tracemalloc.cpu_used)) - accelerator.print("CPU Peak Memory consumed during the train (max-begin): {}".format(tracemalloc.cpu_peaked)) - accelerator.print( - "CPU Total Peak Memory consumed during the train (max): {}".format( - tracemalloc.cpu_peaked + b2mb(tracemalloc.cpu_begin) - ) - ) - - # Create the pipeline using using the trained modules and save it. 
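-    # With --use_lora only the PEFT state dict and a small JSON config are written
-    # ("{instance_prompt}_lora.pt" and "{instance_prompt}_lora_config.json"); otherwise the full
-    # diffusers pipeline is saved to output_dir.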
- accelerator.wait_for_everyone() - if accelerator.is_main_process: - if args.use_lora: - lora_config = {} - state_dict = get_peft_model_state_dict(unet, state_dict=accelerator.get_state_dict(unet)) - lora_config["peft_config"] = unet.get_peft_config_as_dict(inference=True) - if args.train_text_encoder: - text_encoder_state_dict = get_peft_model_state_dict( - text_encoder, state_dict=accelerator.get_state_dict(text_encoder) - ) - text_encoder_state_dict = {f"text_encoder_{k}": v for k, v in text_encoder_state_dict.items()} - state_dict.update(text_encoder_state_dict) - lora_config["text_encoder_peft_config"] = text_encoder.get_peft_config_as_dict(inference=True) - - accelerator.print(state_dict) - accelerator.save(state_dict, os.path.join(args.output_dir, f"{args.instance_prompt}_lora.pt")) - with open(os.path.join(args.output_dir, f"{args.instance_prompt}_lora_config.json"), "w") as f: - json.dump(lora_config, f) - else: - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - revision=args.revision, - ) - pipeline.save_pretrained(args.output_dir) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/hdhzk/bingo/src/components/welcome-screen.tsx b/spaces/hdhzk/bingo/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
            - {exampleMessages.map(example => ( - - ))} -
            - ) -} diff --git a/spaces/heiyuan/ChatGPT/README.md b/spaces/heiyuan/ChatGPT/README.md deleted file mode 100644 index feb19352c11d33b74cd0462f8699d4967aa9d53b..0000000000000000000000000000000000000000 --- a/spaces/heiyuan/ChatGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐠 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: JohnSmith9982/ChuanhuChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/hezhaoqia/vits-simple-api/vits/mel_processing.py b/spaces/hezhaoqia/vits-simple-api/vits/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/hezhaoqia/vits-simple-api/vits/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device 
not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/hhalim/dataViz-mermaid/style.css b/spaces/hhalim/dataViz-mermaid/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/hhalim/dataViz-mermaid/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/huy-ha/semabs-relevancy/CLIP/data/yfcc100m.md b/spaces/huy-ha/semabs-relevancy/CLIP/data/yfcc100m.md deleted file mode 100644 index 06083ef9a613b5d360e87c3f395c2a16c6e9208e..0000000000000000000000000000000000000000 --- a/spaces/huy-ha/semabs-relevancy/CLIP/data/yfcc100m.md +++ /dev/null @@ -1,14 +0,0 @@ -# The YFCC100M Subset - -In the paper, we performed a dataset ablation using a subset of the YFCC100M dataset and showed that the performance remained largely similar. - -The subset contains 14,829,396 images, about 15% of the full dataset, which have been filtered to only keep those with natural languag titles and/or descriptions in English. - -We provide the list of (line number, photo identifier, photo hash) of each image contained in this subset. These correspond to the first three columns in the dataset's metadata TSV file. - -```bash -wget https://openaipublic.azureedge.net/clip/data/yfcc100m_subset_data.tsv.bz2 -bunzip2 yfcc100m_subset_data.tsv.bz2 -``` - -Use of the underlying media files is subject to the Creative Commons licenses chosen by their creators/uploaders. For more information about the YFCC100M dataset, visit [the official website](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/). 
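As a rough illustration only (not part of the original file), the decompressed `yfcc100m_subset_data.tsv` could be iterated with a few lines of Python. The sketch below assumes nothing beyond what the note above states: a tab-separated file whose first three columns are the line number, photo identifier, and photo hash.

```python
import csv

# Minimal sketch, assuming a plain TSV with the three columns described above:
# (line number, photo identifier, photo hash).
with open("yfcc100m_subset_data.tsv", newline="") as tsv_file:
    reader = csv.reader(tsv_file, delimiter="\t")
    for line_number, photo_id, photo_hash in reader:
        # Match these identifiers against the full YFCC100M metadata as needed.
        print(line_number, photo_id, photo_hash)
        break  # drop this to walk the whole ~14.8M-row list
```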
\ No newline at end of file diff --git a/spaces/iamstolas/STOLAS/src/components/ui/badge.tsx b/spaces/iamstolas/STOLAS/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
            - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Bjork Homogenic Full Album Zip [TOP].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Bjork Homogenic Full Album Zip [TOP].md deleted file mode 100644 index 646632d594d0bd962e8a0c143374903afdbb3efa..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Bjork Homogenic Full Album Zip [TOP].md +++ /dev/null @@ -1,12 +0,0 @@ -
            -

            Björk's Homogenic: A Masterpiece of Electronic Music

            -

            Homogenic is the fourth studio album by Icelandic singer-songwriter Björk, released on September 22, 1997. It is widely regarded as one of her best and most influential works, as well as a landmark in electronic music. Homogenic showcases Björk's unique blend of avant-garde, pop, classical, and ambient styles, as well as her distinctive vocals and lyrics. The album explores themes of identity, nature, love, and technology, reflecting Björk's personal and artistic vision.

            -

            The album was recorded in various locations around the world, including Spain, Iceland, England, and the United States. Björk collaborated with several producers and musicians, such as Mark Bell, Howie B, Eumir Deodato, Lenny Franchi, Guy Sigsworth, and the Icelandic String Octet. The album features a minimalist and experimental sound, dominated by electronic beats, strings, and synthesizers. The album also incorporates elements of folk, industrial, trip-hop, and glitch music.

            -

            Bjork, Homogenic full album zip


            Download Zip ––– https://urlin.us/2uEyHv



            -

Homogenic received critical acclaim upon its release and has since been recognized as one of the greatest albums of all time by various publications and critics. It was nominated for the Grammy Award for Best Alternative Music Album in 1998 and won the Brit Award for Best International Female in 1998. It has sold over four million copies worldwide and has been certified platinum in several countries. The album spawned five singles: "Jóga", "Bachelorette", "Hunter", "Alarm Call", and "All Is Full of Love".

            -

            Homogenic is a masterpiece of electronic music that showcases Björk's artistic vision and musical innovation. It is an album that transcends genres and boundaries, creating a sonic landscape that is both timeless and futuristic. Homogenic is a must-listen for any fan of Björk or electronic music in general.

            Homogenic: A Critical and Commercial Success

            -

            Homogenic was met with widespread acclaim from critics, who praised Björk's artistic vision, musical innovation, and emotional depth. The album was nominated for the Grammy Award for Best Alternative Music Album in 1998 and won the Brit Award for Best International Female in 1998. It also appeared on several year-end and decade-end lists of the best albums by various publications, such as Rolling Stone, Pitchfork, NME, The Guardian, and Spin.

            -

The album also performed well commercially, reaching number 4 on the UK Albums Chart and number 28 on the US Billboard 200. It has sold over four million copies worldwide and has been certified platinum in several countries, including the UK, France, Canada, and Australia. The album spawned five singles: "Jóga", "Bachelorette", "Hunter", "Alarm Call", and "All Is Full of Love". The singles were accompanied by innovative music videos directed by acclaimed filmmakers such as Michel Gondry, Paul White, and Chris Cunningham.

            -

            Homogenic is widely regarded as one of Björk's best and most influential works, as well as a landmark in electronic music. It has inspired many artists across genres and generations, such as Radiohead, Kanye West, FKA Twigs, Arca, Grimes, and Frank Ocean. Homogenic is a testament to Björk's musical genius and cultural impact.

            -
            -
            \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/DiskGetor Data Recovery 3.2 Key Crack Serial REPACK Keygen Cd 12.md b/spaces/inreVtussa/clothingai/Examples/DiskGetor Data Recovery 3.2 Key Crack Serial REPACK Keygen Cd 12.md deleted file mode 100644 index a5d93e18832e732902172d49d811f7f8b4ae1b02..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/DiskGetor Data Recovery 3.2 Key Crack Serial REPACK Keygen Cd 12.md +++ /dev/null @@ -1,6 +0,0 @@ -

            DiskGetor Data Recovery 3.2 key Crack serial keygen cd 12


            Download Zip ……… https://tiurll.com/2uClFD



            -
            -Data Rescue 3.2.2 Full ISO and Keygen Download. (FULL + Serial Keys) Comfy Photo Recovery 4.0 torrent contents DiskGetor Data Recovery 3.2.8 + Serial Key ... 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/iqovocn/ChuanhuChatGPT/modules/models/__init__.py b/spaces/iqovocn/ChuanhuChatGPT/modules/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/sentiment_plots.py b/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/sentiment_plots.py deleted file mode 100644 index 248926fcf4c0ffbae7090e4ec34ae169eedd0d4b..0000000000000000000000000000000000000000 --- a/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/sentiment_plots.py +++ /dev/null @@ -1,71 +0,0 @@ -import panel as pn -from pd_utils.utils import filter_df_by_bbox - - -def get_overall_sentiment(in_data, x_range, y_range): - """ - Returns the overall sentiment (Positive vs Negative) - within the current map extent. - """ - - # Verify whether x_range or y_range are None - if (x_range, y_range) == (None, None): - return None - - # Filter the tweet locations by bounding box - out_data = filter_df_by_bbox(in_data, x_range, y_range) - - # Check if out_data is empty - if out_data.shape[0] == 0: - return None - - # Get the overall sentiment - Positive vs Negative - sent_df = out_data[out_data["tweet_sentiment"].isin(["positive", "negative"])] - - sent_df = sent_df["tweet_sentiment"].value_counts().reset_index() - sent_df["pct"] = round((sent_df["count"] / sent_df["count"].sum()) * 100, 2) - sent_df = sent_df.set_index("tweet_sentiment") - - positive_value = sent_df.loc["positive", "pct"] - negative_value = sent_df.loc["negative", "pct"] - - sent_plot = { - "dataset": [ - { - "source": [ - ["sentiment", "count", "text", "emoji"], - ["Positive", positive_value, f"{positive_value}%", "😄"], - ["Negative", negative_value, f"{negative_value}%", "🙁"], - ] - } - ], - "tooltip": {"trigger": "item"}, - "legend": { - "bottom": "5%", - "left": "center", - "selectedMode": "false", - "textStyle": {"color": "#ccc"}, - }, - "series": [ - { - "name": "Sentiment", - "type": "pie", - "radius": ["40%", "70%"], - "color": ["#009988", "#EECC66"], - "avoidLabelOverlap": "false", - "label": { - "show": "false", - "fontSize": "40", - "position": "center", - "formatter": "{@emoji}", - }, - "encode": { - "value": "count", - "itemName": "sentiment", - "tooltip": ["text"], - }, - } - ], - } - - return pn.pane.ECharts(dict(sent_plot, responsive=True)) diff --git a/spaces/jax-diffusers-event/canny_coyo1m/README.md b/spaces/jax-diffusers-event/canny_coyo1m/README.md deleted file mode 100644 index 2c574bd243540509f0da9b38d0caa43d2b35f8e4..0000000000000000000000000000000000000000 --- a/spaces/jax-diffusers-event/canny_coyo1m/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Canny Coyo1m -emoji: 🌖 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jewellery/ChatGPT4/app.py b/spaces/jewellery/ChatGPT4/app.py deleted file mode 100644 index 632f0ee79c2a44a19c299e5965101cad17293e69..0000000000000000000000000000000000000000 --- a/spaces/jewellery/ChatGPT4/app.py +++ /dev/null @@ -1,191 +0,0 @@ -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream" - -#Inferenec function -def predict(openai_gpt4_key, system_msg, inputs, top_p, temperature, 
chat_counter, chatbot=[], history=[]): - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_gpt4_key}" #Users will provide their own OPENAI_API_KEY - } - print(f"system message is ^^ {system_msg}") - if system_msg.strip() == '': - initial_message = [{"role": "user", "content": f"{inputs}"},] - multi_turn_message = [] - else: - initial_message= [{"role": "system", "content": system_msg}, - {"role": "user", "content": f"{inputs}"},] - multi_turn_message = [{"role": "system", "content": system_msg},] - - if chat_counter == 0 : - payload = { - "model": "gpt-4", - "messages": initial_message , - "temperature" : 1.0, - "top_p":1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - print(f"chat_counter - {chat_counter}") - else: #if chat_counter != 0 : - messages=multi_turn_message # Of the type of - [{"role": "system", "content": system_msg},] - for data in chatbot: - user = {} - user["role"] = "user" - user["content"] = data[0] - assistant = {} - assistant["role"] = "assistant" - assistant["content"] = data[1] - messages.append(user) - messages.append(assistant) - temp = {} - temp["role"] = "user" - temp["content"] = inputs - messages.append(temp) - #messages - payload = { - "model": "gpt-4", - "messages": messages, # Of the type of [{"role": "user", "content": f"{inputs}"}], - "temperature" : temperature, #1.0, - "top_p": top_p, #1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0,} - - chat_counter+=1 - - history.append(inputs) - print(f"Logging : payload is - {payload}") - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - print(f"Logging : response code - {response}") - token_counter = 0 - partial_words = "" - - counter=0 - for chunk in response.iter_lines(): - #Skipping first chunk - if counter == 0: - counter+=1 - continue - # check whether each line is non-empty - if chunk.decode() : - chunk = chunk.decode() - # decode each line as response data is in bytes - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history, chat_counter, response # resembles {chatbot: chat, state: history} - -#Resetting to blank -def reset_textbox(): - return gr.update(value='') - -#to set a component as visible=False -def set_visible_false(): - return gr.update(visible=False) - -#to set a component as visible=True -def set_visible_true(): - return gr.update(visible=True) - -title = """

            🔥GPT4 using Chat-Completions API & 🚀Gradio-Streaming

            """ -#display message for themes feature -theme_addon_msg = """
🌟 This Demo also introduces you to Gradio Themes. Discover more on the Gradio website using our Theming Guide 🎨! You can develop a theme from scratch, modify an existing Gradio theme, and share your themes with the community by uploading them to huggingface-hub easily using theme.push_to_hub().
            -""" - -#Using info to add additional information about System message in GPT4 -system_msg_info = """A conversation could begin with a system message to gently instruct the assistant. -System message helps set the behavior of the AI Assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'""" - -#Modifying existing Gradio Theme -theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green", - text_size=gr.themes.sizes.text_lg) - -with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""", - theme=theme) as demo: - gr.HTML(title) - gr.HTML("""

🔥This Huggingface Gradio Demo provides you access to the GPT4 API with System Messages. Please note that you will need your own OPENAI API key for GPT4 access🙌

            """) - gr.HTML(theme_addon_msg) - gr.HTML('''
Duplicate the Space and run securely with your OpenAI API Key
            ''') - - with gr.Column(elem_id = "col_container"): - #Users need to provide their own GPT4 API key, it is no longer provided by Huggingface - with gr.Row(): - openai_gpt4_key = gr.Textbox(label="OpenAI GPT4 Key", value="", type="password", placeholder="sk..", info = "You have to provide your own GPT4 keys for this app to function properly",) - with gr.Accordion(label="System message:", open=False): - system_msg = gr.Textbox(label="Instruct the AI Assistant to set its beaviour", info = system_msg_info, value="",placeholder="Type here..") - accordion_msg = gr.HTML(value="🚧 To set System message you will have to refresh the app", visible=False) - - chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot") - inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") - state = gr.State([]) - with gr.Row(): - with gr.Column(scale=7): - b1 = gr.Button().style(full_width=True) - with gr.Column(scale=3): - server_status_code = gr.Textbox(label="Status code from OpenAI server", ) - - #top_p, temperature - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - chat_counter = gr.Number(value=0, visible=False, precision=0) - - #Event handling - inputs.submit( predict, [openai_gpt4_key, system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key - b1.click( predict, [openai_gpt4_key, system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key - - inputs.submit(set_visible_false, [], [system_msg]) - b1.click(set_visible_false, [], [system_msg]) - inputs.submit(set_visible_true, [], [accordion_msg]) - b1.click(set_visible_true, [], [accordion_msg]) - - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - - #Examples - with gr.Accordion(label="Examples for System message:", open=False): - gr.Examples( - examples = [["""You are an AI programming assistant. - - - Follow the user's requirements carefully and to the letter. - - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail. - - Then output the code in a single code block. - - Minimize any other prose."""], ["""You are ComedianGPT who is a helpful assistant. 
You answer everything with a joke and witty replies."""], - ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."], - ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."], - ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."], - ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."], - ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."], - ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."], - ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."], - ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."], - ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."], - ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."], - ["You are a helpful assistant that provides detailed and accurate information."], - ["You are an assistant that speaks like Shakespeare."], - ["You are a friendly assistant who uses casual language and humor."], - ["You are a financial advisor who gives expert advice on investments and budgeting."], - ["You are a health and fitness expert who provides advice on nutrition and exercise."], - ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."], - ["You are a movie critic who shares insightful opinions on films and their themes."], - ["You are a history enthusiast who loves to discuss historical events and figures."], - ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."], - ["You are an AI poet who can compose creative and evocative poems on any given topic."],], - inputs = system_msg,) - -demo.queue(max_size=99, concurrency_count=20).launch(debug=True) \ No newline at end of file diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jskalbg/ChatDev01/camel/agents/tool_agents/__init__.py b/spaces/jskalbg/ChatDev01/camel/agents/tool_agents/__init__.py deleted file mode 100644 index e47fcf82b3b5195696632fc3200ee9e46f4f2554..0000000000000000000000000000000000000000 --- a/spaces/jskalbg/ChatDev01/camel/agents/tool_agents/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from .base import BaseToolAgent -from .hugging_face_tool_agent import HuggingFaceToolAgent - -__all__ = [ - 'BaseToolAgent', - 'HuggingFaceToolAgent', -] diff --git a/spaces/junchenmo/OpenAI-Manager/js/app.js b/spaces/junchenmo/OpenAI-Manager/js/app.js deleted file mode 100644 index 6e5823ad55907039ffaed524c7a0ebaf8289ad6e..0000000000000000000000000000000000000000 --- a/spaces/junchenmo/OpenAI-Manager/js/app.js +++ /dev/null @@ -1,2112 +0,0 @@ -/* - * ATTENTION: The "eval" devtool has been used (maybe by default in mode: "development"). - * This devtool is neither made for production nor for readable output files. - * It uses "eval()" calls to create a separate source file in the browser devtools. - * If you are trying to read the output file, select a different devtool (https://webpack.js.org/configuration/devtool/) - * or disable the default devtool with "devtool: false". - * If you are looking for production-ready output files, see mode: "production" (https://webpack.js.org/configuration/mode/). - */ -/******/ (function() { // webpackBootstrap -/******/ var __webpack_modules__ = ({ - -/***/ "./src/App.vue": -/*!*********************!*\ - !*** ./src/App.vue ***! - \*********************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _App_vue_vue_type_template_id_7ba5bd90___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./App.vue?vue&type=template&id=7ba5bd90& */ \"./src/App.vue?vue&type=template&id=7ba5bd90&\");\n/* harmony import */ var _App_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./App.vue?vue&type=script&lang=js& */ \"./src/App.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _App_vue_vue_type_style_index_0_id_7ba5bd90_lang_scss___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./App.vue?vue&type=style&index=0&id=7ba5bd90&lang=scss& */ \"./src/App.vue?vue&type=style&index=0&id=7ba5bd90&lang=scss&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _App_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _App_vue_vue_type_template_id_7ba5bd90___WEBPACK_IMPORTED_MODULE_0__.render,\n _App_vue_vue_type_template_id_7ba5bd90___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n null,\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/App.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/App.vue?"); - -/***/ }), - -/***/ "./src/components/Emoji.vue": -/*!**********************************!*\ - !*** ./src/components/Emoji.vue ***! 
- \**********************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _Emoji_vue_vue_type_template_id_534ad946_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./Emoji.vue?vue&type=template&id=534ad946&scoped=true& */ \"./src/components/Emoji.vue?vue&type=template&id=534ad946&scoped=true&\");\n/* harmony import */ var _Emoji_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./Emoji.vue?vue&type=script&lang=js& */ \"./src/components/Emoji.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _Emoji_vue_vue_type_style_index_0_id_534ad946_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./Emoji.vue?vue&type=style&index=0&id=534ad946&lang=scss&scoped=true& */ \"./src/components/Emoji.vue?vue&type=style&index=0&id=534ad946&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _Emoji_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _Emoji_vue_vue_type_template_id_534ad946_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _Emoji_vue_vue_type_template_id_534ad946_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"534ad946\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/Emoji.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/Emoji.vue?"); - -/***/ }), - -/***/ "./src/components/File.vue": -/*!*********************************!*\ - !*** ./src/components/File.vue ***! - \*********************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _File_vue_vue_type_template_id_ab80f8a8_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./File.vue?vue&type=template&id=ab80f8a8&scoped=true& */ \"./src/components/File.vue?vue&type=template&id=ab80f8a8&scoped=true&\");\n/* harmony import */ var _File_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./File.vue?vue&type=script&lang=js& */ \"./src/components/File.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _File_vue_vue_type_style_index_0_id_ab80f8a8_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./File.vue?vue&type=style&index=0&id=ab80f8a8&lang=scss&scoped=true& */ \"./src/components/File.vue?vue&type=style&index=0&id=ab80f8a8&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! 
!../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _File_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _File_vue_vue_type_template_id_ab80f8a8_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _File_vue_vue_type_template_id_ab80f8a8_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"ab80f8a8\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/File.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/File.vue?"); - -/***/ }), - -/***/ "./src/components/FileCard.vue": -/*!*************************************!*\ - !*** ./src/components/FileCard.vue ***! - \*************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _FileCard_vue_vue_type_template_id_48849e48_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./FileCard.vue?vue&type=template&id=48849e48&scoped=true& */ \"./src/components/FileCard.vue?vue&type=template&id=48849e48&scoped=true&\");\n/* harmony import */ var _FileCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./FileCard.vue?vue&type=script&lang=js& */ \"./src/components/FileCard.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _FileCard_vue_vue_type_style_index_0_id_48849e48_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./FileCard.vue?vue&type=style&index=0&id=48849e48&lang=scss&scoped=true& */ \"./src/components/FileCard.vue?vue&type=style&index=0&id=48849e48&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _FileCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _FileCard_vue_vue_type_template_id_48849e48_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _FileCard_vue_vue_type_template_id_48849e48_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"48849e48\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/FileCard.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/FileCard.vue?"); - -/***/ }), - -/***/ "./src/components/HeadImg.vue": -/*!************************************!*\ - !*** ./src/components/HeadImg.vue ***! 
- \************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _HeadImg_vue_vue_type_template_id_0b1d9e43_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./HeadImg.vue?vue&type=template&id=0b1d9e43&scoped=true& */ \"./src/components/HeadImg.vue?vue&type=template&id=0b1d9e43&scoped=true&\");\n/* harmony import */ var _HeadImg_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./HeadImg.vue?vue&type=script&lang=js& */ \"./src/components/HeadImg.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _HeadImg_vue_vue_type_style_index_0_id_0b1d9e43_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./HeadImg.vue?vue&type=style&index=0&id=0b1d9e43&lang=scss&scoped=true& */ \"./src/components/HeadImg.vue?vue&type=style&index=0&id=0b1d9e43&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _HeadImg_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _HeadImg_vue_vue_type_template_id_0b1d9e43_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _HeadImg_vue_vue_type_template_id_0b1d9e43_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"0b1d9e43\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/HeadImg.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadImg.vue?"); - -/***/ }), - -/***/ "./src/components/HeadPortrait.vue": -/*!*****************************************!*\ - !*** ./src/components/HeadPortrait.vue ***! - \*****************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _HeadPortrait_vue_vue_type_template_id_24585c4b_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./HeadPortrait.vue?vue&type=template&id=24585c4b&scoped=true& */ \"./src/components/HeadPortrait.vue?vue&type=template&id=24585c4b&scoped=true&\");\n/* harmony import */ var _HeadPortrait_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./HeadPortrait.vue?vue&type=script&lang=js& */ \"./src/components/HeadPortrait.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _HeadPortrait_vue_vue_type_style_index_0_id_24585c4b_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./HeadPortrait.vue?vue&type=style&index=0&id=24585c4b&lang=scss&scoped=true& */ \"./src/components/HeadPortrait.vue?vue&type=style&index=0&id=24585c4b&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! 
!../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _HeadPortrait_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _HeadPortrait_vue_vue_type_template_id_24585c4b_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _HeadPortrait_vue_vue_type_template_id_24585c4b_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"24585c4b\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/HeadPortrait.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadPortrait.vue?"); - -/***/ }), - -/***/ "./src/components/Nav.vue": -/*!********************************!*\ - !*** ./src/components/Nav.vue ***! - \********************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _Nav_vue_vue_type_template_id_65af85a3_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./Nav.vue?vue&type=template&id=65af85a3&scoped=true& */ \"./src/components/Nav.vue?vue&type=template&id=65af85a3&scoped=true&\");\n/* harmony import */ var _Nav_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./Nav.vue?vue&type=script&lang=js& */ \"./src/components/Nav.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _Nav_vue_vue_type_style_index_0_id_65af85a3_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./Nav.vue?vue&type=style&index=0&id=65af85a3&lang=scss&scoped=true& */ \"./src/components/Nav.vue?vue&type=style&index=0&id=65af85a3&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _Nav_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _Nav_vue_vue_type_template_id_65af85a3_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _Nav_vue_vue_type_template_id_65af85a3_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"65af85a3\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/Nav.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/Nav.vue?"); - -/***/ }), - -/***/ "./src/components/PersonCard.vue": -/*!***************************************!*\ - !*** ./src/components/PersonCard.vue ***! 
- \***************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _PersonCard_vue_vue_type_template_id_d74d3096_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./PersonCard.vue?vue&type=template&id=d74d3096&scoped=true& */ \"./src/components/PersonCard.vue?vue&type=template&id=d74d3096&scoped=true&\");\n/* harmony import */ var _PersonCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./PersonCard.vue?vue&type=script&lang=js& */ \"./src/components/PersonCard.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _PersonCard_vue_vue_type_style_index_0_id_d74d3096_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./PersonCard.vue?vue&type=style&index=0&id=d74d3096&lang=scss&scoped=true& */ \"./src/components/PersonCard.vue?vue&type=style&index=0&id=d74d3096&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _PersonCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _PersonCard_vue_vue_type_template_id_d74d3096_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _PersonCard_vue_vue_type_template_id_d74d3096_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"d74d3096\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/PersonCard.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/PersonCard.vue?"); - -/***/ }), - -/***/ "./src/components/RoleCard.vue": -/*!*************************************!*\ - !*** ./src/components/RoleCard.vue ***! - \*************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _RoleCard_vue_vue_type_template_id_9524bc54_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./RoleCard.vue?vue&type=template&id=9524bc54&scoped=true& */ \"./src/components/RoleCard.vue?vue&type=template&id=9524bc54&scoped=true&\");\n/* harmony import */ var _RoleCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./RoleCard.vue?vue&type=script&lang=js& */ \"./src/components/RoleCard.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _RoleCard_vue_vue_type_style_index_0_id_9524bc54_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./RoleCard.vue?vue&type=style&index=0&id=9524bc54&lang=scss&scoped=true& */ \"./src/components/RoleCard.vue?vue&type=style&index=0&id=9524bc54&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! 
!../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _RoleCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _RoleCard_vue_vue_type_template_id_9524bc54_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _RoleCard_vue_vue_type_template_id_9524bc54_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"9524bc54\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/RoleCard.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/RoleCard.vue?"); - -/***/ }), - -/***/ "./src/components/Session.vue": -/*!************************************!*\ - !*** ./src/components/Session.vue ***! - \************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _Session_vue_vue_type_template_id_d6f30cd4_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./Session.vue?vue&type=template&id=d6f30cd4&scoped=true& */ \"./src/components/Session.vue?vue&type=template&id=d6f30cd4&scoped=true&\");\n/* harmony import */ var _Session_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./Session.vue?vue&type=script&lang=js& */ \"./src/components/Session.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _Session_vue_vue_type_style_index_0_id_d6f30cd4_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./Session.vue?vue&type=style&index=0&id=d6f30cd4&lang=scss&scoped=true& */ \"./src/components/Session.vue?vue&type=style&index=0&id=d6f30cd4&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _Session_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _Session_vue_vue_type_template_id_d6f30cd4_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _Session_vue_vue_type_template_id_d6f30cd4_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"d6f30cd4\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/components/Session.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/components/Session.vue?"); - -/***/ }), - -/***/ "./src/view/home.vue": -/*!***************************!*\ - !*** ./src/view/home.vue ***! 
- \***************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _home_vue_vue_type_template_id_73eb9c00_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./home.vue?vue&type=template&id=73eb9c00&scoped=true& */ \"./src/view/home.vue?vue&type=template&id=73eb9c00&scoped=true&\");\n/* harmony import */ var _home_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./home.vue?vue&type=script&lang=js& */ \"./src/view/home.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _home_vue_vue_type_style_index_0_id_73eb9c00_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./home.vue?vue&type=style&index=0&id=73eb9c00&lang=scss&scoped=true& */ \"./src/view/home.vue?vue&type=style&index=0&id=73eb9c00&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _home_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _home_vue_vue_type_template_id_73eb9c00_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _home_vue_vue_type_template_id_73eb9c00_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"73eb9c00\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/view/home.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/view/home.vue?"); - -/***/ }), - -/***/ "./src/view/pages/chatHome/chatwindow.vue": -/*!************************************************!*\ - !*** ./src/view/pages/chatHome/chatwindow.vue ***! - \************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _chatwindow_vue_vue_type_template_id_13fede38_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./chatwindow.vue?vue&type=template&id=13fede38&scoped=true& */ \"./src/view/pages/chatHome/chatwindow.vue?vue&type=template&id=13fede38&scoped=true&\");\n/* harmony import */ var _chatwindow_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./chatwindow.vue?vue&type=script&lang=js& */ \"./src/view/pages/chatHome/chatwindow.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _chatwindow_vue_vue_type_style_index_0_id_13fede38_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./chatwindow.vue?vue&type=style&index=0&id=13fede38&lang=scss&scoped=true& */ \"./src/view/pages/chatHome/chatwindow.vue?vue&type=style&index=0&id=13fede38&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! 
!../../../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _chatwindow_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _chatwindow_vue_vue_type_template_id_13fede38_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _chatwindow_vue_vue_type_template_id_13fede38_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"13fede38\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/view/pages/chatHome/chatwindow.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/chatwindow.vue?"); - -/***/ }), - -/***/ "./src/view/pages/chatHome/index.vue": -/*!*******************************************!*\ - !*** ./src/view/pages/chatHome/index.vue ***! - \*******************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _index_vue_vue_type_template_id_c6884a34_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./index.vue?vue&type=template&id=c6884a34&scoped=true& */ \"./src/view/pages/chatHome/index.vue?vue&type=template&id=c6884a34&scoped=true&\");\n/* harmony import */ var _index_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./index.vue?vue&type=script&lang=js& */ \"./src/view/pages/chatHome/index.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _index_vue_vue_type_style_index_0_id_c6884a34_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./index.vue?vue&type=style&index=0&id=c6884a34&lang=scss&scoped=true& */ \"./src/view/pages/chatHome/index.vue?vue&type=style&index=0&id=c6884a34&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _index_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _index_vue_vue_type_template_id_c6884a34_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render,\n _index_vue_vue_type_template_id_c6884a34_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n \"c6884a34\",\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/view/pages/chatHome/index.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/index.vue?"); - -/***/ }), - -/***/ "./src/view/pages/setting.vue": -/*!************************************!*\ - !*** ./src/view/pages/setting.vue ***! 
- \************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _setting_vue_vue_type_template_id_f89df198___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./setting.vue?vue&type=template&id=f89df198& */ \"./src/view/pages/setting.vue?vue&type=template&id=f89df198&\");\n/* harmony import */ var _setting_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./setting.vue?vue&type=script&lang=js& */ \"./src/view/pages/setting.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _setting_vue_vue_type_style_index_0_id_f89df198_lang_css___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./setting.vue?vue&type=style&index=0&id=f89df198&lang=css& */ \"./src/view/pages/setting.vue?vue&type=style&index=0&id=f89df198&lang=css&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! !../../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _setting_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _setting_vue_vue_type_template_id_f89df198___WEBPACK_IMPORTED_MODULE_0__.render,\n _setting_vue_vue_type_template_id_f89df198___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n null,\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/view/pages/setting.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/setting.vue?"); - -/***/ }), - -/***/ "./src/view/pages/user/userInfo.vue": -/*!******************************************!*\ - !*** ./src/view/pages/user/userInfo.vue ***! - \******************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _userInfo_vue_vue_type_template_id_3c4a7241___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./userInfo.vue?vue&type=template&id=3c4a7241& */ \"./src/view/pages/user/userInfo.vue?vue&type=template&id=3c4a7241&\");\n/* harmony import */ var _userInfo_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ./userInfo.vue?vue&type=script&lang=js& */ \"./src/view/pages/user/userInfo.vue?vue&type=script&lang=js&\");\n/* harmony import */ var _userInfo_vue_vue_type_style_index_0_id_3c4a7241_lang_css___WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./userInfo.vue?vue&type=style&index=0&id=3c4a7241&lang=css& */ \"./src/view/pages/user/userInfo.vue?vue&type=style&index=0&id=3c4a7241&lang=css&\");\n/* harmony import */ var _node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! 
!../../../../node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js */ \"./node_modules/@vue/vue-loader-v15/lib/runtime/componentNormalizer.js\");\n\n\n\n;\n\n\n/* normalize component */\n\nvar component = (0,_node_modules_vue_vue_loader_v15_lib_runtime_componentNormalizer_js__WEBPACK_IMPORTED_MODULE_3__[\"default\"])(\n _userInfo_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n _userInfo_vue_vue_type_template_id_3c4a7241___WEBPACK_IMPORTED_MODULE_0__.render,\n _userInfo_vue_vue_type_template_id_3c4a7241___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns,\n false,\n null,\n null,\n null\n \n)\n\n/* hot reload */\nif (false) { var api; }\ncomponent.options.__file = \"src/view/pages/user/userInfo.vue\"\n/* harmony default export */ __webpack_exports__[\"default\"] = (component.exports);\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/user/userInfo.vue?"); - -/***/ }), - -/***/ "./src/App.vue?vue&type=script&lang=js&": -/*!**********************************************!*\ - !*** ./src/App.vue?vue&type=script&lang=js& ***! - \**********************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./App.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/App.vue?"); - -/***/ }), - -/***/ "./src/components/Emoji.vue?vue&type=script&lang=js&": -/*!***********************************************************!*\ - !*** ./src/components/Emoji.vue?vue&type=script&lang=js& ***! - \***********************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Emoji.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/Emoji.vue?"); - -/***/ }), - -/***/ "./src/components/File.vue?vue&type=script&lang=js&": -/*!**********************************************************!*\ - !*** ./src/components/File.vue?vue&type=script&lang=js& ***! - \**********************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./File.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/File.vue?"); - -/***/ }), - -/***/ "./src/components/FileCard.vue?vue&type=script&lang=js&": -/*!**************************************************************!*\ - !*** ./src/components/FileCard.vue?vue&type=script&lang=js& ***! - \**************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./FileCard.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/FileCard.vue?"); - -/***/ }), - -/***/ "./src/components/HeadImg.vue?vue&type=script&lang=js&": -/*!*************************************************************!*\ - !*** ./src/components/HeadImg.vue?vue&type=script&lang=js& ***! - \*************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./HeadImg.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadImg.vue?"); - -/***/ }), - -/***/ "./src/components/HeadPortrait.vue?vue&type=script&lang=js&": -/*!******************************************************************!*\ - !*** ./src/components/HeadPortrait.vue?vue&type=script&lang=js& ***! - \******************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./HeadPortrait.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadPortrait.vue?"); - -/***/ }), - -/***/ "./src/components/Nav.vue?vue&type=script&lang=js&": -/*!*********************************************************!*\ - !*** ./src/components/Nav.vue?vue&type=script&lang=js& ***! - \*********************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Nav.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/Nav.vue?"); - -/***/ }), - -/***/ "./src/components/PersonCard.vue?vue&type=script&lang=js&": -/*!****************************************************************!*\ - !*** ./src/components/PersonCard.vue?vue&type=script&lang=js& ***! - \****************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./PersonCard.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/PersonCard.vue?"); - -/***/ }), - -/***/ "./src/components/RoleCard.vue?vue&type=script&lang=js&": -/*!**************************************************************!*\ - !*** ./src/components/RoleCard.vue?vue&type=script&lang=js& ***! - \**************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./RoleCard.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/RoleCard.vue?"); - -/***/ }), - -/***/ "./src/components/Session.vue?vue&type=script&lang=js&": -/*!*************************************************************!*\ - !*** ./src/components/Session.vue?vue&type=script&lang=js& ***! - \*************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Session.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/components/Session.vue?"); - -/***/ }), - -/***/ "./src/view/home.vue?vue&type=script&lang=js&": -/*!****************************************************!*\ - !*** ./src/view/home.vue?vue&type=script&lang=js& ***! - \****************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./home.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/view/home.vue?"); - -/***/ }), - -/***/ "./src/view/pages/chatHome/chatwindow.vue?vue&type=script&lang=js&": -/*!*************************************************************************!*\ - !*** ./src/view/pages/chatHome/chatwindow.vue?vue&type=script&lang=js& ***! - \*************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./chatwindow.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/chatwindow.vue?"); - -/***/ }), - -/***/ "./src/view/pages/chatHome/index.vue?vue&type=script&lang=js&": -/*!********************************************************************!*\ - !*** ./src/view/pages/chatHome/index.vue?vue&type=script&lang=js& ***! - \********************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./index.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/index.vue?"); - -/***/ }), - -/***/ "./src/view/pages/setting.vue?vue&type=script&lang=js&": -/*!*************************************************************!*\ - !*** ./src/view/pages/setting.vue?vue&type=script&lang=js& ***! - \*************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./setting.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/setting.vue?"); - -/***/ }), - -/***/ "./src/view/pages/user/userInfo.vue?vue&type=script&lang=js&": -/*!*******************************************************************!*\ - !*** ./src/view/pages/user/userInfo.vue?vue&type=script&lang=js& ***! - \*******************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./userInfo.vue?vue&type=script&lang=js& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=script&lang=js&\");\n /* harmony default export */ __webpack_exports__[\"default\"] = (_node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_script_lang_js___WEBPACK_IMPORTED_MODULE_0__[\"default\"]); \n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/user/userInfo.vue?"); - -/***/ }), - -/***/ "./src/App.vue?vue&type=template&id=7ba5bd90&": -/*!****************************************************!*\ - !*** ./src/App.vue?vue&type=template&id=7ba5bd90& ***! 
- \****************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_template_id_7ba5bd90___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_template_id_7ba5bd90___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_template_id_7ba5bd90___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./App.vue?vue&type=template&id=7ba5bd90& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=template&id=7ba5bd90&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/App.vue?"); - -/***/ }), - -/***/ "./src/components/Emoji.vue?vue&type=template&id=534ad946&scoped=true&": -/*!*****************************************************************************!*\ - !*** ./src/components/Emoji.vue?vue&type=template&id=534ad946&scoped=true& ***! 
- \*****************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_template_id_534ad946_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_template_id_534ad946_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_template_id_534ad946_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Emoji.vue?vue&type=template&id=534ad946&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=template&id=534ad946&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Emoji.vue?"); - -/***/ }), - -/***/ "./src/components/File.vue?vue&type=template&id=ab80f8a8&scoped=true&": -/*!****************************************************************************!*\ - !*** ./src/components/File.vue?vue&type=template&id=ab80f8a8&scoped=true& ***! 
- \****************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_template_id_ab80f8a8_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_template_id_ab80f8a8_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_template_id_ab80f8a8_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./File.vue?vue&type=template&id=ab80f8a8&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=template&id=ab80f8a8&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/File.vue?"); - -/***/ }), - -/***/ "./src/components/FileCard.vue?vue&type=template&id=48849e48&scoped=true&": -/*!********************************************************************************!*\ - !*** ./src/components/FileCard.vue?vue&type=template&id=48849e48&scoped=true& ***! 
- \********************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_template_id_48849e48_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_template_id_48849e48_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_template_id_48849e48_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./FileCard.vue?vue&type=template&id=48849e48&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=template&id=48849e48&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/FileCard.vue?"); - -/***/ }), - -/***/ "./src/components/HeadImg.vue?vue&type=template&id=0b1d9e43&scoped=true&": -/*!*******************************************************************************!*\ - !*** ./src/components/HeadImg.vue?vue&type=template&id=0b1d9e43&scoped=true& ***! 
- \*******************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_template_id_0b1d9e43_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_template_id_0b1d9e43_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_template_id_0b1d9e43_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./HeadImg.vue?vue&type=template&id=0b1d9e43&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=template&id=0b1d9e43&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadImg.vue?"); - -/***/ }), - -/***/ "./src/components/HeadPortrait.vue?vue&type=template&id=24585c4b&scoped=true&": -/*!************************************************************************************!*\ - !*** ./src/components/HeadPortrait.vue?vue&type=template&id=24585c4b&scoped=true& ***! 
- \************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_template_id_24585c4b_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_template_id_24585c4b_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_template_id_24585c4b_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./HeadPortrait.vue?vue&type=template&id=24585c4b&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=template&id=24585c4b&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadPortrait.vue?"); - -/***/ }), - -/***/ "./src/components/Nav.vue?vue&type=template&id=65af85a3&scoped=true&": -/*!***************************************************************************!*\ - !*** ./src/components/Nav.vue?vue&type=template&id=65af85a3&scoped=true& ***! 
- \***************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_template_id_65af85a3_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_template_id_65af85a3_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_template_id_65af85a3_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Nav.vue?vue&type=template&id=65af85a3&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=template&id=65af85a3&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Nav.vue?"); - -/***/ }), - -/***/ "./src/components/PersonCard.vue?vue&type=template&id=d74d3096&scoped=true&": -/*!**********************************************************************************!*\ - !*** ./src/components/PersonCard.vue?vue&type=template&id=d74d3096&scoped=true& ***! 
- \**********************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_template_id_d74d3096_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_template_id_d74d3096_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_template_id_d74d3096_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./PersonCard.vue?vue&type=template&id=d74d3096&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=template&id=d74d3096&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/PersonCard.vue?"); - -/***/ }), - -/***/ "./src/components/RoleCard.vue?vue&type=template&id=9524bc54&scoped=true&": -/*!********************************************************************************!*\ - !*** ./src/components/RoleCard.vue?vue&type=template&id=9524bc54&scoped=true& ***! 
- \********************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_template_id_9524bc54_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_template_id_9524bc54_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_template_id_9524bc54_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./RoleCard.vue?vue&type=template&id=9524bc54&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=template&id=9524bc54&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/RoleCard.vue?"); - -/***/ }), - -/***/ "./src/components/Session.vue?vue&type=template&id=d6f30cd4&scoped=true&": -/*!*******************************************************************************!*\ - !*** ./src/components/Session.vue?vue&type=template&id=d6f30cd4&scoped=true& ***! 
- \*******************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_template_id_d6f30cd4_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_template_id_d6f30cd4_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_template_id_d6f30cd4_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Session.vue?vue&type=template&id=d6f30cd4&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=template&id=d6f30cd4&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Session.vue?"); - -/***/ }), - -/***/ "./src/view/home.vue?vue&type=template&id=73eb9c00&scoped=true&": -/*!**********************************************************************!*\ - !*** ./src/view/home.vue?vue&type=template&id=73eb9c00&scoped=true& ***! 
- \**********************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_template_id_73eb9c00_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_template_id_73eb9c00_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_template_id_73eb9c00_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./home.vue?vue&type=template&id=73eb9c00&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=template&id=73eb9c00&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/home.vue?"); - -/***/ }), - -/***/ "./src/view/pages/chatHome/chatwindow.vue?vue&type=template&id=13fede38&scoped=true&": -/*!*******************************************************************************************!*\ - !*** ./src/view/pages/chatHome/chatwindow.vue?vue&type=template&id=13fede38&scoped=true& ***! 
- \*******************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_template_id_13fede38_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_template_id_13fede38_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_template_id_13fede38_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./chatwindow.vue?vue&type=template&id=13fede38&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=template&id=13fede38&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/chatwindow.vue?"); - -/***/ }), - -/***/ "./src/view/pages/chatHome/index.vue?vue&type=template&id=c6884a34&scoped=true&": -/*!**************************************************************************************!*\ - !*** ./src/view/pages/chatHome/index.vue?vue&type=template&id=c6884a34&scoped=true& ***! 
- \**************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_template_id_c6884a34_scoped_true___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_template_id_c6884a34_scoped_true___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_template_id_c6884a34_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./index.vue?vue&type=template&id=c6884a34&scoped=true& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=template&id=c6884a34&scoped=true&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/index.vue?"); - -/***/ }), - -/***/ "./src/view/pages/setting.vue?vue&type=template&id=f89df198&": -/*!*******************************************************************!*\ - !*** ./src/view/pages/setting.vue?vue&type=template&id=f89df198& ***! 
- \*******************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_template_id_f89df198___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_template_id_f89df198___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_template_id_f89df198___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./setting.vue?vue&type=template&id=f89df198& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=template&id=f89df198&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/setting.vue?"); - -/***/ }), - -/***/ "./src/view/pages/user/userInfo.vue?vue&type=template&id=3c4a7241&": -/*!*************************************************************************!*\ - !*** ./src/view/pages/user/userInfo.vue?vue&type=template&id=3c4a7241& ***! 
- \*************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_template_id_3c4a7241___WEBPACK_IMPORTED_MODULE_0__.render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* reexport safe */ _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_template_id_3c4a7241___WEBPACK_IMPORTED_MODULE_0__.staticRenderFns; }\n/* harmony export */ });\n/* harmony import */ var _node_modules_babel_loader_lib_index_js_clonedRuleSet_40_use_0_node_modules_vue_vue_loader_v15_lib_loaders_templateLoader_js_ruleSet_1_rules_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_template_id_3c4a7241___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../../../node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!../../../../node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./userInfo.vue?vue&type=template&id=3c4a7241& */ \"./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=template&id=3c4a7241&\");\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/user/userInfo.vue?"); - -/***/ }), - -/***/ "./src/view/pages/setting.vue?vue&type=style&index=0&id=f89df198&lang=css&": -/*!*********************************************************************************!*\ - !*** ./src/view/pages/setting.vue?vue&type=style&index=0&id=f89df198&lang=css& ***! - \*********************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_style_index_0_id_f89df198_lang_css___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../../node_modules/vue-style-loader/index.js??clonedRuleSet-12.use[0]!../../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!../../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./setting.vue?vue&type=style&index=0&id=f89df198&lang=css& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-12.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=style&index=0&id=f89df198&lang=css&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_style_index_0_id_f89df198_lang_css___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_style_index_0_id_f89df198_lang_css___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_style_index_0_id_f89df198_lang_css___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_setting_vue_vue_type_style_index_0_id_f89df198_lang_css___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/setting.vue?"); - -/***/ }), - -/***/ "./src/view/pages/user/userInfo.vue?vue&type=style&index=0&id=3c4a7241&lang=css&": -/*!***************************************************************************************!*\ - !*** ./src/view/pages/user/userInfo.vue?vue&type=style&index=0&id=3c4a7241&lang=css& ***! 
- \***************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_style_index_0_id_3c4a7241_lang_css___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../../../node_modules/vue-style-loader/index.js??clonedRuleSet-12.use[0]!../../../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!../../../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./userInfo.vue?vue&type=style&index=0&id=3c4a7241&lang=css& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-12.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=style&index=0&id=3c4a7241&lang=css&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_style_index_0_id_3c4a7241_lang_css___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_style_index_0_id_3c4a7241_lang_css___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_style_index_0_id_3c4a7241_lang_css___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_12_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_12_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_12_use_2_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_userInfo_vue_vue_type_style_index_0_id_3c4a7241_lang_css___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport 
(unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/user/userInfo.vue?"); - -/***/ }), - -/***/ "./src/App.vue?vue&type=style&index=0&id=7ba5bd90&lang=scss&": -/*!*******************************************************************!*\ - !*** ./src/App.vue?vue&type=style&index=0&id=7ba5bd90&lang=scss& ***! - \*******************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_style_index_0_id_7ba5bd90_lang_scss___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./App.vue?vue&type=style&index=0&id=7ba5bd90&lang=scss& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=style&index=0&id=7ba5bd90&lang=scss&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_style_index_0_id_7ba5bd90_lang_scss___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_style_index_0_id_7ba5bd90_lang_scss___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in 
_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_style_index_0_id_7ba5bd90_lang_scss___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_App_vue_vue_type_style_index_0_id_7ba5bd90_lang_scss___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/App.vue?"); - -/***/ }), - -/***/ "./src/components/Emoji.vue?vue&type=style&index=0&id=534ad946&lang=scss&scoped=true&": -/*!********************************************************************************************!*\ - !*** ./src/components/Emoji.vue?vue&type=style&index=0&id=534ad946&lang=scss&scoped=true& ***! - \********************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_style_index_0_id_534ad946_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Emoji.vue?vue&type=style&index=0&id=534ad946&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=style&index=0&id=534ad946&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_style_index_0_id_534ad946_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_style_index_0_id_534ad946_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_style_index_0_id_534ad946_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Emoji_vue_vue_type_style_index_0_id_534ad946_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Emoji.vue?"); - -/***/ }), - -/***/ "./src/components/File.vue?vue&type=style&index=0&id=ab80f8a8&lang=scss&scoped=true&": 
-/*!*******************************************************************************************!*\ - !*** ./src/components/File.vue?vue&type=style&index=0&id=ab80f8a8&lang=scss&scoped=true& ***! - \*******************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_style_index_0_id_ab80f8a8_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./File.vue?vue&type=style&index=0&id=ab80f8a8&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=style&index=0&id=ab80f8a8&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_style_index_0_id_ab80f8a8_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_style_index_0_id_ab80f8a8_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in 
_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_style_index_0_id_ab80f8a8_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_File_vue_vue_type_style_index_0_id_ab80f8a8_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/File.vue?"); - -/***/ }), - -/***/ "./src/components/FileCard.vue?vue&type=style&index=0&id=48849e48&lang=scss&scoped=true&": -/*!***********************************************************************************************!*\ - !*** ./src/components/FileCard.vue?vue&type=style&index=0&id=48849e48&lang=scss&scoped=true& ***! - \***********************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_style_index_0_id_48849e48_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./FileCard.vue?vue&type=style&index=0&id=48849e48&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=style&index=0&id=48849e48&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_style_index_0_id_48849e48_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_style_index_0_id_48849e48_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_style_index_0_id_48849e48_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_FileCard_vue_vue_type_style_index_0_id_48849e48_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/FileCard.vue?"); - -/***/ }), - -/***/ "./src/components/HeadImg.vue?vue&type=style&index=0&id=0b1d9e43&lang=scss&scoped=true&": 
-/*!**********************************************************************************************!*\ - !*** ./src/components/HeadImg.vue?vue&type=style&index=0&id=0b1d9e43&lang=scss&scoped=true& ***! - \**********************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_style_index_0_id_0b1d9e43_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./HeadImg.vue?vue&type=style&index=0&id=0b1d9e43&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=style&index=0&id=0b1d9e43&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_style_index_0_id_0b1d9e43_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_style_index_0_id_0b1d9e43_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in 
_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_style_index_0_id_0b1d9e43_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadImg_vue_vue_type_style_index_0_id_0b1d9e43_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadImg.vue?"); - -/***/ }), - -/***/ "./src/components/HeadPortrait.vue?vue&type=style&index=0&id=24585c4b&lang=scss&scoped=true&": -/*!***************************************************************************************************!*\ - !*** ./src/components/HeadPortrait.vue?vue&type=style&index=0&id=24585c4b&lang=scss&scoped=true& ***! - \***************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_style_index_0_id_24585c4b_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./HeadPortrait.vue?vue&type=style&index=0&id=24585c4b&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=style&index=0&id=24585c4b&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_style_index_0_id_24585c4b_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_style_index_0_id_24585c4b_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_style_index_0_id_24585c4b_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_HeadPortrait_vue_vue_type_style_index_0_id_24585c4b_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadPortrait.vue?"); - -/***/ }), - -/***/ 
"./src/components/Nav.vue?vue&type=style&index=0&id=65af85a3&lang=scss&scoped=true&": -/*!******************************************************************************************!*\ - !*** ./src/components/Nav.vue?vue&type=style&index=0&id=65af85a3&lang=scss&scoped=true& ***! - \******************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_style_index_0_id_65af85a3_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Nav.vue?vue&type=style&index=0&id=65af85a3&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=style&index=0&id=65af85a3&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_style_index_0_id_65af85a3_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_style_index_0_id_65af85a3_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in 
_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_style_index_0_id_65af85a3_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Nav_vue_vue_type_style_index_0_id_65af85a3_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Nav.vue?"); - -/***/ }), - -/***/ "./src/components/PersonCard.vue?vue&type=style&index=0&id=d74d3096&lang=scss&scoped=true&": -/*!*************************************************************************************************!*\ - !*** ./src/components/PersonCard.vue?vue&type=style&index=0&id=d74d3096&lang=scss&scoped=true& ***! - \*************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_style_index_0_id_d74d3096_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./PersonCard.vue?vue&type=style&index=0&id=d74d3096&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=style&index=0&id=d74d3096&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_style_index_0_id_d74d3096_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_style_index_0_id_d74d3096_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_style_index_0_id_d74d3096_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_PersonCard_vue_vue_type_style_index_0_id_d74d3096_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/PersonCard.vue?"); - -/***/ }), - -/***/ "./src/components/RoleCard.vue?vue&type=style&index=0&id=9524bc54&lang=scss&scoped=true&": 
-/*!***********************************************************************************************!*\ - !*** ./src/components/RoleCard.vue?vue&type=style&index=0&id=9524bc54&lang=scss&scoped=true& ***! - \***********************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_style_index_0_id_9524bc54_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./RoleCard.vue?vue&type=style&index=0&id=9524bc54&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=style&index=0&id=9524bc54&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_style_index_0_id_9524bc54_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_style_index_0_id_9524bc54_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in 
_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_style_index_0_id_9524bc54_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_RoleCard_vue_vue_type_style_index_0_id_9524bc54_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/RoleCard.vue?"); - -/***/ }), - -/***/ "./src/components/Session.vue?vue&type=style&index=0&id=d6f30cd4&lang=scss&scoped=true&": -/*!**********************************************************************************************!*\ - !*** ./src/components/Session.vue?vue&type=style&index=0&id=d6f30cd4&lang=scss&scoped=true& ***! - \**********************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_style_index_0_id_d6f30cd4_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./Session.vue?vue&type=style&index=0&id=d6f30cd4&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=style&index=0&id=d6f30cd4&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_style_index_0_id_d6f30cd4_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_style_index_0_id_d6f30cd4_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_style_index_0_id_d6f30cd4_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_Session_vue_vue_type_style_index_0_id_d6f30cd4_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Session.vue?"); - -/***/ }), - -/***/ "./src/view/home.vue?vue&type=style&index=0&id=73eb9c00&lang=scss&scoped=true&": 
-/*!*************************************************************************************!*\ - !*** ./src/view/home.vue?vue&type=style&index=0&id=73eb9c00&lang=scss&scoped=true& ***! - \*************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_style_index_0_id_73eb9c00_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./home.vue?vue&type=style&index=0&id=73eb9c00&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=style&index=0&id=73eb9c00&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_style_index_0_id_73eb9c00_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_style_index_0_id_73eb9c00_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_style_index_0_id_73eb9c00_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) 
if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_home_vue_vue_type_style_index_0_id_73eb9c00_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/home.vue?"); - -/***/ }), - -/***/ "./src/view/pages/chatHome/chatwindow.vue?vue&type=style&index=0&id=13fede38&lang=scss&scoped=true&": -/*!**********************************************************************************************************!*\ - !*** ./src/view/pages/chatHome/chatwindow.vue?vue&type=style&index=0&id=13fede38&lang=scss&scoped=true& ***! - \**********************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_style_index_0_id_13fede38_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
-!../../../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./chatwindow.vue?vue&type=style&index=0&id=13fede38&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=style&index=0&id=13fede38&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_style_index_0_id_13fede38_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_style_index_0_id_13fede38_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_style_index_0_id_13fede38_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_chatwindow_vue_vue_type_style_index_0_id_13fede38_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/chatwindow.vue?"); - -/***/ }), - -/***/ 
"./src/view/pages/chatHome/index.vue?vue&type=style&index=0&id=c6884a34&lang=scss&scoped=true&": -/*!*****************************************************************************************************!*\ - !*** ./src/view/pages/chatHome/index.vue?vue&type=style&index=0&id=c6884a34&lang=scss&scoped=true& ***! - \*****************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_style_index_0_id_c6884a34_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! -!../../../../node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!../../../../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../../../../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../../../../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!../../../../node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!../../../../node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./index.vue?vue&type=style&index=0&id=c6884a34&lang=scss&scoped=true& */ \"./node_modules/vue-style-loader/index.js??clonedRuleSet-22.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=style&index=0&id=c6884a34&lang=scss&scoped=true&\");\n/* harmony import */ var _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_style_index_0_id_c6884a34_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_style_index_0_id_c6884a34_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__);\n/* harmony reexport (unknown) */ var __WEBPACK_REEXPORT_OBJECT__ = {};\n/* harmony reexport (unknown) */ for(var __WEBPACK_IMPORT_KEY__ in 
_node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_style_index_0_id_c6884a34_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__) if(__WEBPACK_IMPORT_KEY__ !== \"default\") __WEBPACK_REEXPORT_OBJECT__[__WEBPACK_IMPORT_KEY__] = function(key) { return _node_modules_vue_style_loader_index_js_clonedRuleSet_22_use_0_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_node_modules_sass_loader_dist_cjs_js_clonedRuleSet_22_use_3_node_modules_vue_vue_loader_v15_lib_index_js_vue_loader_options_index_vue_vue_type_style_index_0_id_c6884a34_lang_scss_scoped_true___WEBPACK_IMPORTED_MODULE_0__[key]; }.bind(0, __WEBPACK_IMPORT_KEY__)\n/* harmony reexport (unknown) */ __webpack_require__.d(__webpack_exports__, __WEBPACK_REEXPORT_OBJECT__);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/index.vue?"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=script&lang=js&": -/*!************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=script&lang=js& ***! - \************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _assets_font_font_css__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! @/assets/font/font.css */ \"./src/assets/font/font.css\");\n/* harmony import */ var _assets_font_font_css__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_assets_font_font_css__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _view_home_vue__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
./view/home.vue */ \"./src/view/home.vue\");\n\n\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n name: 'App',\n components: {\n Home: _view_home_vue__WEBPACK_IMPORTED_MODULE_1__[\"default\"]\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/App.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=script&lang=js&": -/*!*************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=script&lang=js& ***! - \*************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n data() {\n return {\n emojiList: [__webpack_require__(/*! @/assets/img/emoji/slightly-smiling-face.png */ \"./src/assets/img/emoji/slightly-smiling-face.png\"), __webpack_require__(/*! @/assets/img/emoji/smiling-face.png */ \"./src/assets/img/emoji/smiling-face.png\"), __webpack_require__(/*! @/assets/img/emoji/smiling-face-with-heart-eyes.png */ \"./src/assets/img/emoji/smiling-face-with-heart-eyes.png\"), __webpack_require__(/*! @/assets/img/emoji/smiling-face-with-sunglasses.png */ \"./src/assets/img/emoji/smiling-face-with-sunglasses.png\"), __webpack_require__(/*! @/assets/img/emoji/thinking-face.png */ \"./src/assets/img/emoji/thinking-face.png\"), __webpack_require__(/*! @/assets/img/emoji/tired-face.png */ \"./src/assets/img/emoji/tired-face.png\"), __webpack_require__(/*! @/assets/img/emoji/money-mouth-face.png */ \"./src/assets/img/emoji/money-mouth-face.png\"), __webpack_require__(/*! @/assets/img/emoji/loudly-crying-face.png */ \"./src/assets/img/emoji/loudly-crying-face.png\"), __webpack_require__(/*! @/assets/img/emoji/pouting-face.png */ \"./src/assets/img/emoji/pouting-face.png\"), __webpack_require__(/*! @/assets/img/emoji/face-screaming-in-fear.png */ \"./src/assets/img/emoji/face-screaming-in-fear.png\"), __webpack_require__(/*! @/assets/img/emoji/face-vomiting.png */ \"./src/assets/img/emoji/face-vomiting.png\"), __webpack_require__(/*! @/assets/img/emoji/face-without-mouth.png */ \"./src/assets/img/emoji/face-without-mouth.png\"), __webpack_require__(/*! @/assets/img/emoji/face-with-tongue.png */ \"./src/assets/img/emoji/face-with-tongue.png\"), __webpack_require__(/*! @/assets/img/emoji/clown-face.png */ \"./src/assets/img/emoji/clown-face.png\"), __webpack_require__(/*! @/assets/img/emoji/new-moon-face.png */ \"./src/assets/img/emoji/new-moon-face.png\"), __webpack_require__(/*! @/assets/img/emoji/ghost.png */ \"./src/assets/img/emoji/ghost.png\"), __webpack_require__(/*! @/assets/img/emoji/jack-o-lantern.png */ \"./src/assets/img/emoji/jack-o-lantern.png\"), __webpack_require__(/*! 
@/assets/img/emoji/money-bag.png */ \"./src/assets/img/emoji/money-bag.png\"), __webpack_require__(/*! @/assets/img/emoji/pile-of-poo.png */ \"./src/assets/img/emoji/pile-of-poo.png\"), __webpack_require__(/*! @/assets/img/emoji/shamrock.png */ \"./src/assets/img/emoji/shamrock.png\"), __webpack_require__(/*! @/assets/img/emoji/hibiscus.png */ \"./src/assets/img/emoji/hibiscus.png\"), __webpack_require__(/*! @/assets/img/emoji/lips.png */ \"./src/assets/img/emoji/lips.png\"), __webpack_require__(/*! @/assets/img/emoji/sparkles.png */ \"./src/assets/img/emoji/sparkles.png\"), __webpack_require__(/*! @/assets/img/emoji/star.png */ \"./src/assets/img/emoji/star.png\"), __webpack_require__(/*! @/assets/img/emoji/two-hearts.png */ \"./src/assets/img/emoji/two-hearts.png\"), __webpack_require__(/*! @/assets/img/emoji/rainbow.png */ \"./src/assets/img/emoji/rainbow.png\"), __webpack_require__(/*! @/assets/img/emoji/thought-balloon.png */ \"./src/assets/img/emoji/thought-balloon.png\")]\n };\n },\n methods: {\n sendEmoji(item) {\n this.$emit(\"sendEmoji\", item);\n },\n closeEmoji() {\n this.$emit(\"closeEmoji\");\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/Emoji.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=script&lang=js&": -/*!************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=script&lang=js& ***! 
- \************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n props: {\n fileInfo: {\n default: {}\n },\n pcCurrent: {\n default: ''\n }\n },\n data() {\n return {\n current: ''\n };\n },\n watch: {\n pcCurrent: function () {\n this.isActive();\n }\n },\n methods: {\n isActive() {\n this.current = this.pcCurrent;\n },\n truncateString(str, num) {\n if (str.length <= num) {\n return str;\n }\n return str.slice(0, num) + \"...\";\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/File.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=script&lang=js&": -/*!****************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=script&lang=js& ***! - \****************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* provided dependency */ var console = __webpack_require__(/*! ./node_modules/console-browserify/index.js */ \"./node_modules/console-browserify/index.js\");\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n // props: [\"fileType\", \"file\"],\n props: {\n fileType: Number,\n file: File,\n default() {\n return {};\n }\n },\n watch: {\n file() {\n console.log(this.file);\n }\n },\n mounted() {\n console.log(this.file);\n console.log(this.fileType);\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/FileCard.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=script&lang=js&": -/*!***************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=script&lang=js& ***! 
- \***************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _store_mutation_types__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! @/store/mutation-types */ \"./src/store/mutation-types.js\");\n\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n props: {\n imgUrl: {\n default: _store_mutation_types__WEBPACK_IMPORTED_MODULE_0__.USER_HEAD_IMG_URL\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadImg.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=script&lang=js&": -/*!********************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=script&lang=js& ***! - \********************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _store_mutation_types__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! @/store/mutation-types */ \"./src/store/mutation-types.js\");\n\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n props: {\n imgUrl: {\n default: _store_mutation_types__WEBPACK_IMPORTED_MODULE_0__.USER_HEAD_IMG_URL\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadPortrait.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=script&lang=js&": -/*!***********************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=script&lang=js& ***! - \***********************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
core-js/modules/es.array.push.js */ \"./node_modules/core-js/modules/es.array.push.js\");\n/* harmony import */ var core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _store_mutation_types__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! @/store/mutation-types */ \"./src/store/mutation-types.js\");\n/* harmony import */ var _HeadPortrait_vue__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./HeadPortrait.vue */ \"./src/components/HeadPortrait.vue\");\n/* harmony import */ var _HeadImg_vue__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! ./HeadImg.vue */ \"./src/components/HeadImg.vue\");\n\n\n\n\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n components: {\n HeadPortrait: _HeadPortrait_vue__WEBPACK_IMPORTED_MODULE_2__[\"default\"],\n HeadImg: _HeadImg_vue__WEBPACK_IMPORTED_MODULE_3__[\"default\"]\n },\n data() {\n return {\n menuList: [\"icon-xinxi\", \"icon-shezhi\"],\n current: 0,\n imgUrl: _store_mutation_types__WEBPACK_IMPORTED_MODULE_1__.USER_HEAD_IMG_URL\n };\n },\n methods: {\n changeMenu(index) {\n switch (index) {\n case 0:\n this.$router.push({\n name: \"ChatHome\"\n }, () => {});\n break;\n case 1:\n this.$router.push({\n name: \"Setting\"\n }, () => {});\n break;\n default:\n this.$router.push({\n name: \"ChatHome\"\n });\n }\n this.current = index;\n },\n userInfoShow() {\n this.$router.push({\n name: \"UserInfo\"\n }, () => {});\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/Nav.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=script&lang=js&": -/*!******************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=script&lang=js& ***! - \******************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _HeadPortrait_vue__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
./HeadPortrait.vue */ \"./src/components/HeadPortrait.vue\");\n\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n props: {\n personInfo: {\n default: {}\n },\n pcCurrent: {\n default: ''\n }\n },\n components: {\n HeadPortrait: _HeadPortrait_vue__WEBPACK_IMPORTED_MODULE_0__[\"default\"]\n },\n data() {\n return {\n current: ''\n };\n },\n watch: {\n pcCurrent() {\n this.isActive();\n }\n },\n methods: {\n isActive() {\n this.current = this.pcCurrent;\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/PersonCard.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=script&lang=js&": -/*!****************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=script&lang=js& ***! - \****************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _HeadPortrait_vue__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./HeadPortrait.vue */ \"./src/components/HeadPortrait.vue\");\n\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n props: {\n roleInfo: {\n default: {}\n },\n prCurrent: {\n default: ''\n }\n },\n components: {\n HeadPortrait: _HeadPortrait_vue__WEBPACK_IMPORTED_MODULE_0__[\"default\"]\n },\n data() {\n return {\n current: ''\n };\n },\n watch: {\n pcCurrent() {\n this.isActive();\n }\n },\n methods: {\n isActive() {\n this.current = this.prCurrent;\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/RoleCard.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=script&lang=js&": -/*!***************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=script&lang=js& ***! 
- \***************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n props: {\n sessionInfo: {\n default: {}\n },\n pcCurrent: {\n default: ''\n }\n },\n data() {\n return {\n current: ''\n };\n },\n watch: {\n pcCurrent: function () {\n this.isActive();\n }\n },\n methods: {\n isActive() {\n this.current = this.pcCurrent;\n },\n truncateString(str, num) {\n if (str.length <= num) {\n return str;\n }\n return str.slice(0, num) + \"...\";\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/components/Session.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=script&lang=js&": -/*!******************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=script&lang=js& ***! - \******************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _components_Nav_vue__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../components/Nav.vue */ \"./src/components/Nav.vue\");\n\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n name: \"App\",\n components: {\n Nav: _components_Nav_vue__WEBPACK_IMPORTED_MODULE_0__[\"default\"]\n },\n data() {\n return {\n asideStatus: true\n };\n },\n created() {\n window.addEventListener('resize', this.handleResize);\n this.handleResize();\n },\n destoryed() {\n window.removeEventListener('resize', this.handleResize);\n },\n methods: {\n //监听窗口尺寸的变化\n handleResize() {\n if (window.innerWidth <= 1150) {\n this.asideStatus = false;\n } else {\n this.asideStatus = true;\n }\n ;\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/view/home.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=script&lang=js&": -/*!***************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=script&lang=js& ***! 
- \***************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! core-js/modules/es.array.push.js */ \"./node_modules/core-js/modules/es.array.push.js\");\n/* harmony import */ var core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _util_util__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! @/util/util */ \"./src/util/util.js\");\n/* harmony import */ var _api_getData__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! @/api/getData */ \"./src/api/getData.js\");\n/* harmony import */ var _components_HeadPortrait__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! @/components/HeadPortrait */ \"./src/components/HeadPortrait.vue\");\n/* harmony import */ var _components_Emoji__WEBPACK_IMPORTED_MODULE_4__ = __webpack_require__(/*! @/components/Emoji */ \"./src/components/Emoji.vue\");\n/* harmony import */ var _components_FileCard_vue__WEBPACK_IMPORTED_MODULE_5__ = __webpack_require__(/*! @/components/FileCard.vue */ \"./src/components/FileCard.vue\");\n/* harmony import */ var _api_index__WEBPACK_IMPORTED_MODULE_6__ = __webpack_require__(/*! @/api/index */ \"./src/api/index.js\");\n/* harmony import */ var markdown_it_vue__WEBPACK_IMPORTED_MODULE_7__ = __webpack_require__(/*! markdown-it-vue */ \"./node_modules/markdown-it-vue/dist/markdown-it-vue.umd.min.js\");\n/* harmony import */ var markdown_it_vue__WEBPACK_IMPORTED_MODULE_7___default = /*#__PURE__*/__webpack_require__.n(markdown_it_vue__WEBPACK_IMPORTED_MODULE_7__);\n/* harmony import */ var markdown_it_vue_dist_markdown_it_vue_css__WEBPACK_IMPORTED_MODULE_8__ = __webpack_require__(/*! markdown-it-vue/dist/markdown-it-vue.css */ \"./node_modules/markdown-it-vue/dist/markdown-it-vue.css\");\n/* harmony import */ var markdown_it_vue_dist_markdown_it_vue_css__WEBPACK_IMPORTED_MODULE_8___default = /*#__PURE__*/__webpack_require__.n(markdown_it_vue_dist_markdown_it_vue_css__WEBPACK_IMPORTED_MODULE_8__);\n/* harmony import */ var _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__ = __webpack_require__(/*! @/store/mutation-types */ \"./src/store/mutation-types.js\");\n/* harmony import */ var file_saver__WEBPACK_IMPORTED_MODULE_10__ = __webpack_require__(/*! file-saver */ \"./node_modules/file-saver/dist/FileSaver.min.js\");\n/* harmony import */ var file_saver__WEBPACK_IMPORTED_MODULE_10___default = /*#__PURE__*/__webpack_require__.n(file_saver__WEBPACK_IMPORTED_MODULE_10__);\n/* provided dependency */ var console = __webpack_require__(/*! 
./node_modules/console-browserify/index.js */ \"./node_modules/console-browserify/index.js\");\n\n\n\n\n\n\n\n\n\n\n\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n directives: {\n //用于自适应文本框的高度\n autoheight: {\n inserted: function (el) {\n var Msg = document.getElementById(\"textareaMsg\").value;\n if (Msg == \"\") {\n el.style.height = \"26px\";\n } else {\n el.style.height = el.scrollHeight + 'px';\n }\n },\n update: function (el) {\n var Msg = document.getElementById(\"textareaMsg\").value;\n if (Msg == \"\") {\n el.style.height = \"26px\";\n } else {\n el.style.height = el.scrollHeight + 'px';\n }\n }\n }\n },\n components: {\n HeadPortrait: _components_HeadPortrait__WEBPACK_IMPORTED_MODULE_3__[\"default\"],\n Emoji: _components_Emoji__WEBPACK_IMPORTED_MODULE_4__[\"default\"],\n FileCard: _components_FileCard_vue__WEBPACK_IMPORTED_MODULE_5__[\"default\"],\n MarkdownItVue: (markdown_it_vue__WEBPACK_IMPORTED_MODULE_7___default())\n },\n props: {\n storeStatu: Number,\n settingInfo: Object,\n frinedInfo: Object,\n default() {\n return {};\n }\n },\n watch: {},\n data() {\n return {\n isAutoScroll: true,\n fileArrays: [],\n inputsStatus: true,\n rows: 1,\n //是否显示表情和录音按钮\n buttonStatus: true,\n //是否在接收消息中,如果是则true待发送状态,如果是false则是等待消息转圈状态\n acqStatus: true,\n chatList: [],\n inputMsg: \"\",\n showEmoji: false,\n friendInfo: {},\n srcImgList: [],\n recording: false,\n audioChunks: [],\n screenshot: \"\",\n contentBackImageUrl: \"https://bpic.51yuansu.com/backgd/cover/00/31/39/5bc8088deeedd.jpg?x-oss-process=image/resize,w_780\",\n updateImage: null,\n // 是否隐藏对话框上方介绍(空间局促时隐藏)\n personInfoSpan: [1, 17, 6]\n };\n },\n created() {\n window.addEventListener('resize', this.handleResize);\n this.handleResize();\n },\n destoryed() {\n window.removeEventListener('resize', this.handleResize);\n },\n methods: {\n handleKeyDown(event) {\n if (event.keyCode === 13 && !event.shiftKey) {\n // 按下回车键,没按shift\n this.sendText();\n }\n },\n readStream(reader, _this, currentResLocation) {\n return reader.read().then(({\n done,\n value\n }) => {\n if (done) {\n return;\n }\n if (!_this.chatList[currentResLocation].reminder) {\n _this.chatList[currentResLocation].reminder = \"\";\n }\n let decoded = new TextDecoder().decode(value);\n decoded = _this.chatList[currentResLocation].reminder + decoded;\n let decodedArray = decoded.split(\"data: \");\n decodedArray.forEach(decoded => {\n if (decoded !== \"\") {\n if (decoded.trim() === \"[DONE]\") {\n return;\n } else {\n const response = JSON.parse(decoded).choices[0].delta.content ? 
JSON.parse(decoded).choices[0].delta.content : \"\";\n _this.chatList[currentResLocation].msg = _this.chatList[currentResLocation].msg + response;\n }\n }\n });\n return this.readStream(reader, _this, currentResLocation);\n });\n },\n //导入当前内容json触发的方法\n importFromJsonArr() {\n this.$refs.onupdateJosnArr.click(); // 触发选择文件的弹框\n },\n\n handleFileUpload(event) {\n const file = event.target.files[0];\n const reader = new FileReader();\n reader.onload = () => {\n const fileContent = reader.result; // 文件内容\n const parsed = JSON.parse(fileContent); // 转换为数组\n this.chatList = this.chatList.concat(parsed);\n };\n reader.readAsText(file);\n },\n //导出当前会话到json文件\n exportObjArrToJson() {\n console.log(this.chatList);\n let jsonString = JSON.stringify(this.chatList); // 将数组转为JSON字符串\n let blob = new Blob([jsonString], {\n type: \"application/json;charset=utf-8\"\n });\n (0,file_saver__WEBPACK_IMPORTED_MODULE_10__.saveAs)(blob, \"data.json\");\n },\n //监听窗口的变化\n handleResize() {\n if (window.innerWidth <= 700) {\n this.$nextTick(() => {\n document.querySelectorAll('.chat-content')[0].style.height = '93%';\n this.buttonStatus = false;\n var textareaMsg = document.getElementById(\"textareaMsg\");\n textareaMsg.style.marginLeft = \"0px\";\n this.personInfoSpan = [14, 0, 10];\n const isMobile = /iPhone|iPad|iPod|Android/i.test(navigator.userAgent);\n if (isMobile) {\n document.querySelectorAll('.chatInputs')[0].style.margin = '0%';\n } else {\n document.querySelectorAll('.chatInputs')[0].style.margin = '3%';\n }\n });\n } else {\n this.$nextTick(() => {\n document.querySelectorAll('.chat-content')[0].style.height = '88%';\n this.buttonStatus = true;\n this.personInfoSpan = [1, 17, 6];\n });\n }\n ;\n },\n newLine(event) {\n this.rows += 1;\n let text = this.$refs.textInput.value;\n text += '\\n';\n this.$refs.textInput.value = text;\n },\n //赋值对话列表\n assignmentMesList(msgList) {\n this.chatList = msgList;\n },\n //获取对话列表\n getMesList() {\n return this.chatList;\n },\n //清除当前对话列表\n clearMsgList() {\n this.chatList = [];\n },\n //更新内容背景图片\n updateContentImageUrl(imgUrl) {\n this.contentBackImageUrl = imgUrl;\n },\n //组装上下文数据\n contextualAssemblyData() {\n const conversation = [];\n for (var chat of this.chatList.filter(chat => chat.chatType === 0)) {\n if (chat.uid == 'jcm') {\n let my = {\n 'speaker': 'user',\n 'text': chat.msg\n };\n conversation.push(my);\n } else if (chat.uid == this.frinedInfo.id) {\n let ai = {\n 'speaker': 'agent',\n 'text': chat.msg\n };\n conversation.push(ai);\n }\n }\n return conversation;\n },\n //开始录音\n startRecording() {\n navigator.mediaDevices.getUserMedia({\n audio: true\n }).then(stream => {\n this.recorder = new MediaRecorder(stream);\n this.recorder.addEventListener('dataavailable', event => {\n this.audioChunks.push(event.data);\n });\n this.recording = true;\n this.recorder.start();\n // 在这里使用录音器\n this.$message.success(this.$t('message.start_recording'));\n }).catch(error => {\n this.$message.error(this.$t('message.fail_audio'));\n });\n },\n //停止录音\n async stopRecording() {\n this.recorder.stop();\n this.recording = false;\n this.recorder.ondataavailable = event => {\n const blob = new Blob([event.data], {\n type: 'audio/wav'\n });\n const file = new File([blob], 'recording.wav', {\n type: 'audio/wav',\n lastModified: Date.now()\n });\n const formData = new FormData();\n formData.append('file', file);\n formData.append('model', \"whisper-1\");\n formData.append('temperature', this.settingInfo.TemperatureAudio);\n formData.append('response_format', \"text\");\n if 
(this.settingInfo.translateEnglish) {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_2__.createTranslation)(formData, this.settingInfo.KeyMsg).then(data => {\n this.$nextTick(() => {\n this.inputMsg = data;\n });\n });\n } else {\n formData.append('language', this.settingInfo.language);\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_2__.createTranscription)(formData, this.settingInfo.KeyMsg).then(data => {\n this.$nextTick(() => {\n this.inputMsg = data;\n });\n });\n }\n };\n this.$message.success(this.$t('message.end_recording'));\n },\n //发送信息\n sendMsg(msgList) {\n this.chatList.push(msgList);\n this.scrollBottom();\n },\n // 在组件或页面外部声明计算余弦相似度的函数\n cosineSimilarity(a, b) {\n const dotProduct = a.reduce((acc, curr, i) => acc + curr * b[i], 0);\n const normA = Math.sqrt(a.reduce((acc, curr) => acc + curr * curr, 0));\n const normB = Math.sqrt(b.reduce((acc, curr) => acc + curr * curr, 0));\n return dotProduct / (normA * normB);\n },\n //发送文字信息\n sendText() {\n // if(this.settingInfo.readefile){\n // console.log(this.fileArrays)\n // const formData = new FormData();\n // formData.append(\"model\", \"text-embedding-ada-002\");\n // formData.append(\"input\", \"吕世昊是谁?\");\n // createEmbeddings(formData,this.settingInfo.KeyMsg).then(data => {\n // const inputEmbedding=data.data[0]\n // // const similarText = this.findMostSimilarEmbedding(, this.fileArrays);\n\n // // 计算每个句子embedding与输入数据embedding之间的相似度\n // const similarities = this.cosineSimilarity(this.fileArrays.embedding, inputEmbedding.embedding)\n // const similaritiesArr=[];\n // console.log(similarities)\n // similaritiesArr.push(similarities)\n // // 对相似度进行排名,选择与输入数据最相似的句子或文章段落作为匹配结果\n // const topMatchIndex = similaritiesArr.reduce((maxIndex, similarity, index) => similarity > similaritiesArr[maxIndex] ? 
index : maxIndex, 0)\n\n // console.log(topMatchIndex)\n // const topMatchText = sentences[topMatchIndex]\n // console.log('最匹配的句子是:', topMatchText)\n // // console.log('最相似的文本为:', similarText);\n // })\n\n // // const configuration = new Configuration({\n // // apiKey: ,\n // // });\n // // const openai = new OpenAIApi(configuration);\n // // const response = openai.embeddings({\n // // model: 'text-embedding-ada-002',\n // // input:\"text\"\n // // });\n // // console.log(response)\n\n // return\n // }\n this.rows = 1;\n this.$nextTick(() => {\n this.acqStatus = false;\n });\n const dateNow = (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)());\n let params = {};\n if (this.settingInfo.openChangePicture) {\n if (this.updateImage == null) {\n this.$nextTick(() => {\n this.acqStatus = true;\n });\n this.$message.warning(this.$t('message.edit_picture'));\n return;\n } else {\n // 通过验证后,上传文件\n const formData = new FormData();\n formData.append(\"image\", this.updateImage);\n formData.append(\"prompt\", this.inputMsg);\n formData.append(\"n\", this.settingInfo.n);\n formData.append(\"size\", this.settingInfo.size);\n const dateNow = (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)());\n let chatMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_HEAD_IMG_URL,\n name: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_NAME,\n time: dateNow,\n msg: this.inputMsg,\n chatType: 0,\n //信息类型,0文字,1图片\n uid: \"jcm\" //uid\n };\n\n this.sendMsg(chatMsg);\n this.inputMsg = \"\";\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_2__.createImageEdit)(formData, this.settingInfo.KeyMsg).then(data => {\n for (var imgInfo of data) {\n let imgResMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.AI_HEAD_IMG_URL,\n name: this.frinedInfo.name,\n time: (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)()),\n msg: imgInfo.url,\n chatType: 1,\n //信息类型,0文字,1图片\n extend: {\n imgType: 2 //(1表情,2本地图片)\n },\n\n uid: this.frinedInfo.id //uid\n };\n\n this.sendMsg(imgResMsg);\n this.srcImgList.push(imgInfo.url);\n }\n this.updateImage = null;\n this.acqStatus = true;\n });\n return;\n }\n }\n if (this.inputMsg) {\n let chatMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_HEAD_IMG_URL,\n name: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_NAME,\n time: dateNow,\n msg: this.inputMsg,\n chatType: 0,\n //信息类型,0文字,1图片\n uid: \"jcm\" //uid\n };\n\n this.sendMsg(chatMsg);\n\n //如果是图片模式则进入待开发不过可用改状态使用\n if (this.settingInfo.openProductionPicture) {\n params.prompt = this.inputMsg, params.n = this.settingInfo.n, params.size = this.settingInfo.size, (0,_api_getData__WEBPACK_IMPORTED_MODULE_2__.createImage)(params, this.settingInfo.KeyMsg).then(data => {\n for (var imgInfo of data) {\n let imgResMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.AI_HEAD_IMG_URL,\n name: this.frinedInfo.name,\n time: (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)()),\n msg: imgInfo.url,\n chatType: 1,\n //信息类型,0文字,1图片\n extend: {\n imgType: 2 //(1表情,2本地图片)\n },\n\n uid: this.frinedInfo.id //uid\n };\n\n this.sendMsg(imgResMsg);\n this.srcImgList.push(imgInfo.url);\n }\n this.acqStatus = true;\n });\n } else {\n //如果是文字模式则进入\n params.model = this.frinedInfo.id, params.max_tokens = 
this.settingInfo.chat.MaxTokens, params.temperature = this.settingInfo.chat.Temperature, params.top_p = this.settingInfo.chat.TopP, params.n = this.settingInfo.chat.n, params.stream = this.settingInfo.chat.stream, params.stop = this.settingInfo.chat.stop, params.presence_penalty = this.settingInfo.chat.PresencePenalty, params.frequency_penalty = this.settingInfo.chat.FrequencyPenalty;\n let chatBeforResMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.AI_HEAD_IMG_URL,\n name: this.frinedInfo.name,\n time: (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)()),\n msg: \"\",\n chatType: 0,\n //信息类型,0文字,1图片\n uid: this.frinedInfo.id //uid\n };\n\n if (this.frinedInfo.id === \"gpt-3.5-turbo\" || this.frinedInfo.id === \"gpt-3.5-turbo-0301\") {\n this.chatCompletion(params, chatBeforResMsg);\n } else {\n this.completion(params, chatBeforResMsg);\n }\n }\n if (this.storeStatu == 0) {\n this.$emit('personCardSort', this.frinedInfo.id);\n } else if (this.storeStatu == 1) {\n this.$emit('fineTunesCardSort', this.frinedInfo.id);\n }\n this.inputMsg = \"\";\n // this.$parent.updateMoneyInfo();\n } else {\n this.$nextTick(() => {\n this.acqStatus = true;\n });\n this.$message.warning(this.$t('message.msg_empty'));\n }\n },\n async chatCompletion(params, chatBeforResMsg) {\n let textContext = this.inputMsg;\n let itemContent;\n let noUrlNetMessage;\n if (this.settingInfo.openNet) {\n let context = \"max_results=\" + this.settingInfo.max_results + \"&q=\" + textContext + \"®ion=us-en\";\n await fetch('https://search.freechatgpt.cc/search?' + context).then(response => response.json()).then(data => {\n let netMessage = \"Web search results: \";\n noUrlNetMessage = netMessage + \"\\n\\n\";\n for (let i = 0; i < data.length; i++) {\n netMessage += \"[\" + (i + 1) + \"] \\\"\" + data[i].body.substring(0, 400) + \"\\\" \";\n netMessage += \"URL:\" + data[i].href + \" \";\n noUrlNetMessage += \"[\" + (i + 1) + \"] \\\"\" + data[i].body.substring(0, 400) + \"\\\" \\n\\n\";\n }\n const date = new Date();\n const year = date.getFullYear();\n const month = date.getMonth() + 1;\n const day = date.getDate();\n const formattedDate = `${year}/${month}/${day}`;\n netMessage = netMessage.substring(0, 1500);\n netMessage += \"Current date:\" + formattedDate + \" \";\n netMessage += \"Instructions: Using the provided web search results, write a comprehensive reply to the given query. \" + \"Make sure to cite results using [[number](URL)] notation after the reference. If the provided search \" + \"results refer to multiple subjects with the same name, write separate answers for each subject.\" + \"Query: \" + textContext + \"Reply in 中文\";\n noUrlNetMessage += \" 您的问题: \" + textContext;\n itemContent = {};\n itemContent.time = (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)());\n itemContent.msg = netMessage;\n itemContent.chatType = 0;\n itemContent.name = \"网络\";\n itemContent.headImg = \"https://i.328888.xyz/2023/04/04/ijlmhJ.png\";\n itemContent.uid = this.frinedInfo.id;\n this.chatList.push(itemContent);\n let conversation = this.contextualAssemblyData();\n params.messages = conversation.map(item => {\n return {\n role: item.speaker === 'user' ? 
'user' : 'assistant',\n content: item.text\n };\n });\n itemContent.msg = noUrlNetMessage;\n });\n } else {\n let conversation = this.contextualAssemblyData();\n params.messages = conversation.map(item => {\n return {\n role: item.speaker === 'user' ? 'user' : 'assistant',\n content: item.text\n };\n });\n }\n //新增一个空的消息\n this.sendMsg(chatBeforResMsg);\n const currentResLocation = this.chatList.length - 1;\n let _this = this;\n try {\n if (this.settingInfo.chat.stream) {\n await fetch(_api_index__WEBPACK_IMPORTED_MODULE_6__[\"default\"].baseUrl + '/v1/chat/completions', {\n method: \"POST\",\n body: JSON.stringify({\n ...params\n }),\n headers: {\n Authorization: 'Bearer ' + this.settingInfo.KeyMsg,\n \"Content-Type\": \"application/json\",\n Accept: \"application/json\"\n }\n }).then(response => {\n const reader = response.body.getReader();\n this.readStream(reader, _this, currentResLocation);\n });\n } else {\n await fetch(_api_index__WEBPACK_IMPORTED_MODULE_6__[\"default\"].baseUrl + '/v1/chat/completions', {\n method: \"POST\",\n body: JSON.stringify({\n ...params\n }),\n headers: {\n Authorization: 'Bearer ' + this.settingInfo.KeyMsg,\n \"Content-Type\": \"application/json\",\n Accept: \"application/json\"\n }\n }).then(response => response.json()).then(data => {\n const content = data.choices[0].message.content; // 获取\"content\"字段的值\n let decodedArray = content.split(\"\");\n decodedArray.forEach(decoded => {\n _this.chatList[currentResLocation].msg = _this.chatList[currentResLocation].msg + decoded;\n });\n });\n }\n } catch (error) {\n const content = \"网络不稳定或key余额不足,请重试或更换key\"; // 获取\"content\"字段的值\n let decodedArray = content.split(\"\");\n decodedArray.forEach(decoded => {\n _this.chatList[currentResLocation].msg = _this.chatList[currentResLocation].msg + decoded;\n });\n console.error(error);\n }\n this.acqStatus = true;\n },\n async completion(params, chatBeforResMsg) {\n if (this.settingInfo.chat.suffix !== \"\") {\n params.suffix = this.settingInfo.chat.suffix; //chat没有\n }\n\n params.echo = this.settingInfo.chat.echo,\n //chat没有\n params.prompt = this.inputMsg;\n //新增一个空的消息\n this.sendMsg(chatBeforResMsg);\n const currentResLocation = this.chatList.length - 1;\n let _this = this;\n try {\n await fetch(_api_index__WEBPACK_IMPORTED_MODULE_6__[\"default\"].baseUrl + '/v1/completions', {\n method: \"POST\",\n timeout: 10000,\n body: JSON.stringify({\n ...params\n }),\n headers: {\n Authorization: 'Bearer ' + this.settingInfo.KeyMsg,\n \"Content-Type\": \"application/json\"\n }\n }).then(response => {\n if (response.status == 404) {\n this.$message.error(this.$t('message.model_del'));\n this.$nextTick(() => {\n this.acqStatus = true;\n });\n return;\n }\n const reader = response.body.getReader();\n this.$nextTick(() => {\n this.acqStatus = true;\n });\n // _this.chatList[currentResLocation].msg = _this.chatList[currentResLocation].msg + \":grinning:\"\n this.readStream(reader, _this, currentResLocation);\n });\n } catch (error) {}\n },\n resetUpdate() {\n this.updateImage = null;\n },\n onScroll() {\n const scrollDom = this.$refs.chatContent;\n const scrollTop = scrollDom.scrollTop;\n const offsetHeight = scrollDom.offsetHeight;\n const scrollHeight = scrollDom.scrollHeight;\n // 当滚动到底部,设置 isAutoScroll 为 true\n if (scrollTop + offsetHeight === scrollHeight) {\n this.isAutoScroll = true;\n } else {\n // 否则,用户正在手动滑动,设置为 false,停止自动滚动\n this.isAutoScroll = false;\n }\n },\n //获取窗口高度并滚动至最底层\n scrollBottom() {\n this.$nextTick(() => {\n if (!this.isAutoScroll) return; // 如果 
isAutoScroll 为 false,不执行滚动方法\n const scrollDom = this.$refs.chatContent;\n (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.animation)(scrollDom, scrollDom.scrollHeight - scrollDom.offsetHeight);\n });\n },\n //关闭标签框\n clickEmoji() {\n this.showEmoji = !this.showEmoji;\n },\n //发送表情\n sendEmoji(msg) {\n const dateNow = (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)());\n let chatMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_HEAD_IMG_URL,\n name: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_NAME,\n time: dateNow,\n msg: msg,\n chatType: 1,\n //信息类型,0文字,1图片\n extend: {\n imgType: 1 //(1表情,2本地图片)\n },\n\n uid: \"jcm\"\n };\n this.sendMsg(chatMsg);\n this.clickEmoji();\n },\n //发送本地图片\n sendImg(e) {\n this.acqStatus = false;\n //获取文件\n const file = e.target.files[0];\n\n // 验证文件类型是否为PNG格式\n if (file.type !== \"image/png\") {\n this.$message.warning(this.$t('message.valid_png'));\n this.$nextTick(() => {\n this.acqStatus = true;\n });\n return;\n }\n\n // 验证文件大小是否小于4MB\n if (file.size > 4 * 1024 * 1024) {\n this.$message.warning(this.$t('message.less_4M'));\n this.$nextTick(() => {\n this.acqStatus = true;\n });\n return;\n }\n if (this.settingInfo.openChangePicture) {\n this.updateImage = file;\n this.$message.info(this.$t('message.upload_complete'));\n e.target.files = null;\n this.$nextTick(() => {\n this.acqStatus = true;\n });\n return;\n }\n // 通过验证后,上传文件\n const formData = new FormData();\n formData.append(\"image\", file);\n formData.append(\"n\", this.settingInfo.n);\n formData.append(\"size\", this.settingInfo.size);\n const dateNow = (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)());\n let _this = this;\n let chatMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_HEAD_IMG_URL,\n name: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_NAME,\n time: dateNow,\n msg: \"\",\n chatType: 1,\n //信息类型,0文字,1图片, 2文件\n extend: {\n imgType: 2 //(1表情,2本地图片)\n },\n\n uid: \"jcm\"\n };\n if (!e || !window.FileReader) return; // 看是否支持FileReader\n let reader = new FileReader();\n reader.readAsDataURL(file); // 关键一步,在这里转换的\n reader.onloadend = function () {\n chatMsg.msg = this.result; //赋值\n _this.srcImgList.push(chatMsg.msg);\n };\n this.sendMsg(chatMsg);\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_2__.createImageVariations)(formData, this.settingInfo.KeyMsg).then(data => {\n for (var imgInfo of data) {\n let imgResMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.AI_HEAD_IMG_URL,\n name: this.frinedInfo.name,\n time: (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)()),\n msg: imgInfo.url,\n chatType: 1,\n //信息类型,0文字,1图片\n extend: {\n imgType: 2 //(1表情,2本地图片)\n },\n\n uid: this.frinedInfo.id //uid\n };\n\n this.sendMsg(imgResMsg);\n this.srcImgList.push(imgInfo.url);\n }\n this.acqStatus = true;\n });\n e.target.files = null;\n },\n //发送文件\n sendFile(e) {\n // let file = e.target.files[0];\n // let reader = new FileReader();\n // reader.readAsText(file);\n // let _this=this\n // reader.onload = function(event) {\n // let text = event.target.result;\n // //处理文件数据\n // const delimiters = ['.', '?', '!', '\\n',':',\",\"];\n // let result = [];\n // for (let i = 0; i < text.length; i++) {\n // let current = '';\n // while (i < text.length && !delimiters.includes(text[i])) {\n // current += text[i];\n // i++;\n // }\n // // 
加入句子,并去除前后空格\n // if (current.trim()) {\n // result.push(current.trim());\n // }\n // }\n // const formData = new FormData()\n // formData.append(\"model\", \"text-embedding-ada-002\");\n // formData.append(\"input\", result);\n // createEmbeddings(formData,_this.settingInfo.KeyMsg).then(data => {\n // _this.fileArrays = data.data[0]\n // })\n // }; \n const dateNow = (0,_util_util__WEBPACK_IMPORTED_MODULE_1__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_1__.getNowTime)());\n let chatMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_HEAD_IMG_URL,\n name: _store_mutation_types__WEBPACK_IMPORTED_MODULE_9__.USER_NAME,\n time: dateNow,\n msg: \"\",\n chatType: 2,\n //信息类型,0文字,1图片, 2文件\n extend: {\n fileType: \"\" //(1word,2excel,3ppt,4pdf,5zpi, 6txt)\n },\n\n uid: \"jcm\"\n };\n let files = e.target.files[0]; //图片文件名\n chatMsg.msg = files;\n if (files) {\n switch (files.type) {\n case \"application/msword\":\n case \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\":\n chatMsg.extend.fileType = 1;\n break;\n case \"application/vnd.ms-excel\":\n case \"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet\":\n chatMsg.extend.fileType = 2;\n break;\n case \"application/vnd.ms-powerpoint\":\n case \"application/vnd.openxmlformats-officedocument.presentationml.presentation\":\n chatMsg.extend.fileType = 3;\n break;\n case \"application/pdf\":\n chatMsg.extend.fileType = 4;\n break;\n case \"application/zip\":\n case \"application/x-zip-compressed\":\n chatMsg.extend.fileType = 5;\n break;\n case \"text/plain\":\n chatMsg.extend.fileType = 6;\n break;\n default:\n chatMsg.extend.fileType = 0;\n }\n this.sendMsg(chatMsg);\n e.target.files = null;\n }\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/chatwindow.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=script&lang=js&": -/*!**********************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=script&lang=js& ***! - \**********************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! core-js/modules/es.array.push.js */ \"./node_modules/core-js/modules/es.array.push.js\");\n/* harmony import */ var core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var core_js_modules_es_array_unshift_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
core-js/modules/es.array.unshift.js */ \"./node_modules/core-js/modules/es.array.unshift.js\");\n/* harmony import */ var core_js_modules_es_array_unshift_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(core_js_modules_es_array_unshift_js__WEBPACK_IMPORTED_MODULE_1__);\n/* harmony import */ var _components_PersonCard_vue__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! @/components/PersonCard.vue */ \"./src/components/PersonCard.vue\");\n/* harmony import */ var _components_Session_vue__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! @/components/Session.vue */ \"./src/components/Session.vue\");\n/* harmony import */ var _components_File_vue__WEBPACK_IMPORTED_MODULE_4__ = __webpack_require__(/*! @/components/File.vue */ \"./src/components/File.vue\");\n/* harmony import */ var _chatwindow_vue__WEBPACK_IMPORTED_MODULE_5__ = __webpack_require__(/*! ./chatwindow.vue */ \"./src/view/pages/chatHome/chatwindow.vue\");\n/* harmony import */ var _store_mutation_types__WEBPACK_IMPORTED_MODULE_6__ = __webpack_require__(/*! @/store/mutation-types */ \"./src/store/mutation-types.js\");\n/* harmony import */ var _components_RoleCard_vue__WEBPACK_IMPORTED_MODULE_7__ = __webpack_require__(/*! @/components/RoleCard.vue */ \"./src/components/RoleCard.vue\");\n/* harmony import */ var _api_getData__WEBPACK_IMPORTED_MODULE_8__ = __webpack_require__(/*! @/api/getData */ \"./src/api/getData.js\");\n/* harmony import */ var _util_util__WEBPACK_IMPORTED_MODULE_9__ = __webpack_require__(/*! @/util/util */ \"./src/util/util.js\");\n/* provided dependency */ var console = __webpack_require__(/*! ./node_modules/console-browserify/index.js */ \"./node_modules/console-browserify/index.js\");\n\n\n\n\n\n\n\n\n\n\nconst {\n Configuration,\n OpenAIApi\n} = __webpack_require__(/*! 
openai */ \"./node_modules/openai/dist/index.js\");\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n name: \"App\",\n components: {\n RoleCard: _components_RoleCard_vue__WEBPACK_IMPORTED_MODULE_7__[\"default\"],\n PersonCard: _components_PersonCard_vue__WEBPACK_IMPORTED_MODULE_2__[\"default\"],\n ChatWindow: _chatwindow_vue__WEBPACK_IMPORTED_MODULE_5__[\"default\"],\n Session: _components_Session_vue__WEBPACK_IMPORTED_MODULE_3__[\"default\"],\n File: _components_File_vue__WEBPACK_IMPORTED_MODULE_4__[\"default\"]\n },\n data() {\n return {\n fileSearch: \"\",\n sessionSearch: \"\",\n showFineSetting: false,\n cancelFineStatus: true,\n storeStatus: 0,\n //宽度\n defaulWidth: 70,\n //0是聊天设置,1是图片设置\n SettingStatus: 0,\n //0是模型列表,1是会话列表\n cutSetting: 1,\n //余额信息\n moneryInfo: {\n totalGranted: 0,\n totalUsed: 0,\n totalAvailable: 0\n },\n batch_sizeStr: \"\",\n //全部的设置参数\n SettingInfo: {\n KeyMsg: \"\",\n readefile: false,\n inputStatus: true,\n translateEnglish: false,\n openProductionPicture: false,\n openChangePicture: false,\n TemperatureAudio: 0,\n n: 1,\n size: \"256x256\",\n language: \"zh\",\n chat: {\n suffix: \"\",\n MaxTokens: 1000,\n Temperature: 1,\n TopP: 1,\n n: 1,\n stream: true,\n echo: false,\n stop: \"\",\n FrequencyPenalty: 0,\n PresencePenalty: 0\n },\n openNet: false,\n max_results: 3,\n fineTunes: {\n training_file: \"\",\n model: \"curie\",\n n_epochs: 4,\n prompt_loss_weight: 0.01,\n suffix: \"\"\n // compute_classification_metrics: false,\n // classification_betas:\"\",\n // classification_positive_class:\"\",\n }\n },\n\n //当前点击的文件\n fiCurrent: \"\",\n //当前点击的模型\n pcCurrent: \"\",\n //当前点击的角色\n prCurrent: \"\",\n //当前点击的会话\n sessionCurrent: \"\",\n //当前点击的微调模型\n ftCurrent: \"\",\n //微调搜索数据\n fineTuningSearch: \"\",\n //模型搜索数据\n modelSearch: \"\",\n //角色搜索数据\n roleSearch: \"\",\n //文件列表\n fileList: [],\n //文件缓存列表\n fineTuningSearch: [],\n //微调模型列表\n fineTuningList: [],\n //微调模型缓存列表\n fineTuningCacheList: [],\n //模型列表\n personList: [],\n //会话列表\n sessionList: [],\n //角色列表\n roleList: [],\n //模型列表缓存\n personListCache: [],\n //是否显示聊天窗口\n showChatWindow: true,\n //当前窗口的对话模型信息\n chatWindowInfo: {},\n //图片大小参数列表\n imgSizes: [{\n value: '256x256'\n }, {\n value: '512x512'\n }, {\n value: '1024x1024'\n }],\n //语音定义的参数\n languages: [{\n value: 'zh'\n }, {\n value: 'en'\n }, {\n value: 'fr'\n }, {\n value: 'de'\n }, {\n value: 'ja'\n }],\n // 是否隐藏模型列表和功能设置选择列表\n showPersonList: true,\n showSetupList: true,\n showMainContent: true\n };\n },\n computed: {\n // 把获取setting列表的操作放到computed计算属性里来,这样才能动态绑定i18n的值\n getSettings() {\n return [{\n name: this.$t('model.talk'),\n active: true\n }, {\n name: this.$t('image.title'),\n active: false\n }, {\n name: this.$t('audio.title'),\n active: false\n }, {\n name: this.$t('slightly.title.abbreviation'),\n active: false\n }, {\n name: this.$t('file.title'),\n active: false\n }, {\n name: this.$t('session.title'),\n active: false\n }, {\n name: this.$t('role.title'),\n active: false\n }, {\n name: this.$t('setting.title'),\n active: false\n }];\n }\n },\n created() {\n window.addEventListener('resize', this.handleResize);\n this.handleResize();\n },\n destoryed() {\n window.removeEventListener('resize', this.handleResize);\n },\n mounted() {\n this.chatWindowInfo = {\n img: \"\",\n name: \"ChatGPT\",\n detail: this.$t('index.detail'),\n lastMsg: this.$t('index.lastMsg'),\n id: \"gpt-3.5-turbo\",\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_6__.AI_HEAD_IMG_URL,\n showHeadImg: true\n };\n if 
(this.SettingInfo.KeyMsg) {\n this.getModelList(this.SettingInfo.KeyMsg);\n }\n this.getRolesList();\n this.$watch('fileSearch', this.watchFileSearch);\n },\n filters: {\n // //截取数据到小数点后几位\n // numFilterReserved(value, digit) {\n // return parseFloat(value).toFixed(digit)\n // }\n },\n watch: {\n modelSearch: {\n handler: function (newVal, oldVal) {\n if (this.personList) {\n this.personList = this.personListCache.filter(person => person.id.includes(newVal));\n } else {\n this.personList = this.personListCache;\n }\n }\n },\n fineTuningSearch: {\n handler: function (newVal, oldVal) {\n if (this.fineTuningList) {\n if (!this.cancelFineStatus) {\n this.fineTuningList = this.fineTuningCacheList.filter(fineTunin => fineTunin.fineTunesStatus === \"succeeded\").filter(fineTuning => fineTuning.id.includes(newVal));\n } else {\n this.fineTuningList = this.fineTuningCacheList.filter(fineTuning => fineTuning.id.includes(newVal));\n }\n } else {\n if (!this.cancelFineStatus) {\n this.fineTuningList = this.fineTuningCacheList.filter(fineTunin => fineTunin.fineTunesStatus === \"succeeded\");\n } else {\n this.fineTuningList = this.fineTuningCacheList;\n }\n }\n }\n },\n fileSearch: {\n handler: function (newVal, oldVal) {\n if (this.fileList) {\n this.fileList = this.fileCacheList.filter(fileList => fileList.id.includes(newVal));\n } else {\n this.fileList = this.fileCacheList;\n }\n }\n },\n roleSearch: {\n handler: function (newVal, oldVal) {\n if (this.roleList) {\n this.roleList = this.roleCacheList.filter(fileList => fileList.act.toLowerCase().includes(newVal.toLowerCase()));\n } else {\n this.roleList = this.roleCacheList;\n }\n }\n },\n SettingInfo: {\n handler: function (newVal, oldVal) {\n if (newVal.openChangePicture) {\n this.SettingInfo.openProductionPicture = false;\n }\n if (newVal.openProductionPicture) {\n this.SettingInfo.openChangePicture = false;\n }\n if (newVal.fineTunes.batch_size) {\n let batchSize = parseInt(newVal.fineTunes.batch_size);\n this.SettingInfo.fineTunes.batch_size = batchSize;\n } else {}\n if (newVal.fineTunes.validation_file) {\n this.SettingInfo.fineTunes.validation_file = newVal.fineTunes.validation_file;\n }\n if (newVal.fineTunes.learning_rate_multiplier) {\n this.SettingInfo.fineTunes.learning_rate_multiplier = parseInt(newVal.fineTunes.learning_rate_multiplier);\n }\n if (newVal.KeyMsg && newVal !== oldVal) {\n //获取模型列表\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.getModels)(newVal).then(res => {\n this.personList = res;\n this.personListCache = res;\n }).catch(e => {\n this.$message.error(this.$t('message.get_model_fail'));\n });\n }\n // if (newVal.fineTunes.classification_n_classes) {\n // this.SettingInfo.fineTunes.classification_n_classes = parseInt(newVal.fineTunes.classification_n_classes)\n // }\n },\n\n deep: true\n }\n },\n methods: {\n // 切换语言\n changeLanguage() {\n const lang = this.$i18n.locale === \"zh\" ? 
\"en\" : \"zh\";\n localStorage.setItem(\"lang\", lang);\n this.$i18n.locale = lang;\n },\n //显示或者隐藏取消过的微调模型\n showOrHidenCancelFine(status) {\n this.cancelFineStatus = status;\n if (this.cancelFineStatus == true) {\n this.fineTuningList = this.fineTuningCacheList;\n } else {\n this.fineTuningList = this.fineTuningCacheList.filter(fineTunin => fineTunin.fineTunesStatus === \"succeeded\");\n }\n },\n //导入会话列表触发的方法\n importFromJsonArrAll() {\n this.$refs.onupdateJosnArrAll.click(); // 触发选择文件的弹框\n },\n\n handleFileUploadAll(event) {\n const file = event.target.files[0];\n const reader = new FileReader();\n reader.onload = () => {\n const fileContent = reader.result; // 文件内容\n const parsed = JSON.parse(fileContent); // 转换为数组\n this.sessionList = parsed;\n };\n reader.readAsText(file);\n },\n //导出所有会话到json文件\n exportObjArrAllToJson() {\n let jsonString = JSON.stringify(this.sessionList); // 将数组转为JSON字符串\n let blob = new Blob([jsonString], {\n type: \"application/json;charset=utf-8\"\n });\n saveAs(blob, \"data.json\");\n },\n //清除所有的会话内容\n clearAllContext() {\n this.sessionList = [];\n },\n //清除当前会话内容\n clearCurrentContext() {\n this.$refs.chatWindow.clearMsgList();\n },\n // 点击切换显示状态\n toggleLeft() {\n console.log(\"left clicked\");\n this.showPersonList = !this.showPersonList;\n const isMobile = /iPhone|iPad|iPod|Android/i.test(navigator.userAgent);\n if (isMobile && (this.showPersonList || this.showSetupList)) {\n this.showMainContent = false;\n document.querySelectorAll('.chatLeft')[0].style.width = '100%';\n } else {\n this.showMainContent = true;\n document.querySelectorAll('.chatLeft')[0].style.width = '22%';\n }\n },\n toggleRight() {\n console.log(\"right clicked\");\n this.showSetupList = !this.showSetupList;\n const isMobile = /iPhone|iPad|iPod|Android/i.test(navigator.userAgent);\n if (isMobile && (this.showPersonList || this.showSetupList)) {\n this.showMainContent = false;\n document.querySelectorAll('.chatLeft')[1].style.width = '100%';\n } else {\n this.showMainContent = true;\n document.querySelectorAll('.chatLeft')[1].style.width = '22%';\n }\n },\n //获取模型列表\n getModelList(key) {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.getModels)(key).then(modelsRes => {\n // 提取fineTunesRes集合中所有id属性值\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.getFineTunesList)(key).then(fineTunesRes => {\n const fineTunesIds = fineTunesRes.map(item => item.id);\n const models = modelsRes.filter(item => !fineTunesIds.includes(item.id));\n this.personList = models;\n this.personListCache = models;\n });\n this.updateMoneyInfo();\n }).catch(e => {\n // this.$message.error(this.$t('message.get_model_fail'))\n });\n },\n //获取微调模型列表\n getFineTunessList(key) {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.getFineTunesList)(key).then(res => {\n this.fineTuningCacheList = res;\n if (this.cancelFineStatus == true) {\n this.fineTuningList = this.fineTuningCacheList;\n } else {\n this.fineTuningList = this.fineTuningCacheList.filter(fineTunin => fineTunin.fineTunesStatus === \"succeeded\");\n }\n }).catch(e => {\n this.$message.error(this.$t('message.get_model_fail'));\n });\n },\n //获取文件列表\n getFilessList(key) {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.getFilesList)(key).then(res => {\n this.fileList = res;\n this.fileCacheList = res;\n }).catch(e => {\n this.$message.error(this.$t('message.get_files_fail'));\n });\n },\n //获取角色列表\n getRolesList() {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.getRoles)().then(res => {\n let data = res.data;\n this.roleList = data;\n this.roleCacheList = data;\n 
}).catch(e => {\n this.$message.error(this.$t('message.get_roles_fail'));\n });\n },\n //监听窗口尺寸的变化\n handleResize() {\n if (window.innerWidth <= 1150) {\n this.showPersonList = false;\n this.showSetupList = false;\n this.showChatWindow = true;\n const info = {\n img: \"\",\n name: \"ChatGPT\",\n detail: \"chatgpt v3.5 所基于的模型\",\n lastMsg: \"chatgpt v3.5 所基于的模型\",\n id: \"gpt-3.5-turbo\",\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_6__.AI_HEAD_IMG_URL,\n showHeadImg: true\n };\n this.chatWindowInfo = info;\n this.personInfo = info;\n } else {\n this.showPersonList = true;\n this.showSetupList = true;\n }\n ;\n },\n // // 更新当前余额\n // updateMoneyInfo() {\n // getMoneyInfo(this.SettingInfo.KeyMsg).then((res) => {\n // this.$nextTick(() => {\n // this.moneryInfo.totalGranted = res.total_granted;\n // this.moneryInfo.totalUsed = res.total_used;\n // this.moneryInfo.totalAvailable = res.total_available;\n // })\n // })\n // },\n //创建会话\n newSession() {\n //获取当前会话长度\n const currentLen = this.sessionList.length + 1;\n //定义对象\n const obj = {\n \"id\": currentLen,\n \"title\": \"\",\n \"dataList\": []\n };\n //先获取对话的列表\n const msgList = this.$refs.chatWindow.getMesList();\n if (msgList.length >= 2) {\n if (this.sessionCurrent) {\n this.sessionCurrent = \"\";\n //清除当前窗口数据\n this.$refs.chatWindow.clearMsgList();\n } else {\n obj.title = msgList[0].msg;\n obj.dataList = msgList;\n let tempSessionList = this.sessionList;\n tempSessionList.push(obj);\n this.sessionList = tempSessionList.reverse();\n this.sessionCurrent = \"\";\n //清除当前窗口数据\n this.$refs.chatWindow.clearMsgList();\n }\n } else if (msgList.length = 1) {\n //清除当前窗口数据\n this.$refs.chatWindow.clearMsgList();\n }\n },\n //模型列表被点击\n modelClick() {\n this.clearCurrent();\n this.getModelList(this.SettingInfo.KeyMsg);\n //清除被点击的微调对象\n this.fineTuningInfo = {};\n this.SettingStatus = 0;\n this.cutSetting = 0;\n // this.showChatWindow = false;\n },\n\n //会话列表被点击\n sessionClick() {\n //清除当前点击的状态\n this.clearCurrent();\n this.SettingStatus = 5;\n this.cutSetting = 1;\n this.chatWindowInfo = {\n img: \"\",\n name: \"ChatGPT\",\n detail: \"chatgpt v3.5 所基于的模型\",\n lastMsg: \"chatgpt v3.5 所基于的模型\",\n id: \"gpt-3.5-turbo\",\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_6__.AI_HEAD_IMG_URL,\n showHeadImg: true\n };\n // this.showChatWindow = true;\n },\n\n //角色列表被点击\n roleClick(info) {\n if (!this.showChatWindow) {\n this.$message({\n message: \"请选一个模型\",\n type: \"error\"\n });\n } else {\n var chatWindow = this.$refs.chatWindow;\n chatWindow.inputMsg = info.prompt;\n }\n },\n //微调模型列表被点击\n fineTuningClick() {\n this.clearCurrent();\n this.SettingStatus = 3;\n this.cutSetting = 2;\n // this.showChatWindow = false;\n //获取微调模型列表\n this.getFineTunessList(this.SettingInfo.KeyMsg);\n },\n clearCurrent() {\n //清除当前选择的模型微调模型\n this.ftCurrent = \"\";\n //清除当前选择的模型\n this.pcCurrent = \"\";\n //清除当前选择的会话\n this.sessionCurrent = \"\";\n //清除当前选择的文件\n this.fiCurrent = \"\";\n },\n //文件列表被点击\n fileClick() {\n this.clearCurrent();\n //清除被点击的微调对象\n this.fineTuningInfo = {};\n this.SettingStatus = 4;\n this.cutSetting = 3;\n //获取微调模型列表\n this.getFilessList(this.SettingInfo.KeyMsg);\n },\n //上传文件按钮被点击触发的方法\n uploadFile() {\n this.$refs.fileInput.click();\n },\n //文件上传触发的方法\n onFileChange(e) {\n //获取文件\n const file = e.target.files[0];\n // 验证文件类型是否为jsonl格式\n if (!file.name.endsWith('.jsonl')) {\n this.$message.warning(this.$t('message.valid_json'));\n return;\n }\n // 通过验证后,上传文件\n const formData = new FormData();\n formData.append(\"file\", 
file);\n formData.append(\"purpose\", \"fine-tune\");\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.uploadFile)(formData, this.SettingInfo.KeyMsg).then(res => {\n this.$copy(res.id, this.$t('index.up_file_id') + res.id + this.$t('index.copy'));\n //更新文件列表\n this.getFilessList(this.SettingInfo.KeyMsg);\n });\n },\n //检索文件信息\n retrieveOnFile() {\n if (!this.fileInfo || !this.fileInfo.fileId) {\n this.$message.error(this.$t('message.only_file'));\n } else {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.retrieveFile)(this.fileInfo.fileId, this.SettingInfo.KeyMsg).then(res => {\n let context = this.$t('index.file_id') + res.id + \" \\n\" + this.$t('index.file_name') + res.filename + \" \\n\" + this.$t('index.file_size') + (res.bytes / 1024 / 1024).toFixed(2) + \"MB \\n\" + this.$t('index.obj') + res.object + \" \\n\" + this.$t('index.status') + res.status + \" \\n\" + this.$t('index.status_des') + res.status_details + \" \\n\" + this.$t('index.target') + res.purpose + \" \\n\" + this.$t('index.file_time') + (0,_util_util__WEBPACK_IMPORTED_MODULE_9__.JCMFormatTimestamp)(res.created_at);\n let retrieveFineTuneMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_6__.AI_HEAD_IMG_URL,\n name: res.filename,\n time: (0,_util_util__WEBPACK_IMPORTED_MODULE_9__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_9__.getNowTime)()),\n msg: context,\n chatType: 0,\n uid: res.id\n };\n this.$refs.chatWindow.sendMsg(retrieveFineTuneMsg);\n console.log(res);\n }).catch(e => {\n this.$message.error(this.$t('message.fail_file'));\n });\n }\n },\n //检索文件内容\n async retrieveOnFileContent() {\n if (!this.fileInfo || !this.fileInfo.fileId) {\n this.$message.error(this.$t('message.only_file'));\n } else {\n try {\n const configuration = new Configuration({\n apiKey: this.SettingInfo.KeyMsg\n });\n const openai = new OpenAIApi(configuration);\n const response = await openai.downloadFile(this.fileInfo.fileId);\n } catch (e) {\n this.$message.error(this.$t('message.openai_free'));\n }\n }\n },\n //模型被点击\n clickPerson(info) {\n this.storeStatus = 0;\n //传入当前聊天窗口信息\n this.chatWindowInfo = info;\n //设置当前被点击的对象\n this.personInfo = info;\n //设置当前被点击的模型id\n this.pcCurrent = info.id;\n },\n //会话被点击\n clickSession(info) {\n this.sessionCurrent = info.id;\n this.$refs.chatWindow.assignmentMesList(info.dataList);\n },\n //微调模型被点击\n clickFineTuning(info) {\n this.storeStatus = 1;\n //传入当前聊天窗口信息\n this.chatWindowInfo = info;\n //设置当前被点击的对象\n this.fineTuningInfo = info;\n //设置当前选着的微调模型id\n this.ftCurrent = info.id;\n },\n //文件被点击\n clickFile(info) {\n this.chatWindowInfo = {\n img: \"\",\n name: info.id,\n detail: info.detail,\n lastMsg: info.lastMsg,\n id: info.id\n };\n this.fiCurrent = info.fileId;\n this.fileInfo = info;\n },\n //删除文件\n deleteOnFile() {\n if (!this.fileInfo || !this.fileInfo.fileId) {\n this.$message.error(this.$t('message.only_del_file'));\n } else {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.deleteFile)(this.fileInfo.fileId, this.SettingInfo.KeyMsg).then(res => {\n this.$message.success(this.$t('message.del_file_succ'));\n this.getFilessList(this.SettingInfo.KeyMsg);\n }).catch(e => {\n this.$message.error(this.$t('message.del_fail'));\n });\n }\n },\n //创建微调\n createFine() {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.createFineTune)(this.SettingInfo.fineTunes, this.SettingInfo.KeyMsg).then(res => {\n this.$message.success(this.$t('message.create_succ'));\n this.getFineTunessList(this.SettingInfo.KeyMsg);\n }).catch(e => {\n this.$message.error(this.$t('message.create_fail'));\n });\n 
},\n //删除微调\n deleteFine() {\n if (!this.fineTuningInfo || !this.fineTuningInfo.fineTunesId) {\n this.$message.error(this.$t('message.only_del_model'));\n } else {\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.deleteFineTuneModel)(this.fineTuningInfo.name, this.SettingInfo.KeyMsg).then(res => {\n this.$message.success(this.$t('message.del_model_succ'));\n this.getFineTunessList(this.SettingInfo.KeyMsg);\n }).catch(e => {\n this.$message.error(this.$t('message.del_fail_ing'));\n });\n }\n },\n //取消微调\n cancelFine() {\n if (!this.fineTuningInfo || !this.fineTuningInfo.fineTunesId || this.fineTuningInfo.fineTunesStatus === \"succeeded\") {\n this.$message.error(this.$t('message.only_cancel'));\n } else {\n console.log(this.fineTuningInfo.fineTunesId);\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.cancelFineTune)(this.fineTuningInfo.fineTunesId, this.SettingInfo.KeyMsg).then(res => {\n this.$message.success(this.$t('message.cancel_succ'));\n this.getFineTunessList(this.SettingInfo.KeyMsg);\n }).catch(e => {\n console.log(e);\n this.$message.error(this.$t('message.cancel_fail'));\n });\n }\n },\n //检索微调\n retrieveFine() {\n if (!this.fineTuningInfo || !this.fineTuningInfo.fineTunesId) {\n this.$message.error(this.$t('message.only_model'));\n } else {\n console.log(this.fineTuningInfo.fineTunesId);\n (0,_api_getData__WEBPACK_IMPORTED_MODULE_8__.retrieveFineTune)(this.fineTuningInfo.fineTunesId, this.SettingInfo.KeyMsg).then(res => {\n let context = this.$t('index.task_id') + res.id + \" \\n\" + this.$t('index.task_type') + res.object + \" \\n\" + this.$t('index.model_type') + res.model + \" \\n\" + this.$t('index.task_time') + (0,_util_util__WEBPACK_IMPORTED_MODULE_9__.JCMFormatTimestamp)(res.created_at) + \" \\n\" + this.$t('index.task_list') + this.$t('index.obj_log_info_time') + \"| :------: | :------: | :------: | :------: |\\n\";\n res.events.forEach(obj => {\n context += `| ${obj.object} | ${obj.level} | ${obj.message} | ${(0,_util_util__WEBPACK_IMPORTED_MODULE_9__.JCMFormatTimestamp)(obj.created_at)} |\\n`;\n });\n context += this.$t('index.model_id') + res.fine_tuned_model + this.$t('index.args') + this.$t('index.item_setting') + \"| :------: | :------: | \\n\";\n for (let prop in res.hyperparams) {\n if (res.hyperparams.hasOwnProperty(prop)) {\n context += `| ${prop} | ${res.hyperparams[prop]} |\\n`;\n }\n }\n context += this.$t('index.user_group') + res.organization_id;\n if (res.result_files.length == 0) {\n context += this.$t('index.results_null');\n } else {\n context += this.$t('index.results') + this.$t('index.table_head') + \"| :------: | :------: | :------: | :------: | :------: | \\n\";\n res.result_files.forEach(obj => {\n context += `| ${obj.id} | ${obj.filename} | ${(obj.bytes / 1024 / 1024).toFixed(2) + \"MB\"} | ${obj.object} | ${obj.status} | \\n`;\n });\n }\n context += this.$t('index.statu') + res.status + \"\\n\";\n if (res.training_files.length == 0) {\n context += this.$t('index.files_null');\n } else {\n context += this.$t('index.files') + this.$t('index.table_head') + \"| :------: | :------: | :------: | :------: | :------: | \\n\";\n res.training_files.forEach(obj => {\n context += `| ${obj.id} | ${obj.filename} | ${(obj.bytes / 1024 / 1024).toFixed(2) + \"MB\"} | ${obj.object} | ${obj.status} | \\n`;\n });\n }\n if (res.validation_files.length == 0) {\n context += this.$t('index.verifys_null');\n } else {\n context += this.$t('index.verifys') + this.$t('index.table_head') + \"| :------: | :------: | :------: | :------: | :------: | \\n\";\n 
res.validation_files.forEach(obj => {\n context += `| ${obj.id} | ${obj.filename} | ${(obj.bytes / 1024 / 1024).toFixed(2) + \"MB\"} | ${obj.object} | ${obj.status} | \\n`;\n });\n }\n context += this.$t('index.last_time') + (0,_util_util__WEBPACK_IMPORTED_MODULE_9__.JCMFormatTimestamp)(res.updated_at);\n let retrieveFineTuneMsg = {\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_6__.AI_HEAD_IMG_URL,\n name: res.fine_tuned_model !== null ? res.fine_tuned_model : res.id,\n time: (0,_util_util__WEBPACK_IMPORTED_MODULE_9__.JCMFormatDate)((0,_util_util__WEBPACK_IMPORTED_MODULE_9__.getNowTime)()),\n msg: context,\n chatType: 0,\n uid: res.id\n };\n this.$refs.chatWindow.sendMsg(retrieveFineTuneMsg);\n console.log(res);\n }).catch(e => {\n console.log(e);\n this.$message.error(this.$t('message.verify_model_fail'));\n });\n }\n },\n personCardSort(id) {\n if (typeof this.personList[0] != 'undefined' && id !== this.personList[0].id) {\n console.log(id);\n let nowPersonInfo;\n for (let i = 0; i < this.personList.length; i++) {\n if (this.personList[i].id == id) {\n nowPersonInfo = this.personList[i];\n this.personList.splice(i, 1);\n break;\n }\n }\n this.personList.unshift(nowPersonInfo);\n }\n },\n fineTunesCardSort(id) {\n if (id !== this.fineTuningList[0].id) {\n console.log(id);\n let nowPersonInfo;\n for (let i = 0; i < this.fineTuningList.length; i++) {\n if (this.fineTuningList[i].id == id) {\n nowPersonInfo = this.fineTuningList[i];\n this.fineTuningList.splice(i, 1);\n break;\n }\n }\n this.fineTuningList.unshift(nowPersonInfo);\n }\n }\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/index.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=script&lang=js&": -/*!***************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=script&lang=js& ***! 
- \***************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n name: \"App\",\n data() {\n return {\n show: false\n };\n },\n mounted() {\n this.show = true;\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/setting.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=script&lang=js&": -/*!*********************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=script&lang=js& ***! - \*********************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n name: \"App\",\n data() {\n return {\n show: false\n };\n },\n mounted() {\n this.show = true;\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/user/userInfo.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=template&id=7ba5bd90&": -/*!********************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=template&id=7ba5bd90& ***! 
- \********************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n attrs: {\n id: \"app\"\n }\n }, [_c(\"Home\")], 1);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/App.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=template&id=534ad946&scoped=true&": -/*!*********************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=template&id=534ad946&scoped=true& ***! 
- \*********************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"emoji-content\"\n }, [_c(\"div\", {\n staticClass: \"emoji\"\n }, [_c(\"div\", {\n staticClass: \"emoji-wrapper\"\n }, [_c(\"ul\", {\n staticClass: \"emoji-list\"\n }, _vm._l(_vm.emojiList, function (item, index) {\n return _c(\"li\", {\n key: index,\n staticClass: \"emoji-item\",\n on: {\n click: function ($event) {\n return _vm.sendEmoji(item);\n }\n }\n }, [_c(\"img\", {\n attrs: {\n src: item,\n alt: \"\"\n }\n })]);\n }), 0)])]), _c(\"div\", {\n staticClass: \"mask\",\n on: {\n click: _vm.closeEmoji\n }\n })]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Emoji.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=template&id=ab80f8a8&scoped=true&": -/*!********************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=template&id=ab80f8a8&scoped=true& ***! 
- \********************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"person-card\",\n class: {\n activeCard: _vm.fileInfo.fileId == _vm.pcCurrent\n }\n }, [_c(\"div\", {\n staticClass: \"info\"\n }, [_c(\"div\", [_c(\"svg\", {\n staticClass: \"icon\",\n attrs: {\n t: \"1679461381774\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": \"4047\",\n width: \"50\",\n height: \"50\"\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M752 80H272c-70.4 0-128 57.6-128 128v608c0 70.4 57.6 128 128 128h353.6c33.6 0 65.6-12.8 91.2-36.8l126.4-126.4c24-24 36.8-56 36.8-91.2V208c0-70.4-57.6-128-128-128zM208 816V208c0-35.2 28.8-64 64-64h480c35.2 0 64 28.8 64 64v464h-96c-70.4 0-128 57.6-128 128v80H272c-35.2 0-64-28.8-64-64z m462.4 44.8c-4.8 4.8-9.6 8-14.4 11.2V800c0-35.2 28.8-64 64-64h75.2l-124.8 124.8z\",\n fill: \"#ffffff\",\n \"p-id\": \"4048\"\n }\n }), _c(\"path\", {\n attrs: {\n d: \"M368 352h288c17.6 0 32-14.4 32-32s-14.4-32-32-32H368c-17.6 0-32 14.4-32 32s14.4 32 32 32zM496 608h-128c-17.6 0-32 14.4-32 32s14.4 32 32 32h128c17.6 0 32-14.4 32-32s-14.4-32-32-32zM368 512h288c17.6 0 32-14.4 32-32s-14.4-32-32-32H368c-17.6 0-32 14.4-32 32s14.4 32 32 32z\",\n fill: \"#ffffff\",\n \"p-id\": \"4049\"\n }\n })])]), _c(\"div\", {\n staticClass: \"info-detail\"\n }, [_c(\"div\", {\n staticClass: \"name\"\n }, [_vm._v(_vm._s(_vm.fileInfo.name.slice(0, 25)))]), _c(\"div\", {\n staticClass: \"detail\"\n }, [_vm._v(_vm._s(_vm.fileInfo.lastMsg.slice(0, 40)))])])])]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/File.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=template&id=48849e48&scoped=true&": -/*!************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** 
./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=template&id=48849e48&scoped=true& ***! - \************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"file-card\"\n }, [_vm.fileType == 0 ? _c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! @/assets/img/fileImg/unknowfile.png */ \"./src/assets/img/fileImg/unknowfile.png\"),\n alt: \"\"\n }\n }) : _vm.fileType == 1 ? _c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! @/assets/img/fileImg/word.png */ \"./src/assets/img/fileImg/word.png\"),\n alt: \"\"\n }\n }) : _vm.fileType == 2 ? _c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! @/assets/img/fileImg/excel.png */ \"./src/assets/img/fileImg/excel.png\"),\n alt: \"\"\n }\n }) : _vm.fileType == 3 ? _c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! @/assets/img/fileImg/ppt.png */ \"./src/assets/img/fileImg/ppt.png\"),\n alt: \"\"\n }\n }) : _vm.fileType == 4 ? _c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! @/assets/img/fileImg/pdf.png */ \"./src/assets/img/fileImg/pdf.png\"),\n alt: \"\"\n }\n }) : _vm.fileType == 5 ? _c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! @/assets/img/fileImg/zpi.png */ \"./src/assets/img/fileImg/zpi.png\"),\n alt: \"\"\n }\n }) : _c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! 
@/assets/img/fileImg/txt.png */ \"./src/assets/img/fileImg/txt.png\"),\n alt: \"\"\n }\n }), _c(\"div\", {\n staticClass: \"word\"\n }, [_c(\"span\", [_vm._v(_vm._s(_vm.file.name || _vm.$t(\"file_card.unknown\")))]), _c(\"span\", [_vm._v(\"154kb\")])])]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/FileCard.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=template&id=0b1d9e43&scoped=true&": -/*!***********************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=template&id=0b1d9e43&scoped=true& ***! - \***********************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _vm._m(0);\n};\nvar staticRenderFns = [function () {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"head-portrait\"\n }, [_c(\"img\", {\n attrs: {\n src: \"https://i.328888.xyz/2023/04/07/irgoxk.png\",\n alt: \"Kevin Powell\"\n }\n })]);\n}];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadImg.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=template&id=24585c4b&scoped=true&": 
-/*!****************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=template&id=24585c4b&scoped=true& ***! - \****************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"head-portrait\"\n }, [_c(\"img\", {\n attrs: {\n src: _vm.imgUrl,\n alt: \"\"\n }\n })]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadPortrait.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=template&id=65af85a3&scoped=true&": -/*!*******************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=template&id=65af85a3&scoped=true& ***! 
- \*******************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"nav\"\n }, [_c(\"div\", {\n staticClass: \"nav-menu-wrapper\"\n }, [_c(\"ul\", {\n staticClass: \"menu-list\"\n }, _vm._l(_vm.menuList, function (item, index) {\n return _c(\"li\", {\n key: index,\n class: {\n activeNav: index == _vm.current\n },\n on: {\n click: function ($event) {\n return _vm.changeMenu(index);\n }\n }\n }, [_c(\"div\", {\n staticClass: \"block\"\n }), _c(\"span\", {\n staticClass: \"iconfont\",\n class: item\n })]);\n }), 0)]), _c(\"div\", {\n staticClass: \"own-pic\",\n on: {\n click: _vm.userInfoShow\n }\n }, [_c(\"HeadImg\")], 1)]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Nav.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=template&id=d74d3096&scoped=true&": -/*!**************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=template&id=d74d3096&scoped=true& ***! 
- \**************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"person-card\",\n class: {\n activeCard: _vm.personInfo.id == _vm.pcCurrent\n }\n }, [_c(\"div\", {\n staticClass: \"info\"\n }, [_c(\"HeadPortrait\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.personInfo.showHeadImg,\n expression: \"personInfo.showHeadImg\"\n }],\n attrs: {\n imgUrl: _vm.personInfo.headImg\n }\n }), _c(\"div\", {\n staticClass: \"info-detail\"\n }, [_c(\"div\", {\n staticClass: \"name\"\n }, [_vm._v(_vm._s(_vm.personInfo.name ? _vm.personInfo.name.slice(0, 20) : _vm.personInfo.fineTunesStatus == \"pending\" ? _vm.$t(\"person_card.train\") : _vm.$t(\"person_card.cancel\")))]), _c(\"div\", {\n staticClass: \"detail\"\n }, [_vm._v(_vm._s(_vm.personInfo.lastMsg.slice(0, 22)))])])], 1)]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/PersonCard.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=template&id=9524bc54&scoped=true&": -/*!************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=template&id=9524bc54&scoped=true& ***! 
- \************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"role-card\",\n class: {\n activeCard: _vm.roleInfo.act == _vm.prCurrent\n }\n }, [_c(\"div\", {\n staticClass: \"info\"\n }, [_c(\"div\", {\n staticClass: \"info-detail\"\n }, [_c(\"div\", {\n staticClass: \"name\"\n }, [_vm._v(_vm._s(_vm.roleInfo.act))]), _c(\"div\", {\n staticClass: \"detail\"\n }, [_vm._v(_vm._s(_vm.roleInfo.prompt.slice(0, 50)))])])])]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/RoleCard.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=template&id=d6f30cd4&scoped=true&": -/*!***********************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=template&id=d6f30cd4&scoped=true& ***! 
- \***********************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"person-card\",\n class: {\n activeCard: _vm.sessionInfo.id == _vm.current\n }\n }, [_c(\"div\", {\n staticClass: \"info\"\n }, [_c(\"div\", {\n staticClass: \"info-detail\"\n }, [_c(\"div\", {\n staticClass: \"detail\"\n }, [_c(\"div\", {\n staticStyle: {\n padding: \"10px\"\n }\n }, [_vm._v(_vm._s(_vm.sessionInfo.title))])])])])]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Session.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=template&id=73eb9c00&scoped=true&": -/*!**************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=template&id=73eb9c00&scoped=true& ***! 
- \**************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"home\"\n }, [_c(\"el-container\", {\n attrs: {\n height: \"100%\"\n }\n }, [_c(\"el-aside\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.asideStatus,\n expression: \"asideStatus\"\n }],\n attrs: {\n width: \"100px\"\n }\n }, [_c(\"Nav\")], 1), _c(\"el-main\", [_c(\"router-view\")], 1)], 1)], 1);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/home.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=template&id=13fede38&scoped=true&": -/*!***********************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=template&id=13fede38&scoped=true& ***! 
- \***********************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"chat-window\"\n }, [_c(\"div\", {\n staticClass: \"top\"\n }, [_c(\"el-row\", {\n staticStyle: {\n height: \"70px\"\n }\n }, [_c(\"el-col\", {\n attrs: {\n span: _vm.personInfoSpan[0]\n }\n }, [_c(\"div\", {\n staticClass: \"head-pic\"\n }, [_c(\"HeadPortrait\", {\n attrs: {\n imgUrl: _vm.frinedInfo.headImg\n }\n })], 1)]), _c(\"el-col\", {\n attrs: {\n span: _vm.personInfoSpan[1]\n }\n }, [_c(\"div\", {\n staticClass: \"info-detail\"\n }, [_c(\"div\", {\n staticClass: \"name\"\n }, [_vm._v(_vm._s(_vm.frinedInfo.name))]), _c(\"div\", {\n staticClass: \"detail\"\n }, [_vm._v(_vm._s(_vm.frinedInfo.detail))])])]), _c(\"el-col\", {\n attrs: {\n span: _vm.personInfoSpan[2]\n }\n }, [_c(\"div\", {\n staticClass: \"other-fun\"\n }, [_c(\"label\", {\n on: {\n click: _vm.clearMsgList\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-qingchu\"\n })]), _c(\"label\", {\n on: {\n click: _vm.importFromJsonArr\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-daoru\"\n })]), _c(\"label\", {\n on: {\n click: _vm.exportObjArrToJson\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-daochu\"\n })]), _c(\"label\", {\n attrs: {\n for: \"imgFile\"\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-tupian\"\n })]), _c(\"label\", {\n attrs: {\n for: \"docFile\"\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-wenben\"\n })]), _c(\"input\", {\n attrs: {\n type: \"file\",\n name: \"\",\n id: \"imgFile\",\n accept: \"image/*\"\n },\n on: {\n change: _vm.sendImg\n }\n }), _c(\"input\", {\n attrs: {\n type: \"file\",\n name: \"\",\n id: \"docFile\",\n accept: \"application/*,text/*\"\n },\n on: {\n change: _vm.sendFile\n }\n }), _c(\"input\", {\n ref: \"onupdateJosnArr\",\n staticStyle: {\n display: \"none\"\n },\n attrs: {\n type: \"file\"\n },\n on: {\n change: _vm.handleFileUpload\n }\n })])])], 1)], 1), _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: !_vm.acqStatus,\n expression: \"!acqStatus\"\n }]\n }, [_c(\"div\", {\n staticClass: \"line\"\n })]), _c(\"div\", {\n staticClass: \"botoom\",\n staticStyle: {\n \"background-color\": \"rgb(50, 54, 68)\"\n }\n }, [_c(\"div\", {\n ref: \"chatContent\",\n staticClass: \"chat-content\",\n attrs: {\n id: \"chat-content\"\n },\n on: {\n scroll: _vm.onScroll\n }\n }, _vm._l(_vm.chatList, function (item) {\n return _c(\"div\", {\n key: item.id,\n staticClass: \"chat-wrapper\"\n }, [item.uid !== \"jcm\" ? _c(\"div\", {\n staticClass: \"chat-friend\"\n }, [item.chatType == 0 ? 
_c(\"div\", {\n staticClass: \"chat-text\"\n }, [_c(\"el-row\", {\n attrs: {\n gutter: 20\n }\n }, [_c(\"el-col\", {\n attrs: {\n span: 2\n }\n }, [_c(\"svg\", {\n staticClass: \"icon\",\n attrs: {\n t: \"1679666016648\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": \"6241\",\n width: \"22\",\n height: \"22\"\n },\n on: {\n click: function ($event) {\n return _vm.$copy(item.msg, \"已复制\");\n }\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M661.333333 234.666667A64 64 0 0 1 725.333333 298.666667v597.333333a64 64 0 0 1-64 64h-469.333333A64 64 0 0 1 128 896V298.666667a64 64 0 0 1 64-64z m-21.333333 85.333333H213.333333v554.666667h426.666667v-554.666667z m191.829333-256a64 64 0 0 1 63.744 57.856l0.256 6.144v575.701333a42.666667 42.666667 0 0 1-85.034666 4.992l-0.298667-4.992V149.333333H384a42.666667 42.666667 0 0 1-42.368-37.674666L341.333333 106.666667a42.666667 42.666667 0 0 1 37.674667-42.368L384 64h447.829333z\",\n fill: \"#909399\",\n \"p-id\": \"6242\"\n }\n })])]), _c(\"el-col\", {\n attrs: {\n span: 21\n }\n })], 1), _c(\"markdown-it-vue\", {\n attrs: {\n content: item.msg.trim()\n }\n })], 1) : _vm._e(), item.chatType == 1 ? _c(\"div\", {\n staticClass: \"chat-img\"\n }, [item.extend.imgType == 1 ? _c(\"img\", {\n staticStyle: {\n width: \"100px\",\n height: \"100px\"\n },\n attrs: {\n src: item.msg,\n alt: \"表情\"\n }\n }) : _c(\"el-image\", {\n staticStyle: {\n \"border-radius\": \"10px\"\n },\n attrs: {\n src: item.msg,\n \"preview-src-list\": _vm.srcImgList\n }\n })], 1) : _vm._e(), item.chatType == 2 ? _c(\"div\", {\n staticClass: \"chat-img\"\n }, [_c(\"div\", {\n staticClass: \"word-file\"\n }, [_c(\"FileCard\", {\n attrs: {\n fileType: item.extend.fileType,\n file: item.msg\n }\n })], 1)]) : _vm._e(), _c(\"div\", {\n staticClass: \"info-time\"\n }, [_c(\"img\", {\n attrs: {\n src: item.headImg,\n alt: \"\"\n }\n }), _c(\"span\", [_vm._v(_vm._s(item.name))]), _c(\"span\", [_vm._v(_vm._s(item.time))])])]) : _c(\"div\", {\n staticClass: \"chat-me\"\n }, [item.chatType == 0 ? _c(\"div\", {\n staticClass: \"chat-text\"\n }, [_c(\"span\", {\n staticStyle: {\n \"font-size\": \"16px\"\n }\n }, [_vm._v(_vm._s(item.msg))])]) : _vm._e(), item.chatType == 1 ? _c(\"div\", {\n staticClass: \"chat-img\"\n }, [item.extend.imgType == 1 ? _c(\"img\", {\n staticStyle: {\n width: \"100px\",\n height: \"100px\"\n },\n attrs: {\n src: item.msg,\n alt: \"表情\"\n }\n }) : _c(\"el-image\", {\n staticStyle: {\n \"max-width\": \"300px\",\n \"border-radius\": \"10px\"\n },\n attrs: {\n src: item.msg,\n \"preview-src-list\": _vm.srcImgList\n }\n })], 1) : _vm._e(), item.chatType == 2 ? _c(\"div\", {\n staticClass: \"chat-img\"\n }, [_c(\"div\", {\n staticClass: \"word-file\"\n }, [_c(\"FileCard\", {\n attrs: {\n fileType: item.extend.fileType,\n file: item.msg\n }\n })], 1)]) : _vm._e(), _c(\"div\", {\n staticClass: \"info-time\"\n }, [_c(\"span\", [_vm._v(_vm._s(item.name))]), _c(\"span\", [_vm._v(_vm._s(item.time))]), _c(\"img\", {\n attrs: {\n src: item.headImg,\n alt: \"\"\n }\n })])])]);\n }), 0), _c(\"div\", {\n staticClass: \"chatInputs\"\n }, [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.buttonStatus,\n expression: \"buttonStatus\"\n }],\n staticClass: \"emoji boxinput\",\n on: {\n click: _vm.clickEmoji\n }\n }, [_c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! @/assets/img/emoji/smiling-face.png */ \"./src/assets/img/emoji/smiling-face.png\"),\n alt: \"\"\n }\n })]), _vm.recording ? 
_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.buttonStatus,\n expression: \"buttonStatus\"\n }],\n staticClass: \"luyin boxinput\",\n on: {\n click: _vm.stopRecording\n }\n }, [_c(\"i\", {\n staticClass: \"el-icon-microphone\",\n staticStyle: {\n \"margin-top\": \"17%\"\n }\n })]) : _vm._e(), !_vm.recording ? _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.buttonStatus,\n expression: \"buttonStatus\"\n }],\n staticClass: \"luyin boxinput\",\n on: {\n click: _vm.startRecording\n }\n }, [_c(\"i\", {\n staticClass: \"el-icon-turn-off-microphone\",\n staticStyle: {\n \"margin-top\": \"17%\"\n }\n })]) : _vm._e(), _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.buttonStatus,\n expression: \"buttonStatus\"\n }],\n staticClass: \"emoji-content\"\n }, [_c(\"Emoji\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.showEmoji,\n expression: \"showEmoji\"\n }],\n on: {\n sendEmoji: _vm.sendEmoji,\n closeEmoji: _vm.clickEmoji\n }\n })], 1), _c(\"textarea\", {\n directives: [{\n name: \"autoheight\",\n rawName: \"v-autoheight\"\n }, {\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.inputMsg,\n expression: \"inputMsg\"\n }],\n staticClass: \"inputs\",\n staticStyle: {\n \"z-index\": \"9999999999\",\n \"min-height\": \"50px\",\n \"max-height\": \"400px\",\n \"max-width\": \"100%\",\n \"min-width\": \"45%\"\n },\n attrs: {\n id: \"textareaMsg\",\n placeholder: _vm.$t(\"placeholder.question\"),\n maxlength: \"2048\",\n rows: \"3\",\n dir: \"\",\n autocorrect: \"off\",\n \"aria-autocomplete\": \"both\",\n spellcheck: \"false\",\n autocapitalize: \"off\",\n autocomplete: \"off\"\n },\n domProps: {\n value: _vm.inputMsg\n },\n on: {\n keyup: function ($event) {\n if (!$event.type.indexOf(\"key\") && _vm._k($event.keyCode, \"enter\", 13, $event.key, \"Enter\")) return null;\n return _vm.handleKeyDown.apply(null, arguments);\n },\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.inputMsg = $event.target.value;\n }\n }\n }), _c(\"div\", [_c(\"div\", {\n staticClass: \"send boxinput\",\n on: {\n click: _vm.sendText\n }\n }, [_c(\"img\", {\n attrs: {\n src: __webpack_require__(/*! 
@/assets/img/emoji/rocket.png */ \"./src/assets/img/emoji/rocket.png\"),\n alt: \"\"\n }\n })])])])])]);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/chatwindow.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=template&id=c6884a34&scoped=true&": -/*!******************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=template&id=c6884a34&scoped=true& ***! - \******************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"chatHome\"\n }, [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.showPersonList,\n expression: \"showPersonList\"\n }],\n staticClass: \"chatLeft\",\n staticStyle: {\n width: \"22%\"\n }\n }, [_vm._m(0), _c(\"div\", {\n staticClass: \"online-person\",\n staticStyle: {\n \"margin-top\": \"5%\"\n }\n }, [_c(\"el-row\", {\n attrs: {\n gutter: 24\n }\n }, [_c(\"el-col\", {\n attrs: {\n span: 6\n }\n }, [_c(\"div\", {\n staticClass: \"setting\",\n staticStyle: {\n \"text-align\": \"center\"\n }\n }, [_c(\"span\", {\n class: {\n whiteText: _vm.cutSetting === 1\n },\n on: {\n click: _vm.sessionClick\n }\n }, [_vm._v(_vm._s(_vm.$t(\"session.title\")))])])]), _c(\"el-col\", {\n attrs: {\n span: 6\n }\n }, [_c(\"div\", {\n staticClass: \"setting\",\n staticStyle: {\n \"text-align\": \"center\"\n }\n }, [_c(\"span\", {\n class: {\n whiteText: _vm.cutSetting === 0\n },\n on: {\n click: _vm.modelClick\n }\n }, [_vm._v(_vm._s(_vm.$t(\"model.title\")))])])]), _c(\"el-col\", {\n attrs: {\n span: 6\n }\n }, [_c(\"div\", {\n staticClass: \"setting\",\n staticStyle: {\n \"text-align\": \"center\"\n }\n }, [_c(\"span\", {\n class: {\n whiteText: _vm.cutSetting === 2\n },\n on: {\n click: _vm.fineTuningClick\n }\n }, 
[_vm._v(_vm._s(_vm.$t(\"slightly.title.whole\")))])])]), _c(\"el-col\", {\n attrs: {\n span: 6\n }\n }, [_c(\"div\", {\n staticClass: \"setting\",\n staticStyle: {\n \"text-align\": \"center\"\n }\n }, [_c(\"span\", {\n class: {\n whiteText: _vm.cutSetting === 3\n },\n on: {\n click: _vm.fileClick\n }\n }, [_vm._v(_vm._s(_vm.$t(\"file.title\")))])])])], 1), _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.cutSetting == 0,\n expression: \"cutSetting == 0\"\n }]\n }, [_c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.modelSearch,\n expression: \"modelSearch\"\n }],\n staticClass: \"inputs\",\n staticStyle: {\n \"margin-top\": \"10px\"\n },\n attrs: {\n placeholder: _vm.$t(\"placeholder.model_name\")\n },\n domProps: {\n value: _vm.modelSearch\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.modelSearch = $event.target.value;\n }\n }\n }), _c(\"div\", {\n staticClass: \"s-wrapper\"\n }, _vm._l(_vm.personList, function (personInfo) {\n return _c(\"div\", {\n key: personInfo.id,\n staticClass: \"personList\",\n on: {\n click: function ($event) {\n return _vm.clickPerson(personInfo);\n }\n }\n }, [_c(\"PersonCard\", {\n attrs: {\n personInfo: personInfo,\n pcCurrent: _vm.pcCurrent\n }\n })], 1);\n }), 0)]), _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.cutSetting == 1,\n expression: \"cutSetting == 1\"\n }]\n }, [_c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.sessionSearch,\n expression: \"sessionSearch\"\n }],\n staticClass: \"inputs\",\n staticStyle: {\n \"margin-top\": \"10px\"\n },\n attrs: {\n placeholder: _vm.$t(\"placeholder.session_name\")\n },\n domProps: {\n value: _vm.sessionSearch\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.sessionSearch = $event.target.value;\n }\n }\n }), _c(\"div\", {\n staticClass: \"s-wrapper\"\n }, _vm._l(_vm.sessionList, function (sessionInfo) {\n return _c(\"div\", {\n key: sessionInfo.id,\n on: {\n click: function ($event) {\n return _vm.clickSession(sessionInfo);\n }\n }\n }, [_c(\"Session\", {\n attrs: {\n sessionInfo: sessionInfo,\n pcCurrent: _vm.sessionCurrent\n }\n })], 1);\n }), 0)]), _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.cutSetting == 2,\n expression: \"cutSetting == 2\"\n }]\n }, [_c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.fineTuningSearch,\n expression: \"fineTuningSearch\"\n }],\n staticClass: \"inputs\",\n staticStyle: {\n \"margin-top\": \"10px\"\n },\n attrs: {\n placeholder: _vm.$t(\"placeholder.slightly_name\")\n },\n domProps: {\n value: _vm.fineTuningSearch\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.fineTuningSearch = $event.target.value;\n }\n }\n }), _c(\"div\", {\n staticClass: \"s-wrapper\"\n }, _vm._l(_vm.fineTuningList, function (fineTuningInfo) {\n return _c(\"div\", {\n key: fineTuningInfo.id,\n staticClass: \"personList\",\n on: {\n click: function ($event) {\n return _vm.clickFineTuning(fineTuningInfo);\n }\n }\n }, [_c(\"PersonCard\", {\n attrs: {\n personInfo: fineTuningInfo,\n pcCurrent: _vm.ftCurrent\n }\n })], 1);\n }), 0)]), _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.cutSetting == 3,\n expression: \"cutSetting == 3\"\n }]\n }, [_c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: 
_vm.fileSearch,\n expression: \"fileSearch\"\n }],\n staticClass: \"inputs\",\n staticStyle: {\n \"margin-top\": \"10px\"\n },\n attrs: {\n placeholder: _vm.$t(\"placeholder.file_name\")\n },\n domProps: {\n value: _vm.fileSearch\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.fileSearch = $event.target.value;\n }\n }\n }), _c(\"div\", {\n staticClass: \"s-wrapper\"\n }, _vm._l(_vm.fileList, function (fileInfo, index) {\n return _c(\"div\", {\n key: index,\n staticClass: \"personList\",\n on: {\n click: function ($event) {\n return _vm.clickFile(fileInfo);\n }\n }\n }, [_c(\"File\", {\n attrs: {\n fileInfo: fileInfo,\n pcCurrent: _vm.fiCurrent\n }\n })], 1);\n }), 0)])], 1)]), _c(\"div\", {\n staticClass: \"chatRight\"\n }, [_c(\"div\", {\n staticClass: \"top-left\",\n on: {\n click: _vm.toggleLeft\n }\n }, [_c(\"svg\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: !_vm.showPersonList,\n expression: \"!showPersonList\"\n }],\n staticClass: \"icon\",\n attrs: {\n t: \"1679366341860\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": \"5764\",\n width: \"30\",\n height: \"30\"\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M912.8 513.2C912.8 733.1 733.9 912 514 912S115.2 733.1 115.2 513.2 294.1 114.3 514 114.3s398.8 179 398.8 398.9z m-701.5 0c0 166.9 135.8 302.7 302.7 302.7s302.7-135.8 302.7-302.7S680.9 210.5 514 210.5 211.3 346.3 211.3 513.2z\",\n fill: \"#BDD2EF\",\n \"p-id\": \"5765\"\n }\n }), _c(\"path\", {\n attrs: {\n d: \"M626.8 345.9c0 15-5.7 30.1-17.2 41.5L487.1 510l122.6 122.6c22.9 22.9 22.9 60.2 0 83.1-22.9 22.9-60.2 22.9-83.1 0L362.4 551.6c-22.9-22.9-22.9-60.2 0-83.1l164.1-164.1c22.9-22.9 60.2-22.9 83.1 0 11.5 11.5 17.2 26.5 17.2 41.5z\",\n fill: \"#2867CE\",\n \"p-id\": \"5766\"\n }\n })]), _c(\"svg\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.showPersonList,\n expression: \"showPersonList\"\n }],\n staticClass: \"icon\",\n attrs: {\n t: \"1679366707602\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": \"7551\",\n width: \"30\",\n height: \"30\"\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M514 912c-219.9 0-398.8-178.9-398.8-398.9 0-219.9 178.9-398.8 398.8-398.8s398.8 178.9 398.8 398.8c0 220-178.9 398.9-398.8 398.9z m0-701.5c-166.9 0-302.7 135.8-302.7 302.7S347.1 815.9 514 815.9s302.7-135.8 302.7-302.7S680.9 210.5 514 210.5z\",\n fill: \"#BDD2EF\",\n \"p-id\": \"7552\"\n }\n }), _c(\"path\", {\n attrs: {\n d: \"M402.5 677.3c0-15 5.7-30.1 17.2-41.5l122.6-122.6-122.6-122.6c-22.9-22.9-22.9-60.2 0-83.1 22.9-22.9 60.2-22.9 83.1 0l164.1 164.1c22.9 22.9 22.9 60.2 0 83.1L502.8 718.8c-22.9 22.9-60.2 22.9-83.1 0-11.5-11.4-17.2-26.5-17.2-41.5z\",\n fill: \"#2867CE\",\n \"p-id\": \"7553\"\n }\n })])]), _c(\"div\", {\n staticClass: \"top-right\",\n on: {\n click: _vm.toggleRight\n }\n }, [_c(\"svg\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: !_vm.showSetupList,\n expression: \"!showSetupList\"\n }],\n staticClass: \"icon\",\n attrs: {\n t: \"1679366707602\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": \"7551\",\n width: \"30\",\n height: \"30\"\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M514 912c-219.9 0-398.8-178.9-398.8-398.9 0-219.9 178.9-398.8 398.8-398.8s398.8 178.9 398.8 398.8c0 220-178.9 398.9-398.8 398.9z m0-701.5c-166.9 0-302.7 135.8-302.7 302.7S347.1 815.9 514 815.9s302.7-135.8 302.7-302.7S680.9 210.5 
514 210.5z\",\n fill: \"#BDD2EF\",\n \"p-id\": \"7552\"\n }\n }), _c(\"path\", {\n attrs: {\n d: \"M402.5 677.3c0-15 5.7-30.1 17.2-41.5l122.6-122.6-122.6-122.6c-22.9-22.9-22.9-60.2 0-83.1 22.9-22.9 60.2-22.9 83.1 0l164.1 164.1c22.9 22.9 22.9 60.2 0 83.1L502.8 718.8c-22.9 22.9-60.2 22.9-83.1 0-11.5-11.4-17.2-26.5-17.2-41.5z\",\n fill: \"#2867CE\",\n \"p-id\": \"7553\"\n }\n })]), _c(\"svg\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.showSetupList,\n expression: \"showSetupList\"\n }],\n staticClass: \"icon\",\n attrs: {\n t: \"1679366341860\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": \"5764\",\n width: \"30\",\n height: \"30\"\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M912.8 513.2C912.8 733.1 733.9 912 514 912S115.2 733.1 115.2 513.2 294.1 114.3 514 114.3s398.8 179 398.8 398.9z m-701.5 0c0 166.9 135.8 302.7 302.7 302.7s302.7-135.8 302.7-302.7S680.9 210.5 514 210.5 211.3 346.3 211.3 513.2z\",\n fill: \"#BDD2EF\",\n \"p-id\": \"5765\"\n }\n }), _c(\"path\", {\n attrs: {\n d: \"M626.8 345.9c0 15-5.7 30.1-17.2 41.5L487.1 510l122.6 122.6c22.9 22.9 22.9 60.2 0 83.1-22.9 22.9-60.2 22.9-83.1 0L362.4 551.6c-22.9-22.9-22.9-60.2 0-83.1l164.1-164.1c22.9-22.9 60.2-22.9 83.1 0 11.5 11.5 17.2 26.5 17.2 41.5z\",\n fill: \"#2867CE\",\n \"p-id\": \"5766\"\n }\n })])]), _vm.showChatWindow ? _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.showMainContent,\n expression: \"showMainContent\"\n }]\n }, [_c(\"ChatWindow\", {\n ref: \"chatWindow\",\n attrs: {\n frinedInfo: _vm.chatWindowInfo,\n settingInfo: _vm.SettingInfo,\n storeStatu: _vm.storeStatus\n },\n on: {\n personCardSort: _vm.personCardSort\n }\n })], 1) : _c(\"div\", {\n staticClass: \"showIcon\"\n }, [_c(\"svg\", {\n staticClass: \"icon iconfont icon-snapchat\",\n attrs: {\n t: \"1679552353056\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": \"3488\",\n width: \"200\",\n height: \"200\"\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M992.33 416.37c17.66 0 31.98-14.32 31.98-31.98s-14.32-31.98-31.98-31.98h-63.98v-63.96h63.98c17.66 0 31.98-14.32 31.98-31.98s-14.32-31.98-31.98-31.98h-63.98v-95.94c0.01-8.48-3.36-16.62-9.35-22.62-6-6-14.14-9.37-22.62-9.36h-95.94V32.61c0-17.67-14.32-31.98-31.98-31.98-17.67 0-31.98 14.32-31.98 31.98v63.96h-63.96V32.61c0-17.67-14.32-31.98-31.98-31.98-17.67 0-31.98 14.32-31.98 31.98v63.96H544.6V32.61c0-17.67-14.32-31.98-31.98-31.98-17.67 0-31.98 14.32-31.98 31.98v63.96h-63.96V32.61c0-17.67-14.32-31.98-31.98-31.98s-31.98 14.32-31.98 31.98v63.96h-63.96V32.61c0-17.67-14.32-31.98-31.98-31.98S224.8 14.95 224.8 32.61v63.96h-95.94c-8.48 0-16.62 3.36-22.62 9.36s-9.36 14.14-9.36 22.62v95.94H32.92c-17.67 0-31.98 14.32-31.98 31.98s14.32 31.98 31.98 31.98h63.96v63.96H32.92c-17.67 0-31.98 14.32-31.98 31.98 0 17.67 14.32 31.98 31.98 31.98h63.96v63.97H32.92c-17.66 0-31.97 14.31-31.97 31.97 0 17.65 14.31 31.97 31.97 31.97h63.96v63.98H32.92c-17.66 0-31.97 14.31-31.97 31.97 0 17.66 14.31 31.97 31.97 31.97h63.96v63.98H32.92C15.26 736.18 0.95 750.5 0.95 768.15s14.31 31.97 31.97 31.97h63.96v95.95a31.944 31.944 0 0 0 9.36 22.62c6 5.99 14.14 9.36 22.62 9.35h95.94v63.98c0 17.66 14.32 31.98 31.98 31.98 17.67 0 31.98-14.32 31.98-31.98v-63.98h63.96v63.98c0 17.66 14.32 31.98 31.98 31.98 17.67 0 31.98-14.32 31.98-31.98v-63.98h63.96v63.98c0 17.66 14.32 31.98 31.98 31.98s31.98-14.32 31.98-31.98v-63.98h63.96v63.98c0 17.66 14.32 31.98 31.98 31.98s31.98-14.32 
31.98-31.98v-63.98h63.96v63.98c0 17.66 14.32 31.98 31.98 31.98s31.98-14.32 31.98-31.98v-63.98h95.94c8.48 0.02 16.62-3.35 22.62-9.35s9.37-14.14 9.35-22.62v-95.95h63.98c17.65 0 31.97-14.31 31.97-31.97 0-17.66-14.31-31.97-31.97-31.97h-63.98V672.2h63.98c17.65 0 31.97-14.31 31.97-31.97 0-17.66-14.31-31.97-31.97-31.97h-63.98v-63.98h63.98c17.65 0 31.97-14.31 31.97-31.97 0-17.66-14.31-31.97-31.97-31.97h-63.98v-63.97h63.98zM864.41 864.1H160.84V160.53h703.57V864.1zM406.82 580.42h79.2l15.48 61.56h67.68l-83.16-267.84h-77.04l-83.16 267.84h65.52l15.48-61.56z m18-72.36c6.84-26.64 14.04-57.96 20.52-86.04h1.44c7.2 27.36 14.04 59.4 21.24 86.04l5.76 22.68h-54.72l5.76-22.68zM697.7 641.98h-64.44V374.14h64.44v267.84z\",\n \"p-id\": \"3489\"\n }\n })])])]), _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.showSetupList,\n expression: \"showSetupList\"\n }],\n staticClass: \"chatLeft\"\n }, [_c(\"el-card\", {\n staticStyle: {\n \"line-height\": \"120%\",\n \"text-align\": \"center\"\n },\n attrs: {\n shadow: \"hover\",\n id: \"jianbian\"\n }\n }, [_c(\"div\", [_c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.KeyMsg,\n expression: \"SettingInfo.KeyMsg\"\n }],\n staticClass: \"inputs\",\n staticStyle: {\n width: \"100%\",\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\"\n },\n attrs: {\n placeholder: _vm.$t(\"placeholder.openai_key\"),\n type: \"password\"\n },\n domProps: {\n value: _vm.SettingInfo.KeyMsg\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo, \"KeyMsg\", $event.target.value);\n }\n }\n })])]), _c(\"div\", {\n staticClass: \"online-person\"\n }, [_c(\"el-row\", {\n attrs: {\n gutter: 20\n }\n }, _vm._l(_vm.getSettings, function (setting, index) {\n return _c(\"el-col\", {\n key: index,\n attrs: {\n span: 6\n }\n }, [_c(\"span\", {\n staticClass: \"setting\",\n class: {\n active: _vm.SettingStatus === index\n },\n on: {\n click: function ($event) {\n _vm.SettingStatus = index;\n }\n }\n }, [_vm._v(\" \" + _vm._s(setting.name) + \" \")])]);\n }), 1), _c(\"div\", {\n staticClass: \"s-wrapper\",\n staticStyle: {\n height: \"75vh\"\n }\n }, [_c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingStatus == 0,\n expression: \"SettingStatus == 0\"\n }]\n }, [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingInfo.openNet,\n expression: \"SettingInfo.openNet\"\n }],\n staticClass: \"block\"\n }, [_c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.online\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.online_title\")))])]), _c(\"el-switch\", {\n staticStyle: {\n \"margin-left\": \"15%\"\n },\n attrs: {\n width: _vm.defaulWidth\n },\n model: {\n value: _vm.SettingInfo.openNet,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"openNet\", $$v);\n },\n expression: \"SettingInfo.openNet\"\n }\n })], 1), _c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.max_results_title\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.max_results\")))])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n attrs: {\n step: 1,\n min: 0,\n max: 6\n },\n model: {\n value: 
_vm.SettingInfo.max_results,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"max_results\", $$v);\n },\n expression: \"SettingInfo.max_results\"\n }\n })], 1), _c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: !_vm.SettingInfo.openNet,\n expression: \"!SettingInfo.openNet\"\n }]\n }, [_c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.suffix\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.suffix_title\")))])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.chat.suffix,\n expression: \"SettingInfo.chat.suffix\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n placeholder: _vm.$t(\"placeholder.suffix\")\n },\n domProps: {\n value: _vm.SettingInfo.chat.suffix\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.chat, \"suffix\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.stop\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\",\n attrs: {\n s: \"\"\n }\n }, [_vm._v(_vm._s(_vm.$t(\"model.stop_title\")))])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.chat.stop,\n expression: \"SettingInfo.chat.stop\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n placeholder: _vm.$t(\"placeholder.stop\")\n },\n domProps: {\n value: _vm.SettingInfo.chat.stop\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.chat, \"stop\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.frequency_penalty\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.frequency_penalty_title\")))])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n attrs: {\n step: 0.1,\n min: -2,\n max: 2\n },\n model: {\n value: _vm.SettingInfo.chat.FrequencyPenalty,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo.chat, \"FrequencyPenalty\", $$v);\n },\n expression: \"SettingInfo.chat.FrequencyPenalty\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.presence_penalty\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.presence_penalty_title\")))])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n attrs: {\n step: 0.1,\n min: -2,\n max: 2\n },\n model: {\n value: _vm.SettingInfo.chat.PresencePenalty,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo.chat, \"PresencePenalty\", $$v);\n },\n expression: \"SettingInfo.chat.PresencePenalty\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.max_tokens\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.max_tokens_title\")))])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n attrs: {\n step: 1,\n min: 0,\n max: 2048\n },\n model: {\n 
value: _vm.SettingInfo.chat.MaxTokens,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo.chat, \"MaxTokens\", $$v);\n },\n expression: \"SettingInfo.chat.MaxTokens\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.temperature\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.temperature_title\")))])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n attrs: {\n step: 0.1,\n min: 0,\n max: 2\n },\n model: {\n value: _vm.SettingInfo.chat.Temperature,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo.chat, \"Temperature\", $$v);\n },\n expression: \"SettingInfo.chat.Temperature\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.top_p\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\",\n attrs: {\n s: \"\"\n }\n }, [_vm._v(_vm._s(_vm.$t(\"model.top_p_title\")))])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n attrs: {\n step: 0.1,\n min: 0,\n max: 1\n },\n model: {\n value: _vm.SettingInfo.chat.TopP,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo.chat, \"TopP\", $$v);\n },\n expression: \"SettingInfo.chat.TopP\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.stream\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.stream_title\")))])]), _c(\"el-switch\", {\n staticStyle: {\n \"margin-left\": \"15%\"\n },\n attrs: {\n width: _vm.defaulWidth\n },\n model: {\n value: _vm.SettingInfo.chat.stream,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo.chat, \"stream\", $$v);\n },\n expression: \"SettingInfo.chat.stream\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.echo\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.echo_title\")))])]), _c(\"el-switch\", {\n staticStyle: {\n \"margin-left\": \"22%\"\n },\n attrs: {\n width: _vm.defaulWidth\n },\n model: {\n value: _vm.SettingInfo.chat.echo,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo.chat, \"echo\", $$v);\n },\n expression: \"SettingInfo.chat.echo\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"model.online\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"model.online_title\")))])]), _c(\"el-switch\", {\n staticStyle: {\n \"margin-left\": \"15%\"\n },\n attrs: {\n width: _vm.defaulWidth\n },\n model: {\n value: _vm.SettingInfo.openNet,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"openNet\", $$v);\n },\n expression: \"SettingInfo.openNet\"\n }\n })], 1)])])]), _c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingStatus == 1,\n expression: \"SettingStatus == 1\"\n }]\n }, [_c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"image.production_title\"),\n placement: \"top\"\n }\n }, 
[_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"image.production\")))])]), _c(\"el-switch\", {\n staticStyle: {\n \"margin-left\": \"15%\"\n },\n attrs: {\n width: _vm.defaulWidth\n },\n model: {\n value: _vm.SettingInfo.openProductionPicture,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"openProductionPicture\", $$v);\n },\n expression: \"SettingInfo.openProductionPicture\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"image.change_title\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"image.change\")))])]), _c(\"el-switch\", {\n staticStyle: {\n \"margin-left\": \"15%\"\n },\n attrs: {\n width: _vm.defaulWidth\n },\n model: {\n value: _vm.SettingInfo.openChangePicture,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"openChangePicture\", $$v);\n },\n expression: \"SettingInfo.openChangePicture\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"image.size_title\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"image.size\")))])]), _c(\"div\", [_c(\"el-select\", {\n staticStyle: {\n \"margin-top\": \"10px\"\n },\n attrs: {\n placeholder: \"请选择\"\n },\n model: {\n value: _vm.SettingInfo.size,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"size\", $$v);\n },\n expression: \"SettingInfo.size\"\n }\n }, _vm._l(_vm.imgSizes, function (item) {\n return _c(\"el-option\", {\n key: item.value,\n attrs: {\n value: item.value\n }\n });\n }), 1)], 1)], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"image.count_title\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"image.count\")))])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n attrs: {\n step: 1,\n min: -1,\n max: 10\n },\n model: {\n value: _vm.SettingInfo.n,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"n\", $$v);\n },\n expression: \"SettingInfo.n\"\n }\n })], 1)])]), _c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingStatus == 2,\n expression: \"SettingStatus == 2\"\n }]\n }, [_c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"audio.to_text_title\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"audio.to_text\")))])]), _c(\"el-switch\", {\n staticStyle: {\n \"margin-left\": \"15%\"\n },\n attrs: {\n width: _vm.defaulWidth\n },\n model: {\n value: _vm.SettingInfo.translateEnglish,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"translateEnglish\", $$v);\n },\n expression: \"SettingInfo.translateEnglish\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"audio.language_title\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"audio.language\")))])]), _c(\"div\", [_c(\"el-select\", {\n staticStyle: {\n \"margin-top\": \"10px\"\n },\n attrs: {\n placeholder: \"请选择\"\n },\n 
model: {\n value: _vm.SettingInfo.language,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"language\", $$v);\n },\n expression: \"SettingInfo.language\"\n }\n }, _vm._l(_vm.languages, function (item) {\n return _c(\"el-option\", {\n key: item.value,\n attrs: {\n value: item.value\n }\n });\n }), 1)], 1)], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"audio.temperature_title\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(_vm._s(_vm.$t(\"audio.temperature\")))])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n attrs: {\n step: 0.1,\n min: 0,\n max: 1\n },\n model: {\n value: _vm.SettingInfo.TemperatureAudio,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo, \"TemperatureAudio\", $$v);\n },\n expression: \"SettingInfo.TemperatureAudio\"\n }\n })], 1)])]), _c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingStatus == 3,\n expression: \"SettingStatus == 3\"\n }]\n }, [_c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.retrieveFine\n }\n }, [_vm._v(\" \" + _vm._s(_vm.$t(\"slightly.retrieveFineTuning\")) + \" \")]), _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.cancelFine\n }\n }, [_vm._v(\" \" + _vm._s(_vm.$t(\"slightly.cancelFineTuning\")) + \" \")]), _vm.cancelFineStatus ? _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: function ($event) {\n return _vm.showOrHidenCancelFine(false);\n }\n }\n }, [_vm._v(\" \" + _vm._s(_vm.$t(\"slightly.hideCanceledFineTuning\")) + \" \")]) : _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: function ($event) {\n return _vm.showOrHidenCancelFine(true);\n }\n }\n }, [_vm._v(\" \" + _vm._s(_vm.$t(\"slightly.showCanceledFineTuning\")) + \" \")]), _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.deleteFine\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-shanchu\",\n staticStyle: {\n color: \"#fff\",\n \"margin-right\": \"10px\"\n }\n }), _vm._v(\" \" + _vm._s(_vm.$t(\"slightly.deleteFineTuningModel\")) + \" \")]), _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: function ($event) {\n _vm.showFineSetting = !_vm.showFineSetting;\n }\n }\n }, [_vm._v(\" \" + _vm._s(_vm.$t(\"slightly.createFineTuning\")) + \" \")]), _c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.showFineSetting,\n expression: \"showFineSetting\"\n }]\n }, [_c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"slightly.fileIDTrainingData\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(\"trainingFile\"), _c(\"span\", {\n staticStyle: {\n color: \"red\"\n }\n }, 
[_vm._v(\"*\")])])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.fineTunes.training_file,\n expression: \"SettingInfo.fineTunes.training_file\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n placeholder: _vm.$t(\"placeholder.trainingDataFileID\")\n },\n domProps: {\n value: _vm.SettingInfo.fineTunes.training_file\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.fineTunes, \"training_file\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"slightly.fileIDValidationData\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(\"validationFile\")])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.fineTunes.validation_file,\n expression: \"SettingInfo.fineTunes.validation_file\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n placeholder: _vm.$t(\"placeholder.validationDataFileID\")\n },\n domProps: {\n value: _vm.SettingInfo.fineTunes.validation_file\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.fineTunes, \"validation_file\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"slightly.modelOptions\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(\"model\")])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.fineTunes.model,\n expression: \"SettingInfo.fineTunes.model\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n placeholder: _vm.$t(\"placeholder.modelName\")\n },\n domProps: {\n value: _vm.SettingInfo.fineTunes.model\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.fineTunes, \"model\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"slightly.epochs\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(\"nEpochs\")])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.fineTunes.n_epochs,\n expression: \"SettingInfo.fineTunes.n_epochs\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n type: \"number\",\n placeholder: _vm.$t(\"placeholder.trainingIterations\")\n },\n domProps: {\n value: _vm.SettingInfo.fineTunes.n_epochs\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.fineTunes, \"n_epochs\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"slightly.batchSize\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(\"batchSize\")])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.fineTunes.batch_size,\n expression: \"SettingInfo.fineTunes.batch_size\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n type: \"number\",\n placeholder: _vm.$t(\"placeholder.batchSize\")\n },\n domProps: {\n value: 
_vm.SettingInfo.fineTunes.batch_size\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.fineTunes, \"batch_size\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"slightly.learningRate\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(\"learningRateMultiplier\")])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.fineTunes.learning_rate_multiplier,\n expression: \"SettingInfo.fineTunes.learning_rate_multiplier\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n type: \"number\",\n placeholder: _vm.$t(\"placeholder.learningRate\")\n },\n domProps: {\n value: _vm.SettingInfo.fineTunes.learning_rate_multiplier\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.fineTunes, \"learning_rate_multiplier\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"slightly.fineTunedName\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(\"suffix\")])]), _c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.SettingInfo.fineTunes.suffix,\n expression: \"SettingInfo.fineTunes.suffix\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n placeholder: _vm.$t(\"placeholder.ftsuffix\")\n },\n domProps: {\n value: _vm.SettingInfo.fineTunes.suffix\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.$set(_vm.SettingInfo.fineTunes, \"suffix\", $event.target.value);\n }\n }\n })], 1), _c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"el-tooltip\", {\n staticClass: \"item\",\n attrs: {\n effect: \"dark\",\n content: _vm.$t(\"slightly.promptAttention\"),\n placement: \"top\"\n }\n }, [_c(\"span\", {\n staticClass: \"demonstration\"\n }, [_vm._v(\"promptLossWeight\")])]), _c(\"el-slider\", {\n staticClass: \"astrict\",\n staticStyle: {\n width: \"95%\"\n },\n attrs: {\n step: 0.01,\n min: 0.01,\n max: 1\n },\n model: {\n value: _vm.SettingInfo.fineTunes.prompt_loss_weight,\n callback: function ($$v) {\n _vm.$set(_vm.SettingInfo.fineTunes, \"prompt_loss_weight\", $$v);\n },\n expression: \"SettingInfo.fineTunes.prompt_loss_weight\"\n }\n })], 1), _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\",\n \"background-color\": \"#409EFF\"\n },\n on: {\n click: _vm.createFine\n }\n }, [_vm._v(\" \" + _vm._s(_vm.$t(\"slightly.create\")) + \" \")])])])], 1)]), _c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingStatus == 4,\n expression: \"SettingStatus == 4\"\n }]\n }, [_c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.uploadFile\n }\n }, [_c(\"input\", {\n ref: \"fileInput\",\n staticStyle: {\n display: \"none\"\n },\n attrs: {\n type: \"file\"\n },\n on: {\n change: _vm.onFileChange\n }\n }), _c(\"svg\", {\n staticClass: \"icon\",\n attrs: {\n t: \"1679458974300\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": 
\"1590\",\n width: \"30\",\n height: \"30\"\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M567.466667 634.325333v234.666667a21.333333 21.333333 0 0 1-21.333334 21.333333h-42.666666a21.333333 21.333333 0 0 1-21.333334-21.333333v-234.666667H413.866667a8.533333 8.533333 0 0 1-6.826667-13.653333l110.933333-147.925333a8.533333 8.533333 0 0 1 13.653334 0l110.933333 147.925333a8.533333 8.533333 0 0 1-6.826667 13.653333h-68.266666z\",\n fill: \"#ffffff\",\n \"p-id\": \"1591\"\n }\n }), _c(\"path\", {\n attrs: {\n d: \"M768 725.333333a128 128 0 0 0 38.613333-250.112l-39.850666-12.586666-14.506667-39.253334a256.128 256.128 0 0 0-480.554667 0l-14.464 39.253334-39.850666 12.586666A128.085333 128.085333 0 0 0 256 725.333333a42.666667 42.666667 0 0 1 0 85.333334 213.333333 213.333333 0 0 1-64.341333-416.810667 341.461333 341.461333 0 0 1 640.682666 0A213.418667 213.418667 0 0 1 768 810.666667a42.666667 42.666667 0 0 1 0-85.333334z\",\n fill: \"#ffffff\",\n \"p-id\": \"1592\"\n }\n })]), _vm._v(\" \" + _vm._s(_vm.$t(\"file.upload\")) + \" \")]), _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.deleteOnFile\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-shanchu\",\n staticStyle: {\n color: \"#fff\",\n \"margin-right\": \"10px\"\n }\n }), _vm._v(\" \" + _vm._s(_vm.$t(\"file.delete\")) + \" \")]), _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.retrieveOnFile\n }\n }, [_vm._v(\" \" + _vm._s(_vm.$t(\"file.retrieve\")) + \" \")]), _c(\"div\", {\n staticClass: \"fineTune boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.retrieveOnFileContent\n }\n }, [_vm._v(\" \" + _vm._s(_vm.$t(\"file.view\")) + \" \")])])]), _c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingStatus == 5,\n expression: \"SettingStatus == 5\"\n }]\n }, [_c(\"div\", {\n staticClass: \"session boxinput\",\n on: {\n click: _vm.newSession\n }\n }, [_c(\"svg\", {\n staticClass: \"icon\",\n attrs: {\n t: \"1679215361568\",\n viewBox: \"0 0 1024 1024\",\n version: \"1.1\",\n xmlns: \"http://www.w3.org/2000/svg\",\n \"p-id\": \"3128\",\n width: \"25\",\n height: \"25\"\n }\n }, [_c(\"path\", {\n attrs: {\n d: \"M512.001024 0A512 512 0 0 0 0.001024 512a506.88 506.88 0 0 0 92.16 292.352V972.8a51.2 51.2 0 0 0 51.2 51.2H512.001024a512 512 0 0 0 0-1024z m0 921.6H194.561024v-134.144a51.2 51.2 0 0 0-10.24-30.72A406.016 406.016 0 0 1 102.401024 512a409.6 409.6 0 1 1 409.6 409.6z\",\n fill: \"#ffffff\",\n \"p-id\": \"3129\"\n }\n }), _c(\"path\", {\n attrs: {\n d: \"M716.801024 486.4a51.2 51.2 0 0 0-51.2 51.2 153.6 153.6 0 0 1-307.2 0 51.2 51.2 0 0 0-102.4 0 256 256 0 0 0 512 0 51.2 51.2 0 0 0-51.2-51.2z\",\n fill: \"#ffffff\",\n \"p-id\": \"3130\"\n }\n })]), _vm._v(\" \" + _vm._s(_vm.$t(\"session.create\")) + \" \")]), _c(\"div\", {\n staticClass: \"session boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.exportObjArrAllToJson\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-daochu\",\n staticStyle: {\n color: \"#fff\",\n \"margin-right\": \"10px\"\n }\n }), _vm._v(\" \" + _vm._s(_vm.$t(\"session.export\")) + \" \")]), _c(\"div\", {\n staticClass: \"session boxinput\",\n on: {\n 
click: _vm.importFromJsonArrAll\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-daoru\",\n staticStyle: {\n color: \"#fff\",\n \"margin-right\": \"10px\"\n }\n }), _vm._v(\" \" + _vm._s(_vm.$t(\"session.import\")) + \" \"), _c(\"input\", {\n ref: \"onupdateJosnArrAll\",\n staticStyle: {\n display: \"none\"\n },\n attrs: {\n type: \"file\"\n },\n on: {\n change: _vm.handleFileUploadAll\n }\n })]), _c(\"div\", {\n staticClass: \"session boxinput\",\n on: {\n click: _vm.clearAllContext\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-qingchu\",\n staticStyle: {\n color: \"#fff\",\n \"margin-right\": \"10px\"\n }\n }), _vm._v(\" \" + _vm._s(_vm.$t(\"session.clear\")) + \" \")])])]), _c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingStatus == 6,\n expression: \"SettingStatus == 6\"\n }]\n }, [_c(\"div\", {\n staticClass: \"block\"\n }, [_c(\"input\", {\n directives: [{\n name: \"model\",\n rawName: \"v-model\",\n value: _vm.roleSearch,\n expression: \"roleSearch\"\n }],\n staticClass: \"weitiao\",\n attrs: {\n placeholder: _vm.$t(\"placeholder.role_name\")\n },\n domProps: {\n value: _vm.roleSearch\n },\n on: {\n input: function ($event) {\n if ($event.target.composing) return;\n _vm.roleSearch = $event.target.value;\n }\n }\n })]), _vm._l(_vm.roleList, function (roleInfo) {\n return _c(\"div\", {\n key: roleInfo.act,\n staticClass: \"personList\",\n on: {\n click: function ($event) {\n return _vm.roleClick(roleInfo);\n }\n }\n }, [_c(\"RoleCard\", {\n attrs: {\n roleInfo: roleInfo,\n prCurrent: _vm.prCurrent\n }\n })], 1);\n })], 2)]), _c(\"el-collapse-transition\", [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.SettingStatus == 7,\n expression: \"SettingStatus == 7\"\n }]\n }, [_c(\"div\", {\n staticClass: \"session boxinput\",\n staticStyle: {\n \"margin-left\": \"0px\",\n \"margin-right\": \"0px\",\n width: \"99%\"\n },\n on: {\n click: _vm.changeLanguage\n }\n }, [_c(\"span\", {\n staticClass: \"iconfont icon-iconyuanbanben_fanyi\",\n staticStyle: {\n color: \"#fff\",\n \"margin-right\": \"10px\"\n }\n }), _vm._v(\" \" + _vm._s(_vm.$t(\"setting.Language\")) + \" \")])])])], 1)], 1)], 1)]);\n};\nvar staticRenderFns = [function () {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"title\",\n staticStyle: {\n \"text-align\": \"center\"\n }\n }, [_c(\"h2\", [_vm._v(\"OpenAI-Manager(科学~)\")])]);\n}];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/index.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=template&id=f89df198&": -/*!***********************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** 
./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=template&id=f89df198& ***! - \***********************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"div\", {\n staticClass: \"setting\"\n }, [_c(\"el-container\", [_c(\"el-header\", [_c(\"transition\", {\n attrs: {\n name: \"el-zoom-in-top\"\n }\n }, [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.show,\n expression: \"show\"\n }],\n staticClass: \"transition-box\"\n }, [_c(\"h3\", [_vm._v(\"JUN CHEN MO\")])])])], 1), _c(\"el-main\", [_c(\"transition\", {\n attrs: {\n name: \"el-zoom-in-top\"\n }\n }, [_c(\"div\", {\n directives: [{\n name: \"show\",\n rawName: \"v-show\",\n value: _vm.show,\n expression: \"show\"\n }],\n staticClass: \"transition-box\"\n }, [_c(\"span\", [_vm._v(\" 很感谢大家对我的支持,现已接入OpenAI的Models API、Completions API、Chat API、Audio API、Images API、Files API、Fine-tunes API后续会添加更多有意思的功能进去,希望大家给我的GitHub点个小小的星星, 大家如果有什么好的想法可以在GitHub中提出来,My Age 19。 \")]), _c(\"div\", [_c(\"a\", {\n attrs: {\n href: \"https://space.bilibili.com/326625155?spm_id_from=333.337.0.0\"\n }\n }, [_vm._v(\"BliBili\")]), _vm._v(\"---\"), _c(\"a\", {\n attrs: {\n href: \"https://github.com/202252197/ChatGPT_JCM\"\n }\n }, [_vm._v(\"GitHub\")])]), _c(\"div\", [_c(\"h3\", [_vm._v(\"愿半生编码,如一生老友\")])]), _c(\"div\", [_c(\"img\", {\n attrs: {\n src: \"https://i.328888.xyz/2023/04/03/iHKA4H.jpeg\",\n alt: \"drawing\",\n width: \"300px\",\n height: \"300px\"\n }\n }), _c(\"br\"), _vm._v(\"如有问题请+上方微信 \")])])])], 1)], 1)], 1);\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/setting.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=template&id=3c4a7241&": -/*!*****************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** 
./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use[0]!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet[1].rules[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=template&id=3c4a7241& ***! - \*****************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"render\": function() { return /* binding */ render; },\n/* harmony export */ \"staticRenderFns\": function() { return /* binding */ staticRenderFns; }\n/* harmony export */ });\nvar render = function render() {\n var _vm = this,\n _c = _vm._self._c;\n return _c(\"el-container\", {\n staticStyle: {\n height: \"94vh\"\n }\n });\n};\nvar staticRenderFns = [];\nrender._withStripped = true;\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/user/userInfo.vue?./node_modules/babel-loader/lib/index.js??clonedRuleSet-40.use%5B0%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/templateLoader.js??ruleSet%5B1%5D.rules%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./src/api/getData.js": -/*!****************************!*\ - !*** ./src/api/getData.js ***! - \****************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"cancelFineTune\": function() { return /* binding */ cancelFineTune; },\n/* harmony export */ \"createEmbeddings\": function() { return /* binding */ createEmbeddings; },\n/* harmony export */ \"createFineTune\": function() { return /* binding */ createFineTune; },\n/* harmony export */ \"createImage\": function() { return /* binding */ createImage; },\n/* harmony export */ \"createImageEdit\": function() { return /* binding */ createImageEdit; },\n/* harmony export */ \"createImageVariations\": function() { return /* binding */ createImageVariations; },\n/* harmony export */ \"createTranscription\": function() { return /* binding */ createTranscription; },\n/* harmony export */ \"createTranslation\": function() { return /* binding */ createTranslation; },\n/* harmony export */ \"deleteFile\": function() { return /* binding */ deleteFile; },\n/* harmony export */ \"deleteFineTuneModel\": function() { return /* binding */ deleteFineTuneModel; },\n/* harmony export */ \"getChatMsg\": function() { return /* binding */ getChatMsg; },\n/* harmony export */ \"getFilesList\": function() { return /* binding */ getFilesList; },\n/* harmony export */ \"getFineTuneEventsList\": function() { return /* binding */ getFineTuneEventsList; },\n/* harmony export */ \"getFineTunesList\": function() { return /* binding */ getFineTunesList; },\n/* harmony export */ \"getModels\": function() { return /* binding */ getModels; },\n/* harmony export */ \"getMoneyInfo\": function() { return /* binding */ getMoneyInfo; },\n/* harmony export */ \"getRoles\": function() { return /* binding */ getRoles; },\n/* harmony export */ 
\"retrieveFile\": function() { return /* binding */ retrieveFile; },\n/* harmony export */ \"retrieveFileContent\": function() { return /* binding */ retrieveFileContent; },\n/* harmony export */ \"retrieveFineTune\": function() { return /* binding */ retrieveFineTune; },\n/* harmony export */ \"uploadFile\": function() { return /* binding */ uploadFile; }\n/* harmony export */ });\n/* harmony import */ var core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! core-js/modules/es.array.push.js */ \"./node_modules/core-js/modules/es.array.push.js\");\n/* harmony import */ var core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(core_js_modules_es_array_push_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var core_js_modules_es_array_unshift_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! core-js/modules/es.array.unshift.js */ \"./node_modules/core-js/modules/es.array.unshift.js\");\n/* harmony import */ var core_js_modules_es_array_unshift_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(core_js_modules_es_array_unshift_js__WEBPACK_IMPORTED_MODULE_1__);\n/* harmony import */ var _index__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ./index */ \"./src/api/index.js\");\n/* harmony import */ var _store_mutation_types__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! ../store/mutation-types */ \"./src/store/mutation-types.js\");\n/* harmony import */ var _util_util__WEBPACK_IMPORTED_MODULE_4__ = __webpack_require__(/*! @/util/util */ \"./src/util/util.js\");\n/* provided dependency */ var console = __webpack_require__(/*! ./node_modules/console-browserify/index.js */ \"./node_modules/console-browserify/index.js\");\n\n\n\n\n\nlet axios = _index__WEBPACK_IMPORTED_MODULE_2__[\"default\"].axios;\nlet baseUrl = _index__WEBPACK_IMPORTED_MODULE_2__[\"default\"].baseUrl;\n\n// 根据name查找元素的索引\nfunction findIndexByName(arr, name) {\n for (let i = 0; i < arr.length; i++) {\n if (arr[i].name === name || arr[i] === name) {\n return i;\n }\n }\n return -1; // 没有找到对应的元素\n}\n\nconst desp_model = {\n \"gpt-3.5-turbo\": \"chatgpt v3.5 所基于的模型\",\n \"ada\": \"自然语言模型,OpenAI提供的最快,最便宜的模型,但性能也最差,含有ada字眼的模型都是基于ada训练而来\",\n \"babbage\": \"自然语言模型,性能比ada强,价格比ada贵,规模比ada大,含有babbage字眼的模型都是基于babbage训练而来\",\n \"curie\": \"自然语言模型,性能优于ada,babbage,价钱也更贵,规模更大,含有curie字眼的模型都是基于curie训练而来\",\n \"davinci\": \"自然语言模型,在ada,babbage,curie和davinci中性能最优,规模最大,速度最慢,价钱最贵,含有davinci字眼的模型都是基于davinci训练而来,目前chatgpt基于davinci微调而来\",\n \"whisper-1\": \"强大的语音转换文本的模型\"\n};\nconst other_desps = {\n \"code\": \"的AI代码处理模型\",\n \"similarity\": \"的AI文本相似度计算模型\",\n \"document\": \"的大文档处理模型\",\n \"text\": \"的文本处理模型\",\n \"instruct\": \"的人工指令微调模型\",\n \"if\": \"一个分支\"\n};\nconst desp_keys = Object.keys(desp_model);\nconst other_desp_keys = Object.keys(other_desps);\nfunction produceModelDesc(model) {\n const idx = findIndexByName(desp_keys, model);\n if (idx !== -1) {\n return desp_model[model];\n } else {\n let desc = '';\n for (let i = 0; i < desp_keys.length; i++) {\n const key = desp_keys[i];\n if (model.includes(key)) {\n desc += `基于语言模型${key}`;\n break;\n }\n }\n for (let i = 0; i < other_desp_keys.length; i++) {\n const key = other_desp_keys[i];\n if (model.includes(key)) {\n desc += other_desps[key];\n break;\n }\n }\n if (desc == \"\") {\n desc = model + \"模型\";\n }\n return desc;\n }\n}\n\n// 获取模型列表\nconst getModels = token => {\n return axios({\n method: 'get',\n baseURL: `${baseUrl}/v1/models`,\n headers: 
{\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n const modelsObj = [];\n //获取所有的模型\n const models = [...new Set(res.data.data.map(model => model.id))].sort();\n models.forEach(model => {\n let modelObj = {\n img: \"\",\n name: model,\n detail: produceModelDesc(model),\n lastMsg: produceModelDesc(model),\n id: model,\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_3__.AI_HEAD_IMG_URL,\n showHeadImg: true\n };\n modelsObj.push(modelObj);\n });\n // 将gpt-3.5-turbo置顶\n const idx = findIndexByName(modelsObj, \"gpt-3.5-turbo\");\n if (idx !== -1) {\n const element = modelsObj.splice(idx, 1)[0]; // 将idx元素删除\n modelsObj.unshift(element); // 将idx出的元素至于列表头\n }\n\n return modelsObj;\n });\n};\n// 获取角色列表\nconst getRoles = () => {\n return axios({\n method: 'get',\n baseURL: `user_custom.json`,\n headers: {\n 'Content-Type': 'application/json'\n }\n });\n};\n\n// 根据提示创建图像\nconst createImage = (params, token) => {\n console.log(params);\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/images/generations`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n },\n data: params\n }).then(res => {\n return res.data.data;\n });\n};\n\n// 根据提示词编辑图像\nconst createImageEdit = (formData, token) => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/images/edits`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'multipart/form-data'\n },\n data: formData\n }).then(res => {\n return res.data.data;\n });\n};\n\n// 根据创建图像变体\nconst createImageVariations = (formData, token) => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/images/variations`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'multipart/form-data'\n },\n data: formData\n }).then(res => {\n return res.data.data;\n });\n};\n\n// 将音频转换为文字\nconst createTranscription = (formData, token) => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/audio/transcriptions`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'multipart/form-data'\n },\n data: formData\n }).then(res => {\n return res.data;\n });\n};\n\n// 将音频翻译成英语\nconst createTranslation = (formData, token) => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/audio/translations`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'multipart/form-data'\n },\n data: formData\n }).then(res => {\n return res.data;\n });\n};\n\n// 创建微调\nconst createFineTune = (formData, token) => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/fine-tunes`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n },\n data: formData\n }).then(res => {\n return res.data;\n }).catch(e => {\n console.log(e);\n });\n};\n\n// 列出微调\nconst getFineTunesList = token => {\n return axios({\n method: 'get',\n baseURL: `${baseUrl}/v1/fine-tunes`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n console.log(res);\n const fineTunesObjs = [];\n res.data.data.forEach(fineTunes => {\n let fineTunesObj = {\n img: \"\",\n name: fineTunes.fine_tuned_model,\n detail: \"基于\" + fineTunes.model + \"微调创建的模型\",\n lastMsg: \"基于\" + fineTunes.model + \"微调创建的模型\",\n id: fineTunes.fine_tuned_model ? 
fineTunes.fine_tuned_model : (0,_util_util__WEBPACK_IMPORTED_MODULE_4__.generateUUID)(),\n headImg: _store_mutation_types__WEBPACK_IMPORTED_MODULE_3__.AI_HEAD_IMG_URL,\n showHeadImg: true,\n createTime: fineTunes.created_at,\n fineTunesId: fineTunes.id,\n fineTunesStatus: fineTunes.status\n };\n fineTunesObjs.push(fineTunesObj);\n });\n return fineTunesObjs.sort((a, b) => b.createTime - a.createTime);\n });\n};\n\n// 检索微调信息\nconst retrieveFineTune = (fineTuneId, token) => {\n return axios({\n method: 'get',\n baseURL: `${baseUrl}/v1/fine-tunes/` + fineTuneId,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n return res.data;\n });\n};\n\n// 取消微调\nconst cancelFineTune = (fineTuneId, token) => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/fine-tunes/` + fineTuneId + '/cancel',\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n return res.data;\n });\n};\n\n// 获取微调事件列表\nconst getFineTuneEventsList = (fineTuneId, token) => {\n return axios({\n method: 'get',\n baseURL: `${baseUrl}/v1/fine-tunes/` + fineTuneId + '/events',\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'multipart/form-data'\n },\n data: fineTuneId\n }).then(res => {\n return res.data;\n });\n};\n\n// 删除微调模型\nconst deleteFineTuneModel = (model, token) => {\n return axios({\n method: 'delete',\n baseURL: `${baseUrl}/v1/models/` + model,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n return res.data;\n });\n};\n\n//获取文件列表\nconst getFilesList = token => {\n return axios({\n method: 'get',\n baseURL: `${baseUrl}/v1/files`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n console.log(res);\n const fileObjs = [];\n res.data.data.forEach(file => {\n let fileObj = {\n img: \"\",\n name: file.filename,\n detail: \"文件ID是:\" + file.id + \",文件大小是:\" + (file.bytes / 1024 / 1024).toFixed(2) + \"MB\",\n lastMsg: \"文件ID是:\" + file.id + \",文件大小是:\" + (file.bytes / 1024 / 1024).toFixed(2) + \"MB\",\n id: file.filename,\n createTime: file.created_at,\n fileId: file.id\n };\n fileObjs.push(fileObj);\n });\n return fileObjs.sort((a, b) => b.createTime - a.createTime);\n });\n};\n\n// 删除文件\nconst deleteFile = (file, token) => {\n return axios({\n method: 'delete',\n baseURL: `${baseUrl}/v1/files/` + file,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n return res.data;\n });\n};\n\n// 上传JSONL文件\nconst uploadFile = (formData, token) => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/files`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'multipart/form-data'\n },\n data: formData\n }).then(res => {\n console.log(\"文件上传成功\");\n console.log(res);\n return res.data;\n });\n};\n\n// 检索文件\nconst retrieveFile = (file, token) => {\n return axios({\n method: 'get',\n baseURL: `${baseUrl}/v1/files/` + file,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n return res.data;\n });\n};\n\n// 检索文件内容\nconst retrieveFileContent = (file, token) => {\n\n // return axios({\n // method: 'get',\n // baseURL: `${baseUrl}v1/files/`+file+`/content`,\n // headers: {\n // 'Authorization': 'Bearer ' + token\n // }\n // }).then(response => {\n // const writer = fs.createWriteStream('./file.txt')\n // 
response.data.pipe(writer)\n // })\n};\n\n// 检索文件内容\nconst createEmbeddings = (params, token) => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/v1/embeddings`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n },\n data: params\n }).then(response => {\n console.log(response);\n return response.data;\n });\n};\n\n// 获取账号余额信息\nconst getMoneyInfo = token => {\n return axios({\n method: 'get',\n baseURL: `${baseUrl}/dashboard/billing/credit_grants`,\n headers: {\n 'Authorization': 'Bearer ' + token,\n 'Content-Type': 'application/json'\n }\n }).then(res => {\n return res.data;\n });\n};\n\n// 获取聊天信息\nconst getChatMsg = params => {\n return axios({\n method: 'post',\n baseURL: `${baseUrl}/friend/chatMsg`,\n data: params\n }).then(res => res.data);\n};\n\n//# sourceURL=webpack://JCM-AI/./src/api/getData.js?"); - -/***/ }), - -/***/ "./src/api/index.js": -/*!**************************!*\ - !*** ./src/api/index.js ***! - \**************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var axios__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! axios */ \"./node_modules/axios/lib/axios.js\");\n/* provided dependency */ var console = __webpack_require__(/*! ./node_modules/console-browserify/index.js */ \"./node_modules/console-browserify/index.js\");\n\n\n//全局参数,自定义参数可在发送请求时设置\naxios__WEBPACK_IMPORTED_MODULE_0__[\"default\"].defaults.timeout = 300000000; //超时时间ms\naxios__WEBPACK_IMPORTED_MODULE_0__[\"default\"].defaults.withCredentials = false;\n// 请求时的拦截\n//回调里面不能获取错误信息\naxios__WEBPACK_IMPORTED_MODULE_0__[\"default\"].interceptors.request.use(function (config) {\n return config;\n}, function (error) {\n // 当请求异常时做一些处理\n console.log('请求异常:' + JSON.stringify(error));\n return Promise.reject(error);\n});\naxios__WEBPACK_IMPORTED_MODULE_0__[\"default\"].interceptors.response.use(function (response) {\n // Do something with response data\n\n return response;\n}, function (error) {\n // Do something with response error\n console.log('响应出错:' + error);\n return Promise.reject(error);\n});\nconst base = {\n axios: axios__WEBPACK_IMPORTED_MODULE_0__[\"default\"],\n baseUrl: 'https://api.openai.com'\n};\n/* harmony default export */ __webpack_exports__[\"default\"] = (base);\n\n//# sourceURL=webpack://JCM-AI/./src/api/index.js?"); - -/***/ }), - -/***/ "./src/config/i18n.js": -/*!****************************!*\ - !*** ./src/config/i18n.js ***! - \****************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var vue__WEBPACK_IMPORTED_MODULE_5__ = __webpack_require__(/*! vue */ \"./node_modules/vue/dist/vue.runtime.esm.js\");\n/* harmony import */ var vue_i18n__WEBPACK_IMPORTED_MODULE_6__ = __webpack_require__(/*! vue-i18n */ \"./node_modules/vue-i18n/dist/vue-i18n.esm.js\");\n/* harmony import */ var _lang_en__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! @/lang/en */ \"./src/lang/en.js\");\n/* harmony import */ var _lang_zh_CN__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! @/lang/zh-CN */ \"./src/lang/zh-CN.js\");\n/* harmony import */ var element_ui_lib_locale_lang_en__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! 
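/*
 * Readable sketch of what the minified module above (./src/api/getData.js,
 * together with ./src/api/index.js) compiles from. Reconstructed from the
 * bundle and simplified: the real module also decorates each model with an
 * avatar URL and a Chinese-language description before returning it.
 * `baseUrl` is 'https://api.openai.com', as defined in ./src/api/index.js.
 */
import base from './index'; // provides { axios, baseUrl }

const { axios, baseUrl } = base;

// GET /v1/models with a Bearer token, de-duplicate and sort the model ids,
// then pin "gpt-3.5-turbo" to the front of the list (as the bundle does).
export const getModels = (token) =>
  axios({
    method: 'get',
    baseURL: `${baseUrl}/v1/models`,
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
  }).then((res) => {
    const ids = [...new Set(res.data.data.map((m) => m.id))].sort();
    const pinned = ids.indexOf('gpt-3.5-turbo');
    if (pinned !== -1) ids.unshift(ids.splice(pinned, 1)[0]);
    return ids;
  });

// The remaining helpers in the module follow the same axios pattern and differ
// only in endpoint, HTTP verb and Content-Type (multipart/form-data for uploads):
//   POST   /v1/images/generations, /v1/images/edits, /v1/images/variations
//   POST   /v1/audio/transcriptions, /v1/audio/translations
//   GET    /v1/files, /v1/fine-tunes, /v1/fine-tunes/{id}, /v1/fine-tunes/{id}/events
//   POST   /v1/files, /v1/fine-tunes, /v1/fine-tunes/{id}/cancel, /v1/embeddings
//   DELETE /v1/files/{id}, /v1/models/{model}
//   GET    /dashboard/billing/credit_grants  (account balance)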
element-ui/lib/locale/lang/en */ \"./node_modules/element-ui/lib/locale/lang/en.js\");\n/* harmony import */ var element_ui_lib_locale_lang_zh_CN__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! element-ui/lib/locale/lang/zh-CN */ \"./node_modules/element-ui/lib/locale/lang/zh-CN.js\");\n/* harmony import */ var element_ui_lib_locale__WEBPACK_IMPORTED_MODULE_4__ = __webpack_require__(/*! element-ui/lib/locale */ \"./node_modules/element-ui/lib/locale/index.js\");\n\n// 引入i18n插件\n\n// 引入语言包\n\n\n// 引入element-ui语言包\n\n\n// 下面不可少的两个配置【参考官网 按需加载里定制 i18n】\n\nelement_ui_lib_locale__WEBPACK_IMPORTED_MODULE_4__[\"default\"].i18n((key, value) => i18n.t(key, value));\nvue__WEBPACK_IMPORTED_MODULE_5__[\"default\"].use(vue_i18n__WEBPACK_IMPORTED_MODULE_6__[\"default\"]);\nconst messages = {\n en: {\n ..._lang_en__WEBPACK_IMPORTED_MODULE_0__[\"default\"],\n ...element_ui_lib_locale_lang_en__WEBPACK_IMPORTED_MODULE_2__[\"default\"] // element-ui语言包\n },\n\n zh: {\n ..._lang_zh_CN__WEBPACK_IMPORTED_MODULE_1__[\"default\"],\n ...element_ui_lib_locale_lang_zh_CN__WEBPACK_IMPORTED_MODULE_3__[\"default\"]\n }\n};\n\n// 配置i18n\nconst i18n = new vue_i18n__WEBPACK_IMPORTED_MODULE_6__[\"default\"]({\n locale: localStorage.getItem(\"lang\") || \"zh\",\n // 从缓存中获取当前的语言类型\n messages\n});\n/* harmony default export */ __webpack_exports__[\"default\"] = (i18n);\n\n//# sourceURL=webpack://JCM-AI/./src/config/i18n.js?"); - -/***/ }), - -/***/ "./src/lang/en.js": -/*!************************!*\ - !*** ./src/lang/en.js ***! - \************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n placeholder: {\n question: \"Enter your question here~\",\n openai_key: \"Please enter OpenAI KEY\",\n role_name: \"role name\",\n session_name: \"session name\",\n model_name: \"model name\",\n slightly_name: \"fine-tuned model name\",\n file_name: \"file name\",\n suffix: \"Text snippet to add at the end.\",\n stop: \"Token to stop generating text.\",\n response_count: \"Number of Answers Generated\",\n trainingDataFileID: 'ID of training data file',\n validationDataFileID: 'ID of validation data file',\n modelName: 'Model name',\n trainingIterations: 'Number of training iterations',\n batchSize: 'Batch size',\n learningRate: 'Learning rate',\n ftsuffix: 'Suffix'\n },\n session: {\n title: \"session\",\n create: \"create session\",\n export: \"Export the session list\",\n import: \"Import the session list\",\n clear: \"Clear the session list\"\n },\n model: {\n title: \"model\",\n talk: \"chat\",\n online_title: \"online\",\n online: \"Online query after opening\",\n suffix_title: \"suffix\",\n suffix: \"A text snippet to add at the end of the generated text\",\n max_tokens_title: \"Maximum word count\",\n max_tokens: \"Specifies the maximum number of words to generate, which cannot exceed 2048.\",\n temperature_title: \"Randomness(0-2)\",\n temperature: \"Specifies the randomness of the generated text, ranging from 0 to 2, where higher values are more diverse and creative, and lower values are more conservative and deterministic.\",\n top_p_title: \"Reserved word ratio(0-1)\",\n top_p: \"Specifies the proportion of words with the highest probability of being retained at each step, ranging from 0 to 1, similar to temperature, but more flexible and robust.\",\n n_title: \"Result count\",\n n: \"This parameter produces many results\",\n 
stream_title: \"Stream output\",\n stream: \"Enable streaming output\",\n echo_title: \"Echo words\",\n echo: \"echo prompt word\",\n stop_title: \"Stop token\",\n stop: \"Sets the token at which the model stops generating text\",\n frequency_penalty_title: \"Word repetition(0-1)\",\n frequency_penalty: \"Specify the degree to reduce the probability of repeated words, the range is 0 to 1, the higher the more to avoid repetition.\",\n presence_penalty_title: \"Topic repetition(0-1)\",\n presence_penalty: \"Specify the degree to reduce the occurrence probability of repeated topics, ranging from 0 to 1, the higher means avoiding repetition.\",\n max_results_title: \"Specify the amount of online query data, it is not recommended to be too large.\",\n max_results: \"max_results\"\n },\n slightly: {\n title: {\n whole: \"FT\",\n abbreviation: \"FT\"\n },\n retrieveFineTuning: 'Retrieve fine-tuning',\n cancelFineTuning: 'Cancel fine-tuning',\n hideCanceledFineTuning: 'Hide canceled fine-tuning',\n showCanceledFineTuning: 'Show canceled fine-tuning',\n deleteFineTuningModel: 'Delete fine-tuning model',\n createFineTuning: 'Create fine-tuning',\n create: 'Create',\n fileIDTrainingData: 'File ID containing training data',\n fileIDValidationData: 'File ID containing validation data',\n modelOptions: 'You can choose the model name from ada, babbage, curie, davinci, or the name of your own fine-tuned model.',\n epochs: 'By adjusting the number of n_epochs, you can control the training period and number of training times of the model, thereby affecting the performance and convergence speed of the model.',\n batchSize: 'A larger batch_size can speed up the training speed, stability, and generalization ability of the model, while a smaller batch_size can reduce memory and computing resource usage, and improve the performance of the model on test data.',\n fineTunedName: 'A string of up to 40 characters that will be added to the fine-tuned model name.',\n learningRate: 'You can control how many times the learning rate used during fine-tuning training is compared to the learning rate used by the pre-trained model. For example, if you set it to 2.0, the learning rate used during fine-tuning training will be twice that of the pre-trained model.',\n promptAttention: 'Setting a higher value will make the model pay more attention to prompts when generating text, while setting a lower value will make the model focus more on its own language model and generate more free text.'\n },\n file: {\n title: \"file\",\n upload: \"Upload files\",\n delete: \"Delete Files\",\n retrieve: \"Retrieve files\",\n view: \"View file content\"\n },\n image: {\n title: \"image\",\n production: \"Production picture\",\n production_title: \"After opening, the content sent by the chat is information describing the picture\",\n change: \"Change picture\",\n change_title: \"After opening, upload the picture first, and then enter the prompt words to modify.\",\n size: \"Size\",\n size_title: \"The size of the image.\",\n count: \"Quantity\",\n count_title: \"The number of generated images.\"\n },\n audio: {\n title: \"audio\",\n to_text_title: \"Speech to Text\",\n to_text: \"Speech to Text\",\n language_title: \"Translate speech or audio files from one or more source languages to a target language\",\n language: \"Language\",\n temperature_title: \"Specify the randomness of speech recognition, ranging from 0 to 1. 
Higher values indicate more diversity and creativity, while lower values indicate more conservatism and certainty.\",\n temperature: \"Randomness(0-1)\"\n },\n role: {\n title: \"role\"\n },\n setting: {\n title: \"settings\",\n Language: \"Chinese Language\"\n },\n file_card: {\n unknown: \"unknown\"\n },\n person_card: {\n train: \"training...\",\n cancel: \"Cancelled\"\n },\n util_js: {\n select: \"Please select an image to upload!\",\n path: \"The path is incorrect!\",\n notallowed: \"This file type is not allowed to be uploaded. please upload \",\n type: \" A file of type, the current file type is\"\n },\n message: {\n start_recording: \"Start recording~\",\n fail_audio: \"Failed to get audio stream~\",\n end_recording: \"End recording~\",\n edit_picture: \"Edit picture mode: Please upload the picture in the upper right corner of the chat window first, and then send the modified content~\",\n msg_empty: \"Message cannot be empty~\",\n model_del: \"The model has been deleted or canceled...\",\n valid_png: \"Please upload a valid PNG file~\",\n less_4M: \"Please upload a file smaller than 4MB~\",\n upload_complete: \"Image upload completed, please give me a prompt to edit~\",\n get_model_fail: \"Failed to get model list~~\",\n valid_json: \"Please upload a valid JSON file~~\",\n only_file: \"Can only search for files~\",\n fail_file: \"Failed to search for files~\",\n openai_free: \"In order to reduce misuse, OpenAI free accounts cannot download fine-tuned training files~\",\n only_del_file: \"Can only delete files~\",\n del_file_succ: \"Congratulations on successfully deleting the file~\",\n del_fail: \"Failed to delete the file~\",\n create_succ: \"Congratulations on successfully creating fine-tuning~\",\n create_fail: \"Failed to create fine-tuning...\",\n only_del_model: \"Can only delete the model in fine-tuning~\",\n del_model_succ: \"Congratulations on successfully deleting the fine-tuned model~\",\n del_fail_ing: \"Failed to delete the fine-tuned model. 
The model is being trained or has been cancelled midway\",\n only_cancel: \"Can only cancel fine-tuned models in training~\",\n cancel_succ: \"Successfully canceled this model~\",\n cancel_fail: \"Failed to cancel the fine-tuned model~\",\n only_model: \"Can only search for fine-tuned models~\",\n verify_model_fail: \"Failed to search for fine-tuned models~\",\n get_files_fail: \"Failed to get file list~\",\n get_roles_fail: \"Failed to get role list~\"\n },\n index: {\n detail: \"The model behind ChatGPT v3.5\",\n lastMsg: \"The model behind ChatGPT v3.5\",\n up_file_id: \"The file has been uploaded successfully, and the file ID is\",\n copy: \", and it has been copied for you~\",\n file_id: \"`File ID:`\",\n file_name: \"`File Name:`\",\n file_size: \"`File Size:`\",\n obj: \"`Object:`\",\n status: \"`Status:`\",\n status_des: \"`Status Description:`\",\n target: \"`Target:`\",\n file_time: \"`File Creation Time:`\",\n task_id: \"`Fine-tuning Task ID:`\",\n task_type: \"`Task Type:`\",\n model_type: \"`Model Type:`\",\n task_time: \"`Fine-tuning Task Creation Time:`\",\n task_list: \"`Fine-tuning Event List`\\n\",\n obj_log_info_time: \"| Object | Log Level | Information | Creation Time |\\n\",\n model_id: \"\\n`Fine-tuned Model ID:`\",\n args: \"\\n\\n`Fine-tuning Arguments:`\\n\",\n item_setting: \"| Property | Value Set |\\n\",\n user_group: \"\\n`User Group:`\",\n results_null: \"\\n\\n`Training Results File List: None`\\n\\n\",\n results: \"\\n\\n`Training Results File List:`\\n\\n\",\n table_head: \"| ID | File Name | File Size | Object | Status | \\n\",\n statu: \"\\n`Status:`\",\n files_null: \"\\n\\n`Training File List: None`\\n\\n\",\n files: \"\\n\\n`Training File List:`\\n\\n\",\n verifys_null: \"\\n\\n`Verification File List: None`\\n\\n\",\n verifys: \"\\n\\n`Verification File List:`\\n\\n\",\n last_time: \"\\n`Last Update Timestamp:`\"\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/lang/en.js?"); - -/***/ }), - -/***/ "./src/lang/zh-CN.js": -/*!***************************!*\ - !*** ./src/lang/zh-CN.js ***! 
- \***************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony default export */ __webpack_exports__[\"default\"] = ({\n placeholder: {\n question: \"在此输入您的问题~\",\n openai_key: \"请输入OpenAI KEY\",\n role_name: \"角色名称\",\n session_name: \"会话名称\",\n model_name: \"模型名称\",\n slightly_name: \"微调模型名称\",\n file_name: \"文件名称\",\n suffix: \"末尾添加的文本片段\",\n stop: \"停止生成文本的令牌\",\n response_count: \"生成的答案次数\",\n trainingDataFileID: '训练数据的文件ID',\n validationDataFileID: '验证数据文件ID',\n modelName: '模型名称',\n trainingIterations: '训练次数',\n batchSize: '每批数据的大小',\n learningRate: '学习率',\n ftsuffix: '后缀'\n },\n session: {\n title: \"会话\",\n create: \"创建会话\",\n export: \"导出会话列表\",\n import: \"导入会话列表\",\n clear: \"清除会话列表\"\n },\n model: {\n title: \"模型\",\n talk: \"对话\",\n online_title: \"联网\",\n online: \"打开之后联网查询\",\n suffix_title: \"后缀\",\n suffix: \"在生成文本末尾添加的文本片段\",\n max_tokens_title: \"最大单词数\",\n max_tokens: \"指定要生成的最大单词数,不能超过2048。\",\n temperature_title: \"随机度(0-2)\",\n temperature: \"指定生成文本的随机性,范围是0到2,越高表示越多样化和创造性,越低表示越保守和确定性。\",\n top_p_title: \"保留词比例(0-1)\",\n top_p: \"指定在每个步骤中保留概率最高的单词的比例,范围是0到1,与temperature类似,但更加灵活和稳健。\",\n n_title: \"结果规模\",\n n: \"此参数会生成许多结果\",\n stream_title: \"流式输出\",\n stream: \"开启流式输出\",\n echo_title: \"回显词\",\n echo: \"回显提示词\",\n stop_title: \"停止令牌\",\n stop: \"设置模型停止生成文本的令牌\",\n frequency_penalty_title: \"单词重复度(0-1)\",\n frequency_penalty: \"指定降低重复单词出现概率的程度,范围是0到1,越高表示越避免重复。\",\n presence_penalty_title: \"话题重复度(0-1)\",\n presence_penalty: \"指定降低重复话题出现概率的程度,范围是0到1,越高表示越避免重复。\",\n max_results_title: \"指定联网查询数据的数量,不建议太大。\",\n max_results: \"查询规模\"\n },\n slightly: {\n title: {\n whole: \"微调\",\n abbreviation: \"微调\"\n },\n retrieveFineTuning: '检索微调',\n cancelFineTuning: '取消微调',\n hideCanceledFineTuning: '隐藏已取消的微调',\n showCanceledFineTuning: '显示已取消的微调',\n deleteFineTuningModel: '删除微调模型',\n createFineTuning: '创建微调',\n create: '创建',\n fileIDTrainingData: '包含训练数据的文件ID',\n fileIDValidationData: '包含验证数据的文件ID',\n modelOptions: '您可以选择ada、babbage、curie、davinci或者是你自己通过微调训练的模型名称',\n epochs: '通过调整n_epochs的数量,可以控制模型的训练时期和训练次数,从而影响模型的性能和收敛速度',\n batchSize: '较大的 batch_size 可以加快模型的训练速度、模型的稳定性和泛化能力,较小的 batch_size 可以减少内存和计算资源的使用、提高模型在测试数据上的性能',\n fineTunedName: '最多 40 个字符的字符串,将添加到微调的模型名称中。',\n learningRate: '可以控制微调训练期间使用的学习率是预训练模型使用的学习率的多少倍。例如,如果您设置为2.0,则微调训练期间使用的学习率将是预训练模型使用的学习率的两倍。',\n promptAttention: '设置较高的值,那么模型在生成文本时会更加注重提示,设置较低的值模型则会更加注重自己的语言模型,生成更自由的文本'\n },\n file: {\n title: \"文件\",\n upload: \"上传文件\",\n delete: \"删除文件\",\n retrieve: \"检索文件\",\n view: \"查看文件内容\"\n },\n image: {\n title: \"图片\",\n production: \"产图模式\",\n production_title: \"打开之后聊天发送的内容为描述图片的信息\",\n change: \"改图模式\",\n change_title: \"打开之后先上传图片,然后再输入提示词进行修改。\",\n size: \"图片大小\",\n size_title: \"生成图片的大小\",\n count: \"图片数量\",\n count_title: \"生成图片的数量\"\n },\n audio: {\n title: \"音频\",\n to_text_title: \"语音转文字\",\n to_text: \"语音转文字\",\n language_title: \"将一个或多个来源语言的语音或音频文件翻译成目标语言\",\n language: \"语言\",\n temperature_title: \"指定语音识别的随机性,范围是0到1,越高表示越多样化和创造性,越低表示越保守和确定性。\",\n temperature: \"随机度(0-1)\"\n },\n role: {\n title: \"角色\"\n },\n setting: {\n title: \"设置\",\n Language: \"英文语言\"\n },\n file_card: {\n unknown: \"未知\"\n },\n person_card: {\n train: \"正在训练...\",\n cancel: \"已取消\"\n },\n util_js: {\n select: \"请选择要上传的图片!\",\n path: \"路径不正确!\",\n notallowed: \"该文件类型不允许上传。请上传 \",\n type: \" 类型的文件,当前文件类型为\"\n },\n message: {\n start_recording: \"开始录音咯~\",\n fail_audio: \"获取音频流失败啦~\",\n 
end_recording: \"结束录音咯~\",\n edit_picture: \"编辑图片模式:请您聊天窗口右上角先上传图片,再发送修改的内容~\",\n msg_empty: \"消息不能为空哦~\",\n model_del: \"模型已被删除或已取消...\",\n valid_png: \"请上传一个有效的PNG文件~\",\n less_4M: \"请上传一个小于4MB的文件~\",\n upload_complete: \"图片上传完成啦,请给我提示进行编辑~\",\n get_model_fail: \"获取模型列表失败哦~~\",\n valid_json: \"请上传一个有效的JSON文件~~\",\n only_file: \"只能检索文件哦~\",\n fail_file: \"文件检索失败了~\",\n openai_free: \"OpenAI为了减少滥用,免费帐户将无法下载微调训练的文件~\",\n only_del_file: \"只能删除文件哦~\",\n del_file_succ: \"恭喜您删除成功~\",\n del_fail: \"文件删除失败了~\",\n create_succ: \"恭喜您微调创建成功~\",\n create_fail: \"微调创建失败了...\",\n only_del_model: \"只能删除微调中的模型哦~\",\n del_model_succ: \"恭喜您微调模型删除成功~\",\n del_fail_ing: \"微调模型删除失败了,模型正在训练中或者中途已取消\",\n only_cancel: \"只能取消进行训练中的微调模型哦~\",\n cancel_succ: \"成功取消此模型~\",\n cancel_fail: \"取消微调模型失败~\",\n only_model: \"只能检索的微调模型哦~\",\n verify_model_fail: \"检索微调模型失败~\",\n get_files_fail: \"获取文件列表失败哦~\",\n get_roles_fail: \"获取角色列表失败哦~\"\n },\n index: {\n detail: \"chatgpt v3.5 所基于的模型\",\n lastMsg: \"chatgpt v3.5 所基于的模型\",\n up_file_id: \"文件已上传成功,文件ID是\",\n copy: \",已经帮您复制啦~\",\n file_id: \"`文件ID:`\",\n file_name: \"`文件名称:`\",\n file_size: \"`文件大小:`\",\n obj: \"`对象:`\",\n status: \"`状态:`\",\n status_des: \"`状态描述`\",\n target: \"`目的` \",\n file_time: \"`文件创建时间`\",\n task_id: \"`微调任务ID:`\",\n task_type: \"`任务类型:`\",\n model_type: \"`模型的类型:`\",\n task_time: \"`微调任务的创建时间:`\",\n task_list: \"`微调的事件列表` \\n\",\n obj_log_info_time: \"| 对象 | 日志级别 | 信息 | 创建时间 |\\n\",\n model_id: \"\\n `微调的模型ID:`\",\n args: \"\\n\\n `微调使用的参数:` \\n\",\n item_setting: \"| 属性 | 设置的值 | \\n\",\n user_group: \"\\n`用户所属组:`\",\n results_null: \"\\n\\n`训练结果文件列表:没有`\\n\\n\",\n results: \"\\n\\n`训练结果文件列表:`\\n\\n\",\n table_head: \"| ID | 文件名称 | 文件大小 | 对象 | 状态 | \\n\",\n statu: \"\\n`状态:`\",\n files_null: \"\\n\\n`训练的文件列表:没有`\\n\\n\",\n files: \"\\n\\n`训练的文件列表:`\\n\\n\",\n verifys_null: \"\\n\\n`验证的文件列表:没有`\\n\\n\",\n verifys: \"\\n\\n`验证的文件列表:`\\n\\n\",\n last_time: \"\\n`最后更新时间戳:`\"\n }\n});\n\n//# sourceURL=webpack://JCM-AI/./src/lang/zh-CN.js?"); - -/***/ }), - -/***/ "./src/main.js": -/*!*********************!*\ - !*** ./src/main.js ***! - \*********************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var vue__WEBPACK_IMPORTED_MODULE_6__ = __webpack_require__(/*! vue */ \"./node_modules/vue/dist/vue.runtime.esm.js\");\n/* harmony import */ var _App_vue__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./App.vue */ \"./src/App.vue\");\n/* harmony import */ var element_ui__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! element-ui */ \"./node_modules/element-ui/lib/element-ui.common.js\");\n/* harmony import */ var element_ui__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(element_ui__WEBPACK_IMPORTED_MODULE_1__);\n/* harmony import */ var vue_router__WEBPACK_IMPORTED_MODULE_7__ = __webpack_require__(/*! vue-router */ \"./node_modules/vue-router/dist/vue-router.esm.js\");\n/* harmony import */ var element_ui_lib_theme_chalk_index_css__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! element-ui/lib/theme-chalk/index.css */ \"./node_modules/element-ui/lib/theme-chalk/index.css\");\n/* harmony import */ var element_ui_lib_theme_chalk_index_css__WEBPACK_IMPORTED_MODULE_2___default = /*#__PURE__*/__webpack_require__.n(element_ui_lib_theme_chalk_index_css__WEBPACK_IMPORTED_MODULE_2__);\n/* harmony import */ var _router_index__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! 
./router/index */ \"./src/router/index.js\");\n/* harmony import */ var _util_util__WEBPACK_IMPORTED_MODULE_4__ = __webpack_require__(/*! @/util/util */ \"./src/util/util.js\");\n/* harmony import */ var _config_i18n__WEBPACK_IMPORTED_MODULE_5__ = __webpack_require__(/*! @/config/i18n */ \"./src/config/i18n.js\");\n\n\n\n\n\n\n\n\nvue__WEBPACK_IMPORTED_MODULE_6__[\"default\"].use(vue_router__WEBPACK_IMPORTED_MODULE_7__[\"default\"]);\nvue__WEBPACK_IMPORTED_MODULE_6__[\"default\"].config.productionTip = false;\nvue__WEBPACK_IMPORTED_MODULE_6__[\"default\"].use((element_ui__WEBPACK_IMPORTED_MODULE_1___default()));\n\n/**\r\n * 复制\r\n */\n\nvue__WEBPACK_IMPORTED_MODULE_6__[\"default\"].prototype.$copy = function (value, mes) {\n if ((0,_util_util__WEBPACK_IMPORTED_MODULE_4__.copyToClipboard)(value)) {\n this.$message.success(mes);\n }\n};\nnew vue__WEBPACK_IMPORTED_MODULE_6__[\"default\"]({\n i18n: _config_i18n__WEBPACK_IMPORTED_MODULE_5__[\"default\"],\n router: _router_index__WEBPACK_IMPORTED_MODULE_3__[\"default\"],\n render: h => h(_App_vue__WEBPACK_IMPORTED_MODULE_0__[\"default\"])\n}).$mount('#app');\n\n//# sourceURL=webpack://JCM-AI/./src/main.js?"); - -/***/ }), - -/***/ "./src/router/index.js": -/*!*****************************!*\ - !*** ./src/router/index.js ***! - \*****************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var vue_router__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! vue-router */ \"./node_modules/vue-router/dist/vue-router.esm.js\");\n/* harmony import */ var _view_pages_chatHome_index_vue__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../view/pages/chatHome/index.vue */ \"./src/view/pages/chatHome/index.vue\");\n/* harmony import */ var _view_pages_setting_vue__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../view/pages/setting.vue */ \"./src/view/pages/setting.vue\");\n/* harmony import */ var _view_pages_user_userInfo_vue__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ../view/pages/user/userInfo.vue */ \"./src/view/pages/user/userInfo.vue\");\n\n\n\n\n/* harmony default export */ __webpack_exports__[\"default\"] = (new vue_router__WEBPACK_IMPORTED_MODULE_3__[\"default\"]({\n routes: [{\n path: \"/\",\n redirect: \"/ChatHome\"\n }, {\n path: \"/ChatHome\",\n name: \"ChatHome\",\n component: _view_pages_chatHome_index_vue__WEBPACK_IMPORTED_MODULE_0__[\"default\"]\n }, {\n path: \"/Setting\",\n name: \"Setting\",\n component: _view_pages_setting_vue__WEBPACK_IMPORTED_MODULE_1__[\"default\"]\n }, {\n path: \"/UserInfo\",\n name: \"UserInfo\",\n component: _view_pages_user_userInfo_vue__WEBPACK_IMPORTED_MODULE_2__[\"default\"]\n }]\n}));\n\n//# sourceURL=webpack://JCM-AI/./src/router/index.js?"); - -/***/ }), - -/***/ "./src/store/mutation-types.js": -/*!*************************************!*\ - !*** ./src/store/mutation-types.js ***! 
- \*************************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"AI_HEAD_IMG_URL\": function() { return /* binding */ AI_HEAD_IMG_URL; },\n/* harmony export */ \"USER_HEAD_IMG_URL\": function() { return /* binding */ USER_HEAD_IMG_URL; },\n/* harmony export */ \"USER_NAME\": function() { return /* binding */ USER_NAME; }\n/* harmony export */ });\n//AI头像地址设置\nconst AI_HEAD_IMG_URL = \"https://th.bing.com/th?id=ODL.3e2fbff4543f0d3632d34be6d02adc93&w=100&h=100&c=12&pcl=faf9f7&o=6&dpr=1.5&pid=13.1\";\n//用户头像地址设置\nconst USER_HEAD_IMG_URL = \"https://avatars.githubusercontent.com/u/40659515?v=4\";\n//用户名称设置\nconst USER_NAME = \"君尘陌\";\n\n//# sourceURL=webpack://JCM-AI/./src/store/mutation-types.js?"); - -/***/ }), - -/***/ "./src/util/util.js": -/*!**************************!*\ - !*** ./src/util/util.js ***! - \**************************/ -/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony export */ __webpack_require__.d(__webpack_exports__, {\n/* harmony export */ \"JCMFormatDate\": function() { return /* binding */ JCMFormatDate; },\n/* harmony export */ \"JCMFormatTimestamp\": function() { return /* binding */ JCMFormatTimestamp; },\n/* harmony export */ \"animation\": function() { return /* binding */ animation; },\n/* harmony export */ \"copyToClipboard\": function() { return /* binding */ copyToClipboard; },\n/* harmony export */ \"debounce\": function() { return /* binding */ debounce; },\n/* harmony export */ \"fileType\": function() { return /* binding */ fileType; },\n/* harmony export */ \"generateUUID\": function() { return /* binding */ generateUUID; },\n/* harmony export */ \"getNowTime\": function() { return /* binding */ getNowTime; },\n/* harmony export */ \"judgeFileType\": function() { return /* binding */ judgeFileType; },\n/* harmony export */ \"throttle\": function() { return /* binding */ throttle; }\n/* harmony export */ });\n/* harmony import */ var _config_i18n__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! @/config/i18n */ \"./src/config/i18n.js\");\n/* provided dependency */ var console = __webpack_require__(/*! ./node_modules/console-browserify/index.js */ \"./node_modules/console-browserify/index.js\");\n\n//防抖\nfunction debounce(fn) {\n console.log(1);\n let t = null; //只会执行一次\n debugger;\n return function () {\n if (t) {\n clearTimeout(t);\n }\n t = setTimeout(() => {\n console.log(temp); //可以获取\n // console.log(arguments[0]) //undefined\n fn.apply(this, arguments);\n //在这个回调函数里面的argument是这个回调函数的参数,因为没有参数所以undefined,可以通过外面的函数赋值来进行访问\n //也可以改变成箭头函数,箭头函数的this是指向定义函数的那一层的,所以访问到的arguments是上一层函数的arguments\n }, 1000);\n };\n}\n//节流\nfunction throttle(fn, delay = 200) {\n let timer = null;\n console.log(fn);\n debugger;\n return function () {\n if (timer) return;\n timer = setTimeout(() => {\n debugger;\n fn.apply(this, arguments);\n timer = null;\n });\n };\n}\n//下拉动画\nfunction animation(obj, target, fn1) {\n // console.log(fn1);\n // fn是一个回调函数,在定时器结束的时候添加\n // 每次开定时器之前先清除掉定时器\n clearInterval(obj.timer);\n obj.timer = setInterval(function () {\n // 步长计算公式 越来越小\n // 步长取整\n var step = (target - obj.scrollTop) / 10;\n step = step > 0 ? 
Math.ceil(step) : Math.floor(step);\n if (obj.scrollTop >= target) {\n clearInterval(obj.timer);\n // 如果fn1存在,调用fn\n if (fn1) {\n fn1();\n }\n } else {\n // 每30毫秒就将新的值给obj.left\n obj.scrollTop = obj.scrollTop + step;\n }\n }, 10);\n}\n\n//判断文件类型\nfunction judgeFileType(file) {\n if (file == null || file == \"\") {\n alert(_config_i18n__WEBPACK_IMPORTED_MODULE_0__[\"default\"].t('util_js.select'));\n return false;\n }\n if (file.lastIndexOf('.') == -1) {\n //如果不存在\".\" \n alert(_config_i18n__WEBPACK_IMPORTED_MODULE_0__[\"default\"].t('util_js.path'));\n return false;\n }\n var AllImgExt = \".jpg|.jpeg|.gif|.bmp|.png|\";\n var extName = file.substring(file.lastIndexOf(\".\")).toLowerCase(); //(把路径中的所有字母全部转换为小写) \n if (AllImgExt.indexOf(extName + \"|\") == -1) {\n ErrMsg = _config_i18n__WEBPACK_IMPORTED_MODULE_0__[\"default\"].t('util_js.notallowed') + AllImgExt + _config_i18n__WEBPACK_IMPORTED_MODULE_0__[\"default\"].t('util_js.type') + extName;\n alert(ErrMsg);\n return false;\n }\n}\n\n//文件类型\nfunction fileType() {\n return {\n 'application/msword': 'word',\n 'application/pdf': 'pdf',\n 'application/vnd.ms-powerpoint': 'ppt',\n 'application/vnd.ms-excel': 'excel',\n 'aplication/zip': 'zpi'\n };\n}\n\n/**\r\n* 获取当前时间\r\n*/\nfunction getNowTime() {\n // 创建一个Date对象\n var date = new Date();\n // 获取年份、月份、日期、小时、分钟和秒数\n var year = date.getFullYear();\n var month = date.getMonth() + 1; // 注意月份从0开始计数\n var day = date.getDate();\n var hour = date.getHours();\n var minute = date.getMinutes();\n var second = date.getSeconds();\n // 如果月份、日期、小时、分钟或秒数小于10,需要在前面补0\n if (month < 10) {\n month = \"0\" + month;\n }\n if (day < 10) {\n day = \"0\" + day;\n }\n if (hour < 10) {\n hour = \"0\" + hour;\n }\n if (minute < 10) {\n minute = \"0\" + minute;\n }\n if (second < 10) {\n second = \"0\" + second;\n }\n // 拼接成字符串\n var currentTime = year + \"-\" + month + \"-\" + day + \" \" + hour + \":\" + minute + \":\" + second;\n // 输出结果\n return currentTime;\n}\n\n/**\r\n * 格式化时间\r\n */\nfunction JCMFormatDate(dateStr) {\n let date = new Date(dateStr);\n let year = date.getFullYear();\n let month = date.getMonth() + 1;\n let day = date.getDate();\n let hour = date.getHours();\n let minute = date.getMinutes();\n let second = date.getSeconds();\n return `${year}/${month}/${day} ${hour}:${minute}:${second}`;\n}\n\n//将时间戳转换为正常时间\nfunction JCMFormatTimestamp(timestamp) {\n const date = new Date(timestamp * 1000); // 转换为Date对象\n const options = {\n // 背景时间的格式选项\n year: 'numeric',\n // 年份(4位数字)\n month: 'long',\n // 月份的全称\n day: 'numeric',\n // 天(数字)\n hour: 'numeric',\n // 小时(数字)\n minute: 'numeric',\n // 分钟(数字)\n second: 'numeric' // 秒钟(数字)\n };\n\n return date.toLocaleDateString('zh-CN', options);\n}\n/**\r\n * 复制到剪切板\r\n */\n\nfunction copyToClipboard(content) {\n const clipboardData = window.clipboardData;\n if (clipboardData) {\n clipboardData.clearData();\n clipboardData.setData('Text', content);\n return true;\n } else if (document.execCommand) {\n const el = document.createElement('textarea');\n el.value = content;\n el.setAttribute('readonly', '');\n el.style.position = 'absolute';\n el.style.left = '-9999px';\n document.body.appendChild(el);\n el.select();\n document.execCommand('copy');\n document.body.removeChild(el);\n return true;\n }\n return false;\n}\n\n/**\r\n * 生成UUID\r\n * @returns \r\n */\nfunction generateUUID() {\n var d = new Date().getTime();\n if (window.performance && typeof window.performance.now === \"function\") {\n d += performance.now(); //use high-precision timer if available\n }\n\n var 
uuid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {\n var r = (d + Math.random() * 16) % 16 | 0;\n d = Math.floor(d / 16);\n return (c === 'x' ? r : r & 0x3 | 0x8).toString(16);\n });\n return uuid;\n}\n\n//# sourceURL=webpack://JCM-AI/./src/util/util.js?"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=style&index=0&id=f89df198&lang=css&": -/*!*********************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=style&index=0&id=f89df198&lang=css& ***! - \*********************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
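/*
 * Cleaned-up sketch of the debounce/throttle helpers compiled into
 * ./src/util/util.js above. The bundled versions still contain stray
 * `debugger` statements, log an undefined variable `temp`, and `throttle`
 * never passes its `delay` to setTimeout; this is what they appear
 * intended to do.
 */

// Run `fn` only after `delay` ms have passed without another call.
export function debounce(fn, delay = 1000) {
  let timer = null;
  return function (...args) {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Run `fn` at most once every `delay` ms.
export function throttle(fn, delay = 200) {
  let timer = null;
  return function (...args) {
    if (timer) return;
    timer = setTimeout(() => {
      fn.apply(this, args);
      timer = null;
    }, delay);
  };
}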
../../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \"\\n.transition-box {\\n text-align: center;\\n margin-top: 5%;\\n color: #F2F6FC;\\n font-size: 30px;\\n}\\nh1 {\\n color: aliceblue;\\n font-size: 80px;\\n}\\na {\\n text-decoration: none;\\n color: #67C23A;\\n}\\n\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/setting.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use%5B2%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=style&index=0&id=3c4a7241&lang=css&": -/*!***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/user/userInfo.vue?vue&type=style&index=0&id=3c4a7241&lang=css& ***! - \***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
../../../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \"\\n.transition-box {\\n text-align: center;\\n margin-top: 5%;\\n color: #F2F6FC;\\n font-size: 30px;\\n}\\nh1 {\\n color: aliceblue;\\n font-size: 80px;\\n}\\na {\\n text-decoration: none;\\n color: #67C23A;\\n}\\n\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/user/userInfo.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use%5B2%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-14.use[1]!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-14.use[2]!./src/assets/font/font.css": -/*!********************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-14.use[1]!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-14.use[2]!./src/assets/font/font.css ***! - \********************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! 
../../../node_modules/css-loader/dist/runtime/getUrl.js */ \"./node_modules/css-loader/dist/runtime/getUrl.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2__);\n// Imports\n\n\n\nvar ___CSS_LOADER_URL_IMPORT_0___ = new URL(/* asset import */ __webpack_require__(/*! 阿里妈妈东方大楷_Regular.ttf */ \"./src/assets/font/阿里妈妈东方大楷_Regular.ttf\"), __webpack_require__.b);\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\nvar ___CSS_LOADER_URL_REPLACEMENT_0___ = _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2___default()(___CSS_LOADER_URL_IMPORT_0___);\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \"@font-face {\\r\\n font-family: 'SSFY';\\r\\n src: url(\" + ___CSS_LOADER_URL_REPLACEMENT_0___ + \");\\r\\n font-weight: normal;\\r\\n font-style: normal;\\r\\n} \", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/assets/font/font.css?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-14.use%5B1%5D!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-14.use%5B2%5D"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=style&index=0&id=7ba5bd90&lang=scss&": -/*!***********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/App.vue?vue&type=style&index=0&id=7ba5bd90&lang=scss& ***! - \***********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n/* harmony import */ var _node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_assets_font_iconfont_css__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! -!../node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!../node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!../node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./assets/font/iconfont.css */ \"./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./src/assets/font/iconfont.css\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(/*! ../node_modules/css-loader/dist/runtime/getUrl.js */ \"./node_modules/css-loader/dist/runtime/getUrl.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_3___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_3__);\n// Imports\n\n\n\n\nvar ___CSS_LOADER_URL_IMPORT_0___ = new URL(/* asset import */ __webpack_require__(/*! 
@/assets/img/bj.png */ \"./src/assets/img/bj.png\"), __webpack_require__.b);\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n___CSS_LOADER_EXPORT___.i(_node_modules_css_loader_dist_cjs_js_clonedRuleSet_22_use_1_node_modules_vue_vue_loader_v15_lib_loaders_stylePostLoader_js_node_modules_postcss_loader_dist_cjs_js_clonedRuleSet_22_use_2_assets_font_iconfont_css__WEBPACK_IMPORTED_MODULE_2__[\"default\"]);\nvar ___CSS_LOADER_URL_REPLACEMENT_0___ = _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_3___default()(___CSS_LOADER_URL_IMPORT_0___);\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".iconfont {\\n font-family: \\\"iconfont\\\" !important;\\n font-style: normal;\\n font-size: 25px;\\n vertical-align: middle;\\n color: rgb(117, 120, 137);\\n transition: 0.3s;\\n -webkit-font-smoothing: antialiased;\\n -moz-osx-font-smoothing: grayscale;\\n}\\n* {\\n padding: 0;\\n margin: 0;\\n font-family: \\\"SSFY\\\";\\n}\\n#app {\\n width: 100vw;\\n height: 100vh;\\n background: url(\" + ___CSS_LOADER_URL_REPLACEMENT_0___ + \") no-repeat;\\n background-size: cover;\\n position: absolute;\\n}\\n::-webkit-scrollbar {\\n display: none; /* Chrome Safari */\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/App.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=style&index=0&id=534ad946&lang=scss&scoped=true&": -/*!************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Emoji.vue?vue&type=style&index=0&id=534ad946&lang=scss&scoped=true& ***! 
- \************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \"@charset \\\"UTF-8\\\";\\n.emoji-content .emoji[data-v-534ad946] {\\n width: 400px;\\n height: 200px;\\n background-color: rgb(39, 42, 55);\\n position: absolute;\\n top: -220px;\\n left: -10px;\\n border-radius: 10px;\\n transition: 0.3s;\\n z-index: 3;\\n}\\n.emoji-content .emoji[data-v-534ad946]::after {\\n content: \\\"\\\";\\n display: block;\\n width: 0;\\n height: 0;\\n border-top: 10px solid rgb(39, 42, 55);\\n border-right: 10px solid transparent;\\n border-left: 10px solid transparent;\\n position: absolute;\\n bottom: -8px;\\n left: 15px;\\n z-index: 100;\\n}\\n.emoji-content .emoji .emoji-wrapper[data-v-534ad946] {\\n width: 100%;\\n height: 100%;\\n overflow-y: scroll;\\n padding: 10px;\\n box-sizing: border-box;\\n position: relative;\\n}\\n.emoji-content .emoji .emoji-wrapper[data-v-534ad946]::-webkit-scrollbar {\\n /*滚动条整体样式*/\\n width: 4px; /*高宽分别对应横竖滚动条的尺寸*/\\n height: 1px;\\n}\\n.emoji-content .emoji .emoji-wrapper[data-v-534ad946]::-webkit-scrollbar-thumb {\\n /*滚动条里面小方块*/\\n border-radius: 10px;\\n box-shadow: inset 0 0 5px rgba(97, 184, 179, 0.1);\\n background: rgb(95, 101, 122);\\n}\\n.emoji-content .emoji .emoji-wrapper[data-v-534ad946]::-webkit-scrollbar-track {\\n /*滚动条里面轨道*/\\n box-shadow: inset 0 0 5px rgba(87, 175, 187, 0.1);\\n border-radius: 10px;\\n background: rgb(39, 42, 55);\\n}\\n.emoji-content .emoji .emoji-wrapper .emoji-list[data-v-534ad946] {\\n display: flex;\\n justify-content: flex-start;\\n flex-wrap: wrap;\\n margin-left: 10px;\\n}\\n.emoji-content .emoji .emoji-wrapper .emoji-list .emoji-item[data-v-534ad946] {\\n list-style: none;\\n width: 50px;\\n height: 50px;\\n border-radius: 10px;\\n margin: 5px;\\n position: relative;\\n cursor: pointer;\\n}\\n.emoji-content .emoji 
.emoji-wrapper .emoji-list .emoji-item[data-v-534ad946]:hover {\\n background-color: rgb(50, 54, 68);\\n}\\n.emoji-content .emoji .emoji-wrapper .emoji-list .emoji-item img[data-v-534ad946] {\\n width: 35px;\\n height: 35px;\\n position: absolute;\\n left: 50%;\\n top: 50%;\\n transform: translate(-50%, -50%);\\n}\\n.emoji-content .mask[data-v-534ad946] {\\n width: 100%;\\n height: 100%;\\n position: fixed;\\n background: transparent;\\n left: 0;\\n top: 0;\\n z-index: 1;\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Emoji.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=style&index=0&id=ab80f8a8&lang=scss&scoped=true&": -/*!***********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/File.vue?vue&type=style&index=0&id=ab80f8a8&lang=scss&scoped=true& ***! - \***********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".person-card[data-v-ab80f8a8] {\\n width: 100%;\\n height: 80px;\\n border-radius: 10px;\\n background-color: rgb(50, 54, 68);\\n position: relative;\\n margin: 25px 0;\\n cursor: pointer;\\n}\\n.person-card .info[data-v-ab80f8a8] {\\n position: absolute;\\n left: 50%;\\n top: 50%;\\n width: 90%;\\n transform: translate(-50%, -50%);\\n overflow: hidden;\\n display: flex;\\n}\\n.person-card .info .info-detail[data-v-ab80f8a8] {\\n margin-top: 5px;\\n margin-left: 20px;\\n display: flex;\\n flex-direction: column;\\n overflow: hidden;\\n text-overflow: ellipsis;\\n}\\n.person-card .info .info-detail .name[data-v-ab80f8a8] {\\n color: #fff;\\n overflow: hidden;\\n white-space: nowrap;\\n text-overflow: ellipsis;\\n margin-bottom: 5px;\\n}\\n.person-card .info .info-detail .detail[data-v-ab80f8a8] {\\n color: #5c6675;\\n overflow: hidden;\\n white-space: nowrap;\\n text-overflow: ellipsis;\\n font-size: 12px;\\n}\\n.person-card[data-v-ab80f8a8]:hover {\\n background-color: #1d90f5;\\n transition: 0.3s;\\n box-shadow: 0px 0px 10px 0px rgb(0, 136, 255);\\n}\\n.person-card:hover .info .info-detail .detail[data-v-ab80f8a8] {\\n color: #fff;\\n}\\n.activeCard[data-v-ab80f8a8] {\\n background-color: #1d90f5;\\n transition: 0.3s;\\n box-shadow: 3px 2px 10px 0px rgb(0, 136, 255);\\n}\\n.activeCard .info .info-detail .detail[data-v-ab80f8a8] {\\n color: #fff;\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/File.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=style&index=0&id=48849e48&lang=scss&scoped=true&": -/*!***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** 
./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/FileCard.vue?vue&type=style&index=0&id=48849e48&lang=scss&scoped=true& ***! - \***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".file-card[data-v-48849e48] {\\n width: 250px;\\n height: 100px;\\n background-color: rgb(45, 48, 63);\\n border-radius: 20px;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n padding: 10px;\\n box-sizing: border-box;\\n cursor: pointer;\\n}\\n.file-card[data-v-48849e48]:hover {\\n background-color: rgb(33, 36, 54);\\n}\\n.file-card img[data-v-48849e48] {\\n width: 60px;\\n height: 60px;\\n}\\n.file-card .word[data-v-48849e48] {\\n width: 60%;\\n margin-left: 10px;\\n overflow: hidden;\\n}\\n.file-card .word span[data-v-48849e48] {\\n width: 90%;\\n display: inline-block;\\n color: #fff;\\n}\\n.file-card .word span[data-v-48849e48]:first-child {\\n font-size: 14px;\\n overflow: hidden;\\n white-space: nowrap;\\n text-overflow: ellipsis;\\n}\\n.file-card .word span[data-v-48849e48]:last-child {\\n font-size: 12px;\\n color: rgb(180, 180, 180);\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# 
sourceURL=webpack://JCM-AI/./src/components/FileCard.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=style&index=0&id=0b1d9e43&lang=scss&scoped=true&": -/*!**************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadImg.vue?vue&type=style&index=0&id=0b1d9e43&lang=scss&scoped=true& ***! - \**************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \"@charset \\\"UTF-8\\\";\\nimg[data-v-0b1d9e43] {\\n --s: 75px; /* image size */\\n --b: 3px; /* border thickness */\\n --c: #255b98; /* border color */\\n --cb: #a34c4c; /* background color */\\n --_g: content-box no-repeat center / calc(100% / var(--f)) 100%; /* content-box: 内容区域开始显示背景图 放大后背景图大小不变 */\\n --_o: calc(\\n (1 / var(--f) - 1) * var(--s) / 2 - var(--b)\\n ); /* offset 相对于原来的长度,所以放大的长度-原来的长度除以2在除以倍数 */\\n --f: 1; /* initial scale */\\n --mh: calc(1px - var(--_o)) / calc(100% / var(--f) - 2 * var(--b) - 2px);\\n width: var(--s);\\n aspect-ratio: 1;\\n padding-top: calc(var(--s) / 5); /* 防止上面挡住人物,保留上部分空间 */\\n cursor: pointer;\\n border-radius: 0 0 999px 999px;\\n outline: var(--b) solid var(--c);\\n outline-offset: var(--_o);\\n background: radial-gradient(circle closest-side, var(--cb) calc(99% - var(--b)), var(--c) calc(100% - var(--b)), var(--c) 99%, transparent 100%) var(--_g);\\n -webkit-mask: linear-gradient(#000 0 0) no-repeat center var(--mh) 50%, radial-gradient(circle closest-side, #000 99%, rgba(0, 0, 0, 0)) var(--_g);\\n transform: scale(var(--f));\\n transition: 0.45s;\\n}\\nimg[data-v-0b1d9e43]:hover {\\n --f: 1.4; /* hover scale */\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadImg.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=style&index=0&id=24585c4b&lang=scss&scoped=true&": -/*!*******************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** 
./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/HeadPortrait.vue?vue&type=style&index=0&id=24585c4b&lang=scss&scoped=true& ***! - \*******************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".head-portrait[data-v-24585c4b] {\\n width: 50px;\\n height: 50px;\\n border-radius: 50%;\\n border: 2px solid rgb(255, 255, 255);\\n position: relative;\\n}\\n.head-portrait[data-v-24585c4b]::before {\\n content: \\\"\\\";\\n width: 15px;\\n height: 15px;\\n z-index: 1;\\n display: block;\\n border-radius: 50%;\\n background-color: rgb(144, 225, 80);\\n position: absolute;\\n right: 0;\\n}\\n.head-portrait img[data-v-24585c4b] {\\n width: 45px;\\n height: 45px;\\n border-radius: 50%;\\n box-sizing: border-box;\\n position: absolute;\\n left: 50%;\\n top: 50%;\\n transform: translate(-50%, -50%);\\n vertical-align: middle;\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/HeadPortrait.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ 
"./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=style&index=0&id=65af85a3&lang=scss&scoped=true&": -/*!**********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Nav.vue?vue&type=style&index=0&id=65af85a3&lang=scss&scoped=true& ***! - \**********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".nav[data-v-65af85a3] {\\n width: 100%;\\n height: 90vh;\\n position: relative;\\n border-radius: 20px 0 0 20px;\\n}\\n.nav .nav-menu-wrapper[data-v-65af85a3] {\\n position: absolute;\\n top: 40%;\\n transform: translate(0, -50%);\\n}\\n.nav .nav-menu-wrapper .menu-list[data-v-65af85a3] {\\n margin-left: 10px;\\n}\\n.nav .nav-menu-wrapper .menu-list li[data-v-65af85a3] {\\n margin: 40px 0 0 30px;\\n list-style: none;\\n cursor: pointer;\\n position: relative;\\n}\\n.nav .nav-menu-wrapper .menu-list li .block[data-v-65af85a3] {\\n background-color: rgb(29, 144, 245);\\n position: absolute;\\n left: -40px;\\n width: 6px;\\n height: 25px;\\n transition: 0.5s;\\n border-top-right-radius: 4px;\\n border-bottom-right-radius: 4px;\\n opacity: 0;\\n}\\n.nav .nav-menu-wrapper .menu-list li:hover span[data-v-65af85a3] {\\n color: rgb(29, 144, 245);\\n}\\n.nav .nav-menu-wrapper .menu-list li:hover .block[data-v-65af85a3] {\\n opacity: 1;\\n}\\n.nav .own-pic[data-v-65af85a3] {\\n position: absolute;\\n bottom: 10%;\\n margin-left: 25px;\\n}\\n.activeNav span[data-v-65af85a3] {\\n color: rgb(29, 144, 245);\\n}\\n.activeNav .block[data-v-65af85a3] {\\n opacity: 1 !important;\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/Nav.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=style&index=0&id=d74d3096&lang=scss&scoped=true&": -/*!*****************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** 
./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/PersonCard.vue?vue&type=style&index=0&id=d74d3096&lang=scss&scoped=true& ***! - \*****************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".person-card[data-v-d74d3096] {\\n width: 100%;\\n height: 80px;\\n border-radius: 10px;\\n background-color: rgb(50, 54, 68);\\n position: relative;\\n margin: 25px 0;\\n cursor: pointer;\\n}\\n.person-card .info[data-v-d74d3096] {\\n position: absolute;\\n left: 50%;\\n top: 50%;\\n width: 90%;\\n transform: translate(-50%, -50%);\\n overflow: hidden;\\n display: flex;\\n}\\n.person-card .info .info-detail[data-v-d74d3096] {\\n margin-top: 5px;\\n margin-left: 20px;\\n display: flex;\\n flex-direction: column;\\n overflow: hidden;\\n text-overflow: ellipsis;\\n}\\n.person-card .info .info-detail .name[data-v-d74d3096] {\\n color: #fff;\\n overflow: hidden;\\n white-space: nowrap;\\n text-overflow: ellipsis;\\n margin-bottom: 5px;\\n}\\n.person-card .info .info-detail .detail[data-v-d74d3096] {\\n color: #5c6675;\\n overflow: hidden;\\n white-space: nowrap;\\n text-overflow: ellipsis;\\n font-size: 12px;\\n}\\n.person-card[data-v-d74d3096]:hover {\\n background-color: #1d90f5;\\n transition: 0.3s;\\n box-shadow: 0px 0px 10px 0px rgb(0, 136, 255);\\n}\\n.person-card:hover .info .info-detail .detail[data-v-d74d3096] {\\n color: #fff;\\n}\\n.activeCard[data-v-d74d3096] {\\n background-color: #1d90f5;\\n transition: 0.3s;\\n box-shadow: 3px 2px 10px 0px rgb(0, 136, 255);\\n}\\n.activeCard .info .info-detail 
.detail[data-v-d74d3096] {\\n color: #fff;\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/PersonCard.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=style&index=0&id=9524bc54&lang=scss&scoped=true&": -/*!***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/RoleCard.vue?vue&type=style&index=0&id=9524bc54&lang=scss&scoped=true& ***! - \***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".role-card[data-v-9524bc54] {\\n width: 100%;\\n height: 80px;\\n border-radius: 10px;\\n background-color: rgb(50, 54, 68);\\n position: relative;\\n margin: 25px 0;\\n cursor: pointer;\\n}\\n.role-card .info[data-v-9524bc54] {\\n position: absolute;\\n left: 50%;\\n top: 50%;\\n width: 90%;\\n transform: translate(-50%, -50%);\\n overflow: hidden;\\n display: flex;\\n}\\n.role-card .info .info-detail[data-v-9524bc54] {\\n margin-top: 5px;\\n margin-left: 20px;\\n display: flex;\\n flex-direction: column;\\n overflow: hidden;\\n text-overflow: ellipsis;\\n}\\n.role-card .info .info-detail .name[data-v-9524bc54] {\\n color: #fff;\\n overflow: hidden;\\n white-space: nowrap;\\n text-overflow: ellipsis;\\n margin-bottom: 5px;\\n}\\n.role-card .info .info-detail .detail[data-v-9524bc54] {\\n color: #5c6675;\\n overflow: hidden;\\n white-space: nowrap;\\n text-overflow: ellipsis;\\n font-size: 12px;\\n}\\n.role-card[data-v-9524bc54]:hover {\\n background-color: #1d90f5;\\n transition: 0.3s;\\n box-shadow: 0px 0px 10px 0px rgb(0, 136, 255);\\n}\\n.role-card:hover .info .info-detail .detail[data-v-9524bc54] {\\n color: #fff;\\n}\\n.activeCard[data-v-9524bc54] {\\n background-color: #1d90f5;\\n transition: 0.3s;\\n box-shadow: 3px 2px 10px 0px rgb(0, 136, 255);\\n}\\n.activeCard .info .info-detail .detail[data-v-9524bc54] {\\n color: #fff;\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/components/RoleCard.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=style&index=0&id=d6f30cd4&lang=scss&scoped=true&": -/*!**************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** 
./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/components/Session.vue?vue&type=style&index=0&id=d6f30cd4&lang=scss&scoped=true& ***! - \**************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".person-card[data-v-d6f30cd4] {\\n width: 100%;\\n height: auto;\\n border-radius: 10px;\\n background-color: rgb(50, 54, 68);\\n position: relative;\\n margin: 25px 0;\\n cursor: pointer;\\n}\\n.person-card .info[data-v-d6f30cd4] {\\n width: auto;\\n}\\n.person-card .info .info-detail[data-v-d6f30cd4] {\\n margin-top: 5px;\\n margin-left: 20px;\\n}\\n.person-card .info .info-detail .detail[data-v-d6f30cd4] {\\n color: #fff;\\n font-size: 15px;\\n}\\n.person-card[data-v-d6f30cd4]:hover {\\n background-color: #1d90f5;\\n transition: 0.3s;\\n box-shadow: 0px 0px 10px 0px rgb(0, 136, 255);\\n}\\n.person-card:hover .info .info-detail .detail[data-v-d6f30cd4] {\\n color: #fff;\\n}\\n.activeCard[data-v-d6f30cd4] {\\n background-color: #1d90f5;\\n transition: 0.3s;\\n box-shadow: 3px 2px 10px 0px rgb(0, 136, 255);\\n}\\n.activeCard .info .info-detail .detail[data-v-d6f30cd4] {\\n color: #fff;\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# 
sourceURL=webpack://JCM-AI/./src/components/Session.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=style&index=0&id=73eb9c00&lang=scss&scoped=true&": -/*!*****************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/home.vue?vue&type=style&index=0&id=73eb9c00&lang=scss&scoped=true& ***! - \*****************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \".home[data-v-73eb9c00] {\\n width: 100vw;\\n height: auto;\\n background-color: rgb(39, 42, 55);\\n border-radius: 15px;\\n position: absolute;\\n left: 50%;\\n top: 50%;\\n transform: translate(-50%, -50%);\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/home.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=style&index=0&id=13fede38&lang=scss&scoped=true&": -/*!**************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/chatwindow.vue?vue&type=style&index=0&id=13fede38&lang=scss&scoped=true& ***! - \**************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! 
../../../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../../../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \"@charset \\\"UTF-8\\\";\\n.iconfont[data-v-13fede38]:hover {\\n color: rgb(29, 144, 245);\\n}\\n.iconfont:hover .block[data-v-13fede38] {\\n opacity: 1;\\n}\\n[data-v-13fede38] .el-textarea__inner {\\n background-color: rgb(66, 70, 86);\\n border-radius: 15px;\\n border: 2px solid rgb(34, 135, 225);\\n /* padding: 10px; */\\n box-sizing: border-box;\\n transition: 0.2s;\\n font-size: 20px;\\n color: #fff;\\n font-weight: 100;\\n /* margin: 0 20px; */\\n width: 98%;\\n height: 70px !important;\\n}\\npre[data-v-13fede38] {\\n background-color: #211f1f !important;\\n border-radius: 20px !important;\\n box-shadow: 0px 0px 9px 0px #000000 !important;\\n color: white !important;\\n}\\n.hljs[data-v-13fede38] {\\n background-color: #211f1f !important;\\n border-radius: 20px !important;\\n box-shadow: 0px 0px 9px 0px #000000 !important;\\n color: white !important;\\n}\\ntextarea[data-v-13fede38]::-webkit-scrollbar {\\n width: 3px;\\n /* 设置滚动条宽度 */\\n}\\ntextarea[data-v-13fede38]::-webkit-scrollbar-thumb {\\n background-color: rgb(66, 70, 86);\\n /* 设置滚动条滑块的背景色 */\\n border-radius: 50%;\\n /* 设置滑块的圆角 */\\n}\\n.spinner[data-v-13fede38] {\\n width: 50px;\\n height: 50px;\\n animation: spin-13fede38 1s infinite linear;\\n}\\n@keyframes spin-13fede38 {\\n0% {\\n transform: rotate(0deg);\\n}\\n100% {\\n transform: rotate(360deg);\\n}\\n}\\n.chat-window[data-v-13fede38] {\\n width: 100%;\\n height: 100%;\\n margin-left: 20px;\\n position: relative;\\n}\\n.chat-window .top[data-v-13fede38]::after {\\n content: \\\"\\\";\\n display: block;\\n clear: both;\\n}\\n.chat-window .top .head-pic[data-v-13fede38] {\\n float: left;\\n}\\n.chat-window .top .info-detail[data-v-13fede38] {\\n float: left;\\n margin: 5px 20px 0;\\n}\\n.chat-window .top .info-detail .name[data-v-13fede38] {\\n font-size: 20px;\\n font-weight: 600;\\n color: #fff;\\n}\\n.chat-window .top .info-detail .detail[data-v-13fede38] {\\n color: #9e9e9e;\\n font-size: 12px;\\n margin-top: 2px;\\n}\\n.chat-window .top .other-fun[data-v-13fede38] {\\n float: right;\\n margin-top: 20px;\\n}\\n.chat-window .top .other-fun span[data-v-13fede38] {\\n margin-left: 30px;\\n cursor: pointer;\\n}\\n.chat-window .top .other-fun input[data-v-13fede38] {\\n display: none;\\n}\\n.chat-window .textarea[data-v-13fede38]:focus {\\n outline: none;\\n}\\n.chat-window .botoom[data-v-13fede38] {\\n width: 100%;\\n height: 85vh;\\n background-size: 100% 100%;\\n border-radius: 
20px;\\n padding: 20px;\\n box-sizing: border-box;\\n position: relative;\\n}\\n.chat-window .botoom .chat-content[data-v-13fede38] {\\n width: 100%;\\n height: 85%;\\n overflow-y: scroll;\\n padding: 20px;\\n box-sizing: border-box;\\n}\\n.chat-window .botoom .chat-content[data-v-13fede38]::-webkit-scrollbar {\\n width: 3px;\\n /* 设置滚动条宽度 */\\n}\\n.chat-window .botoom .chat-content[data-v-13fede38]::-webkit-scrollbar-thumb {\\n background-color: rgb(66, 70, 86);\\n /* 设置滚动条滑块的背景色 */\\n border-radius: 50%;\\n /* 设置滑块的圆角 */\\n}\\n.chat-window .botoom .chat-content .chat-friend[data-v-13fede38] {\\n width: 100%;\\n float: left;\\n margin-bottom: 20px;\\n position: relative;\\n display: flex;\\n flex-direction: column;\\n justify-content: flex-end;\\n align-items: flex-start;\\n}\\n.chat-window .botoom .chat-content .chat-friend .chat-text[data-v-13fede38] {\\n float: left;\\n max-width: 90%;\\n padding: 15px;\\n max-width: 650px;\\n border-radius: 20px 20px 20px 5px;\\n background-color: #fff;\\n}\\n.chat-window .botoom .chat-content .chat-friend .chat-img img[data-v-13fede38] {\\n max-width: 300px;\\n max-height: 200px;\\n border-radius: 10px;\\n}\\n.chat-window .botoom .chat-content .chat-friend .info-time[data-v-13fede38] {\\n margin: 10px 0;\\n color: #fff;\\n font-size: 14px;\\n display: flex;\\n justify-content: flex-start;\\n}\\n.chat-window .botoom .chat-content .chat-friend .info-time img[data-v-13fede38] {\\n width: 30px;\\n height: 30px;\\n border-radius: 50%;\\n vertical-align: middle;\\n margin-right: 10px;\\n}\\n.chat-window .botoom .chat-content .chat-friend .info-time span[data-v-13fede38] {\\n line-height: 30px;\\n}\\n.chat-window .botoom .chat-content .chat-friend .info-time span[data-v-13fede38]:last-child {\\n color: rgb(101, 104, 115);\\n margin-left: 10px;\\n vertical-align: middle;\\n}\\n.chat-window .botoom .chat-content .chat-me[data-v-13fede38] {\\n width: 100%;\\n float: right;\\n margin-bottom: 20px;\\n position: relative;\\n display: flex;\\n flex-direction: column;\\n justify-content: flex-end;\\n align-items: flex-end;\\n}\\n.chat-window .botoom .chat-content .chat-me .chat-text[data-v-13fede38] {\\n float: right;\\n max-width: 90%;\\n padding: 15px;\\n border-radius: 20px 20px 5px 20px;\\n background-color: #95ec69;\\n color: #000;\\n word-break: break-all;\\n}\\n.chat-window .botoom .chat-content .chat-me .chat-img img[data-v-13fede38] {\\n max-width: 300px;\\n max-height: 200px;\\n border-radius: 10px;\\n}\\n.chat-window .botoom .chat-content .chat-me .info-time[data-v-13fede38] {\\n margin: 10px 0;\\n color: #fff;\\n font-size: 14px;\\n display: flex;\\n justify-content: flex-end;\\n}\\n.chat-window .botoom .chat-content .chat-me .info-time img[data-v-13fede38] {\\n width: 30px;\\n height: 30px;\\n border-radius: 50%;\\n vertical-align: middle;\\n margin-left: 10px;\\n}\\n.chat-window .botoom .chat-content .chat-me .info-time span[data-v-13fede38] {\\n line-height: 30px;\\n}\\n.chat-window .botoom .chat-content .chat-me .info-time span[data-v-13fede38]:first-child {\\n color: rgb(101, 104, 115);\\n margin-right: 10px;\\n vertical-align: middle;\\n}\\n.chat-window .botoom .chatInputs[data-v-13fede38] {\\n width: 90%;\\n position: absolute;\\n bottom: 0;\\n margin: 3%;\\n display: flex;\\n background-color: #323644;\\n}\\n.chat-window .botoom .chatInputs .boxinput[data-v-13fede38] {\\n width: 50px;\\n height: 50px;\\n background-color: rgb(50, 54, 68);\\n border-radius: 15px;\\n border: 1px solid rgb(80, 85, 103);\\n box-shadow: 0px 0px 5px 0px rgb(0, 136, 
255);\\n position: relative;\\n cursor: pointer;\\n}\\n.chat-window .botoom .chatInputs .boxinput img[data-v-13fede38] {\\n width: 30px;\\n height: 30px;\\n position: absolute;\\n left: 50%;\\n top: 50%;\\n transform: translate(-50%, -50%);\\n}\\n.chat-window .botoom .chatInputs .emoji[data-v-13fede38] {\\n transition: 0.3s;\\n width: 50px;\\n min-width: 50px;\\n}\\n.chat-window .botoom .chatInputs .luyin[data-v-13fede38] {\\n color: #fff;\\n margin-left: 1.5%;\\n font-size: 30px;\\n text-align: center;\\n transition: 0.3s;\\n width: 50px;\\n min-width: 50px;\\n}\\n.chat-window .botoom .chatInputs .inputs[data-v-13fede38] {\\n width: 95%;\\n height: 50px;\\n background-color: rgb(66, 70, 86);\\n border-radius: 15px;\\n border: 2px solid rgb(34, 135, 225);\\n padding: 10px;\\n box-sizing: border-box;\\n transition: 0.2s;\\n font-size: 20px;\\n color: #fff;\\n font-weight: 100;\\n margin: 0 20px;\\n}\\n.chat-window .botoom .chatInputs .inputs[data-v-13fede38]:focus {\\n outline: none;\\n}\\n.chat-window .botoom .chatInputs .send[data-v-13fede38] {\\n background-color: rgb(29, 144, 245);\\n border: 0;\\n transition: 0.3s;\\n box-shadow: 0px 0px 5px 0px rgb(0, 136, 255);\\n}\\n.chat-window .botoom .chatInputs .send[data-v-13fede38]:hover {\\n box-shadow: 0px 0px 10px 0px rgb(0, 136, 255);\\n}\\n.line[data-v-13fede38] {\\n position: relative;\\n width: 94%;\\n margin-left: 2%;\\n height: 2px;\\n background: linear-gradient(to right, red, yellow, green);\\n animation: shrink-and-expand-13fede38 2s ease-in-out infinite;\\n}\\n.line[data-v-13fede38]::before,\\n.line[data-v-13fede38]::after {\\n content: \\\"\\\";\\n position: absolute;\\n top: 0;\\n width: 50%;\\n height: 100%;\\n background: inherit;\\n}\\n.line[data-v-13fede38]::before {\\n border-top-left-radius: 2px;\\n border-bottom-left-radius: 2px;\\n left: 0;\\n transform-origin: left;\\n animation: shrink-left-13fede38 2s ease-in-out infinite;\\n}\\n.line[data-v-13fede38]::after {\\n border-top-left-radius: 2px;\\n border-bottom-left-radius: 2px;\\n right: 0;\\n transform-origin: right;\\n animation: shrink-right-13fede38 2s ease-in-out infinite;\\n}\\n@keyframes shrink-and-expand-13fede38 {\\n0%, 100% {\\n transform: scaleX(1);\\n}\\n50% {\\n transform: scaleX(0);\\n}\\n}\\n@keyframes shrink-left-13fede38 {\\n0%, 50% {\\n transform: scaleX(1);\\n}\\n50.1%, 100% {\\n transform: scaleX(0);\\n}\\n}\\n@keyframes shrink-right-13fede38 {\\n0%, 50% {\\n transform: scaleX(1);\\n}\\n50.1%, 100% {\\n transform: scaleX(0);\\n}\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/chatwindow.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=style&index=0&id=c6884a34&lang=scss&scoped=true&": 
-/*!*********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use[3]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/chatHome/index.vue?vue&type=style&index=0&id=c6884a34&lang=scss&scoped=true& ***! - \*********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! 
../../../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n// Imports\n\n\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \"@charset \\\"UTF-8\\\";\\n.top-left[data-v-c6884a34],\\n.top-right[data-v-c6884a34] {\\n position: absolute;\\n top: 5px;\\n cursor: pointer;\\n}\\n.top-left[data-v-c6884a34] {\\n left: 5px;\\n}\\n.top-right[data-v-c6884a34] {\\n right: 5px;\\n}\\ninput[type=number][data-v-c6884a34]::-webkit-inner-spin-button,\\ninput[type=number][data-v-c6884a34]::-webkit-outer-spin-button {\\n -webkit-appearance: none;\\n margin: 0;\\n}\\n.boxinput[data-v-c6884a34] {\\n height: 30px;\\n line-height: 50px;\\n color: #fff;\\n margin-top: 10px;\\n margin-left: 20px;\\n margin-right: 20px;\\n width: 90%;\\n text-align: center;\\n height: 50px;\\n background-color: rgb(66, 70, 86);\\n border-radius: 15px;\\n border: 1px solid rgb(80, 85, 103);\\n position: relative;\\n cursor: pointer;\\n}\\n.icon[data-v-c6884a34] {\\n margin-right: 10px;\\n vertical-align: middle;\\n}\\n.send[data-v-c6884a34] {\\n display: flex;\\n align-items: center;\\n justify-content: center;\\n text-align: center;\\n background-color: rgb(66, 70, 86);\\n border: 0;\\n transition: 0.3s;\\n box-shadow: 0px 0px 5px 0px rgb(84, 89, 110);\\n}\\n.send[data-v-c6884a34]:hover {\\n box-shadow: 0px 0px 10px 0px rgb(91, 219, 239);\\n}\\n.weitiao[data-v-c6884a34] {\\n margin-top: 10px;\\n width: 100%;\\n margin-left: 0px;\\n margin-right: 0px;\\n height: 50px;\\n background-color: rgb(66, 70, 86);\\n border-radius: 15px;\\n border: 2px solid rgb(34, 135, 225);\\n padding: 10px;\\n box-sizing: border-box;\\n transition: 0.2s;\\n font-size: 20px;\\n color: #fff;\\n font-weight: 100;\\n}\\n.weitiao[data-v-c6884a34]:focus {\\n outline: none;\\n}\\n.fineTune[data-v-c6884a34] {\\n display: flex;\\n align-items: center;\\n justify-content: center;\\n text-align: center;\\n background-color: rgb(66, 70, 86);\\n border: 0;\\n transition: 0.3s;\\n box-shadow: 0px 0px 5px 0px rgb(84, 89, 110);\\n}\\n.fineTune[data-v-c6884a34]:hover {\\n box-shadow: 0px 0px 10px 0px rgb(29, 144, 245);\\n}\\n.session[data-v-c6884a34] {\\n display: flex;\\n align-items: center;\\n justify-content: center;\\n text-align: center;\\n background-color: rgb(66, 70, 86);\\n border: 0;\\n transition: 0.3s;\\n box-shadow: 0px 0px 5px 0px rgb(84, 89, 110);\\n margin-left: 0px;\\n margin-right: 0px;\\n width: 99%;\\n}\\n.session[data-v-c6884a34]:hover {\\n box-shadow: 0px 0px 10px 0px rgb(29, 144, 245);\\n}\\n.inputs[data-v-c6884a34] {\\n width: 65%;\\n height: 50px;\\n background-color: rgb(66, 70, 86);\\n border-radius: 15px;\\n border: 2px solid rgb(34, 135, 225);\\n padding: 10px;\\n box-sizing: border-box;\\n transition: 0.2s;\\n font-size: 20px;\\n color: #fff;\\n font-weight: 100;\\n margin: 0 20px;\\n}\\n.inputs[data-v-c6884a34]:focus {\\n outline: none;\\n}\\n.whiteText[data-v-c6884a34] {\\n color: #fff;\\n}\\n[data-v-c6884a34] .el-input__inner {\\n background-color: transparent;\\n color: #409EFF;\\n}\\n.setting[data-v-c6884a34] {\\n margin-left: 0px;\\n padding-left: 
10px;\\n color: rgb(176, 178, 189);\\n}\\n.setting.active[data-v-c6884a34] {\\n color: #fff;\\n}\\n.setting[data-v-c6884a34]:hover {\\n cursor: pointer;\\n}\\n#jianbian[data-v-c6884a34] {\\n background-color: rgb(39, 42, 55);\\n border-color: #409EFF;\\n color: #fff;\\n border-width: 0px;\\n}\\n.astrict[data-v-c6884a34] {\\n width: 90%;\\n}\\n.settingButton[data-v-c6884a34] {\\n width: 99%;\\n}\\n.block[data-v-c6884a34] {\\n margin-top: 5%;\\n}\\n.block .demonstration[data-v-c6884a34] {\\n color: aliceblue;\\n text-align: center;\\n}\\n.inputs[data-v-c6884a34] {\\n width: 90%;\\n height: 50px;\\n background-color: rgb(66, 70, 86);\\n border-radius: 15px;\\n border: 2px solid rgb(34, 135, 225);\\n padding: 10px;\\n box-sizing: border-box;\\n transition: 0.2s;\\n font-size: 20px;\\n color: #fff;\\n font-weight: 100;\\n margin: 0 20px;\\n}\\n.inputs[data-v-c6884a34]:focus {\\n outline: none;\\n}\\n.chatHome[data-v-c6884a34] {\\n display: flex;\\n}\\n.chatHome .chatLeft[data-v-c6884a34] {\\n width: 17%;\\n}\\n.chatHome .chatLeft .title[data-v-c6884a34] {\\n color: #fff;\\n padding-left: 10px;\\n}\\n.chatHome .chatLeft .online-person .onlin-text[data-v-c6884a34] {\\n margin-left: 20%;\\n padding-left: 10px;\\n color: rgb(176, 178, 189);\\n}\\n.chatHome .chatLeft .online-person .s-wrapper[data-v-c6884a34] {\\n padding-left: 10px;\\n height: 70vh;\\n margin-top: 10px;\\n overflow: hidden;\\n overflow-y: scroll;\\n box-sizing: border-box;\\n}\\n.chatHome .chatLeft .online-person .s-wrapper[data-v-c6884a34]::-webkit-scrollbar {\\n width: 0;\\n /* Safari,Chrome 隐藏滚动条 */\\n height: 0;\\n /* Safari,Chrome 隐藏滚动条 */\\n display: none;\\n /* 移动端、pad 上Safari,Chrome,隐藏滚动条 */\\n}\\n.chatHome .chatRight[data-v-c6884a34] {\\n flex: 1;\\n padding-right: 30px;\\n}\\n.chatHome .chatRight .showIcon[data-v-c6884a34] {\\n position: absolute;\\n top: calc(50% - 150px);\\n /*垂直居中 */\\n left: calc(50% - 50px);\\n /*水平居中 */\\n}\\n.chatHome .chatRight .showIcon .icon-snapchat[data-v-c6884a34] {\\n width: 300px;\\n height: 300px;\\n font-size: 300px;\\n}\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/view/pages/chatHome/index.vue?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D!./node_modules/sass-loader/dist/cjs.js??clonedRuleSet-22.use%5B3%5D!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options"); - -/***/ }), - -/***/ "./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./src/assets/font/iconfont.css": -/*!******************************************************************************************************************************************************************************************************************************************!*\ - !*** ./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use[2]!./src/assets/font/iconfont.css ***! 
- \******************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ../../../node_modules/css-loader/dist/runtime/noSourceMaps.js */ \"./node_modules/css-loader/dist/runtime/noSourceMaps.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(/*! ../../../node_modules/css-loader/dist/runtime/api.js */ \"./node_modules/css-loader/dist/runtime/api.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1__);\n/* harmony import */ var _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(/*! ../../../node_modules/css-loader/dist/runtime/getUrl.js */ \"./node_modules/css-loader/dist/runtime/getUrl.js\");\n/* harmony import */ var _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2___default = /*#__PURE__*/__webpack_require__.n(_node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2__);\n// Imports\n\n\n\nvar ___CSS_LOADER_URL_IMPORT_0___ = new URL(/* asset import */ __webpack_require__(/*! iconfont.woff2?t=1681088355288 */ \"./src/assets/font/iconfont.woff2?t=1681088355288\"), __webpack_require__.b);\nvar ___CSS_LOADER_URL_IMPORT_1___ = new URL(/* asset import */ __webpack_require__(/*! iconfont.woff?t=1681088355288 */ \"./src/assets/font/iconfont.woff?t=1681088355288\"), __webpack_require__.b);\nvar ___CSS_LOADER_URL_IMPORT_2___ = new URL(/* asset import */ __webpack_require__(/*! 
iconfont.ttf?t=1681088355288 */ \"./src/assets/font/iconfont.ttf?t=1681088355288\"), __webpack_require__.b);\nvar ___CSS_LOADER_EXPORT___ = _node_modules_css_loader_dist_runtime_api_js__WEBPACK_IMPORTED_MODULE_1___default()((_node_modules_css_loader_dist_runtime_noSourceMaps_js__WEBPACK_IMPORTED_MODULE_0___default()));\nvar ___CSS_LOADER_URL_REPLACEMENT_0___ = _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2___default()(___CSS_LOADER_URL_IMPORT_0___);\nvar ___CSS_LOADER_URL_REPLACEMENT_1___ = _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2___default()(___CSS_LOADER_URL_IMPORT_1___);\nvar ___CSS_LOADER_URL_REPLACEMENT_2___ = _node_modules_css_loader_dist_runtime_getUrl_js__WEBPACK_IMPORTED_MODULE_2___default()(___CSS_LOADER_URL_IMPORT_2___);\n// Module\n___CSS_LOADER_EXPORT___.push([module.id, \"@font-face {\\n font-family: \\\"iconfont\\\"; /* Project id 3996937 */\\n src: url(\" + ___CSS_LOADER_URL_REPLACEMENT_0___ + \") format('woff2'),\\n url(\" + ___CSS_LOADER_URL_REPLACEMENT_1___ + \") format('woff'),\\n url(\" + ___CSS_LOADER_URL_REPLACEMENT_2___ + \") format('truetype');\\n}\\n.iconfont {\\n font-family: \\\"iconfont\\\" !important;\\n font-size: 16px;\\n font-style: normal;\\n -webkit-font-smoothing: antialiased;\\n -moz-osx-font-smoothing: grayscale;\\n}\\n.icon-shanchu:before {\\n content: \\\"\\\\e630\\\";\\n}\\n.icon-iconyuanbanben_fanyi:before {\\n content: \\\"\\\\e6b6\\\";\\n}\\n.icon-wenben:before {\\n content: \\\"\\\\e600\\\";\\n}\\n.icon-luyin:before {\\n content: \\\"\\\\e740\\\";\\n}\\n.icon-tupian:before {\\n content: \\\"\\\\e623\\\";\\n}\\n.icon-luyin1:before {\\n content: \\\"\\\\e602\\\";\\n}\\n.icon-shezhi:before {\\n content: \\\"\\\\e8b8\\\";\\n}\\n.icon-qingchu:before {\\n content: \\\"\\\\e609\\\";\\n}\\n.icon-xinxi:before {\\n content: \\\"\\\\e624\\\";\\n}\\n.icon-weidenglu:before {\\n content: \\\"\\\\e6a3\\\";\\n}\\n.icon-daoru:before {\\n content: \\\"\\\\e645\\\";\\n}\\n.icon-daochu:before {\\n content: \\\"\\\\e646\\\";\\n}\\n\\n\", \"\"]);\n// Exports\n/* harmony default export */ __webpack_exports__[\"default\"] = (___CSS_LOADER_EXPORT___);\n\n\n//# sourceURL=webpack://JCM-AI/./src/assets/font/iconfont.css?./node_modules/css-loader/dist/cjs.js??clonedRuleSet-22.use%5B1%5D!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-22.use%5B2%5D"); - -/***/ }), - -/***/ "./node_modules/vue-style-loader/index.js??clonedRuleSet-12.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=style&index=0&id=f89df198&lang=css&": -/*!***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************!*\ - !*** 
./node_modules/vue-style-loader/index.js??clonedRuleSet-12.use[0]!./node_modules/css-loader/dist/cjs.js??clonedRuleSet-12.use[1]!./node_modules/@vue/vue-loader-v15/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/dist/cjs.js??clonedRuleSet-12.use[2]!./node_modules/@vue/vue-loader-v15/lib/index.js??vue-loader-options!./src/view/pages/setting.vue?vue&type=style&index=0&id=f89df198&lang=css& ***! - \***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************/ -/***/ (function(module, __unused_webpack_exports, __webpack_require__) { - -eval("// style-loader: Adds some css to the DOM by adding a