parquet-converter committed
Commit d074614 · 1 Parent(s): 35262c2

Update parquet files (step 70 of 121)

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro 11.0.07 Serial Number.md +0 -25
  2. spaces/1gistliPinn/ChatGPT4/Examples/Freeserialnumberencore500 __FULL__.md +0 -58
  3. spaces/1phancelerku/anime-remove-background/Download WIFI Driver for Lenovo 20207 - Compatible with Windows 10 (64-bit) and All Models.md +0 -115
  4. spaces/2ndelement/voicevox/test/test_mock_synthesis_engine.py +0 -140
  5. spaces/A00001/bingothoo/src/pages/api/sydney.ts +0 -62
  6. spaces/AIConsultant/MusicGen/audiocraft/utils/__init__.py +0 -6
  7. spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/utils.py +0 -73
  8. spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/fs2_utils.py +0 -173
  9. spaces/AP123/dreamgaussian/sh_utils.py +0 -118
  10. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/Factory.js +0 -13
  11. spaces/AiMimicry/sovits-models/vdecoder/hifigan/models.py +0 -503
  12. spaces/AlekseyCalvin/Make-Putin-Queer/app.py +0 -16
  13. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion/README.md +0 -68
  14. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/training_utils.py +0 -314
  15. spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r18-d8_769x769_80k_cityscapes.py +0 -9
  16. spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x1024_160k_cityscapes.py +0 -39
  17. spaces/AngoHF/ANGO-Leaderboard/components/submit.py +0 -15
  18. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/completions.py +0 -637
  19. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/callbacks.py +0 -95
  20. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/start_linux.sh +0 -67
  21. spaces/Anonymous-sub/Rerender/ControlNet/docs/faq.md +0 -21
  22. spaces/Arnx/MusicGenXvAKN/tests/modules/test_transformer.py +0 -253
  23. spaces/Artrajz/vits-simple-api/bert_vits2/bert/bert-base-japanese-v3/README.md +0 -53
  24. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/sjisprober.py +0 -105
  25. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/common.py +0 -424
  26. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/helpers.py +0 -1088
  27. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/test.py +0 -251
  28. spaces/AtomdffAI/wechatgpt4atom/bot/chatgpt/chat_gpt_bot.py +0 -131
  29. spaces/BIOML-SVM/SVM/app.py +0 -286
  30. spaces/Benson/text-generation/Examples/Descargar Angry Birds Star Wars 2 Monedas Ilimitadas.md +0 -69
  31. spaces/Binguii/Venus_Proxy/README.md +0 -10
  32. spaces/CALM/Dashboard/streamlit_observable/frontend/src/index.tsx +0 -10
  33. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/__init__.py +0 -16
  34. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/trident_backbone.py +0 -223
  35. spaces/CVPR/LIVE/pybind11/tests/test_opaque_types.py +0 -47
  36. spaces/CVPR/LIVE/pybind11/tools/pybind11Common.cmake +0 -296
  37. spaces/CVPR/LIVE/scene.h +0 -120
  38. spaces/CVPR/LIVE/thrust/thrust/detail/config/simple_defines.h +0 -30
  39. spaces/CVPR/LIVE/thrust/thrust/iterator/detail/distance_from_result.h +0 -42
  40. spaces/CVPR/WALT/mmdet/datasets/cityscapes.py +0 -334
  41. spaces/ChandraMohanNayal/AutoGPT/tests/unit/json_tests.py +0 -114
  42. spaces/CikeyQI/meme-api/docs/examples/test_api.py +0 -23
  43. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-add9ad59.js +0 -0
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_typing.py +0 -28
  45. spaces/Datasculptor/LoRA-DreamBooth-Training-UI/inference.py +0 -94
  46. spaces/Detomo/ai-comic-generation/src/app/interface/about/index.tsx +0 -46
  47. spaces/DiegoLigtenberg/realtimespeech/parsarg.py +0 -26
  48. spaces/Dimalker/Faceswapper/roop/globals.py +0 -17
  49. spaces/DrGabrielLopez/GPT2_Chatbot/app.py +0 -139
  50. spaces/Dusan/clickbaitonator/fudge/README.md +0 -155
spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro 11.0.07 Serial Number.md DELETED
@@ -1,25 +0,0 @@
- <br />
- <h1>How to Find Your Adobe Acrobat XI Pro 11.0.07 Serial Number</h1>
- <p>If you have purchased Adobe Acrobat XI Pro 11.0.07, you will need a serial number to activate your product and access its features. A serial number is a unique code that identifies your software license. Without it, you will not be able to use Adobe Acrobat XI Pro 11.0.07 properly.</p>
- <h2>adobe acrobat xi pro 11.0.07 serial number</h2><br /><p><b><b>Download Zip</b> &#10040;&#10040;&#10040; <a href="https://imgfil.com/2uxZVW">https://imgfil.com/2uxZVW</a></b></p><br /><br />
- <p>There are different ways to find your serial number depending on how you obtained your product. Here are some common scenarios and how to locate your serial number:</p>
- <ul>
- <li>If you bought Adobe Acrobat XI Pro 11.0.07 from the Adobe website or an authorized reseller, you should have received an email with your serial number. Check your inbox or spam folder for an email from Adobe with the subject line "Your Serial Number" or "Your Product Registration Info". If you can't find the email, you can also log in to your Adobe account at <a href="https://www.adobe.com/account.html">https://www.adobe.com/account.html</a> and go to "Plans & Products" > "View all" > "Products" > "View your products". You should see your serial number listed under Adobe Acrobat XI Pro 11.0.07.</li>
- <li>If you bought Adobe Acrobat XI Pro 11.0.07 as a physical product, such as a DVD or a box, you should find your serial number on a sticker or a card inside the package. The serial number is usually a 24-digit alphanumeric code that starts with 1118.</li>
- <li>If you downloaded Adobe Acrobat XI Pro 11.0.07 from a torrent site or a crack site, you may have obtained an illegal copy of the software that does not have a valid serial number. In this case, you are violating the terms of use and the intellectual property rights of Adobe, and you may face legal consequences. We strongly advise you to uninstall the pirated software and purchase a legitimate copy of Adobe Acrobat XI Pro 11.0.07 from the official website or an authorized reseller.</li>
- </ul>
- <p>Once you have your serial number, you can enter it during the installation process or after launching the software for the first time. Follow the on-screen instructions to complete the activation process and enjoy using Adobe Acrobat XI Pro 11.0.07.</p>
-
- <p>Adobe Acrobat XI Pro 11.0.07 is a powerful and versatile software that allows you to create, edit, convert, sign, and share PDF documents. You can also use it to fill out forms, add comments, apply digital signatures, protect your files, and collaborate with others. Adobe Acrobat XI Pro 11.0.07 is compatible with Windows and Mac operating systems, and it supports various file formats, such as Word, Excel, PowerPoint, JPEG, PNG, and more.</p>
- <p></p>
- <p>Some of the key features of Adobe Acrobat XI Pro 11.0.07 include:</p>
- <ul>
- <li>PDF editing: You can edit text and images in your PDF files, adjust fonts, colors, alignment, and layout, crop and rotate pages, insert headers and footers, add watermarks and backgrounds, and more.</li>
- <li>PDF conversion: You can convert PDF files to other formats, such as Word, Excel, PowerPoint, HTML, EPUB, and image files. You can also create PDF files from any application that prints, such as web browsers, email clients, and office programs.</li>
- <li>PDF signing: You can sign your PDF files electronically with your digital ID or a certificate-based signature. You can also request signatures from others and track their status online.</li>
- <li>PDF protection: You can secure your PDF files with passwords and permissions, encrypt them with 256-bit AES or 128-bit RC4 algorithms, redact sensitive information, remove hidden data, and apply stamps and certificates.</li>
- <li>PDF collaboration: You can share your PDF files with others via email, cloud services, or social media. You can also review and comment on PDF files with others using tools such as sticky notes, highlights, stamps, and drawing tools.</li>
- </ul>
- <p>If you want to learn more about Adobe Acrobat XI Pro 11.0.07 and how to use it effectively, you can visit the official website at <a href="https://www.adobe.com/products/acrobatpro.html">https://www.adobe.com/products/acrobatpro.html</a> or check out the online tutorials at <a href="https://helpx.adobe.com/acrobat/tutorials.html">https://helpx.adobe.com/acrobat/tutorials.html</a>. You can also contact the customer support team at <a href="https://helpx.adobe.com/contact.html">https://helpx.adobe.com/contact.html</a> if you have any questions or issues.</p> d5da3c52bf<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Freeserialnumberencore500 __FULL__.md DELETED
@@ -1,58 +0,0 @@
- <h2>freeserialnumberencore500</h2><br /><p><b><b>DOWNLOAD</b> ---> <a href="https://imgfil.com/2uy0Gi">https://imgfil.com/2uy0Gi</a></b></p><br /><br />
-
- If other options are ordered with Encore S (e.g., quad seat configurations), the model number uses a three-letter and two-number model code.
-
- Product description
-
- Encore is available in three rows, eight seats, or twelve seats. The seating is fixed, meaning there are no middle or rear-facing seats. The optional driver's side seat comes with a center armrest, which is only available in the front row. The front seats use the same bench seats as the Citation III, but are more softly upholstered. There are no armrests on the driver's side of the front seats. The second row is a bench with fold-down seating for the rear passengers. There are no seats in the rear row, nor do any seats fold.
-
- The Encore can be ordered with three engine options: 2.5 L Tigershark inline-four engines with or at 5600 rpm, 2.8 L Tigershark V6 engines with at 5200 rpm, or 3.0 L Tigershark V6 engines with at 5500 rpm. The 2.5 L Tigershark is available in two grades: and. The 2.5 Tigershark produces with the, and with the. The 2.8 Tigershark produces with the. The 3.0 Tigershark produces with the.
-
- A navigation system with eight-inch color display, Sirius satellite radio, auxiliary audio input jack, and either AM/FM or CD/MP3 radio are available as options. Navigation system and GPS inputs are standard on the 6th Avenue, 6th Avenue S, and 6th Avenue S6 models, but not on the 6th Avenue SE6.
-
- Engines can be ordered with either single or dual exhaust outlets.
-
- Numerous options are available, including:
-
- - front and rear bumper extenders
-
- - chrome wheels
-
- - carpet floor mats
-
- - interior carpeting
-
- - floor mats
-
- - carpet
-
- - cargo cover
-
- - cargo cover with locking features
-
- - DVD-Audio and CD-Audio capability
-
- - dual fuel tanks
-
- - auxiliary fuel cell
-
- - safety kit
-
- - exterior garnish kit
-
- - grille guards
-
- - cargo net
-
- - side mirror covers
-
- - power antenna
-
- - rain-sensing wipers
-
- - power windows
-
- - heated windshield 4fefd39f24<br />
- <br />
- <br />
- <p></p>
spaces/1phancelerku/anime-remove-background/Download WIFI Driver for Lenovo 20207 - Compatible with Windows 10 (64-bit) and All Models.md DELETED
@@ -1,115 +0,0 @@
-
- <h1>How to Download and Install WiFi Driver for Lenovo 20207 Laptop</h1>
- <p>If you have a Lenovo 20207 laptop, you may need to download and install a WiFi driver to connect to wireless networks. A WiFi driver is a software program that enables your laptop to communicate with wireless devices such as routers, modems, and access points. Without a WiFi driver, you will not be able to access the internet, share files, or use online services on your laptop.</p>
- <p>In this article, we will show you how to download and install the WiFi driver for your Lenovo 20207 laptop. We will also explain what a WiFi driver is and why you need it. By following these simple steps, you will be able to enjoy wireless connectivity on your laptop.</p>
- <h2>download driver wifi lenovo 20207</h2><br /><p><b><b>Download</b> &#10038; <a href="https://jinyurl.com/2uNSPp">https://jinyurl.com/2uNSPp</a></b></p><br /><br />
- <h2>What is a WiFi Driver and Why Do You Need It?</h2>
- <h3>A WiFi driver is a software program that enables your laptop to communicate with wireless networks</h3>
- <p>A WiFi driver is a type of device driver that acts as an interface between your laptop's hardware and software. It allows your laptop's wireless card to send and receive data packets from wireless networks. A wireless card is a component that enables your laptop to connect to wireless devices such as routers, modems, and access points.</p>
- <p>A WiFi driver is usually specific to your laptop's model, wireless card, and operating system. It contains information about how to configure, control, and operate your wireless card. It also contains instructions on how to handle different types of wireless networks, such as public, private, or encrypted ones.</p>
- <h3>You need a WiFi driver to access the internet, share files, and use online services on your laptop</h3>
- <p>A WiFi driver is essential for using wireless connectivity on your laptop. Without a WiFi driver, your laptop will not be able to recognize or connect to any wireless network. This means that you will not be able to access the internet, share files, or use online services on your laptop.</p>
- <p>A WiFi driver also helps improve the performance and stability of your wireless connection. It ensures that your wireless card works properly and efficiently. It also prevents errors, crashes, or compatibility issues that may occur due to outdated or corrupted drivers.</p>
- <h2>How to Find Out the Model and Operating System of Your Lenovo 20207 Laptop</h2>
- <h3>You can find out the model of your laptop by checking the label on the bottom or the box</h3>
- <p>The easiest way to find out the model of your Lenovo 20207 laptop is to check the label on the bottom of your laptop or the box that it came in. The label should have a sticker that shows the model name, serial number, and product key of your laptop. You can also find the model name on the top right corner of your laptop's keyboard.</p>
- <h3>You can find out the operating system of your laptop by following these steps</h3>
- <p>The operating system of your laptop is the software that runs your laptop and manages its resources. It also provides the user interface and the applications that you use on your laptop. The most common operating systems for laptops are Windows 10, Windows 8.1, and Windows 7.</p>
- <p>download driver wifi lenovo 20207 windows 10<br />
- download driver wifi lenovo b490 notebook 20207<br />
- download driver wifi lenovo 20207 desktops and workstations<br />
- download driver wifi lenovo 20207 intel<br />
- download driver wifi lenovo 20207 realtek<br />
- download driver wifi lenovo 20207 qualcomm<br />
- download driver wifi lenovo 20207 ideapad 100s-14ibr<br />
- download driver wifi lenovo 20207 support us<br />
- download driver wifi lenovo 20207 ds543269<br />
- download driver wifi lenovo 20207 m2wlg02us14.exe<br />
- download driver wifi lenovo 20207 thinkstation p330<br />
- download driver wifi lenovo 20207 thinkcentre m715q<br />
- download driver wifi lenovo 20207 ideacentre t540-15icb g<br />
- download driver wifi lenovo 20207 installation instructions<br />
- download driver wifi lenovo 20207 checksum readme<br />
- download driver wifi lenovo 20207 compatible devices<br />
- download driver wifi lenovo 20207 compatible operating systems<br />
- download driver wifi lenovo 20207 file name size version<br />
- download driver wifi lenovo 20207 severity recommended<br />
- download driver wifi lenovo 20207 product home product info<br />
- download driver wifi lenovo 20207 serial number machine type<br />
- download driver wifi lenovo 20207 quick links user guide<br />
- download driver wifi lenovo 20207 parts accessories drivers software<br />
- download driver wifi lenovo 20207 warranty status unknown warranty<br />
- download driver wifi lenovo 20207 terms and conditions unknown warranty status<br />
- download driver wifi lenovo 20207 how can we help you today<br />
- download driver wifi lenovo 20207 download lenovo vantage windows support center<br />
- download driver wifi lenovo 20207 purchase parts repair status register products services<br />
- download driver wifi lenovo 20207 laptops and netbooks b series laptops b490 laptop type 20207<br />
- download driver wifi lenovo 20207 pc support laptops b series laptops b490 laptop type 20207<br />
- download driver wifi lenovo 20207 wlan driver intel realtek qualcomm for windows 10 ideapad 100s-14ibr<br />
- download driver wifi lenovo 20207 shop support community my account english cart pc support laptops ideapad series laptops ideapad laptop ideapad <br />
- download driver wifi lenovo 20207 intel wifi driver for windows desktops and workstations in this article compatible devices compatible operating systems other information available drivers file name size version operating system release date severity options intel wifi driver for windows desktops and workstations mb windows bit apr recommended intel wifi</p>
- <p>To find out the operating system of your Lenovo 20207 laptop, you can follow these steps:</p>
- <h4>Windows 10: Click on Start > Settings > System > About</h4>
- <p>On the About page, you will see the edition, version, and build of your Windows 10 operating system. You will also see the system type, which indicates whether your laptop has a 32-bit or a 64-bit processor.</p>
- <h4>Windows 8.1: Swipe in from the right edge of the screen > Settings > PC info</h4>
- <p>On the PC info page, you will see the edition and version of your Windows 8.1 operating system. You will also see the system type, which indicates whether your laptop has a 32-bit or a 64-bit processor.</p>
- <h4>Windows 7: Click on Start > Control Panel > System and Security > System</h4>
- <p>On the System page, you will see the edition and service pack of your Windows 7 operating system. You will also see the system type, which indicates whether your laptop has a 32-bit or a 64-bit processor.</p>
- <h2>How to Download the WiFi Driver for Your Lenovo 20207 Laptop</h2>
- <h3>You can download the WiFi driver from the Lenovo support website by following these steps</h3>
- <p>The Lenovo support website is the official source of drivers and software for your Lenovo 20207 laptop. You can download the WiFi driver that is compatible with your laptop's model, wireless card, and operating system from this website. To do so, you can follow these steps:</p>
- <h4>Go to [Lenovo Support] and enter your laptop model in the search box</h4>
- <p>On the Lenovo support website, you will see a search box where you can enter your laptop model. Type in "Lenovo 20207" and hit Enter. You will be directed to the product page of your laptop.</p>
- <h4>Select your operating system from the drop-down menu</h4>
- <p>On the product page of your laptop, you will see a drop-down menu where you can select your operating system. Choose the one that matches your laptop's operating system, such as Windows 10, Windows 8.1, or Windows 7.</p>
- <h4>Click on Drivers & Software and then on Networking: Wireless LAN</h4>
- <p>On the product page of your laptop, you will see a tab called Drivers & Software. Click on it to see all the drivers and software available for your laptop. Then, click on Networking: Wireless LAN to see all the WiFi drivers for your laptop.</p>
- <h4>Choose the WiFi driver that matches your wireless card and download it</h4>
- <p>On the Networking: Wireless LAN page, you will see different WiFi drivers for different wireless cards. You need to choose the one that matches your wireless card. To find out what wireless card you have, you can check the label on the bottom of your laptop or the box that it came in. You can also use a tool like [Speccy] to scan your laptop and find out the details of your wireless card. Once you have identified your wireless card, you can choose the corresponding WiFi driver from the list and click on the download button. You will be asked to save the file to your laptop. Choose a location where you can easily find it later, such as your desktop or downloads folder.</p>
- <h2>How to Install the WiFi Driver for Your Lenovo 20207 Laptop</h2>
- <h3>You can install the WiFi driver by following these steps</h3>
- <p>After you have downloaded the WiFi driver, you need to install it on your laptop. This will update your wireless card and enable it to work properly with wireless networks. To install the WiFi driver, you can follow these steps:</p>
- <h4>Locate the downloaded file and double-click on it to run it</h4>
- <p>Go to the location where you saved the WiFi driver file and locate it. It should have a name like "wlanxxxx.exe" or something similar. Double-click on the file to run it. You may see a security warning asking you to confirm if you want to run the file. Click on Yes or Run to proceed.</p>
- <h4>Follow the on-screen instructions to complete the installation process</h4>
- <p>A window will open that will guide you through the installation process. You may need to accept the license agreement, choose the installation location, and click on Next or Install to continue. Follow the on-screen instructions until the installation is complete.</p>
- <h4>Restart your laptop and check if the WiFi is working properly</h4>
- <p>After the installation is finished, you may need to restart your laptop for the changes to take effect. Click on Finish or Restart Now to do so. When your laptop restarts, check if the WiFi icon is visible on the taskbar and if you can connect to wireless networks. If everything is working fine, you have successfully installed the WiFi driver for your Lenovo 20207 laptop.</p>
- <h2>Conclusion</h2>
- <p>You have learned how to download and install the WiFi driver for your Lenovo 20207 laptop. By following these simple steps, you can enjoy wireless connectivity on your laptop and access the internet, share files, and use online services. If you have any questions or issues, you can contact Lenovo support for assistance.</p>
- <p>Here are some FAQs that may help you:</p>
- <h3>FAQs</h3>
- <ul>
- <li><b>Q: How do I know if I have the latest WiFi driver for my Lenovo 20207 laptop?</b></li>
- <li>A: You can check if you have the latest WiFi driver by going to [Lenovo Support] and entering your laptop model in the search box. Then, select your operating system and click on Drivers & Software > Networking: Wireless LAN. Compare the version and date of the WiFi driver with the one installed on your laptop. If there is a newer version available, you can download and install it.</li>
- <li><b>Q: How do I uninstall or reinstall the WiFi driver for my Lenovo 20207 laptop?</b></li>
- <li>A: You can uninstall or reinstall the WiFi driver by going to Start > Control Panel > Device Manager > Network adapters. Right-click on your wireless card and select Uninstall or Update Driver Software. Follow the on-screen instructions to complete the process.</li>
- <li><b>Q: How do I troubleshoot WiFi problems on my Lenovo 20207 laptop?</b></li>
- <li>A: You can troubleshoot WiFi problems by following these steps: <ul>
- <li>Check if your wireless card is enabled and if your WiFi icon is visible on the taskbar.</li>
- <li>Check if your wireless router or modem is working properly and if it is within range of your laptop.</li>
- <li>Check for any interference or obstructions that may affect your wireless signal, such as walls, metal objects, or other devices.</li>
- <li>Check if there are any updates or patches available for your operating system, wireless card, or router.</li>
- <li>Run the Windows Network Troubleshooter by going to Start > Settings > Network & Internet > Status > Network troubleshooter.</li>
- <li>Contact Lenovo support or your internet service provider for further assistance.</li>
- </ul></li>
- <li><b>Q: How do I improve WiFi speed and performance on my Lenovo 20207 laptop?</b></li>
- <li>A: You can improve WiFi speed and performance by following these tips: <ul>
- <li>Place your laptop closer to your wireless router or modem.</li>
- <li>Avoid using multiple devices or applications that consume a lot of bandwidth at the same time, such as streaming videos, downloading files, or gaming online.</li>
- <li>Use a wired connection instead of a wireless connection if possible, such as an Ethernet cable or a USB adapter.</li>
- <li>Change the WiFi channel or frequency on your router or laptop to avoid interference from other wireless devices or networks.</li>
- <li>Upgrade your wireless card, router, or internet plan to a faster or more reliable one.</li>
- </ul></li>
- <li><b>Q: How do I secure my WiFi network and prevent unauthorized access on my Lenovo 20207 laptop?</b></li>
- <li>A: You can secure your WiFi network and prevent unauthorized access by following these steps: <ul>
- <li>Set a strong and unique password for your wireless router and your laptop's WiFi connection.</li>
- <li>Enable encryption on your wireless router and your laptop's WiFi connection, such as WPA2 or WPA3.</li>
- <li>Disable the SSID broadcast on your wireless router to hide your network name from other devices.</li>
- <li>Use a firewall and antivirus software on your laptop to protect it from malware and hackers.</li>
- <li>Avoid using public or unsecured WiFi networks, such as those in cafes, hotels, or airports.</li>
- </ul></li>
- </ul>
- <p>I hope you found this article helpful and informative. If you have any feedback or suggestions, please let me know in the comments section below. Thank you for reading!</p> 401be4b1e0<br />
- <br />
- <br />
spaces/2ndelement/voicevox/test/test_mock_synthesis_engine.py DELETED
@@ -1,140 +0,0 @@
- from unittest import TestCase
-
- from voicevox_engine.dev.synthesis_engine import MockSynthesisEngine
- from voicevox_engine.kana_parser import create_kana
- from voicevox_engine.model import AccentPhrase, AudioQuery, Mora
-
-
- class TestMockSynthesisEngine(TestCase):
-     def setUp(self):
-         super().setUp()
-
-         self.accent_phrases_hello_hiho = [
-             AccentPhrase(
-                 moras=[
-                     Mora(
-                         text="コ",
-                         consonant="k",
-                         consonant_length=0.0,
-                         vowel="o",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                     Mora(
-                         text="ン",
-                         consonant=None,
-                         consonant_length=None,
-                         vowel="N",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                     Mora(
-                         text="ニ",
-                         consonant="n",
-                         consonant_length=0.0,
-                         vowel="i",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                     Mora(
-                         text="チ",
-                         consonant="ch",
-                         consonant_length=0.0,
-                         vowel="i",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                     Mora(
-                         text="ワ",
-                         consonant="w",
-                         consonant_length=0.0,
-                         vowel="a",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                 ],
-                 accent=5,
-                 pause_mora=Mora(
-                     text="、",
-                     consonant=None,
-                     consonant_length=None,
-                     vowel="pau",
-                     vowel_length=0.0,
-                     pitch=0.0,
-                 ),
-             ),
-             AccentPhrase(
-                 moras=[
-                     Mora(
-                         text="ヒ",
-                         consonant="h",
-                         consonant_length=0.0,
-                         vowel="i",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                     Mora(
-                         text="ホ",
-                         consonant="h",
-                         consonant_length=0.0,
-                         vowel="o",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                     Mora(
-                         text="デ",
-                         consonant="d",
-                         consonant_length=0.0,
-                         vowel="e",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                     Mora(
-                         text="ス",
-                         consonant="s",
-                         consonant_length=0.0,
-                         vowel="U",
-                         vowel_length=0.0,
-                         pitch=0.0,
-                     ),
-                 ],
-                 accent=1,
-                 pause_mora=None,
-             ),
-         ]
-         self.engine = MockSynthesisEngine(speakers="", supported_devices="")
-
-     def test_replace_phoneme_length(self):
-         self.assertEqual(
-             self.engine.replace_phoneme_length(
-                 accent_phrases=self.accent_phrases_hello_hiho,
-                 speaker_id=0,
-             ),
-             self.accent_phrases_hello_hiho,
-         )
-
-     def test_replace_mora_pitch(self):
-         self.assertEqual(
-             self.engine.replace_mora_pitch(
-                 accent_phrases=self.accent_phrases_hello_hiho,
-                 speaker_id=0,
-             ),
-             self.accent_phrases_hello_hiho,
-         )
-
-     def test_synthesis(self):
-         self.engine.synthesis(
-             AudioQuery(
-                 accent_phrases=self.accent_phrases_hello_hiho,
-                 speedScale=1,
-                 pitchScale=0,
-                 intonationScale=1,
-                 volumeScale=1,
-                 prePhonemeLength=0.1,
-                 postPhonemeLength=0.1,
-                 outputSamplingRate=24000,
-                 outputStereo=False,
-                 kana=create_kana(self.accent_phrases_hello_hiho),
-             ),
-             speaker_id=0,
-         )
spaces/A00001/bingothoo/src/pages/api/sydney.ts DELETED
@@ -1,62 +0,0 @@
- import { NextApiRequest, NextApiResponse } from 'next'
- import { WebSocket, debug } from '@/lib/isomorphic'
- import { BingWebBot } from '@/lib/bots/bing'
- import { websocketUtils } from '@/lib/bots/bing/utils'
- import { WatchDog, createHeaders } from '@/lib/utils'
-
-
- export default async function handler(req: NextApiRequest, res: NextApiResponse) {
-   const conversationContext = req.body
-   const headers = createHeaders(req.cookies)
-   debug(headers)
-   res.setHeader('Content-Type', 'text/stream; charset=UTF-8')
-
-   const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', {
-     headers: {
-       ...headers,
-       'accept-language': 'zh-CN,zh;q=0.9',
-       'cache-control': 'no-cache',
-       'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
-       pragma: 'no-cache',
-     }
-   })
-
-   const closeDog = new WatchDog()
-   const timeoutDog = new WatchDog()
-   ws.onmessage = (event) => {
-     timeoutDog.watch(() => {
-       ws.send(websocketUtils.packMessage({ type: 6 }))
-     }, 1500)
-     closeDog.watch(() => {
-       ws.close()
-     }, 10000)
-     res.write(event.data)
-     if (/\{"type":([367])\}/.test(String(event.data))) {
-       const type = parseInt(RegExp.$1, 10)
-       debug('connection type', type)
-       if (type === 3) {
-         ws.close()
-       } else {
-         ws.send(websocketUtils.packMessage({ type }))
-       }
-     }
-   }
-
-   ws.onclose = () => {
-     timeoutDog.reset()
-     closeDog.reset()
-     debug('connection close')
-     res.end()
-   }
-
-   await new Promise((resolve) => ws.onopen = resolve)
-   ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 }))
-   ws.send(websocketUtils.packMessage({ type: 6 }))
-   ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!)))
-   req.socket.once('close', () => {
-     ws.close()
-     if (!res.closed) {
-       res.end()
-     }
-   })
- }
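The handler above keeps the upstream socket alive with two re-armed watchdogs: timeoutDog sends a type-6 keep-alive if no message arrives within 1.5 s, and closeDog closes the socket after 10 s of silence. A minimal Python sketch of that watch/reset contract (an illustration of the pattern, not the actual @/lib/utils implementation):

import threading

class WatchDog:
    def __init__(self):
        self._timer = None

    def watch(self, callback, timeout_ms):
        # Re-arm: cancel any pending timer, then fire callback once after the timeout.
        self.reset()
        self._timer = threading.Timer(timeout_ms / 1000.0, callback)
        self._timer.daemon = True
        self._timer.start()

    def reset(self):
        # Cancel the pending timer, if any.
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None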
spaces/AIConsultant/MusicGen/audiocraft/utils/__init__.py DELETED
@@ -1,6 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
- """Utilities."""
spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/utils.py DELETED
@@ -1,73 +0,0 @@
- import importlib
-
- from inspect import isfunction
-
- import os
- import soundfile as sf
-
- def seed_everything(seed):
-     import random, os
-     import numpy as np
-     import torch
-
-     random.seed(seed)
-     os.environ['PYTHONHASHSEED'] = str(seed)
-     np.random.seed(seed)
-     torch.manual_seed(seed)
-     torch.cuda.manual_seed(seed)
-     torch.backends.cudnn.deterministic = True
-     torch.backends.cudnn.benchmark = True
-
- def save_wave(waveform, savepath, name="outwav"):
-     if type(name) is not list:
-         name = [name] * waveform.shape[0]
-
-     for i in range(waveform.shape[0]):
-         path = os.path.join(
-             savepath,
-             "%s_%s.wav"
-             % (
-                 os.path.basename(name[i])
-                 if (not ".wav" in name[i])
-                 else os.path.basename(name[i]).split(".")[0],
-                 i,
-             ),
-         )
-         sf.write(path, waveform[i, 0], samplerate=16000)
-
- def exists(x):
-     return x is not None
-
-
- def default(val, d):
-     if exists(val):
-         return val
-     return d() if isfunction(d) else d
-
-
- def count_params(model, verbose=False):
-     total_params = sum(p.numel() for p in model.parameters())
-     if verbose:
-         print(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.")
-     return total_params
-
-
- def get_obj_from_str(string, reload=False):
-     module, cls = string.rsplit(".", 1)
-     if reload:
-         module_imp = importlib.import_module(module)
-         importlib.reload(module_imp)
-     return getattr(importlib.import_module(module, package=None), cls)
-
-
- def instantiate_from_config(config):
-     if not "target" in config:
-         if config == "__is_first_stage__":
-             return None
-         elif config == "__is_unconditional__":
-             return None
-         raise KeyError("Expected key `target` to instantiate.")
-     return get_obj_from_str(config["target"])(**config.get("params", dict()))
-
- def default_audioldm_config():
-     return {'wave_file_save_path': './output', 'id': {'version': 'v1', 'name': 'default', 'root': '/mnt/fast/nobackup/users/hl01486/projects/general_audio_generation/AudioLDM-python/config/default/latent_diffusion.yaml'}, 'model': {'device': 'cuda', 'reload_from_ckpt': '/mnt/fast/nobackup/scratch4weeks/hl01486/exps/audio_generation/stablediffusion/LDM/audioverse/2023_01_14_full_F4_B_spatial_v2_v1/checkpoints/last.ckpt', 'target': 'audioldm.pipline.LatentDiffusion', 'params': {'base_learning_rate': 5e-06, 'linear_start': 0.0015, 'linear_end': 0.0195, 'num_timesteps_cond': 1, 'log_every_t': 200, 'timesteps': 1000, 'first_stage_key': 'fbank', 'cond_stage_key': 'waveform', 'latent_t_size': 256, 'latent_f_size': 16, 'channels': 8, 'cond_stage_trainable': True, 'conditioning_key': 'film', 'monitor': 'val/loss_simple_ema', 'scale_by_std': True, 'unet_config': {'target': 'audioldm.latent_diffusion.openaimodel.UNetModel', 'params': {'image_size': 64, 'extra_film_condition_dim': 512, 'extra_film_use_concat': True, 'in_channels': 8, 'out_channels': 8, 'model_channels': 128, 'attention_resolutions': [8, 4, 2], 'num_res_blocks': 2, 'channel_mult': [1, 2, 3, 5], 'num_head_channels': 32, 'use_spatial_transformer': True}}, 'first_stage_config': {'base_learning_rate': 4.5e-05, 'target': 'audioldm.variational_autoencoder.autoencoder.AutoencoderKL', 'params': {'monitor': 'val/rec_loss', 'image_key': 'fbank', 'subband': 1, 'embed_dim': 8, 'time_shuffle': 1, 'ddconfig': {'double_z': True, 'z_channels': 8, 'resolution': 256, 'downsample_time': False, 'in_channels': 1, 'out_ch': 1, 'ch': 128, 'ch_mult': [1, 2, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}}}, 'cond_stage_config': {'target': 'audioldm.clap.encoders.CLAPAudioEmbeddingClassifierFreev2', 'params': {'key': 'waveform', 'sampling_rate': 16000, 'embed_mode': 'audio', 'unconditional_prob': 0.1}}}}}
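instantiate_from_config above follows the Stable Diffusion-style convention: a config dict carries a dotted `target` class path plus optional `params` kwargs, exactly as in the default config it returns. A small usage sketch (the target below is an arbitrary importable class chosen for illustration, not one from this repo):

# Any importable dotted path works as `target`; `params` are passed as kwargs.
config = {
    "target": "collections.OrderedDict",
    "params": {},
}
obj = instantiate_from_config(config)  # equivalent to collections.OrderedDict()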
spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/fs2_utils.py DELETED
@@ -1,173 +0,0 @@
- import matplotlib
-
- matplotlib.use('Agg')
-
- import glob
- import importlib
- from utils.cwt import get_lf0_cwt
- import os
- import torch.optim
- import torch.utils.data
- from utils.indexed_datasets import IndexedDataset
- from utils.pitch_utils import norm_interp_f0
- import numpy as np
- from tasks.base_task import BaseDataset
- import torch
- import torch.optim
- import torch.utils.data
- import utils
- import torch.distributions
- from utils.hparams import hparams
-
-
- class FastSpeechDataset(BaseDataset):
-     def __init__(self, prefix, shuffle=False):
-         super().__init__(shuffle)
-         self.data_dir = hparams['binary_data_dir']
-         self.prefix = prefix
-         self.hparams = hparams
-         self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
-         self.indexed_ds = None
-         # self.name2spk_id={}
-
-         # pitch stats
-         f0_stats_fn = f'{self.data_dir}/train_f0s_mean_std.npy'
-         if os.path.exists(f0_stats_fn):
-             hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = np.load(f0_stats_fn)
-             hparams['f0_mean'] = float(hparams['f0_mean'])
-             hparams['f0_std'] = float(hparams['f0_std'])
-         else:
-             hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = None, None
-
-         if prefix == 'test':
-             if hparams['test_input_dir'] != '':
-                 self.indexed_ds, self.sizes = self.load_test_inputs(hparams['test_input_dir'])
-             else:
-                 if hparams['num_test_samples'] > 0:
-                     self.avail_idxs = list(range(hparams['num_test_samples'])) + hparams['test_ids']
-                     self.sizes = [self.sizes[i] for i in self.avail_idxs]
-
-         if hparams['pitch_type'] == 'cwt':
-             _, hparams['cwt_scales'] = get_lf0_cwt(np.ones(10))
-
-     def _get_item(self, index):
-         if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
-             index = self.avail_idxs[index]
-         if self.indexed_ds is None:
-             self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
-         return self.indexed_ds[index]
-
-     def __getitem__(self, index):
-         hparams = self.hparams
-         item = self._get_item(index)
-         max_frames = hparams['max_frames']
-         spec = torch.Tensor(item['mel'])[:max_frames]
-         energy = (spec.exp() ** 2).sum(-1).sqrt()
-         mel2ph = torch.LongTensor(item['mel2ph'])[:max_frames] if 'mel2ph' in item else None
-         f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
-         phone = torch.LongTensor(item['phone'][:hparams['max_input_tokens']])
-         pitch = torch.LongTensor(item.get("pitch"))[:max_frames]
-         # print(item.keys(), item['mel'].shape, spec.shape)
-         sample = {
-             "id": index,
-             "item_name": item['item_name'],
-             "text": item['txt'],
-             "txt_token": phone,
-             "mel": spec,
-             "pitch": pitch,
-             "energy": energy,
-             "f0": f0,
-             "uv": uv,
-             "mel2ph": mel2ph,
-             "mel_nonpadding": spec.abs().sum(-1) > 0,
-         }
-         if self.hparams['use_spk_embed']:
-             sample["spk_embed"] = torch.Tensor(item['spk_embed'])
-         if self.hparams['use_spk_id']:
-             sample["spk_id"] = item['spk_id']
-             # sample['spk_id'] = 0
-             # for key in self.name2spk_id.keys():
-             #     if key in item['item_name']:
-             #         sample['spk_id'] = self.name2spk_id[key]
-             #         break
-         if self.hparams['pitch_type'] == 'cwt':
-             cwt_spec = torch.Tensor(item['cwt_spec'])[:max_frames]
-             f0_mean = item.get('f0_mean', item.get('cwt_mean'))
-             f0_std = item.get('f0_std', item.get('cwt_std'))
-             sample.update({"cwt_spec": cwt_spec, "f0_mean": f0_mean, "f0_std": f0_std})
-         elif self.hparams['pitch_type'] == 'ph':
-             f0_phlevel_sum = torch.zeros_like(phone).float().scatter_add(0, mel2ph - 1, f0)
-             f0_phlevel_num = torch.zeros_like(phone).float().scatter_add(
-                 0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1)
-             sample["f0_ph"] = f0_phlevel_sum / f0_phlevel_num
-         return sample
-
-     def collater(self, samples):
-         if len(samples) == 0:
-             return {}
-         id = torch.LongTensor([s['id'] for s in samples])
-         item_names = [s['item_name'] for s in samples]
-         text = [s['text'] for s in samples]
-         txt_tokens = utils.collate_1d([s['txt_token'] for s in samples], 0)
-         f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
-         pitch = utils.collate_1d([s['pitch'] for s in samples])
-         uv = utils.collate_1d([s['uv'] for s in samples])
-         energy = utils.collate_1d([s['energy'] for s in samples], 0.0)
-         mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
-             if samples[0]['mel2ph'] is not None else None
-         mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
-         txt_lengths = torch.LongTensor([s['txt_token'].numel() for s in samples])
-         mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
-
-         batch = {
-             'id': id,
-             'item_name': item_names,
-             'nsamples': len(samples),
-             'text': text,
-             'txt_tokens': txt_tokens,
-             'txt_lengths': txt_lengths,
-             'mels': mels,
-             'mel_lengths': mel_lengths,
-             'mel2ph': mel2ph,
-             'energy': energy,
-             'pitch': pitch,
-             'f0': f0,
-             'uv': uv,
-         }
-
-         if self.hparams['use_spk_embed']:
-             spk_embed = torch.stack([s['spk_embed'] for s in samples])
-             batch['spk_embed'] = spk_embed
-         if self.hparams['use_spk_id']:
-             spk_ids = torch.LongTensor([s['spk_id'] for s in samples])
-             batch['spk_ids'] = spk_ids
-         if self.hparams['pitch_type'] == 'cwt':
-             cwt_spec = utils.collate_2d([s['cwt_spec'] for s in samples])
-             f0_mean = torch.Tensor([s['f0_mean'] for s in samples])
-             f0_std = torch.Tensor([s['f0_std'] for s in samples])
-             batch.update({'cwt_spec': cwt_spec, 'f0_mean': f0_mean, 'f0_std': f0_std})
-         elif self.hparams['pitch_type'] == 'ph':
-             batch['f0'] = utils.collate_1d([s['f0_ph'] for s in samples])
-
-         return batch
-
-     def load_test_inputs(self, test_input_dir, spk_id=0):
-         inp_wav_paths = glob.glob(f'{test_input_dir}/*.wav') + glob.glob(f'{test_input_dir}/*.mp3')
-         sizes = []
-         items = []
-
-         binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizerr.BaseBinarizer')
-         pkg = ".".join(binarizer_cls.split(".")[:-1])
-         cls_name = binarizer_cls.split(".")[-1]
-         binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
-         binarization_args = hparams['binarization_args']
-
-         for wav_fn in inp_wav_paths:
-             item_name = os.path.basename(wav_fn)
-             ph = txt = tg_fn = ''
-             wav_fn = wav_fn
-             encoder = None
-             item = binarizer_cls.process_item(item_name, ph, txt, tg_fn, wav_fn, spk_id, encoder, binarization_args)
-             items.append(item)
-             sizes.append(item['len'])
-         return items, sizes
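The collater above leans on utils.collate_1d and utils.collate_2d, which this diff does not include. A plausible minimal sketch of the 1-D variant, assuming the usual left-aligned padding semantics (an illustration, not the repo's implementation):

import torch

def collate_1d(values, pad_value=0):
    # Pad variable-length 1-D tensors into one (batch, max_len) tensor.
    max_len = max(v.size(0) for v in values)
    out = values[0].new_full((len(values), max_len), pad_value)
    for i, v in enumerate(values):
        out[i, : v.size(0)].copy_(v)
    return out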
spaces/AP123/dreamgaussian/sh_utils.py DELETED
@@ -1,118 +0,0 @@
- # Copyright 2021 The PlenOctree Authors.
- # Redistribution and use in source and binary forms, with or without
- # modification, are permitted provided that the following conditions are met:
- #
- # 1. Redistributions of source code must retain the above copyright notice,
- #    this list of conditions and the following disclaimer.
- #
- # 2. Redistributions in binary form must reproduce the above copyright notice,
- #    this list of conditions and the following disclaimer in the documentation
- #    and/or other materials provided with the distribution.
- #
- # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- # POSSIBILITY OF SUCH DAMAGE.
-
- import torch
-
- C0 = 0.28209479177387814
- C1 = 0.4886025119029199
- C2 = [
-     1.0925484305920792,
-     -1.0925484305920792,
-     0.31539156525252005,
-     -1.0925484305920792,
-     0.5462742152960396
- ]
- C3 = [
-     -0.5900435899266435,
-     2.890611442640554,
-     -0.4570457994644658,
-     0.3731763325901154,
-     -0.4570457994644658,
-     1.445305721320277,
-     -0.5900435899266435
- ]
- C4 = [
-     2.5033429417967046,
-     -1.7701307697799304,
-     0.9461746957575601,
-     -0.6690465435572892,
-     0.10578554691520431,
-     -0.6690465435572892,
-     0.47308734787878004,
-     -1.7701307697799304,
-     0.6258357354491761,
- ]
-
-
- def eval_sh(deg, sh, dirs):
-     """
-     Evaluate spherical harmonics at unit directions
-     using hardcoded SH polynomials.
-     Works with torch/np/jnp.
-     ... Can be 0 or more batch dimensions.
-     Args:
-         deg: int SH deg. Currently, 0-4 supported
-         sh: jnp.ndarray SH coeffs [..., C, (deg + 1) ** 2]
-         dirs: jnp.ndarray unit directions [..., 3]
-     Returns:
-         [..., C]
-     """
-     assert deg <= 4 and deg >= 0
-     coeff = (deg + 1) ** 2
-     assert sh.shape[-1] >= coeff
-
-     result = C0 * sh[..., 0]
-     if deg > 0:
-         x, y, z = dirs[..., 0:1], dirs[..., 1:2], dirs[..., 2:3]
-         result = (result -
-                   C1 * y * sh[..., 1] +
-                   C1 * z * sh[..., 2] -
-                   C1 * x * sh[..., 3])
-
-         if deg > 1:
-             xx, yy, zz = x * x, y * y, z * z
-             xy, yz, xz = x * y, y * z, x * z
-             result = (result +
-                       C2[0] * xy * sh[..., 4] +
-                       C2[1] * yz * sh[..., 5] +
-                       C2[2] * (2.0 * zz - xx - yy) * sh[..., 6] +
-                       C2[3] * xz * sh[..., 7] +
-                       C2[4] * (xx - yy) * sh[..., 8])
-
-             if deg > 2:
-                 result = (result +
-                           C3[0] * y * (3 * xx - yy) * sh[..., 9] +
-                           C3[1] * xy * z * sh[..., 10] +
-                           C3[2] * y * (4 * zz - xx - yy) * sh[..., 11] +
-                           C3[3] * z * (2 * zz - 3 * xx - 3 * yy) * sh[..., 12] +
-                           C3[4] * x * (4 * zz - xx - yy) * sh[..., 13] +
-                           C3[5] * z * (xx - yy) * sh[..., 14] +
-                           C3[6] * x * (xx - 3 * yy) * sh[..., 15])
-
-                 if deg > 3:
-                     result = (result + C4[0] * xy * (xx - yy) * sh[..., 16] +
-                               C4[1] * yz * (3 * xx - yy) * sh[..., 17] +
-                               C4[2] * xy * (7 * zz - 1) * sh[..., 18] +
-                               C4[3] * yz * (7 * zz - 3) * sh[..., 19] +
-                               C4[4] * (zz * (35 * zz - 30) + 3) * sh[..., 20] +
-                               C4[5] * xz * (7 * zz - 3) * sh[..., 21] +
-                               C4[6] * (xx - yy) * (7 * zz - 1) * sh[..., 22] +
-                               C4[7] * xz * (xx - 3 * yy) * sh[..., 23] +
-                               C4[8] * (xx * (xx - 3 * yy) - yy * (3 * xx - yy)) * sh[..., 24])
-     return result
-
- def RGB2SH(rgb):
-     return (rgb - 0.5) / C0
-
- def SH2RGB(sh):
-     return sh * C0 + 0.5
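Since eval_sh at degree 0 reduces to C0 * sh[..., 0], RGB2SH and SH2RGB invert that DC term exactly. A quick sanity-check sketch using the functions above (shapes follow the docstring: sh is [..., C, (deg + 1) ** 2]):

import torch

rgb = torch.rand(4, 3)                    # 4 points, RGB in [0, 1]
sh = RGB2SH(rgb).unsqueeze(-1)            # [..., 3, 1]: one DC coefficient per channel
dirs = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
out = eval_sh(0, sh, dirs) + 0.5          # degree 0 ignores the view direction
assert torch.allclose(out, rgb, atol=1e-6)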
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/Factory.js DELETED
@@ -1,13 +0,0 @@
- import DropDownList from './DropDownList.js';
- import ObjectFactory from '../ObjectFactory.js';
- import SetValue from '../../../plugins/utils/object/SetValue.js';
-
- ObjectFactory.register('dropDownList', function (config) {
-     var gameObject = new DropDownList(this.scene, config);
-     this.scene.add.existing(gameObject);
-     return gameObject;
- });
-
- SetValue(window, 'RexPlugins.UI.DropDownList', DropDownList);
-
- export default DropDownList;
spaces/AiMimicry/sovits-models/vdecoder/hifigan/models.py DELETED
@@ -1,503 +0,0 @@
- import os
- import json
- from .env import AttrDict
- import numpy as np
- import torch
- import torch.nn.functional as F
- import torch.nn as nn
- from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
- from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
- from .utils import init_weights, get_padding
-
- LRELU_SLOPE = 0.1
-
-
- def load_model(model_path, device='cuda'):
-     config_file = os.path.join(os.path.split(model_path)[0], 'config.json')
-     with open(config_file) as f:
-         data = f.read()
-
-     global h
-     json_config = json.loads(data)
-     h = AttrDict(json_config)
-
-     generator = Generator(h).to(device)
-
-     cp_dict = torch.load(model_path)
-     generator.load_state_dict(cp_dict['generator'])
-     generator.eval()
-     generator.remove_weight_norm()
-     del cp_dict
-     return generator, h
-
-
- class ResBlock1(torch.nn.Module):
-     def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
-         super(ResBlock1, self).__init__()
-         self.h = h
-         self.convs1 = nn.ModuleList([
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
-                                padding=get_padding(kernel_size, dilation[0]))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
-                                padding=get_padding(kernel_size, dilation[1]))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
-                                padding=get_padding(kernel_size, dilation[2])))
-         ])
-         self.convs1.apply(init_weights)
-
-         self.convs2 = nn.ModuleList([
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                                padding=get_padding(kernel_size, 1))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                                padding=get_padding(kernel_size, 1))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                                padding=get_padding(kernel_size, 1)))
-         ])
-         self.convs2.apply(init_weights)
-
-     def forward(self, x):
-         for c1, c2 in zip(self.convs1, self.convs2):
-             xt = F.leaky_relu(x, LRELU_SLOPE)
-             xt = c1(xt)
-             xt = F.leaky_relu(xt, LRELU_SLOPE)
-             xt = c2(xt)
-             x = xt + x
-         return x
-
-     def remove_weight_norm(self):
-         for l in self.convs1:
-             remove_weight_norm(l)
-         for l in self.convs2:
-             remove_weight_norm(l)
-
-
- class ResBlock2(torch.nn.Module):
-     def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
-         super(ResBlock2, self).__init__()
-         self.h = h
-         self.convs = nn.ModuleList([
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
-                                padding=get_padding(kernel_size, dilation[0]))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
-                                padding=get_padding(kernel_size, dilation[1])))
-         ])
-         self.convs.apply(init_weights)
-
-     def forward(self, x):
-         for c in self.convs:
-             xt = F.leaky_relu(x, LRELU_SLOPE)
-             xt = c(xt)
-             x = xt + x
-         return x
-
-     def remove_weight_norm(self):
-         for l in self.convs:
-             remove_weight_norm(l)
-
-
- def padDiff(x):
-     return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0)
-
- class SineGen(torch.nn.Module):
-     """ Definition of sine generator
-     SineGen(samp_rate, harmonic_num = 0,
-             sine_amp = 0.1, noise_std = 0.003,
-             voiced_threshold = 0,
-             flag_for_pulse=False)
-     samp_rate: sampling rate in Hz
-     harmonic_num: number of harmonic overtones (default 0)
-     sine_amp: amplitude of sine waveform (default 0.1)
-     noise_std: std of Gaussian noise (default 0.003)
-     voiced_threshold: F0 threshold for U/V classification (default 0)
-     flag_for_pulse: this SineGen is used inside PulseGen (default False)
-     Note: when flag_for_pulse is True, the first time step of a voiced
-     segment is always sin(np.pi) or cos(0)
-     """
-
-     def __init__(self, samp_rate, harmonic_num=0,
-                  sine_amp=0.1, noise_std=0.003,
-                  voiced_threshold=0,
-                  flag_for_pulse=False):
-         super(SineGen, self).__init__()
-         self.sine_amp = sine_amp
-         self.noise_std = noise_std
-         self.harmonic_num = harmonic_num
-         self.dim = self.harmonic_num + 1
-         self.sampling_rate = samp_rate
-         self.voiced_threshold = voiced_threshold
-         self.flag_for_pulse = flag_for_pulse
-
-     def _f02uv(self, f0):
-         # generate uv signal
-         uv = (f0 > self.voiced_threshold).type(torch.float32)
-         return uv
-
-     def _f02sine(self, f0_values):
-         """ f0_values: (batchsize, length, dim)
-         where dim indicates fundamental tone and overtones
-         """
-         # convert to F0 in rad. The integer part n can be ignored
-         # because 2 * np.pi * n doesn't affect phase
-         rad_values = (f0_values / self.sampling_rate) % 1
-
-         # initial phase noise (no noise for fundamental component)
-         rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \
-                               device=f0_values.device)
-         rand_ini[:, 0] = 0
-         rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-
-         # instantaneous phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad)
-         if not self.flag_for_pulse:
-             # for normal case
-
-             # To prevent torch.cumsum numerical overflow,
-             # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1.
-             # Buffer tmp_over_one_idx indicates the time step to add -1.
-             # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi
-             tmp_over_one = torch.cumsum(rad_values, 1) % 1
-             tmp_over_one_idx = (padDiff(tmp_over_one)) < 0
-             cumsum_shift = torch.zeros_like(rad_values)
-             cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-
-             sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1)
-                               * 2 * np.pi)
-         else:
-             # If necessary, make sure that the first time step of every
-             # voiced segment is sin(pi) or cos(0)
-             # This is used for pulse-train generation
-
-             # identify the last time step in unvoiced segments
-             uv = self._f02uv(f0_values)
-             uv_1 = torch.roll(uv, shifts=-1, dims=1)
-             uv_1[:, -1, :] = 1
-             u_loc = (uv < 1) * (uv_1 > 0)
-
-             # get the instantaneous phase
-             tmp_cumsum = torch.cumsum(rad_values, dim=1)
-             # different batches need to be processed differently
-             for idx in range(f0_values.shape[0]):
-                 temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :]
-                 temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :]
-                 # stores the accumulation of i.phase within
-                 # each voiced segment
-                 tmp_cumsum[idx, :, :] = 0
-                 tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum
-
-             # rad_values - tmp_cumsum: remove the accumulation of i.phase
-             # within the previous voiced segment.
-             i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1)
-
-             # get the sines
-             sines = torch.cos(i_phase * 2 * np.pi)
-         return sines
-
-     def forward(self, f0):
-         """ sine_tensor, uv = forward(f0)
-         input F0: tensor(batchsize=1, length, dim=1)
-                   f0 for unvoiced steps should be 0
-         output sine_tensor: tensor(batchsize=1, length, dim)
-         output uv: tensor(batchsize=1, length, 1)
-         """
-         with torch.no_grad():
-             f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim,
-                                  device=f0.device)
-             # fundamental component
-             fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device))
-
-             # generate sine waveforms
-             sine_waves = self._f02sine(fn) * self.sine_amp
-
-             # generate uv signal
-             # uv = torch.ones(f0.shape)
-             # uv = uv * (f0 > self.voiced_threshold)
-             uv = self._f02uv(f0)
-
-             # noise: for unvoiced should be similar to sine_amp
-             #        std = self.sine_amp/3 -> max value ~ self.sine_amp
-             #        for voiced regions is self.noise_std
-             noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
-             noise = noise_amp * torch.randn_like(sine_waves)
-
-             # first: set the unvoiced part to 0 by uv
-             # then: additive noise
-             sine_waves = sine_waves * uv + noise
-         return sine_waves, uv, noise
-
-
- class SourceModuleHnNSF(torch.nn.Module):
-     """ SourceModule for hn-nsf
-     SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
-                  add_noise_std=0.003, voiced_threshod=0)
-     sampling_rate: sampling_rate in Hz
-     harmonic_num: number of harmonics above F0 (default: 0)
-     sine_amp: amplitude of sine source signal (default: 0.1)
-     add_noise_std: std of additive Gaussian noise (default: 0.003)
-         note that amplitude of noise in unvoiced is decided
-         by sine_amp
-     voiced_threshold: threshold to set U/V given F0 (default: 0)
-     Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-     F0_sampled (batchsize, length, 1)
-     Sine_source (batchsize, length, 1)
-     noise_source (batchsize, length, 1)
-     uv (batchsize, length, 1)
-     """
-
-     def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1,
-                  add_noise_std=0.003, voiced_threshod=0):
-         super(SourceModuleHnNSF, self).__init__()
-
-         self.sine_amp = sine_amp
-         self.noise_std = add_noise_std
-
-         # to produce sine waveforms
-         self.l_sin_gen = SineGen(sampling_rate, harmonic_num,
-                                  sine_amp, add_noise_std, voiced_threshod)
-
-         # to merge source harmonics into a single excitation
-         self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
-         self.l_tanh = torch.nn.Tanh()
-
-     def forward(self, x):
-         """
-         Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-         F0_sampled (batchsize, length, 1)
-         Sine_source (batchsize, length, 1)
-         noise_source (batchsize, length, 1)
-         """
-         # source for harmonic branch
-         sine_wavs, uv, _ = self.l_sin_gen(x)
-         sine_merge = self.l_tanh(self.l_linear(sine_wavs))
-
-         # source for noise branch, in the same shape as uv
-         noise = torch.randn_like(uv) * self.sine_amp / 3
-         return sine_merge, noise, uv
-
-
- class Generator(torch.nn.Module):
-     def __init__(self, h):
-         super(Generator, self).__init__()
-         self.h = h
-
-         self.num_kernels = len(h["resblock_kernel_sizes"])
-         self.num_upsamples = len(h["upsample_rates"])
-         self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"]))
-         self.m_source = SourceModuleHnNSF(
-             sampling_rate=h["sampling_rate"],
-             harmonic_num=8)
-         self.noise_convs = nn.ModuleList()
-         self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3))
-         resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2
-         self.ups = nn.ModuleList()
-         for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])):
-             c_cur = h["upsample_initial_channel"] // (2 ** (i + 1))
-             self.ups.append(weight_norm(
-                 ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)),
295
- k, u, padding=(k - u) // 2)))
296
- if i + 1 < len(h["upsample_rates"]): #
297
- stride_f0 = np.prod(h["upsample_rates"][i + 1:])
298
- self.noise_convs.append(Conv1d(
299
- 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2))
300
- else:
301
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
302
- self.resblocks = nn.ModuleList()
303
- for i in range(len(self.ups)):
304
- ch = h["upsample_initial_channel"] // (2 ** (i + 1))
305
- for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])):
306
- self.resblocks.append(resblock(h, ch, k, d))
307
-
308
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
309
- self.ups.apply(init_weights)
310
- self.conv_post.apply(init_weights)
311
- self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1)
312
-
313
- def forward(self, x, f0, g=None):
314
- # print(1,x.shape,f0.shape,f0[:, None].shape)
315
- f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t
316
- # print(2,f0.shape)
317
- har_source, noi_source, uv = self.m_source(f0)
318
- har_source = har_source.transpose(1, 2)
319
- x = self.conv_pre(x)
320
- x = x + self.cond(g)
321
- # print(124,x.shape,har_source.shape)
322
- for i in range(self.num_upsamples):
323
- x = F.leaky_relu(x, LRELU_SLOPE)
324
- # print(3,x.shape)
325
- x = self.ups[i](x)
326
- x_source = self.noise_convs[i](har_source)
327
- # print(4,x_source.shape,har_source.shape,x.shape)
328
- x = x + x_source
329
- xs = None
330
- for j in range(self.num_kernels):
331
- if xs is None:
332
- xs = self.resblocks[i * self.num_kernels + j](x)
333
- else:
334
- xs += self.resblocks[i * self.num_kernels + j](x)
335
- x = xs / self.num_kernels
336
- x = F.leaky_relu(x)
337
- x = self.conv_post(x)
338
- x = torch.tanh(x)
339
-
340
- return x
341
-
342
- def remove_weight_norm(self):
343
- print('Removing weight norm...')
344
- for l in self.ups:
345
- remove_weight_norm(l)
346
- for l in self.resblocks:
347
- l.remove_weight_norm()
348
- remove_weight_norm(self.conv_pre)
349
- remove_weight_norm(self.conv_post)
350
-
351
-
352
- class DiscriminatorP(torch.nn.Module):
353
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
354
- super(DiscriminatorP, self).__init__()
355
- self.period = period
356
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
357
- self.convs = nn.ModuleList([
358
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
359
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
360
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
361
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
362
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
363
- ])
364
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
365
-
366
- def forward(self, x):
367
- fmap = []
368
-
369
- # 1d to 2d
370
- b, c, t = x.shape
371
- if t % self.period != 0: # pad first
372
- n_pad = self.period - (t % self.period)
373
- x = F.pad(x, (0, n_pad), "reflect")
374
- t = t + n_pad
375
- x = x.view(b, c, t // self.period, self.period)
376
-
377
- for l in self.convs:
378
- x = l(x)
379
- x = F.leaky_relu(x, LRELU_SLOPE)
380
- fmap.append(x)
381
- x = self.conv_post(x)
382
- fmap.append(x)
383
- x = torch.flatten(x, 1, -1)
384
-
385
- return x, fmap
386
-
387
-
388
- class MultiPeriodDiscriminator(torch.nn.Module):
389
- def __init__(self, periods=None):
390
- super(MultiPeriodDiscriminator, self).__init__()
391
- self.periods = periods if periods is not None else [2, 3, 5, 7, 11]
392
- self.discriminators = nn.ModuleList()
393
- for period in self.periods:
394
- self.discriminators.append(DiscriminatorP(period))
395
-
396
- def forward(self, y, y_hat):
397
- y_d_rs = []
398
- y_d_gs = []
399
- fmap_rs = []
400
- fmap_gs = []
401
- for i, d in enumerate(self.discriminators):
402
- y_d_r, fmap_r = d(y)
403
- y_d_g, fmap_g = d(y_hat)
404
- y_d_rs.append(y_d_r)
405
- fmap_rs.append(fmap_r)
406
- y_d_gs.append(y_d_g)
407
- fmap_gs.append(fmap_g)
408
-
409
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
410
-
411
-
412
- class DiscriminatorS(torch.nn.Module):
413
- def __init__(self, use_spectral_norm=False):
414
- super(DiscriminatorS, self).__init__()
415
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
416
- self.convs = nn.ModuleList([
417
- norm_f(Conv1d(1, 128, 15, 1, padding=7)),
418
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
419
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
420
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
421
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
422
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
423
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
424
- ])
425
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
426
-
427
- def forward(self, x):
428
- fmap = []
429
- for l in self.convs:
430
- x = l(x)
431
- x = F.leaky_relu(x, LRELU_SLOPE)
432
- fmap.append(x)
433
- x = self.conv_post(x)
434
- fmap.append(x)
435
- x = torch.flatten(x, 1, -1)
436
-
437
- return x, fmap
438
-
439
-
440
- class MultiScaleDiscriminator(torch.nn.Module):
441
- def __init__(self):
442
- super(MultiScaleDiscriminator, self).__init__()
443
- self.discriminators = nn.ModuleList([
444
- DiscriminatorS(use_spectral_norm=True),
445
- DiscriminatorS(),
446
- DiscriminatorS(),
447
- ])
448
- self.meanpools = nn.ModuleList([
449
- AvgPool1d(4, 2, padding=2),
450
- AvgPool1d(4, 2, padding=2)
451
- ])
452
-
453
- def forward(self, y, y_hat):
454
- y_d_rs = []
455
- y_d_gs = []
456
- fmap_rs = []
457
- fmap_gs = []
458
- for i, d in enumerate(self.discriminators):
459
- if i != 0:
460
- y = self.meanpools[i - 1](y)
461
- y_hat = self.meanpools[i - 1](y_hat)
462
- y_d_r, fmap_r = d(y)
463
- y_d_g, fmap_g = d(y_hat)
464
- y_d_rs.append(y_d_r)
465
- fmap_rs.append(fmap_r)
466
- y_d_gs.append(y_d_g)
467
- fmap_gs.append(fmap_g)
468
-
469
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
470
-
471
-
472
- def feature_loss(fmap_r, fmap_g):
473
- loss = 0
474
- for dr, dg in zip(fmap_r, fmap_g):
475
- for rl, gl in zip(dr, dg):
476
- loss += torch.mean(torch.abs(rl - gl))
477
-
478
- return loss * 2
479
-
480
-
481
- def discriminator_loss(disc_real_outputs, disc_generated_outputs):
482
- loss = 0
483
- r_losses = []
484
- g_losses = []
485
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
486
- r_loss = torch.mean((1 - dr) ** 2)
487
- g_loss = torch.mean(dg ** 2)
488
- loss += (r_loss + g_loss)
489
- r_losses.append(r_loss.item())
490
- g_losses.append(g_loss.item())
491
-
492
- return loss, r_losses, g_losses
493
-
494
-
495
- def generator_loss(disc_outputs):
496
- loss = 0
497
- gen_losses = []
498
- for dg in disc_outputs:
499
- l = torch.mean((1 - dg) ** 2)
500
- gen_losses.append(l)
501
- loss += l
502
-
503
- return loss, gen_losses
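
A minimal shape-check sketch for the source module above, assuming the class definitions (and the `padDiff` helper) from this module are in scope; the sampling rate, harmonic count, and F0 values are illustrative:

```python
import torch

# Illustrative hyperparameters; 8 harmonics matches what Generator passes in above.
source = SourceModuleHnNSF(sampling_rate=22050, harmonic_num=8)

f0 = torch.rand(1, 100, 1) * 200 + 50   # (batch, length, 1) F0 track in Hz, all voiced
sine_merge, noise, uv = source(f0)

# The harmonics are merged through Linear(harmonic_num + 1, 1) + Tanh,
# so all three outputs come back as (batch, length, 1).
print(sine_merge.shape, noise.shape, uv.shape)
```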
 
spaces/AlekseyCalvin/Make-Putin-Queer/app.py DELETED
@@ -1,16 +0,0 @@
-
-import gradio as gr
-
-markdown = f'''
-#
-### Use "trp", "trp person", or "trp person putin" in your prompt.
-To generate custom images of queer and/or trans alter-dimensional identities of the infamous reigning spook Vladimir Putin, use "trp" or "trp person" in your Stable Diffusion prompt during inference with this model.
-Alongside other crucial, yet oft neglected, documentary content available in the public sphere ("Putin finally appears in drag", "Putin plays piano in Bowie wig", "femme Putin", etc...),
-this model was fine-tuned on numerous distinct variants of the classic "queer Putin" meme, which once spread like wildfiring rainbows in response to the 2018 intensification of the Russian government's ruthlessly inhumane crackdowns on LGBTQ+ persons and communities.
-
-It is running on CPU. Duplicate this Space and switch to a GPU of your choice for faster generations.
-
-'''
-
-gr.Interface.load("models/AlekseyCalvin/Make_Putin_Queer_Please").launch()
-
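
A hedged sketch of invoking the same checkpoint directly with `diffusers` instead of `gr.Interface.load`; it assumes the repo "AlekseyCalvin/Make_Putin_Queer_Please" loads as a standard Stable Diffusion pipeline:

```python
from diffusers import StableDiffusionPipeline

# Assumption: the repo hosts a standard SD checkpoint fine-tuned on the "trp" token.
pipe = StableDiffusionPipeline.from_pretrained("AlekseyCalvin/Make_Putin_Queer_Please")
image = pipe("trp person putin, watercolor portrait").images[0]  # trigger token in prompt
image.save("trp.png")
```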
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion/README.md DELETED
@@ -1,68 +0,0 @@
-## Textual Inversion fine-tuning example
-
-[Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text2image models like Stable Diffusion on your own images using just 3-5 examples.
-The `textual_inversion.py` script shows how to implement the training procedure and adapt it for Stable Diffusion.
-
-## Training with Intel Extension for PyTorch
-
-Intel Extension for PyTorch provides optimizations for faster training and inference on CPUs. You can leverage the training example "textual_inversion.py". Follow the [instructions](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) to get the model and [dataset](https://huggingface.co/sd-concepts-library/dicoo2) before running the script.
-
-The example supports both single-node and multi-node distributed training:
-
-### Single node training
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export DATA_DIR="path-to-dir-containing-dicoo-images"
-
-python textual_inversion.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --train_data_dir=$DATA_DIR \
-  --learnable_property="object" \
-  --placeholder_token="<dicoo>" --initializer_token="toy" \
-  --seed=7 \
-  --resolution=512 \
-  --train_batch_size=1 \
-  --gradient_accumulation_steps=1 \
-  --max_train_steps=3000 \
-  --learning_rate=2.5e-03 --scale_lr \
-  --output_dir="textual_inversion_dicoo"
-```
-
-Note: Bfloat16 is available on Intel Xeon Scalable processors (Cooper Lake and Sapphire Rapids). You may not get a performance speedup without Bfloat16 support.
-
-### Multi-node distributed training
-
-Before running the scripts, make sure to install the training dependencies, including the oneCCL bindings for PyTorch:
-
-```bash
-python -m pip install oneccl_bind_pt==1.13 -f https://developer.intel.com/ipex-whl-stable-cpu
-```
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export DATA_DIR="path-to-dir-containing-dicoo-images"
-
-oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
-source $oneccl_bindings_for_pytorch_path/env/setvars.sh
-
-python -m intel_extension_for_pytorch.cpu.launch --distributed \
-  --hostfile hostfile --nnodes 2 --nproc_per_node 2 textual_inversion.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --train_data_dir=$DATA_DIR \
-  --learnable_property="object" \
-  --placeholder_token="<dicoo>" --initializer_token="toy" \
-  --seed=7 \
-  --resolution=512 \
-  --train_batch_size=1 \
-  --gradient_accumulation_steps=1 \
-  --max_train_steps=750 \
-  --learning_rate=2.5e-03 --scale_lr \
-  --output_dir="textual_inversion_dicoo"
-```
-The above is a simple distributed-training example on 2 nodes with 2 processes on each node. Add the right hostnames or IP addresses in the "hostfile" and make sure the 2 nodes are reachable from each other. For more details, please refer to the [user guide](https://github.com/intel/torch-ccl).
-
-
-### Reference
-
-We publish a [Medium blog](https://medium.com/intel-analytics-software/personalized-stable-diffusion-with-few-shot-fine-tuning-on-a-single-cpu-f01a3316b13) on how to create your own Stable Diffusion model on CPUs using textual inversion. Try it out if you are interested.
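
The README covers training only; a hedged inference sketch for the learned `<dicoo>` token, assuming `textual_inversion_dicoo` contains the `learned_embeds.bin` produced by the script (the loader API depends on the diffusers version):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.load_textual_inversion("textual_inversion_dicoo")  # picks up the learned embedding

image = pipe("a <dicoo> figurine on a beach").images[0]
image.save("dicoo.png")
```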
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/training_utils.py DELETED
@@ -1,314 +0,0 @@
-import contextlib
-import copy
-import random
-from typing import Any, Dict, Iterable, Optional, Union
-
-import numpy as np
-import torch
-
-from .utils import deprecate, is_transformers_available
-
-
-if is_transformers_available():
-    import transformers
-
-
-def set_seed(seed: int):
-    """
-    Helper function for reproducible behavior: sets the seed in `random`, `numpy` and `torch`.
-    Args:
-        seed (`int`): The seed to set.
-    """
-    random.seed(seed)
-    np.random.seed(seed)
-    torch.manual_seed(seed)
-    torch.cuda.manual_seed_all(seed)
-    # ^^ safe to call this function even if cuda is not available
-
-
-# Adapted from torch-ema https://github.com/fadel/pytorch_ema/blob/master/torch_ema/ema.py#L14
-class EMAModel:
-    """
-    Exponential Moving Average of model weights.
-    """
-
-    def __init__(
-        self,
-        parameters: Iterable[torch.nn.Parameter],
-        decay: float = 0.9999,
-        min_decay: float = 0.0,
-        update_after_step: int = 0,
-        use_ema_warmup: bool = False,
-        inv_gamma: Union[float, int] = 1.0,
-        power: Union[float, int] = 2 / 3,
-        model_cls: Optional[Any] = None,
-        model_config: Dict[str, Any] = None,
-        **kwargs,
-    ):
-        """
-        Args:
-            parameters (Iterable[torch.nn.Parameter]): The parameters to track.
-            decay (float): The decay factor for the exponential moving average.
-            min_decay (float): The minimum decay factor for the exponential moving average.
-            update_after_step (int): The number of steps to wait before starting to update the EMA weights.
-            use_ema_warmup (bool): Whether to use EMA warmup.
-            inv_gamma (float):
-                Inverse multiplicative factor of EMA warmup. Default: 1. Only used if `use_ema_warmup` is True.
-            power (float): Exponential factor of EMA warmup. Default: 2/3. Only used if `use_ema_warmup` is True.
-            device (Optional[Union[str, torch.device]]): The device to store the EMA weights on. If None, the EMA
-                weights will be stored on CPU.
-
-        @crowsonkb's notes on EMA Warmup:
-            If gamma=1 and power=1, implements a simple average. gamma=1, power=2/3 are good values for models you plan
-            to train for a million or more steps (reaches decay factor 0.999 at 31.6K steps, 0.9999 at 1M steps),
-            gamma=1, power=3/4 for models you plan to train for less (reaches decay factor 0.999 at 10K steps, 0.9999
-            at 215.4k steps).
-        """
-
-        if isinstance(parameters, torch.nn.Module):
-            deprecation_message = (
-                "Passing a `torch.nn.Module` to `ExponentialMovingAverage` is deprecated. "
-                "Please pass the parameters of the module instead."
-            )
-            deprecate(
-                "passing a `torch.nn.Module` to `ExponentialMovingAverage`",
-                "1.0.0",
-                deprecation_message,
-                standard_warn=False,
-            )
-            parameters = parameters.parameters()
-
-            # set use_ema_warmup to True if a torch.nn.Module is passed for backwards compatibility
-            use_ema_warmup = True
-
-        if kwargs.get("max_value", None) is not None:
-            deprecation_message = "The `max_value` argument is deprecated. Please use `decay` instead."
-            deprecate("max_value", "1.0.0", deprecation_message, standard_warn=False)
-            decay = kwargs["max_value"]
-
-        if kwargs.get("min_value", None) is not None:
-            deprecation_message = "The `min_value` argument is deprecated. Please use `min_decay` instead."
-            deprecate("min_value", "1.0.0", deprecation_message, standard_warn=False)
-            min_decay = kwargs["min_value"]
-
-        parameters = list(parameters)
-        self.shadow_params = [p.clone().detach() for p in parameters]
-
-        if kwargs.get("device", None) is not None:
-            deprecation_message = "The `device` argument is deprecated. Please use `to` instead."
-            deprecate("device", "1.0.0", deprecation_message, standard_warn=False)
-            self.to(device=kwargs["device"])
-
-        self.temp_stored_params = None
-
-        self.decay = decay
-        self.min_decay = min_decay
-        self.update_after_step = update_after_step
-        self.use_ema_warmup = use_ema_warmup
-        self.inv_gamma = inv_gamma
-        self.power = power
-        self.optimization_step = 0
-        self.cur_decay_value = None  # set in `step()`
-
-        self.model_cls = model_cls
-        self.model_config = model_config
-
-    @classmethod
-    def from_pretrained(cls, path, model_cls) -> "EMAModel":
-        _, ema_kwargs = model_cls.load_config(path, return_unused_kwargs=True)
-        model = model_cls.from_pretrained(path)
-
-        ema_model = cls(model.parameters(), model_cls=model_cls, model_config=model.config)
-
-        ema_model.load_state_dict(ema_kwargs)
-        return ema_model
-
-    def save_pretrained(self, path):
-        if self.model_cls is None:
-            raise ValueError("`save_pretrained` can only be used if `model_cls` was defined at __init__.")
-
-        if self.model_config is None:
-            raise ValueError("`save_pretrained` can only be used if `model_config` was defined at __init__.")
-
-        model = self.model_cls.from_config(self.model_config)
-        state_dict = self.state_dict()
-        state_dict.pop("shadow_params", None)
-
-        model.register_to_config(**state_dict)
-        self.copy_to(model.parameters())
-        model.save_pretrained(path)
-
-    def get_decay(self, optimization_step: int) -> float:
-        """
-        Compute the decay factor for the exponential moving average.
-        """
-        step = max(0, optimization_step - self.update_after_step - 1)
-
-        if step <= 0:
-            return 0.0
-
-        if self.use_ema_warmup:
-            cur_decay_value = 1 - (1 + step / self.inv_gamma) ** -self.power
-        else:
-            cur_decay_value = (1 + step) / (10 + step)
-
-        cur_decay_value = min(cur_decay_value, self.decay)
-        # make sure decay is not smaller than min_decay
-        cur_decay_value = max(cur_decay_value, self.min_decay)
-        return cur_decay_value
-
-    @torch.no_grad()
-    def step(self, parameters: Iterable[torch.nn.Parameter]):
-        if isinstance(parameters, torch.nn.Module):
-            deprecation_message = (
-                "Passing a `torch.nn.Module` to `ExponentialMovingAverage.step` is deprecated. "
-                "Please pass the parameters of the module instead."
-            )
-            deprecate(
-                "passing a `torch.nn.Module` to `ExponentialMovingAverage.step`",
-                "1.0.0",
-                deprecation_message,
-                standard_warn=False,
-            )
-            parameters = parameters.parameters()
-
-        parameters = list(parameters)
-
-        self.optimization_step += 1
-
-        # Compute the decay factor for the exponential moving average.
-        decay = self.get_decay(self.optimization_step)
-        self.cur_decay_value = decay
-        one_minus_decay = 1 - decay
-
-        context_manager = contextlib.nullcontext
-        if is_transformers_available() and transformers.deepspeed.is_deepspeed_zero3_enabled():
-            import deepspeed
-
-        for s_param, param in zip(self.shadow_params, parameters):
-            if is_transformers_available() and transformers.deepspeed.is_deepspeed_zero3_enabled():
-                context_manager = deepspeed.zero.GatheredParameters(param, modifier_rank=None)
-
-            with context_manager():
-                if param.requires_grad:
-                    s_param.sub_(one_minus_decay * (s_param - param))
-                else:
-                    s_param.copy_(param)
-
-    def copy_to(self, parameters: Iterable[torch.nn.Parameter]) -> None:
-        """
-        Copy current averaged parameters into given collection of parameters.
-
-        Args:
-            parameters: Iterable of `torch.nn.Parameter`; the parameters to be
-                updated with the stored moving averages. If `None`, the parameters with which this
-                `ExponentialMovingAverage` was initialized will be used.
-        """
-        parameters = list(parameters)
-        for s_param, param in zip(self.shadow_params, parameters):
-            param.data.copy_(s_param.to(param.device).data)
-
-    def to(self, device=None, dtype=None) -> None:
-        r"""Move internal buffers of the ExponentialMovingAverage to `device`.
-
-        Args:
-            device: like `device` argument to `torch.Tensor.to`
-        """
-        # .to() on the tensors handles None correctly
-        self.shadow_params = [
-            p.to(device=device, dtype=dtype) if p.is_floating_point() else p.to(device=device)
-            for p in self.shadow_params
-        ]
-
-    def state_dict(self) -> dict:
-        r"""
-        Returns the state of the ExponentialMovingAverage as a dict. This method is used by accelerate during
-        checkpointing to save the ema state dict.
-        """
-        # Following PyTorch conventions, references to tensors are returned:
-        # "returns a reference to the state and not its copy!" -
-        # https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict
-        return {
-            "decay": self.decay,
-            "min_decay": self.min_decay,
-            "optimization_step": self.optimization_step,
-            "update_after_step": self.update_after_step,
-            "use_ema_warmup": self.use_ema_warmup,
-            "inv_gamma": self.inv_gamma,
-            "power": self.power,
-            "shadow_params": self.shadow_params,
-        }
-
-    def store(self, parameters: Iterable[torch.nn.Parameter]) -> None:
-        r"""
-        Save the current parameters for restoring later.
-        Args:
-            parameters: Iterable of `torch.nn.Parameter`; the parameters to be
-                temporarily stored.
-        """
-        self.temp_stored_params = [param.detach().cpu().clone() for param in parameters]
-
-    def restore(self, parameters: Iterable[torch.nn.Parameter]) -> None:
-        r"""
-        Restore the parameters stored with the `store` method. Useful to validate the model with EMA parameters
-        without affecting the original optimization process. Store the parameters before the `copy_to()` method.
-        After validation (or model saving), use this to restore the former parameters.
-        Args:
-            parameters: Iterable of `torch.nn.Parameter`; the parameters to be
-                updated with the stored parameters. If `None`, the parameters with which this
-                `ExponentialMovingAverage` was initialized will be used.
-        """
-        if self.temp_stored_params is None:
-            raise RuntimeError("This ExponentialMovingAverage has no `store()`ed weights to `restore()`")
-        for c_param, param in zip(self.temp_stored_params, parameters):
-            param.data.copy_(c_param.data)
-
-        # Better memory-wise.
-        self.temp_stored_params = None
-
-    def load_state_dict(self, state_dict: dict) -> None:
-        r"""
-        Loads the ExponentialMovingAverage state. This method is used by accelerate during checkpointing to save the
-        ema state dict.
-        Args:
-            state_dict (dict): EMA state. Should be an object returned
-                from a call to :meth:`state_dict`.
-        """
-        # deepcopy, to be consistent with module API
-        state_dict = copy.deepcopy(state_dict)
-
-        self.decay = state_dict.get("decay", self.decay)
-        if self.decay < 0.0 or self.decay > 1.0:
-            raise ValueError("Decay must be between 0 and 1")
-
-        self.min_decay = state_dict.get("min_decay", self.min_decay)
-        if not isinstance(self.min_decay, float):
-            raise ValueError("Invalid min_decay")
-
-        self.optimization_step = state_dict.get("optimization_step", self.optimization_step)
-        if not isinstance(self.optimization_step, int):
-            raise ValueError("Invalid optimization_step")
-
-        self.update_after_step = state_dict.get("update_after_step", self.update_after_step)
-        if not isinstance(self.update_after_step, int):
-            raise ValueError("Invalid update_after_step")
-
-        self.use_ema_warmup = state_dict.get("use_ema_warmup", self.use_ema_warmup)
-        if not isinstance(self.use_ema_warmup, bool):
-            raise ValueError("Invalid use_ema_warmup")
-
-        self.inv_gamma = state_dict.get("inv_gamma", self.inv_gamma)
-        if not isinstance(self.inv_gamma, (float, int)):
-            raise ValueError("Invalid inv_gamma")
-
-        self.power = state_dict.get("power", self.power)
-        if not isinstance(self.power, (float, int)):
-            raise ValueError("Invalid power")
-
-        shadow_params = state_dict.get("shadow_params", None)
-        if shadow_params is not None:
-            self.shadow_params = shadow_params
-            if not isinstance(self.shadow_params, list):
-                raise ValueError("shadow_params must be a list")
-            if not all(isinstance(p, torch.Tensor) for p in self.shadow_params):
-                raise ValueError("shadow_params must all be Tensors")
 
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r18-d8_769x769_80k_cityscapes.py DELETED
@@ -1,9 +0,0 @@
-_base_ = './deeplabv3_r50-d8_769x769_80k_cityscapes.py'
-model = dict(
-    pretrained='open-mmlab://resnet18_v1c',
-    backbone=dict(depth=18),
-    decode_head=dict(
-        in_channels=512,
-        channels=128,
-    ),
-    auxiliary_head=dict(in_channels=256, channels=64))
 
spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x1024_160k_cityscapes.py DELETED
@@ -1,39 +0,0 @@
-_base_ = './ocrnet_hr18_512x1024_160k_cityscapes.py'
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
-    pretrained='open-mmlab://msra/hrnetv2_w48',
-    backbone=dict(
-        extra=dict(
-            stage2=dict(num_channels=(48, 96)),
-            stage3=dict(num_channels=(48, 96, 192)),
-            stage4=dict(num_channels=(48, 96, 192, 384)))),
-    decode_head=[
-        dict(
-            type='FCNHead',
-            in_channels=[48, 96, 192, 384],
-            channels=sum([48, 96, 192, 384]),
-            input_transform='resize_concat',
-            in_index=(0, 1, 2, 3),
-            kernel_size=1,
-            num_convs=1,
-            norm_cfg=norm_cfg,
-            concat_input=False,
-            dropout_ratio=-1,
-            num_classes=19,
-            align_corners=False,
-            loss_decode=dict(
-                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
-        dict(
-            type='OCRHead',
-            in_channels=[48, 96, 192, 384],
-            channels=512,
-            ocr_channels=256,
-            input_transform='resize_concat',
-            in_index=(0, 1, 2, 3),
-            norm_cfg=norm_cfg,
-            dropout_ratio=-1,
-            num_classes=19,
-            align_corners=False,
-            loss_decode=dict(
-                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
-    ])
 
spaces/AngoHF/ANGO-Leaderboard/components/submit.py DELETED
@@ -1,15 +0,0 @@
-import os
-
-import gradio as gr
-
-from assets.content import SUBMIT_TEXT, TEST_SCRIPT_TEXT, TEST_SET_TEXT
-from assets.path import SEASON
-
-
-def create_submit():
-    test_box = gr.Markdown(value=TEST_SET_TEXT, scale=4)
-    test_file = gr.File(value=os.path.join("results", SEASON["latest"], "test_dataset.json"),
-                        label="Test Set", scale=1)
-    script_box = gr.Markdown(value=TEST_SCRIPT_TEXT, scale=4)
-    script_button = gr.File(value=os.path.join("assets/evaluation.py"), label="Test Script", scale=1)
-    gr.Markdown(SUBMIT_TEXT)
 
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/completions.py DELETED
@@ -1,637 +0,0 @@
1
- import time
2
-
3
- import tiktoken
4
- import torch
5
- import torch.nn.functional as F
6
- import yaml
7
- from extensions.openai.defaults import clamp, default, get_default_req_params
8
- from extensions.openai.errors import InvalidRequestError
9
- from extensions.openai.utils import debug_msg, end_line
10
- from modules import shared
11
- from modules.text_generation import decode, encode, generate_reply
12
- from transformers import LogitsProcessor, LogitsProcessorList
13
-
14
-
15
- # Thanks to @Cypherfox [Cypherfoxy] for the logits code, blame to @matatonic
16
- class LogitsBiasProcessor(LogitsProcessor):
17
- def __init__(self, logit_bias={}):
18
- self.logit_bias = logit_bias
19
- if self.logit_bias:
20
- self.keys = list([int(key) for key in self.logit_bias.keys()])
21
- values = [self.logit_bias[str(key)] for key in self.keys]
22
- self.values = torch.tensor(values, dtype=torch.float, device=shared.model.device)
23
- debug_msg(f"{self})")
24
-
25
- def __call__(self, input_ids: torch.LongTensor, logits: torch.FloatTensor) -> torch.FloatTensor:
26
- if self.logit_bias:
27
- debug_msg(logits[0, self.keys], " + ", self.values)
28
- logits[0, self.keys] += self.values
29
- debug_msg(" --> ", logits[0, self.keys])
30
- debug_msg(" max/min ", float(torch.max(logits[0])), float(torch.min(logits[0])))
31
- return logits
32
-
33
- def __repr__(self):
34
- return f"<{self.__class__.__name__}(logit_bias={self.logit_bias})>"
35
-
36
-
37
- class LogprobProcessor(LogitsProcessor):
38
- def __init__(self, logprobs=None):
39
- self.logprobs = logprobs
40
- self.token_alternatives = {}
41
-
42
- def __call__(self, input_ids: torch.LongTensor, logits: torch.FloatTensor) -> torch.FloatTensor:
43
- if self.logprobs is not None: # 0-5
44
- log_e_probabilities = F.log_softmax(logits, dim=1)
45
- top_values, top_indices = torch.topk(log_e_probabilities, k=self.logprobs + 1)
46
- top_tokens = [decode(tok) for tok in top_indices[0]]
47
- top_probs = [float(x) for x in top_values[0]]
48
- self.token_alternatives = dict(zip(top_tokens, top_probs))
49
- debug_msg(repr(self))
50
- return logits
51
-
52
- def __repr__(self):
53
- return f"<{self.__class__.__name__}(logprobs={self.logprobs}, token_alternatives={self.token_alternatives})>"
54
-
55
-
56
- def convert_logprobs_to_tiktoken(model, logprobs):
57
- # more problems than it's worth.
58
- # try:
59
- # encoder = tiktoken.encoding_for_model(model)
60
- # # just pick the first one if it encodes to multiple tokens... 99.9% not required and maybe worse overall.
61
- # return dict([(encoder.decode([encoder.encode(token)[0]]), prob) for token, prob in logprobs.items()])
62
- # except KeyError:
63
- # # assume native tokens if we can't find the tokenizer
64
- # return logprobs
65
-
66
- return logprobs
67
-
68
-
69
- def marshal_common_params(body):
70
- # Request Parameters
71
- # Try to use openai defaults or map them to something with the same intent
72
-
73
- req_params = get_default_req_params()
74
-
75
- # Common request parameters
76
- req_params['truncation_length'] = shared.settings['truncation_length']
77
- req_params['add_bos_token'] = shared.settings.get('add_bos_token', req_params['add_bos_token'])
78
- req_params['seed'] = shared.settings.get('seed', req_params['seed'])
79
- req_params['custom_stopping_strings'] = shared.settings['custom_stopping_strings']
80
-
81
- # OpenAI API Parameters
82
- # model - ignored for now, TODO: When we can reliably load a model or lora from a name only change this
83
- req_params['requested_model'] = body.get('model', shared.model_name)
84
-
85
- req_params['suffix'] = default(body, 'suffix', req_params['suffix'])
86
- req_params['temperature'] = clamp(default(body, 'temperature', req_params['temperature']), 0.01, 1.99) # fixup absolute 0.0/2.0
87
- req_params['top_p'] = clamp(default(body, 'top_p', req_params['top_p']), 0.01, 1.0)
88
- n = default(body, 'n', 1)
89
- if n != 1:
90
- raise InvalidRequestError(message="Only n = 1 is supported.", param='n')
91
-
92
- if 'stop' in body: # str or array, max len 4 (ignored)
93
- if isinstance(body['stop'], str):
94
- req_params['stopping_strings'] = [body['stop']] # non-standard parameter
95
- elif isinstance(body['stop'], list):
96
- req_params['stopping_strings'] = body['stop']
97
-
98
- # presence_penalty - ignored
99
- # frequency_penalty - ignored
100
-
101
- # pass through unofficial params
102
- req_params['repetition_penalty'] = default(body, 'repetition_penalty', req_params['repetition_penalty'])
103
- req_params['encoder_repetition_penalty'] = default(body, 'encoder_repetition_penalty', req_params['encoder_repetition_penalty'])
104
-
105
- # user - ignored
106
-
107
- logits_processor = []
108
- logit_bias = body.get('logit_bias', None)
109
- if logit_bias: # {str: float, ...}
110
- # XXX convert tokens from tiktoken based on requested model
111
- # Ex.: 'logit_bias': {'1129': 100, '11442': 100, '16243': 100}
112
- try:
113
- encoder = tiktoken.encoding_for_model(req_params['requested_model'])
114
- new_logit_bias = {}
115
- for logit, bias in logit_bias.items():
116
- for x in encode(encoder.decode([int(logit)]), add_special_tokens=False)[0]:
117
- if int(x) in [0, 1, 2, 29871]: # XXX LLAMA tokens
118
- continue
119
- new_logit_bias[str(int(x))] = bias
120
- debug_msg('logit_bias_map', logit_bias, '->', new_logit_bias)
121
- logit_bias = new_logit_bias
122
- except KeyError:
123
- pass # assume native tokens if we can't find the tokenizer
124
-
125
- logits_processor = [LogitsBiasProcessor(logit_bias)]
126
-
127
- logprobs = None # coming to chat eventually
128
- if 'logprobs' in body:
129
- logprobs = default(body, 'logprobs', 0) # maybe cap at topk? don't clamp 0-5.
130
- req_params['logprob_proc'] = LogprobProcessor(logprobs)
131
- logits_processor.extend([req_params['logprob_proc']])
132
- else:
133
- logprobs = None
134
-
135
- if logits_processor: # requires logits_processor support
136
- req_params['logits_processor'] = LogitsProcessorList(logits_processor)
137
-
138
- return req_params
139
-
140
-
141
- def messages_to_prompt(body: dict, req_params: dict, max_tokens):
142
- # functions
143
- if body.get('functions', []): # chat only
144
- raise InvalidRequestError(message="functions is not supported.", param='functions')
145
- if body.get('function_call', ''): # chat only, 'none', 'auto', {'name': 'func'}
146
- raise InvalidRequestError(message="function_call is not supported.", param='function_call')
147
-
148
- if 'messages' not in body:
149
- raise InvalidRequestError(message="messages is required", param='messages')
150
-
151
- messages = body['messages']
152
-
153
- role_formats = {
154
- 'user': 'User: {message}\n',
155
- 'assistant': 'Assistant: {message}\n',
156
- 'system': '{message}',
157
- 'context': 'You are a helpful assistant. Answer as concisely as possible.\nUser: I want your assistance.\nAssistant: Sure! What can I do for you?',
158
- 'prompt': 'Assistant:',
159
- }
160
-
161
- if 'stopping_strings' not in req_params:
162
- req_params['stopping_strings'] = []
163
-
164
- # Instruct models can be much better
165
- if shared.settings['instruction_template']:
166
- try:
167
- instruct = yaml.safe_load(open(f"instruction-templates/{shared.settings['instruction_template']}.yaml", 'r'))
168
-
169
- template = instruct['turn_template']
170
- system_message_template = "{message}"
171
- system_message_default = instruct.get('context', '') # can be missing
172
- bot_start = template.find('<|bot|>') # So far, 100% of instruction templates have this token
173
- user_message_template = template[:bot_start].replace('<|user-message|>', '{message}').replace('<|user|>', instruct.get('user', ''))
174
- bot_message_template = template[bot_start:].replace('<|bot-message|>', '{message}').replace('<|bot|>', instruct.get('bot', ''))
175
- bot_prompt = bot_message_template[:bot_message_template.find('{message}')].rstrip(' ')
176
-
177
- role_formats = {
178
- 'user': user_message_template,
179
- 'assistant': bot_message_template,
180
- 'system': system_message_template,
181
- 'context': system_message_default,
182
- 'prompt': bot_prompt,
183
- }
184
-
185
- if 'Alpaca' in shared.settings['instruction_template']:
186
- req_params['stopping_strings'].extend(['\n###'])
187
- elif instruct['user']: # WizardLM and some others have no user prompt.
188
- req_params['stopping_strings'].extend(['\n' + instruct['user'], instruct['user']])
189
-
190
- debug_msg(f"Loaded instruction role format: {shared.settings['instruction_template']}")
191
-
192
- except Exception as e:
193
- req_params['stopping_strings'].extend(['\nUser:', 'User:']) # XXX User: prompt here also
194
-
195
- print(f"Exception: When loading instruction-templates/{shared.settings['instruction_template']}.yaml: {repr(e)}")
196
- print("Warning: Loaded default instruction-following template for model.")
197
-
198
- else:
199
- req_params['stopping_strings'].extend(['\nUser:', 'User:']) # XXX User: prompt here also
200
- print("Warning: Loaded default instruction-following template for model.")
201
-
202
- system_msgs = []
203
- chat_msgs = []
204
-
205
- # You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible. Knowledge cutoff: {knowledge_cutoff} Current date: {current_date}
206
- context_msg = role_formats['system'].format(message=role_formats['context']) if role_formats['context'] else ''
207
- context_msg = end_line(context_msg)
208
-
209
- # Maybe they sent both? This is not documented in the API, but some clients seem to do this.
210
- if 'prompt' in body:
211
- context_msg = end_line(role_formats['system'].format(message=body['prompt'])) + context_msg
212
-
213
- for m in messages:
214
- if 'role' not in m:
215
- raise InvalidRequestError(message="messages: missing role", param='messages')
216
- if 'content' not in m:
217
- raise InvalidRequestError(message="messages: missing content", param='messages')
218
-
219
- role = m['role']
220
- content = m['content']
221
- # name = m.get('name', None)
222
- # function_call = m.get('function_call', None) # user name or function name with output in content
223
- msg = role_formats[role].format(message=content)
224
- if role == 'system':
225
- system_msgs.extend([msg])
226
- elif role == 'function':
227
- raise InvalidRequestError(message="role: function is not supported.", param='messages')
228
- else:
229
- chat_msgs.extend([msg])
230
-
231
- system_msg = '\n'.join(system_msgs)
232
- system_msg = end_line(system_msg)
233
-
234
- prompt = system_msg + context_msg + ''.join(chat_msgs) + role_formats['prompt']
235
-
236
- token_count = len(encode(prompt)[0])
237
-
238
- if token_count >= req_params['truncation_length']:
239
- err_msg = f"This model maximum context length is {req_params['truncation_length']} tokens. However, your messages resulted in over {token_count} tokens."
240
- raise InvalidRequestError(message=err_msg, param='messages')
241
-
242
- if max_tokens > 0 and token_count + max_tokens > req_params['truncation_length']:
243
- err_msg = f"This model maximum context length is {req_params['truncation_length']} tokens. However, your messages resulted in over {token_count} tokens and max_tokens is {max_tokens}."
244
- print(f"Warning: ${err_msg}")
245
- # raise InvalidRequestError(message=err_msg, params='max_tokens')
246
-
247
- return prompt, token_count
248
-
249
-
250
- def chat_completions(body: dict, is_legacy: bool = False) -> dict:
251
- # Chat Completions
252
- object_type = 'chat.completions'
253
- created_time = int(time.time())
254
- cmpl_id = "chatcmpl-%d" % (int(time.time() * 1000000000))
255
- resp_list = 'data' if is_legacy else 'choices'
256
-
257
- # common params
258
- req_params = marshal_common_params(body)
259
- req_params['stream'] = False
260
- requested_model = req_params.pop('requested_model')
261
- logprob_proc = req_params.pop('logprob_proc', None)
262
- req_params['top_k'] = 20 # There is no best_of/top_k param for chat, but it is much improved with a higher top_k.
263
-
264
- # chat default max_tokens is 'inf', but also flexible
265
- max_tokens = 0
266
- max_tokens_str = 'length' if is_legacy else 'max_tokens'
267
- if max_tokens_str in body:
268
- max_tokens = default(body, max_tokens_str, req_params['truncation_length'])
269
- req_params['max_new_tokens'] = max_tokens
270
- else:
271
- req_params['max_new_tokens'] = req_params['truncation_length']
272
-
273
- # format the prompt from messages
274
- prompt, token_count = messages_to_prompt(body, req_params, max_tokens) # updates req_params['stopping_strings']
275
-
276
- # set real max, avoid deeper errors
277
- if req_params['max_new_tokens'] + token_count >= req_params['truncation_length']:
278
- req_params['max_new_tokens'] = req_params['truncation_length'] - token_count
279
-
280
- stopping_strings = req_params.pop('stopping_strings', [])
281
-
282
- # generate reply #######################################
283
- debug_msg({'prompt': prompt, 'req_params': req_params})
284
- generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False)
285
-
286
- answer = ''
287
- for a in generator:
288
- answer = a
289
-
290
- # strip extra leading space off new generated content
291
- if answer and answer[0] == ' ':
292
- answer = answer[1:]
293
-
294
- completion_token_count = len(encode(answer)[0])
295
- stop_reason = "stop"
296
- if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= req_params['max_new_tokens']:
297
- stop_reason = "length"
298
-
299
- resp = {
300
- "id": cmpl_id,
301
- "object": object_type,
302
- "created": created_time,
303
- "model": shared.model_name, # TODO: add Lora info?
304
- resp_list: [{
305
- "index": 0,
306
- "finish_reason": stop_reason,
307
- "message": {"role": "assistant", "content": answer}
308
- }],
309
- "usage": {
310
- "prompt_tokens": token_count,
311
- "completion_tokens": completion_token_count,
312
- "total_tokens": token_count + completion_token_count
313
- }
314
- }
315
- if logprob_proc: # not official for chat yet
316
- top_logprobs = convert_logprobs_to_tiktoken(model=requested_model, logprobs=logprob_proc.token_alternatives)
317
- resp[resp_list][0]["logprobs"] = {'top_logprobs': [top_logprobs]}
318
- # else:
319
- # resp[resp_list][0]["logprobs"] = None
320
-
321
- return resp
322
-
323
-
324
- # generator
325
- def stream_chat_completions(body: dict, is_legacy: bool = False):
326
-
327
- # Chat Completions
328
- stream_object_type = 'chat.completions.chunk'
329
- created_time = int(time.time())
330
- cmpl_id = "chatcmpl-%d" % (int(time.time() * 1000000000))
331
- resp_list = 'data' if is_legacy else 'choices'
332
-
333
- # common params
334
- req_params = marshal_common_params(body)
335
- req_params['stream'] = True
336
- requested_model = req_params.pop('requested_model')
337
- logprob_proc = req_params.pop('logprob_proc', None)
338
- req_params['top_k'] = 20 # There is no best_of/top_k param for chat, but it is much improved with a higher top_k.
339
-
340
- # chat default max_tokens is 'inf', but also flexible
341
- max_tokens = 0
342
- max_tokens_str = 'length' if is_legacy else 'max_tokens'
343
- if max_tokens_str in body:
344
- max_tokens = default(body, max_tokens_str, req_params['truncation_length'])
345
- req_params['max_new_tokens'] = max_tokens
346
- else:
347
- req_params['max_new_tokens'] = req_params['truncation_length']
348
-
349
- # format the prompt from messages
350
- prompt, token_count = messages_to_prompt(body, req_params, max_tokens) # updates req_params['stopping_strings']
351
-
352
- # set real max, avoid deeper errors
353
- if req_params['max_new_tokens'] + token_count >= req_params['truncation_length']:
354
- req_params['max_new_tokens'] = req_params['truncation_length'] - token_count
355
-
356
- def chat_streaming_chunk(content):
357
- # begin streaming
358
- chunk = {
359
- "id": cmpl_id,
360
- "object": stream_object_type,
361
- "created": created_time,
362
- "model": shared.model_name,
363
- resp_list: [{
364
- "index": 0,
365
- "finish_reason": None,
366
- # So yeah... do both methods? delta and messages.
367
- "message": {'role': 'assistant', 'content': content},
368
- "delta": {'role': 'assistant', 'content': content},
369
- }],
370
- }
371
-
372
- if logprob_proc: # not official for chat yet
373
- top_logprobs = convert_logprobs_to_tiktoken(model=requested_model, logprobs=logprob_proc.token_alternatives)
374
- chunk[resp_list][0]["logprobs"] = {'top_logprobs': [top_logprobs]}
375
- # else:
376
- # chunk[resp_list][0]["logprobs"] = None
377
- return chunk
378
-
379
- yield chat_streaming_chunk('')
380
-
381
- # generate reply #######################################
382
- debug_msg({'prompt': prompt, 'req_params': req_params})
383
-
384
- stopping_strings = req_params.pop('stopping_strings', [])
385
-
386
- generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False)
387
-
388
- answer = ''
389
- seen_content = ''
390
- completion_token_count = 0
391
-
392
- for a in generator:
393
- answer = a
394
-
395
- len_seen = len(seen_content)
396
- new_content = answer[len_seen:]
397
-
398
- if not new_content or chr(0xfffd) in new_content: # partial unicode character, don't send it yet.
399
- continue
400
-
401
- seen_content = answer
402
-
403
- # strip extra leading space off new generated content
404
- if len_seen == 0 and new_content[0] == ' ':
405
- new_content = new_content[1:]
406
-
407
- chunk = chat_streaming_chunk(new_content)
408
-
409
- yield chunk
410
-
411
- # to get the correct token_count, strip leading space if present
412
- if answer and answer[0] == ' ':
413
- answer = answer[1:]
414
-
415
- completion_token_count = len(encode(answer)[0])
416
- stop_reason = "stop"
417
- if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= req_params['max_new_tokens']:
418
- stop_reason = "length"
419
-
420
- chunk = chat_streaming_chunk('')
421
- chunk[resp_list][0]['finish_reason'] = stop_reason
422
- chunk['usage'] = {
423
- "prompt_tokens": token_count,
424
- "completion_tokens": completion_token_count,
425
- "total_tokens": token_count + completion_token_count
426
- }
427
-
428
- yield chunk
429
-
430
-
431
- def completions(body: dict, is_legacy: bool = False):
432
- # Legacy
433
- # Text Completions
434
- object_type = 'text_completion'
435
- created_time = int(time.time())
436
- cmpl_id = "conv-%d" % (int(time.time() * 1000000000))
437
- resp_list = 'data' if is_legacy else 'choices'
438
-
439
- # ... encoded as a string, array of strings, array of tokens, or array of token arrays.
440
- prompt_str = 'context' if is_legacy else 'prompt'
441
- if prompt_str not in body:
442
- raise InvalidRequestError("Missing required input", param=prompt_str)
443
-
444
- prompt_arg = body[prompt_str]
445
- if isinstance(prompt_arg, str) or (isinstance(prompt_arg, list) and isinstance(prompt_arg[0], int)):
446
- prompt_arg = [prompt_arg]
447
-
448
- # common params
449
- req_params = marshal_common_params(body)
450
- req_params['stream'] = False
451
- max_tokens_str = 'length' if is_legacy else 'max_tokens'
452
- max_tokens = default(body, max_tokens_str, req_params['max_new_tokens'])
453
- req_params['max_new_tokens'] = max_tokens
454
- requested_model = req_params.pop('requested_model')
455
- logprob_proc = req_params.pop('logprob_proc', None)
456
- stopping_strings = req_params.pop('stopping_strings', [])
457
- # req_params['suffix'] = default(body, 'suffix', req_params['suffix'])
458
- req_params['echo'] = default(body, 'echo', req_params['echo'])
459
- req_params['top_k'] = default(body, 'best_of', req_params['top_k'])
460
-
461
- resp_list_data = []
462
- total_completion_token_count = 0
463
- total_prompt_token_count = 0
464
-
465
- for idx, prompt in enumerate(prompt_arg, start=0):
466
- if isinstance(prompt[0], int):
467
- # token lists
468
- if requested_model == shared.model_name:
469
- prompt = decode(prompt)[0]
470
- else:
471
- try:
472
- encoder = tiktoken.encoding_for_model(requested_model)
473
- prompt = encoder.decode(prompt)
474
- except KeyError:
475
- prompt = decode(prompt)[0]
476
-
477
- token_count = len(encode(prompt)[0])
478
- total_prompt_token_count += token_count
479
-
480
- if token_count + max_tokens > req_params['truncation_length']:
481
- err_msg = f"The token count of your prompt ({token_count}) plus max_tokens ({max_tokens}) cannot exceed the model's context length ({req_params['truncation_length']})."
482
- # print(f"Warning: ${err_msg}")
483
- raise InvalidRequestError(message=err_msg, param=max_tokens_str)
484
-
485
- # generate reply #######################################
486
- debug_msg({'prompt': prompt, 'req_params': req_params})
487
- generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False)
488
- answer = ''
489
-
490
- for a in generator:
491
- answer = a
492
-
493
- # strip extra leading space off new generated content
494
- if answer and answer[0] == ' ':
495
- answer = answer[1:]
496
-
497
-         completion_token_count = len(encode(answer)[0])
-         total_completion_token_count += completion_token_count
-         stop_reason = "stop"
-         if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= max_tokens:
-             stop_reason = "length"
-
-         respi = {
-             "index": idx,
-             "finish_reason": stop_reason,
-             "text": answer,
-             "logprobs": {'top_logprobs': [logprob_proc.token_alternatives]} if logprob_proc else None,
-         }
-
-         resp_list_data.extend([respi])
-
-     resp = {
-         "id": cmpl_id,
-         "object": object_type,
-         "created": created_time,
-         "model": shared.model_name,  # TODO: add Lora info?
-         resp_list: resp_list_data,
-         "usage": {
-             "prompt_tokens": total_prompt_token_count,
-             "completion_tokens": total_completion_token_count,
-             "total_tokens": total_prompt_token_count + total_completion_token_count
-         }
-     }
-
-     return resp
-
-
- # generator
- def stream_completions(body: dict, is_legacy: bool = False):
-     # Legacy
-     # Text Completions
-     # object_type = 'text_completion'
-     stream_object_type = 'text_completion.chunk'
-     created_time = int(time.time())
-     cmpl_id = "conv-%d" % (int(time.time() * 1000000000))
-     resp_list = 'data' if is_legacy else 'choices'
-
-     # ... encoded as a string, array of strings, array of tokens, or array of token arrays.
-     prompt_str = 'context' if is_legacy else 'prompt'
-     if prompt_str not in body:
-         raise InvalidRequestError("Missing required input", param=prompt_str)
-
-     prompt = body[prompt_str]
-     req_params = marshal_common_params(body)
-     requested_model = req_params.pop('requested_model')
-     if isinstance(prompt, list):
-         if prompt and isinstance(prompt[0], int):
-             try:
-                 encoder = tiktoken.encoding_for_model(requested_model)
-                 prompt = encoder.decode(prompt)
-             except KeyError:
-                 prompt = decode(prompt)[0]
-         else:
-             raise InvalidRequestError(message="API Batched generation not yet supported.", param=prompt_str)
-
-     # common params
-     req_params['stream'] = True
-     max_tokens_str = 'length' if is_legacy else 'max_tokens'
-     max_tokens = default(body, max_tokens_str, req_params['max_new_tokens'])
-     req_params['max_new_tokens'] = max_tokens
-     logprob_proc = req_params.pop('logprob_proc', None)
-     stopping_strings = req_params.pop('stopping_strings', [])
-     # req_params['suffix'] = default(body, 'suffix', req_params['suffix'])
-     req_params['echo'] = default(body, 'echo', req_params['echo'])
-     req_params['top_k'] = default(body, 'best_of', req_params['top_k'])
-
-     token_count = len(encode(prompt)[0])
-
-     if token_count + max_tokens > req_params['truncation_length']:
-         err_msg = f"The token count of your prompt ({token_count}) plus max_tokens ({max_tokens}) cannot exceed the model's context length ({req_params['truncation_length']})."
-         # print(f"Warning: {err_msg}")
-         raise InvalidRequestError(message=err_msg, param=max_tokens_str)
-
-     def text_streaming_chunk(content):
-         # begin streaming
-         chunk = {
-             "id": cmpl_id,
-             "object": stream_object_type,
-             "created": created_time,
-             "model": shared.model_name,
-             resp_list: [{
-                 "index": 0,
-                 "finish_reason": None,
-                 "text": content,
-                 "logprobs": {'top_logprobs': [logprob_proc.token_alternatives]} if logprob_proc else None,
-             }],
-         }
-
-         return chunk
-
-     yield text_streaming_chunk('')
-
-     # generate reply #######################################
-     debug_msg({'prompt': prompt, 'req_params': req_params})
-     generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False)
-
-     answer = ''
-     seen_content = ''
-     completion_token_count = 0
-
-     for a in generator:
-         answer = a
-
-         len_seen = len(seen_content)
-         new_content = answer[len_seen:]
-
-         if not new_content or chr(0xfffd) in new_content:  # partial unicode character, don't send it yet.
-             continue
-
-         seen_content = answer
-
-         # strip extra leading space off new generated content
-         if len_seen == 0 and new_content[0] == ' ':
-             new_content = new_content[1:]
-
-         chunk = text_streaming_chunk(new_content)
-
-         yield chunk
-
-     # to get the correct count, we strip the leading space if present
-     if answer and answer[0] == ' ':
-         answer = answer[1:]
-
-     completion_token_count = len(encode(answer)[0])
-     stop_reason = "stop"
-     if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= max_tokens:
-         stop_reason = "length"
-
-     chunk = text_streaming_chunk('')
-     chunk[resp_list][0]["finish_reason"] = stop_reason
-     chunk["usage"] = {
-         "prompt_tokens": token_count,
-         "completion_tokens": completion_token_count,
-         "total_tokens": token_count + completion_token_count
-     }
-
-     yield chunk
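For reference, each streamed event built by `text_streaming_chunk` is a plain dict of the following shape. The values below are illustrative placeholders, not taken from a real run:

```python
# illustrative shape of one streamed chunk (non-legacy mode, so the list key is "choices")
chunk = {
    "id": "conv-1700000000000000000",   # cmpl_id, derived from the current time
    "object": "text_completion.chunk",  # stream_object_type
    "created": 1700000000,              # created_time
    "model": "example-model",           # shared.model_name; placeholder here
    "choices": [{
        "index": 0,
        "finish_reason": None,          # set to "stop" or "length" on the final chunk
        "text": "Hello",
        "logprobs": None,               # populated only when a logprob processor is active
    }],
}
```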
 
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/callbacks.py DELETED
@@ -1,95 +0,0 @@
- import gc
- import traceback
- from queue import Queue
- from threading import Thread
-
- import torch
- import transformers
-
- import modules.shared as shared
-
-
- class _StopEverythingStoppingCriteria(transformers.StoppingCriteria):
-     def __init__(self):
-         transformers.StoppingCriteria.__init__(self)
-
-     def __call__(self, input_ids: torch.LongTensor, _scores: torch.FloatTensor) -> bool:
-         return shared.stop_everything
-
-
- class Stream(transformers.StoppingCriteria):
-     def __init__(self, callback_func=None):
-         self.callback_func = callback_func
-
-     def __call__(self, input_ids, scores) -> bool:
-         if self.callback_func is not None:
-             self.callback_func(input_ids[0])
-
-         return False
-
-
- class Iteratorize:
-
-     """
-     Transforms a function that takes a callback
-     into a lazy iterator (generator).
-
-     Adapted from: https://stackoverflow.com/a/9969000
-     """
-
-     def __init__(self, func, args=None, kwargs=None, callback=None):
-         self.mfunc = func
-         self.c_callback = callback
-         self.q = Queue()
-         self.sentinel = object()
-         self.args = args or []
-         self.kwargs = kwargs or {}
-         self.stop_now = False
-
-         def _callback(val):
-             if self.stop_now or shared.stop_everything:
-                 raise ValueError
-             self.q.put(val)
-
-         def gentask():
-             ret = None  # keep ret defined even if the call below raises
-             try:
-                 # use the normalized self.args so a None argument list is handled
-                 ret = self.mfunc(callback=_callback, *self.args, **self.kwargs)
-             except ValueError:
-                 pass
-             except Exception:
-                 traceback.print_exc()
-
-             clear_torch_cache()
-             self.q.put(self.sentinel)
-             if self.c_callback:
-                 self.c_callback(ret)
-
-         self.thread = Thread(target=gentask)
-         self.thread.start()
-
-     def __iter__(self):
-         return self
-
-     def __next__(self):
-         obj = self.q.get(True, None)
-         if obj is self.sentinel:
-             raise StopIteration
-         else:
-             return obj
-
-     def __del__(self):
-         clear_torch_cache()
-
-     def __enter__(self):
-         return self
-
-     def __exit__(self, exc_type, exc_val, exc_tb):
-         self.stop_now = True
-         clear_torch_cache()
-
-
- def clear_torch_cache():
-     gc.collect()
-     if not shared.args.cpu:
-         torch.cuda.empty_cache()
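As a usage note, `Iteratorize` turns any callback-style producer into a generator. A minimal sketch follows; the `produce` function is hypothetical, and actually running this assumes the webui environment so that `modules.shared` is importable:

```python
from modules.callbacks import Iteratorize

def produce(callback=None):
    # hypothetical producer: pushes three values through the callback
    for i in range(3):
        callback(i)

# the context manager stops the background thread and clears the CUDA cache on exit
with Iteratorize(produce) as it:
    for value in it:
        print(value)  # prints 0, 1, 2
```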
 
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/start_linux.sh DELETED
@@ -1,67 +0,0 @@
- #!/bin/bash
-
- cd "$(dirname "${BASH_SOURCE[0]}")"
-
- if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which cannot be silently installed under a path with spaces. && exit; fi
-
- # deactivate existing conda envs as needed to avoid conflicts
- { conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null
-
- OS_ARCH=$(uname -m)
- case "${OS_ARCH}" in
-     x86_64*)    OS_ARCH="x86_64";;
-     arm64*)     OS_ARCH="aarch64";;
-     aarch64*)   OS_ARCH="aarch64";;
-     *)          echo "Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64" && exit
- esac
-
- # config
- INSTALL_DIR="$(pwd)/installer_files"
- CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda"
- INSTALL_ENV_DIR="$(pwd)/installer_files/env"
- MINICONDA_DOWNLOAD_URL="https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Linux-${OS_ARCH}.sh"
- conda_exists="F"
-
- # figure out whether git and conda needs to be installed
- if "$CONDA_ROOT_PREFIX/bin/conda" --version &>/dev/null; then conda_exists="T"; fi
-
- # (if necessary) install git and conda into a contained environment
- # download miniconda
- if [ "$conda_exists" == "F" ]; then
-     echo "Downloading Miniconda from $MINICONDA_DOWNLOAD_URL to $INSTALL_DIR/miniconda_installer.sh"
-
-     mkdir -p "$INSTALL_DIR"
-     curl -Lk "$MINICONDA_DOWNLOAD_URL" > "$INSTALL_DIR/miniconda_installer.sh"
-
-     chmod u+x "$INSTALL_DIR/miniconda_installer.sh"
-     bash "$INSTALL_DIR/miniconda_installer.sh" -b -p "$CONDA_ROOT_PREFIX"  # prefix quoted for safety
-
-     # test the conda binary
-     echo "Miniconda version:"
-     "$CONDA_ROOT_PREFIX/bin/conda" --version
- fi
-
- # create the installer env
- if [ ! -e "$INSTALL_ENV_DIR" ]; then
-     "$CONDA_ROOT_PREFIX/bin/conda" create -y -k --prefix "$INSTALL_ENV_DIR" python=3.10
- fi
-
- # check if conda environment was actually created
- if [ ! -e "$INSTALL_ENV_DIR/bin/python" ]; then
-     echo "Conda environment is empty."
-     exit
- fi
-
- # environment isolation
- export PYTHONNOUSERSITE=1
- unset PYTHONPATH
- unset PYTHONHOME
- export CUDA_PATH="$INSTALL_ENV_DIR"
- export CUDA_HOME="$CUDA_PATH"
-
- # activate installer env
- source "$CONDA_ROOT_PREFIX/etc/profile.d/conda.sh" # otherwise conda complains about 'shell not initialized' (needed when running in a script)
- conda activate "$INSTALL_ENV_DIR"
-
- # setup installer env
- python one_click.py "$@"  # quoted to forward arguments verbatim
 
spaces/Anonymous-sub/Rerender/ControlNet/docs/faq.md DELETED
@@ -1,21 +0,0 @@
- # FAQs
-
- **Q:** If the weight of a conv layer is zero, the gradient will also be zero, and the network will not learn anything. Why does "zero convolution" work?
-
- **A:** This is wrong. Let us consider a very simple case
-
- $$y=wx+b$$
-
- and we have
-
- $$\partial y/\partial w=x, \partial y/\partial x=w, \partial y/\partial b=1$$
-
- and if $w=0$ and $x \neq 0$, then
-
- $$\partial y/\partial w \neq 0, \partial y/\partial x=0, \partial y/\partial b\neq 0$$
-
- which means as long as $x \neq 0$, one gradient descent iteration will make $w$ non-zero. Then
-
- $$\partial y/\partial x\neq 0$$
-
- so that the zero convolutions will progressively become a common conv layer with non-zero weights.
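The same argument can be checked numerically; here is a minimal PyTorch sketch (not part of the original FAQ):

```python
import torch

# y = w*x + b with w and b initialized to zero, mirroring a zero convolution
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
x = torch.tensor([2.0])

y = (w * x + b).sum()
y.backward()

print(w.grad)  # tensor([2.]) == x: non-zero because x != 0
print(b.grad)  # tensor([1.])

with torch.no_grad():
    w -= 0.1 * w.grad  # one SGD step makes w non-zero, so dy/dx becomes non-zero too
print(w)  # tensor([-0.2000])
```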
 
spaces/Arnx/MusicGenXvAKN/tests/modules/test_transformer.py DELETED
@@ -1,253 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
-
- from itertools import product
-
- import pytest
- import torch
-
- from audiocraft.modules.transformer import (
-     StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend)
-
-
- def test_transformer_causal_streaming():
-     torch.manual_seed(1234)
-
-     for context, custom in product([None, 10], [False, True]):
-         # Test that causality and receptive fields are properly handled,
-         # looking at the gradients.
-         tr = StreamingTransformer(
-             16, 4, 1 if context else 2,
-             causal=True, past_context=context, custom=custom,
-             dropout=0.)
-         steps = 20
-         for k in [0, 10, 15, 19]:
-             x = torch.randn(4, steps, 16, requires_grad=True)
-             y = tr(x)
-             y[:, k].abs().sum().backward()
-             if k + 1 < steps:
-                 assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm()
-             assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm()
-             if context is not None and k > context:
-                 limit = k - context - 1
-                 assert torch.allclose(x.grad[:, :limit],
-                                       torch.tensor(0.)), x.grad[:, :limit].norm()
-
-         # Now check that streaming gives the same result as batch eval.
-         x = torch.randn(4, steps, 16)
-         y = tr(x)
-         ys = []
-         with tr.streaming():
-             for k in range(steps):
-                 chunk = x[:, k:k + 1, :]
-                 ys.append(tr(chunk))
-         y_stream = torch.cat(ys, dim=1)
-         delta = torch.norm(y_stream - y) / torch.norm(y)
-         assert delta < 1e-6, delta
-
-
- def test_transformer_vs_pytorch():
-     torch.manual_seed(1234)
-     # Check that in the non causal setting, we get the same result as
-     # PyTorch Transformer encoder.
-     for custom in [False, True]:
-         tr = StreamingTransformer(
-             16, 4, 2,
-             causal=False, custom=custom, dropout=0., positional_scale=0.)
-         layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True)
-         tr_ref = torch.nn.TransformerEncoder(layer, 2)
-         tr.load_state_dict(tr_ref.state_dict())
-
-         x = torch.randn(4, 20, 16)
-         y = tr(x)
-         y2 = tr_ref(x)
-         delta = torch.norm(y2 - y) / torch.norm(y)
-         assert delta < 1e-6, delta
-
-
- def test_streaming_api():
-     tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.)
-     tr.eval()
-     steps = 12
-     x = torch.randn(1, steps, 16)
-
-     with torch.no_grad():
-         with tr.streaming():
-             _ = tr(x[:, :1])
-             state = {k: v.clone() for k, v in tr.get_streaming_state().items()}
-             y = tr(x[:, 1:2])
-             tr.set_streaming_state(state)
-             y2 = tr(x[:, 1:2])
-             assert torch.allclose(y, y2), (y - y2).norm()
-             assert tr.flush() is None
-
-
- def test_memory_efficient():
-     for backend in ['torch', 'xformers']:
-         torch.manual_seed(1234)
-         set_efficient_attention_backend(backend)
-
-         tr = StreamingTransformer(
-             16, 4, 2, custom=True, dropout=0., layer_scale=0.1)
-         tr_mem_efficient = StreamingTransformer(
-             16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1)
-         tr_mem_efficient.load_state_dict(tr.state_dict())
-         tr.eval()
-         steps = 12
-         x = torch.randn(3, steps, 16)
-
-         with torch.no_grad():
-             y = tr(x)
-             y2 = tr_mem_efficient(x)
-             assert torch.allclose(y, y2), ((y - y2).norm(), backend)
-
-
- def test_attention_as_float32():
-     torch.manual_seed(1234)
-     cases = [
-         {'custom': True},
-         {'custom': False},
-     ]
-     for case in cases:
-         tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case)
-         tr_float32 = StreamingTransformer(
-             16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case)
-         if not case['custom']:
-             # we are not using autocast here because it doesn't really
-             # work as expected on CPU, so we have to manually cast the weights of the MHA.
-             for layer in tr_float32.layers:
-                 layer.self_attn.mha.to(torch.float32)
-         tr_float32.load_state_dict(tr.state_dict())
-         steps = 12
-         x = torch.randn(3, steps, 16, dtype=torch.bfloat16)
-
-         with torch.no_grad():
-             y = tr(x)
-             y2 = tr_float32(x)
-             assert not torch.allclose(y, y2), (y - y2).norm()
-
-
- @torch.no_grad()
- def test_streaming_memory_efficient():
-     for backend in ['torch', 'xformers']:
-         torch.manual_seed(1234)
-         set_efficient_attention_backend(backend)
-         tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True)
-         tr_mem_efficient = StreamingTransformer(
-             16, 4, 2, dropout=0., memory_efficient=True, causal=True)
-         tr.load_state_dict(tr_mem_efficient.state_dict())
-         tr.eval()
-         tr_mem_efficient.eval()
-         steps = 12
-         x = torch.randn(3, steps, 16)
-
-         ref = tr(x)
-
-         with tr_mem_efficient.streaming():
-             outs = []
-             # frame_sizes = [2] + [1] * (steps - 2)
-             frame_sizes = [1] * steps
-
-             for frame_size in frame_sizes:
-                 frame = x[:, :frame_size]
-                 x = x[:, frame_size:]
-                 outs.append(tr_mem_efficient(frame))
-
-         out = torch.cat(outs, dim=1)
-         delta = torch.norm(out - ref) / torch.norm(out)
-         assert delta < 1e-6, delta
-
-
- def test_cross_attention():
-     torch.manual_seed(1234)
-     for norm_first in [True, False]:
-         m = StreamingTransformer(
-             16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True)
-         m_cross = StreamingTransformer(
-             16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True)
-         m_cross.load_state_dict(m.state_dict(), strict=False)
-         x = torch.randn(2, 5, 16)
-         cross_x = torch.randn(2, 3, 16)
-         y_ref = m(x)
-         y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x)
-         # With norm_first, the two should be exactly the same,
-         # but with norm_first=False, we get 2 normalizations in a row
-         # and the epsilon value leads to a tiny change.
-         atol = 0. if norm_first else 1e-6
-         print((y_ref - y_cross_zero).norm() / y_ref.norm())
-         assert torch.allclose(y_ref, y_cross_zero, atol=atol)
-
-         # We now expect a difference even with a generous atol of 1e-2.
-         y_cross = m_cross(x, cross_attention_src=cross_x)
-         assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2)
-
-         with pytest.raises(AssertionError):
-             _ = m_cross(x)
-             _ = m(x, cross_attention_src=cross_x)
-
-
- def test_cross_attention_compat():
-     torch.manual_seed(1234)
-     num_heads = 2
-     dim = num_heads * 64
-     with pytest.raises(AssertionError):
-         StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True)
-
-     cross_attn = StreamingMultiheadAttention(
-         dim, num_heads, dropout=0, cross_attention=True, custom=True)
-     ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True)
-
-     # We can load the regular attention state dict
-     # so we have compat when loading old checkpoints.
-     cross_attn.load_state_dict(ref_attn.state_dict())
-
-     queries = torch.randn(3, 7, dim)
-     keys = torch.randn(3, 9, dim)
-     values = torch.randn(3, 9, dim)
-
-     y = cross_attn(queries, keys, values)[0]
-     y_ref = ref_attn(queries, keys, values)[0]
-     assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm()
-
-     # Now let's check that streaming is working properly.
-     with cross_attn.streaming():
-         ys = []
-         for step in range(queries.shape[1]):
-             ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0])
-     y_streaming = torch.cat(ys, dim=1)
-     assert torch.allclose(y_streaming, y, atol=1e-7)
-
-
- def test_repeat_kv():
-     torch.manual_seed(1234)
-     num_heads = 8
-     kv_repeat = 4
-     dim = num_heads * 64
-     with pytest.raises(AssertionError):
-         mha = StreamingMultiheadAttention(
-             dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True)
-         mha = StreamingMultiheadAttention(
-             dim, num_heads, causal=True, kv_repeat=kv_repeat)
-     mha = StreamingMultiheadAttention(
-         dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True)
-     x = torch.randn(4, 18, dim)
-     y = mha(x, x, x)[0]
-     assert x.shape == y.shape
-
-
- def test_qk_layer_norm():
-     torch.manual_seed(1234)
-     tr = StreamingTransformer(
-         16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False)
-     steps = 12
-     x = torch.randn(3, steps, 16)
-     y = tr(x)
-
-     tr = StreamingTransformer(
-         16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True)
-     z = torch.randn(3, 21, 16)
-     y = tr(x, cross_attention_src=z)
-     assert y.shape == x.shape
 
spaces/Artrajz/vits-simple-api/bert_vits2/bert/bert-base-japanese-v3/README.md DELETED
@@ -1,53 +0,0 @@
- ---
- license: apache-2.0
- datasets:
- - cc100
- - wikipedia
- language:
- - ja
- widget:
- - text: 東北大学で[MASK]の研究をしています。
- ---
-
- # BERT base Japanese (unidic-lite with whole word masking, CC-100 and jawiki-20230102)
-
- This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
-
- This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization.
- Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
-
- The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/).
-
- ## Model architecture
-
- The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
-
- ## Training Data
-
- The model is trained on the Japanese portion of [CC-100 dataset](https://data.statmt.org/cc-100/) and the Japanese version of Wikipedia.
- For Wikipedia, we generated a text corpus from the [Wikipedia Cirrussearch dump file](https://dumps.wikimedia.org/other/cirrussearch/) as of January 2, 2023.
- The corpus files generated from CC-100 and Wikipedia are 74.3GB and 4.9GB in size and consist of approximately 392M and 34M sentences, respectively.
-
- For the purpose of splitting texts into sentences, we used [fugashi](https://github.com/polm/fugashi) with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary (v0.0.7).
-
- ## Tokenization
-
- The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
- The vocabulary size is 32768.
-
- We used [fugashi](https://github.com/polm/fugashi) and [unidic-lite](https://github.com/polm/unidic-lite) packages for the tokenization.
-
- ## Training
-
- We trained the model first on the CC-100 corpus for 1M steps and then on the Wikipedia corpus for another 1M steps.
- For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
-
- For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TPU Research Cloud](https://sites.research.google/trc/about/).
-
- ## Licenses
-
- The pretrained models are distributed under the Apache License 2.0.
-
- ## Acknowledgments
-
- This model is trained with Cloud TPUs provided by [TPU Research Cloud](https://sites.research.google/trc/about/) program.
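A minimal usage sketch of the tokenization pipeline described above, assuming the model is published on the Hugging Face Hub as `cl-tohoku/bert-base-japanese-v3` (the ID is inferred from the repository name) and that `fugashi` and `unidic-lite` are installed:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-v3")

# MeCab (Unidic 2.1.2) word segmentation followed by WordPiece subwords
print(tokenizer.tokenize("東北大学で自然言語処理の研究をしています。"))
```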
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/sjisprober.py DELETED
@@ -1,105 +0,0 @@
- ######################## BEGIN LICENSE BLOCK ########################
- # The Original Code is mozilla.org code.
- #
- # The Initial Developer of the Original Code is
- # Netscape Communications Corporation.
- # Portions created by the Initial Developer are Copyright (C) 1998
- # the Initial Developer. All Rights Reserved.
- #
- # Contributor(s):
- #   Mark Pilgrim - port to Python
- #
- # This library is free software; you can redistribute it and/or
- # modify it under the terms of the GNU Lesser General Public
- # License as published by the Free Software Foundation; either
- # version 2.1 of the License, or (at your option) any later version.
- #
- # This library is distributed in the hope that it will be useful,
- # but WITHOUT ANY WARRANTY; without even the implied warranty of
- # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- # Lesser General Public License for more details.
- #
- # You should have received a copy of the GNU Lesser General Public
- # License along with this library; if not, write to the Free Software
- # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
- # 02110-1301  USA
- ######################### END LICENSE BLOCK #########################
-
- from typing import Union
-
- from .chardistribution import SJISDistributionAnalysis
- from .codingstatemachine import CodingStateMachine
- from .enums import MachineState, ProbingState
- from .jpcntx import SJISContextAnalysis
- from .mbcharsetprober import MultiByteCharSetProber
- from .mbcssm import SJIS_SM_MODEL
-
-
- class SJISProber(MultiByteCharSetProber):
-     def __init__(self) -> None:
-         super().__init__()
-         self.coding_sm = CodingStateMachine(SJIS_SM_MODEL)
-         self.distribution_analyzer = SJISDistributionAnalysis()
-         self.context_analyzer = SJISContextAnalysis()
-         self.reset()
-
-     def reset(self) -> None:
-         super().reset()
-         self.context_analyzer.reset()
-
-     @property
-     def charset_name(self) -> str:
-         return self.context_analyzer.charset_name
-
-     @property
-     def language(self) -> str:
-         return "Japanese"
-
-     def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
-         assert self.coding_sm is not None
-         assert self.distribution_analyzer is not None
-
-         for i, byte in enumerate(byte_str):
-             coding_state = self.coding_sm.next_state(byte)
-             if coding_state == MachineState.ERROR:
-                 self.logger.debug(
-                     "%s %s prober hit error at byte %s",
-                     self.charset_name,
-                     self.language,
-                     i,
-                 )
-                 self._state = ProbingState.NOT_ME
-                 break
-             if coding_state == MachineState.ITS_ME:
-                 self._state = ProbingState.FOUND_IT
-                 break
-             if coding_state == MachineState.START:
-                 char_len = self.coding_sm.get_current_charlen()
-                 if i == 0:
-                     self._last_char[1] = byte
-                     self.context_analyzer.feed(
-                         self._last_char[2 - char_len :], char_len
-                     )
-                     self.distribution_analyzer.feed(self._last_char, char_len)
-                 else:
-                     self.context_analyzer.feed(
-                         byte_str[i + 1 - char_len : i + 3 - char_len], char_len
-                     )
-                     self.distribution_analyzer.feed(byte_str[i - 1 : i + 1], char_len)
-
-         self._last_char[0] = byte_str[-1]
-
-         if self.state == ProbingState.DETECTING:
-             if self.context_analyzer.got_enough_data() and (
-                 self.get_confidence() > self.SHORTCUT_THRESHOLD
-             ):
-                 self._state = ProbingState.FOUND_IT
-
-         return self.state
-
-     def get_confidence(self) -> float:
-         assert self.distribution_analyzer is not None
-
-         context_conf = self.context_analyzer.get_confidence()
-         distrib_conf = self.distribution_analyzer.get_confidence()
-         return max(context_conf, distrib_conf)
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/common.py DELETED
@@ -1,424 +0,0 @@
- # common.py
- from .core import *
- from .helpers import delimited_list, any_open_tag, any_close_tag
- from datetime import datetime
-
-
- # some other useful expressions - using lower-case class name since we are really using this as a namespace
- class pyparsing_common:
-     """Here are some common low-level expressions that may be useful in
-     jump-starting parser development:
-
-     - numeric forms (:class:`integers<integer>`, :class:`reals<real>`,
-       :class:`scientific notation<sci_real>`)
-     - common :class:`programming identifiers<identifier>`
-     - network addresses (:class:`MAC<mac_address>`,
-       :class:`IPv4<ipv4_address>`, :class:`IPv6<ipv6_address>`)
-     - ISO8601 :class:`dates<iso8601_date>` and
-       :class:`datetime<iso8601_datetime>`
-     - :class:`UUID<uuid>`
-     - :class:`comma-separated list<comma_separated_list>`
-     - :class:`url`
-
-     Parse actions:
-
-     - :class:`convertToInteger`
-     - :class:`convertToFloat`
-     - :class:`convertToDate`
-     - :class:`convertToDatetime`
-     - :class:`stripHTMLTags`
-     - :class:`upcaseTokens`
-     - :class:`downcaseTokens`
-
-     Example::
-
-         pyparsing_common.number.runTests('''
-             # any int or real number, returned as the appropriate type
-             100
-             -100
-             +100
-             3.14159
-             6.02e23
-             1e-12
-             ''')
-
-         pyparsing_common.fnumber.runTests('''
-             # any int or real number, returned as float
-             100
-             -100
-             +100
-             3.14159
-             6.02e23
-             1e-12
-             ''')
-
-         pyparsing_common.hex_integer.runTests('''
-             # hex numbers
-             100
-             FF
-             ''')
-
-         pyparsing_common.fraction.runTests('''
-             # fractions
-             1/2
-             -3/4
-             ''')
-
-         pyparsing_common.mixed_integer.runTests('''
-             # mixed fractions
-             1
-             1/2
-             -3/4
-             1-3/4
-             ''')
-
-         import uuid
-         pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID))
-         pyparsing_common.uuid.runTests('''
-             # uuid
-             12345678-1234-5678-1234-567812345678
-             ''')
-
-     prints::
-
-         # any int or real number, returned as the appropriate type
-         100
-         [100]
-
-         -100
-         [-100]
-
-         +100
-         [100]
-
-         3.14159
-         [3.14159]
-
-         6.02e23
-         [6.02e+23]
-
-         1e-12
-         [1e-12]
-
-         # any int or real number, returned as float
-         100
-         [100.0]
-
-         -100
-         [-100.0]
-
-         +100
-         [100.0]
-
-         3.14159
-         [3.14159]
-
-         6.02e23
-         [6.02e+23]
-
-         1e-12
-         [1e-12]
-
-         # hex numbers
-         100
-         [256]
-
-         FF
-         [255]
-
-         # fractions
-         1/2
-         [0.5]
-
-         -3/4
-         [-0.75]
-
-         # mixed fractions
-         1
-         [1]
-
-         1/2
-         [0.5]
-
-         -3/4
-         [-0.75]
-
-         1-3/4
-         [1.75]
-
-         # uuid
-         12345678-1234-5678-1234-567812345678
-         [UUID('12345678-1234-5678-1234-567812345678')]
-     """
-
-     convert_to_integer = token_map(int)
-     """
-     Parse action for converting parsed integers to Python int
-     """
-
-     convert_to_float = token_map(float)
-     """
-     Parse action for converting parsed numbers to Python float
-     """
-
-     integer = Word(nums).set_name("integer").set_parse_action(convert_to_integer)
-     """expression that parses an unsigned integer, returns an int"""
-
-     hex_integer = (
-         Word(hexnums).set_name("hex integer").set_parse_action(token_map(int, 16))
-     )
-     """expression that parses a hexadecimal integer, returns an int"""
-
-     signed_integer = (
-         Regex(r"[+-]?\d+")
-         .set_name("signed integer")
-         .set_parse_action(convert_to_integer)
-     )
-     """expression that parses an integer with optional leading sign, returns an int"""
-
-     fraction = (
-         signed_integer().set_parse_action(convert_to_float)
-         + "/"
-         + signed_integer().set_parse_action(convert_to_float)
-     ).set_name("fraction")
-     """fractional expression of an integer divided by an integer, returns a float"""
-     fraction.add_parse_action(lambda tt: tt[0] / tt[-1])
-
-     mixed_integer = (
-         fraction | signed_integer + Opt(Opt("-").suppress() + fraction)
-     ).set_name("fraction or mixed integer-fraction")
-     """mixed integer of the form 'integer - fraction', with optional leading integer, returns float"""
-     mixed_integer.add_parse_action(sum)
-
-     real = (
-         Regex(r"[+-]?(?:\d+\.\d*|\.\d+)")
-         .set_name("real number")
-         .set_parse_action(convert_to_float)
-     )
-     """expression that parses a floating point number and returns a float"""
-
-     sci_real = (
-         Regex(r"[+-]?(?:\d+(?:[eE][+-]?\d+)|(?:\d+\.\d*|\.\d+)(?:[eE][+-]?\d+)?)")
-         .set_name("real number with scientific notation")
-         .set_parse_action(convert_to_float)
-     )
-     """expression that parses a floating point number with optional
-     scientific notation and returns a float"""
-
-     # streamlining this expression makes the docs nicer-looking
-     number = (sci_real | real | signed_integer).setName("number").streamline()
-     """any numeric expression, returns the corresponding Python type"""
-
-     fnumber = (
-         Regex(r"[+-]?\d+\.?\d*([eE][+-]?\d+)?")
-         .set_name("fnumber")
-         .set_parse_action(convert_to_float)
-     )
-     """any int or real number, returned as float"""
-
-     identifier = Word(identchars, identbodychars).set_name("identifier")
-     """typical code identifier (leading alpha or '_', followed by 0 or more alphas, nums, or '_')"""
-
-     ipv4_address = Regex(
-         r"(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})(\.(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})){3}"
-     ).set_name("IPv4 address")
-     "IPv4 address (``0.0.0.0 - 255.255.255.255``)"
-
-     _ipv6_part = Regex(r"[0-9a-fA-F]{1,4}").set_name("hex_integer")
-     _full_ipv6_address = (_ipv6_part + (":" + _ipv6_part) * 7).set_name(
-         "full IPv6 address"
-     )
-     _short_ipv6_address = (
-         Opt(_ipv6_part + (":" + _ipv6_part) * (0, 6))
-         + "::"
-         + Opt(_ipv6_part + (":" + _ipv6_part) * (0, 6))
-     ).set_name("short IPv6 address")
-     _short_ipv6_address.add_condition(
-         lambda t: sum(1 for tt in t if pyparsing_common._ipv6_part.matches(tt)) < 8
-     )
-     _mixed_ipv6_address = ("::ffff:" + ipv4_address).set_name("mixed IPv6 address")
-     ipv6_address = Combine(
-         (_full_ipv6_address | _mixed_ipv6_address | _short_ipv6_address).set_name(
-             "IPv6 address"
-         )
-     ).set_name("IPv6 address")
-     "IPv6 address (long, short, or mixed form)"
-
-     mac_address = Regex(
-         r"[0-9a-fA-F]{2}([:.-])[0-9a-fA-F]{2}(?:\1[0-9a-fA-F]{2}){4}"
-     ).set_name("MAC address")
-     "MAC address xx:xx:xx:xx:xx (may also have '-' or '.' delimiters)"
-
-     @staticmethod
-     def convert_to_date(fmt: str = "%Y-%m-%d"):
-         """
-         Helper to create a parse action for converting parsed date string to Python datetime.date
-
-         Params -
-         - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%d"``)
-
-         Example::
-
-             date_expr = pyparsing_common.iso8601_date.copy()
-             date_expr.setParseAction(pyparsing_common.convertToDate())
-             print(date_expr.parseString("1999-12-31"))
-
-         prints::
-
-             [datetime.date(1999, 12, 31)]
-         """
-
-         def cvt_fn(ss, ll, tt):
-             try:
-                 return datetime.strptime(tt[0], fmt).date()
-             except ValueError as ve:
-                 raise ParseException(ss, ll, str(ve))
-
-         return cvt_fn
-
-     @staticmethod
-     def convert_to_datetime(fmt: str = "%Y-%m-%dT%H:%M:%S.%f"):
-         """Helper to create a parse action for converting parsed
-         datetime string to Python datetime.datetime
-
-         Params -
-         - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%dT%H:%M:%S.%f"``)
-
-         Example::
-
-             dt_expr = pyparsing_common.iso8601_datetime.copy()
-             dt_expr.setParseAction(pyparsing_common.convertToDatetime())
-             print(dt_expr.parseString("1999-12-31T23:59:59.999"))
-
-         prints::
-
-             [datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)]
-         """
-
-         def cvt_fn(s, l, t):
-             try:
-                 return datetime.strptime(t[0], fmt)
-             except ValueError as ve:
-                 raise ParseException(s, l, str(ve))
-
-         return cvt_fn
-
-     iso8601_date = Regex(
-         r"(?P<year>\d{4})(?:-(?P<month>\d\d)(?:-(?P<day>\d\d))?)?"
-     ).set_name("ISO8601 date")
-     "ISO8601 date (``yyyy-mm-dd``)"
-
-     iso8601_datetime = Regex(
-         r"(?P<year>\d{4})-(?P<month>\d\d)-(?P<day>\d\d)[T ](?P<hour>\d\d):(?P<minute>\d\d)(:(?P<second>\d\d(\.\d*)?)?)?(?P<tz>Z|[+-]\d\d:?\d\d)?"
-     ).set_name("ISO8601 datetime")
-     "ISO8601 datetime (``yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)``) - trailing seconds, milliseconds, and timezone optional; accepts separating ``'T'`` or ``' '``"
-
-     uuid = Regex(r"[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}").set_name("UUID")
-     "UUID (``xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx``)"
-
-     _html_stripper = any_open_tag.suppress() | any_close_tag.suppress()
-
-     @staticmethod
-     def strip_html_tags(s: str, l: int, tokens: ParseResults):
-         """Parse action to remove HTML tags from web page HTML source
-
-         Example::
-
-             # strip HTML links from normal text
-             text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
-             td, td_end = makeHTMLTags("TD")
-             table_text = td + SkipTo(td_end).setParseAction(pyparsing_common.stripHTMLTags)("body") + td_end
-             print(table_text.parseString(text).body)
-
-         Prints::
-
-             More info at the pyparsing wiki page
-         """
-         return pyparsing_common._html_stripper.transform_string(tokens[0])
-
-     _commasepitem = (
-         Combine(
-             OneOrMore(
-                 ~Literal(",")
-                 + ~LineEnd()
-                 + Word(printables, exclude_chars=",")
-                 + Opt(White(" \t") + ~FollowedBy(LineEnd() | ","))
-             )
-         )
-         .streamline()
-         .set_name("commaItem")
-     )
-     comma_separated_list = delimited_list(
-         Opt(quoted_string.copy() | _commasepitem, default="")
-     ).set_name("comma separated list")
-     """Predefined expression of 1 or more printable words or quoted strings, separated by commas."""
-
-     upcase_tokens = staticmethod(token_map(lambda t: t.upper()))
-     """Parse action to convert tokens to upper case."""
-
-     downcase_tokens = staticmethod(token_map(lambda t: t.lower()))
-     """Parse action to convert tokens to lower case."""
-
-     # fmt: off
-     url = Regex(
-         # https://mathiasbynens.be/demo/url-regex
-         # https://gist.github.com/dperini/729294
-         r"^" +
-         # protocol identifier (optional)
-         # short syntax // still required
-         r"(?:(?:(?P<scheme>https?|ftp):)?\/\/)" +
-         # user:pass BasicAuth (optional)
-         r"(?:(?P<auth>\S+(?::\S*)?)@)?" +
-         r"(?P<host>" +
-         # IP address exclusion
-         # private & local networks
-         r"(?!(?:10|127)(?:\.\d{1,3}){3})" +
-         r"(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})" +
-         r"(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})" +
-         # IP address dotted notation octets
-         # excludes loopback network 0.0.0.0
-         # excludes reserved space >= 224.0.0.0
-         # excludes network & broadcast addresses
-         # (first & last IP address of each class)
-         r"(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])" +
-         r"(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}" +
-         r"(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))" +
-         r"|" +
-         # host & domain names, may end with dot
-         # can be replaced by a shortest alternative
-         # (?![-_])(?:[-\w\u00a1-\uffff]{0,63}[^-_]\.)+
-         r"(?:" +
-         r"(?:" +
-         r"[a-z0-9\u00a1-\uffff]" +
-         r"[a-z0-9\u00a1-\uffff_-]{0,62}" +
-         r")?" +
-         r"[a-z0-9\u00a1-\uffff]\." +
-         r")+" +
-         # TLD identifier name, may end with dot
-         r"(?:[a-z\u00a1-\uffff]{2,}\.?)" +
-         r")" +
-         # port number (optional)
-         r"(:(?P<port>\d{2,5}))?" +
-         # resource path (optional)
-         r"(?P<path>\/[^?# ]*)?" +
-         # query string (optional)
-         r"(\?(?P<query>[^#]*))?" +
-         # fragment (optional)
-         r"(#(?P<fragment>\S*))?" +
-         r"$"
-     ).set_name("url")
-     # fmt: on
-
-     # pre-PEP8 compatibility names
-     convertToInteger = convert_to_integer
-     convertToFloat = convert_to_float
-     convertToDate = convert_to_date
-     convertToDatetime = convert_to_datetime
-     stripHTMLTags = strip_html_tags
-     upcaseTokens = upcase_tokens
-     downcaseTokens = downcase_tokens
-
-
- _builtin_exprs = [
-     v for v in vars(pyparsing_common).values() if isinstance(v, ParserElement)
- ]
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/helpers.py DELETED
@@ -1,1088 +0,0 @@
- # helpers.py
- import html.entities
- import re
- import typing
-
- from . import __diag__
- from .core import *
- from .util import _bslash, _flatten, _escape_regex_range_chars
-
-
- #
- # global helpers
- #
- def delimited_list(
-     expr: Union[str, ParserElement],
-     delim: Union[str, ParserElement] = ",",
-     combine: bool = False,
-     min: typing.Optional[int] = None,
-     max: typing.Optional[int] = None,
-     *,
-     allow_trailing_delim: bool = False,
- ) -> ParserElement:
-     """Helper to define a delimited list of expressions - the delimiter
-     defaults to ','. By default, the list elements and delimiters can
-     have intervening whitespace, and comments, but this can be
-     overridden by passing ``combine=True`` in the constructor. If
-     ``combine`` is set to ``True``, the matching tokens are
-     returned as a single token string, with the delimiters included;
-     otherwise, the matching tokens are returned as a list of tokens,
-     with the delimiters suppressed.
-
-     If ``allow_trailing_delim`` is set to True, then the list may end with
-     a delimiter.
-
-     Example::
-
-         delimited_list(Word(alphas)).parse_string("aa,bb,cc")  # -> ['aa', 'bb', 'cc']
-         delimited_list(Word(hexnums), delim=':', combine=True).parse_string("AA:BB:CC:DD:EE")  # -> ['AA:BB:CC:DD:EE']
-     """
-     if isinstance(expr, str_type):
-         expr = ParserElement._literalStringClass(expr)
-
-     dlName = "{expr} [{delim} {expr}]...{end}".format(
-         expr=str(expr.copy().streamline()),
-         delim=str(delim),
-         end=" [{}]".format(str(delim)) if allow_trailing_delim else "",
-     )
-
-     if not combine:
-         delim = Suppress(delim)
-
-     if min is not None:
-         if min < 1:
-             raise ValueError("min must be greater than 0")
-         min -= 1
-     if max is not None:
-         if min is not None and max <= min:
-             raise ValueError("max must be greater than, or equal to min")
-         max -= 1
-     delimited_list_expr = expr + (delim + expr)[min, max]
-
-     if allow_trailing_delim:
-         delimited_list_expr += Opt(delim)
-
-     if combine:
-         return Combine(delimited_list_expr).set_name(dlName)
-     else:
-         return delimited_list_expr.set_name(dlName)
-
-
- def counted_array(
-     expr: ParserElement,
-     int_expr: typing.Optional[ParserElement] = None,
-     *,
-     intExpr: typing.Optional[ParserElement] = None,
- ) -> ParserElement:
-     """Helper to define a counted list of expressions.
-
-     This helper defines a pattern of the form::
-
-         integer expr expr expr...
-
-     where the leading integer tells how many expr expressions follow.
-     The matched tokens return the array of expr tokens as a list - the
-     leading count token is suppressed.
-
-     If ``int_expr`` is specified, it should be a pyparsing expression
-     that produces an integer value.
-
-     Example::
-
-         counted_array(Word(alphas)).parse_string('2 ab cd ef')  # -> ['ab', 'cd']
-
-         # in this parser, the leading integer value is given in binary,
-         # '10' indicating that 2 values are in the array
-         binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2))
-         counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef')  # -> ['ab', 'cd']
-
-         # if other fields must be parsed after the count but before the
-         # list items, give the fields results names and they will
-         # be preserved in the returned ParseResults:
-         count_with_metadata = integer + Word(alphas)("type")
-         typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items")
-         result = typed_array.parse_string("3 bool True True False")
-         print(result.dump())
-
-         # prints
-         # ['True', 'True', 'False']
-         # - items: ['True', 'True', 'False']
-         # - type: 'bool'
-     """
-     intExpr = intExpr or int_expr
-     array_expr = Forward()
-
-     def count_field_parse_action(s, l, t):
-         nonlocal array_expr
-         n = t[0]
-         array_expr <<= (expr * n) if n else Empty()
-         # clear list contents, but keep any named results
-         del t[:]
-
-     if intExpr is None:
-         intExpr = Word(nums).set_parse_action(lambda t: int(t[0]))
-     else:
-         intExpr = intExpr.copy()
-     intExpr.set_name("arrayLen")
-     intExpr.add_parse_action(count_field_parse_action, call_during_try=True)
-     return (intExpr + array_expr).set_name("(len) " + str(expr) + "...")
-
-
- def match_previous_literal(expr: ParserElement) -> ParserElement:
-     """Helper to define an expression that is indirectly defined from
-     the tokens matched in a previous expression, that is, it looks for
-     a 'repeat' of a previous expression. For example::
-
-         first = Word(nums)
-         second = match_previous_literal(first)
-         match_expr = first + ":" + second
-
-     will match ``"1:1"``, but not ``"1:2"``. Because this
-     matches a previous literal, will also match the leading
-     ``"1:1"`` in ``"1:10"``. If this is not desired, use
-     :class:`match_previous_expr`. Do *not* use with packrat parsing
-     enabled.
-     """
-     rep = Forward()
-
-     def copy_token_to_repeater(s, l, t):
-         if t:
-             if len(t) == 1:
-                 rep << t[0]
-             else:
-                 # flatten t tokens
-                 tflat = _flatten(t.as_list())
-                 rep << And(Literal(tt) for tt in tflat)
-         else:
-             rep << Empty()
-
-     expr.add_parse_action(copy_token_to_repeater, callDuringTry=True)
-     rep.set_name("(prev) " + str(expr))
-     return rep
-
-
- def match_previous_expr(expr: ParserElement) -> ParserElement:
-     """Helper to define an expression that is indirectly defined from
-     the tokens matched in a previous expression, that is, it looks for
-     a 'repeat' of a previous expression. For example::
-
-         first = Word(nums)
-         second = match_previous_expr(first)
-         match_expr = first + ":" + second
-
-     will match ``"1:1"``, but not ``"1:2"``. Because this
-     matches by expressions, will *not* match the leading ``"1:1"``
-     in ``"1:10"``; the expressions are evaluated first, and then
-     compared, so ``"1"`` is compared with ``"10"``. Do *not* use
-     with packrat parsing enabled.
-     """
-     rep = Forward()
-     e2 = expr.copy()
-     rep <<= e2
-
-     def copy_token_to_repeater(s, l, t):
-         matchTokens = _flatten(t.as_list())
-
-         def must_match_these_tokens(s, l, t):
-             theseTokens = _flatten(t.as_list())
-             if theseTokens != matchTokens:
-                 raise ParseException(
-                     s, l, "Expected {}, found {}".format(matchTokens, theseTokens)
-                 )
-
-         rep.set_parse_action(must_match_these_tokens, callDuringTry=True)
-
-     expr.add_parse_action(copy_token_to_repeater, callDuringTry=True)
-     rep.set_name("(prev) " + str(expr))
-     return rep
-
-
- def one_of(
-     strs: Union[typing.Iterable[str], str],
-     caseless: bool = False,
-     use_regex: bool = True,
-     as_keyword: bool = False,
-     *,
-     useRegex: bool = True,
-     asKeyword: bool = False,
- ) -> ParserElement:
-     """Helper to quickly define a set of alternative :class:`Literal` s,
-     and makes sure to do longest-first testing when there is a conflict,
-     regardless of the input order, but returns
-     a :class:`MatchFirst` for best performance.
-
-     Parameters:
-
-     - ``strs`` - a string of space-delimited literals, or a collection of
-       string literals
-     - ``caseless`` - treat all literals as caseless - (default= ``False``)
-     - ``use_regex`` - as an optimization, will
-       generate a :class:`Regex` object; otherwise, will generate
-       a :class:`MatchFirst` object (if ``caseless=True`` or ``asKeyword=True``, or if
-       creating a :class:`Regex` raises an exception) - (default= ``True``)
-     - ``as_keyword`` - enforce :class:`Keyword`-style matching on the
-       generated expressions - (default= ``False``)
-     - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility,
-       but will be removed in a future release
-
-     Example::
-
-         comp_oper = one_of("< = > <= >= !=")
-         var = Word(alphas)
-         number = Word(nums)
-         term = var | number
-         comparison_expr = term + comp_oper + term
-         print(comparison_expr.search_string("B = 12  AA=23 B<=AA AA>12"))
-
-     prints::
-
-         [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']]
-     """
-     asKeyword = asKeyword or as_keyword
-     useRegex = useRegex and use_regex
-
-     if (
-         isinstance(caseless, str_type)
-         and __diag__.warn_on_multiple_string_args_to_oneof
-     ):
-         warnings.warn(
-             "More than one string argument passed to one_of, pass"
-             " choices as a list or space-delimited string",
-             stacklevel=2,
-         )
-
-     if caseless:
-         isequal = lambda a, b: a.upper() == b.upper()
-         masks = lambda a, b: b.upper().startswith(a.upper())
-         parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral
-     else:
-         isequal = lambda a, b: a == b
-         masks = lambda a, b: b.startswith(a)
-         parseElementClass = Keyword if asKeyword else Literal
-
-     symbols: List[str] = []
-     if isinstance(strs, str_type):
-         symbols = strs.split()
-     elif isinstance(strs, Iterable):
-         symbols = list(strs)
-     else:
-         raise TypeError("Invalid argument to one_of, expected string or iterable")
-     if not symbols:
-         return NoMatch()
-
-     # reorder given symbols to take care to avoid masking longer choices with shorter ones
-     # (but only if the given symbols are not just single characters)
-     if any(len(sym) > 1 for sym in symbols):
-         i = 0
-         while i < len(symbols) - 1:
-             cur = symbols[i]
-             for j, other in enumerate(symbols[i + 1 :]):
-                 if isequal(other, cur):
-                     del symbols[i + j + 1]
-                     break
-                 elif masks(cur, other):
-                     del symbols[i + j + 1]
-                     symbols.insert(i, other)
-                     break
-             else:
-                 i += 1
-
-     if useRegex:
-         re_flags: int = re.IGNORECASE if caseless else 0
-
-         try:
-             if all(len(sym) == 1 for sym in symbols):
-                 # symbols are just single characters, create range regex pattern
-                 patt = "[{}]".format(
-                     "".join(_escape_regex_range_chars(sym) for sym in symbols)
-                 )
-             else:
-                 patt = "|".join(re.escape(sym) for sym in symbols)
-
-             # wrap with \b word break markers if defining as keywords
-             if asKeyword:
-                 patt = r"\b(?:{})\b".format(patt)
-
-             ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols))
-
-             if caseless:
-                 # add parse action to return symbols as specified, not in random
-                 # casing as found in input string
-                 symbol_map = {sym.lower(): sym for sym in symbols}
-                 ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()])
-
-             return ret
-
-         except re.error:
-             warnings.warn(
-                 "Exception creating Regex for one_of, building MatchFirst", stacklevel=2
-             )
-
-     # last resort, just use MatchFirst
-     return MatchFirst(parseElementClass(sym) for sym in symbols).set_name(
-         " | ".join(symbols)
-     )
-
-
- def dict_of(key: ParserElement, value: ParserElement) -> ParserElement:
-     """Helper to easily and clearly define a dictionary by specifying
-     the respective patterns for the key and value. Takes care of
-     defining the :class:`Dict`, :class:`ZeroOrMore`, and
-     :class:`Group` tokens in the proper order. The key pattern
-     can include delimiting markers or punctuation, as long as they are
-     suppressed, thereby leaving the significant key text. The value
-     pattern can include named results, so that the :class:`Dict` results
-     can include named token fields.
-
-     Example::
-
-         text = "shape: SQUARE posn: upper left color: light blue texture: burlap"
-         attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
-         print(attr_expr[1, ...].parse_string(text).dump())
-
-         attr_label = label
-         attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)
-
-         # similar to Dict, but simpler call format
-         result = dict_of(attr_label, attr_value).parse_string(text)
-         print(result.dump())
-         print(result['shape'])
-         print(result.shape)  # object attribute access works too
-         print(result.as_dict())
-
-     prints::
-
-         [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']]
-         - color: 'light blue'
-         - posn: 'upper left'
-         - shape: 'SQUARE'
-         - texture: 'burlap'
-         SQUARE
-         SQUARE
-         {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'}
-     """
-     return Dict(OneOrMore(Group(key + value)))
-
-
- def original_text_for(
-     expr: ParserElement, as_string: bool = True, *, asString: bool = True
- ) -> ParserElement:
-     """Helper to return the original, untokenized text for a given
-     expression. Useful to restore the parsed fields of an HTML start
-     tag into the raw tag text itself, or to revert separate tokens with
-     intervening whitespace back to the original matching input text. By
-     default, returns a string containing the original parsed text.
-
-     If the optional ``as_string`` argument is passed as
-     ``False``, then the return value is
-     a :class:`ParseResults` containing any results names that
-     were originally matched, and a single token containing the original
-     matched text from the input string. So if the expression passed to
-     :class:`original_text_for` contains expressions with defined
-     results names, you must set ``as_string`` to ``False`` if you
-     want to preserve those results name values.
-
-     The ``asString`` pre-PEP8 argument is retained for compatibility,
-     but will be removed in a future release.
-
-     Example::
-
-         src = "this is test <b> bold <i>text</i> </b> normal text "
-         for tag in ("b", "i"):
-             opener, closer = make_html_tags(tag)
-             patt = original_text_for(opener + SkipTo(closer) + closer)
-             print(patt.search_string(src)[0])
-
-     prints::
-
-         ['<b> bold <i>text</i> </b>']
-         ['<i>text</i>']
-     """
-     asString = asString and as_string
-
-     locMarker = Empty().set_parse_action(lambda s, loc, t: loc)
-     endlocMarker = locMarker.copy()
-     endlocMarker.callPreparse = False
-     matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end")
-     if asString:
-         extractText = lambda s, l, t: s[t._original_start : t._original_end]
-     else:
-
-         def extractText(s, l, t):
-             t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]]
-
-     matchExpr.set_parse_action(extractText)
-     matchExpr.ignoreExprs = expr.ignoreExprs
-     matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection)
-     return matchExpr
-
-
- def ungroup(expr: ParserElement) -> ParserElement:
-     """Helper to undo pyparsing's default grouping of And expressions,
-     even if all but one are non-empty.
-     """
-     return TokenConverter(expr).add_parse_action(lambda t: t[0])
-
-
- def locatedExpr(expr: ParserElement) -> ParserElement:
-     """
-     (DEPRECATED - future code should use the Located class)
-     Helper to decorate a returned token with its starting and ending
-     locations in the input string.
-
-     This helper adds the following results names:
-
-     - ``locn_start`` - location where matched expression begins
-     - ``locn_end`` - location where matched expression ends
-     - ``value`` - the actual parsed results
-
-     Be careful if the input text contains ``<TAB>`` characters, you
-     may want to call :class:`ParserElement.parseWithTabs`
-
-     Example::
-
-         wd = Word(alphas)
-         for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"):
-             print(match)
-
-     prints::
-
-         [[0, 'ljsdf', 5]]
-         [[8, 'lksdjjf', 15]]
-         [[18, 'lkkjj', 23]]
-     """
-     locator = Empty().set_parse_action(lambda ss, ll, tt: ll)
-     return Group(
-         locator("locn_start")
-         + expr("value")
-         + locator.copy().leaveWhitespace()("locn_end")
-     )
-
-
- def nested_expr(
-     opener: Union[str, ParserElement] = "(",
-     closer: Union[str, ParserElement] = ")",
-     content: typing.Optional[ParserElement] = None,
-     ignore_expr: ParserElement = quoted_string(),
-     *,
-     ignoreExpr: ParserElement = quoted_string(),
- ) -> ParserElement:
-     """Helper method for defining nested lists enclosed in opening and
-     closing delimiters (``"("`` and ``")"`` are the default).
-
-     Parameters:
-     - ``opener`` - opening character for a nested list
-       (default= ``"("``); can also be a pyparsing expression
-     - ``closer`` - closing character for a nested list
-       (default= ``")"``); can also be a pyparsing expression
-     - ``content`` - expression for items within the nested lists
-       (default= ``None``)
-     - ``ignore_expr`` - expression for ignoring opening and closing delimiters
-       (default= :class:`quoted_string`)
-     - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility
-       but will be removed in a future release
-
-     If an expression is not provided for the content argument, the
-     nested expression will capture all whitespace-delimited content
-     between delimiters as a list of separate values.
-
-     Use the ``ignore_expr`` argument to define expressions that may
-     contain opening or closing characters that should not be treated as
-     opening or closing characters for nesting, such as quoted_string or
-     a comment expression. Specify multiple expressions using an
-     :class:`Or` or :class:`MatchFirst`. The default is
-     :class:`quoted_string`, but if no expressions are to be ignored, then
-     pass ``None`` for this argument.
-
-     Example::
-
-         data_type = one_of("void int short long char float double")
-         decl_data_type = Combine(data_type + Opt(Word('*')))
-         ident = Word(alphas+'_', alphanums+'_')
-         number = pyparsing_common.number
-         arg = Group(decl_data_type + ident)
-         LPAR, RPAR = map(Suppress, "()")
-
-         code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment))
-
-         c_function = (decl_data_type("type")
-                       + ident("name")
-                       + LPAR + Opt(delimited_list(arg), [])("args") + RPAR
-                       + code_body("body"))
-         c_function.ignore(c_style_comment)
-
-         source_code = '''
-             int is_odd(int x) {
-                 return (x%2);
-             }
-
-             int dec_to_hex(char hchar) {
-                 if (hchar >= '0' && hchar <= '9') {
-                     return (ord(hchar)-ord('0'));
-                 } else {
-                     return (10+ord(hchar)-ord('A'));
-                 }
-             }
-         '''
-         for func in c_function.search_string(source_code):
-             print("%(name)s (%(type)s) args: %(args)s" % func)
-
-
-     prints::
-
-         is_odd (int) args: [['int', 'x']]
-         dec_to_hex (int) args: [['char', 'hchar']]
-     """
-     if ignoreExpr != ignore_expr:
-         ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr
-     if opener == closer:
-         raise ValueError("opening and closing strings cannot be the same")
-     if content is None:
-         if isinstance(opener, str_type) and isinstance(closer, str_type):
-             if len(opener) == 1 and len(closer) == 1:
-                 if ignoreExpr is not None:
-                     content = Combine(
-                         OneOrMore(
-                             ~ignoreExpr
-                             + CharsNotIn(
-                                 opener + closer + ParserElement.DEFAULT_WHITE_CHARS,
-                                 exact=1,
-                             )
-                         )
-                     ).set_parse_action(lambda t: t[0].strip())
-                 else:
-                     content = empty.copy() + CharsNotIn(
-                         opener + closer + ParserElement.DEFAULT_WHITE_CHARS
-                     ).set_parse_action(lambda t: t[0].strip())
-             else:
-                 if ignoreExpr is not None:
-                     content = Combine(
-                         OneOrMore(
-                             ~ignoreExpr
-                             + ~Literal(opener)
-                             + ~Literal(closer)
-                             + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1)
-                         )
-                     ).set_parse_action(lambda t: t[0].strip())
-                 else:
-                     content = Combine(
-                         OneOrMore(
-                             ~Literal(opener)
-                             + ~Literal(closer)
-                             + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1)
-                         )
-                     ).set_parse_action(lambda t: t[0].strip())
-         else:
-             raise ValueError(
-                 "opening and closing arguments must be strings if no content expression is given"
-             )
-     ret = Forward()
580
- if ignoreExpr is not None:
581
- ret <<= Group(
582
- Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer)
583
- )
584
- else:
585
- ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer))
586
- ret.set_name("nested %s%s expression" % (opener, closer))
587
- return ret
588
-
589
-
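# A quick sketch of the default behavior noted in the docstring above: with no
# content expression, whitespace-delimited items between the delimiters come
# back as separate values (the input string is an illustrative assumption).
from pyparsing import nested_expr

print(nested_expr().parse_string("(a b (c d) e)"))
# [['a', 'b', ['c', 'd'], 'e']]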
590
- def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")):
591
- """Internal helper to construct opening and closing tag expressions, given a tag name"""
592
- if isinstance(tagStr, str_type):
593
- resname = tagStr
594
- tagStr = Keyword(tagStr, caseless=not xml)
595
- else:
596
- resname = tagStr.name
597
-
598
- tagAttrName = Word(alphas, alphanums + "_-:")
599
- if xml:
600
- tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes)
601
- openTag = (
602
- suppress_LT
603
- + tagStr("tag")
604
- + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue)))
605
- + Opt("/", default=[False])("empty").set_parse_action(
606
- lambda s, l, t: t[0] == "/"
607
- )
608
- + suppress_GT
609
- )
610
- else:
611
- tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word(
612
- printables, exclude_chars=">"
613
- )
614
- openTag = (
615
- suppress_LT
616
- + tagStr("tag")
617
- + Dict(
618
- ZeroOrMore(
619
- Group(
620
- tagAttrName.set_parse_action(lambda t: t[0].lower())
621
- + Opt(Suppress("=") + tagAttrValue)
622
- )
623
- )
624
- )
625
- + Opt("/", default=[False])("empty").set_parse_action(
626
- lambda s, l, t: t[0] == "/"
627
- )
628
- + suppress_GT
629
- )
630
- closeTag = Combine(Literal("</") + tagStr + ">", adjacent=False)
631
-
632
- openTag.set_name("<%s>" % resname)
633
- # add start<tagname> results name in parse action now that ungrouped names are not reported at two levels
634
- openTag.add_parse_action(
635
- lambda t: t.__setitem__(
636
- "start" + "".join(resname.replace(":", " ").title().split()), t.copy()
637
- )
638
- )
639
- closeTag = closeTag(
640
- "end" + "".join(resname.replace(":", " ").title().split())
641
- ).set_name("</%s>" % resname)
642
- openTag.tag = resname
643
- closeTag.tag = resname
644
- openTag.tag_body = SkipTo(closeTag())
645
- return openTag, closeTag
646
-
647
-
648
- def make_html_tags(
649
- tag_str: Union[str, ParserElement]
650
- ) -> Tuple[ParserElement, ParserElement]:
651
- """Helper to construct opening and closing tag expressions for HTML,
652
- given a tag name. Matches tags in either upper or lower case,
653
- attributes with namespaces and with quoted or unquoted values.
654
-
655
- Example::
656
-
657
- text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
658
- # make_html_tags returns pyparsing expressions for the opening and
659
- # closing tags as a 2-tuple
660
- a, a_end = make_html_tags("A")
661
- link_expr = a + SkipTo(a_end)("link_text") + a_end
662
-
663
- for link in link_expr.search_string(text):
664
- # attributes in the <A> tag (like "href" shown here) are
665
- # also accessible as named results
666
- print(link.link_text, '->', link.href)
667
-
668
- prints::
669
-
670
- pyparsing -> https://github.com/pyparsing/pyparsing/wiki
671
- """
672
- return _makeTags(tag_str, False)
673
-
674
-
675
- def make_xml_tags(
676
- tag_str: Union[str, ParserElement]
677
- ) -> Tuple[ParserElement, ParserElement]:
678
- """Helper to construct opening and closing tag expressions for XML,
679
- given a tag name. Matches tags only in the given upper/lower case.
680
-
681
- Example: similar to :class:`make_html_tags`
682
- """
683
- return _makeTags(tag_str, True)
684
-
685
-
686
- any_open_tag: ParserElement
687
- any_close_tag: ParserElement
688
- any_open_tag, any_close_tag = make_html_tags(
689
- Word(alphas, alphanums + "_:").set_name("any tag")
690
- )
691
-
692
- _htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()}
693
- common_html_entity = Regex("&(?P<entity>" + "|".join(_htmlEntityMap) + ");").set_name(
694
- "common HTML entity"
695
- )
696
-
697
-
698
- def replace_html_entity(t):
699
- """Helper parser action to replace common HTML entities with their special characters"""
700
- return _htmlEntityMap.get(t.entity)
701
-
702
-
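# A sketch of how common_html_entity and replace_html_entity are typically
# paired via a parse action; the sample string is an assumption.
from pyparsing import common_html_entity, replace_html_entity

decoder = common_html_entity.copy().set_parse_action(replace_html_entity)
print(decoder.transform_string("x &lt; y &amp;&amp; y &gt; z"))
# x < y && y > z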
703
- class OpAssoc(Enum):
704
- LEFT = 1
705
- RIGHT = 2
706
-
707
-
708
- InfixNotationOperatorArgType = Union[
709
- ParserElement, str, Tuple[Union[ParserElement, str], Union[ParserElement, str]]
710
- ]
711
- InfixNotationOperatorSpec = Union[
712
- Tuple[
713
- InfixNotationOperatorArgType,
714
- int,
715
- OpAssoc,
716
- typing.Optional[ParseAction],
717
- ],
718
- Tuple[
719
- InfixNotationOperatorArgType,
720
- int,
721
- OpAssoc,
722
- ],
723
- ]
724
-
725
-
726
- def infix_notation(
727
- base_expr: ParserElement,
728
- op_list: List[InfixNotationOperatorSpec],
729
- lpar: Union[str, ParserElement] = Suppress("("),
730
- rpar: Union[str, ParserElement] = Suppress(")"),
731
- ) -> ParserElement:
732
- """Helper method for constructing grammars of expressions made up of
733
- operators working in a precedence hierarchy. Operators may be unary
734
- or binary, left- or right-associative. Parse actions can also be
735
- attached to operator expressions. The generated parser will also
736
- recognize the use of parentheses to override operator precedences
737
- (see example below).
738
-
739
- Note: if you define a deep operator list, you may see performance
740
- issues when using infix_notation. See
741
- :class:`ParserElement.enable_packrat` for a mechanism to potentially
742
- improve your parser performance.
743
-
744
- Parameters:
745
- - ``base_expr`` - expression representing the most basic operand to
746
- be used in the expression
747
- - ``op_list`` - list of tuples, one for each operator precedence level
748
- in the expression grammar; each tuple is of the form ``(op_expr,
749
- num_operands, right_left_assoc, (optional)parse_action)``, where:
750
-
751
- - ``op_expr`` is the pyparsing expression for the operator; may also
752
- be a string, which will be converted to a Literal; if ``num_operands``
753
- is 3, ``op_expr`` is a tuple of two expressions, for the two
754
- operators separating the 3 terms
755
- - ``num_operands`` is the number of terms for this operator (must be 1,
756
- 2, or 3)
757
- - ``right_left_assoc`` is the indicator whether the operator is right
758
- or left associative, using the pyparsing-defined constants
759
- ``OpAssoc.RIGHT`` and ``OpAssoc.LEFT``.
760
- - ``parse_action`` is the parse action to be associated with
761
- expressions matching this operator expression (the parse action
762
- tuple member may be omitted); if the parse action is passed
763
- a tuple or list of functions, this is equivalent to calling
764
- ``set_parse_action(*fn)``
765
- (:class:`ParserElement.set_parse_action`)
766
- - ``lpar`` - expression for matching left-parentheses; if passed as a
767
- str, then will be parsed as Suppress(lpar). If lpar is passed as
768
- an expression (such as ``Literal('(')``), then it will be kept in
769
- the parsed results, and grouped with them. (default= ``Suppress('(')``)
770
- - ``rpar`` - expression for matching right-parentheses; if passed as a
771
- str, then will be parsed as Suppress(rpar). If rpar is passed as
772
- an expression (such as ``Literal(')')``), then it will be kept in
773
- the parsed results, and grouped with them. (default= ``Suppress(')')``)
774
-
775
- Example::
776
-
777
- # simple example of four-function arithmetic with ints and
778
- # variable names
779
- integer = pyparsing_common.signed_integer
780
- varname = pyparsing_common.identifier
781
-
782
- arith_expr = infix_notation(integer | varname,
783
- [
784
- ('-', 1, OpAssoc.RIGHT),
785
- (one_of('* /'), 2, OpAssoc.LEFT),
786
- (one_of('+ -'), 2, OpAssoc.LEFT),
787
- ])
788
-
789
- arith_expr.run_tests('''
790
- 5+3*6
791
- (5+3)*6
792
- -2--11
793
- ''', full_dump=False)
794
-
795
- prints::
796
-
797
- 5+3*6
798
- [[5, '+', [3, '*', 6]]]
799
-
800
- (5+3)*6
801
- [[[5, '+', 3], '*', 6]]
802
-
803
- -2--11
804
- [[['-', 2], '-', ['-', 11]]]
805
- """
806
- # captive version of FollowedBy that does not do parse actions or capture results names
807
- class _FB(FollowedBy):
808
- def parseImpl(self, instring, loc, doActions=True):
809
- self.expr.try_parse(instring, loc)
810
- return loc, []
811
-
812
- _FB.__name__ = "FollowedBy>"
813
-
814
- ret = Forward()
815
- if isinstance(lpar, str):
816
- lpar = Suppress(lpar)
817
- if isinstance(rpar, str):
818
- rpar = Suppress(rpar)
819
-
820
- # if lpar and rpar are not suppressed, wrap in group
821
- if not (isinstance(lpar, Suppress) and isinstance(rpar, Suppress)):
822
- lastExpr = base_expr | Group(lpar + ret + rpar)
823
- else:
824
- lastExpr = base_expr | (lpar + ret + rpar)
825
-
826
- for i, operDef in enumerate(op_list):
827
- opExpr, arity, rightLeftAssoc, pa = (operDef + (None,))[:4]
828
- if isinstance(opExpr, str_type):
829
- opExpr = ParserElement._literalStringClass(opExpr)
830
- if arity == 3:
831
- if not isinstance(opExpr, (tuple, list)) or len(opExpr) != 2:
832
- raise ValueError(
833
- "if numterms=3, opExpr must be a tuple or list of two expressions"
834
- )
835
- opExpr1, opExpr2 = opExpr
836
- term_name = "{}{} term".format(opExpr1, opExpr2)
837
- else:
838
- term_name = "{} term".format(opExpr)
839
-
840
- if not 1 <= arity <= 3:
841
- raise ValueError("operator must be unary (1), binary (2), or ternary (3)")
842
-
843
- if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT):
844
- raise ValueError("operator must indicate right or left associativity")
845
-
846
- thisExpr: Forward = Forward().set_name(term_name)
847
- if rightLeftAssoc is OpAssoc.LEFT:
848
- if arity == 1:
849
- matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...])
850
- elif arity == 2:
851
- if opExpr is not None:
852
- matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group(
853
- lastExpr + (opExpr + lastExpr)[1, ...]
854
- )
855
- else:
856
- matchExpr = _FB(lastExpr + lastExpr) + Group(lastExpr[2, ...])
857
- elif arity == 3:
858
- matchExpr = _FB(
859
- lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr
860
- ) + Group(lastExpr + OneOrMore(opExpr1 + lastExpr + opExpr2 + lastExpr))
861
- elif rightLeftAssoc is OpAssoc.RIGHT:
862
- if arity == 1:
863
- # try to avoid LR with this extra test
864
- if not isinstance(opExpr, Opt):
865
- opExpr = Opt(opExpr)
866
- matchExpr = _FB(opExpr.expr + thisExpr) + Group(opExpr + thisExpr)
867
- elif arity == 2:
868
- if opExpr is not None:
869
- matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group(
870
- lastExpr + (opExpr + thisExpr)[1, ...]
871
- )
872
- else:
873
- matchExpr = _FB(lastExpr + thisExpr) + Group(
874
- lastExpr + thisExpr[1, ...]
875
- )
876
- elif arity == 3:
877
- matchExpr = _FB(
878
- lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr
879
- ) + Group(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr)
880
- if pa:
881
- if isinstance(pa, (tuple, list)):
882
- matchExpr.set_parse_action(*pa)
883
- else:
884
- matchExpr.set_parse_action(pa)
885
- thisExpr <<= (matchExpr | lastExpr).setName(term_name)
886
- lastExpr = thisExpr
887
- ret <<= lastExpr
888
- return ret
889
-
890
-
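# A hedged sketch of the optional per-level parse_action described above: the
# multiplication level is evaluated eagerly while addition is left symbolic.
# The evaluator below is a simplified assumption, not library code.
from pyparsing import OpAssoc, infix_notation, one_of, pyparsing_common

def eval_mul_div(tokens):
    t = tokens[0]
    result = t[0]
    # fold ('*'|'/', operand) pairs left to right
    for op, operand in zip(t[1::2], t[2::2]):
        result = result * operand if op == "*" else result / operand
    return result

arith = infix_notation(
    pyparsing_common.integer,
    [
        (one_of("* /"), 2, OpAssoc.LEFT, eval_mul_div),
        (one_of("+ -"), 2, OpAssoc.LEFT),
    ],
)
print(arith.parse_string("2*3*4 + 1"))  # [[24, '+', 1]]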
891
- def indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[]):
892
- """
893
- (DEPRECATED - use IndentedBlock class instead)
894
- Helper method for defining space-delimited indentation blocks,
895
- such as those used to define block statements in Python source code.
896
-
897
- Parameters:
898
-
899
- - ``blockStatementExpr`` - expression defining syntax of statement that
900
- is repeated within the indented block
901
- - ``indentStack`` - list created by caller to manage indentation stack
902
- (multiple ``statementWithIndentedBlock`` expressions within a single
903
- grammar should share a common ``indentStack``)
904
- - ``indent`` - boolean indicating whether block must be indented beyond
905
- the current level; set to ``False`` for block of left-most statements
906
- (default= ``True``)
907
-
908
- A valid block must contain at least one ``blockStatement``.
909
-
910
- (Note that indentedBlock uses internal parse actions which make it
911
- incompatible with packrat parsing.)
912
-
913
- Example::
914
-
915
- data = '''
916
- def A(z):
917
- A1
918
- B = 100
919
- G = A2
920
- A2
921
- A3
922
- B
923
- def BB(a,b,c):
924
- BB1
925
- def BBA():
926
- bba1
927
- bba2
928
- bba3
929
- C
930
- D
931
- def spam(x,y):
932
- def eggs(z):
933
- pass
934
- '''
935
-
936
-
937
- indentStack = [1]
938
- stmt = Forward()
939
-
940
- identifier = Word(alphas, alphanums)
941
- funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":")
942
- func_body = indentedBlock(stmt, indentStack)
943
- funcDef = Group(funcDecl + func_body)
944
-
945
- rvalue = Forward()
946
- funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")")
947
- rvalue << (funcCall | identifier | Word(nums))
948
- assignment = Group(identifier + "=" + rvalue)
949
- stmt << (funcDef | assignment | identifier)
950
-
951
- module_body = stmt[1, ...]
952
-
953
- parseTree = module_body.parseString(data)
954
- parseTree.pprint()
955
-
956
- prints::
957
-
958
- [['def',
959
- 'A',
960
- ['(', 'z', ')'],
961
- ':',
962
- [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]],
963
- 'B',
964
- ['def',
965
- 'BB',
966
- ['(', 'a', 'b', 'c', ')'],
967
- ':',
968
- [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]],
969
- 'C',
970
- 'D',
971
- ['def',
972
- 'spam',
973
- ['(', 'x', 'y', ')'],
974
- ':',
975
- [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]]
976
- """
977
- backup_stacks.append(indentStack[:])
978
-
979
- def reset_stack():
980
- indentStack[:] = backup_stacks[-1]
981
-
982
- def checkPeerIndent(s, l, t):
983
- if l >= len(s):
984
- return
985
- curCol = col(l, s)
986
- if curCol != indentStack[-1]:
987
- if curCol > indentStack[-1]:
988
- raise ParseException(s, l, "illegal nesting")
989
- raise ParseException(s, l, "not a peer entry")
990
-
991
- def checkSubIndent(s, l, t):
992
- curCol = col(l, s)
993
- if curCol > indentStack[-1]:
994
- indentStack.append(curCol)
995
- else:
996
- raise ParseException(s, l, "not a subentry")
997
-
998
- def checkUnindent(s, l, t):
999
- if l >= len(s):
1000
- return
1001
- curCol = col(l, s)
1002
- if not (indentStack and curCol in indentStack):
1003
- raise ParseException(s, l, "not an unindent")
1004
- if curCol < indentStack[-1]:
1005
- indentStack.pop()
1006
-
1007
- NL = OneOrMore(LineEnd().set_whitespace_chars("\t ").suppress())
1008
- INDENT = (Empty() + Empty().set_parse_action(checkSubIndent)).set_name("INDENT")
1009
- PEER = Empty().set_parse_action(checkPeerIndent).set_name("")
1010
- UNDENT = Empty().set_parse_action(checkUnindent).set_name("UNINDENT")
1011
- if indent:
1012
- smExpr = Group(
1013
- Opt(NL)
1014
- + INDENT
1015
- + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL))
1016
- + UNDENT
1017
- )
1018
- else:
1019
- smExpr = Group(
1020
- Opt(NL)
1021
- + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL))
1022
- + Opt(UNDENT)
1023
- )
1024
-
1025
- # add a parse action to remove backup_stack from list of backups
1026
- smExpr.add_parse_action(
1027
- lambda: backup_stacks.pop(-1) and None if backup_stacks else None
1028
- )
1029
- smExpr.set_fail_action(lambda a, b, c, d: reset_stack())
1030
- blockStatementExpr.ignore(_bslash + LineEnd())
1031
- return smExpr.set_name("indented block")
1032
-
1033
-
1034
- # it's easy to get these comment structures wrong - they're very common, so may as well make them available
1035
- c_style_comment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/").set_name(
1036
- "C style comment"
1037
- )
1038
- "Comment of the form ``/* ... */``"
1039
-
1040
- html_comment = Regex(r"<!--[\s\S]*?-->").set_name("HTML comment")
1041
- "Comment of the form ``<!-- ... -->``"
1042
-
1043
- rest_of_line = Regex(r".*").leave_whitespace().set_name("rest of line")
1044
- dbl_slash_comment = Regex(r"//(?:\\\n|[^\n])*").set_name("// comment")
1045
- "Comment of the form ``// ... (to end of line)``"
1046
-
1047
- cpp_style_comment = Combine(
1048
- Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/" | dbl_slash_comment
1049
- ).set_name("C++ style comment")
1050
- "Comment of either form :class:`c_style_comment` or :class:`dbl_slash_comment`"
1051
-
1052
- java_style_comment = cpp_style_comment
1053
- "Same as :class:`cpp_style_comment`"
1054
-
1055
- python_style_comment = Regex(r"#.*").set_name("Python style comment")
1056
- "Comment of the form ``# ... (to end of line)``"
1057
-
1058
-
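# A small sketch of the usual way these comment expressions are consumed:
# attach one to a grammar with ignore() so comments may appear anywhere.
# The toy assignment grammar is an assumption for illustration.
from pyparsing import Word, alphas, nums, python_style_comment

assignment = Word(alphas) + "=" + Word(nums)
assignment.ignore(python_style_comment)
print(assignment.parse_string("answer = 42  # the usual"))
# ['answer', '=', '42']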
1059
- # build list of built-in expressions, for future reference if a global default value
1060
- # gets updated
1061
- _builtin_exprs: List[ParserElement] = [
1062
- v for v in vars().values() if isinstance(v, ParserElement)
1063
- ]
1064
-
1065
-
1066
- # pre-PEP8 compatible names
1067
- delimitedList = delimited_list
1068
- countedArray = counted_array
1069
- matchPreviousLiteral = match_previous_literal
1070
- matchPreviousExpr = match_previous_expr
1071
- oneOf = one_of
1072
- dictOf = dict_of
1073
- originalTextFor = original_text_for
1074
- nestedExpr = nested_expr
1075
- makeHTMLTags = make_html_tags
1076
- makeXMLTags = make_xml_tags
1077
- anyOpenTag, anyCloseTag = any_open_tag, any_close_tag
1078
- commonHTMLEntity = common_html_entity
1079
- replaceHTMLEntity = replace_html_entity
1080
- opAssoc = OpAssoc
1081
- infixNotation = infix_notation
1082
- cStyleComment = c_style_comment
1083
- htmlComment = html_comment
1084
- restOfLine = rest_of_line
1085
- dblSlashComment = dbl_slash_comment
1086
- cppStyleComment = cpp_style_comment
1087
- javaStyleComment = java_style_comment
1088
- pythonStyleComment = python_style_comment
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/test.py DELETED
@@ -1,251 +0,0 @@
1
- import os
2
- import operator
3
- import sys
4
- import contextlib
5
- import itertools
6
- import unittest
7
- from distutils.errors import DistutilsError, DistutilsOptionError
8
- from distutils import log
9
- from unittest import TestLoader
10
-
11
- from pkg_resources import (
12
- resource_listdir,
13
- resource_exists,
14
- normalize_path,
15
- working_set,
16
- evaluate_marker,
17
- add_activation_listener,
18
- require,
19
- )
20
- from .._importlib import metadata
21
- from setuptools import Command
22
- from setuptools.extern.more_itertools import unique_everseen
23
- from setuptools.extern.jaraco.functools import pass_none
24
-
25
-
26
- class ScanningLoader(TestLoader):
27
- def __init__(self):
28
- TestLoader.__init__(self)
29
- self._visited = set()
30
-
31
- def loadTestsFromModule(self, module, pattern=None):
32
- """Return a suite of all tests cases contained in the given module
33
-
34
- If the module is a package, load tests from all the modules in it.
35
- If the module has an ``additional_tests`` function, call it and add
36
- the return value to the tests.
37
- """
38
- if module in self._visited:
39
- return None
40
- self._visited.add(module)
41
-
42
- tests = []
43
- tests.append(TestLoader.loadTestsFromModule(self, module))
44
-
45
- if hasattr(module, "additional_tests"):
46
- tests.append(module.additional_tests())
47
-
48
- if hasattr(module, '__path__'):
49
- for file in resource_listdir(module.__name__, ''):
50
- if file.endswith('.py') and file != '__init__.py':
51
- submodule = module.__name__ + '.' + file[:-3]
52
- else:
53
- if resource_exists(module.__name__, file + '/__init__.py'):
54
- submodule = module.__name__ + '.' + file
55
- else:
56
- continue
57
- tests.append(self.loadTestsFromName(submodule))
58
-
59
- if len(tests) != 1:
60
- return self.suiteClass(tests)
61
- else:
62
- return tests[0] # don't create a nested suite for only one return
63
-
64
-
65
- # adapted from jaraco.classes.properties:NonDataProperty
66
- class NonDataProperty:
67
- def __init__(self, fget):
68
- self.fget = fget
69
-
70
- def __get__(self, obj, objtype=None):
71
- if obj is None:
72
- return self
73
- return self.fget(obj)
74
-
75
-
76
- class test(Command):
77
- """Command to run unit tests after in-place build"""
78
-
79
- description = "run unit tests after in-place build (deprecated)"
80
-
81
- user_options = [
82
- ('test-module=', 'm', "Run 'test_suite' in specified module"),
83
- (
84
- 'test-suite=',
85
- 's',
86
- "Run single test, case or suite (e.g. 'module.test_suite')",
87
- ),
88
- ('test-runner=', 'r', "Test runner to use"),
89
- ]
90
-
91
- def initialize_options(self):
92
- self.test_suite = None
93
- self.test_module = None
94
- self.test_loader = None
95
- self.test_runner = None
96
-
97
- def finalize_options(self):
98
-
99
- if self.test_suite and self.test_module:
100
- msg = "You may specify a module or a suite, but not both"
101
- raise DistutilsOptionError(msg)
102
-
103
- if self.test_suite is None:
104
- if self.test_module is None:
105
- self.test_suite = self.distribution.test_suite
106
- else:
107
- self.test_suite = self.test_module + ".test_suite"
108
-
109
- if self.test_loader is None:
110
- self.test_loader = getattr(self.distribution, 'test_loader', None)
111
- if self.test_loader is None:
112
- self.test_loader = "setuptools.command.test:ScanningLoader"
113
- if self.test_runner is None:
114
- self.test_runner = getattr(self.distribution, 'test_runner', None)
115
-
116
- @NonDataProperty
117
- def test_args(self):
118
- return list(self._test_args())
119
-
120
- def _test_args(self):
121
- if not self.test_suite:
122
- yield 'discover'
123
- if self.verbose:
124
- yield '--verbose'
125
- if self.test_suite:
126
- yield self.test_suite
127
-
128
- def with_project_on_sys_path(self, func):
129
- """
130
- Backward compatibility for project_on_sys_path context.
131
- """
132
- with self.project_on_sys_path():
133
- func()
134
-
135
- @contextlib.contextmanager
136
- def project_on_sys_path(self, include_dists=[]):
137
- self.run_command('egg_info')
138
-
139
- # Build extensions in-place
140
- self.reinitialize_command('build_ext', inplace=1)
141
- self.run_command('build_ext')
142
-
143
- ei_cmd = self.get_finalized_command("egg_info")
144
-
145
- old_path = sys.path[:]
146
- old_modules = sys.modules.copy()
147
-
148
- try:
149
- project_path = normalize_path(ei_cmd.egg_base)
150
- sys.path.insert(0, project_path)
151
- working_set.__init__()
152
- add_activation_listener(lambda dist: dist.activate())
153
- require('%s==%s' % (ei_cmd.egg_name, ei_cmd.egg_version))
154
- with self.paths_on_pythonpath([project_path]):
155
- yield
156
- finally:
157
- sys.path[:] = old_path
158
- sys.modules.clear()
159
- sys.modules.update(old_modules)
160
- working_set.__init__()
161
-
162
- @staticmethod
163
- @contextlib.contextmanager
164
- def paths_on_pythonpath(paths):
165
- """
166
- Add the indicated paths to the head of the PYTHONPATH environment
167
- variable so that subprocesses will also see the packages at
168
- these paths.
169
-
170
- Do this in a context that restores the value on exit.
171
- """
172
- nothing = object()
173
- orig_pythonpath = os.environ.get('PYTHONPATH', nothing)
174
- current_pythonpath = os.environ.get('PYTHONPATH', '')
175
- try:
176
- prefix = os.pathsep.join(unique_everseen(paths))
177
- to_join = filter(None, [prefix, current_pythonpath])
178
- new_path = os.pathsep.join(to_join)
179
- if new_path:
180
- os.environ['PYTHONPATH'] = new_path
181
- yield
182
- finally:
183
- if orig_pythonpath is nothing:
184
- os.environ.pop('PYTHONPATH', None)
185
- else:
186
- os.environ['PYTHONPATH'] = orig_pythonpath
187
-
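# A hypothetical usage sketch for paths_on_pythonpath(): inside the block a
# subprocess sees the extra paths; on exit PYTHONPATH is restored exactly,
# including the case where it was previously unset. The paths are assumptions.
import subprocess
from setuptools.command.test import test

with test.paths_on_pythonpath(["/tmp/project", "/tmp/deps"]):
    subprocess.run(
        ["python", "-c", "import os; print(os.environ['PYTHONPATH'])"]
    )
# PYTHONPATH is back to its original value (or unset again) here.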
188
- @staticmethod
189
- def install_dists(dist):
190
- """
191
- Install the requirements indicated by self.distribution and
192
- return an iterable of the dists that were built.
193
- """
194
- ir_d = dist.fetch_build_eggs(dist.install_requires)
195
- tr_d = dist.fetch_build_eggs(dist.tests_require or [])
196
- er_d = dist.fetch_build_eggs(
197
- v
198
- for k, v in dist.extras_require.items()
199
- if k.startswith(':') and evaluate_marker(k[1:])
200
- )
201
- return itertools.chain(ir_d, tr_d, er_d)
202
-
203
- def run(self):
204
- self.announce(
205
- "WARNING: Testing via this command is deprecated and will be "
206
- "removed in a future version. Users looking for a generic test "
207
- "entry point independent of test runner are encouraged to use "
208
- "tox.",
209
- log.WARN,
210
- )
211
-
212
- installed_dists = self.install_dists(self.distribution)
213
-
214
- cmd = ' '.join(self._argv)
215
- if self.dry_run:
216
- self.announce('skipping "%s" (dry run)' % cmd)
217
- return
218
-
219
- self.announce('running "%s"' % cmd)
220
-
221
- paths = map(operator.attrgetter('location'), installed_dists)
222
- with self.paths_on_pythonpath(paths):
223
- with self.project_on_sys_path():
224
- self.run_tests()
225
-
226
- def run_tests(self):
227
- test = unittest.main(
228
- None,
229
- None,
230
- self._argv,
231
- testLoader=self._resolve_as_ep(self.test_loader),
232
- testRunner=self._resolve_as_ep(self.test_runner),
233
- exit=False,
234
- )
235
- if not test.result.wasSuccessful():
236
- msg = 'Test failed: %s' % test.result
237
- self.announce(msg, log.ERROR)
238
- raise DistutilsError(msg)
239
-
240
- @property
241
- def _argv(self):
242
- return ['unittest'] + self.test_args
243
-
244
- @staticmethod
245
- @pass_none
246
- def _resolve_as_ep(val):
247
- """
248
- Load the indicated attribute value, called, as if it were
249
- specified as an entry point.
250
- """
251
- return metadata.EntryPoint(value=val, name=None, group=None).load()()
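# A brief sketch of how the entry-point style strings for test_loader and
# test_runner resolve: the value is parsed as an entry point, loaded, and then
# called. The default loader string is shown; the variable name is an assumption.
from setuptools.command.test import test

loader = test._resolve_as_ep("setuptools.command.test:ScanningLoader")
print(type(loader).__name__)  # ScanningLoader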
spaces/AtomdffAI/wechatgpt4atom/bot/chatgpt/chat_gpt_bot.py DELETED
@@ -1,131 +0,0 @@
1
- # encoding:utf-8
2
-
3
- from bot.bot import Bot
4
- from config import conf
5
- from common.log import logger
6
- import openai
7
- import time
8
-
9
- user_session = dict()
10
-
11
- # OpenAI chat completion API (working)
12
- class ChatGPTBot(Bot):
13
- def __init__(self):
14
- openai.api_key = conf().get('open_ai_api_key')
15
- openai.api_base="https://apai.zyai.online/v1"
16
-
17
- def reply(self, query, context=None):
18
- # acquire reply content
19
- if not context or not context.get('type') or context.get('type') == 'TEXT':
20
- logger.info("[OPEN_AI] query={}".format(query))
21
- from_user_id = context['from_user_id']
22
- if query == '#清除记忆':  # '#clear memory' user command
23
- Session.clear_session(from_user_id)
24
- return '记忆已清除'  # 'memory cleared'
25
-
26
- new_query = Session.build_session_query(query, from_user_id)
27
- logger.debug("[OPEN_AI] session query={}".format(new_query))
28
-
29
- # if context.get('stream'):
30
- # # reply in stream
31
- # return self.reply_text_stream(query, new_query, from_user_id)
32
-
33
- reply_content = self.reply_text(new_query, from_user_id, 0)
34
- logger.debug("[OPEN_AI] new_query={}, user={}, reply_cont={}".format(new_query, from_user_id, reply_content))
35
- if reply_content:
36
- Session.save_session(query, reply_content, from_user_id)
37
- return reply_content
38
-
39
- elif context.get('type', None) == 'IMAGE_CREATE':
40
- return self.create_img(query, 0)
41
-
42
- def reply_text(self, query, user_id, retry_count=0):
43
- try:
44
- response = openai.ChatCompletion.create(
45
- model="gpt-3.5-turbo-16k", # name of the chat model
46
- messages=query,
47
- temperature=0.5, # in [0,1]; higher values make replies less deterministic
48
- max_tokens=1500, # maximum number of tokens in the reply
49
- top_p=1,
50
- frequency_penalty=0.5, # in [-2,2]; higher values favor generating different content
51
- presence_penalty=0.5, # in [-2,2]; higher values favor generating different content
52
- )
53
- # res_content = response.choices[0]['text'].strip().replace('<|endoftext|>', '')
54
- logger.info(response.choices[0]['message']['content'])
55
- # log.info("[OPEN_AI] reply={}".format(res_content))
56
- return response.choices[0]['message']['content']
57
- except openai.error.RateLimitError as e:
58
- # rate limit exception
59
- logger.warn(e)
60
- if retry_count < 1:
61
- time.sleep(5)
62
- logger.warn("[OPEN_AI] RateLimit exceed, 第{}次重试".format(retry_count+1))
63
- return self.reply_text(query, user_id, retry_count+1)
64
- else:
65
- return "问太快了,慢点行不行"
66
- except Exception as e:
67
- # unknown exception
68
- logger.exception(e)
69
- Session.clear_session(user_id)
70
- return "Sorry,AI也有时候出错……请再问一次。"
71
-
72
- def create_img(self, query, retry_count=0):
73
- try:
74
- logger.info("[OPEN_AI] image_query={}".format(query))
75
- response = openai.Image.create(
76
- prompt=query, # image description
77
- n=1, # number of images to generate per request
78
- size="1024x1024" #图片大小,可选有 256x256, 512x512, 1024x1024
79
- )
80
- image_url = response['data'][0]['url']
81
- logger.info("[OPEN_AI] image_url={}".format(image_url))
82
- return image_url
83
- except openai.error.RateLimitError as e:
84
- logger.warn(e)
85
- if retry_count < 1:
86
- time.sleep(5)
87
- logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试".format(retry_count+1))
88
- return self.create_img(query, retry_count+1)
89
- else:
90
- return "问太快了,慢点行不行"
91
- except Exception as e:
92
- logger.exception(e)
93
- return None
94
-
95
- class Session(object):
96
- @staticmethod
97
- def build_session_query(query, user_id):
98
- '''
99
- build query with conversation history
100
- e.g. [
101
- {"role": "system", "content": "You are a helpful assistant,let's think step by step in multiple different ways."},
102
- {"role": "user", "content": "Who won the world series in 2020?"},
103
- {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
104
- {"role": "user", "content": "Where was it played?"}
105
- ]
106
- :param query: query content
107
- :param user_id: from user id
108
- :return: query content with conversation history
109
- '''
110
- session = user_session.get(user_id, [])
111
- if len(session) == 0:
112
- system_prompt = conf().get("character_desc", "")
113
- system_item = {'role': 'system', 'content': system_prompt}
114
- session.append(system_item)
115
- user_session[user_id] = session
116
- user_item = {'role': 'user', 'content': query}
117
- session.append(user_item)
118
- return session
119
-
120
- @staticmethod
121
- def save_session(query, answer, user_id):
122
- session = user_session.get(user_id)
123
- if session:
124
- # append conversation
125
- gpt_item = {'role': 'assistant', 'content': answer}
126
- session.append(gpt_item)
127
-
128
- @staticmethod
129
- def clear_session(user_id):
130
- user_session[user_id] = []
131
spaces/BIOML-SVM/SVM/app.py DELETED
@@ -1,286 +0,0 @@
1
- # credit: https://huggingface.co/spaces/simonduerr/3dmol.js/blob/main/app.py
2
- import os
3
- import sys
4
- from urllib import request
5
-
6
- import esm
7
- import gradio as gr
8
- import progres as pg
9
- import requests
10
- import torch
11
- from transformers import (AutoModel, AutoModelForMaskedLM, AutoTokenizer,
12
- EsmModel)
13
-
14
- import msa
15
- import proteinbind_new
16
-
17
- tokenizer_nt = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-1000g")
18
- model_nt = AutoModelForMaskedLM.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-1000g")
19
- model_nt.eval()
20
-
21
- tokenizer_aa = AutoTokenizer.from_pretrained("facebook/esm2_t12_35M_UR50D")
22
- model_aa = EsmModel.from_pretrained("facebook/esm2_t12_35M_UR50D")
23
- model_aa.eval()
24
-
25
- tokenizer_se = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
26
- model_se = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
27
- model_se.eval()
28
-
29
- msa_transformer, msa_transformer_alphabet = esm.pretrained.esm_msa1b_t12_100M_UR50S()
30
- msa_transformer = msa_transformer.eval()
31
- msa_transformer_batch_converter = msa_transformer_alphabet.get_batch_converter()
32
-
33
- model = proteinbind_new.create_proteinbind(True)
34
-
35
-
36
- def pass_through(torch_output, key: str):
37
- device = torch.device("cpu")
38
- input_data = {
39
- key: torch_output.type(torch.float32).to(device)
40
- }
41
- output = model(input_data)
42
- return output[key].detach().numpy()
43
-
44
-
45
- def nt_embed(sequence: str):
46
- tokens_ids = tokenizer_nt.batch_encode_plus([sequence], return_tensors="pt")["input_ids"]
47
- attention_mask = tokens_ids != tokenizer_nt.pad_token_id
48
- with torch.no_grad():
49
- torch_outs = model_nt(
50
- tokens_ids, # .to('cuda'),
51
- attention_mask=attention_mask, # .to('cuda'),
52
- output_hidden_states=True
53
- )
54
- last_layer_CLS = torch_outs.hidden_states[-1].detach()[:, 0, :][0]
55
- return pass_through(last_layer_CLS, "dna")
56
-
57
-
58
- def aa_embed(sequence: str):
59
- tokens = tokenizer_aa([sequence], return_tensors="pt")
60
- with torch.no_grad():
61
- torch_outs = model_aa(**tokens)
62
- return pass_through(torch_outs[0], "aa")
63
-
64
-
65
- def se_embed(sentence: str):
66
- encoded_input = tokenizer_se([sentence], return_tensors='pt')
67
- with torch.no_grad():
68
- model_output = model_se(**encoded_input)
69
- return pass_through(model_output[0], "text")
70
-
71
-
72
- def msa_embed(sequences: list):
73
- inputs = msa.greedy_select(sequences, num_seqs=128) # can change this to pass more/fewer sequences
74
- msa_transformer_batch_labels, msa_transformer_batch_strs, msa_transformer_batch_tokens = msa_transformer_batch_converter([inputs])
75
- msa_transformer_batch_tokens = msa_transformer_batch_tokens.to(next(msa_transformer.parameters()).device)
76
-
77
- with torch.no_grad():
78
- temp = msa_transformer(msa_transformer_batch_tokens, repr_layers=[12])['representations']
79
- temp = temp[12][:, :, 0, :]
80
- temp = torch.mean(temp, (0, 1))
81
- return pass_through(temp, "msa")
82
-
83
-
84
- def go_embed(terms):
85
- pass
86
-
87
-
88
- def download_data_if_required():
89
- url_base = f"https://zenodo.org/record/{pg.zenodo_record}/files"
90
- fps = [pg.trained_model_fp]
91
- urls = [f"{url_base}/trained_model.pt"]
92
- # for targetdb in pre_embedded_dbs:
93
- # fps.append(os.path.join(database_dir, targetdb + ".pt"))
94
- # urls.append(f"{url_base}/{targetdb}.pt")
95
-
96
- if not os.path.isdir(pg.trained_model_dir):
97
- os.makedirs(pg.trained_model_dir)
98
- # if not os.path.isdir(database_dir):
99
- # os.makedirs(database_dir)
100
-
101
- printed = False
102
- for fp, url in zip(fps, urls):
103
- if not os.path.isfile(fp):
104
- if not printed:
105
- print("Downloading data as first time setup (~340 MB) to ", pg.progres_dir,
106
- ", internet connection required, this can take a few minutes",
107
- sep="", file=sys.stderr)
108
- printed = True
109
- try:
110
- request.urlretrieve(url, fp)
111
- d = torch.load(fp, map_location="cpu")
112
- if fp == pg.trained_model_fp:
113
- assert "model" in d
114
- else:
115
- assert "embeddings" in d
116
- except Exception:
117
- if os.path.isfile(fp):
118
- os.remove(fp)
119
- print("Failed to download from", url, "and save to", fp, file=sys.stderr)
120
- print("Exiting", file=sys.stderr)
121
- sys.exit(1)
122
-
123
- if printed:
124
- print("Data downloaded successfully", file=sys.stderr)
125
-
126
-
127
- def get_pdb(pdb_code="", filepath=""):
128
- if pdb_code is None or pdb_code == "":
129
- try:
130
- with open(filepath.name) as f:
131
- return f.read()
132
- except AttributeError:
133
- return None
134
- else:
135
- return requests.get(f"https://files.rcsb.org/view/{pdb_code}.pdb").content.decode()
136
-
137
-
138
- def molecule(pdb):
139
-
140
- x = (
141
- """<!DOCTYPE html>
142
- <html>
143
- <head>
144
- <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
145
- <style>
146
- body{
147
- font-family:sans-serif
148
- }
149
- .mol-container {
150
- width: 100%;
151
- height: 600px;
152
- position: relative;
153
- }
154
- .mol-container select{
155
- background-image:None;
156
- }
157
- </style>
158
- <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.3/jquery.min.js" integrity="sha512-STof4xm1wgkfm7heWqFJVn58Hm3EtS31XFaagaa8VMReCXAkQnJZ+jEy8PCC/iT18dFy95WcExNHFTqLyp72eQ==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
159
- <script src="https://3Dmol.csb.pitt.edu/build/3Dmol-min.js"></script>
160
- </head>
161
- <body>
162
- <div id="container" class="mol-container"></div>
163
-
164
- <script>
165
- let pdb = `"""
166
- + pdb
167
- + """`
168
-
169
- $(document).ready(function () {
170
- let element = $("#container");
171
- let config = { backgroundColor: "black" };
172
- let viewer = $3Dmol.createViewer(element, config);
173
- viewer.addModel(pdb, "pdb");
174
- viewer.getModel(0).setStyle({}, { cartoon: { color:"spectrum" } });
175
- viewer.addSurface("MS", { opacity: .5, color: "white" });
176
- viewer.zoomTo();
177
- viewer.render();
178
- viewer.zoom(0.8, 2000);
179
- })
180
- </script>
181
- </body></html>"""
182
- )
183
-
184
- return f"""<iframe style="width: 100%; height: 600px" name="result" allow="midi; geolocation; microphone; camera;
185
- display-capture; encrypted-media;" sandbox="allow-modals allow-forms
186
- allow-scripts allow-same-origin allow-popups
187
- allow-top-navigation-by-user-activation allow-downloads" allowfullscreen=""
188
- allowpaymentrequest="" frameborder="0" srcdoc='{x}'></iframe>"""
189
-
190
-
191
- def str2coords(s):
192
- coords = []
193
- for line in s.split('\n'):
194
- if (line.startswith("ATOM ") or line.startswith("HETATM")) and line[12:16].strip() == "CA":
195
- coords.append([float(line[30:38]), float(line[38:46]), float(line[46:54])])
196
- elif line.startswith("ENDMDL"):
197
- break
198
- return coords
199
-
200
-
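# A minimal sketch of str2coords(): only CA (alpha-carbon) ATOM records from
# the first model are kept. The two PDB lines below are an illustrative
# assumption, not real structure data.
pdb_text = (
    "ATOM      1  N   MET A   1      11.104   6.134  -6.504\n"
    "ATOM      2  CA  MET A   1      11.639   6.071  -5.147\n"
)
print(str2coords(pdb_text))  # [[11.639, 6.071, -5.147]]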
201
- def update_st(inp, file):
202
- pdb = get_pdb(inp, file)
203
- new_coords = pass_through(pg.embed_coords(str2coords(pdb)), "pdb")
204
- return (molecule(pdb), new_coords)
205
-
206
-
207
- def update_nt(inp):
208
- return str(nt_embed(inp or ''))
209
-
210
-
211
- def update_aa(inp):
212
- return str(aa_embed(inp))
213
-
214
-
215
- def update_se(inp):
216
- return str(se_embed(inp))
217
-
218
-
219
- def update_go(inp):
220
- return str(go_embed(inp))
221
-
222
-
223
- def update_msa(inp):
224
- return str(msa_embed(msa.read_msa(inp.name)))
225
-
226
-
227
- demo = gr.Blocks()
228
-
229
- with demo:
230
- with gr.Tabs():
231
- with gr.TabItem("PDB Structural Embeddings"):
232
- with gr.Row():
233
- with gr.Box():
234
- inp = gr.Textbox(
235
- placeholder="PDB Code or upload file below", label="Input structure"
236
- )
237
- file = gr.File(file_count="single")
238
- gr.Examples(["2CBA", "6VXX"], inp)
239
- btn = gr.Button("View structure")
240
- gr.Markdown("# PDB viewer using 3Dmol.js")
241
- mol = gr.HTML()
242
- emb = gr.Textbox(interactive=False)
243
- btn.click(fn=update_st, inputs=[inp, file], outputs=[mol, emb])
244
- with gr.TabItem("Nucleotide Sequence Embeddings"):
245
- with gr.Box():
246
- inp = gr.Textbox(
247
- placeholder="ATCGCTGCCCGTAGATAATAAGAGACACTGAGGCC", label="Input Nucleotide Sequence"
248
- )
249
- btn = gr.Button("View embeddings")
250
- emb = gr.Textbox(interactive=False)
251
- btn.click(fn=update_nt, inputs=[inp], outputs=emb)
252
- with gr.TabItem("Amino Acid Sequence Embeddings"):
253
- with gr.Box():
254
- inp = gr.Textbox(
255
- placeholder="AAGQCYRGRCSGGLCCSKYGYCGSGPAYCG", label="Input Amino Acid Sequence"
256
- )
257
- btn = gr.Button("View embeddings")
258
- emb = gr.Textbox(interactive=False)
259
- btn.click(fn=update_aa, inputs=[inp], outputs=emb)
260
- with gr.TabItem("Sentence Embeddings"):
261
- with gr.Box():
262
- inp = gr.Textbox(
263
- placeholder="Your text here", label="Input Sentence"
264
- )
265
- btn = gr.Button("View embeddings")
266
- emb = gr.Textbox(interactive=False)
267
- btn.click(fn=update_se, inputs=[inp], outputs=emb)
268
- with gr.TabItem("MSA Embeddings"):
269
- with gr.Box():
270
- inp = gr.File(file_count="single", label="Input MSA")
271
- btn = gr.Button("View embeddings")
272
- emb = gr.Textbox(interactive=False)
273
- btn.click(fn=update_msa, inputs=[inp], outputs=emb)
274
- with gr.TabItem("GO Embeddings"):
275
- with gr.Box():
276
- inp = gr.Textbox(
277
- placeholder="", label="Input GO Terms"
278
- )
279
- btn = gr.Button("View embeddings")
280
- emb = gr.Textbox(interactive=False)
281
- btn.click(fn=update_go, inputs=[inp], outputs=emb)
282
-
283
-
284
- if __name__ == "__main__":
285
- download_data_if_required()
286
- demo.launch()
spaces/Benson/text-generation/Examples/Descargar Angry Birds Star Wars 2 Monedas Ilimitadas.md DELETED
@@ -1,69 +0,0 @@
1
- <br />
2
- <h1>Cómo descargar Angry Birds Star Wars 2 Monedas ilimitadas</h1>
3
- <p>Angry Birds Star Wars 2 es un popular juego de puzzle que combina la diversión y la emoción de las franquicias de Angry Birds y Star Wars. En este juego, puedes unirte al lado del pájaro o al lado del cerdo, y usar varios personajes y poderes para derrotar a tus enemigos. También puedes recoger monedas, que son la moneda principal del juego, para desbloquear más personajes, niveles y objetos. </p>
4
- <h2>descargar angry birds star wars 2 monedas ilimitadas</h2><br /><p><b><b>Download</b> &#9889; <a href="https://bltlly.com/2v6MUp">https://bltlly.com/2v6MUp</a></b></p><br /><br />
5
- <p>Sin embargo, recolectar monedas puede ser lento y desafiante, especialmente si quieres obtener todos los personajes y objetos del juego. Es por eso que algunos jugadores pueden querer obtener monedas ilimitadas en Angry Birds Star Wars 2, que puede darles una ventaja sobre sus oponentes y hacer el juego más agradable. Pero, ¿cómo se puede obtener monedas ilimitadas en Angry Birds Star Wars 2? En este artículo, le mostraremos tres métodos diferentes que puede utilizar para descargar angry birds star wars 2 monedas ilimitadas. </p>
6
- <h2>Método 1: Usar un código de trucos</h2>
7
- <p>Una de las maneras más fáciles de obtener monedas ilimitadas en Angry Birds Star Wars 2 es usar un código de trucos. Un código de trucos es una combinación secreta de letras o números que puedes introducir en el juego para activar ciertos efectos o características. Por ejemplo, hay un código de trucos que te puede dar monedas ilimitadas en Angry Birds Star Wars 2. Aquí está cómo usarlo:</p>
8
- <ol>
9
- <li>Abre Angry Birds Star Wars 2 en tu dispositivo. </li>
10
- <li>Vaya al menú de configuración y toque en "Enter Code". </li>
11
- <li>Escribe "ABSWII" (sin comillas) y toca "OK". </li>
12
- <li> Usted debe ver un mensaje que dice "Cheat activado". </li>
13
- <li>Volver al juego y disfrutar de sus monedas ilimitadas. </li>
14
- </ol>
15
-
16
- <h2>Método 2: Usar un Mod APK</h2>
17
- <p>Otra manera de obtener monedas ilimitadas en Angry Birds Star Wars 2 es utilizar un mod APK. Un mod APK es una versión modificada del archivo de juego original que ha sido alterado por alguien para incluir características o funciones adicionales. Por ejemplo, hay un mod APK que puede darle monedas ilimitadas en Angry Birds Star Wars 2. Aquí está cómo usarlo:</p>
18
- <p></p>
19
- <ol>
20
- <li>Descargar el archivo APK mod de una fuente confiable. Puede buscar en línea para "angry birds star wars 2 mod apk monedas ilimitadas" o utilizar este enlace. </li>
21
- <li>Antes de instalar el mod APK, asegúrese de haber habilitado "Fuentes desconocidas" en la configuración de su dispositivo. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store.</li>
22
- <li>Desinstala el juego original de Angry Birds Star Wars 2 desde tu dispositivo. </li>
23
- <li> Instalar el archivo APK mod tocando en él y siguiendo las instrucciones. </li>
24
- <li>Abre Angry Birds Star Wars 2 en tu dispositivo y disfruta de tus monedas ilimitadas. </li>
25
- </ol>
26
- <p>Los pros de usar un mod APK son que es eficaz, permanente y personalizable. Usted puede obtener monedas ilimitadas y otras características que el juego original no tiene. Sin embargo, los inconvenientes son que es arriesgado, ilegal e incompatible. Puede descargar un virus o malware que puede dañar su dispositivo o robar sus datos. También puede violar los términos de servicio del juego y ser prohibido o demandado. Además, es posible que no pueda actualizar el juego o jugar en línea con otros jugadores que tengan la versión original. </p>
27
- <h2>Método 3: Utilice una herramienta Hack</h2>
28
- <p>Una tercera manera de obtener monedas ilimitadas en Angry Birds Star Wars 2 es utilizar una herramienta de hackeo. Una herramienta de corte es un software o sitio web que puede generar monedas u otros recursos para usted en el juego. Por ejemplo, hay una herramienta de hackeo que puede darle monedas ilimitadas en Angry Birds Star Wars 2. Aquí está cómo usarlo:</p>
29
- <ol>
30
- <li>Ir al sitio web de la herramienta de hackeo. También puede escanear el código QR a continuación para acceder a ella. </li>
31
-
32
- <li>Seleccione el tipo de dispositivo (Android o iOS) y su región. </li>
33
- <li>Introduzca la cantidad de monedas que desea obtener. Puede elegir entre 10.000 y 999.999 monedas. </li>
34
- <li>Haga clic en "Generar" y espere unos segundos. </li>
35
- <li>Verifica que no eres un robot completando una breve encuesta u oferta. </li>
36
- <li>Revisa tu cuenta de juego y disfruta de tus monedas ilimitadas. </li>
37
- </ol>
38
- <p><img src="https://i.imgur.com/7QZw0qL.png" alt="QR code for hack tool"></p>
39
- <p>Los pros de usar una herramienta de hackeo son que es conveniente, rápido y gratuito. No es necesario descargar nada o root o jailbreak su dispositivo. También puede obtener tantas monedas como desee en cuestión de minutos. Sin embargo, los inconvenientes son que no es confiable, inseguro y poco ético. Es posible que no obtenga las monedas que solicitó o solo las obtenga temporalmente. También puede exponer su información personal o dispositivo a hackers o estafadores. Además, puede arruinar el equilibrio y la equidad del juego mediante el uso de una herramienta de hackeo. </p>
40
- <h2>Conclusión</h2>
41
- <p>En conclusión, hay tres métodos diferentes que se pueden utilizar para descargar angry birds star wars 2 monedas ilimitadas: usando un código de trucos, usando un mod APK, o usando una herramienta de hackeo. Cada método tiene sus propios pros y contras, por lo que debe sopesarlos cuidadosamente antes de decidir cuál usar. Aquí hay algunos consejos y advertencias para usar monedas ilimitadas en Angry Birds Star Wars 2:</p>
42
- <ul>
43
- <li>Usa monedas ilimitadas bajo tu propio riesgo y discreción. No respaldamos ni recomendamos ninguno de estos métodos, y no somos responsables de ninguna consecuencia que pueda surgir de su uso. </li>
44
- <li>Tenga cuidado con las fuentes que descarga o accede. Asegúrese de que son confiables y seguros, y escanearlos en busca de virus o malware antes de usarlos. </li>
45
- <li>Copia de seguridad de los datos del juego antes de usar cualquiera de estos métodos. Puede perder su progreso o dañar el archivo del juego si algo sale mal. </li>
46
-
47
- <li>Respetar a los desarrolladores de juegos y su trabajo. Pusieron mucho esfuerzo y creatividad en hacer Angry Birds Star Wars 2, y merecen ser apoyados y apreciados. </li>
48
- </ul>
49
- <p>Si quieres descargar Angry Birds Star Wars 2 y disfrutar del juego sin trucos o hacks, puedes hacerlo haciendo clic en este enlace. ¡Que la fuerza esté contigo! </p>
50
- <h2>Preguntas frecuentes</h2>
51
- <h3>Q: ¿Es Angry Birds Star Wars 2 libre para jugar? </h3>
52
- <p>A: Sí, Angry Birds Star Wars 2 es gratis para descargar y jugar en dispositivos Android e iOS. Sin embargo, hay algunas compras en la aplicación que puedes hacer para mejorar tu experiencia de juego. </p>
53
- <h3>P: ¿Cuántos personajes hay en Angry Birds Star Wars 2?</h3>
54
- <p>A: Hay más de 30 personajes jugables en Angry Birds Star Wars 2, incluyendo aves y cerdos del universo de Star Wars. Puedes desbloquearlos recogiendo monedas, completando niveles o escaneando telepods (juguetes físicos que interactúan con el juego). </p>
55
- <h3>Q: ¿Cuáles son los telepods en Angry Birds Star Wars 2?</h3>
56
- <p>A: Los telepods son juguetes especiales que puedes comprar por separado del juego. Se basan en los personajes de Angry Birds Star Wars 2, y vienen con una base que tiene un código QR. Puedes escanear el código QR con la cámara de tu dispositivo para desbloquear el personaje del juego. También puedes colocar el juguete en la pantalla de tu dispositivo para intercambiar el personaje del juego con el del juguete. </p>
57
- <h3>Q: ¿Cómo puedo jugar Angry Birds Star Wars 2 en línea con otros jugadores? </h3>
58
- <p>A: Angry Birds Star Wars 2 tiene un modo multijugador llamado Arena, donde puedes competir con otros jugadores de todo el mundo. Puedes acceder a Arena tocando el icono del trofeo en el menú principal. Puedes optar por unirte al Lado Pájaro o al Lado Cerdo, y luego jugar contra otros jugadores en una serie de partidos. Puedes ganar monedas y recompensas ganando partidos y subiendo las tablas de clasificación. </p>
59
- <h3>P: ¿Cómo puedo contactar al servicio de atención al cliente de Angry Birds Star Wars 2?</h3>
60
-
61
- <ol>
62
- <li>Vaya al menú de configuración y toque en "Ayuda". </li>
63
- <li>Toque en "Contáctenos". </li>
64
- <li> Rellene el formulario con su nombre, correo electrónico, asunto y mensaje. </li>
65
- <li>Toque en "Enviar". </li>
66
- <li>Usted debe recibir una respuesta dentro de las 24 horas. </li>
67
- </ol></p> 64aa2da5cf<br />
68
- <br />
69
- <br />
spaces/Binguii/Venus_Proxy/README.md DELETED
@@ -1,10 +0,0 @@
1
- ---
2
- title: Venus Proxy
3
- emoji: 👀
4
- colorFrom: purple
5
- colorTo: gray
6
- sdk: docker
7
- pinned: false
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CALM/Dashboard/streamlit_observable/frontend/src/index.tsx DELETED
@@ -1,10 +0,0 @@
1
- import React from "react"
2
- import ReactDOM from "react-dom"
3
- import Observable from "./Observable"
4
-
5
- ReactDOM.render(
6
- <React.StrictMode>
7
- <Observable />
8
- </React.StrictMode>,
9
- document.getElementById("root")
10
- )
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/__init__.py DELETED
@@ -1,16 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
- from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head
3
- from .keypoint_head import ROI_KEYPOINT_HEAD_REGISTRY, build_keypoint_head, BaseKeypointRCNNHead
4
- from .mask_head import ROI_MASK_HEAD_REGISTRY, build_mask_head, BaseMaskRCNNHead
5
- from .roi_heads import (
6
- ROI_HEADS_REGISTRY,
7
- ROIHeads,
8
- Res5ROIHeads,
9
- StandardROIHeads,
10
- build_roi_heads,
11
- select_foreground_proposals,
12
- )
13
- from .rotated_fast_rcnn import RROIHeads
14
- from .fast_rcnn import FastRCNNOutputLayers
15
-
16
- from . import cascade_rcnn # isort:skip
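
For context, a minimal sketch of how these re-exported names are typically used, assuming a standard detectron2 install (input_shape would come from the backbone and is left commented out here):

    from detectron2.config import get_cfg
    from detectron2.modeling.roi_heads import ROI_HEADS_REGISTRY, build_roi_heads

    cfg = get_cfg()
    cfg.MODEL.ROI_HEADS.NAME = "StandardROIHeads"           # pick one of the registered heads
    head_cls = ROI_HEADS_REGISTRY.get("StandardROIHeads")   # or look the class up directly
    # roi_heads = build_roi_heads(cfg, input_shape)         # input_shape: per-feature ShapeSpec dict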
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/trident_backbone.py DELETED
@@ -1,223 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
- import fvcore.nn.weight_init as weight_init
3
- import torch
4
- import torch.nn.functional as F
5
-
6
- from detectron2.layers import Conv2d, FrozenBatchNorm2d, get_norm
7
- from detectron2.modeling import BACKBONE_REGISTRY, ResNet, ResNetBlockBase, make_stage
8
- from detectron2.modeling.backbone.resnet import BasicStem, BottleneckBlock, DeformBottleneckBlock
9
-
10
- from .trident_conv import TridentConv
11
-
12
- __all__ = ["TridentBottleneckBlock", "make_trident_stage", "build_trident_resnet_backbone"]
13
-
14
-
15
- class TridentBottleneckBlock(ResNetBlockBase):
16
- def __init__(
17
- self,
18
- in_channels,
19
- out_channels,
20
- *,
21
- bottleneck_channels,
22
- stride=1,
23
- num_groups=1,
24
- norm="BN",
25
- stride_in_1x1=False,
26
- num_branch=3,
27
- dilations=(1, 2, 3),
28
- concat_output=False,
29
- test_branch_idx=-1,
30
- ):
31
- """
32
- Args:
33
- num_branch (int): the number of branches in TridentNet.
34
- dilations (tuple): the dilations of multiple branches in TridentNet.
35
- concat_output (bool): whether to concatenate the outputs of the multiple branches in TridentNet.
36
- Use 'True' for the last trident block.
37
- """
38
- super().__init__(in_channels, out_channels, stride)
39
-
40
- assert num_branch == len(dilations)
41
-
42
- self.num_branch = num_branch
43
- self.concat_output = concat_output
44
- self.test_branch_idx = test_branch_idx
45
-
46
- if in_channels != out_channels:
47
- self.shortcut = Conv2d(
48
- in_channels,
49
- out_channels,
50
- kernel_size=1,
51
- stride=stride,
52
- bias=False,
53
- norm=get_norm(norm, out_channels),
54
- )
55
- else:
56
- self.shortcut = None
57
-
58
- stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride)
59
-
60
- self.conv1 = Conv2d(
61
- in_channels,
62
- bottleneck_channels,
63
- kernel_size=1,
64
- stride=stride_1x1,
65
- bias=False,
66
- norm=get_norm(norm, bottleneck_channels),
67
- )
68
-
69
- self.conv2 = TridentConv(
70
- bottleneck_channels,
71
- bottleneck_channels,
72
- kernel_size=3,
73
- stride=stride_3x3,
74
- paddings=dilations,
75
- bias=False,
76
- groups=num_groups,
77
- dilations=dilations,
78
- num_branch=num_branch,
79
- test_branch_idx=test_branch_idx,
80
- norm=get_norm(norm, bottleneck_channels),
81
- )
82
-
83
- self.conv3 = Conv2d(
84
- bottleneck_channels,
85
- out_channels,
86
- kernel_size=1,
87
- bias=False,
88
- norm=get_norm(norm, out_channels),
89
- )
90
-
91
- for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]:
92
- if layer is not None: # shortcut can be None
93
- weight_init.c2_msra_fill(layer)
94
-
95
- def forward(self, x):
96
- num_branch = self.num_branch if self.training or self.test_branch_idx == -1 else 1
97
- if not isinstance(x, list):
98
- x = [x] * num_branch
99
- out = [self.conv1(b) for b in x]
100
- out = [F.relu_(b) for b in out]
101
-
102
- out = self.conv2(out)
103
- out = [F.relu_(b) for b in out]
104
-
105
- out = [self.conv3(b) for b in out]
106
-
107
- if self.shortcut is not None:
108
- shortcut = [self.shortcut(b) for b in x]
109
- else:
110
- shortcut = x
111
-
112
- out = [out_b + shortcut_b for out_b, shortcut_b in zip(out, shortcut)]
113
- out = [F.relu_(b) for b in out]
114
- if self.concat_output:
115
- out = torch.cat(out)
116
- return out
117
-
118
-
119
- def make_trident_stage(block_class, num_blocks, first_stride, **kwargs):
120
- """
121
- Create a resnet stage by creating many blocks for TridentNet.
122
- """
123
- blocks = []
124
- for i in range(num_blocks - 1):
125
- blocks.append(block_class(stride=first_stride if i == 0 else 1, **kwargs))
126
- kwargs["in_channels"] = kwargs["out_channels"]
127
- blocks.append(block_class(stride=1, concat_output=True, **kwargs))
128
- return blocks
129
-
130
-
131
- @BACKBONE_REGISTRY.register()
132
- def build_trident_resnet_backbone(cfg, input_shape):
133
- """
134
- Create a ResNet instance from config for TridentNet.
135
-
136
- Returns:
137
- ResNet: a :class:`ResNet` instance.
138
- """
139
- # need registration of new blocks/stems?
140
- norm = cfg.MODEL.RESNETS.NORM
141
- stem = BasicStem(
142
- in_channels=input_shape.channels,
143
- out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS,
144
- norm=norm,
145
- )
146
- freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT
147
-
148
- if freeze_at >= 1:
149
- for p in stem.parameters():
150
- p.requires_grad = False
151
- stem = FrozenBatchNorm2d.convert_frozen_batchnorm(stem)
152
-
153
- # fmt: off
154
- out_features = cfg.MODEL.RESNETS.OUT_FEATURES
155
- depth = cfg.MODEL.RESNETS.DEPTH
156
- num_groups = cfg.MODEL.RESNETS.NUM_GROUPS
157
- width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP
158
- bottleneck_channels = num_groups * width_per_group
159
- in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS
160
- out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS
161
- stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1
162
- res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION
163
- deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE
164
- deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED
165
- deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS
166
- num_branch = cfg.MODEL.TRIDENT.NUM_BRANCH
167
- branch_dilations = cfg.MODEL.TRIDENT.BRANCH_DILATIONS
168
- trident_stage = cfg.MODEL.TRIDENT.TRIDENT_STAGE
169
- test_branch_idx = cfg.MODEL.TRIDENT.TEST_BRANCH_IDX
170
- # fmt: on
171
- assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation)
172
-
173
- num_blocks_per_stage = {50: [3, 4, 6, 3], 101: [3, 4, 23, 3], 152: [3, 8, 36, 3]}[depth]
174
-
175
- stages = []
176
-
177
- res_stage_idx = {"res2": 2, "res3": 3, "res4": 4, "res5": 5}
178
- out_stage_idx = [res_stage_idx[f] for f in out_features]
179
- trident_stage_idx = res_stage_idx[trident_stage]
180
- max_stage_idx = max(out_stage_idx)
181
- for idx, stage_idx in enumerate(range(2, max_stage_idx + 1)):
182
- dilation = res5_dilation if stage_idx == 5 else 1
183
- first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2
184
- stage_kargs = {
185
- "num_blocks": num_blocks_per_stage[idx],
186
- "first_stride": first_stride,
187
- "in_channels": in_channels,
188
- "bottleneck_channels": bottleneck_channels,
189
- "out_channels": out_channels,
190
- "num_groups": num_groups,
191
- "norm": norm,
192
- "stride_in_1x1": stride_in_1x1,
193
- "dilation": dilation,
194
- }
195
- if stage_idx == trident_stage_idx:
196
- assert not deform_on_per_stage[
197
- idx
198
- ], "Not support deformable conv in Trident blocks yet."
199
- stage_kargs["block_class"] = TridentBottleneckBlock
200
- stage_kargs["num_branch"] = num_branch
201
- stage_kargs["dilations"] = branch_dilations
202
- stage_kargs["test_branch_idx"] = test_branch_idx
203
- stage_kargs.pop("dilation")
204
- elif deform_on_per_stage[idx]:
205
- stage_kargs["block_class"] = DeformBottleneckBlock
206
- stage_kargs["deform_modulated"] = deform_modulated
207
- stage_kargs["deform_num_groups"] = deform_num_groups
208
- else:
209
- stage_kargs["block_class"] = BottleneckBlock
210
- blocks = (
211
- make_trident_stage(**stage_kargs)
212
- if stage_idx == trident_stage_idx
213
- else make_stage(**stage_kargs)
214
- )
215
- in_channels = out_channels
216
- out_channels *= 2
217
- bottleneck_channels *= 2
218
-
219
- if freeze_at >= stage_idx:
220
- for block in blocks:
221
- block.freeze()
222
- stages.append(blocks)
223
- return ResNet(stem, stages, out_features=out_features)
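
As a hedged usage sketch: because the builder above is registered in BACKBONE_REGISTRY under its function name, a detectron2 config selects it by that name. This assumes the TridentNet project's extra MODEL.TRIDENT.* config keys have already been added to the config:

    from detectron2.config import get_cfg

    cfg = get_cfg()
    cfg.MODEL.BACKBONE.NAME = "build_trident_resnet_backbone"
    # The builder then reads, among others:
    #   cfg.MODEL.TRIDENT.NUM_BRANCH, cfg.MODEL.TRIDENT.BRANCH_DILATIONS,
    #   cfg.MODEL.TRIDENT.TRIDENT_STAGE, cfg.MODEL.TRIDENT.TEST_BRANCH_IDX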
 
spaces/CVPR/LIVE/pybind11/tests/test_opaque_types.py DELETED
@@ -1,47 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- import pytest
3
- from pybind11_tests import opaque_types as m
4
- from pybind11_tests import ConstructorStats, UserType
5
-
6
-
7
- def test_string_list():
8
- lst = m.StringList()
9
- lst.push_back("Element 1")
10
- lst.push_back("Element 2")
11
- assert m.print_opaque_list(lst) == "Opaque list: [Element 1, Element 2]"
12
- assert lst.back() == "Element 2"
13
-
14
- for i, k in enumerate(lst, start=1):
15
- assert k == "Element {}".format(i)
16
- lst.pop_back()
17
- assert m.print_opaque_list(lst) == "Opaque list: [Element 1]"
18
-
19
- cvp = m.ClassWithSTLVecProperty()
20
- assert m.print_opaque_list(cvp.stringList) == "Opaque list: []"
21
-
22
- cvp.stringList = lst
23
- cvp.stringList.push_back("Element 3")
24
- assert m.print_opaque_list(cvp.stringList) == "Opaque list: [Element 1, Element 3]"
25
-
26
-
27
- def test_pointers(msg):
28
- living_before = ConstructorStats.get(UserType).alive()
29
- assert m.get_void_ptr_value(m.return_void_ptr()) == 0x1234
30
- assert m.get_void_ptr_value(UserType()) # Should also work for other C++ types
31
- assert ConstructorStats.get(UserType).alive() == living_before
32
-
33
- with pytest.raises(TypeError) as excinfo:
34
- m.get_void_ptr_value([1, 2, 3]) # This should not work
35
- assert msg(excinfo.value) == """
36
- get_void_ptr_value(): incompatible function arguments. The following argument types are supported:
37
- 1. (arg0: capsule) -> int
38
-
39
- Invoked with: [1, 2, 3]
40
- """ # noqa: E501 line too long
41
-
42
- assert m.return_null_str() is None
43
- assert m.get_null_str_value(m.return_null_str()) is not None
44
-
45
- ptr = m.return_unique_ptr()
46
- assert "StringList" in repr(ptr)
47
- assert m.print_opaque_list(ptr) == "Opaque list: [some value]"
 
spaces/CVPR/LIVE/pybind11/tools/pybind11Common.cmake DELETED
@@ -1,296 +0,0 @@
1
- #[======================================================[.rst
2
-
3
- Adds the following targets::
4
-
5
- pybind11::pybind11 - link to headers and pybind11
6
- pybind11::module - Adds module links
7
- pybind11::embed - Adds embed links
8
- pybind11::lto - Link time optimizations (manual selection)
9
- pybind11::thin_lto - Link time optimizations (manual selection)
10
- pybind11::python_link_helper - Adds link to Python libraries
11
- pybind11::python2_no_register - Avoid warning/error with Python 2 + C++14/7
12
- pybind11::windows_extras - MSVC bigobj and mp for building multithreaded
13
-
14
- Adds the following functions::
15
-
16
- pybind11_strip(target) - strip target after building on linux/macOS
17
-
18
-
19
- #]======================================================]
20
-
21
- # CMake 3.10 has an include_guard command, but we can't use that yet
22
- if(TARGET pybind11::lto)
23
- return()
24
- endif()
25
-
26
- # If we are in subdirectory mode, all IMPORTED targets must be GLOBAL. If we
27
- # are in CONFIG mode, they should be "normal" targets instead.
28
- # In CMake 3.11+ you can promote a target to global after you create it,
29
- # which might be simpler than this check.
30
- get_property(
31
- is_config
32
- TARGET pybind11::headers
33
- PROPERTY IMPORTED)
34
- if(NOT is_config)
35
- set(optional_global GLOBAL)
36
- endif()
37
-
38
- # --------------------- Shared targets ----------------------------
39
-
40
- # Build an interface library target:
41
- add_library(pybind11::pybind11 IMPORTED INTERFACE ${optional_global})
42
- set_property(
43
- TARGET pybind11::pybind11
44
- APPEND
45
- PROPERTY INTERFACE_LINK_LIBRARIES pybind11::headers)
46
-
47
- # Build a module target:
48
- add_library(pybind11::module IMPORTED INTERFACE ${optional_global})
49
- set_property(
50
- TARGET pybind11::module
51
- APPEND
52
- PROPERTY INTERFACE_LINK_LIBRARIES pybind11::pybind11)
53
-
54
- # Build an embed library target:
55
- add_library(pybind11::embed IMPORTED INTERFACE ${optional_global})
56
- set_property(
57
- TARGET pybind11::embed
58
- APPEND
59
- PROPERTY INTERFACE_LINK_LIBRARIES pybind11::pybind11)
60
-
61
- # ----------------------- no register ----------------------
62
-
63
- # Workaround for Python 2.7 and C++17 (C++14 as a warning) incompatibility
64
- # This adds the flags -Wno-register and -Wno-deprecated-register if the compiler
65
- # is Clang 3.9+ or AppleClang and the compile language is CXX, or /wd5033 for MSVC (all languages,
66
- # since MSVC didn't recognize COMPILE_LANGUAGE until CMake 3.11+).
67
-
68
- add_library(pybind11::python2_no_register INTERFACE IMPORTED ${optional_global})
69
- set(clang_4plus
70
- "$<AND:$<CXX_COMPILER_ID:Clang>,$<NOT:$<VERSION_LESS:$<CXX_COMPILER_VERSION>,3.9>>>")
71
- set(no_register "$<OR:${clang_4plus},$<CXX_COMPILER_ID:AppleClang>>")
72
-
73
- if(MSVC AND CMAKE_VERSION VERSION_LESS 3.11)
74
- set(cxx_no_register "${no_register}")
75
- else()
76
- set(cxx_no_register "$<AND:$<COMPILE_LANGUAGE:CXX>,${no_register}>")
77
- endif()
78
-
79
- set(msvc "$<CXX_COMPILER_ID:MSVC>")
80
-
81
- set_property(
82
- TARGET pybind11::python2_no_register
83
- PROPERTY INTERFACE_COMPILE_OPTIONS
84
- "$<${cxx_no_register}:-Wno-register;-Wno-deprecated-register>" "$<${msvc}:/wd5033>")
85
-
86
- # --------------------------- link helper ---------------------------
87
-
88
- add_library(pybind11::python_link_helper IMPORTED INTERFACE ${optional_global})
89
-
90
- if(CMAKE_VERSION VERSION_LESS 3.13)
91
- # In CMake 3.11+, you can set INTERFACE properties via the normal methods, and
92
- # this would be simpler.
93
- set_property(
94
- TARGET pybind11::python_link_helper
95
- APPEND
96
- PROPERTY INTERFACE_LINK_LIBRARIES "$<$<PLATFORM_ID:Darwin>:-undefined dynamic_lookup>")
97
- else()
98
- # link_options was added in 3.13+
99
- # This is safer, because you are ensured the deduplication pass in CMake will not consider
100
- # these separate and remove one but not the other.
101
- set_property(
102
- TARGET pybind11::python_link_helper
103
- APPEND
104
- PROPERTY INTERFACE_LINK_OPTIONS "$<$<PLATFORM_ID:Darwin>:LINKER:-undefined,dynamic_lookup>")
105
- endif()
106
-
107
- # ------------------------ Windows extras -------------------------
108
-
109
- add_library(pybind11::windows_extras IMPORTED INTERFACE ${optional_global})
110
-
111
- if(MSVC)
112
- # /MP enables multithreaded builds (relevant when there are many files), /bigobj is
113
- # needed for bigger binding projects due to the limit to 64k addressable sections
114
- set_property(
115
- TARGET pybind11::windows_extras
116
- APPEND
117
- PROPERTY INTERFACE_COMPILE_OPTIONS /bigobj)
118
-
119
- if(CMAKE_VERSION VERSION_LESS 3.11)
120
- set_property(
121
- TARGET pybind11::windows_extras
122
- APPEND
123
- PROPERTY INTERFACE_COMPILE_OPTIONS $<$<NOT:$<CONFIG:Debug>>:/MP>)
124
- else()
125
- # Only set these options for C++ files. This is important so that, for
126
- # instance, projects that include other types of source files like CUDA
127
- # .cu files don't get these options propagated to nvcc since that would
128
- # cause the build to fail.
129
- set_property(
130
- TARGET pybind11::windows_extras
131
- APPEND
132
- PROPERTY INTERFACE_COMPILE_OPTIONS $<$<NOT:$<CONFIG:Debug>>:$<$<COMPILE_LANGUAGE:CXX>:/MP>>)
133
- endif()
134
- endif()
135
-
136
- # ----------------------- Legacy option --------------------------
137
-
138
- # Warn or error if old variable name used
139
- if(PYBIND11_CPP_STANDARD)
140
- string(REGEX MATCH [[..$]] VAL "${PYBIND11_CPP_STANDARD}")
141
- if(CMAKE_CXX_STANDARD)
142
- if(NOT CMAKE_CXX_STANDARD STREQUAL VAL)
143
- message(WARNING "CMAKE_CXX_STANDARD=${CMAKE_CXX_STANDARD} does not match "
144
- "PYBIND11_CPP_STANDARD=${PYBIND11_CPP_STANDARD}, "
145
- "please remove PYBIND11_CPP_STANDARD from your cache")
146
- endif()
147
- else()
148
- set(supported_standards 11 14 17 20)
149
- if("${VAL}" IN_LIST supported_standards)
150
- message(WARNING "USE -DCMAKE_CXX_STANDARD=${VAL} instead of PYBIND11_CPP_STANDARD")
151
- set(CMAKE_CXX_STANDARD
152
- ${VAL}
153
- CACHE STRING "From PYBIND11_CPP_STANDARD")
154
- else()
155
- message(FATAL_ERROR "PYBIND11_CPP_STANDARD should be replaced with CMAKE_CXX_STANDARD "
156
- "(last two chars: ${VAL} not understood as a valid CXX std)")
157
- endif()
158
- endif()
159
- endif()
160
-
161
- # --------------------- Python specifics -------------------------
162
-
163
- # Check to see which Python mode we are in, new, old, or no python
164
- if(PYBIND11_NOPYTHON)
165
- set(_pybind11_nopython ON)
166
- elseif(
167
- PYBIND11_FINDPYTHON
168
- OR Python_FOUND
169
- OR Python2_FOUND
170
- OR Python3_FOUND)
171
- # New mode
172
- include("${CMAKE_CURRENT_LIST_DIR}/pybind11NewTools.cmake")
173
-
174
- else()
175
-
176
- # Classic mode
177
- include("${CMAKE_CURRENT_LIST_DIR}/pybind11Tools.cmake")
178
-
179
- endif()
180
-
181
- # --------------------- LTO -------------------------------
182
-
183
- include(CheckCXXCompilerFlag)
184
-
185
- # Checks whether the given CXX/linker flags can compile and link a cxx file.
186
- # cxxflags and linkerflags are lists of flags to use. The result variable is a
187
- # unique variable name for each set of flags: the compilation result will be
188
- # cached based on the result variable. If the flags work, sets them in
189
- # cxxflags_out/linkerflags_out internal cache variables (in addition to
190
- # ${result}).
191
- function(_pybind11_return_if_cxx_and_linker_flags_work result cxxflags linkerflags cxxflags_out
192
- linkerflags_out)
193
- set(CMAKE_REQUIRED_LIBRARIES ${linkerflags})
194
- check_cxx_compiler_flag("${cxxflags}" ${result})
195
- if(${result})
196
- set(${cxxflags_out}
197
- "${cxxflags}"
198
- PARENT_SCOPE)
199
- set(${linkerflags_out}
200
- "${linkerflags}"
201
- PARENT_SCOPE)
202
- endif()
203
- endfunction()
204
-
205
- function(_pybind11_generate_lto target prefer_thin_lto)
206
- if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
207
- set(cxx_append "")
208
- set(linker_append "")
209
- if(CMAKE_CXX_COMPILER_ID MATCHES "Clang" AND NOT APPLE)
210
- # Clang Gold plugin does not support -Os; append -O3 to MinSizeRel builds to override it
211
- set(linker_append ";$<$<CONFIG:MinSizeRel>:-O3>")
212
- elseif(CMAKE_CXX_COMPILER_ID MATCHES "GNU")
213
- set(cxx_append ";-fno-fat-lto-objects")
214
- endif()
215
-
216
- if(CMAKE_CXX_COMPILER_ID MATCHES "Clang" AND prefer_thin_lto)
217
- _pybind11_return_if_cxx_and_linker_flags_work(
218
- HAS_FLTO_THIN "-flto=thin${cxx_append}" "-flto=thin${linker_append}"
219
- PYBIND11_LTO_CXX_FLAGS PYBIND11_LTO_LINKER_FLAGS)
220
- endif()
221
-
222
- if(NOT HAS_FLTO_THIN)
223
- _pybind11_return_if_cxx_and_linker_flags_work(
224
- HAS_FLTO "-flto${cxx_append}" "-flto${linker_append}" PYBIND11_LTO_CXX_FLAGS
225
- PYBIND11_LTO_LINKER_FLAGS)
226
- endif()
227
- elseif(CMAKE_CXX_COMPILER_ID MATCHES "Intel")
228
- # Intel equivalent to LTO is called IPO
229
- _pybind11_return_if_cxx_and_linker_flags_work(HAS_INTEL_IPO "-ipo" "-ipo"
230
- PYBIND11_LTO_CXX_FLAGS PYBIND11_LTO_LINKER_FLAGS)
231
- elseif(MSVC)
232
- # cmake only interprets libraries as linker flags when they start with a - (otherwise it
233
- # converts /LTCG to \LTCG as if it was a Windows path). Luckily MSVC supports passing flags
234
- # with - instead of /, even if it is a bit non-standard:
235
- _pybind11_return_if_cxx_and_linker_flags_work(HAS_MSVC_GL_LTCG "/GL" "-LTCG"
236
- PYBIND11_LTO_CXX_FLAGS PYBIND11_LTO_LINKER_FLAGS)
237
- endif()
238
-
239
- # Enable LTO flags if found, except for Debug builds
240
- if(PYBIND11_LTO_CXX_FLAGS)
241
- set(not_debug "$<NOT:$<CONFIG:Debug>>")
242
- set(cxx_lang "$<COMPILE_LANGUAGE:CXX>")
243
- if(MSVC AND CMAKE_VERSION VERSION_LESS 3.11)
244
- set(genex "${not_debug}")
245
- else()
246
- set(genex "$<AND:${not_debug},${cxx_lang}>")
247
- endif()
248
- set_property(
249
- TARGET ${target}
250
- APPEND
251
- PROPERTY INTERFACE_COMPILE_OPTIONS "$<${genex}:${PYBIND11_LTO_CXX_FLAGS}>")
252
- if(CMAKE_PROJECT_NAME STREQUAL "pybind11")
253
- message(STATUS "${target} enabled")
254
- endif()
255
- else()
256
- if(CMAKE_PROJECT_NAME STREQUAL "pybind11")
257
- message(STATUS "${target} disabled (not supported by the compiler and/or linker)")
258
- endif()
259
- endif()
260
-
261
- if(PYBIND11_LTO_LINKER_FLAGS)
262
- if(CMAKE_VERSION VERSION_LESS 3.11)
263
- set_property(
264
- TARGET ${target}
265
- APPEND
266
- PROPERTY INTERFACE_LINK_LIBRARIES "$<${not_debug}:${PYBIND11_LTO_LINKER_FLAGS}>")
267
- else()
268
- set_property(
269
- TARGET ${target}
270
- APPEND
271
- PROPERTY INTERFACE_LINK_OPTIONS "$<${not_debug}:${PYBIND11_LTO_LINKER_FLAGS}>")
272
- endif()
273
- endif()
274
- endfunction()
275
-
276
- add_library(pybind11::lto IMPORTED INTERFACE ${optional_global})
277
- _pybind11_generate_lto(pybind11::lto FALSE)
278
-
279
- add_library(pybind11::thin_lto IMPORTED INTERFACE ${optional_global})
280
- _pybind11_generate_lto(pybind11::thin_lto TRUE)
281
-
282
- # ---------------------- pybind11_strip -----------------------------
283
-
284
- function(pybind11_strip target_name)
285
- # Strip unnecessary sections of the binary on Linux/Mac OS
286
- if(CMAKE_STRIP)
287
- if(APPLE)
288
- set(x_opt -x)
289
- endif()
290
-
291
- add_custom_command(
292
- TARGET ${target_name}
293
- POST_BUILD
294
- COMMAND ${CMAKE_STRIP} ${x_opt} $<TARGET_FILE:${target_name}>)
295
- endif()
296
- endfunction()
 
spaces/CVPR/LIVE/scene.h DELETED
@@ -1,120 +0,0 @@
1
- #pragma once
2
-
3
- #include "diffvg.h"
4
- #include "aabb.h"
5
- #include <vector>
6
-
7
- struct Shape;
8
- struct ShapeGroup;
9
- struct Filter;
10
- struct DFilter;
11
-
12
- struct BVHNode {
13
- int child0, child1; // child1 is negative if it is a leaf
14
- AABB box;
15
- float max_radius;
16
- };
17
-
18
- struct Scene {
19
- Scene(int canvas_width,
20
- int canvas_height,
21
- const std::vector<const Shape *> &shape_list,
22
- const std::vector<const ShapeGroup *> &shape_group_list,
23
- const Filter &filter,
24
- bool use_gpu,
25
- int gpu_index);
26
-
27
- ~Scene();
28
-
29
- int canvas_width;
30
- int canvas_height;
31
-
32
- uint8_t *buffer;
33
-
34
- Shape *shapes;
35
- Shape *d_shapes;
36
- ShapeGroup *shape_groups;
37
- ShapeGroup *d_shape_groups;
38
- Filter *filter;
39
- DFilter *d_filter;
40
- // For accelerating intersection
41
- AABB *shapes_bbox;
42
- BVHNode **path_bvhs; // Only for Path
43
- BVHNode **shape_groups_bvh_nodes; // One BVH for each shape group
44
- BVHNode *bvh_nodes;
45
-
46
- int num_shapes;
47
- int num_shape_groups;
48
- // shape_groups reuse shapes, so the total number of shapes
49
- // doesn't necessarily equal num_shapes
50
- int num_total_shapes;
51
- bool use_gpu;
52
- int gpu_index;
53
-
54
- // For edge sampling
55
- float *shapes_length;
56
- float *sample_shapes_cdf;
57
- float *sample_shapes_pmf;
58
- int *sample_shape_id;
59
- int *sample_group_id;
60
- float **path_length_cdf;
61
- float **path_length_pmf;
62
- int **path_point_id_map;
63
-
64
- ShapeGroup get_d_shape_group(int group_id) const;
65
- Shape get_d_shape(int shape_id) const;
66
- float get_d_filter_radius() const;
67
- };
68
-
69
- struct SceneData {
70
- int canvas_width;
71
- int canvas_height;
72
- Shape *shapes;
73
- Shape *d_shapes;
74
- ShapeGroup *shape_groups;
75
- ShapeGroup *d_shape_groups;
76
- Filter *filter;
77
- DFilter *d_filter;
78
- AABB *shapes_bbox;
79
- BVHNode **path_bvhs; // Only for Path
80
- BVHNode **shape_groups_bvh_nodes;
81
- BVHNode *bvh_nodes;
82
- int num_shapes;
83
- int num_shape_groups;
84
- int num_total_shapes;
85
- // For edge sampling
86
- float *shapes_length;
87
- float *sample_shapes_cdf;
88
- float *sample_shapes_pmf;
89
- int *sample_shape_id;
90
- int *sample_group_id;
91
- float **path_length_cdf;
92
- float **path_length_pmf;
93
- int **path_point_id_map;
94
- };
95
-
96
- inline SceneData get_scene_data(const Scene &scene) {
97
- return SceneData{scene.canvas_width,
98
- scene.canvas_height,
99
- scene.shapes,
100
- scene.d_shapes,
101
- scene.shape_groups,
102
- scene.d_shape_groups,
103
- scene.filter,
104
- scene.d_filter,
105
- scene.shapes_bbox,
106
- scene.path_bvhs,
107
- scene.shape_groups_bvh_nodes,
108
- scene.bvh_nodes,
109
- scene.num_shapes,
110
- scene.num_shape_groups,
111
- scene.num_total_shapes,
112
- scene.shapes_length,
113
- scene.sample_shapes_cdf,
114
- scene.sample_shapes_pmf,
115
- scene.sample_shape_id,
116
- scene.sample_group_id,
117
- scene.path_length_cdf,
118
- scene.path_length_pmf,
119
- scene.path_point_id_map};
120
- }
 
spaces/CVPR/LIVE/thrust/thrust/detail/config/simple_defines.h DELETED
@@ -1,30 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- /*! \file simple_defines.h
18
- * \brief Primitive macros without dependencies.
19
- */
20
-
21
- #pragma once
22
-
23
- #define THRUST_UNKNOWN 0
24
- #define THRUST_FALSE 0
25
- #define THRUST_TRUE 1
26
-
27
- #define THRUST_UNUSED_VAR(expr) do { (void)(expr); } while (0)
28
-
29
- #define THRUST_PREVENT_MACRO_SUBSTITUTION
30
-
 
spaces/CVPR/LIVE/thrust/thrust/iterator/detail/distance_from_result.h DELETED
@@ -1,42 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/config.h>
20
- #include <thrust/detail/type_traits.h>
21
-
22
- namespace thrust
23
- {
24
-
25
- namespace detail
26
- {
27
-
28
- // since both arguments are known to be specializations of iterator_facade,
29
- // it's legal to access IteratorFacade2::difference_type
30
- template<typename IteratorFacade1, typename IteratorFacade2>
31
- struct distance_from_result
32
- : eval_if<
33
- is_convertible<IteratorFacade2,IteratorFacade1>::value,
34
- identity_<typename IteratorFacade1::difference_type>,
35
- identity_<typename IteratorFacade2::difference_type>
36
- >
37
- {};
38
-
39
- } // end detail
40
-
41
- } // end thrust
42
-
 
spaces/CVPR/WALT/mmdet/datasets/cityscapes.py DELETED
@@ -1,334 +0,0 @@
1
- # Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/cityscapes.py # noqa
2
- # and https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa
3
-
4
- import glob
5
- import os
6
- import os.path as osp
7
- import tempfile
8
- from collections import OrderedDict
9
-
10
- import mmcv
11
- import numpy as np
12
- import pycocotools.mask as maskUtils
13
- from mmcv.utils import print_log
14
-
15
- from .builder import DATASETS
16
- from .coco import CocoDataset
17
-
18
-
19
- @DATASETS.register_module()
20
- class CityscapesDataset(CocoDataset):
21
-
22
- CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle',
23
- 'bicycle')
24
-
25
- def _filter_imgs(self, min_size=32):
26
- """Filter images too small or without ground truths."""
27
- valid_inds = []
28
- # obtain images that contain annotation
29
- ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values())
30
- # obtain images that contain annotations of the required categories
31
- ids_in_cat = set()
32
- for i, class_id in enumerate(self.cat_ids):
33
- ids_in_cat |= set(self.coco.cat_img_map[class_id])
34
- # merge the image id sets of the two conditions and use the merged set
35
- # to filter out images if self.filter_empty_gt=True
36
- ids_in_cat &= ids_with_ann
37
-
38
- valid_img_ids = []
39
- for i, img_info in enumerate(self.data_infos):
40
- img_id = img_info['id']
41
- ann_ids = self.coco.getAnnIds(imgIds=[img_id])
42
- ann_info = self.coco.loadAnns(ann_ids)
43
- all_iscrowd = all([_['iscrowd'] for _ in ann_info])
44
- if self.filter_empty_gt and (self.img_ids[i] not in ids_in_cat
45
- or all_iscrowd):
46
- continue
47
- if min(img_info['width'], img_info['height']) >= min_size:
48
- valid_inds.append(i)
49
- valid_img_ids.append(img_id)
50
- self.img_ids = valid_img_ids
51
- return valid_inds
52
-
53
- def _parse_ann_info(self, img_info, ann_info):
54
- """Parse bbox and mask annotation.
55
-
56
- Args:
57
- img_info (dict): Image info of an image.
58
- ann_info (list[dict]): Annotation info of an image.
59
-
60
- Returns:
61
- dict: A dict containing the following keys: bboxes, \
62
- bboxes_ignore, labels, masks, seg_map. \
63
- "masks" are already decoded into binary masks.
64
- """
65
- gt_bboxes = []
66
- gt_labels = []
67
- gt_bboxes_ignore = []
68
- gt_masks_ann = []
69
-
70
- for i, ann in enumerate(ann_info):
71
- if ann.get('ignore', False):
72
- continue
73
- x1, y1, w, h = ann['bbox']
74
- if ann['area'] <= 0 or w < 1 or h < 1:
75
- continue
76
- if ann['category_id'] not in self.cat_ids:
77
- continue
78
- bbox = [x1, y1, x1 + w, y1 + h]
79
- if ann.get('iscrowd', False):
80
- gt_bboxes_ignore.append(bbox)
81
- else:
82
- gt_bboxes.append(bbox)
83
- gt_labels.append(self.cat2label[ann['category_id']])
84
- gt_masks_ann.append(ann['segmentation'])
85
-
86
- if gt_bboxes:
87
- gt_bboxes = np.array(gt_bboxes, dtype=np.float32)
88
- gt_labels = np.array(gt_labels, dtype=np.int64)
89
- else:
90
- gt_bboxes = np.zeros((0, 4), dtype=np.float32)
91
- gt_labels = np.array([], dtype=np.int64)
92
-
93
- if gt_bboxes_ignore:
94
- gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32)
95
- else:
96
- gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32)
97
-
98
- ann = dict(
99
- bboxes=gt_bboxes,
100
- labels=gt_labels,
101
- bboxes_ignore=gt_bboxes_ignore,
102
- masks=gt_masks_ann,
103
- seg_map=img_info['segm_file'])
104
-
105
- return ann
106
-
107
- def results2txt(self, results, outfile_prefix):
108
- """Dump the detection results to a txt file.
109
-
110
- Args:
111
- results (list[list | tuple]): Testing results of the
112
- dataset.
113
- outfile_prefix (str): The filename prefix of the json files.
114
- If the prefix is "somepath/xxx",
115
- the txt files will be named "somepath/xxx.txt".
116
-
117
- Returns:
118
- list[str]: Result txt files which contains corresponding \
119
- instance segmentation images.
120
- """
121
- try:
122
- import cityscapesscripts.helpers.labels as CSLabels
123
- except ImportError:
124
- raise ImportError('Please run "pip install cityscapesscripts" to '
125
- 'install cityscapesscripts first.')
126
- result_files = []
127
- os.makedirs(outfile_prefix, exist_ok=True)
128
- prog_bar = mmcv.ProgressBar(len(self))
129
- for idx in range(len(self)):
130
- result = results[idx]
131
- filename = self.data_infos[idx]['filename']
132
- basename = osp.splitext(osp.basename(filename))[0]
133
- pred_txt = osp.join(outfile_prefix, basename + '_pred.txt')
134
-
135
- bbox_result, segm_result = result
136
- bboxes = np.vstack(bbox_result)
137
- # segm results
138
- if isinstance(segm_result, tuple):
139
- # Some detectors use different scores for bbox and mask,
140
- # like Mask Scoring R-CNN. Score of segm will be used instead
141
- # of bbox score.
142
- segms = mmcv.concat_list(segm_result[0])
143
- mask_score = segm_result[1]
144
- else:
145
- # use bbox score for mask score
146
- segms = mmcv.concat_list(segm_result)
147
- mask_score = [bbox[-1] for bbox in bboxes]
148
- labels = [
149
- np.full(bbox.shape[0], i, dtype=np.int32)
150
- for i, bbox in enumerate(bbox_result)
151
- ]
152
- labels = np.concatenate(labels)
153
-
154
- assert len(bboxes) == len(segms) == len(labels)
155
- num_instances = len(bboxes)
156
- prog_bar.update()
157
- with open(pred_txt, 'w') as fout:
158
- for i in range(num_instances):
159
- pred_class = labels[i]
160
- classes = self.CLASSES[pred_class]
161
- class_id = CSLabels.name2label[classes].id
162
- score = mask_score[i]
163
- mask = maskUtils.decode(segms[i]).astype(np.uint8)
164
- png_filename = osp.join(outfile_prefix,
165
- basename + f'_{i}_{classes}.png')
166
- mmcv.imwrite(mask, png_filename)
167
- fout.write(f'{osp.basename(png_filename)} {class_id} '
168
- f'{score}\n')
169
- result_files.append(pred_txt)
170
-
171
- return result_files
172
-
173
- def format_results(self, results, txtfile_prefix=None):
174
- """Format the results to txt (standard format for Cityscapes
175
- evaluation).
176
-
177
- Args:
178
- results (list): Testing results of the dataset.
179
- txtfile_prefix (str | None): The prefix of txt files. It includes
180
- the file path and the prefix of filename, e.g., "a/b/prefix".
181
- If not specified, a temp file will be created. Default: None.
182
-
183
- Returns:
184
- tuple: (result_files, tmp_dir), result_files is a dict containing \
185
- the json filepaths, tmp_dir is the temporal directory created \
186
- for saving txt/png files when txtfile_prefix is not specified.
187
- """
188
- assert isinstance(results, list), 'results must be a list'
189
- assert len(results) == len(self), (
190
- 'The length of results is not equal to the dataset len: {} != {}'.
191
- format(len(results), len(self)))
-
198
- if txtfile_prefix is None:
199
- tmp_dir = tempfile.TemporaryDirectory()
200
- txtfile_prefix = osp.join(tmp_dir.name, 'results')
201
- else:
202
- tmp_dir = None
203
- result_files = self.results2txt(results, txtfile_prefix)
204
-
205
- return result_files, tmp_dir
206
-
207
- def evaluate(self,
208
- results,
209
- metric='bbox',
210
- logger=None,
211
- outfile_prefix=None,
212
- classwise=False,
213
- proposal_nums=(100, 300, 1000),
214
- iou_thrs=np.arange(0.5, 0.96, 0.05)):
215
- """Evaluation in Cityscapes/COCO protocol.
216
-
217
- Args:
218
- results (list[list | tuple]): Testing results of the dataset.
219
- metric (str | list[str]): Metrics to be evaluated. Options are
220
- 'bbox', 'segm', 'proposal', 'proposal_fast'.
221
- logger (logging.Logger | str | None): Logger used for printing
222
- related information during evaluation. Default: None.
223
- outfile_prefix (str | None): The prefix of output file. It includes
224
- the file path and the prefix of filename, e.g., "a/b/prefix".
225
- If results are evaluated with COCO protocol, it would be the
226
- prefix of output json file. For example, the metric is 'bbox'
227
- and 'segm', then json files would be "a/b/prefix.bbox.json" and
228
- "a/b/prefix.segm.json".
229
- If results are evaluated with cityscapes protocol, it would be
230
- the prefix of output txt/png files. The output files would be
231
- png images under folder "a/b/prefix/xxx/" and the file name of
232
- images would be written into a txt file
233
- "a/b/prefix/xxx_pred.txt", where "xxx" is the video name of
234
- cityscapes. If not specified, a temp file will be created.
235
- Default: None.
236
- classwise (bool): Whether to evaluating the AP for each class.
237
- proposal_nums (Sequence[int]): Proposal number used for evaluating
238
- recalls, such as recall@100, recall@1000.
239
- Default: (100, 300, 1000).
240
- iou_thrs (Sequence[float]): IoU threshold used for evaluating
241
- recalls. If set to a list, the average recall of all IoUs will
242
- also be computed. Default: np.arange(0.5, 0.96, 0.05).
243
-
244
- Returns:
245
- dict[str, float]: COCO style evaluation metric or cityscapes mAP \
246
- and AP@50.
247
- """
248
- eval_results = dict()
249
-
250
- metrics = metric.copy() if isinstance(metric, list) else [metric]
251
-
252
- if 'cityscapes' in metrics:
253
- eval_results.update(
254
- self._evaluate_cityscapes(results, outfile_prefix, logger))
255
- metrics.remove('cityscapes')
256
-
257
- # left metrics are all coco metric
258
- if len(metrics) > 0:
259
- # create CocoDataset with CityscapesDataset annotation
260
- self_coco = CocoDataset(self.ann_file, self.pipeline.transforms,
261
- None, self.data_root, self.img_prefix,
262
- self.seg_prefix, self.proposal_file,
263
- self.test_mode, self.filter_empty_gt)
264
- # TODO: remove this in the future
265
- # reload annotations of correct class
266
- self_coco.CLASSES = self.CLASSES
267
- self_coco.data_infos = self_coco.load_annotations(self.ann_file)
268
- eval_results.update(
269
- self_coco.evaluate(results, metrics, logger, outfile_prefix,
270
- classwise, proposal_nums, iou_thrs))
271
-
272
- return eval_results
273
-
274
- def _evaluate_cityscapes(self, results, txtfile_prefix, logger):
275
- """Evaluation in Cityscapes protocol.
276
-
277
- Args:
278
- results (list): Testing results of the dataset.
279
- txtfile_prefix (str | None): The prefix of output txt file
280
- logger (logging.Logger | str | None): Logger used for printing
281
- related information during evaluation. Default: None.
282
-
283
- Returns:
284
- dict[str: float]: Cityscapes evaluation results, contains 'mAP' \
285
- and 'AP@50'.
286
- """
287
-
288
- try:
289
- import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as CSEval # noqa
290
- except ImportError:
291
- raise ImportError('Please run "pip install cityscapesscripts" to '
292
- 'install cityscapesscripts first.')
293
- msg = 'Evaluating in Cityscapes style'
294
- if logger is None:
295
- msg = '\n' + msg
296
- print_log(msg, logger=logger)
297
-
298
- result_files, tmp_dir = self.format_results(results, txtfile_prefix)
299
-
300
- if tmp_dir is None:
301
- result_dir = osp.join(txtfile_prefix, 'results')
302
- else:
303
- result_dir = osp.join(tmp_dir.name, 'results')
304
-
305
- eval_results = OrderedDict()
306
- print_log(f'Evaluating results under {result_dir} ...', logger=logger)
307
-
308
- # set global states in cityscapes evaluation API
309
- CSEval.args.cityscapesPath = os.path.join(self.img_prefix, '../..')
310
- CSEval.args.predictionPath = os.path.abspath(result_dir)
311
- CSEval.args.predictionWalk = None
312
- CSEval.args.JSONOutput = False
313
- CSEval.args.colorized = False
314
- CSEval.args.gtInstancesFile = os.path.join(result_dir,
315
- 'gtInstances.json')
316
- CSEval.args.groundTruthSearch = os.path.join(
317
- self.img_prefix.replace('leftImg8bit', 'gtFine'),
318
- '*/*_gtFine_instanceIds.png')
319
-
320
- groundTruthImgList = glob.glob(CSEval.args.groundTruthSearch)
321
- assert len(groundTruthImgList), 'Cannot find ground truth images' \
322
- f' in {CSEval.args.groundTruthSearch}.'
323
- predictionImgList = []
324
- for gt in groundTruthImgList:
325
- predictionImgList.append(CSEval.getPrediction(gt, CSEval.args))
326
- CSEval_results = CSEval.evaluateImgLists(predictionImgList,
327
- groundTruthImgList,
328
- CSEval.args)['averages']
329
-
330
- eval_results['mAP'] = CSEval_results['allAp']
331
- eval_results['AP@50'] = CSEval_results['allAp50%']
332
- if tmp_dir is not None:
333
- tmp_dir.cleanup()
334
- return eval_results
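
A hypothetical evaluation sketch for the dataset above (the paths and the one-step pipeline are placeholders; a real run needs mmdet, cityscapesscripts, and the Cityscapes data on disk):

    from mmdet.datasets import build_dataset

    dataset = build_dataset(dict(
        type='CityscapesDataset',
        ann_file='data/cityscapes/annotations/instancesonly_filtered_gtFine_val.json',
        img_prefix='data/cityscapes/leftImg8bit/val/',
        pipeline=[dict(type='LoadImageFromFile')],  # a real test pipeline has more transforms
    ))
    # results: one (bbox_result, segm_result) tuple per image, e.g. from single_gpu_test()
    # metrics = dataset.evaluate(results, metric=['bbox', 'segm'])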
 
spaces/ChandraMohanNayal/AutoGPT/tests/unit/json_tests.py DELETED
@@ -1,114 +0,0 @@
1
- import unittest
2
-
3
- from autogpt.json_utils.json_fix_llm import fix_and_parse_json
4
-
5
-
6
- class TestParseJson(unittest.TestCase):
7
- def test_valid_json(self):
8
- # Test that a valid JSON string is parsed correctly
9
- json_str = '{"name": "John", "age": 30, "city": "New York"}'
10
- obj = fix_and_parse_json(json_str)
11
- self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"})
12
-
13
- def test_invalid_json_minor(self):
14
- # Test that a JSON string with a minor error can be fixed without GPT
15
- json_str = '{"name": "John", "age": 30, "city": "New York",}'
16
- self.assertEqual(
17
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False),
18
- {"name": "John", "age": 30, "city": "New York"},
19
- )
20
-
21
- def test_invalid_json_major_with_gpt(self):
22
- # Test that a badly malformed JSON string can be fixed when try_to_fix_with_gpt is True
23
- json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END'
24
- self.assertEqual(
25
- fix_and_parse_json(json_str, try_to_fix_with_gpt=True),
26
- {"name": "John", "age": 30, "city": "New York"},
27
- )
28
-
29
- def test_invalid_json_major_without_gpt(self):
30
- # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False
31
- json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END'
32
- # Assert that this raises an exception:
33
- with self.assertRaises(Exception):
34
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False)
35
-
36
- def test_invalid_json_leading_sentence_with_gpt(self):
37
- # Test that JSON preceded by a leading sentence can still be parsed when try_to_fix_with_gpt is False
38
- json_str = """I suggest we start by browsing the repository to find any issues that we can fix.
39
-
40
- {
41
- "command": {
42
- "name": "browse_website",
43
- "args":{
44
- "url": "https://github.com/Torantulino/Auto-GPT"
45
- }
46
- },
47
- "thoughts":
48
- {
49
- "text": "I suggest we start browsing the repository to find any issues that we can fix.",
50
- "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
51
- "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
52
- "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
53
- "speak": "I will start browsing the repository to find any issues we can fix."
54
- }
55
- }"""
56
- good_obj = {
57
- "command": {
58
- "name": "browse_website",
59
- "args": {"url": "https://github.com/Torantulino/Auto-GPT"},
60
- },
61
- "thoughts": {
62
- "text": "I suggest we start browsing the repository to find any issues that we can fix.",
63
- "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
64
- "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
65
- "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
66
- "speak": "I will start browsing the repository to find any issues we can fix.",
67
- },
68
- }
69
- # Assert that the JSON is parsed into the expected object:
70
- self.assertEqual(
71
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj
72
- )
73
-
74
- def test_invalid_json_leading_sentence_with_gpt_2(self):
75
- # Test that a second JSON string with a leading sentence can still be parsed when try_to_fix_with_gpt is False
76
- json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this.
77
-
78
- {
79
- "command": {
80
- "name": "browse_website",
81
- "args":{
82
- "url": "https://github.com/Torantulino/Auto-GPT"
83
- }
84
- },
85
- "thoughts":
86
- {
87
- "text": "Browsing the repository to identify potential bugs",
88
- "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
89
- "plan": "- Analyze the repository for potential bugs and areas of improvement",
90
- "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
91
- "speak": "I am browsing the repository to identify potential bugs."
92
- }
93
- }"""
94
- good_obj = {
95
- "command": {
96
- "name": "browse_website",
97
- "args": {"url": "https://github.com/Torantulino/Auto-GPT"},
98
- },
99
- "thoughts": {
100
- "text": "Browsing the repository to identify potential bugs",
101
- "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
102
- "plan": "- Analyze the repository for potential bugs and areas of improvement",
103
- "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
104
- "speak": "I am browsing the repository to identify potential bugs.",
105
- },
106
- }
107
- # Assert that this raises an exception:
108
- self.assertEqual(
109
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj
110
- )
111
-
112
-
113
- if __name__ == "__main__":
114
- unittest.main()
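
For reference, a minimal illustration of the function under test, assuming the autogpt package is importable (no GPT call is made for minor fixes):

    from autogpt.json_utils.json_fix_llm import fix_and_parse_json

    obj = fix_and_parse_json('{"name": "John",}', try_to_fix_with_gpt=False)
    print(obj)  # {'name': 'John'} -- the trailing comma is repaired locally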
 
spaces/CikeyQI/meme-api/docs/examples/test_api.py DELETED
@@ -1,23 +0,0 @@
1
- import asyncio
2
- import json
3
-
4
- import httpx
5
-
6
-
7
- async def main():
8
- files = [("images", open("avatar.jpg", "rb"))]
9
- texts = []
10
- args = {"circle": True}
11
- data = {"texts": texts, "args": json.dumps(args)}
12
-
13
- url = "http://127.0.0.1:2233/memes/petpet/"
14
- async with httpx.AsyncClient() as client:
15
- resp = await client.post(url, files=files, data=data)
16
-
17
- with open("result.gif", "wb") as f:
18
- f.write(resp.content)
19
-
20
-
21
- if __name__ == "__main__":
22
- loop = asyncio.new_event_loop()
23
- loop.run_until_complete(main())
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-add9ad59.js DELETED
The diff for this file is too large to render. See raw diff
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_typing.py DELETED
@@ -1,28 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2022-present, the HuggingFace Inc. team.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- """Handle typing imports based on system compatibility."""
16
- import sys
17
- from typing import Callable, TypeVar
18
-
19
-
20
- if sys.version_info >= (3, 8):
21
- from typing import Literal, TypedDict
22
- else:
23
- from typing_extensions import Literal, TypedDict # noqa: F401
24
-
25
- HTTP_METHOD_T = Literal["GET", "OPTIONS", "HEAD", "POST", "PUT", "PATCH", "DELETE"]
26
-
27
- # type hint meaning "function signature not changed by decorator"
28
- CallableT = TypeVar("CallableT", bound=Callable)
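
A minimal sketch of what CallableT is for: a decorator annotated with it keeps the wrapped function's signature intact for type checkers. Plain Python, runnable on 3.8+; the logged helper is illustrative, not part of the module:

    from typing import Any, Callable, TypeVar, cast

    CallableT = TypeVar("CallableT", bound=Callable)

    def logged(fn: CallableT) -> CallableT:
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            print(f"calling {fn.__name__}")
            return fn(*args, **kwargs)
        return cast(CallableT, wrapper)  # signature preserved for the type checker

    @logged
    def add(a: int, b: int) -> int:
        return a + b

    print(add(2, 3))  # prints "calling add", then 5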
 
spaces/Datasculptor/LoRA-DreamBooth-Training-UI/inference.py DELETED
@@ -1,94 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import gc
4
- import pathlib
5
-
6
- import gradio as gr
7
- import PIL.Image
8
- import torch
9
- from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
10
- from huggingface_hub import ModelCard
11
-
12
-
13
- class InferencePipeline:
14
- def __init__(self, hf_token: str | None = None):
15
- self.hf_token = hf_token
16
- self.pipe = None
17
- self.device = torch.device(
18
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
19
- self.lora_model_id = None
20
- self.base_model_id = None
21
-
22
- def clear(self) -> None:
23
- self.lora_model_id = None
24
- self.base_model_id = None
25
- del self.pipe
26
- self.pipe = None
27
- torch.cuda.empty_cache()
28
- gc.collect()
29
-
30
- @staticmethod
31
- def check_if_model_is_local(lora_model_id: str) -> bool:
32
- return pathlib.Path(lora_model_id).exists()
33
-
34
- @staticmethod
35
- def get_model_card(model_id: str,
36
- hf_token: str | None = None) -> ModelCard:
37
- if InferencePipeline.check_if_model_is_local(model_id):
38
- card_path = (pathlib.Path(model_id) / 'README.md').as_posix()
39
- else:
40
- card_path = model_id
41
- return ModelCard.load(card_path, token=hf_token)
42
-
43
- @staticmethod
44
- def get_base_model_info(lora_model_id: str,
45
- hf_token: str | None = None) -> str:
46
- card = InferencePipeline.get_model_card(lora_model_id, hf_token)
47
- return card.data.base_model
48
-
49
- def load_pipe(self, lora_model_id: str) -> None:
50
- if lora_model_id == self.lora_model_id:
51
- return
52
- base_model_id = self.get_base_model_info(lora_model_id, self.hf_token)
53
- if base_model_id != self.base_model_id:
54
- if self.device.type == 'cpu':
55
- pipe = DiffusionPipeline.from_pretrained(
56
- base_model_id, use_auth_token=self.hf_token)
57
- else:
58
- pipe = DiffusionPipeline.from_pretrained(
59
- base_model_id,
60
- torch_dtype=torch.float16,
61
- use_auth_token=self.hf_token)
62
- pipe = pipe.to(self.device)
63
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(
64
- pipe.scheduler.config)
65
- self.pipe = pipe
66
- self.pipe.unet.load_attn_procs( # type: ignore
67
- lora_model_id, use_auth_token=self.hf_token)
68
-
69
- self.lora_model_id = lora_model_id # type: ignore
70
- self.base_model_id = base_model_id # type: ignore
71
-
72
- def run(
73
- self,
74
- lora_model_id: str,
75
- prompt: str,
76
- lora_scale: float,
77
- seed: int,
78
- n_steps: int,
79
- guidance_scale: float,
80
- ) -> PIL.Image.Image:
81
- if not torch.cuda.is_available():
82
- raise gr.Error('CUDA is not available.')
83
-
84
- self.load_pipe(lora_model_id)
85
-
86
- generator = torch.Generator(device=self.device).manual_seed(seed)
87
- out = self.pipe(
88
- prompt,
89
- num_inference_steps=n_steps,
90
- guidance_scale=guidance_scale,
91
- generator=generator,
92
- cross_attention_kwargs={'scale': lora_scale},
93
- ) # type: ignore
94
- return out.images[0]
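
A hypothetical usage sketch of the class above (the LoRA repo id is a placeholder; run() requires a CUDA device and a model card with a base_model field):

    pipe = InferencePipeline(hf_token=None)
    image = pipe.run(
        lora_model_id='someuser/my-dreambooth-lora',  # placeholder repo id
        prompt='a photo of sks dog in a bucket',
        lora_scale=1.0,
        seed=0,
        n_steps=25,
        guidance_scale=7.5,
    )
    image.save('out.png')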
 
spaces/Detomo/ai-comic-generation/src/app/interface/about/index.tsx DELETED
@@ -1,46 +0,0 @@
1
- import { Button } from "@/components/ui/button"
2
- import { Dialog, DialogContent, DialogDescription, DialogFooter, DialogHeader, DialogTitle, DialogTrigger } from "@/components/ui/dialog"
3
- import { useState } from "react"
4
-
5
- export function About() {
6
- const [isOpen, setOpen] = useState(false)
7
-
8
- return (
9
- <Dialog open={isOpen} onOpenChange={setOpen}>
10
- <DialogTrigger asChild>
11
- <Button variant="outline">
12
- <span className="hidden md:inline">About this project</span>
13
- <span className="inline md:hidden">About</span>
14
- </Button>
15
- </DialogTrigger>
16
- <DialogContent className="sm:max-w-[425px]">
17
- <DialogHeader>
18
- <DialogTitle>The AI Comic Factory</DialogTitle>
19
- <DialogDescription className="w-full text-center text-lg font-bold text-stone-800">
20
- What is the AI Comic Factory?
21
- </DialogDescription>
22
- </DialogHeader>
23
- <div className="grid gap-4 py-4 text-stone-800">
24
- <p className="">
25
- The AI Comic Factory is a free and open-source application made to demonstrate the capabilities of AI models.
26
- </p>
27
- <p>
28
- 👉 The language model used to generate the descriptions of each panel is <a className="text-stone-600 underline" href="https://huggingface.co/blog/llama2" target="_blank">Llama-2 70b</a>.
29
- </p>
30
- <p>
31
- 👉 The stable diffusion model used to generate the images is the base <a className="text-stone-600 underline" href="https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0" target="_blank">SDXL 1.0</a>.
32
- </p>
33
- <p>
34
- The code is public and can be deployed at home with some changes in the code. See the <a className="text-stone-600 underline" href="https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory/blob/main/README.md" target="_blank">README</a> for details about the architecture.
35
- </p>
36
- <p>
37
- Do you want to create high-res image exports? Please check <a className="text-stone-600 underline" href="https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory/discussions/105#64f84d182f7d3d945cdde3d2" target="_blank">this tutorial</a>.
38
- </p>
39
- </div>
40
- <DialogFooter>
41
- <Button type="submit" onClick={() => setOpen(false)}>Got it</Button>
42
- </DialogFooter>
43
- </DialogContent>
44
- </Dialog>
45
- )
46
- }
 
spaces/DiegoLigtenberg/realtimespeech/parsarg.py DELETED
@@ -1,26 +0,0 @@
1
- import argparse
2
- import yaml
3
-
4
- def model_parser_args():
5
- with open(r'utils/models.yaml') as f:
6
- settings = yaml.full_load(f)
7
- parser = argparse.ArgumentParser()
8
- parser.add_argument("--model", help="see model_settings.yaml",default=settings)
9
- parser.add_argument("--model_names", help="see model_settings.yaml",default=list(settings))
10
- setting_list = []
11
- task_list = []
12
- for i in range(len(settings)):
13
- setting_list.append(list(settings[list(settings.keys())[i]].keys()))
14
- for model in (list(settings.keys())):
15
- task = (settings[model]["task"])
16
- if task not in task_list:task_list.append(task)
17
- setting_list = ([setting for sublist in setting_list for setting in sublist]) # flatten the per-model setting lists
18
- setting_list = [x for i, x in enumerate(setting_list) if x not in setting_list[:i]] # de-duplicate while preserving order
19
- parser.add_argument("--model_settings",help="see model_settings.yaml",default=setting_list)
20
- parser.add_argument("--model_tasks",help="see model_settings.yaml",default=task_list)
21
- parser=parser.parse_args()
22
- return parser
23
-
24
- if __name__ == "__main__":
25
- model_parser_args()
26
-
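For context, here is a minimal sketch of the YAML structure this parser appears to assume: each top-level key is a model name whose settings mapping includes a `task` field. The concrete model names and extra keys below are illustrative assumptions, not taken from the repository.

```python
import yaml

# Hypothetical stand-in for utils/models.yaml; only the nesting and the
# "task" field are inferred from the parser code above.
SAMPLE_YAML = """
whisper-small:
  task: transcription
  language: en
bart-large-cnn:
  task: summarization
  max_length: 142
"""

settings = yaml.full_load(SAMPLE_YAML)

# The same flatten + order-preserving dedup that model_parser_args() performs.
setting_list = [s for model in settings.values() for s in model.keys()]
setting_list = [x for i, x in enumerate(setting_list) if x not in setting_list[:i]]
task_list = list(dict.fromkeys(m["task"] for m in settings.values()))

print(setting_list)  # ['task', 'language', 'max_length']
print(task_list)     # ['transcription', 'summarization']
```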
spaces/Dimalker/Faceswapper/roop/globals.py DELETED
@@ -1,17 +0,0 @@
from typing import List

source_path = None
target_path = None
output_path = None
frame_processors: List[str] = []
keep_fps = None
keep_audio = None
keep_frames = None
many_faces = None
video_encoder = None
video_quality = None
max_memory = None
execution_providers: List[str] = []
execution_threads = None
headless = None
log_level = 'error'
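These module-level globals act as shared configuration that the rest of the package reads at runtime. As a hedged usage sketch only, a caller might populate them like this before handing off to the processing entry point; the entry point itself is hypothetical here and may not match the project's actual API:

```python
# A sketch of configuring the pipeline through roop.globals before running.
import roop.globals

roop.globals.source_path = "face.jpg"
roop.globals.target_path = "input.mp4"
roop.globals.output_path = "swapped.mp4"
roop.globals.frame_processors = ["face_swapper"]
roop.globals.keep_fps = True
roop.globals.keep_audio = True
roop.globals.many_faces = False
roop.globals.execution_providers = ["CPUExecutionProvider"]
roop.globals.execution_threads = 8
roop.globals.headless = True

# ...then hand control to the (hypothetical) processing entry point:
# import roop.core; roop.core.run()
```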
spaces/DrGabrielLopez/GPT2_Chatbot/app.py DELETED
@@ -1,139 +0,0 @@
from transformers import TFAutoModelForCausalLM, AutoTokenizer
import tensorflow as tf
import gradio as gr
import spacy
from spacy import displacy
from transformers import TFAutoModelForSequenceClassification
from scipy.special import softmax
import plotly.express as px
import plotly.io as pio

# configuration params
pio.templates.default = "plotly_dark"

# setting up the text in the page
TITLE = "<center><h1>Talk with an AI</h1></center>"
DESCRIPTION = r"""<center>This application allows you to talk with a machine/robot with state-of-the-art technology!!<br>
The back-end uses the GPT2 model from OpenAI, one of the best models for text generation and comprehension.<br>
Language processing is done using RoBERTa for sentiment analysis and spaCy for named-entity recognition and dependency plotting.<br>
The AI thinks he is a human, so please treat him as such, else he might get angry!<br>
"""
EXAMPLES = [
    ["What is your favorite videogame?"],
    ["What gets you really sad?"],
    ["How can I make you really angry? "],
    ["What do you do for work?"],
    ["What are your hobbies?"],
    ["What is your favorite food?"],
]
ARTICLE = r"""<center>
Done by dr. Gabriel Lopez<br>
For more please visit: <a href='https://sites.google.com/view/dr-gabriel-lopez/home'>My Page</a><br>
For info about the chat-bot model you can also see the <a href="https://arxiv.org/abs/1911.00536">ArXiv paper</a><br>
</center>"""

# Loading necessary NLP models
# dialog
checkpoint = "microsoft/DialoGPT-medium"  # tf
model_gpt2 = TFAutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer_gpt2 = AutoTokenizer.from_pretrained(checkpoint)
# sentiment
checkpoint = "cardiffnlp/twitter-roberta-base-emotion"
model_roberta = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer_roberta = AutoTokenizer.from_pretrained(checkpoint)
# NER & Dependency
nlp = spacy.load("en_core_web_sm")

# text-to-text: chatting function -- GPT2
def chat_with_bot(user_input, chat_history_and_input=[]):
    """Text generation using GPT2"""
    emb_user_input = tokenizer_gpt2.encode(
        user_input + tokenizer_gpt2.eos_token, return_tensors="tf"
    )
    if chat_history_and_input == []:
        bot_input_ids = emb_user_input  # first iteration
    else:
        bot_input_ids = tf.concat(
            [chat_history_and_input, emb_user_input], axis=-1
        )  # other iterations
    chat_history_and_input = model_gpt2.generate(
        bot_input_ids, max_length=1000, pad_token_id=tokenizer_gpt2.eos_token_id
    ).numpy()
    # decode only the newly generated tokens (everything after the input)
    bot_response = tokenizer_gpt2.decode(
        chat_history_and_input[:, bot_input_ids.shape[-1] :][0],
        skip_special_tokens=True,
    )
    return bot_response, chat_history_and_input


# text-to-sentiment
def text_to_sentiment(text_input):
    """Sentiment analysis using RoBERTa"""
    labels = ["anger", "joy", "optimism", "sadness"]
    encoded_input = tokenizer_roberta(text_input, return_tensors="tf")
    output = model_roberta(encoded_input)
    scores = output[0][0].numpy()
    scores = softmax(scores)
    return px.histogram(x=labels, y=scores, height=200)


# text-to-semantics
def text_to_semantics(text_input):
    """NER and dependency plot using spaCy"""
    processed_text = nlp(text_input)
    # Dependency
    html_dep = displacy.render(
        processed_text,
        style="dep",
        options={"compact": True, "color": "white", "bg": "light-black"},
        page=False,
    )
    # NER
    pos_tokens = []
    for token in processed_text:
        pos_tokens.extend([(token.text, token.pos_), (" ", None)])
    return pos_tokens, html_dep


# gradio interface
blocks = gr.Blocks()
with blocks:
    # physical elements
    session_state = gr.State([])
    gr.Markdown(TITLE)
    gr.Markdown(DESCRIPTION)
    with gr.Row():
        with gr.Column():
            in_text = gr.Textbox(value="How was the class?", label="Start chatting!")
            submit_button = gr.Button("Submit")
            gr.Examples(inputs=in_text, examples=EXAMPLES)
        with gr.Column():
            response_text = gr.Textbox(value="", label="GPT2 response:")
            sentiment_plot = gr.Plot(
                label="How is GPT2 feeling about your conversation?:", visible=True
            )
            ner_response = gr.Highlight(
                label="Named Entity Recognition (NER) over response"
            )
            dependency_plot = gr.HTML(label="Dependency plot of response")
    gr.Markdown(ARTICLE)
    # event listeners
    submit_button.click(
        inputs=[in_text, session_state],
        outputs=[response_text, session_state],
        fn=chat_with_bot,
    )
    response_text.change(
        inputs=response_text, outputs=sentiment_plot, fn=text_to_sentiment
    )
    response_text.change(
        inputs=response_text,
        outputs=[ner_response, dependency_plot],
        fn=text_to_semantics,
    )

blocks.launch()
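For reference, here is a minimal sketch of driving `chat_with_bot` for a few turns outside the Gradio UI (assuming the models above are already loaded). It reuses the function's own history-threading convention: the returned token matrix is fed back in as `chat_history_and_input` on the next call.

```python
# Thread the returned history back in on each turn.
history = []
for user_turn in ["Hello!", "What are your hobbies?"]:
    reply, history = chat_with_bot(user_turn, history)
    print(f"user: {user_turn}")
    print(f"bot:  {reply}")
```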
spaces/Dusan/clickbaitonator/fudge/README.md DELETED
@@ -1,155 +0,0 @@
# FUDGE: Controlled Text Generation With Future Discriminators

This repo contains code corresponding to the paper FUDGE: Controlled Text Generation With Future Discriminators (https://arxiv.org/abs/2104.05218) by Kevin Yang and Dan Klein, published at NAACL 2021.

You can also find a video presentation at http://somup.com/crhlVPFKN7 and the corresponding slides in `slides.pptx`.

## Setup/Installation

We tested on Python 3.8.5, but earlier versions of Python 3 are almost certainly fine. To get the required packages (other versions are likely to work too):

```
pip install -r requirements.txt
```

Additionally, to get our pre-trained predictor checkpoints and training data, run:

```
wget https://naacl2021-fudge-files.s3.amazonaws.com/large_files.zip
```

and extract the zip to the top-level `lm-prediction/` folder. (There should be three folders, `ckpt/`, `train_data/`, and `topic_human_evals/`. The zip is 7GB.) Note: the zip does not work for some people; if that is the case, you can get the files directly from https://drive.google.com/drive/folders/1GZfOGqpQxDmIfD2RvuhUQla9eX2OHUXU?usp=sharing (13GB).

`ckpt/` contains predictor checkpoints for each task if you are just interested in running inference. (Note that for the paper results we used predictors trained with an older version of the code, but the new checkpoints get similar results, so you are fine using the new predictors provided here if, e.g., you just want to use FUDGE as a baseline. You can just run the evaluation commands provided below; it should take maybe 5-60 minutes depending on the task and your compute, assuming you have a GPU.)

`train_data/` contains our GPT2-generated training data for the poetry and topic tasks' predictors. See https://github.com/raosudha89/GYAFC-corpus for instructions on gaining access to the GYAFC data used for the machine translation formality task; replace our dummy folders with the corresponding folders/files if you want to train our formality predictor.

## Clickbait

To generate outputs, run:

```
python -u evaluate_clickbait.py --ckpt ckpt/topic/future_word_predictor/model.pth.tar --dataset_info ckpt/topic/future_word_predictor/dataset_info --in_file topic_data/topic_prefixes.txt --condition_lambda 4.0 --verbose --precondition_topk 200 --length_cutoff 80 --device cpu

python -u evaluate_clickbait.py --ckpt ckpt/formality/predictor_gyafc_entertainment_music/model.pth.tar --dataset_info ckpt/formality/predictor_gyafc_entertainment_music/dataset_info --in_file formality_data/fisher_test_oracle.es

python -u evaluate_clickbait.py --ckpt ckpt/topic/future_word_predictor/model.pth.tar --dataset_info ckpt/topic/future_word_predictor/dataset_info --in_file topic_data/topic_prefixes.txt --condition_lambda 4.0 --verbose --precondition_topk 200 --sample_size 3 --max_sample_batch 1 --length_cutoff 80 --log_file clickbait_preds.log
```

Then evaluate metrics using:

```
python eval_topic_metrics.py --log_file topic_preds.log --tw_dir topic_data/test_wordlists
```

## Poetry Couplet Completion

### Evaluation

To generate outputs, run:

```
python -u evaluate_poetry.py --iambic_ckpt ckpt/poetry/iambic_predictor/model.pth.tar --rhyme_ckpt ckpt/poetry/rhyme_predictor/model.pth.tar --newline_ckpt ckpt/poetry/newline_predictor/model.pth.tar --dataset_info ckpt/poetry/rhyme_predictor/dataset_info --rhyme_info ckpt/poetry/rhyme_predictor/rhyme_info --prefix_file poetry_data/couplet_prefixes.txt --precondition_topk 200 > poetry_preds.log
```

Then evaluate metrics using:

```
python eval_poetry_metrics.py --pred_file poetry_preds.log --prefix_file poetry_data/couplet_prefixes.txt
```

### Training your own predictors

Example commands for all three predictors used in the poetry task are below. (You probably don't need this many epochs for iambic and rhyme; in any case, the commands save intermediate ckpts, so you can stop them early if needed by inspecting the log.)

Iambic predictor:

```
python -u main.py --task iambic --data_dir train_data/gpt2_generations --save_dir ckpt/poetry/iambic_retrain_predictor --num_workers 20 --batch_size 128 --epoch_max_len 100000 --validation_freq 10 --lr 2e-4 --epochs 1500 > iambic_retrain_predictor.log
```

Rhyme predictor:

```
python -u main.py --task rhyme --data_dir train_data/gpt2_generations --save_dir ckpt/poetry/rhyme_retrain_predictor --num_workers 20 --batch_size 128 --epoch_max_len 100000 --validation_freq 10 --lr 2e-4 --epochs 1500 > rhyme_retrain_predictor.log
```

End-of-sentence predictor (referred to as "newline" in the code; 50 epochs is more than enough for this one):

```
python -u main.py --task newline --data_dir train_data/gpt2_generations --save_dir ckpt/poetry/newline_retrain_predictor --num_workers 20 --batch_size 128 --epoch_max_len 100000 --validation_freq 10 --lr 2e-4 --epochs 50 > newline_retrain_predictor.log
```

The same evaluation commands as before will work; just modify the paths in the command to point to `model_best.pth.tar`, `dataset_info`, and `rhyme_info` from your newly trained ckpt folders.

## Topic Control

### Evaluation

To generate outputs, run:

```
python -u evaluate_topic.py --ckpt ckpt/topic/future_word_predictor/model.pth.tar --dataset_info ckpt/topic/future_word_predictor/dataset_info --prefix_file topic_data/topic_prefixes.txt --wordlist_dir topic_data/wordlists --condition_lambda 4.0 --verbose --precondition_topk 200 --topk 10 --sample_size 3 --max_sample_batch 1 --length_cutoff 80 --log_file topic_preds.log
```

Then evaluate metrics using:

```
python eval_topic_metrics.py --log_file topic_preds.log --tw_dir topic_data/test_wordlists
```

You can also find our original generations and baselines in `topic_human_evals/`.

### Training your own predictors

An example command is below.

```
python -u main.py --task topic --data_dir train_data/gpt2_generations --save_dir ckpt/topic/future_word_retrain_predictor --num_workers 20 --batch_size 128 --epoch_max_len 100000 --validation_freq 10 --lr 2e-4 --epochs 500 --glove_file train_data/glove.840B.300d.txt > future_word_retrain_predictor.log
```

The same evaluation commands as before will work; just modify the paths in the command to point to `model_best.pth.tar`, `dataset_info`, and `rhyme_info` from your newly trained ckpt folders.

## Machine Translation Formality

### Evaluation

To generate outputs, run:

```
python -u evaluate_formality.py --ckpt ckpt/formality/predictor_gyafc_entertainment_music/model.pth.tar --dataset_info ckpt/formality/predictor_gyafc_entertainment_music/dataset_info --in_file formality_data/fisher_test_oracle.es --model_path ckpt/formality/marian_finetune_fisher > formality_preds.log
```

The above command generates predictions using the Marian model finetuned on the Fisher dataset; remove the `--model_path` argument to get predictions with the un-finetuned Marian model from HuggingFace (referred to as 0-shot in the paper).

Then evaluate metrics using:

```
python eval_formality_metrics.py --pred formality_preds.log --ref formality_data/test.noid.cleaned_0 formality_data/test.noid.cleaned_1 --ckpt ckpt/formality/test_evaluator_gyafc_family_relationships/model.pth.tar --dataset_info ckpt/formality/test_evaluator_gyafc_family_relationships/dataset_info
```

### Training your own predictors

An example command is below. (Reminder: you need to get the GYAFC dataset by following the instructions in https://github.com/raosudha89/GYAFC-corpus.)

```
python -u main.py --task formality --data_dir train_data/GYAFC_Corpus/Entertainment_Music --save_dir ckpt/formality/formality_retrain_predictor --num_workers 20 --batch_size 32 --epoch_max_len 1000000 --validation_freq 1 --lr 2e-5 --epochs 20 > formality_retrain_predictor.log
```

(The test-time formality evaluator is trained in the same way, just using the Family/Relationships half of the GYAFC dataset.)

The same evaluation commands as before will work; just modify the paths in the command to point to `model_best.pth.tar`, `dataset_info`, and `rhyme_info` from your newly trained ckpt folders.

## Running FUDGE on your own data

The code has been refactored so that the iambic (poetry), rhyme (poetry), newline (poetry), future word (topic), and formality (machine translation) tasks are controlled by the `--task` flag to `main.py`. You should add your task as another option here, then modify the data processing in `data.py` and the model in `model.py` as needed for your task. (In `data.py` you probably won't need all the entries of the tuple that the loader expects; you can just put dummy entries in the ones you don't need.) You might also need to modify the loss computation in the `train` and `validate` functions in `main.py`. You'll probably want to write new evaluation scripts, though the existing poetry/topic/formality ones are hopefully helpful as references.

Alternatively, the general FUDGE framework is pretty simple, so you could always try reimplementing things yourself. A few additional details based on questions I've received:

(1) The formality task setup is likely closest to what you want if you're just trying to run the simplest form of FUDGE (take a language model, and use a classifier to optimize toward a single attribute), although you may need to swap out the Marian translation model/tokenizer we use.

(2) When you construct your training data, if you have an example in your data, e.g. "This movie is great!" for positive sentiment, you want to learn on all the pairs (This, +), (This movie, +), (This movie is, +), etc., as that's one of the main points of our approach. A minimal sketch of this pair construction is shown just below.

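As an illustration only (not code from this repo), here is that prefix-expansion step; the whitespace tokenization and the "+" label format are simplifying assumptions:

```python
# Build one (prefix, label) training pair per token prefix of an example.
# Whitespace tokenization is a simplification; the real pipeline would use
# the model's tokenizer.
def prefix_pairs(text: str, label: str):
    tokens = text.split()
    return [(" ".join(tokens[: i + 1]), label) for i in range(len(tokens))]

print(prefix_pairs("This movie is great!", "+"))
# [('This', '+'), ('This movie', '+'), ('This movie is', '+'), ('This movie is great!', '+')]
```
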
(3) For computational efficiency, we first filter the base model's next-token probabilities down to the top 200 (Sec. 3.1 in the paper) before adding the classifier logits. This way you only need to evaluate your classifier on 200 continuations. Then afterward, you filter down again to whatever top-k/greedy/nucleus sampling you're using for evaluation (we use top-k with k=10 for poetry and topic, greedy for formality). A sketch of this decoding step follows below.

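Again as an illustration only, here is a minimal NumPy sketch of that decoding step; the LM distribution and the classifier scores are random stand-ins, not this repo's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 50257

# Stand-in for the base LM's next-token log-probabilities.
lm_log_probs = np.log(rng.dirichlet(np.ones(vocab_size)))

# Step 1: restrict to the top 200 candidate tokens.
top200 = np.argsort(lm_log_probs)[-200:]

# Stand-in for log P(attribute | prefix + candidate) from the classifier,
# evaluated only on those 200 continuations.
clf_log_probs = np.log(rng.uniform(0.01, 1.0, size=200))

# Step 2: combine in log space (Bayes' rule), then apply the final top-k.
fudge_scores = lm_log_probs[top200] + clf_log_probs
k = 10
best = np.argsort(fudge_scores)[-k:]
probs = np.exp(fudge_scores[best] - fudge_scores[best].max())
probs /= probs.sum()
next_token = rng.choice(top200[best], p=probs)
```
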
(4) You can use a pretrained LM backbone instead of a simple LSTM backbone for the predictor as well. This should work better when your dataset is smaller.