parquet-converter committed
Commit 27a1767 · 1 Parent(s): 3bcd6df

Update parquet files (step 72 of 121)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Bugs Life Movie Free [UPDATED] Download In Hindi.md +0 -23
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dr. Folder 2.7.0.1 full key change icons for all folders on Windows A complete guide to customize your folders.md +0 -87
  3. spaces/1gistliPinn/ChatGPT4/Examples/Audiorealism Bassline 2 Abl2 Crackl [2021].md +0 -6
  4. spaces/1gistliPinn/ChatGPT4/Examples/Baixarsonar85completoportuguestorrent WORK.md +0 -6
  5. spaces/1gistliPinn/ChatGPT4/Examples/El Diabolico Inconsciente Pdf Download [TOP].md +0 -5
  6. spaces/1line/AutoGPT/autogpt/workspace.py +0 -47
  7. spaces/1phancelerku/anime-remove-background/Enjoy the Best Soccer Experience with Real Football 2009 - Download Now.md +0 -139
  8. spaces/1toTree/lora_test/ppdiffusers/models/__init__.py +0 -25
  9. spaces/1toTree/lora_test/ppdiffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py +0 -100
  10. spaces/1toTree/lora_test/ppdiffusers/utils/outputs.py +0 -117
  11. spaces/AI-Hobbyist/Hoyo-RVC/Changelog_CN.md +0 -80
  12. spaces/AIFILMS/ControlNet-Video/model.py +0 -760
  13. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/autoencoder_multi.py +0 -201
  14. spaces/AILab-CVC/SEED-LLaMA/scripts/start_backend_14b.sh +0 -10
  15. spaces/Adapter/CoAdapter/ldm/data/dataset_depth.py +0 -35
  16. spaces/AiMimicry/sovits-models/cluster/train_cluster.py +0 -89
  17. spaces/AkiKagura/Marco-Generation-Img2img/app.py +0 -74
  18. spaces/Alesteba/NeRF_ficus-pxl/README.md +0 -12
  19. spaces/Andy1621/uniformer_image_detection/configs/detectors/cascade_rcnn_r50_sac_1x_coco.py +0 -12
  20. spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py +0 -13
  21. spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_x101_32x4d_fpn_1x_coco.py +0 -13
  22. spaces/AngoHF/ANGO-Leaderboard/assets/color.py +0 -8
  23. spaces/Anthony7906/MengHuiMXD_GPT/modules/models.py +0 -625
  24. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_ratio.py +0 -160
  25. spaces/AttendAndExcite/Attend-and-Excite/README.md +0 -15
  26. spaces/BAAI/AltDiffusion-m9/header.html +0 -43
  27. spaces/Badaleeloveashley/badaleeloveashley/app.py +0 -3
  28. spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py +0 -122
  29. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escsm.py +0 -261
  30. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/svg.py +0 -188
  31. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/request.py +0 -137
  32. spaces/Boadiwaa/Recipes/openai/__init__.py +0 -73
  33. spaces/Boadiwaa/Recipes/openai/embeddings_utils.py +0 -227
  34. spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/reduce_intervals.h +0 -53
  35. spaces/CVPR/WALT/configs/_base_/models/mask_rcnn_swin_fpn.py +0 -127
  36. spaces/CVPR/v-doc_abstractive_mac/app.py +0 -12
  37. spaces/CikeyQI/meme-api/meme_generator/memes/lim_x_0/__init__.py +0 -35
  38. spaces/CofAI/chat/g4f/Provider/Providers/hteyun.py +0 -34
  39. spaces/Colbe/basketball/README.md +0 -13
  40. spaces/Cran-May/yugangVI/model.py +0 -34
  41. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/registry.py +0 -45
  42. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/helpers.py +0 -878
  43. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/annotated_types/__init__.py +0 -319
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/variableScalar.py +0 -112
  45. spaces/DaFujaTyping/hf-Chat-ui/src/lib/stores/errors.ts +0 -7
  46. spaces/DaleChen/AutoGPT/tests.py +0 -21
  47. spaces/Dao3/chatwithdocs/__init__.py +0 -0
  48. spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/utils/height.py +0 -131
  49. spaces/Datasculptor/DescriptionGPT/tools/remove_lvis_rare.py +0 -20
  50. spaces/EllaTsoi/text_generator/app.py +0 -10
spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Bugs Life Movie Free [UPDATED] Download In Hindi.md DELETED
@@ -1,23 +0,0 @@
1
- <br />
2
- <h1>How to Watch A Bug's Life Movie Free Download In Hindi Online</h1>
3
- <p>A Bug's Life is a 1998 animated comedy film produced by Pixar Animation Studios and distributed by Walt Disney Pictures. The film tells the story of an ant colony that is oppressed by a gang of grasshoppers and how a misfit ant named Flik tries to save the day with the help of a circus troupe of bugs.</p>
4
- <h2>A Bug's Life Movie Free Download In Hindi</h2><br /><p><b><b>Download Zip</b> &#10037; <a href="https://byltly.com/2uKyKz">https://byltly.com/2uKyKz</a></b></p><br /><br />
5
- <p>If you are looking for a way to watch A Bug's Life movie free download in Hindi online, you have come to the right place. In this article, we will show you how to stream or download the movie legally and safely without any hassle.</p>
6
- <h2>Why You Should Watch A Bug's Life Movie Free Download In Hindi Online</h2>
7
- <p>A Bug's Life is a classic animated film that has won many awards and accolades, including an Academy Award nomination for Best Original Score. The film features a talented voice cast, including Dave Foley, Kevin Spacey, Julia Louis-Dreyfus, Hayden Panettiere, Phyllis Diller, David Hyde Pierce, Denis Leary, and more.</p>
8
- <p>The film is also full of humor, adventure, and heartwarming messages about courage, friendship, and teamwork. It is a great movie for kids and adults alike, as it offers a lot of fun and entertainment for the whole family.</p>
9
- <p>Moreover, watching A Bug's Life movie free download in Hindi online can help you enjoy the film in your native language and understand the cultural references better. You can also learn some new words and phrases in Hindi while watching the movie.</p>
10
- <h2>How to Watch A Bug's Life Movie Free Download In Hindi Online</h2>
11
- <p>There are many websites and apps that claim to offer A Bug's Life movie free download in Hindi online, but not all of them are reliable or legal. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also violate the copyright laws and put you at risk of legal trouble.</p>
12
- <p></p>
13
- <p>Therefore, we recommend that you watch A Bug's Life movie free download in Hindi online only from trusted and authorized sources. Here are some of the best options that you can choose from:</p>
14
- <ul>
15
- <li><a href="https://www.hotstar.com/in/movies/a-bugs-life/1260022154">Hotstar</a>: Hotstar is one of the most popular streaming platforms in India that offers a wide range of movies, TV shows, sports, news, and more. You can watch A Bug's Life movie free download in Hindi online on Hotstar with a subscription plan that starts from Rs. 299 per month or Rs. 1499 per year. You can also get a free trial for seven days before you decide to subscribe.</li>
16
- <li><a href="https://www.youtube.com/watch?v=0Z0z7Z6Yxjw">YouTube</a>: YouTube is another great option to watch A Bug's Life movie free download in Hindi online. You can rent or buy the movie on YouTube for Rs. 120 or Rs. 490 respectively. You can also watch it for free with ads if you have a YouTube Premium subscription that costs Rs. 129 per month or Rs. 399 per quarter.</li>
17
- <li><a href="https://www.amazon.in/Bugs-Life-Dave-Foley/dp/B07BZQ4F4D">Amazon Prime Video</a>: Amazon Prime Video is another popular streaming service that offers a huge collection of movies, TV shows, originals, and more. You can watch A Bug's Life movie free download in Hindi online on Amazon Prime Video with a Prime membership that costs Rs. 129 per month or Rs. 999 per year. You can also get a free trial for 30 days before you subscribe.</li>
18
- </ul>
19
- <h2>Conclusion</h2>
20
- <p>A Bug's Life is a wonderful animated film that you should not miss out on. You can watch A Bug's Life movie free download in Hindi online from any of the above-mentioned sources and enjoy the movie in your preferred language.</p>
21
- <p>We hope this article has helped you find the best way to watch A Bug's Life movie free download in Hindi online. If you have any questions or feedback, please feel free to leave a comment below.</p> cec2833e83<br />
22
- <br />
23
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dr. Folder 2.7.0.1 full key change icons for all folders on Windows A complete guide to customize your folders.md DELETED
@@ -1,87 +0,0 @@
1
- <br />
2
- <h1>Dr. Folder 2.7.0.1 full key change icons for all folders on Windows update 6 13 2019</h1>
3
- <h2>Introduction</h2>
4
- <p>Are you bored with the same old folder icons on your Windows computer? Do you want to customize your folders with different colors, shapes, and images? If yes, then you need Dr. Folder, a powerful and easy-to-use software that lets you change icons for all folders on Windows in just a few clicks.</p>
5
- <h2>Dr. Folder 2.7.0.1 full key change icons for all folders on Windows update 6 13 2019</h2><br /><p><b><b>DOWNLOAD</b> &#10003;&#10003;&#10003; <a href="https://byltly.com/2uKyqf">https://byltly.com/2uKyqf</a></b></p><br /><br />
6
- <h3>What is Dr. Folder?</h3>
7
- <p>Dr. Folder is a folder icon changer software that can replace the default folder icon with any icon you want. It has a large collection of icons for various categories, such as animals, cartoons, games, movies, music, nature, sports, etc. You can also use your own icons or download more from the internet.</p>
8
- <h3>Why use Dr. Folder?</h3>
9
- <p>Dr. Folder can help you organize your folders better by making them more recognizable and attractive. You can also use different icons to indicate the status or priority of your folders, such as important, private, locked, shared, etc. Moreover, Dr. Folder can protect your folders from being deleted or modified by hiding them or making them read-only.</p>
10
- <h3>How to download and install Dr. Folder?</h3>
11
- <p>You can download Dr. Folder from its official website or from other trusted sources online. The latest version is 2.7.0.1 and it was updated on June 13, 2019. The file size is about 8 MB and it supports Windows XP/Vista/7/8/10 (32-bit and 64-bit). To install Dr. Folder, you need to run the setup file and follow the instructions on the screen.</p>
12
- <h2>How to change icons for all folders on Windows with Dr. Folder?</h2>
13
- <p>Changing icons for all folders on Windows with Dr. Folder is very simple and fast. Here are the steps you need to follow:</p>
14
- <h3>Step 1: Launch Dr. Folder</h3>
15
- <p>After installing Dr. Folder, you can find it in your Start menu or on your desktop. Double-click on its icon to open it.</p>
16
- <h3>Step 2: Select the folders you want to change icons for</h3>
17
- <p>You can select one or more folders from your computer by using the Add Folders button or by dragging and dropping them into the main window of Dr. Folder.</p>
18
- <h3>Step 3: Choose an icon from the built-in library or your own collection</h3>
19
- <p>You can browse through the built-in library of icons by clicking on the Icons button at the top of the window. You can also use the Search function to find a specific icon by its name or keyword.</p>
20
- <p>How to customize folder icons with Dr. Folder 2.7.0.1 full key<br />
21
- Dr. Folder 2.7.0.1 full key download link and installation guide<br />
22
- Best folder icon changer software for Windows: Dr. Folder 2.7.0.1 full key<br />
23
- Dr. Folder 2.7.0.1 full key features and benefits<br />
24
- Dr. Folder 2.7.0.1 full key review and rating<br />
25
- How to get Dr. Folder 2.7.0.1 full key for free<br />
26
- Dr. Folder 2.7.0.1 full key vs other folder icon changer tools<br />
27
- How to update Dr. Folder 2.7.0.1 full key to the latest version<br />
28
- How to use Dr. Folder 2.7.0.1 full key to change icons for multiple folders<br />
29
- How to fix Dr. Folder 2.7.0.1 full key errors and issues<br />
30
- How to uninstall Dr. Folder 2.7.0.1 full key from Windows<br />
31
- How to backup and restore folder icons with Dr. Folder 2.7.0.1 full key<br />
32
- How to create custom folder icons with Dr. Folder 2.7.0.1 full key<br />
33
- How to apply different folder icons for different file types with Dr. Folder 2.7.0.1 full key<br />
34
- How to change folder icons according to themes with Dr. Folder 2.7.0.1 full key<br />
35
- How to change folder icons in Windows Explorer and Desktop with Dr. Folder 2.7.0.1 full key<br />
36
- How to change folder icons in OneDrive and Dropbox with Dr.<br />
37
- How to change folder icons in network drives and external devices with Dr.<br />
38
- How to change folder icons in Windows Start Menu and Taskbar with Dr.<br />
39
- How to change folder icons in Windows Registry and System Files with Dr.<br />
40
- How to change folder icons for hidden and protected folders with Dr.<br />
41
- How to change folder icons for shortcuts and links with Dr.<br />
42
- How to change folder icons for compressed and encrypted folders with Dr.<br />
43
- How to change folder icons for shared and synced folders with Dr.<br />
44
- How to change folder icons for special and system folders with Dr.<br />
45
- How to batch change folder icons with Dr.<br />
46
- How to preview folder icons before changing them with Dr.<br />
47
- How to revert folder icons back to default with Dr.<br />
48
- How to find and replace folder icons with Dr.<br />
49
- How to sort and filter folder icons with Dr.<br />
50
- How to export and import folder icons with Dr.</p>
51
- <p>If you want to use your own icons, you can click on the Add Icons button and select them from your computer.</p>
52
- <h3>Step 4: Apply the changes and enjoy your new folder icons</h3>
53
- <p>Once you have chosen an icon for your folders, you can click on the Apply button at the bottom of the window to make the changes effective.</p>
54
- <p>You can also preview how your folders will look like before applying by clicking on the Preview button.</p>
55
- <p>Congratulations! You have successfully changed icons for all folders on Windows with Dr. Folder.</p>
56
- <h2>How to restore the default icons for all folders on Windows with Dr. Folder?</h2>
57
- <p>If you want to go back to the original folder icons on Windows, you can easily do that with Dr. Folder as well. Here are the steps you need to follow:</p>
58
- <h3>Step 1: Launch Dr. Folder</h3>
59
- <p>Open Dr. Folder as described in step 1 above.</p>
60
- <h3>Step 2: Select the folders you want to restore icons for</h3>
61
- <p>Select one or more folders from your computer by using the Add Folders button or by dragging and dropping them into the main window of Dr. Folder.</p>
62
- <h3>Step 3: Click on the Restore Default Icon button</h3>
63
- <p>You can find this button at the top of the window next to the Icons button.</p>
64
- <h3>Step 4: Confirm the changes and revert to the original folder icons</h3>
65
- <p>A pop-up window will ask you if you are sure you want to restore the default icon for your folders.</p>
66
- <p>Click Yes to confirm and No to cancel.</p>
67
- <p>You have successfully restored the default icons for all folders on Windows with Dr. Folder.</p>
68
- <h2>Conclusion</h2>
69
- <p>In this article, we have shown you how to change icons for all folders on Windows with Dr. Folder, a folder icon changer software that can make your folders more personalized and organized.</p>
70
- <p>We have also shown you how to restore the default icons for all folders on Windows with Dr. Folder in case you want to undo your changes.</p>
71
- <p>We hope you have enjoyed this article and found it useful.</p>
72
- <h2>FAQs</h2>
73
- <ul>
74
- <li><b>Q: How much does Dr. Folder cost?</b></li>
75
- <li>A: Dr. Folder is not a free software but it offers a free trial version that allows you to change up to three folder icons per day.</li>
76
- <li><b>Q: How can I get more icons for Dr. Folder?</b></li>
77
- <li>A: You can download more icons from various websites online or create your own icons using an icon editor software.</li>
78
- <li><b>Q: Can I change system folder icons with Dr. Folder?</b></li>
79
- <li>A: Yes, you can change system folder icons such as My Computer, Recycle Bin, Network, etc., but be careful not to mess up your system settings.</li>
80
- <li><b>Q: Can I change file icons with Dr. Folder?</b></li>
81
- <li>A: No, Dr. Folder only works with folder icons not file icons.</li>
82
- <li><b>Q: Can I undo my changes with Dr. Folder?</b></li>
83
- <li>A: Yes, you can undo your changes by restoring the default icon for each folder or by using System Restore in Windows.</li>
84
- </ul>
85
- </p> 0a6ba089eb<br />
86
- <br />
87
- <br />
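
The deleted article above covers only the GUI workflow. The same folder-icon change can be scripted without Dr. Folder: Windows records a custom folder icon in a hidden `desktop.ini` file. The following is a minimal sketch of that mechanism, not part of the article; it assumes a Windows machine, and the paths in the usage comment are placeholders.

```python
import ctypes
import os

FILE_ATTRIBUTE_READONLY = 0x01
FILE_ATTRIBUTE_HIDDEN = 0x02
FILE_ATTRIBUTE_SYSTEM = 0x04

def set_folder_icon(folder: str, icon_path: str, icon_index: int = 0) -> None:
    """Point a folder's icon at icon_path by writing desktop.ini (Windows only)."""
    ini_path = os.path.join(folder, "desktop.ini")
    with open(ini_path, "w", encoding="utf-8") as f:
        f.write("[.ShellClassInfo]\n")
        f.write(f"IconResource={icon_path},{icon_index}\n")
    # Explorer only honors desktop.ini when the file is hidden+system
    # and the folder itself carries the read-only attribute.
    ctypes.windll.kernel32.SetFileAttributesW(
        ini_path, FILE_ATTRIBUTE_HIDDEN | FILE_ATTRIBUTE_SYSTEM)
    ctypes.windll.kernel32.SetFileAttributesW(folder, FILE_ATTRIBUTE_READONLY)

# Hypothetical usage; the paths are placeholders:
# set_folder_icon(r"C:\Users\me\Music", r"C:\icons\music.ico")
```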
 
spaces/1gistliPinn/ChatGPT4/Examples/Audiorealism Bassline 2 Abl2 Crackl [2021].md DELETED
@@ -1,6 +0,0 @@
1
- <h2>Audiorealism Bassline 2 Abl2 Crackl</h2><br /><p><b><b>Download Zip</b> &#10037; <a href="https://imgfil.com/2uxZSt">https://imgfil.com/2uxZSt</a></b></p><br /><br />
2
-
3
- Then click the Open button. Installation on Windows 10 or later. Sometimes Windows will give you a warning when you run the installer. If the software. NET Framework is not installed, you will be prompted to: Run the .NET Framework installer, or Run the .NET Framework installer You can now start installing the .NET Framework software. Once the installation is complete (usually takes one to two minutes), you can proceed to the next step. Run the .NET Framework installer (rather than just running it). Once installed, you can use the Close button to exit the installer. 8a78ff9644<br />
4
- <br />
5
- <br />
6
- <p></p>
 
spaces/1gistliPinn/ChatGPT4/Examples/Baixarsonar85completoportuguestorrent WORK.md DELETED
@@ -1,6 +0,0 @@
1
- <h2>baixarsonar85completoportuguestorrent</h2><br /><p><b><b>Download File</b> &#9734;&#9734;&#9734;&#9734;&#9734; <a href="https://imgfil.com/2uxZWZ">https://imgfil.com/2uxZWZ</a></b></p><br /><br />
2
-
3
- 3cee63e6c2<br />
4
- <br />
5
- <br />
6
- <p></p>
 
spaces/1gistliPinn/ChatGPT4/Examples/El Diabolico Inconsciente Pdf Download [TOP].md DELETED
@@ -1,5 +0,0 @@
1
- <br />
2
- <p>babahiddenz. el diabolico, 1939. Consciousness of the most ancient prophets, a book by. Diabola, the diabolical unconscious, life of uranus diabolo, culture of the antichrist, the relentless diabolical. The Unconscious Devil - Autobiography - Second edition and email - PDF under license. El Diabolo Inconsciente, or Diabolo Insane. The unconscious, the devil, and the self were the proponents of the war.</p>
3
- <h2>el diabolico inconsciente pdf download</h2><br /><p><b><b>DOWNLOAD</b> &#9658; <a href="https://imgfil.com/2uy1oQ">https://imgfil.com/2uy1oQ</a></b></p><br /><br /> 899543212b<br />
4
- <br />
5
- <br />
 
spaces/1line/AutoGPT/autogpt/workspace.py DELETED
@@ -1,47 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import os
4
- from pathlib import Path
5
-
6
- from autogpt.config import Config
7
-
8
- CFG = Config()
9
-
10
- # Set a dedicated folder for file I/O
11
- WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace"
12
-
13
- # Create the directory if it doesn't exist
14
- if not os.path.exists(WORKSPACE_PATH):
15
- os.makedirs(WORKSPACE_PATH)
16
-
17
-
18
- def path_in_workspace(relative_path: str | Path) -> Path:
19
- """Get full path for item in workspace
20
-
21
- Parameters:
22
- relative_path (str | Path): Path to translate into the workspace
23
-
24
- Returns:
25
- Path: Absolute path for the given path in the workspace
26
- """
27
- return safe_path_join(WORKSPACE_PATH, relative_path)
28
-
29
-
30
- def safe_path_join(base: Path, *paths: str | Path) -> Path:
31
- """Join one or more path components, asserting the resulting path is within the workspace.
32
-
33
- Args:
34
- base (Path): The base path
35
- *paths (str): The paths to join to the base path
36
-
37
- Returns:
38
- Path: The joined path
39
- """
40
- joined_path = base.joinpath(*paths).resolve()
41
-
42
- if CFG.restrict_to_workspace and not joined_path.is_relative_to(base):
43
- raise ValueError(
44
- f"Attempted to access path '{joined_path}' outside of workspace '{base}'."
45
- )
46
-
47
- return joined_path
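
A short usage sketch of the workspace helpers deleted above, assuming the module is importable as `autogpt.workspace` and `CFG.restrict_to_workspace` is enabled (its default); the file names are illustrative:

```python
from autogpt.workspace import path_in_workspace

# A relative path is resolved inside <cwd>/auto_gpt_workspace.
print(path_in_workspace("notes/todo.txt"))

# A traversal attempt resolves outside the workspace and is rejected.
try:
    path_in_workspace("../secrets.env")
except ValueError as err:
    print(err)  # Attempted to access path '...' outside of workspace '...'
```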
 
spaces/1phancelerku/anime-remove-background/Enjoy the Best Soccer Experience with Real Football 2009 - Download Now.md DELETED
@@ -1,139 +0,0 @@
1
- <br />
2
- <h1>Download 2009 Real Football: A Review of the Best Soccer Game for Nintendo DS</h1>
3
- <p>If you are a fan of soccer, you probably know that there are many games available for different platforms. But if you own a Nintendo DS, you might be wondering which one is the best. Well, look no further than 2009 Real Football, also known as Real Soccer 2009 in some regions. This game is widely considered as the definitive soccer title for the DS, offering a never-before-seen experience with its unique features and capabilities. In this article, we will review 2009 Real Football and show you how to download it for your device.</p>
4
- <h2>download 2009 real football</h2><br /><p><b><b>Download Zip</b> &rArr; <a href="https://jinyurl.com/2uNUqC">https://jinyurl.com/2uNUqC</a></b></p><br /><br />
5
- <h2>Introduction</h2>
6
- <h3>What is 2009 Real Football?</h3>
7
- <p>2009 Real Football is a soccer game developed by Gameloft and published by Ubisoft for the Nintendo DS in 2008. It is part of the long-running Real Football series, which started in 2004 for mobile phones. The game features over 200 teams and players from around the world, including licensed ones from FIFA. It also boasts realistic physics, animations, and sound effects that make you feel like you are on the pitch.</p>
8
- <h3>Why should you download it?</h3>
9
- <p>There are many reasons why you should download 2009 Real Football for your Nintendo DS. Here are some of them:</p>
10
- <ul>
11
- <li>It is one of the few soccer games that use the touch screen and stylus of the DS, allowing you to control your players with precision and accuracy.</li>
12
- <li>It has a variety of game modes and options, such as exhibition, league, cup, penalty shootout, training, and custom matches. You can also create your own team and player with the editor mode.</li>
13
- <li>It has an online multiplayer mode that lets you play with or against other players around the world via Wi-Fi or Bluetooth. You can also join the online community and chat with other fans, share your scores, and download new content.</li>
14
- </ul>
15
- <h2>Features of 2009 Real Football</h2>
16
- <h3>Realistic graphics and animations</h3>
17
- <p>One of the most impressive aspects of 2009 Real Football is its graphics and animations. The game uses a 3D engine that renders the players, stadiums, and crowds in high detail. The players have realistic faces, expressions, movements, and reactions that match their real-life counterparts. The stadiums are based on real ones from different countries, such as Wembley, Camp Nou, or Maracana. The crowds are also lively and responsive, cheering or booing depending on the situation.</p>
18
- <h3>Various game modes and options</h3>
19
- <p>Another great feature of 2009 Real Football is its variety of game modes and options. You can choose from different types of matches, such as exhibition, league, cup, penalty shootout, training, or custom. You can also adjust the difficulty level, time limit, weather conditions, camera angles, and rules to suit your preference. You can also create your own team and player with the editor mode, where you can customize their name, appearance, skills, nationality, and position.</p>
20
- <p>download 2009 real football java game<br />
21
- download 2009 real football for android<br />
22
- download 2009 real football apk<br />
23
- download 2009 real football jar<br />
24
- download 2009 real football mobile game<br />
25
- download 2009 real football 3d<br />
26
- download 2009 real football hd<br />
27
- download 2009 real football for nokia<br />
28
- download 2009 real football for samsung<br />
29
- download 2009 real football for pc<br />
30
- download 2009 real football mod apk<br />
31
- download 2009 real football gameloft<br />
32
- download 2009 real football free<br />
33
- download 2009 real football full version<br />
34
- download 2009 real football offline<br />
35
- download 2009 real football online<br />
36
- download 2009 real football multiplayer<br />
37
- download 2009 real football cheats<br />
38
- download 2009 real football hack<br />
39
- download 2009 real football unlimited money<br />
40
- download 2009 real football latest version<br />
41
- download 2009 real football old version<br />
42
- download 2009 real football update<br />
43
- download 2009 real football patch<br />
44
- download 2009 real football crack<br />
45
- download 2009 real football serial key<br />
46
- download 2009 real football license key<br />
47
- download 2009 real football activation key<br />
48
- download 2009 real football registration key<br />
49
- download 2009 real football product key<br />
50
- download 2009 real football review<br />
51
- download 2009 real football gameplay<br />
52
- download 2009 real football trailer<br />
53
- download 2009 real football video<br />
54
- download 2009 real football tips and tricks<br />
55
- download 2009 real football guide and walkthrough<br />
56
- download 2009 real football best players and teams<br />
57
- download 2009 real football skills and goals<br />
58
- download 2009 real football tournaments and leagues<br />
59
- download 2009 real football stadiums and kits<br />
60
- how to download 2009 real football on android phone or tablet?<br />
61
- how to install and play 2009 real football on pc or laptop?<br />
62
- how to run and enjoy 2009 real football on java or symbian device?<br />
63
- where to find and get the link to download 2009 real football?<br />
64
- what are the requirements and specifications to download and play 2009 real football?</p>
65
- <h3>Online multiplayer and community</h3>
66
- <p>The last but not least feature of 2009 Real Football is its online multiplayer and community. You can connect your Nintendo DS to the internet via Wi-Fi or Bluetooth and play with or against other players around the world. You can also join the online community and chat with other fans, share your scores, and download new content. You can also access the official website of the game and get updates, news, tips, tricks, and more.</p>
67
- <h2>How to download 2009 Real Football</h2>
68
- <h3>Requirements and compatibility</h3>
69
- <p>Before you download 2009 Real Football, you need to make sure that your Nintendo DS meets the requirements and compatibility of the game. Here are some of them:</p>
70
- <ul>
71
- <li>You need a Nintendo DS or DS Lite device with a working touch screen and stylus. The game is not compatible with the DSi or 3DS models.</li>
72
- <li>You need a Wi-Fi or Bluetooth connection to access the online features of the game. You also need a Nintendo Wi-Fi Connection account and a friend code to play with other players.</li>
73
- <li>You need at least 64 MB of free space on your DS card to save the game data. You can also use a microSD card to store additional content.</li>
74
- </ul>
75
- <h3>Sources and links</h3>
76
- <p>There are different sources and links where you can download 2009 Real Football for your Nintendo DS. Here are some of them:</p>
77
- <table>
78
- <tr>
79
- <th>Source</th>
80
- <th>Link</th>
81
- </tr>
82
- <tr>
83
- <td>Nintendo eShop</td>
84
- <td><a href="">https://www.nintendo.com/games/detail/real-soccer-2009-ds/</a></td>
85
- </tr>
86
- <tr>
87
- <td>Gameloft official website</td>
88
- <td><a href="">https://www.gameloft.com/en/game/real-football-2009-ds/</a></td>
89
- </tr>
90
- <tr>
91
- <td>Ubisoft official website</td>
92
- <td><a href="">https://www.ubisoft.com/en-us/game/real-football-2009-ds/</a></td>
93
- </tr>
94
- <tr>
95
- <td>Amazon</td>
96
- <td><a href="">https://www.amazon.com/Real-Soccer-2009-Nintendo-DS/dp/B001E27DLM/</a></td>
97
- </tr>
98
- <tr>
99
- <td>eBay</td>
100
- <td><a href="">https://www.ebay.com/sch/i.html?_nkw=real+football+2009+ds</a></td>
101
- </tr>
102
- </table>
103
- <h3>Installation and setup</h3>
104
- <p>After you download 2009 Real Football from one of the sources and links above, you need to install and set up the game on your Nintendo DS. Here are the steps:</p>
105
- <ol>
106
- <li>Insert the game card into the slot of your DS device and turn it on.</li>
107
- <li>Select the game icon from the menu and press A to start.</li>
108
- <li>Follow the instructions on the screen to create your profile, choose your language, and adjust your settings.</li>
109
- <li>Enjoy playing 2009 Real Football on your Nintendo DS!</li>
110
- </ol>
111
- <h2>Conclusion</h2> <h3>Summary of the main points</h3>
112
- <p>In this article, we have reviewed 2009 Real Football, the best soccer game for Nintendo DS. We have discussed its features, such as realistic graphics and animations, various game modes and options, and online multiplayer and community. We have also shown you how to download it from different sources and links, and how to install and set up it on your device.</p>
113
- <h3>Recommendations and ratings</h3>
114
- <p>We highly recommend 2009 Real Football to anyone who loves soccer and owns a Nintendo DS. It is a fun, challenging, and immersive game that will keep you entertained for hours. It is also a great way to connect with other players and fans around the world. We give it a rating of 4.5 out of 5 stars, based on its gameplay, graphics, sound, and online features.</p>
115
- <h3>Call to action</h3>
116
- <p>If you are interested in downloading 2009 Real Football for your Nintendo DS, don't hesitate to do so. You can find it on the Nintendo eShop, Gameloft official website, Ubisoft official website, Amazon, or eBay. You can also visit the official website of the game for more information and updates. Don't miss this opportunity to enjoy the best soccer game for Nintendo DS!</p>
117
- <h2>FAQs</h2>
118
- <p>Here are some frequently asked questions about 2009 Real Football:</p>
119
- <ul>
120
- <li><b>Q: How much does 2009 Real Football cost?</b></li>
121
- <li>A: The game costs $19.99 on the Nintendo eShop, Gameloft official website, and Ubisoft official website. It may vary on Amazon or eBay depending on the seller.</li>
122
- <li><b>Q: How many players can play 2009 Real Football online?</b></li>
123
- <li>A: The game supports up to four players online via Wi-Fi or Bluetooth. You can play with or against your friends or strangers from around the world.</li>
124
- <li><b>Q: What are the benefits of joining the online community of 2009 Real Football?</b></li>
125
- <li>A: By joining the online community of 2009 Real Football, you can chat with other fans, share your scores, and download new content. You can also access the official website of the game and get updates, news, tips, tricks, and more.</li>
126
- <li><b>Q: How can I create my own team and player in 2009 Real Football?</b></li>
127
- <li>A: You can create your own team and player in the editor mode of the game. You can customize their name, appearance, skills, nationality, and position. You can also use them in any game mode or online match.</li>
128
- <li><b>Q: What are some tips and tricks for playing 2009 Real Football?</b></li>
129
- <li>A: Some tips and tricks for playing 2009 Real Football are:</li>
130
- <ul>
131
- <li>Use the touch screen and stylus to control your players with precision and accuracy.</li>
132
- <li>Experiment with different game modes and options to find your preferred style.</li>
133
- <li>Practice your skills in the training mode or penalty shootout mode.</li>
134
- <li>Learn from your opponents and improve your strategies in the online mode.</li>
135
- <li>Have fun and enjoy the game!</li>
136
- </ul>
137
- </ul></p> 401be4b1e0<br />
138
- <br />
139
- <br />
 
spaces/1toTree/lora_test/ppdiffusers/models/__init__.py DELETED
@@ -1,25 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- # flake8: noqa
16
-
17
- from ..utils import is_paddle_available
18
-
19
- if is_paddle_available():
20
- from .attention import Transformer2DModel
21
- from .prior_transformer import PriorTransformer
22
- from .unet_1d import UNet1DModel
23
- from .unet_2d import UNet2DModel
24
- from .unet_2d_condition import UNet2DConditionModel
25
- from .vae import AutoencoderKL, VQModel
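
The guarded imports above hinge on `is_paddle_available`; a minimal sketch of how such availability checks are commonly written (the actual helper in `ppdiffusers.utils` may differ):

```python
import importlib.util

def is_paddle_available() -> bool:
    # Check that the `paddle` package is installed without importing it,
    # so the check stays cheap and side-effect free.
    return importlib.util.find_spec("paddle") is not None
```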
 
spaces/1toTree/lora_test/ppdiffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py DELETED
@@ -1,100 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- from typing import List, Optional, Tuple, Union
17
-
18
- import paddle
19
-
20
- from ...models import UNet2DModel
21
- from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput
22
- from ...schedulers import ScoreSdeVeScheduler
23
-
24
-
25
- class ScoreSdeVePipeline(DiffusionPipeline):
26
- r"""
27
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
28
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
29
- Parameters:
30
- unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.
31
- scheduler ([`ScoreSdeVeScheduler`]): The scheduler to be used in combination with `unet` to denoise the encoded image.
32
- """
33
- unet: UNet2DModel
34
- scheduler: ScoreSdeVeScheduler
35
-
36
- def __init__(self, unet: UNet2DModel, scheduler: ScoreSdeVeScheduler):
37
- super().__init__()
38
- self.register_modules(unet=unet, scheduler=scheduler)
39
-
40
- @paddle.no_grad()
41
- def __call__(
42
- self,
43
- batch_size: int = 1,
44
- num_inference_steps: int = 2000,
45
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
46
- output_type: Optional[str] = "pil",
47
- return_dict: bool = True,
48
- **kwargs,
49
- ) -> Union[ImagePipelineOutput, Tuple]:
50
- r"""
51
- Args:
52
- batch_size (`int`, *optional*, defaults to 1):
53
- The number of images to generate.
54
- generator (`paddle.Generator`, *optional*):
55
- One or a list of paddle generator(s) to make generation deterministic.
56
- output_type (`str`, *optional*, defaults to `"pil"`):
57
- The output format of the generate image. Choose between
58
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
59
- return_dict (`bool`, *optional*, defaults to `True`):
60
- Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
61
-
62
- Returns:
63
- [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if
64
- `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
65
- generated images.
66
- """
67
-
68
- img_size = self.unet.config.sample_size
69
- shape = (batch_size, 3, img_size, img_size)
70
-
71
- model = self.unet
72
-
73
- sample = paddle.randn(shape, generator=generator) * self.scheduler.init_noise_sigma
74
-
75
- self.scheduler.set_timesteps(num_inference_steps)
76
- self.scheduler.set_sigmas(num_inference_steps)
77
-
78
- for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)):
79
- sigma_t = self.scheduler.sigmas[i] * paddle.ones((shape[0],))
80
-
81
- # correction step
82
- for _ in range(self.scheduler.config.correct_steps):
83
- model_output = self.unet(sample, sigma_t).sample
84
- sample = self.scheduler.step_correct(model_output, sample, generator=generator).prev_sample
85
-
86
- # prediction step
87
- model_output = model(sample, sigma_t).sample
88
- output = self.scheduler.step_pred(model_output, t, sample, generator=generator)
89
-
90
- sample, sample_mean = output.prev_sample, output.prev_sample_mean
91
-
92
- sample = sample_mean.clip(0, 1)
93
- sample = sample.transpose([0, 2, 3, 1]).numpy()
94
- if output_type == "pil":
95
- sample = self.numpy_to_pil(sample)
96
-
97
- if not return_dict:
98
- return (sample,)
99
-
100
- return ImagePipelineOutput(images=sample)
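
A hedged usage sketch of the pipeline deleted above; it assumes ppdiffusers is installed and that a Paddle-format checkpoint exists under the id shown (borrowed from the analogous diffusers example, so treat it as a placeholder):

```python
import paddle
from ppdiffusers import ScoreSdeVePipeline

pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-celebahq-256")
generator = paddle.Generator().manual_seed(0)  # deterministic sampling
output = pipe(batch_size=1, num_inference_steps=2000, generator=generator)
output.images[0].save("sde_ve_sample.png")  # `images` holds PIL images by default
```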
 
spaces/1toTree/lora_test/ppdiffusers/utils/outputs.py DELETED
@@ -1,117 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- """
16
- Generic utilities
17
- """
18
-
19
- from collections import OrderedDict
20
- from dataclasses import fields
21
- from typing import Any, Tuple
22
-
23
- import numpy as np
24
-
25
- from .import_utils import is_paddle_available
26
-
27
-
28
- def is_tensor(x):
29
- """
30
- Tests if `x` is a `paddle.Tensor` or `np.ndarray`.
31
- """
32
- if is_paddle_available():
33
- import paddle
34
-
35
- return paddle.is_tensor(x)
36
-
37
- return isinstance(x, np.ndarray)
38
-
39
-
40
- class BaseOutput(OrderedDict):
41
- """
42
- Base class for all model outputs as dataclass. Has a `__getitem__` that allows indexing by integer or slice (like a
43
- tuple) or strings (like a dictionary) that will ignore the `None` attributes. Otherwise behaves like a regular
44
- python dictionary.
45
-
46
- <Tip warning={true}>
47
-
48
- You can't unpack a `BaseOutput` directly. Use the [`~utils.BaseOutput.to_tuple`] method to convert it to a tuple
49
- before.
50
-
51
- </Tip>
52
- """
53
-
54
- def __post_init__(self):
55
- class_fields = fields(self)
56
-
57
- # Safety and consistency checks
58
- if not len(class_fields):
59
- raise ValueError(f"{self.__class__.__name__} has no fields.")
60
-
61
- first_field = getattr(self, class_fields[0].name)
62
- other_fields_are_none = all(getattr(self, field.name) is None for field in class_fields[1:])
63
-
64
- if other_fields_are_none and isinstance(first_field, dict):
65
- for key, value in first_field.items():
66
- self[key] = value
67
- else:
68
- for field in class_fields:
69
- v = getattr(self, field.name)
70
- if v is not None:
71
- self[field.name] = v
72
-
73
- def __delitem__(self, *args, **kwargs):
74
- raise Exception(f"You cannot use ``__delitem__`` on a {self.__class__.__name__} instance.")
75
-
76
- def setdefault(self, *args, **kwargs):
77
- raise Exception(f"You cannot use ``setdefault`` on a {self.__class__.__name__} instance.")
78
-
79
- def pop(self, *args, **kwargs):
80
- raise Exception(f"You cannot use ``pop`` on a {self.__class__.__name__} instance.")
81
-
82
- def update(self, *args, **kwargs):
83
- raise Exception(f"You cannot use ``update`` on a {self.__class__.__name__} instance.")
84
-
85
- def __getitem__(self, k):
86
- if isinstance(k, str):
87
- inner_dict = {k: v for (k, v) in self.items()}
88
- return inner_dict[k]
89
- else:
90
- return self.to_tuple()[k]
91
-
92
- def __setattr__(self, name, value):
93
- if name in self.keys() and value is not None:
94
- # Don't call self.__setitem__ to avoid recursion errors
95
- super().__setitem__(name, value)
96
- super().__setattr__(name, value)
97
-
98
- def __setitem__(self, key, value):
99
- # Will raise a KeyException if needed
100
- super().__setitem__(key, value)
101
- # Don't call self.__setattr__ to avoid recursion errors
102
- super().__setattr__(key, value)
103
-
104
- def to_tuple(self) -> Tuple[Any]:
105
- """
106
- Convert self to a tuple containing all the attributes/keys that are not `None`.
107
- """
108
- # try to fix: https://github.com/PaddlePaddle/PaddleNLP/issues/3355
109
- # when trying to get the keys of `OrderedDict`, `keys` method return empty values.
110
- # TODO(wj-Mcat): this bug should be fixed in Paddle framework
111
- tuples = ()
112
- for field in fields(self):
113
- if getattr(self, field.name, None) is None:
114
- continue
115
- tuples = tuples + (getattr(self, field.name),)
116
-
117
- return tuples
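
An illustrative sketch (not part of the deleted file) of how `BaseOutput` behaves when subclassed as a dataclass; it assumes the class is re-exported as `ppdiffusers.utils.BaseOutput`:

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

from ppdiffusers.utils import BaseOutput  # assumed re-export of the class above

@dataclass
class SampleOutput(BaseOutput):
    images: np.ndarray
    nsfw_flags: Optional[list] = None

out = SampleOutput(images=np.zeros((1, 8, 8, 3)))
print(out["images"].shape)  # dict-style access by field name
print(out[0].shape)         # tuple-style access; None fields are skipped
(images,) = out.to_tuple()  # explicit unpacking; `nsfw_flags` is dropped as None
```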
 
spaces/AI-Hobbyist/Hoyo-RVC/Changelog_CN.md DELETED
@@ -1,80 +0,0 @@
1
- ### 20230618 update
2
- v2 adds two new pretrained models, 32k and 48k
3
- Fix inference errors with non-f0 models
4
- For training sets longer than one hour, the index-building step now automatically shrinks the features with k-means to speed up index training, adding, and querying
5
- Ship a toy repository for converting vocals to guitar as a bonus
6
- Data preprocessing now drops outlier slices
7
- Add an ONNX export tab
8
-
9
- Failed experiments:
10
- ~~Add a temporal dimension to feature retrieval: failed, no noticeable effect~~
11
- ~~Add an optional PCA-R dimensionality reduction to feature retrieval: failed; with large data k-means already shrinks the dataset, and with small data the reduction takes longer than the matching time it saves~~
12
- ~~Support ONNX inference (with an inference-only mini package): failed; generating NSF still requires PyTorch~~
13
- ~~Randomly augment the input in pitch, gender, EQ, noise, etc. during training: failed, no noticeable effect~~
14
-
15
- todolist:
16
- Investigate integrating a small vocoder
17
- Support crepe for pitch detection on the training set
18
- Sync crepe precision support with RVC-config
19
- Hook up the F0 editor
20
-
21
-
22
- ### 20230528 update
23
- Add a v2 Jupyter notebook and a Korean changelog; add some environment dependencies
24
- Add a protection mode for breaths, voiceless consonants, and sibilants
25
- Support crepe-full inference
26
- UVR5 vocal/accompaniment separation gains three de-delay models and an MDX-Net de-reverb model; add the HP3 vocal extraction model
27
- Index names now include the version and experiment name
28
- Add audio export format options to vocal/accompaniment separation and batch inference export
29
- Deprecate training of 32k models
30
-
31
- ### 20230513 update
32
- Remove the infer_pack and uvr5_pack left over in the old runtime inside the one-click package
33
- Fix the fake-multiprocessing bug in training set preprocessing
34
- harvest pitch detection can now optionally apply a median filter to reduce mute artifacts, with an adjustable filter radius
35
- Add post-processing resampling to audio export
36
- The n_cpu process count for training now controls data preprocessing and f0 extraction instead of f0 extraction only
37
- Automatically detect index paths under the logs folder and offer them in a dropdown list
38
- Add an "FAQ" tab (see also the github rvc wiki)
39
- Add a pitch cache for inference on input audio at the same path (purpose: with harvest pitch extraction the whole pipeline repeats a long pitch-extraction pass; without the cache, users testing different timbres, indexes, and median-filter radii would wait painfully after the first run)
40
-
41
- ### 20230514 update
42
- Mix in a volume envelope aligned with the input (can mitigate "silent input yields faint noise output"; not recommended when the input has loud background noise; off by default (a value of 1 counts as off))
43
- Support saving extracted small models at a specified frequency (useful if you want to test inference at different epochs without keeping every large checkpoint and manually extracting a small model from a ckpt each time)
44
- Set environment variables to fix browser connection errors caused by a system-wide proxy on the server
45
- Support v2 pretrained models (currently only the 40k version is public for testing; the other two sample rates are not fully trained yet)
46
- Clamp excessive volume above 1 before inference
47
- Fine-tune data preprocessing parameters
48
-
49
-
50
- ### 20230409 update
51
- Fix training parameters and raise average GPU utilization: A100 from 25% up to about 90%, V100 from 50% to about 90%, 2060S from 60% to about 85%, P40 from 25% to about 95%; training speed improves significantly
52
- Parameter fix: total batch_size becomes per-GPU batch_size
53
- Fix total_epoch: the maximum is unlocked from 100 to 1000, and the default is raised from 10 to 20
54
- Fix ckpt extraction misdetecting whether a model carries pitch, which caused abnormal inference
55
- Fix distributed training saving a ckpt once per rank
56
- Filter NaN features during feature extraction
57
- Fix silent input producing random consonants or noise (old models need the training set redone and retraining)
58
-
59
- ### 20230416 update
60
- Add a local real-time voice changing mini GUI; double-click go-realtime-gui.bat to launch
61
- Both training and inference now filter out the band below 50 Hz
62
- Lower the pyworld minimum pitch for training and inference from the default 80 to 50, so male bass between 50 and 80 Hz is no longer muted
63
- The WebUI switches language according to the system locale (currently en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW; unsupported locales default to en_US)
64
- Fix detection of some GPUs (e.g. V100-16G and P4 previously failed to be detected)
65
-
66
- ### 20230428 update
67
- Upgrade the faiss index settings for higher speed and quality
68
- Drop the total_npy dependency; sharing a model no longer requires filling in total_npy
69
- Unlock the 16-series restriction; GPUs with 4 GB VRAM get a 4 GB inference configuration
70
- Fix a UVR5 vocal/accompaniment separation bug with some audio formats
71
- The real-time voice changing mini GUI now supports non-40k models and models without pitch guidance
72
-
73
- ### Future plans:
74
- Features:
75
- Support a multi-speaker training tab (up to 4 speakers)
76
-
77
- Base models:
78
- Collect breath wavs and add them to the training set to fix robotic breath artifacts in conversion
79
- We are training base models with a singing dataset added; they will be released in the future
80
-
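
The 20230618 entry shrinks features with k-means before building the index for long training sets; a minimal sketch of that idea with faiss (the dimension, cluster count, and index string are illustrative, not RVC's actual settings):

```python
import faiss
import numpy as np

feats = np.random.rand(200_000, 256).astype("float32")  # stand-in for extracted features

# Cluster the features down to 10k centroids so that index training,
# adding, and querying all operate on far less data.
kmeans = faiss.Kmeans(d=feats.shape[1], k=10_000, niter=20)
kmeans.train(feats)
reduced = kmeans.centroids  # shape (10000, 256)

index = faiss.index_factory(reduced.shape[1], "IVF256,Flat")
index.train(reduced)
index.add(reduced)
distances, ids = index.search(feats[:4], 8)  # query a few raw features
```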
 
spaces/AIFILMS/ControlNet-Video/model.py DELETED
@@ -1,760 +0,0 @@
1
- # This file is adapted from gradio_*.py in https://github.com/lllyasviel/ControlNet/tree/f4748e3630d8141d7765e2bd9b1e348f47847707
2
- # The original license file is LICENSE.ControlNet in this repo.
3
- from __future__ import annotations
4
-
5
- import pathlib
6
- import random
7
- import shlex
8
- import subprocess
9
- import sys
10
-
11
- import cv2
12
- import einops
13
- import numpy as np
14
- import torch
15
- from huggingface_hub import hf_hub_url
16
- from pytorch_lightning import seed_everything
17
-
18
- sys.path.append('ControlNet')
19
-
20
- import config
21
- from annotator.canny import apply_canny
22
- from annotator.hed import apply_hed, nms
23
- from annotator.midas import apply_midas
24
- from annotator.mlsd import apply_mlsd
25
- from annotator.openpose import apply_openpose
26
- from annotator.uniformer import apply_uniformer
27
- from annotator.util import HWC3, resize_image
28
- from cldm.model import create_model, load_state_dict
29
- from ldm.models.diffusion.ddim import DDIMSampler
30
- from share import *
31
-
32
-
33
- MODEL_NAMES = {
34
- 'canny': 'control_canny-fp16.safetensors',
35
- 'hough': 'control_mlsd-fp16.safetensors',
36
- 'hed': 'control_hed-fp16.safetensors',
37
- 'scribble': 'control_scribble-fp16.safetensors',
38
- 'pose': 'control_openpose-fp16.safetensors',
39
- 'seg': 'control_seg-fp16.safetensors',
40
- 'depth': 'control_depth-fp16.safetensors',
41
- 'normal': 'control_normal-fp16.safetensors',
42
- }
43
-
44
- MODEL_REPO = 'webui/ControlNet-modules-safetensors'
45
-
46
- DEFAULT_BASE_MODEL_REPO = 'runwayml/stable-diffusion-v1-5'
47
- DEFAULT_BASE_MODEL_FILENAME = 'v1-5-pruned-emaonly.safetensors'
48
- DEFAULT_BASE_MODEL_URL = 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors'
49
-
50
- class Model:
51
- def __init__(self,
52
- model_config_path: str = 'ControlNet/models/cldm_v15.yaml',
53
- model_dir: str = 'models'):
54
- self.device = torch.device(
55
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
56
- self.model = create_model(model_config_path).to(self.device)
57
- self.ddim_sampler = DDIMSampler(self.model)
58
- self.task_name = ''
59
-
60
- self.base_model_url = ''
61
-
62
- self.model_dir = pathlib.Path(model_dir)
63
- self.model_dir.mkdir(exist_ok=True, parents=True)
64
-
65
- self.download_models()
66
- self.set_base_model(DEFAULT_BASE_MODEL_REPO,
67
- DEFAULT_BASE_MODEL_FILENAME)
68
-
69
- def set_base_model(self, model_id: str, filename: str) -> str:
70
- if not model_id or not filename:
71
- return self.base_model_url
72
- base_model_url = hf_hub_url(model_id, filename)
73
- if base_model_url != self.base_model_url:
74
- self.load_base_model(base_model_url)
75
- self.base_model_url = base_model_url
76
- return self.base_model_url
77
-
78
-
79
- def download_base_model(self, model_url: str) -> pathlib.Path:
80
- self.model_dir.mkdir(exist_ok=True, parents=True)
81
- model_name = model_url.split('/')[-1]
82
- out_path = self.model_dir / model_name
83
- if not out_path.exists():
84
- subprocess.run(shlex.split(f'wget {model_url} -O {out_path}'))
85
- return out_path
86
-
87
- def load_base_model(self, model_url: str) -> None:
88
- model_path = self.download_base_model(model_url)
89
- self.model.load_state_dict(load_state_dict(model_path,
90
- location=self.device.type),
91
- strict=False)
92
-
93
- def load_weight(self, task_name: str) -> None:
94
- if task_name == self.task_name:
95
- return
96
- weight_path = self.get_weight_path(task_name)
97
- self.model.control_model.load_state_dict(
98
- load_state_dict(weight_path, location=self.device.type))
99
- self.task_name = task_name
100
-
101
- def get_weight_path(self, task_name: str) -> str:
102
- if 'scribble' in task_name:
103
- task_name = 'scribble'
104
- return f'{self.model_dir}/{MODEL_NAMES[task_name]}'
105
-
106
- def download_models(self) -> None:
107
- self.model_dir.mkdir(exist_ok=True, parents=True)
108
- for name in MODEL_NAMES.values():
109
- out_path = self.model_dir / name
110
- if out_path.exists():
111
- continue
112
- model_url = hf_hub_url(MODEL_REPO, name)
113
- subprocess.run(shlex.split(f'wget {model_url} -O {out_path}'))
114
-
115
- @torch.inference_mode()
116
- def process_canny(self, input_image, prompt, a_prompt, n_prompt,
117
- num_samples, image_resolution, ddim_steps, scale, seed,
118
- eta, low_threshold, high_threshold):
119
- self.load_weight('canny')
120
-
121
- img = resize_image(HWC3(input_image), image_resolution)
122
- H, W, C = img.shape
123
-
124
- detected_map = apply_canny(img, low_threshold, high_threshold)
125
- detected_map = HWC3(detected_map)
126
-
127
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
128
- control = torch.stack([control for _ in range(num_samples)], dim=0)
129
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
130
-
131
- if seed == -1:
132
- seed = random.randint(0, 65535)
133
- seed_everything(seed)
134
-
135
- if config.save_memory:
136
- self.model.low_vram_shift(is_diffusing=False)
137
-
138
- cond = {
139
- 'c_concat': [control],
140
- 'c_crossattn': [
141
- self.model.get_learned_conditioning(
142
- [prompt + ', ' + a_prompt] * num_samples)
143
- ]
144
- }
145
- un_cond = {
146
- 'c_concat': [control],
147
- 'c_crossattn':
148
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
149
- }
150
- shape = (4, H // 8, W // 8)
151
-
152
- if config.save_memory:
153
- self.model.low_vram_shift(is_diffusing=True)
154
-
155
- samples, intermediates = self.ddim_sampler.sample(
156
- ddim_steps,
157
- num_samples,
158
- shape,
159
- cond,
160
- verbose=False,
161
- eta=eta,
162
- unconditional_guidance_scale=scale,
163
- unconditional_conditioning=un_cond)
164
-
165
- if config.save_memory:
166
- self.model.low_vram_shift(is_diffusing=False)
167
-
168
- x_samples = self.model.decode_first_stage(samples)
169
- x_samples = (
170
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
171
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
172
-
173
- results = [x_samples[i] for i in range(num_samples)]
174
- return [255 - detected_map] + results
175
-
176
- @torch.inference_mode()
177
- def process_hough(self, input_image, prompt, a_prompt, n_prompt,
178
- num_samples, image_resolution, detect_resolution,
179
- ddim_steps, scale, seed, eta, value_threshold,
180
- distance_threshold):
181
- self.load_weight('hough')
182
-
183
- input_image = HWC3(input_image)
184
- detected_map = apply_mlsd(resize_image(input_image, detect_resolution),
185
- value_threshold, distance_threshold)
186
- detected_map = HWC3(detected_map)
187
- img = resize_image(input_image, image_resolution)
188
- H, W, C = img.shape
189
-
190
- detected_map = cv2.resize(detected_map, (W, H),
191
- interpolation=cv2.INTER_NEAREST)
192
-
193
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
194
- control = torch.stack([control for _ in range(num_samples)], dim=0)
195
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
196
-
197
- if seed == -1:
198
- seed = random.randint(0, 65535)
199
- seed_everything(seed)
200
-
201
- if config.save_memory:
202
- self.model.low_vram_shift(is_diffusing=False)
203
-
204
- cond = {
205
- 'c_concat': [control],
206
- 'c_crossattn': [
207
- self.model.get_learned_conditioning(
208
- [prompt + ', ' + a_prompt] * num_samples)
209
- ]
210
- }
211
- un_cond = {
212
- 'c_concat': [control],
213
- 'c_crossattn':
214
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
215
- }
216
- shape = (4, H // 8, W // 8)
217
-
218
- if config.save_memory:
219
- self.model.low_vram_shift(is_diffusing=True)
220
-
221
- samples, intermediates = self.ddim_sampler.sample(
222
- ddim_steps,
223
- num_samples,
224
- shape,
225
- cond,
226
- verbose=False,
227
- eta=eta,
228
- unconditional_guidance_scale=scale,
229
- unconditional_conditioning=un_cond)
230
-
231
- if config.save_memory:
232
- self.model.low_vram_shift(is_diffusing=False)
233
-
234
- x_samples = self.model.decode_first_stage(samples)
235
- x_samples = (
236
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
237
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
238
-
239
- results = [x_samples[i] for i in range(num_samples)]
240
- return [
241
- 255 - cv2.dilate(detected_map,
242
- np.ones(shape=(3, 3), dtype=np.uint8),
243
- iterations=1)
244
- ] + results
245
-
246
- @torch.inference_mode()
247
- def process_hed(self, input_image, prompt, a_prompt, n_prompt, num_samples,
248
- image_resolution, detect_resolution, ddim_steps, scale,
249
- seed, eta):
250
- self.load_weight('hed')
251
-
252
- input_image = HWC3(input_image)
253
- detected_map = apply_hed(resize_image(input_image, detect_resolution))
254
- detected_map = HWC3(detected_map)
255
- img = resize_image(input_image, image_resolution)
256
- H, W, C = img.shape
257
-
258
- detected_map = cv2.resize(detected_map, (W, H),
259
- interpolation=cv2.INTER_LINEAR)
260
-
261
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
262
- control = torch.stack([control for _ in range(num_samples)], dim=0)
263
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
264
-
265
- if seed == -1:
266
- seed = random.randint(0, 65535)
267
- seed_everything(seed)
268
-
269
- if config.save_memory:
270
- self.model.low_vram_shift(is_diffusing=False)
271
-
272
- cond = {
273
- 'c_concat': [control],
274
- 'c_crossattn': [
275
- self.model.get_learned_conditioning(
276
- [prompt + ', ' + a_prompt] * num_samples)
277
- ]
278
- }
279
- un_cond = {
280
- 'c_concat': [control],
281
- 'c_crossattn':
282
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
283
- }
284
- shape = (4, H // 8, W // 8)
285
-
286
- if config.save_memory:
287
- self.model.low_vram_shift(is_diffusing=True)
288
-
289
- samples, intermediates = self.ddim_sampler.sample(
290
- ddim_steps,
291
- num_samples,
292
- shape,
293
- cond,
294
- verbose=False,
295
- eta=eta,
296
- unconditional_guidance_scale=scale,
297
- unconditional_conditioning=un_cond)
298
-
299
- if config.save_memory:
300
- self.model.low_vram_shift(is_diffusing=False)
301
-
302
- x_samples = self.model.decode_first_stage(samples)
303
- x_samples = (
304
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
305
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
306
-
307
- results = [x_samples[i] for i in range(num_samples)]
308
- return [detected_map] + results
309
-
310
- @torch.inference_mode()
311
- def process_scribble(self, input_image, prompt, a_prompt, n_prompt,
312
- num_samples, image_resolution, ddim_steps, scale,
313
- seed, eta):
314
- self.load_weight('scribble')
315
-
316
- img = resize_image(HWC3(input_image), image_resolution)
317
- H, W, C = img.shape
318
-
319
- detected_map = np.zeros_like(img, dtype=np.uint8)
320
- detected_map[np.min(img, axis=2) < 127] = 255
321
-
322
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
323
- control = torch.stack([control for _ in range(num_samples)], dim=0)
324
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
325
-
326
- if seed == -1:
327
- seed = random.randint(0, 65535)
328
- seed_everything(seed)
329
-
330
- if config.save_memory:
331
- self.model.low_vram_shift(is_diffusing=False)
332
-
333
- cond = {
334
- 'c_concat': [control],
335
- 'c_crossattn': [
336
- self.model.get_learned_conditioning(
337
- [prompt + ', ' + a_prompt] * num_samples)
338
- ]
339
- }
340
- un_cond = {
341
- 'c_concat': [control],
342
- 'c_crossattn':
343
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
344
- }
345
- shape = (4, H // 8, W // 8)
346
-
347
- if config.save_memory:
348
- self.model.low_vram_shift(is_diffusing=True)
349
-
350
- samples, intermediates = self.ddim_sampler.sample(
351
- ddim_steps,
352
- num_samples,
353
- shape,
354
- cond,
355
- verbose=False,
356
- eta=eta,
357
- unconditional_guidance_scale=scale,
358
- unconditional_conditioning=un_cond)
359
-
360
- if config.save_memory:
361
- self.model.low_vram_shift(is_diffusing=False)
362
-
363
- x_samples = self.model.decode_first_stage(samples)
364
- x_samples = (
365
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
366
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
367
-
368
- results = [x_samples[i] for i in range(num_samples)]
369
- return [255 - detected_map] + results
370
-
371
- @torch.inference_mode()
372
- def process_scribble_interactive(self, input_image, prompt, a_prompt,
373
- n_prompt, num_samples, image_resolution,
374
- ddim_steps, scale, seed, eta):
375
- self.load_weight('scribble')
376
-
377
- img = resize_image(HWC3(input_image['mask'][:, :, 0]),
378
- image_resolution)
379
- H, W, C = img.shape
380
-
381
- detected_map = np.zeros_like(img, dtype=np.uint8)
382
- detected_map[np.min(img, axis=2) > 127] = 255
383
-
384
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
385
- control = torch.stack([control for _ in range(num_samples)], dim=0)
386
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
387
-
388
- if seed == -1:
389
- seed = random.randint(0, 65535)
390
- seed_everything(seed)
391
-
392
- if config.save_memory:
393
- self.model.low_vram_shift(is_diffusing=False)
394
-
395
- cond = {
396
- 'c_concat': [control],
397
- 'c_crossattn': [
398
- self.model.get_learned_conditioning(
399
- [prompt + ', ' + a_prompt] * num_samples)
400
- ]
401
- }
402
- un_cond = {
403
- 'c_concat': [control],
404
- 'c_crossattn':
405
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
406
- }
407
- shape = (4, H // 8, W // 8)
408
-
409
- if config.save_memory:
410
- self.model.low_vram_shift(is_diffusing=True)
411
-
412
- samples, intermediates = self.ddim_sampler.sample(
413
- ddim_steps,
414
- num_samples,
415
- shape,
416
- cond,
417
- verbose=False,
418
- eta=eta,
419
- unconditional_guidance_scale=scale,
420
- unconditional_conditioning=un_cond)
421
-
422
- if config.save_memory:
423
- self.model.low_vram_shift(is_diffusing=False)
424
-
425
- x_samples = self.model.decode_first_stage(samples)
426
- x_samples = (
427
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
428
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
429
-
430
- results = [x_samples[i] for i in range(num_samples)]
431
- return [255 - detected_map] + results
432
-
433
- @torch.inference_mode()
434
- def process_fake_scribble(self, input_image, prompt, a_prompt, n_prompt,
435
- num_samples, image_resolution, detect_resolution,
436
- ddim_steps, scale, seed, eta):
437
- self.load_weight('scribble')
438
-
439
- input_image = HWC3(input_image)
440
- detected_map = apply_hed(resize_image(input_image, detect_resolution))
441
- detected_map = HWC3(detected_map)
442
- img = resize_image(input_image, image_resolution)
443
- H, W, C = img.shape
444
-
445
- detected_map = cv2.resize(detected_map, (W, H),
446
- interpolation=cv2.INTER_LINEAR)
447
- detected_map = nms(detected_map, 127, 3.0)
448
- detected_map = cv2.GaussianBlur(detected_map, (0, 0), 3.0)
449
- detected_map[detected_map > 4] = 255
450
- detected_map[detected_map < 255] = 0
451
-
452
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
453
- control = torch.stack([control for _ in range(num_samples)], dim=0)
454
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
455
-
456
- if seed == -1:
457
- seed = random.randint(0, 65535)
458
- seed_everything(seed)
459
-
460
- if config.save_memory:
461
- self.model.low_vram_shift(is_diffusing=False)
462
-
463
- cond = {
464
- 'c_concat': [control],
465
- 'c_crossattn': [
466
- self.model.get_learned_conditioning(
467
- [prompt + ', ' + a_prompt] * num_samples)
468
- ]
469
- }
470
- un_cond = {
471
- 'c_concat': [control],
472
- 'c_crossattn':
473
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
474
- }
475
- shape = (4, H // 8, W // 8)
476
-
477
- if config.save_memory:
478
- self.model.low_vram_shift(is_diffusing=True)
479
-
480
- samples, intermediates = self.ddim_sampler.sample(
481
- ddim_steps,
482
- num_samples,
483
- shape,
484
- cond,
485
- verbose=False,
486
- eta=eta,
487
- unconditional_guidance_scale=scale,
488
- unconditional_conditioning=un_cond)
489
-
490
- if config.save_memory:
491
- self.model.low_vram_shift(is_diffusing=False)
492
-
493
- x_samples = self.model.decode_first_stage(samples)
494
- x_samples = (
495
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
496
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
497
-
498
- results = [x_samples[i] for i in range(num_samples)]
499
- return [255 - detected_map] + results
500
-
501
- @torch.inference_mode()
502
- def process_pose(self, input_image, prompt, a_prompt, n_prompt,
503
- num_samples, image_resolution, detect_resolution,
504
- ddim_steps, scale, seed, eta):
505
- self.load_weight('pose')
506
-
507
- input_image = HWC3(input_image)
508
- detected_map, _ = apply_openpose(
509
- resize_image(input_image, detect_resolution))
510
- detected_map = HWC3(detected_map)
511
- img = resize_image(input_image, image_resolution)
512
- H, W, C = img.shape
513
-
514
- detected_map = cv2.resize(detected_map, (W, H),
515
- interpolation=cv2.INTER_NEAREST)
516
-
517
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
518
- control = torch.stack([control for _ in range(num_samples)], dim=0)
519
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
520
-
521
- if seed == -1:
522
- seed = random.randint(0, 65535)
523
- seed_everything(seed)
524
-
525
- if config.save_memory:
526
- self.model.low_vram_shift(is_diffusing=False)
527
-
528
- cond = {
529
- 'c_concat': [control],
530
- 'c_crossattn': [
531
- self.model.get_learned_conditioning(
532
- [prompt + ', ' + a_prompt] * num_samples)
533
- ]
534
- }
535
- un_cond = {
536
- 'c_concat': [control],
537
- 'c_crossattn':
538
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
539
- }
540
- shape = (4, H // 8, W // 8)
541
-
542
- if config.save_memory:
543
- self.model.low_vram_shift(is_diffusing=True)
544
-
545
- samples, intermediates = self.ddim_sampler.sample(
546
- ddim_steps,
547
- num_samples,
548
- shape,
549
- cond,
550
- verbose=False,
551
- eta=eta,
552
- unconditional_guidance_scale=scale,
553
- unconditional_conditioning=un_cond)
554
-
555
- if config.save_memory:
556
- self.model.low_vram_shift(is_diffusing=False)
557
-
558
- x_samples = self.model.decode_first_stage(samples)
559
- x_samples = (
560
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
561
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
562
-
563
- results = [x_samples[i] for i in range(num_samples)]
564
- return [detected_map] + results
565
-
566
- @torch.inference_mode()
567
- def process_seg(self, input_image, prompt, a_prompt, n_prompt, num_samples,
568
- image_resolution, detect_resolution, ddim_steps, scale,
569
- seed, eta):
570
- self.load_weight('seg')
571
-
572
- input_image = HWC3(input_image)
573
- detected_map = apply_uniformer(
574
- resize_image(input_image, detect_resolution))
575
- img = resize_image(input_image, image_resolution)
576
- H, W, C = img.shape
577
-
578
- detected_map = cv2.resize(detected_map, (W, H),
579
- interpolation=cv2.INTER_NEAREST)
580
-
581
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
582
- control = torch.stack([control for _ in range(num_samples)], dim=0)
583
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
584
-
585
- if seed == -1:
586
- seed = random.randint(0, 65535)
587
- seed_everything(seed)
588
-
589
- if config.save_memory:
590
- self.model.low_vram_shift(is_diffusing=False)
591
-
592
- cond = {
593
- 'c_concat': [control],
594
- 'c_crossattn': [
595
- self.model.get_learned_conditioning(
596
- [prompt + ', ' + a_prompt] * num_samples)
597
- ]
598
- }
599
- un_cond = {
600
- 'c_concat': [control],
601
- 'c_crossattn':
602
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
603
- }
604
- shape = (4, H // 8, W // 8)
605
-
606
- if config.save_memory:
607
- self.model.low_vram_shift(is_diffusing=True)
608
-
609
- samples, intermediates = self.ddim_sampler.sample(
610
- ddim_steps,
611
- num_samples,
612
- shape,
613
- cond,
614
- verbose=False,
615
- eta=eta,
616
- unconditional_guidance_scale=scale,
617
- unconditional_conditioning=un_cond)
618
-
619
- if config.save_memory:
620
- self.model.low_vram_shift(is_diffusing=False)
621
-
622
- x_samples = self.model.decode_first_stage(samples)
623
- x_samples = (
624
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
625
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
626
-
627
- results = [x_samples[i] for i in range(num_samples)]
628
- return [detected_map] + results
629
-
630
- @torch.inference_mode()
631
- def process_depth(self, input_image, prompt, a_prompt, n_prompt,
632
- num_samples, image_resolution, detect_resolution,
633
- ddim_steps, scale, seed, eta):
634
- self.load_weight('depth')
635
-
636
- input_image = HWC3(input_image)
637
- detected_map, _ = apply_midas(
638
- resize_image(input_image, detect_resolution))
639
- detected_map = HWC3(detected_map)
640
- img = resize_image(input_image, image_resolution)
641
- H, W, C = img.shape
642
-
643
- detected_map = cv2.resize(detected_map, (W, H),
644
- interpolation=cv2.INTER_LINEAR)
645
-
646
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
647
- control = torch.stack([control for _ in range(num_samples)], dim=0)
648
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
649
-
650
- if seed == -1:
651
- seed = random.randint(0, 65535)
652
- seed_everything(seed)
653
-
654
- if config.save_memory:
655
- self.model.low_vram_shift(is_diffusing=False)
656
-
657
- cond = {
658
- 'c_concat': [control],
659
- 'c_crossattn': [
660
- self.model.get_learned_conditioning(
661
- [prompt + ', ' + a_prompt] * num_samples)
662
- ]
663
- }
664
- un_cond = {
665
- 'c_concat': [control],
666
- 'c_crossattn':
667
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
668
- }
669
- shape = (4, H // 8, W // 8)
670
-
671
- if config.save_memory:
672
- self.model.low_vram_shift(is_diffusing=True)
673
-
674
- samples, intermediates = self.ddim_sampler.sample(
675
- ddim_steps,
676
- num_samples,
677
- shape,
678
- cond,
679
- verbose=False,
680
- eta=eta,
681
- unconditional_guidance_scale=scale,
682
- unconditional_conditioning=un_cond)
683
-
684
- if config.save_memory:
685
- self.model.low_vram_shift(is_diffusing=False)
686
-
687
- x_samples = self.model.decode_first_stage(samples)
688
- x_samples = (
689
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
690
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
691
-
692
- results = [x_samples[i] for i in range(num_samples)]
693
- return [detected_map] + results
694
-
695
- @torch.inference_mode()
696
- def process_normal(self, input_image, prompt, a_prompt, n_prompt,
697
- num_samples, image_resolution, detect_resolution,
698
- ddim_steps, scale, seed, eta, bg_threshold):
699
- self.load_weight('normal')
700
-
701
- input_image = HWC3(input_image)
702
- _, detected_map = apply_midas(resize_image(input_image,
703
- detect_resolution),
704
- bg_th=bg_threshold)
705
- detected_map = HWC3(detected_map)
706
- img = resize_image(input_image, image_resolution)
707
- H, W, C = img.shape
708
-
709
- detected_map = cv2.resize(detected_map, (W, H),
710
- interpolation=cv2.INTER_LINEAR)
711
-
712
- control = torch.from_numpy(
713
- detected_map[:, :, ::-1].copy()).float().cuda() / 255.0
714
- control = torch.stack([control for _ in range(num_samples)], dim=0)
715
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
716
-
717
- if seed == -1:
718
- seed = random.randint(0, 65535)
719
- seed_everything(seed)
720
-
721
- if config.save_memory:
722
- self.model.low_vram_shift(is_diffusing=False)
723
-
724
- cond = {
725
- 'c_concat': [control],
726
- 'c_crossattn': [
727
- self.model.get_learned_conditioning(
728
- [prompt + ', ' + a_prompt] * num_samples)
729
- ]
730
- }
731
- un_cond = {
732
- 'c_concat': [control],
733
- 'c_crossattn':
734
- [self.model.get_learned_conditioning([n_prompt] * num_samples)]
735
- }
736
- shape = (4, H // 8, W // 8)
737
-
738
- if config.save_memory:
739
- self.model.low_vram_shift(is_diffusing=True)
740
-
741
- samples, intermediates = self.ddim_sampler.sample(
742
- ddim_steps,
743
- num_samples,
744
- shape,
745
- cond,
746
- verbose=False,
747
- eta=eta,
748
- unconditional_guidance_scale=scale,
749
- unconditional_conditioning=un_cond)
750
-
751
- if config.save_memory:
752
- self.model.low_vram_shift(is_diffusing=False)
753
-
754
- x_samples = self.model.decode_first_stage(samples)
755
- x_samples = (
756
- einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 +
757
- 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
758
-
759
- results = [x_samples[i] for i in range(num_samples)]
760
- return [detected_map] + results
 
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/autoencoder_multi.py DELETED
@@ -1,201 +0,0 @@
1
- """
2
- The difference from autoencoder.py: when computing the loss, autoencoder.py has only one discriminator, while this version adds a multiwindowDiscriminator, so the optimizer
3
- now updates the following parameters:
4
- opt_disc = torch.optim.Adam(list(self.loss.discriminator.parameters()) + list(self.loss.discriminator_multi.parameters()),
5
- lr=lr, betas=(0.5, 0.9))
6
- """
7
-
8
- import os
9
- import torch
10
- import pytorch_lightning as pl
11
- import torch.nn.functional as F
12
- from contextlib import contextmanager
13
-
14
- from packaging import version
15
- import numpy as np
16
- from ldm.modules.diffusionmodules.model import Encoder, Decoder
17
- from ldm.modules.distributions.distributions import DiagonalGaussianDistribution
18
- from torch.optim.lr_scheduler import LambdaLR
19
- from ldm.util import instantiate_from_config
20
-
21
-
22
-
23
- class AutoencoderKL(pl.LightningModule):
24
- def __init__(self,
25
- ddconfig,
26
- lossconfig,
27
- embed_dim,
28
- ckpt_path=None,
29
- ignore_keys=[],
30
- image_key="image",
31
- colorize_nlabels=None,
32
- monitor=None,
33
- ):
34
- super().__init__()
35
- self.image_key = image_key
36
- self.encoder = Encoder(**ddconfig)
37
- self.decoder = Decoder(**ddconfig)
38
- self.loss = instantiate_from_config(lossconfig)
39
- assert ddconfig["double_z"]
40
- self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1)
41
- self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
42
- self.embed_dim = embed_dim
43
- if colorize_nlabels is not None:
44
- assert type(colorize_nlabels)==int
45
- self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
46
- if monitor is not None:
47
- self.monitor = monitor
48
- if ckpt_path is not None:
49
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
50
-
51
- def init_from_ckpt(self, path, ignore_keys=list()):
52
- sd = torch.load(path, map_location="cpu")["state_dict"]
53
- keys = list(sd.keys())
54
- for k in keys:
55
- for ik in ignore_keys:
56
- if k.startswith(ik):
57
- print("Deleting key {} from state_dict.".format(k))
58
- del sd[k]
59
- self.load_state_dict(sd, strict=False)
60
- print(f"Restored from {path}")
61
-
62
- def encode(self, x):
63
- h = self.encoder(x)
64
- moments = self.quant_conv(h)
65
- posterior = DiagonalGaussianDistribution(moments)
66
- return posterior
67
-
68
- def decode(self, z):
69
- z = self.post_quant_conv(z)
70
- dec = self.decoder(z)
71
- return dec
72
-
73
- def forward(self, input, sample_posterior=True):
74
- posterior = self.encode(input)
75
- if sample_posterior:
76
- z = posterior.sample()
77
- else:
78
- z = posterior.mode()
79
- dec = self.decode(z)
80
- return dec, posterior
81
-
82
- def get_input(self, batch, k):
83
- x = batch[k]
84
- if len(x.shape) == 3:
85
- x = x[..., None]
86
- x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
87
- return x
88
-
89
- def training_step(self, batch, batch_idx, optimizer_idx):
90
- inputs = self.get_input(batch, self.image_key)
91
- reconstructions, posterior = self(inputs)
92
-
93
- if optimizer_idx == 0:
94
- # train encoder+decoder+logvar
95
- aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
96
- last_layer=self.get_last_layer(), split="train")
97
- self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
98
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False)
99
- return aeloss
100
-
101
- if optimizer_idx == 1:
102
- # train the discriminator
103
- discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
104
- last_layer=self.get_last_layer(), split="train")
105
-
106
- self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
107
- self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False)
108
- return discloss
109
-
110
- def validation_step(self, batch, batch_idx):
111
- inputs = self.get_input(batch, self.image_key)
112
- reconstructions, posterior = self(inputs)
113
- aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step,
114
- last_layer=self.get_last_layer(), split="val")
115
-
116
- discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step,
117
- last_layer=self.get_last_layer(), split="val")
118
-
119
- self.log("val/rec_loss", log_dict_ae["val/rec_loss"])
120
- self.log_dict(log_dict_ae)
121
- self.log_dict(log_dict_disc)
122
- return self.log_dict
123
-
124
- def test_step(self, batch, batch_idx):
125
- inputs = self.get_input(batch, self.image_key)# inputs shape:(b,c,mel_len,T) or (b,c,h,w)
126
- reconstructions, posterior = self(inputs)# reconstructions:(b,c,mel_len,T) or (b,c,h,w)
127
- reconstructions = (reconstructions + 1)/2 # to mel scale
128
- test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path)
129
- savedir = os.path.join(self.trainer.log_dir,f'output_imgs_{test_ckpt_path}','fake_class')
130
- if not os.path.exists(savedir):
131
- os.makedirs(savedir)
132
-
133
- file_names = batch['f_name']
134
- # print(f"reconstructions.shape:{reconstructions.shape}",file_names)
135
- reconstructions = reconstructions.cpu().numpy().squeeze(1) # squeeze channel dim
136
- for b in range(reconstructions.shape[0]):
137
- vname_num_split_index = file_names[b].rfind('_')# file_names[b]:video_name+'_'+num
138
- v_n,num = file_names[b][:vname_num_split_index],file_names[b][vname_num_split_index+1:]
139
- save_img_path = os.path.join(savedir,f'{v_n}_sample_{num}.npy')
140
- np.save(save_img_path,reconstructions[b])
141
-
142
- return None
143
-
144
- def configure_optimizers(self):
145
- lr = self.learning_rate
146
- opt_ae = torch.optim.Adam(list(self.encoder.parameters())+
147
- list(self.decoder.parameters())+
148
- list(self.quant_conv.parameters())+
149
- list(self.post_quant_conv.parameters()),
150
- lr=lr, betas=(0.5, 0.9))
151
- opt_disc = torch.optim.Adam(list(self.loss.discriminator.parameters()) + list(self.loss.discriminator_multi.parameters()),
152
- lr=lr, betas=(0.5, 0.9))
153
- return [opt_ae, opt_disc], []
154
-
155
- def get_last_layer(self):
156
- return self.decoder.conv_out.weight
157
-
158
- @torch.no_grad()
159
- def log_images(self, batch, only_inputs=False, **kwargs):
160
- log = dict()
161
- x = self.get_input(batch, self.image_key)
162
- x = x.to(self.device)
163
- if not only_inputs:
164
- xrec, posterior = self(x)
165
- if x.shape[1] > 3:
166
- # colorize with random projection
167
- assert xrec.shape[1] > 3
168
- x = self.to_rgb(x)
169
- xrec = self.to_rgb(xrec)
170
- log["samples"] = self.decode(torch.randn_like(posterior.sample()))
171
- log["reconstructions"] = xrec
172
- log["inputs"] = x
173
- return log
174
-
175
- def to_rgb(self, x):
176
- assert self.image_key == "segmentation"
177
- if not hasattr(self, "colorize"):
178
- self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
179
- x = F.conv2d(x, weight=self.colorize)
180
- x = 2.*(x-x.min())/(x.max()-x.min()) - 1.
181
- return x
182
-
183
-
184
- class IdentityFirstStage(torch.nn.Module):
185
- def __init__(self, *args, vq_interface=False, **kwargs):
186
- self.vq_interface = vq_interface # TODO: Should be true by default but check to not break older stuff
187
- super().__init__()
188
-
189
- def encode(self, x, *args, **kwargs):
190
- return x
191
-
192
- def decode(self, x, *args, **kwargs):
193
- return x
194
-
195
- def quantize(self, x, *args, **kwargs):
196
- if self.vq_interface:
197
- return x, None, [None, None, None]
198
- return x
199
-
200
- def forward(self, x, *args, **kwargs):
201
- return x
 
spaces/AILab-CVC/SEED-LLaMA/scripts/start_backend_14b.sh DELETED
@@ -1,10 +0,0 @@
1
-
2
- python3 gradio_demo/seed_llama_flask.py \
3
- --image_transform configs/transform/clip_transform.yaml \
4
- --tokenizer configs/tokenizer/seed_llama_tokenizer.yaml \
5
- --model configs/llm/seed_llama_14b_8bit.yaml \
6
- --port 7890 \
7
- --llm_device cuda:0 \
8
- --tokenizer_device cuda:0 \
9
- --offload_encoder \
10
- --offload_decoder
 
spaces/Adapter/CoAdapter/ldm/data/dataset_depth.py DELETED
@@ -1,35 +0,0 @@
1
- import json
2
- import cv2
3
- import os
4
- from basicsr.utils import img2tensor
5
-
6
-
7
- class DepthDataset():
8
- def __init__(self, meta_file):
9
- super(DepthDataset, self).__init__()
10
-
11
- self.files = []
12
- with open(meta_file, 'r') as f:
13
- lines = f.readlines()
14
- for line in lines:
15
- img_path = line.strip()
16
- depth_img_path = img_path.rsplit('.', 1)[0] + '.depth.png'
17
- txt_path = img_path.rsplit('.', 1)[0] + '.txt'
18
- self.files.append({'img_path': img_path, 'depth_img_path': depth_img_path, 'txt_path': txt_path})
19
-
20
- def __getitem__(self, idx):
21
- file = self.files[idx]
22
-
23
- im = cv2.imread(file['img_path'])
24
- im = img2tensor(im, bgr2rgb=True, float32=True) / 255.
25
-
26
- depth = cv2.imread(file['depth_img_path']) # [:,:,0]
27
- depth = img2tensor(depth, bgr2rgb=True, float32=True) / 255. # [0].unsqueeze(0)#/255.
28
-
29
- with open(file['txt_path'], 'r') as fs:
30
- sentence = fs.readline().strip()
31
-
32
- return {'im': im, 'depth': depth, 'sentence': sentence}
33
-
34
- def __len__(self):
35
- return len(self.files)
 
spaces/AiMimicry/sovits-models/cluster/train_cluster.py DELETED
@@ -1,89 +0,0 @@
1
- import os
2
- from glob import glob
3
- from pathlib import Path
4
- import torch
5
- import logging
6
- import argparse
7
- import torch
8
- import numpy as np
9
- from sklearn.cluster import KMeans, MiniBatchKMeans
10
- import tqdm
11
- logging.basicConfig(level=logging.INFO)
12
- logger = logging.getLogger(__name__)
13
- import time
14
- import random
15
-
16
- def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False):
17
-
18
- logger.info(f"Loading features from {in_dir}")
19
- features = []
20
- nums = len(list(in_dir.glob("*.soft.pt")))  # number of feature files, so the log below is meaningful
21
- for path in tqdm.tqdm(in_dir.glob("*.soft.pt")):
22
- features.append(torch.load(path).squeeze(0).numpy().T)
23
- # print(features[-1].shape)
24
- features = np.concatenate(features, axis=0)
25
- print(nums, features.nbytes / 1024**2, "MB, shape:", features.shape, features.dtype)
26
- features = features.astype(np.float32)
27
- logger.info(f"Clustering features of shape: {features.shape}")
28
- t = time.time()
29
- if use_minibatch:
30
- kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features)
31
- else:
32
- kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features)
33
- print(time.time()-t, "s")
34
-
35
- x = {
36
- "n_features_in_": kmeans.n_features_in_,
37
- "_n_threads": kmeans._n_threads,
38
- "cluster_centers_": kmeans.cluster_centers_,
39
- }
40
- print("end")
41
-
42
- return x
43
-
44
-
45
- if __name__ == "__main__":
46
-
47
- parser = argparse.ArgumentParser()
48
- parser.add_argument('--dataset', type=Path, default="./dataset/44k",
49
- help='path of training data directory')
50
- parser.add_argument('--output', type=Path, default="logs/44k",
51
- help='path of model output directory')
52
-
53
- args = parser.parse_args()
54
-
55
- checkpoint_dir = args.output
56
- dataset = args.dataset
57
- n_clusters = 10000
58
-
59
- ckpt = {}
60
- for spk in os.listdir(dataset):
61
- if os.path.isdir(dataset/spk):
62
- print(f"train kmeans for {spk}...")
63
- in_dir = dataset/spk
64
- x = train_cluster(in_dir, n_clusters, verbose=False)
65
- ckpt[spk] = x
66
-
67
- checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt"
68
- checkpoint_path.parent.mkdir(exist_ok=True, parents=True)
69
- torch.save(
70
- ckpt,
71
- checkpoint_path,
72
- )
73
-
74
-
75
- # import cluster
76
- # for spk in tqdm.tqdm(os.listdir("dataset")):
77
- # if os.path.isdir(f"dataset/{spk}"):
78
- # print(f"start kmeans inference for {spk}...")
79
- # for feature_path in tqdm.tqdm(glob(f"dataset/{spk}/*.discrete.npy", recursive=True)):
80
- # mel_path = feature_path.replace(".discrete.npy",".mel.npy")
81
- # mel_spectrogram = np.load(mel_path)
82
- # feature_len = mel_spectrogram.shape[-1]
83
- # c = np.load(feature_path)
84
- # c = utils.tools.repeat_expand_2d(torch.FloatTensor(c), feature_len).numpy()
85
- # feature = c.T
86
- # feature_class = cluster.get_cluster_result(feature, spk)
87
- # np.save(feature_path.replace(".discrete.npy", ".discrete_class.npy"), feature_class)
88
-
89
-
 
spaces/AkiKagura/Marco-Generation-Img2img/app.py DELETED
@@ -1,74 +0,0 @@
1
- import gradio as gr
2
- import torch
3
- # from torch import autocast  # only for GPU
4
-
5
- from PIL import Image
6
- import numpy as np
7
- from io import BytesIO
8
- import os
9
- MY_SECRET_TOKEN=os.environ.get('HF_TOKEN_SD')
10
-
11
- #from diffusers import StableDiffusionPipeline
12
- from diffusers import StableDiffusionImg2ImgPipeline
13
-
14
- def empty_checker(images, **kwargs): return images, False
15
-
16
- print("hello")
17
-
18
- YOUR_TOKEN=MY_SECRET_TOKEN
19
-
20
- device="cpu"
21
-
22
- # img2img pipeline
23
- img_pipe = StableDiffusionImg2ImgPipeline.from_pretrained("AkiKagura/mkgen-diffusion", use_auth_token=YOUR_TOKEN)
24
- img_pipe.safety_checker = empty_checker
25
- img_pipe.to(device)
26
-
27
- source_img = gr.Image(source="upload", type="filepath", label="init_img")
28
- gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[1], height="auto")
29
-
30
- def resize(img):
31
- #baseheight = value
32
- img = Image.open(img)
33
- #hpercent = (baseheight/float(img.size[1]))
34
- #wsize = int((float(img.size[0])*float(hpercent)))
35
- #img = img.resize((wsize,baseheight), Image.Resampling.LANCZOS)
36
- hsize = img.size[1]
37
- wsize = img.size[0]
38
- if 6*wsize <= 5*hsize:
39
- wsize = 512
40
- hsize = 768
41
- elif 4*wsize >= 5*hsize:
42
- wsize = 768
43
- hsize = 512
44
- else:
45
- wsize = 512
46
- hsize = 512
47
- img = img.resize((wsize,hsize), Image.Resampling.LANCZOS)
48
- return img, wsize, hsize
49
-
50
-
51
- def infer(source_img, prompt, guide, steps, seed, strength):
52
- generator = torch.Generator('cpu').manual_seed(seed)
53
-
54
- source_image, img_w, img_h = resize(source_img)
55
- source_image.save('source.png')
56
- images_list = img_pipe([prompt] * 1, init_image=source_image, strength=strength, guidance_scale=guide, num_inference_steps=steps, width=img_w, height=img_h)
57
- images = []
58
-
59
- for i, image in enumerate(images_list["images"]):
60
- images.append(image)
61
- return images
62
-
63
- print("done")
64
-
65
- title="Marco Generation Img2img"
66
- description="<p style='text-align: center;'>Upload your image and input 'mkmk woman' to get Marco image. <br />Warning: Slow process... about 10 min inference time.</p>"
67
-
68
- gr.Interface(fn=infer, inputs=[source_img,
69
- "text",
70
- gr.Slider(2, 15, value = 7, label = 'Guidence Scale'),
71
- gr.Slider(10, 50, value = 25, step = 1, label = 'Number of Iterations'),
72
- gr.Slider(label = "Seed", minimum = 0, maximum = 2147483647, step = 1, randomize = True),
73
- gr.Slider(label='Strength', minimum = 0, maximum = 1, step = .05, value = .75)],
74
- outputs=gallery,title=title,description=description, allow_flagging="manual", flagging_dir="flagged").queue(max_size=100).launch(enable_queue=True)
 
spaces/Alesteba/NeRF_ficus-pxl/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: NeRF Ficus-pxl
3
- emoji: 🐠
4
- colorFrom: indigo
5
- colorTo: blue
6
- sdk: streamlit
7
- sdk_version: 1.17.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Andy1621/uniformer_image_detection/configs/detectors/cascade_rcnn_r50_sac_1x_coco.py DELETED
@@ -1,12 +0,0 @@
1
- _base_ = [
2
- '../_base_/models/cascade_rcnn_r50_fpn.py',
3
- '../_base_/datasets/coco_detection.py',
4
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
5
- ]
6
-
7
- model = dict(
8
- backbone=dict(
9
- type='DetectoRS_ResNet',
10
- conv_cfg=dict(type='ConvAWS'),
11
- sac=dict(type='SAC', use_deform=True),
12
- stage_with_sac=(False, True, True, True)))
 
spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py DELETED
@@ -1,13 +0,0 @@
1
- _base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py'
2
- model = dict(
3
- pretrained='open-mmlab://resnext101_32x4d',
4
- backbone=dict(
5
- type='ResNeXt',
6
- depth=101,
7
- groups=32,
8
- base_width=4,
9
- num_stages=4,
10
- out_indices=(0, 1, 2, 3),
11
- frozen_stages=1,
12
- norm_cfg=dict(type='BN', requires_grad=True),
13
- style='pytorch'))
 
spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_x101_32x4d_fpn_1x_coco.py DELETED
@@ -1,13 +0,0 @@
1
- _base_ = './retinanet_r50_fpn_1x_coco.py'
2
- model = dict(
3
- pretrained='open-mmlab://resnext101_32x4d',
4
- backbone=dict(
5
- type='ResNeXt',
6
- depth=101,
7
- groups=32,
8
- base_width=4,
9
- num_stages=4,
10
- out_indices=(0, 1, 2, 3),
11
- frozen_stages=1,
12
- norm_cfg=dict(type='BN', requires_grad=True),
13
- style='pytorch'))
 
spaces/AngoHF/ANGO-Leaderboard/assets/color.py DELETED
@@ -1,8 +0,0 @@
1
- color_dict = {
2
- '言语理解与表达': '#B22222',
3
- '数量关系': '#CC6600',
4
- '判断推理': '#CC9900',
5
- '资料分析': '#228B22',
6
- '常识判断': '#0077BE',
7
- '': '#9400D3'
8
- }
 
spaces/Anthony7906/MengHuiMXD_GPT/modules/models.py DELETED
@@ -1,625 +0,0 @@
1
- from __future__ import annotations
2
- from typing import TYPE_CHECKING, List
3
-
4
- import logging
5
- import json
6
- import commentjson as cjson
7
- import os
8
- import sys
9
- import requests
10
- import urllib3
11
- import platform
12
- import base64
13
- from io import BytesIO
14
- from PIL import Image
15
-
16
- from tqdm import tqdm
17
- import colorama
18
- from duckduckgo_search import ddg
19
- import asyncio
20
- import aiohttp
21
- from enum import Enum
22
- import uuid
23
-
24
- from .presets import *
25
- from .llama_func import *
26
- from .utils import *
27
- from . import shared
28
- from .config import retrieve_proxy
29
- from modules import config
30
- from .base_model import BaseLLMModel, ModelType
31
-
32
-
33
- class OpenAIClient(BaseLLMModel):
34
- def __init__(
35
- self,
36
- model_name,
37
- api_key,
38
- system_prompt=INITIAL_SYSTEM_PROMPT,
39
- temperature=1.0,
40
- top_p=1.0,
41
- ) -> None:
42
- super().__init__(
43
- model_name=model_name,
44
- temperature=temperature,
45
- top_p=top_p,
46
- system_prompt=system_prompt,
47
- )
48
- self.api_key = api_key
49
- self.need_api_key = True
50
- self._refresh_header()
51
-
52
- def get_answer_stream_iter(self):
53
- response = self._get_response(stream=True)
54
- if response is not None:
55
- iter = self._decode_chat_response(response)
56
- partial_text = ""
57
- for i in iter:
58
- partial_text += i
59
- yield partial_text
60
- else:
61
- yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
62
-
63
- def get_answer_at_once(self):
64
- response = self._get_response()
65
- response = json.loads(response.text)
66
- content = response["choices"][0]["message"]["content"]
67
- total_token_count = response["usage"]["total_tokens"]
68
- return content, total_token_count
69
-
70
- def count_token(self, user_input):
71
- input_token_count = count_token(construct_user(user_input))
72
- if self.system_prompt is not None and len(self.all_token_counts) == 0:
73
- system_prompt_token_count = count_token(
74
- construct_system(self.system_prompt)
75
- )
76
- return input_token_count + system_prompt_token_count
77
- return input_token_count
78
-
79
- def billing_info(self):
80
- try:
81
- curr_time = datetime.datetime.now()
82
- last_day_of_month = get_last_day_of_month(
83
- curr_time).strftime("%Y-%m-%d")
84
- first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d")
85
- usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}"
86
- try:
87
- usage_data = self._get_billing_data(usage_url)
88
- except Exception as e:
89
- logging.error(f"获取API使用情况失败:" + str(e))
90
- return i18n("**获取API使用情况失败**")
91
- rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100)
92
- return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}"
93
- except requests.exceptions.ConnectTimeout:
94
- status_text = (
95
- STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
96
- )
97
- return status_text
98
- except requests.exceptions.ReadTimeout:
99
- status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
100
- return status_text
101
- except Exception as e:
102
- import traceback
103
- traceback.print_exc()
104
- logging.error(i18n("获取API使用情况失败:") + str(e))
105
- return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG
106
-
107
- def set_token_upper_limit(self, new_upper_limit):
108
- pass
109
-
110
- @shared.state.switching_api_key # this decorator has no effect unless multi-account mode is enabled
111
- def _get_response(self, stream=False):
112
- openai_api_key = self.api_key
113
- system_prompt = self.system_prompt
114
- history = self.history
115
- logging.debug(colorama.Fore.YELLOW +
116
- f"{history}" + colorama.Fore.RESET)
117
- headers = {
118
- "Content-Type": "application/json",
119
- "Authorization": f"Bearer {openai_api_key}",
120
- }
121
-
122
- if system_prompt is not None:
123
- history = [construct_system(system_prompt), *history]
124
-
125
- payload = {
126
- "model": self.model_name,
127
- "messages": history,
128
- "temperature": self.temperature,
129
- "top_p": self.top_p,
130
- "n": self.n_choices,
131
- "stream": stream,
132
- "presence_penalty": self.presence_penalty,
133
- "frequency_penalty": self.frequency_penalty,
134
- }
135
-
136
- if self.max_generation_token is not None:
137
- payload["max_tokens"] = self.max_generation_token
138
- if self.stop_sequence is not None:
139
- payload["stop"] = self.stop_sequence
140
- if self.logit_bias is not None:
141
- payload["logit_bias"] = self.logit_bias
142
- if self.user_identifier is not None:
143
- payload["user"] = self.user_identifier
144
-
145
- if stream:
146
- timeout = TIMEOUT_STREAMING
147
- else:
148
- timeout = TIMEOUT_ALL
149
-
150
- # if a custom api-host is configured, send the request to it; otherwise use the default endpoint
151
- if shared.state.completion_url != COMPLETION_URL:
152
- logging.info(f"使用自定义API URL: {shared.state.completion_url}")
153
-
154
- with retrieve_proxy():
155
- try:
156
- response = requests.post(
157
- shared.state.completion_url,
158
- headers=headers,
159
- json=payload,
160
- stream=stream,
161
- timeout=timeout,
162
- )
163
- except:
164
- return None
165
- return response
166
-
167
- def _refresh_header(self):
168
- self.headers = {
169
- "Content-Type": "application/json",
170
- "Authorization": f"Bearer {self.api_key}",
171
- }
172
-
173
- def _get_billing_data(self, billing_url):
174
- with retrieve_proxy():
175
- response = requests.get(
176
- billing_url,
177
- headers=self.headers,
178
- timeout=TIMEOUT_ALL,
179
- )
180
-
181
- if response.status_code == 200:
182
- data = response.json()
183
- return data
184
- else:
185
- raise Exception(
186
- f"API request failed with status code {response.status_code}: {response.text}"
187
- )
188
-
189
- def _decode_chat_response(self, response):
190
- error_msg = ""
191
- for chunk in response.iter_lines():
192
- if chunk:
193
- chunk = chunk.decode()
194
- chunk_length = len(chunk)
195
- try:
196
- chunk = json.loads(chunk[6:])
197
- except json.JSONDecodeError:
198
- print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
199
- error_msg += chunk
200
- continue
201
- if chunk_length > 6 and "delta" in chunk["choices"][0]:
202
- if chunk["choices"][0]["finish_reason"] == "stop":
203
- break
204
- try:
205
- yield chunk["choices"][0]["delta"]["content"]
206
- except Exception as e:
207
- # logging.error(f"Error: {e}")
208
- continue
209
- if error_msg:
210
- raise Exception(error_msg)
211
-
212
- def set_key(self, new_access_key):
213
- ret = super().set_key(new_access_key)
214
- self._refresh_header()
215
- return ret
216
-
217
-
218
- class ChatGLM_Client(BaseLLMModel):
219
- def __init__(self, model_name) -> None:
220
- super().__init__(model_name=model_name)
221
- from transformers import AutoTokenizer, AutoModel
222
- import torch
223
- global CHATGLM_TOKENIZER, CHATGLM_MODEL
224
- if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None:
225
- system_name = platform.system()
226
- model_path = None
227
- if os.path.exists("models"):
228
- model_dirs = os.listdir("models")
229
- if model_name in model_dirs:
230
- model_path = f"models/{model_name}"
231
- if model_path is not None:
232
- model_source = model_path
233
- else:
234
- model_source = f"THUDM/{model_name}"
235
- CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained(
236
- model_source, trust_remote_code=True
237
- )
238
- quantified = False
239
- if "int4" in model_name:
240
- quantified = True
241
- model = AutoModel.from_pretrained(
242
- model_source, trust_remote_code=True
243
- )
244
- if torch.cuda.is_available():
245
- # run on CUDA
246
- logging.info("CUDA is available, using CUDA")
247
- model = model.half().cuda()
248
- # MPS acceleration still has some issues, so it is not used for now
249
- elif system_name == "Darwin" and model_path is not None and not quantified:
250
- logging.info("Running on macOS, using MPS")
251
- # running on macOS and model already downloaded
252
- model = model.half().to("mps")
253
- else:
254
- logging.info("GPU is not available, using CPU")
255
- model = model.float()
256
- model = model.eval()
257
- CHATGLM_MODEL = model
258
-
259
- def _get_glm_style_input(self):
260
- history = [x["content"] for x in self.history]
261
- query = history.pop()
262
- logging.debug(colorama.Fore.YELLOW +
263
- f"{history}" + colorama.Fore.RESET)
264
- assert (
265
- len(history) % 2 == 0
266
- ), f"History should be even length. current history is: {history}"
267
- history = [[history[i], history[i + 1]]
268
- for i in range(0, len(history), 2)]
269
- return history, query
270
-
271
- def get_answer_at_once(self):
272
- history, query = self._get_glm_style_input()
273
- response, _ = CHATGLM_MODEL.chat(
274
- CHATGLM_TOKENIZER, query, history=history)
275
- return response, len(response)
276
-
277
- def get_answer_stream_iter(self):
278
- history, query = self._get_glm_style_input()
279
- for response, history in CHATGLM_MODEL.stream_chat(
280
- CHATGLM_TOKENIZER,
281
- query,
282
- history,
283
- max_length=self.token_upper_limit,
284
- top_p=self.top_p,
285
- temperature=self.temperature,
286
- ):
287
- yield response
288
-
289
-
290
- class LLaMA_Client(BaseLLMModel):
291
- def __init__(
292
- self,
293
- model_name,
294
- lora_path=None,
295
- ) -> None:
296
- super().__init__(model_name=model_name)
297
- from lmflow.datasets.dataset import Dataset
298
- from lmflow.pipeline.auto_pipeline import AutoPipeline
299
- from lmflow.models.auto_model import AutoModel
300
- from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments
301
-
302
- self.max_generation_token = 1000
303
- self.end_string = "\n\n"
304
- # We don't need input data
305
- data_args = DatasetArguments(dataset_path=None)
306
- self.dataset = Dataset(data_args)
307
- self.system_prompt = ""
308
-
309
- global LLAMA_MODEL, LLAMA_INFERENCER
310
- if LLAMA_MODEL is None or LLAMA_INFERENCER is None:
311
- model_path = None
312
- if os.path.exists("models"):
313
- model_dirs = os.listdir("models")
314
- if model_name in model_dirs:
315
- model_path = f"models/{model_name}"
316
- if model_path is not None:
317
- model_source = model_path
318
- else:
319
- model_source = f"decapoda-research/{model_name}"
320
- # raise Exception(f"models目录下没有这个模型: {model_name}")
321
- if lora_path is not None:
322
- lora_path = f"lora/{lora_path}"
323
- model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None,
324
- use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True)
325
- pipeline_args = InferencerArguments(
326
- local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16')
327
-
328
- with open(pipeline_args.deepspeed, "r") as f:
329
- ds_config = json.load(f)
330
- LLAMA_MODEL = AutoModel.get_model(
331
- model_args,
332
- tune_strategy="none",
333
- ds_config=ds_config,
334
- )
335
- LLAMA_INFERENCER = AutoPipeline.get_pipeline(
336
- pipeline_name="inferencer",
337
- model_args=model_args,
338
- data_args=data_args,
339
- pipeline_args=pipeline_args,
340
- )
341
-
342
- def _get_llama_style_input(self):
343
- history = []
344
- instruction = ""
345
- if self.system_prompt:
346
- instruction = (f"Instruction: {self.system_prompt}\n")
347
- for x in self.history:
348
- if x["role"] == "user":
349
- history.append(f"{instruction}Input: {x['content']}")
350
- else:
351
- history.append(f"Output: {x['content']}")
352
- context = "\n\n".join(history)
353
- context += "\n\nOutput: "
354
- return context
355
-
356
- def get_answer_at_once(self):
357
- context = self._get_llama_style_input()
358
-
359
- input_dataset = self.dataset.from_dict(
360
- {"type": "text_only", "instances": [{"text": context}]}
361
- )
362
-
363
- output_dataset = LLAMA_INFERENCER.inference(
364
- model=LLAMA_MODEL,
365
- dataset=input_dataset,
366
- max_new_tokens=self.max_generation_token,
367
- temperature=self.temperature,
368
- )
369
-
370
- response = output_dataset.to_dict()["instances"][0]["text"]
371
- return response, len(response)
372
-
373
- def get_answer_stream_iter(self):
374
- context = self._get_llama_style_input()
375
- partial_text = ""
376
- step = 1
377
- for _ in range(0, self.max_generation_token, step):
378
- input_dataset = self.dataset.from_dict(
379
- {"type": "text_only", "instances": [
380
- {"text": context + partial_text}]}
381
- )
382
- output_dataset = LLAMA_INFERENCER.inference(
383
- model=LLAMA_MODEL,
384
- dataset=input_dataset,
385
- max_new_tokens=step,
386
- temperature=self.temperature,
387
- )
388
- response = output_dataset.to_dict()["instances"][0]["text"]
389
- if response == "" or response == self.end_string:
390
- break
391
- partial_text += response
392
- yield partial_text
393
-
394
-
395
- class XMChat(BaseLLMModel):
396
- def __init__(self, api_key):
397
- super().__init__(model_name="xmchat")
398
- self.api_key = api_key
399
- self.session_id = None
400
- self.reset()
401
- self.image_bytes = None
402
- self.image_path = None
403
- self.xm_history = []
404
- self.url = "https://xmbot.net/web"
405
- self.last_conv_id = None
406
-
407
- def reset(self):
408
- self.session_id = str(uuid.uuid4())
409
- self.last_conv_id = None
410
- return [], "已重置"
411
-
412
- def image_to_base64(self, image_path):
413
- # open and load the image
414
- img = Image.open(image_path)
415
-
416
- # get the image width and height
417
- width, height = img.size
418
-
419
- # compute the scale ratio so the longest side stays within max_dimension (2048 px)
420
- max_dimension = 2048
421
- scale_ratio = min(max_dimension / width, max_dimension / height)
422
-
423
- if scale_ratio < 1:
424
- # resize the image according to the scale ratio
425
- new_width = int(width * scale_ratio)
426
- new_height = int(height * scale_ratio)
427
- img = img.resize((new_width, new_height), Image.ANTIALIAS)
428
-
429
- # convert the image to JPEG-format binary data
430
- buffer = BytesIO()
431
- if img.mode == "RGBA":
432
- img = img.convert("RGB")
433
- img.save(buffer, format='JPEG')
434
- binary_image = buffer.getvalue()
435
-
436
- # Base64-encode the binary data
437
- base64_image = base64.b64encode(binary_image).decode('utf-8')
438
-
439
- return base64_image
440
-
441
- def try_read_image(self, filepath):
442
- def is_image_file(filepath):
443
- # check whether the file is an image
444
- valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"]
445
- file_extension = os.path.splitext(filepath)[1].lower()
446
- return file_extension in valid_image_extensions
447
-
448
- if is_image_file(filepath):
449
- logging.info(f"读取图片文件: {filepath}")
450
- self.image_bytes = self.image_to_base64(filepath)
451
- self.image_path = filepath
452
- else:
453
- self.image_bytes = None
454
- self.image_path = None
455
-
456
- def like(self):
457
- if self.last_conv_id is None:
458
- return "点赞失败,你还没发送过消息"
459
- data = {
460
- "uuid": self.last_conv_id,
461
- "appraise": "good"
462
- }
463
- response = requests.post(self.url, json=data)
464
- return "👍点赞成功,,感谢反馈~"
465
-
466
- def dislike(self):
467
- if self.last_conv_id is None:
468
- return "点踩失败,你还没发送过消息"
469
- data = {
470
- "uuid": self.last_conv_id,
471
- "appraise": "bad"
472
- }
473
- response = requests.post(self.url, json=data)
474
- return "👎点踩成功,感谢反馈~"
475
-
476
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
477
- fake_inputs = real_inputs
478
- display_append = ""
479
- limited_context = False
480
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
481
-
482
- def handle_file_upload(self, files, chatbot):
483
- """if the model accepts multi modal input, implement this function"""
484
- if files:
485
- for file in files:
486
- if file.name:
487
- logging.info(f"尝试读取图像: {file.name}")
488
- self.try_read_image(file.name)
489
- if self.image_path is not None:
490
- chatbot = chatbot + [((self.image_path,), None)]
491
- if self.image_bytes is not None:
492
- logging.info("使用图片作为输入")
493
- # XMChat can actually handle only one image per conversation round
494
- self.reset()
495
- conv_id = str(uuid.uuid4())
496
- data = {
497
- "user_id": self.api_key,
498
- "session_id": self.session_id,
499
- "uuid": conv_id,
500
- "data_type": "imgbase64",
501
- "data": self.image_bytes
502
- }
503
- response = requests.post(self.url, json=data)
504
- response = json.loads(response.text)
505
- logging.info(f"图片回复: {response['data']}")
506
- return None, chatbot, None
507
-
508
- def get_answer_at_once(self):
509
- question = self.history[-1]["content"]
510
- conv_id = str(uuid.uuid4())
511
- self.last_conv_id = conv_id
512
- data = {
513
- "user_id": self.api_key,
514
- "session_id": self.session_id,
515
- "uuid": conv_id,
516
- "data_type": "text",
517
- "data": question
518
- }
519
- response = requests.post(self.url, json=data)
520
- try:
521
- response = json.loads(response.text)
522
- return response["data"], len(response["data"])
523
- except Exception as e:
524
- return response.text, len(response.text)
525
-
526
-
527
-
528
-
529
- def get_model(
530
- model_name,
531
- lora_model_path=None,
532
- access_key=None,
533
- temperature=None,
534
- top_p=None,
535
- system_prompt=None,
536
- ) -> BaseLLMModel:
537
- msg = i18n("模型设置为了:") + f" {model_name}"
538
- model_type = ModelType.get_type(model_name)
539
- lora_selector_visibility = False
540
- lora_choices = []
541
- dont_change_lora_selector = False
542
- if model_type != ModelType.OpenAI:
543
- config.local_embedding = True
544
- # del current_model.model
545
- model = None
546
- try:
547
- if model_type == ModelType.OpenAI:
548
- logging.info(f"正在加载OpenAI模型: {model_name}")
549
- model = OpenAIClient(
550
- model_name=model_name,
551
- api_key=access_key,
552
- system_prompt=system_prompt,
553
- temperature=temperature,
554
- top_p=top_p,
555
- )
556
- elif model_type == ModelType.ChatGLM:
557
- logging.info(f"正在加载ChatGLM模型: {model_name}")
558
- model = ChatGLM_Client(model_name)
559
- elif model_type == ModelType.LLaMA and lora_model_path == "":
560
- msg = f"现在请为 {model_name} 选择LoRA模型"
561
- logging.info(msg)
562
- lora_selector_visibility = True
563
- if os.path.isdir("lora"):
564
- lora_choices = get_file_names(
565
- "lora", plain=True, filetypes=[""])
566
- lora_choices = ["No LoRA"] + lora_choices
567
- elif model_type == ModelType.LLaMA and lora_model_path != "":
568
- logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}")
569
- dont_change_lora_selector = True
570
- if lora_model_path == "No LoRA":
571
- lora_model_path = None
572
- msg += " + No LoRA"
573
- else:
574
- msg += f" + {lora_model_path}"
575
- model = LLaMA_Client(model_name, lora_model_path)
576
- elif model_type == ModelType.XMChat:
577
- if os.environ.get("XMCHAT_API_KEY") != "":
578
- access_key = os.environ.get("XMCHAT_API_KEY")
579
- model = XMChat(api_key=access_key)
580
- elif model_type == ModelType.Unknown:
581
- raise ValueError(f"未知模型: {model_name}")
582
- logging.info(msg)
583
- except Exception as e:
584
- logging.error(e)
585
- msg = f"{STANDARD_ERROR_MSG}: {e}"
586
- if dont_change_lora_selector:
587
- return model, msg
588
- else:
589
- return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility)
590
-
591
-
592
- if __name__ == "__main__":
593
- with open("config.json", "r") as f:
594
- openai_api_key = cjson.load(f)["openai_api_key"]
595
- # set logging level to debug
596
- logging.basicConfig(level=logging.DEBUG)
597
- # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key)
598
- client = get_model(model_name="chatglm-6b-int4")
599
- chatbot = []
600
- stream = False
601
- # test the billing feature
602
- logging.info(colorama.Back.GREEN + "Testing billing feature" + colorama.Back.RESET)
603
- logging.info(client.billing_info())
604
- # test Q&A
605
- logging.info(colorama.Back.GREEN + "Testing Q&A" + colorama.Back.RESET)
606
- question = "Is Paris the capital of China?"
607
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
608
- logging.info(i)
609
- logging.info(f"History after the Q&A test: {client.history}")
610
- # test memory
611
- logging.info(colorama.Back.GREEN + "Testing memory" + colorama.Back.RESET)
612
- question = "What question did I just ask you?"
613
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
614
- logging.info(i)
615
- logging.info(f"History after the memory test: {client.history}")
616
- # test the retry feature
617
- logging.info(colorama.Back.GREEN + "Testing retry" + colorama.Back.RESET)
618
- for i in client.retry(chatbot=chatbot, stream=stream):
619
- logging.info(i)
620
- logging.info(f"History after retry: {client.history}")
621
- # # test the summarization feature
622
- # print(colorama.Back.GREEN + "Testing summarization" + colorama.Back.RESET)
623
- # chatbot, msg = client.reduce_token_size(chatbot=chatbot)
624
- # print(chatbot, msg)
625
- # print(f"History after summarization: {client.history}")
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_ratio.py DELETED
@@ -1,160 +0,0 @@
- import sys
- from fractions import Fraction
- from math import ceil
- from typing import cast, List, Optional, Sequence
-
- if sys.version_info >= (3, 8):
-     from typing import Protocol
- else:
-     from pip._vendor.typing_extensions import Protocol  # pragma: no cover
-
-
- class Edge(Protocol):
-     """Any object that defines an edge (such as Layout)."""
-
-     size: Optional[int] = None
-     ratio: int = 1
-     minimum_size: int = 1
-
-
- def ratio_resolve(total: int, edges: Sequence[Edge]) -> List[int]:
-     """Divide total space to satisfy size, ratio, and minimum_size constraints.
-
-     The returned list of integers should add up to total in most cases, unless it is
-     impossible to satisfy all the constraints. For instance, if there are two edges
-     with a minimum size of 20 each and `total` is 30 then the returned list will be
-     greater than total. In practice, this would mean that a Layout object would
-     clip the rows that would overflow the screen height.
-
-     Args:
-         total (int): Total number of characters.
-         edges (List[Edge]): Edges within total space.
-
-     Returns:
-         List[int]: Number of characters for each edge.
-     """
-     # Size of edge or None for yet to be determined
-     sizes = [(edge.size or None) for edge in edges]
-
-     _Fraction = Fraction
-
-     # While any edges haven't been calculated
-     while None in sizes:
-         # Get flexible edges and index to map these back on to sizes list
-         flexible_edges = [
-             (index, edge)
-             for index, (size, edge) in enumerate(zip(sizes, edges))
-             if size is None
-         ]
-         # Remaining space in total
-         remaining = total - sum(size or 0 for size in sizes)
-         if remaining <= 0:
-             # No room for flexible edges
-             return [
-                 ((edge.minimum_size or 1) if size is None else size)
-                 for size, edge in zip(sizes, edges)
-             ]
-         # Calculate number of characters in a ratio portion
-         portion = _Fraction(
-             remaining, sum((edge.ratio or 1) for _, edge in flexible_edges)
-         )
-
-         # If any edges will be less than their minimum, replace size with the minimum
-         for index, edge in flexible_edges:
-             if portion * edge.ratio <= edge.minimum_size:
-                 sizes[index] = edge.minimum_size
-                 # New fixed size will invalidate calculations, so we need to repeat the process
-                 break
-         else:
-             # Distribute flexible space and compensate for rounding error
-             # Since edge sizes can only be integers we need to add the remainder
-             # to the following line
-             remainder = _Fraction(0)
-             for index, edge in flexible_edges:
-                 size, remainder = divmod(portion * edge.ratio + remainder, 1)
-                 sizes[index] = size
-             break
-     # Sizes now contains integers only
-     return cast(List[int], sizes)
-
-
- def ratio_reduce(
-     total: int, ratios: List[int], maximums: List[int], values: List[int]
- ) -> List[int]:
-     """Divide an integer total into parts based on ratios.
-
-     Args:
-         total (int): The total to divide.
-         ratios (List[int]): A list of integer ratios.
-         maximums (List[int]): List of maximum values for each slot.
-         values (List[int]): List of values.
-
-     Returns:
-         List[int]: A list of integers guaranteed to sum to total.
-     """
-     ratios = [ratio if _max else 0 for ratio, _max in zip(ratios, maximums)]
-     total_ratio = sum(ratios)
-     if not total_ratio:
-         return values[:]
-     total_remaining = total
-     result: List[int] = []
-     append = result.append
-     for ratio, maximum, value in zip(ratios, maximums, values):
-         if ratio and total_ratio > 0:
-             distributed = min(maximum, round(ratio * total_remaining / total_ratio))
-             append(value - distributed)
-             total_remaining -= distributed
-             total_ratio -= ratio
-         else:
-             append(value)
-     return result
-
-
- def ratio_distribute(
-     total: int, ratios: List[int], minimums: Optional[List[int]] = None
- ) -> List[int]:
-     """Distribute an integer total into parts based on ratios.
-
-     Args:
-         total (int): The total to divide.
-         ratios (List[int]): A list of integer ratios.
-         minimums (List[int]): List of minimum values for each slot.
-
-     Returns:
-         List[int]: A list of integers guaranteed to sum to total.
-     """
-     if minimums:
-         ratios = [ratio if _min else 0 for ratio, _min in zip(ratios, minimums)]
-     total_ratio = sum(ratios)
-     assert total_ratio > 0, "Sum of ratios must be > 0"
-
-     total_remaining = total
-     distributed_total: List[int] = []
-     append = distributed_total.append
-     if minimums is None:
-         _minimums = [0] * len(ratios)
-     else:
-         _minimums = minimums
-     for ratio, minimum in zip(ratios, _minimums):
-         if total_ratio > 0:
-             distributed = max(minimum, ceil(ratio * total_remaining / total_ratio))
-         else:
-             distributed = total_remaining
-         append(distributed)
-         total_ratio -= ratio
-         total_remaining -= distributed
-     return distributed_total
-
-
- if __name__ == "__main__":
-     from dataclasses import dataclass
-
-     @dataclass
-     class E:
-
-         size: Optional[int] = None
-         ratio: int = 1
-         minimum_size: int = 1
-
-     resolved = ratio_resolve(110, [E(None, 1, 1), E(None, 1, 1), E(None, 1, 1)])
-     print(sum(resolved))
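
A quick way to see the allocator in action outside the vendored tree — a minimal sketch, assuming the module above is importable (the `_ratio` import path and the `E` dataclass are illustrative, mirroring the `Edge` protocol):

# Sketch: one fixed edge plus two flexible edges in a 2:1 ratio.
from dataclasses import dataclass
from typing import Optional

from _ratio import ratio_resolve  # hypothetical import path

@dataclass
class E:
    size: Optional[int] = None
    ratio: int = 1
    minimum_size: int = 1

# The fixed edge keeps its 20 cells; the remaining 80 split 2:1 with
# Fraction-based remainder carrying, so the result still sums to 100.
print(ratio_resolve(100, [E(size=20), E(ratio=2), E(ratio=1)]))  # [20, 53, 27]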
spaces/AttendAndExcite/Attend-and-Excite/README.md DELETED
@@ -1,15 +0,0 @@
- ---
- title: Attend And Excite
- emoji: 💻
- colorFrom: gray
- colorTo: pink
- sdk: gradio
- sdk_version: 3.47.1
- python_version: 3.10.13
- app_file: app.py
- pinned: false
- license: mit
- suggested_hardware: a10g-small
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/BAAI/AltDiffusion-m9/header.html DELETED
@@ -1,43 +0,0 @@
- <div style="text-align: center; max-width: 650px; margin: 0 auto;">
-   <div
-     style="
-       display: inline-flex;
-       gap: 0.8rem;
-       font-size: 1.75rem;
-       margin-bottom: 10px;
-       width: 600px;
-       height: 200px;
-       margin: 0 auto;
-       /* border: 1px solid red; */
-       justify-content: center;
-     "
-   >
-     <a href="https://github.com/FlagAI-Open/FlagAI"><img src="https://raw.githubusercontent.com/920232796/test/master/WechatIMG6906.png" alt="FlagAI" width="80%" height="80%" style="margin: 0 auto;"></a>
-   </div>
-   <div
-     style="
-       display: inline-flex;
-       align-items: center;
-       gap: 0.8rem;
-       font-size: 1.75rem;
-       margin-bottom: 10px;
-       justify-content: center;
-     ">
-     <a href="https://github.com/FlagAI-Open/FlagAI"><h1 style="font-weight: 900; margin-bottom: 7px;">
-       FlagStudio
-     </h1></a>
-   </div>
-   <p style="margin-bottom: 10px; font-size: 94%">
-     The FlagStudio project is dedicated to contributing high-quality AI-generated artwork. This nine-language text-to-image model is built on <a href="https://huggingface.co/CompVis/stable-diffusion" style="text-decoration: underline;">stable diffusion</a>, supported by the FlagAI team at BAAI; the related code and model weights are open-sourced in <a href="https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltDiffusion" style="text-decoration: underline;">AltDiffusion</a>.
-   </p>
-   <p style="margin-bottom: 10px; font-size: 94%">
-     FlagStudio aims to provide high-quality AI-generated artwork. Our current multilingual model is based on the original <a href="https://huggingface.co/CompVis/stable-diffusion" style="text-decoration: underline;">stable diffusion</a> model and is capable of generating images from both Chinese and English text. FlagStudio is developed and supported by the FlagAI team. Relevant code and model weights are released in <a href="https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltDiffusion" style="text-decoration: underline;">AltDiffusion-m9</a>.
-   </p>
-   <p style="margin-bottom: 10px; font-size: 94%">
-     AltDiffusion has been added to 🧨Diffusers; see the documentation page: <a href="https://huggingface.co/docs/diffusers/main/en/api/pipelines/alt_diffusion">🧨 Pipeline doc</a>
-   </p>
-   <p style="margin-bottom: 10px; font-size: 94%; text-align: left;">
-     We have set up a script on Colab so you can try our models there. Enjoy it!
-     <a href="https://colab.research.google.com/drive/1htPovT5YNutl2i31mIYrOzlIgGLm06IX#scrollTo=0KXFRkjG1RVk"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
-   </p>
- </div>
spaces/Badaleeloveashley/badaleeloveashley/app.py DELETED
@@ -1,3 +0,0 @@
- import gradio as gr
-
- gr.Interface.load('huggingface/gpt2').launch()
spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py DELETED
@@ -1,122 +0,0 @@
- import torch
- import torch.nn.functional as F
- from torch import nn
-
- from . import layers_123821KB as layers
-
-
- class BaseASPPNet(nn.Module):
-     def __init__(self, nin, ch, dilations=(4, 8, 16)):
-         super(BaseASPPNet, self).__init__()
-         self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
-         self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
-         self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
-         self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
-         self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
-         self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
-         self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
-         self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
-         self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
-     def __call__(self, x):
-         h, e1 = self.enc1(x)
-         h, e2 = self.enc2(h)
-         h, e3 = self.enc3(h)
-         h, e4 = self.enc4(h)
-
-         h = self.aspp(h)
-
-         h = self.dec4(h, e4)
-         h = self.dec3(h, e3)
-         h = self.dec2(h, e2)
-         h = self.dec1(h, e1)
-
-         return h
-
-
- class CascadedASPPNet(nn.Module):
-     def __init__(self, n_fft):
-         super(CascadedASPPNet, self).__init__()
-         self.stg1_low_band_net = BaseASPPNet(2, 32)
-         self.stg1_high_band_net = BaseASPPNet(2, 32)
-
-         self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
-         self.stg2_full_band_net = BaseASPPNet(16, 32)
-
-         self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
-         self.stg3_full_band_net = BaseASPPNet(32, 64)
-
-         self.out = nn.Conv2d(64, 2, 1, bias=False)
-         self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
-         self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
-         self.max_bin = n_fft // 2
-         self.output_bin = n_fft // 2 + 1
-
-         self.offset = 128
-
-     def forward(self, x, aggressiveness=None):
-         mix = x.detach()
-         x = x.clone()
-
-         x = x[:, :, : self.max_bin]
-
-         bandw = x.size()[2] // 2
-         aux1 = torch.cat(
-             [
-                 self.stg1_low_band_net(x[:, :, :bandw]),
-                 self.stg1_high_band_net(x[:, :, bandw:]),
-             ],
-             dim=2,
-         )
-
-         h = torch.cat([x, aux1], dim=1)
-         aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
-         h = torch.cat([x, aux1, aux2], dim=1)
-         h = self.stg3_full_band_net(self.stg3_bridge(h))
-
-         mask = torch.sigmoid(self.out(h))
-         mask = F.pad(
-             input=mask,
-             pad=(0, 0, 0, self.output_bin - mask.size()[2]),
-             mode="replicate",
-         )
-
-         if self.training:
-             aux1 = torch.sigmoid(self.aux1_out(aux1))
-             aux1 = F.pad(
-                 input=aux1,
-                 pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
-                 mode="replicate",
-             )
-             aux2 = torch.sigmoid(self.aux2_out(aux2))
-             aux2 = F.pad(
-                 input=aux2,
-                 pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
-                 mode="replicate",
-             )
-             return mask * mix, aux1 * mix, aux2 * mix
-         else:
-             if aggressiveness:
-                 mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
-                     mask[:, :, : aggressiveness["split_bin"]],
-                     1 + aggressiveness["value"] / 3,
-                 )
-                 mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
-                     mask[:, :, aggressiveness["split_bin"] :],
-                     1 + aggressiveness["value"],
-                 )
-
-             return mask * mix
-
-     def predict(self, x_mag, aggressiveness=None):
-         h = self.forward(x_mag, aggressiveness)
-
-         if self.offset > 0:
-             h = h[:, :, :, self.offset : -self.offset]
-             assert h.size()[3] > 0
-
-         return h
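
A minimal smoke test for the cascaded net above — a sketch, assuming the module and its `layers_123821KB` dependency are importable and n_fft=2048; the input shape is illustrative (batch, 2 channels, n_fft//2+1 frequency bins, frames):

# Sketch: push a dummy magnitude spectrogram through predict().
import torch

model = CascadedASPPNet(n_fft=2048).eval()  # eval mode skips the aux training outputs
x_mag = torch.rand(1, 2, 1025, 512)         # 1025 = n_fft // 2 + 1 bins
with torch.no_grad():
    out = model.predict(x_mag)
# The 128-frame offset is cropped from both ends of the time axis.
print(out.shape)  # torch.Size([1, 2, 1025, 256])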
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escsm.py DELETED
@@ -1,261 +0,0 @@
- ######################## BEGIN LICENSE BLOCK ########################
- # The Original Code is mozilla.org code.
- #
- # The Initial Developer of the Original Code is
- # Netscape Communications Corporation.
- # Portions created by the Initial Developer are Copyright (C) 1998
- # the Initial Developer. All Rights Reserved.
- #
- # Contributor(s):
- #   Mark Pilgrim - port to Python
- #
- # This library is free software; you can redistribute it and/or
- # modify it under the terms of the GNU Lesser General Public
- # License as published by the Free Software Foundation; either
- # version 2.1 of the License, or (at your option) any later version.
- #
- # This library is distributed in the hope that it will be useful,
- # but WITHOUT ANY WARRANTY; without even the implied warranty of
- # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- # Lesser General Public License for more details.
- #
- # You should have received a copy of the GNU Lesser General Public
- # License along with this library; if not, write to the Free Software
- # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
- # 02110-1301  USA
- ######################### END LICENSE BLOCK #########################
-
- from .codingstatemachinedict import CodingStateMachineDict
- from .enums import MachineState
-
- # fmt: off
- HZ_CLS = (
-     1, 0, 0, 0, 0, 0, 0, 0,  # 00 - 07
-     0, 0, 0, 0, 0, 0, 0, 0,  # 08 - 0f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 10 - 17
-     0, 0, 0, 1, 0, 0, 0, 0,  # 18 - 1f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 20 - 27
-     0, 0, 0, 0, 0, 0, 0, 0,  # 28 - 2f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 30 - 37
-     0, 0, 0, 0, 0, 0, 0, 0,  # 38 - 3f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 40 - 47
-     0, 0, 0, 0, 0, 0, 0, 0,  # 48 - 4f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 50 - 57
-     0, 0, 0, 0, 0, 0, 0, 0,  # 58 - 5f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 60 - 67
-     0, 0, 0, 0, 0, 0, 0, 0,  # 68 - 6f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 70 - 77
-     0, 0, 0, 4, 0, 5, 2, 0,  # 78 - 7f
-     1, 1, 1, 1, 1, 1, 1, 1,  # 80 - 87
-     1, 1, 1, 1, 1, 1, 1, 1,  # 88 - 8f
-     1, 1, 1, 1, 1, 1, 1, 1,  # 90 - 97
-     1, 1, 1, 1, 1, 1, 1, 1,  # 98 - 9f
-     1, 1, 1, 1, 1, 1, 1, 1,  # a0 - a7
-     1, 1, 1, 1, 1, 1, 1, 1,  # a8 - af
-     1, 1, 1, 1, 1, 1, 1, 1,  # b0 - b7
-     1, 1, 1, 1, 1, 1, 1, 1,  # b8 - bf
-     1, 1, 1, 1, 1, 1, 1, 1,  # c0 - c7
-     1, 1, 1, 1, 1, 1, 1, 1,  # c8 - cf
-     1, 1, 1, 1, 1, 1, 1, 1,  # d0 - d7
-     1, 1, 1, 1, 1, 1, 1, 1,  # d8 - df
-     1, 1, 1, 1, 1, 1, 1, 1,  # e0 - e7
-     1, 1, 1, 1, 1, 1, 1, 1,  # e8 - ef
-     1, 1, 1, 1, 1, 1, 1, 1,  # f0 - f7
-     1, 1, 1, 1, 1, 1, 1, 1,  # f8 - ff
- )
-
- HZ_ST = (
-     MachineState.START, MachineState.ERROR,      3, MachineState.START, MachineState.START, MachineState.START, MachineState.ERROR, MachineState.ERROR,  # 00-07
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME,  # 08-0f
-     MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.START, MachineState.START,      4, MachineState.ERROR,  # 10-17
-          5, MachineState.ERROR,      6, MachineState.ERROR,      5,      5,      4, MachineState.ERROR,  # 18-1f
-          4, MachineState.ERROR,      4,      4,      4, MachineState.ERROR,      4, MachineState.ERROR,  # 20-27
-          4, MachineState.ITS_ME, MachineState.START, MachineState.START, MachineState.START, MachineState.START, MachineState.START, MachineState.START,  # 28-2f
- )
- # fmt: on
-
- HZ_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0)
-
- HZ_SM_MODEL: CodingStateMachineDict = {
-     "class_table": HZ_CLS,
-     "class_factor": 6,
-     "state_table": HZ_ST,
-     "char_len_table": HZ_CHAR_LEN_TABLE,
-     "name": "HZ-GB-2312",
-     "language": "Chinese",
- }
-
- # fmt: off
- ISO2022CN_CLS = (
-     2, 0, 0, 0, 0, 0, 0, 0,  # 00 - 07
-     0, 0, 0, 0, 0, 0, 0, 0,  # 08 - 0f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 10 - 17
-     0, 0, 0, 1, 0, 0, 0, 0,  # 18 - 1f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 20 - 27
-     0, 3, 0, 0, 0, 0, 0, 0,  # 28 - 2f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 30 - 37
-     0, 0, 0, 0, 0, 0, 0, 0,  # 38 - 3f
-     0, 0, 0, 4, 0, 0, 0, 0,  # 40 - 47
-     0, 0, 0, 0, 0, 0, 0, 0,  # 48 - 4f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 50 - 57
-     0, 0, 0, 0, 0, 0, 0, 0,  # 58 - 5f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 60 - 67
-     0, 0, 0, 0, 0, 0, 0, 0,  # 68 - 6f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 70 - 77
-     0, 0, 0, 0, 0, 0, 0, 0,  # 78 - 7f
-     2, 2, 2, 2, 2, 2, 2, 2,  # 80 - 87
-     2, 2, 2, 2, 2, 2, 2, 2,  # 88 - 8f
-     2, 2, 2, 2, 2, 2, 2, 2,  # 90 - 97
-     2, 2, 2, 2, 2, 2, 2, 2,  # 98 - 9f
-     2, 2, 2, 2, 2, 2, 2, 2,  # a0 - a7
-     2, 2, 2, 2, 2, 2, 2, 2,  # a8 - af
-     2, 2, 2, 2, 2, 2, 2, 2,  # b0 - b7
-     2, 2, 2, 2, 2, 2, 2, 2,  # b8 - bf
-     2, 2, 2, 2, 2, 2, 2, 2,  # c0 - c7
-     2, 2, 2, 2, 2, 2, 2, 2,  # c8 - cf
-     2, 2, 2, 2, 2, 2, 2, 2,  # d0 - d7
-     2, 2, 2, 2, 2, 2, 2, 2,  # d8 - df
-     2, 2, 2, 2, 2, 2, 2, 2,  # e0 - e7
-     2, 2, 2, 2, 2, 2, 2, 2,  # e8 - ef
-     2, 2, 2, 2, 2, 2, 2, 2,  # f0 - f7
-     2, 2, 2, 2, 2, 2, 2, 2,  # f8 - ff
- )
-
- ISO2022CN_ST = (
-     MachineState.START,      3, MachineState.ERROR, MachineState.START, MachineState.START, MachineState.START, MachineState.START, MachineState.START,  # 00-07
-     MachineState.START, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,  # 08-0f
-     MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME,  # 10-17
-     MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,      4, MachineState.ERROR,  # 18-1f
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,  # 20-27
-          5,      6, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,  # 28-2f
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,  # 30-37
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.START,  # 38-3f
- )
- # fmt: on
-
- ISO2022CN_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0)
-
- ISO2022CN_SM_MODEL: CodingStateMachineDict = {
-     "class_table": ISO2022CN_CLS,
-     "class_factor": 9,
-     "state_table": ISO2022CN_ST,
-     "char_len_table": ISO2022CN_CHAR_LEN_TABLE,
-     "name": "ISO-2022-CN",
-     "language": "Chinese",
- }
-
- # fmt: off
- ISO2022JP_CLS = (
-     2, 0, 0, 0, 0, 0, 0, 0,  # 00 - 07
-     0, 0, 0, 0, 0, 0, 2, 2,  # 08 - 0f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 10 - 17
-     0, 0, 0, 1, 0, 0, 0, 0,  # 18 - 1f
-     0, 0, 0, 0, 7, 0, 0, 0,  # 20 - 27
-     3, 0, 0, 0, 0, 0, 0, 0,  # 28 - 2f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 30 - 37
-     0, 0, 0, 0, 0, 0, 0, 0,  # 38 - 3f
-     6, 0, 4, 0, 8, 0, 0, 0,  # 40 - 47
-     0, 9, 5, 0, 0, 0, 0, 0,  # 48 - 4f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 50 - 57
-     0, 0, 0, 0, 0, 0, 0, 0,  # 58 - 5f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 60 - 67
-     0, 0, 0, 0, 0, 0, 0, 0,  # 68 - 6f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 70 - 77
-     0, 0, 0, 0, 0, 0, 0, 0,  # 78 - 7f
-     2, 2, 2, 2, 2, 2, 2, 2,  # 80 - 87
-     2, 2, 2, 2, 2, 2, 2, 2,  # 88 - 8f
-     2, 2, 2, 2, 2, 2, 2, 2,  # 90 - 97
-     2, 2, 2, 2, 2, 2, 2, 2,  # 98 - 9f
-     2, 2, 2, 2, 2, 2, 2, 2,  # a0 - a7
-     2, 2, 2, 2, 2, 2, 2, 2,  # a8 - af
-     2, 2, 2, 2, 2, 2, 2, 2,  # b0 - b7
-     2, 2, 2, 2, 2, 2, 2, 2,  # b8 - bf
-     2, 2, 2, 2, 2, 2, 2, 2,  # c0 - c7
-     2, 2, 2, 2, 2, 2, 2, 2,  # c8 - cf
-     2, 2, 2, 2, 2, 2, 2, 2,  # d0 - d7
-     2, 2, 2, 2, 2, 2, 2, 2,  # d8 - df
-     2, 2, 2, 2, 2, 2, 2, 2,  # e0 - e7
-     2, 2, 2, 2, 2, 2, 2, 2,  # e8 - ef
-     2, 2, 2, 2, 2, 2, 2, 2,  # f0 - f7
-     2, 2, 2, 2, 2, 2, 2, 2,  # f8 - ff
- )
-
- ISO2022JP_ST = (
-     MachineState.START,      3, MachineState.ERROR, MachineState.START, MachineState.START, MachineState.START, MachineState.START, MachineState.START,  # 00-07
-     MachineState.START, MachineState.START, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,  # 08-0f
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME,  # 10-17
-     MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR,  # 18-1f
-     MachineState.ERROR,      5, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,      4, MachineState.ERROR, MachineState.ERROR,  # 20-27
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,      6, MachineState.ITS_ME, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR,  # 28-2f
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME,  # 30-37
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,  # 38-3f
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.START, MachineState.START,  # 40-47
- )
- # fmt: on
-
- ISO2022JP_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
-
- ISO2022JP_SM_MODEL: CodingStateMachineDict = {
-     "class_table": ISO2022JP_CLS,
-     "class_factor": 10,
-     "state_table": ISO2022JP_ST,
-     "char_len_table": ISO2022JP_CHAR_LEN_TABLE,
-     "name": "ISO-2022-JP",
-     "language": "Japanese",
- }
-
- # fmt: off
- ISO2022KR_CLS = (
-     2, 0, 0, 0, 0, 0, 0, 0,  # 00 - 07
-     0, 0, 0, 0, 0, 0, 0, 0,  # 08 - 0f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 10 - 17
-     0, 0, 0, 1, 0, 0, 0, 0,  # 18 - 1f
-     0, 0, 0, 0, 3, 0, 0, 0,  # 20 - 27
-     0, 4, 0, 0, 0, 0, 0, 0,  # 28 - 2f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 30 - 37
-     0, 0, 0, 0, 0, 0, 0, 0,  # 38 - 3f
-     0, 0, 0, 5, 0, 0, 0, 0,  # 40 - 47
-     0, 0, 0, 0, 0, 0, 0, 0,  # 48 - 4f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 50 - 57
-     0, 0, 0, 0, 0, 0, 0, 0,  # 58 - 5f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 60 - 67
-     0, 0, 0, 0, 0, 0, 0, 0,  # 68 - 6f
-     0, 0, 0, 0, 0, 0, 0, 0,  # 70 - 77
-     0, 0, 0, 0, 0, 0, 0, 0,  # 78 - 7f
-     2, 2, 2, 2, 2, 2, 2, 2,  # 80 - 87
-     2, 2, 2, 2, 2, 2, 2, 2,  # 88 - 8f
-     2, 2, 2, 2, 2, 2, 2, 2,  # 90 - 97
-     2, 2, 2, 2, 2, 2, 2, 2,  # 98 - 9f
-     2, 2, 2, 2, 2, 2, 2, 2,  # a0 - a7
-     2, 2, 2, 2, 2, 2, 2, 2,  # a8 - af
-     2, 2, 2, 2, 2, 2, 2, 2,  # b0 - b7
-     2, 2, 2, 2, 2, 2, 2, 2,  # b8 - bf
-     2, 2, 2, 2, 2, 2, 2, 2,  # c0 - c7
-     2, 2, 2, 2, 2, 2, 2, 2,  # c8 - cf
-     2, 2, 2, 2, 2, 2, 2, 2,  # d0 - d7
-     2, 2, 2, 2, 2, 2, 2, 2,  # d8 - df
-     2, 2, 2, 2, 2, 2, 2, 2,  # e0 - e7
-     2, 2, 2, 2, 2, 2, 2, 2,  # e8 - ef
-     2, 2, 2, 2, 2, 2, 2, 2,  # f0 - f7
-     2, 2, 2, 2, 2, 2, 2, 2,  # f8 - ff
- )
-
- ISO2022KR_ST = (
-     MachineState.START,      3, MachineState.ERROR, MachineState.START, MachineState.START, MachineState.START, MachineState.ERROR, MachineState.ERROR,  # 00-07
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME,  # 08-0f
-     MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,      4, MachineState.ERROR, MachineState.ERROR,  # 10-17
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,      5, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR,  # 18-1f
-     MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.START, MachineState.START, MachineState.START, MachineState.START,  # 20-27
- )
- # fmt: on
-
- ISO2022KR_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0)
-
- ISO2022KR_SM_MODEL: CodingStateMachineDict = {
-     "class_table": ISO2022KR_CLS,
-     "class_factor": 6,
-     "state_table": ISO2022KR_ST,
-     "char_len_table": ISO2022KR_CHAR_LEN_TABLE,
-     "name": "ISO-2022-KR",
-     "language": "Korean",
- }
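
These model dicts are consumed by chardet's CodingStateMachine; a minimal sketch of driving one over a byte stream (import paths assume the vendored layout used above, and the sample bytes are illustrative):

# Sketch: feed bytes through the HZ-GB-2312 state machine.
from pip._vendor.chardet.codingstatemachine import CodingStateMachine
from pip._vendor.chardet.enums import MachineState

sm = CodingStateMachine(HZ_SM_MODEL)
for byte in b"~{}":  # illustrative HZ escape-sequence bytes
    state = sm.next_state(byte)
    if state == MachineState.ITS_ME:
        print("detected:", HZ_SM_MODEL["name"])
        break
    if state == MachineState.ERROR:
        print("not", HZ_SM_MODEL["name"])
        break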
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/svg.py DELETED
@@ -1,188 +0,0 @@
- """
-     pygments.formatters.svg
-     ~~~~~~~~~~~~~~~~~~~~~~~
-
-     Formatter for SVG output.
-
-     :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
-     :license: BSD, see LICENSE for details.
- """
-
- from pip._vendor.pygments.formatter import Formatter
- from pip._vendor.pygments.token import Comment
- from pip._vendor.pygments.util import get_bool_opt, get_int_opt
-
- __all__ = ['SvgFormatter']
-
-
- def escape_html(text):
-     """Escape &, <, > as well as single and double quotes for HTML."""
-     return text.replace('&', '&amp;'). \
-                 replace('<', '&lt;'). \
-                 replace('>', '&gt;'). \
-                 replace('"', '&quot;'). \
-                 replace("'", '&#39;')
-
-
- class2style = {}
-
- class SvgFormatter(Formatter):
-     """
-     Format tokens as an SVG graphics file.  This formatter is still experimental.
-     Each line of code is a ``<text>`` element with explicit ``x`` and ``y``
-     coordinates containing ``<tspan>`` elements with the individual token styles.
-
-     By default, this formatter outputs a full SVG document including doctype
-     declaration and the ``<svg>`` root element.
-
-     .. versionadded:: 0.9
-
-     Additional options accepted:
-
-     `nowrap`
-         Don't wrap the SVG ``<text>`` elements in ``<svg><g>`` elements and
-         don't add a XML declaration and a doctype.  If true, the `fontfamily`
-         and `fontsize` options are ignored.  Defaults to ``False``.
-
-     `fontfamily`
-         The value to give the wrapping ``<g>`` element's ``font-family``
-         attribute, defaults to ``"monospace"``.
-
-     `fontsize`
-         The value to give the wrapping ``<g>`` element's ``font-size``
-         attribute, defaults to ``"14px"``.
-
-     `linenos`
-         If ``True``, add line numbers (default: ``False``).
-
-     `linenostart`
-         The line number for the first line (default: ``1``).
-
-     `linenostep`
-         If set to a number n > 1, only every nth line number is printed.
-
-     `linenowidth`
-         Maximum width devoted to line numbers (default: ``3*ystep``, sufficient
-         for up to 4-digit line numbers. Increase width for longer code blocks).
-
-     `xoffset`
-         Starting offset in X direction, defaults to ``0``.
-
-     `yoffset`
-         Starting offset in Y direction, defaults to the font size if it is given
-         in pixels, or ``20`` else.  (This is necessary since text coordinates
-         refer to the text baseline, not the top edge.)
-
-     `ystep`
-         Offset to add to the Y coordinate for each subsequent line.  This should
-         roughly be the text size plus 5.  It defaults to that value if the text
-         size is given in pixels, or ``25`` else.
-
-     `spacehack`
-         Convert spaces in the source to ``&#160;``, which are non-breaking
-         spaces.  SVG provides the ``xml:space`` attribute to control how
-         whitespace inside tags is handled, in theory, the ``preserve`` value
-         could be used to keep all whitespace as-is.  However, many current SVG
-         viewers don't obey that rule, so this option is provided as a workaround
-         and defaults to ``True``.
-     """
-     name = 'SVG'
-     aliases = ['svg']
-     filenames = ['*.svg']
-
-     def __init__(self, **options):
-         Formatter.__init__(self, **options)
-         self.nowrap = get_bool_opt(options, 'nowrap', False)
-         self.fontfamily = options.get('fontfamily', 'monospace')
-         self.fontsize = options.get('fontsize', '14px')
-         self.xoffset = get_int_opt(options, 'xoffset', 0)
-         fs = self.fontsize.strip()
-         if fs.endswith('px'): fs = fs[:-2].strip()
-         try:
-             int_fs = int(fs)
-         except:
-             int_fs = 20
-         self.yoffset = get_int_opt(options, 'yoffset', int_fs)
-         self.ystep = get_int_opt(options, 'ystep', int_fs + 5)
-         self.spacehack = get_bool_opt(options, 'spacehack', True)
-         self.linenos = get_bool_opt(options, 'linenos', False)
-         self.linenostart = get_int_opt(options, 'linenostart', 1)
-         self.linenostep = get_int_opt(options, 'linenostep', 1)
-         self.linenowidth = get_int_opt(options, 'linenowidth', 3 * self.ystep)
-         self._stylecache = {}
-
-     def format_unencoded(self, tokensource, outfile):
-         """
-         Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
-         tuples and write it into ``outfile``.
-
-         For our implementation we put all lines in their own 'line group'.
-         """
-         x = self.xoffset
-         y = self.yoffset
-         if not self.nowrap:
-             if self.encoding:
-                 outfile.write('<?xml version="1.0" encoding="%s"?>\n' %
-                               self.encoding)
-             else:
-                 outfile.write('<?xml version="1.0"?>\n')
-             outfile.write('<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN" '
-                           '"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/'
-                           'svg10.dtd">\n')
-             outfile.write('<svg xmlns="http://www.w3.org/2000/svg">\n')
-             outfile.write('<g font-family="%s" font-size="%s">\n' %
-                           (self.fontfamily, self.fontsize))
-
-         counter = self.linenostart
-         counter_step = self.linenostep
-         counter_style = self._get_style(Comment)
-         line_x = x
-
-         if self.linenos:
-             if counter % counter_step == 0:
-                 outfile.write('<text x="%s" y="%s" %s text-anchor="end">%s</text>' %
-                               (x + self.linenowidth, y, counter_style, counter))
-             line_x += self.linenowidth + self.ystep
-             counter += 1
-
-         outfile.write('<text x="%s" y="%s" xml:space="preserve">' % (line_x, y))
-         for ttype, value in tokensource:
-             style = self._get_style(ttype)
-             tspan = style and '<tspan' + style + '>' or ''
-             tspanend = tspan and '</tspan>' or ''
-             value = escape_html(value)
-             if self.spacehack:
-                 value = value.expandtabs().replace(' ', '&#160;')
-             parts = value.split('\n')
-             for part in parts[:-1]:
-                 outfile.write(tspan + part + tspanend)
-                 y += self.ystep
-                 outfile.write('</text>\n')
-                 if self.linenos and counter % counter_step == 0:
-                     outfile.write('<text x="%s" y="%s" text-anchor="end" %s>%s</text>' %
-                                   (x + self.linenowidth, y, counter_style, counter))
-
-                 counter += 1
-                 outfile.write('<text x="%s" y="%s" ' 'xml:space="preserve">' % (line_x, y))
-             outfile.write(tspan + parts[-1] + tspanend)
-         outfile.write('</text>')
-
-         if not self.nowrap:
-             outfile.write('</g></svg>\n')
-
-     def _get_style(self, tokentype):
-         if tokentype in self._stylecache:
-             return self._stylecache[tokentype]
-         otokentype = tokentype
-         while not self.style.styles_token(tokentype):
-             tokentype = tokentype.parent
-         value = self.style.style_for_token(tokentype)
-         result = ''
-         if value['color']:
-             result = ' fill="#' + value['color'] + '"'
-         if value['bold']:
-             result += ' font-weight="bold"'
-         if value['italic']:
-             result += ' font-style="italic"'
-         self._stylecache[otokentype] = result
-         return result
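
Standard pygments usage applies to this formatter — a short sketch (import paths assume the vendored copy above; against a normal pygments install, drop the `pip._vendor.` prefix):

# Sketch: render highlighted Python source as a standalone SVG document.
from pip._vendor.pygments import highlight
from pip._vendor.pygments.lexers import PythonLexer
from pip._vendor.pygments.formatters import SvgFormatter

svg_doc = highlight("print('hello')", PythonLexer(), SvgFormatter(linenos=True))
with open("hello.svg", "w") as f:
    f.write(svg_doc)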
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/request.py DELETED
@@ -1,137 +0,0 @@
- from __future__ import absolute_import
-
- from base64 import b64encode
-
- from ..exceptions import UnrewindableBodyError
- from ..packages.six import b, integer_types
-
- # Pass as a value within ``headers`` to skip
- # emitting some HTTP headers that are added automatically.
- # The only headers that are supported are ``Accept-Encoding``,
- # ``Host``, and ``User-Agent``.
- SKIP_HEADER = "@@@SKIP_HEADER@@@"
- SKIPPABLE_HEADERS = frozenset(["accept-encoding", "host", "user-agent"])
-
- ACCEPT_ENCODING = "gzip,deflate"
-
- _FAILEDTELL = object()
-
-
- def make_headers(
-     keep_alive=None,
-     accept_encoding=None,
-     user_agent=None,
-     basic_auth=None,
-     proxy_basic_auth=None,
-     disable_cache=None,
- ):
-     """
-     Shortcuts for generating request headers.
-
-     :param keep_alive:
-         If ``True``, adds 'connection: keep-alive' header.
-
-     :param accept_encoding:
-         Can be a boolean, list, or string.
-         ``True`` translates to 'gzip,deflate'.
-         List will get joined by comma.
-         String will be used as provided.
-
-     :param user_agent:
-         String representing the user-agent you want, such as
-         "python-urllib3/0.6"
-
-     :param basic_auth:
-         Colon-separated username:password string for 'authorization: basic ...'
-         auth header.
-
-     :param proxy_basic_auth:
-         Colon-separated username:password string for 'proxy-authorization: basic ...'
-         auth header.
-
-     :param disable_cache:
-         If ``True``, adds 'cache-control: no-cache' header.
-
-     Example::
-
-         >>> make_headers(keep_alive=True, user_agent="Batman/1.0")
-         {'connection': 'keep-alive', 'user-agent': 'Batman/1.0'}
-         >>> make_headers(accept_encoding=True)
-         {'accept-encoding': 'gzip,deflate'}
-     """
-     headers = {}
-     if accept_encoding:
-         if isinstance(accept_encoding, str):
-             pass
-         elif isinstance(accept_encoding, list):
-             accept_encoding = ",".join(accept_encoding)
-         else:
-             accept_encoding = ACCEPT_ENCODING
-         headers["accept-encoding"] = accept_encoding
-
-     if user_agent:
-         headers["user-agent"] = user_agent
-
-     if keep_alive:
-         headers["connection"] = "keep-alive"
-
-     if basic_auth:
-         headers["authorization"] = "Basic " + b64encode(b(basic_auth)).decode("utf-8")
-
-     if proxy_basic_auth:
-         headers["proxy-authorization"] = "Basic " + b64encode(
-             b(proxy_basic_auth)
-         ).decode("utf-8")
-
-     if disable_cache:
-         headers["cache-control"] = "no-cache"
-
-     return headers
-
-
- def set_file_position(body, pos):
-     """
-     If a position is provided, move file to that point.
-     Otherwise, we'll attempt to record a position for future use.
-     """
-     if pos is not None:
-         rewind_body(body, pos)
-     elif getattr(body, "tell", None) is not None:
-         try:
-             pos = body.tell()
-         except (IOError, OSError):
-             # This differentiates from None, allowing us to catch
-             # a failed `tell()` later when trying to rewind the body.
-             pos = _FAILEDTELL
-
-     return pos
-
-
- def rewind_body(body, body_pos):
-     """
-     Attempt to rewind body to a certain position.
-     Primarily used for request redirects and retries.
-
-     :param body:
-         File-like object that supports seek.
-
-     :param int pos:
-         Position to seek to in file.
-     """
-     body_seek = getattr(body, "seek", None)
-     if body_seek is not None and isinstance(body_pos, integer_types):
-         try:
-             body_seek(body_pos)
-         except (IOError, OSError):
-             raise UnrewindableBodyError(
-                 "An error occurred when rewinding request body for redirect/retry."
-             )
-     elif body_pos is _FAILEDTELL:
-         raise UnrewindableBodyError(
-             "Unable to record file position for rewinding "
-             "request body during a redirect/retry."
-         )
-     else:
-         raise ValueError(
-             "body_pos must be of type integer, instead it was %s." % type(body_pos)
-         )
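
The three helpers compose as follows on urllib3's redirect/retry path — a condensed sketch, not the library's actual call site:

# Sketch: record the body position up front, then rewind before retrying.
import io

body = io.BytesIO(b"payload")
pos = set_file_position(body, None)  # records body.tell() -> 0
body.read()                          # the first attempt consumes the body
rewind_body(body, pos)               # seek back to 0 for the retry

headers = make_headers(keep_alive=True, user_agent="example/1.0")
print(headers)  # user-agent and 'connection: keep-alive' entries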
spaces/Boadiwaa/Recipes/openai/__init__.py DELETED
@@ -1,73 +0,0 @@
- # OpenAI Python bindings.
- #
- # Originally forked from the MIT-licensed Stripe Python bindings.
-
- import os
- from typing import Optional
-
- from openai.api_resources import (
-     Answer,
-     Classification,
-     Completion,
-     Customer,
-     Edit,
-     Deployment,
-     Embedding,
-     Engine,
-     ErrorObject,
-     File,
-     FineTune,
-     Model,
-     Search,
- )
- from openai.error import APIError, InvalidRequestError, OpenAIError
-
- api_key = os.environ.get("OPENAI_API_KEY")
- # Path of a file with an API key, whose contents can change. Supersedes
- # `api_key` if set. The main use case is volume-mounted Kubernetes secrets,
- # which are updated automatically.
- api_key_path: Optional[str] = os.environ.get("OPENAI_API_KEY_PATH")
-
- organization = os.environ.get("OPENAI_ORGANIZATION")
- api_base = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
- api_type = os.environ.get("OPENAI_API_TYPE", "open_ai")
- api_version = "2021-11-01-preview" if api_type == "azure" else None
- verify_ssl_certs = True  # No effect. Certificates are always verified.
- proxy = None
- app_info = None
- enable_telemetry = False  # Ignored; the telemetry feature was removed.
- ca_bundle_path = None  # No longer used, feature was removed
- debug = False
- log = None  # Set to either 'debug' or 'info', controls console logging
-
- __all__ = [
-     "APIError",
-     "Answer",
-     "Classification",
-     "Completion",
-     "Customer",
-     "Edit",
-     "Deployment",
-     "Embedding",
-     "Engine",
-     "ErrorObject",
-     "File",
-     "FineTune",
-     "InvalidRequestError",
-     "Model",
-     "OpenAIError",
-     "Search",
-     "api_base",
-     "api_key",
-     "api_type",
-     "api_key_path",
-     "api_version",
-     "app_info",
-     "ca_bundle_path",
-     "debug",
-     "enable_telemetry",
-     "log",
-     "organization",
-     "proxy",
-     "verify_ssl_certs",
- ]
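
Typical configuration of this (pre-1.0) binding is plain module-level assignment — a sketch; the engine name is illustrative:

# Sketch: configure the legacy bindings above and issue one completion.
import openai

openai.api_key = "sk-..."  # or rely on the OPENAI_API_KEY environment variable
completion = openai.Completion.create(
    engine="text-davinci-003",  # illustrative engine name
    prompt="Say hello",
    max_tokens=5,
)
print(completion["choices"][0]["text"])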
spaces/Boadiwaa/Recipes/openai/embeddings_utils.py DELETED
@@ -1,227 +0,0 @@
- import textwrap as tr
- from typing import List, Optional
-
- import matplotlib.pyplot as plt
- import numpy as np
- import pandas as pd
- import plotly.express as px
- from scipy import spatial
- from sklearn.decomposition import PCA
- from sklearn.manifold import TSNE
- from sklearn.metrics import average_precision_score, precision_recall_curve
- from tenacity import retry, stop_after_attempt, wait_random_exponential
-
- import openai
-
-
- @retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
- def get_embedding(text: str, engine="text-similarity-davinci-001") -> List[float]:
-
-     # replace newlines, which can negatively affect performance.
-     text = text.replace("\n", " ")
-
-     return openai.Embedding.create(input=[text], engine=engine)["data"][0]["embedding"]
-
-
- @retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
- def get_embeddings(
-     list_of_text: List[str], engine="text-similarity-babbage-001"
- ) -> List[List[float]]:
-     assert len(list_of_text) < 2048, "The batch size should not be larger than 2048."
-
-     # replace newlines, which can negatively affect performance.
-     list_of_text = [text.replace("\n", " ") for text in list_of_text]
-
-     data = openai.Embedding.create(input=list_of_text, engine=engine).data
-     data = sorted(data, key=lambda x: x["index"])  # maintain the same order as input.
-     return [d["embedding"] for d in data]
-
-
- def cosine_similarity(a, b):
-     return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
-
-
- def plot_multiclass_precision_recall(
-     y_score, y_true_untransformed, class_list, classifier_name
- ):
-     """
-     Precision-Recall plotting for a multiclass problem. It plots average precision-recall, per class precision recall and reference f1 contours.
-
-     Code slightly modified, but heavily based on https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
-     """
-     n_classes = len(class_list)
-     y_true = pd.concat(
-         [(y_true_untransformed == class_list[i]) for i in range(n_classes)], axis=1
-     ).values
-
-     # For each class
-     precision = dict()
-     recall = dict()
-     average_precision = dict()
-     for i in range(n_classes):
-         precision[i], recall[i], _ = precision_recall_curve(y_true[:, i], y_score[:, i])
-         average_precision[i] = average_precision_score(y_true[:, i], y_score[:, i])
-
-     # A "micro-average": quantifying score on all classes jointly
-     precision_micro, recall_micro, _ = precision_recall_curve(
-         y_true.ravel(), y_score.ravel()
-     )
-     average_precision_micro = average_precision_score(y_true, y_score, average="micro")
-     print(
-         str(classifier_name)
-         + " - Average precision score over all classes: {0:0.2f}".format(
-             average_precision_micro
-         )
-     )
-
-     # setup plot details
-     plt.figure(figsize=(9, 10))
-     f_scores = np.linspace(0.2, 0.8, num=4)
-     lines = []
-     labels = []
-     for f_score in f_scores:
-         x = np.linspace(0.01, 1)
-         y = f_score * x / (2 * x - f_score)
-         (l,) = plt.plot(x[y >= 0], y[y >= 0], color="gray", alpha=0.2)
-         plt.annotate("f1={0:0.1f}".format(f_score), xy=(0.9, y[45] + 0.02))
-
-     lines.append(l)
-     labels.append("iso-f1 curves")
-     (l,) = plt.plot(recall_micro, precision_micro, color="gold", lw=2)
-     lines.append(l)
-     labels.append(
-         "average Precision-recall (auprc = {0:0.2f})" "".format(average_precision_micro)
-     )
-
-     for i in range(n_classes):
-         (l,) = plt.plot(recall[i], precision[i], lw=2)
-         lines.append(l)
-         labels.append(
-             "Precision-recall for class `{0}` (auprc = {1:0.2f})"
-             "".format(class_list[i], average_precision[i])
-         )
-
-     fig = plt.gcf()
-     fig.subplots_adjust(bottom=0.25)
-     plt.xlim([0.0, 1.0])
-     plt.ylim([0.0, 1.05])
-     plt.xlabel("Recall")
-     plt.ylabel("Precision")
-     plt.title(f"{classifier_name}: Precision-Recall curve for each class")
-     plt.legend(lines, labels)
-
-
- def distances_from_embeddings(
-     query_embedding: List[float],
-     embeddings: List[List[float]],
-     distance_metric="cosine",
- ) -> List[List]:
-     """Return the distances between a query embedding and a list of embeddings."""
-     distance_metrics = {
-         "cosine": spatial.distance.cosine,
-         "L1": spatial.distance.cityblock,
-         "L2": spatial.distance.euclidean,
-         "Linf": spatial.distance.chebyshev,
-     }
-     distances = [
-         distance_metrics[distance_metric](query_embedding, embedding)
-         for embedding in embeddings
-     ]
-     return distances
-
-
- def indices_of_nearest_neighbors_from_distances(distances) -> np.ndarray:
-     """Return a list of indices of nearest neighbors from a list of distances."""
-     return np.argsort(distances)
-
-
- def pca_components_from_embeddings(
-     embeddings: List[List[float]], n_components=2
- ) -> np.ndarray:
-     """Return the PCA components of a list of embeddings."""
-     pca = PCA(n_components=n_components)
-     array_of_embeddings = np.array(embeddings)
-     return pca.fit_transform(array_of_embeddings)
-
-
- def tsne_components_from_embeddings(
-     embeddings: List[List[float]], n_components=2, **kwargs
- ) -> np.ndarray:
-     """Returns t-SNE components of a list of embeddings."""
-     # use better defaults if not specified
-     if "init" not in kwargs.keys():
-         kwargs["init"] = "pca"
-     if "learning_rate" not in kwargs.keys():
-         kwargs["learning_rate"] = "auto"
-     tsne = TSNE(n_components=n_components, **kwargs)
-     array_of_embeddings = np.array(embeddings)
-     return tsne.fit_transform(array_of_embeddings)
-
-
- def chart_from_components(
-     components: np.ndarray,
-     labels: Optional[List[str]] = None,
-     strings: Optional[List[str]] = None,
-     x_title="Component 0",
-     y_title="Component 1",
-     mark_size=5,
-     **kwargs,
- ):
-     """Return an interactive 2D chart of embedding components."""
-     empty_list = ["" for _ in components]
-     data = pd.DataFrame(
-         {
-             x_title: components[:, 0],
-             y_title: components[:, 1],
-             "label": labels if labels else empty_list,
-             "string": ["<br>".join(tr.wrap(string, width=30)) for string in strings]
-             if strings
-             else empty_list,
-         }
-     )
-     chart = px.scatter(
-         data,
-         x=x_title,
-         y=y_title,
-         color="label" if labels else None,
-         symbol="label" if labels else None,
-         hover_data=["string"] if strings else None,
-         **kwargs,
-     ).update_traces(marker=dict(size=mark_size))
-     return chart
-
-
- def chart_from_components_3D(
-     components: np.ndarray,
-     labels: Optional[List[str]] = None,
-     strings: Optional[List[str]] = None,
-     x_title: str = "Component 0",
-     y_title: str = "Component 1",
-     z_title: str = "Component 2",
-     mark_size: int = 5,
-     **kwargs,
- ):
-     """Return an interactive 3D chart of embedding components."""
-     empty_list = ["" for _ in components]
-     data = pd.DataFrame(
-         {
-             x_title: components[:, 0],
-             y_title: components[:, 1],
-             z_title: components[:, 2],
-             "label": labels if labels else empty_list,
-             "string": ["<br>".join(tr.wrap(string, width=30)) for string in strings]
-             if strings
-             else empty_list,
-         }
-     )
-     chart = px.scatter_3d(
-         data,
-         x=x_title,
-         y=y_title,
-         z=z_title,
-         color="label" if labels else None,
-         symbol="label" if labels else None,
-         hover_data=["string"] if strings else None,
-         **kwargs,
-     ).update_traces(marker=dict(size=mark_size))
-     return chart
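
The distance helpers are pure numpy/scipy and can be exercised without any API calls — a small sketch with dummy vectors:

# Sketch: nearest-neighbor lookup with the helpers above, no API key needed.
query = [1.0, 0.0]
corpus = [[0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]]

dists = distances_from_embeddings(query, corpus, distance_metric="cosine")
order = indices_of_nearest_neighbors_from_distances(dists)
print(order)  # [0 1 2]: the near-parallel vector first, the opposite one last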
spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/reduce_intervals.h DELETED
@@ -1,53 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
-
- /*! \file reduce_intervals.h
-  *  \brief OpenMP implementations of reduce_intervals algorithms.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/system/omp/detail/execution_policy.h>
-
- namespace thrust
- {
- namespace system
- {
- namespace omp
- {
- namespace detail
- {
-
- template <typename DerivedPolicy,
-           typename InputIterator,
-           typename OutputIterator,
-           typename BinaryFunction,
-           typename Decomposition>
- void reduce_intervals(execution_policy<DerivedPolicy> &exec,
-                       InputIterator input,
-                       OutputIterator output,
-                       BinaryFunction binary_op,
-                       Decomposition decomp);
-
- } // end namespace detail
- } // end namespace omp
- } // end namespace system
- } // end namespace thrust
-
- #include <thrust/system/omp/detail/reduce_intervals.inl>
-
spaces/CVPR/WALT/configs/_base_/models/mask_rcnn_swin_fpn.py DELETED
@@ -1,127 +0,0 @@
- # model settings
- model = dict(
-     type='MaskRCNN',
-     pretrained=None,
-     backbone=dict(
-         type='SwinTransformer',
-         embed_dim=96,
-         depths=[2, 2, 6, 2],
-         num_heads=[3, 6, 12, 24],
-         window_size=7,
-         mlp_ratio=4.,
-         qkv_bias=True,
-         qk_scale=None,
-         drop_rate=0.,
-         attn_drop_rate=0.,
-         drop_path_rate=0.2,
-         ape=False,
-         patch_norm=True,
-         out_indices=(0, 1, 2, 3),
-         use_checkpoint=False),
-     neck=dict(
-         type='FPN',
-         in_channels=[96, 192, 384, 768],
-         out_channels=256,
-         num_outs=5),
-     rpn_head=dict(
-         type='RPNHead',
-         in_channels=256,
-         feat_channels=256,
-         anchor_generator=dict(
-             type='AnchorGenerator',
-             scales=[8],
-             ratios=[0.5, 1.0, 2.0],
-             strides=[4, 8, 16, 32, 64]),
-         bbox_coder=dict(
-             type='DeltaXYWHBBoxCoder',
-             target_means=[.0, .0, .0, .0],
-             target_stds=[1.0, 1.0, 1.0, 1.0]),
-         loss_cls=dict(
-             type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
-         loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
-     roi_head=dict(
-         type='StandardRoIHead',
-         bbox_roi_extractor=dict(
-             type='SingleRoIExtractor',
-             roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
-             out_channels=256,
-             featmap_strides=[4, 8, 16, 32]),
-         bbox_head=dict(
-             type='Shared2FCBBoxHead',
-             in_channels=256,
-             fc_out_channels=1024,
-             roi_feat_size=7,
-             num_classes=80,
-             bbox_coder=dict(
-                 type='DeltaXYWHBBoxCoder',
-                 target_means=[0., 0., 0., 0.],
-                 target_stds=[0.1, 0.1, 0.2, 0.2]),
-             reg_class_agnostic=False,
-             loss_cls=dict(
-                 type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
-             loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
-         mask_roi_extractor=dict(
-             type='SingleRoIExtractor',
-             roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
-             out_channels=256,
-             featmap_strides=[4, 8, 16, 32]),
-         mask_head=dict(
-             type='FCNMaskHead',
-             num_convs=4,
-             in_channels=256,
-             conv_out_channels=256,
-             num_classes=80,
-             loss_mask=dict(
-                 type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
-     # model training and testing settings
-     train_cfg=dict(
-         rpn=dict(
-             assigner=dict(
-                 type='MaxIoUAssigner',
-                 pos_iou_thr=0.7,
-                 neg_iou_thr=0.3,
-                 min_pos_iou=0.3,
-                 match_low_quality=True,
-                 ignore_iof_thr=-1),
-             sampler=dict(
-                 type='RandomSampler',
-                 num=256,
-                 pos_fraction=0.5,
-                 neg_pos_ub=-1,
-                 add_gt_as_proposals=False),
-             allowed_border=-1,
-             pos_weight=-1,
-             debug=False),
-         rpn_proposal=dict(
-             nms_pre=2000,
-             max_per_img=1000,
-             nms=dict(type='nms', iou_threshold=0.7),
-             min_bbox_size=0),
-         rcnn=dict(
-             assigner=dict(
-                 type='MaxIoUAssigner',
-                 pos_iou_thr=0.5,
-                 neg_iou_thr=0.5,
-                 min_pos_iou=0.5,
-                 match_low_quality=True,
-                 ignore_iof_thr=-1),
-             sampler=dict(
-                 type='RandomSampler',
-                 num=512,
-                 pos_fraction=0.25,
-                 neg_pos_ub=-1,
-                 add_gt_as_proposals=True),
-             mask_size=28,
-             pos_weight=-1,
-             debug=False)),
-     test_cfg=dict(
-         rpn=dict(
-             nms_pre=1000,
-             max_per_img=1000,
-             nms=dict(type='nms', iou_threshold=0.7),
-             min_bbox_size=0),
-         rcnn=dict(
-             score_thr=0.05,
-             nms=dict(type='nms', iou_threshold=0.5),
-             max_per_img=100,
-             mask_thr_binary=0.5)))
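
In the mmdetection 2.x convention this `_base_` fragment follows, a concrete config inherits it and the detector is built from the merged dict — a sketch; the config filename is hypothetical, and note that `train_cfg`/`test_cfg` already live inside the model dict here:

# Sketch: instantiate the detector from a config inheriting this _base_ file.
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile("configs/swin/mask_rcnn_swin_tiny_fpn_1x_coco.py")  # hypothetical path
model = build_detector(cfg.model)  # train_cfg/test_cfg are embedded in cfg.model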
spaces/CVPR/v-doc_abstractive_mac/app.py DELETED
@@ -1,12 +0,0 @@
- import gradio as gr
-
- description = "Story generation with GPT-2"
- title = "Generate your own story"
- examples = [["Adventurer is approached by a mysterious stranger in the tavern for a new quest."]]
-
- interface = gr.Interface.load("huggingface/ydin0771/vdoc-demo-mac",
-                               description=description,
-                               examples=examples
-                               )
-
- interface.launch()
spaces/CikeyQI/meme-api/meme_generator/memes/lim_x_0/__init__.py DELETED
@@ -1,35 +0,0 @@
- from pathlib import Path
- from typing import List
-
- from pil_utils import BuildImage
-
- from meme_generator import add_meme
-
- img_dir = Path(__file__).parent / "images"
-
-
- def lim_x_0(images: List[BuildImage], texts, args):
-     img = images[0]
-     frame = BuildImage.open(img_dir / "0.png")
-     img_c = img.convert("RGBA").circle().resize((72, 72))
-     img_tp = img.convert("RGBA").circle().resize((51, 51))
-     frame.paste(img_tp, (948, 247), alpha=True)
-     # fmt: off
-     locs = [
-         (143, 32), (155, 148), (334, 149), (275, 266), (486, 266),
-         (258, 383), (439, 382), (343, 539), (577, 487), (296, 717),
-         (535, 717), (64, 896), (340, 896), (578, 897), (210, 1038),
-         (644, 1039), (64, 1192), (460, 1192), (698, 1192), (1036, 141),
-         (1217, 141), (1243, 263), (1140, 378), (1321, 378), (929, 531),
-         (1325, 531), (1592, 531), (1007, 687), (1390, 687), (1631, 686),
-         (1036, 840), (1209, 839), (1447, 839), (1141, 1018), (1309, 1019),
-         (1546, 1019), (1037, 1197), (1317, 1198), (1555, 1197),
-     ]
-     # fmt: on
-     for i in range(39):
-         x, y = locs[i]
-         frame.paste(img_c, (x, y), alpha=True)
-     return frame.save_jpg()
-
-
- add_meme("lim_x_0", lim_x_0, min_images=1, max_images=1, keywords=["等价无穷小"])
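
Meme functions in this framework take a list of `BuildImage`s plus (here unused) text/args slots and return the rendered JPEG as a BytesIO — a sketch of calling it directly; the avatar path is illustrative:

# Sketch: render the meme without going through the meme-api server.
from pil_utils import BuildImage

avatar = BuildImage.open("avatar.png")  # illustrative input image
result = lim_x_0([avatar], texts=[], args=None)
with open("lim_x_0.jpg", "wb") as f:
    f.write(result.getvalue())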
spaces/CofAI/chat/g4f/Provider/Providers/hteyun.py DELETED
@@ -1,34 +0,0 @@
- import requests
- import os
- import json
- from ...typing import sha256, Dict, get_type_hints
-
- url = 'https://hteyun.com'
- model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
- supports_stream = True
- needs_auth = False
-
- def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
-     headers = {
-         'Content-Type': 'application/json',
-         'Accept': 'application/json, text/plain, */*',
-         'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4',
-         'Origin': 'https://hteyun.com',
-         'Referer': 'https://hteyun.com/chat/',
-     }
-     data = {
-         'messages': messages,
-         'model': model,
-         'systemMessage': 'You are ChatGPT, a large language model trained by OpenAI. Follow the user\'s instructions carefully. Respond using russian language.',
-         'temperature': temperature,
-         'presence_penalty': 0,
-     }
-     response = requests.post(url + '/api/chat-stream', json=data, headers=headers, stream=True)
-
-     # Extract the text from the response
-     return response.json()['text']
-
-
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-     '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
 
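Note: the provider above advertises supports_stream = True and posts with stream=True, yet it then calls response.json() (twice), which buffers and consumes the whole body. A minimal sketch of genuinely incremental consumption, assuming (unverified) that the endpoint emits plain-text chunks:

import requests

def stream_completion(url: str, data: dict, headers: dict):
    # Yield text chunks as they arrive instead of buffering with .json().
    with requests.post(url + '/api/chat-stream', json=data,
                       headers=headers, stream=True) as response:
        response.raise_for_status()
        for chunk in response.iter_content(chunk_size=None, decode_unicode=True):
            if chunk:  # skip keep-alive chunks
                yield chunk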
spaces/Colbe/basketball/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Basketball
- emoji: 📊
- colorFrom: pink
- colorTo: red
- sdk: gradio
- sdk_version: 3.5
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Cran-May/yugangVI/model.py DELETED
@@ -1,34 +0,0 @@
-
- from typing import Iterator
-
-
-
- model_id = 'xuqinyang/baichuan-13b-chat-ggml-int4'
-
- from huggingface_hub import snapshot_download,hf_hub_download
- # old
- #snapshot_download(model_id, local_dir="./",revision="7f71a8abefa7b2eede3f74ce0564abe5fbe6874a")
- snapshot_download(model_id, local_dir="./",revision="b2414a0ceee68fe09c99ace44446cfc9a1c52b08")
- hf_hub_download(repo_id="baichuan-inc/Baichuan-13B-Chat",local_dir="./", filename="tokenizer.model")
- from llama_cpp import Llama
- llm = Llama(model_path="./ggml-model-q4_0.bin", n_ctx=4096,seed=-1)
-
- def run(message: str,
-         chat_history: list[tuple[str, str]],
-         system_prompt: str,
-         max_new_tokens: int = 1024,
-         temperature: float = 0.3,
-         top_p: float = 0.85,
-         top_k: int = 5) -> Iterator[str]:
-     history = []
-     print(chat_history)
-     result=""
-     for i in chat_history:
-         history.append({"role": "user", "content": i[0]})
-         history.append({"role": "assistant", "content": i[1]})
-     print(history)
-     history.append({"role": "user", "content": message})
-     for response in llm.create_chat_completion(history,stop=["</s>"],stream=True,max_tokens=-1,temperature=temperature,top_k=top_k,top_p=top_p,repeat_penalty=1.1):
-         if "content" in response["choices"][0]["delta"]:
-             result = result + response["choices"][0]["delta"]["content"]
-             yield result
 
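For reference, a minimal sketch of driving the run() generator above; the prompt and history values are illustrative, and the model files must already be downloaded as shown:

# run() yields the accumulated reply so far, so the last value is the full answer.
reply = ""
for partial in run("Hello", chat_history=[], system_prompt=""):
    reply = partial
print(reply)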
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/registry.py DELETED
@@ -1,45 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-
- def _register_generic(module_dict, module_name, module):
-     assert module_name not in module_dict
-     module_dict[module_name] = module
-
-
- class Registry(dict):
-     '''
-     A helper class for managing registering modules, it extends a dictionary
-     and provides a register function.
-
-     Eg. creating a registry:
-         some_registry = Registry({"default": default_module})
-
-     There are two ways of registering new modules:
-     1): normal way is just calling register function:
-         def foo():
-             ...
-         some_registry.register("foo_module", foo)
-     2): used as decorator when declaring the module:
-         @some_registry.register("foo_module")
-         @some_registry.register("foo_module_nickname")
-         def foo():
-             ...
-
-     Access of module is just like using a dictionary, eg:
-         f = some_registry["foo_module"]
-     '''
-     def __init__(self, *args, **kwargs):
-         super(Registry, self).__init__(*args, **kwargs)
-
-     def register(self, module_name, module=None):
-         # used as function call
-         if module is not None:
-             _register_generic(self, module_name, module)
-             return
-
-         # used as decorator
-         def register_fn(fn):
-             _register_generic(self, module_name, fn)
-             return fn
-
-         return register_fn
 
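The docstring above describes both registration styles; a runnable sketch, assuming the Registry class above is importable (the registry name and entries are illustrative):

BACKBONES = Registry()

@BACKBONES.register("resnet50")        # decorator form
def build_resnet50():
    return "resnet50 backbone"

BACKBONES.register("identity", lambda: "identity backbone")  # call form

assert BACKBONES["resnet50"]() == "resnet50 backbone"
assert "identity" in BACKBONES  # Registry is a dict, so membership tests work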
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/helpers.py DELETED
@@ -1,878 +0,0 @@
- """Various helper functions"""
-
- import asyncio
- import base64
- import binascii
- import datetime
- import functools
- import inspect
- import netrc
- import os
- import platform
- import re
- import sys
- import time
- import warnings
- import weakref
- from collections import namedtuple
- from contextlib import suppress
- from email.parser import HeaderParser
- from email.utils import parsedate
- from math import ceil
- from pathlib import Path
- from types import TracebackType
- from typing import (
-     Any,
-     Callable,
-     ContextManager,
-     Dict,
-     Generator,
-     Generic,
-     Iterable,
-     Iterator,
-     List,
-     Mapping,
-     Optional,
-     Pattern,
-     Set,
-     Tuple,
-     Type,
-     TypeVar,
-     Union,
-     cast,
- )
- from urllib.parse import quote
- from urllib.request import getproxies, proxy_bypass
-
- import async_timeout
- import attr
- from multidict import MultiDict, MultiDictProxy
- from yarl import URL
-
- from . import hdrs
- from .log import client_logger, internal_logger
- from .typedefs import PathLike, Protocol  # noqa
-
- __all__ = ("BasicAuth", "ChainMapProxy", "ETag")
-
- IS_MACOS = platform.system() == "Darwin"
- IS_WINDOWS = platform.system() == "Windows"
-
- PY_36 = sys.version_info >= (3, 6)
- PY_37 = sys.version_info >= (3, 7)
- PY_38 = sys.version_info >= (3, 8)
- PY_310 = sys.version_info >= (3, 10)
- PY_311 = sys.version_info >= (3, 11)
-
- if sys.version_info < (3, 7):
-     import idna_ssl
-
-     idna_ssl.patch_match_hostname()
-
-     def all_tasks(
-         loop: Optional[asyncio.AbstractEventLoop] = None,
-     ) -> Set["asyncio.Task[Any]"]:
-         tasks = list(asyncio.Task.all_tasks(loop))
-         return {t for t in tasks if not t.done()}
-
- else:
-     all_tasks = asyncio.all_tasks
-
-
- _T = TypeVar("_T")
- _S = TypeVar("_S")
-
-
- sentinel: Any = object()
- NO_EXTENSIONS: bool = bool(os.environ.get("AIOHTTP_NO_EXTENSIONS"))
-
- # N.B. sys.flags.dev_mode is available on Python 3.7+, use getattr
- # for compatibility with older versions
- DEBUG: bool = getattr(sys.flags, "dev_mode", False) or (
-     not sys.flags.ignore_environment and bool(os.environ.get("PYTHONASYNCIODEBUG"))
- )
-
-
- CHAR = {chr(i) for i in range(0, 128)}
- CTL = {chr(i) for i in range(0, 32)} | {
-     chr(127),
- }
- SEPARATORS = {
-     "(",
-     ")",
-     "<",
-     ">",
-     "@",
-     ",",
-     ";",
-     ":",
-     "\\",
-     '"',
-     "/",
-     "[",
-     "]",
-     "?",
-     "=",
-     "{",
-     "}",
-     " ",
-     chr(9),
- }
- TOKEN = CHAR ^ CTL ^ SEPARATORS
-
-
- class noop:
-     def __await__(self) -> Generator[None, None, None]:
-         yield
-
-
- class BasicAuth(namedtuple("BasicAuth", ["login", "password", "encoding"])):
-     """Http basic authentication helper."""
-
-     def __new__(
-         cls, login: str, password: str = "", encoding: str = "latin1"
-     ) -> "BasicAuth":
-         if login is None:
-             raise ValueError("None is not allowed as login value")
-
-         if password is None:
-             raise ValueError("None is not allowed as password value")
-
-         if ":" in login:
-             raise ValueError('A ":" is not allowed in login (RFC 1945#section-11.1)')
-
-         return super().__new__(cls, login, password, encoding)
-
-     @classmethod
-     def decode(cls, auth_header: str, encoding: str = "latin1") -> "BasicAuth":
-         """Create a BasicAuth object from an Authorization HTTP header."""
-         try:
-             auth_type, encoded_credentials = auth_header.split(" ", 1)
-         except ValueError:
-             raise ValueError("Could not parse authorization header.")
-
-         if auth_type.lower() != "basic":
-             raise ValueError("Unknown authorization method %s" % auth_type)
-
-         try:
-             decoded = base64.b64decode(
-                 encoded_credentials.encode("ascii"), validate=True
-             ).decode(encoding)
-         except binascii.Error:
-             raise ValueError("Invalid base64 encoding.")
-
-         try:
-             # RFC 2617 HTTP Authentication
-             # https://www.ietf.org/rfc/rfc2617.txt
-             # the colon must be present, but the username and password may be
-             # otherwise blank.
-             username, password = decoded.split(":", 1)
-         except ValueError:
-             raise ValueError("Invalid credentials.")
-
-         return cls(username, password, encoding=encoding)
-
-     @classmethod
-     def from_url(cls, url: URL, *, encoding: str = "latin1") -> Optional["BasicAuth"]:
-         """Create BasicAuth from url."""
-         if not isinstance(url, URL):
-             raise TypeError("url should be yarl.URL instance")
-         if url.user is None:
-             return None
-         return cls(url.user, url.password or "", encoding=encoding)
-
-     def encode(self) -> str:
-         """Encode credentials."""
-         creds = (f"{self.login}:{self.password}").encode(self.encoding)
-         return "Basic %s" % base64.b64encode(creds).decode(self.encoding)
-
-
- def strip_auth_from_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]:
-     auth = BasicAuth.from_url(url)
-     if auth is None:
-         return url, None
-     else:
-         return url.with_user(None), auth
-
-
- def netrc_from_env() -> Optional[netrc.netrc]:
-     """Load netrc from file.
-
-     Attempt to load it from the path specified by the env-var
-     NETRC or in the default location in the user's home directory.
-
-     Returns None if it couldn't be found or fails to parse.
-     """
-     netrc_env = os.environ.get("NETRC")
-
-     if netrc_env is not None:
-         netrc_path = Path(netrc_env)
-     else:
-         try:
-             home_dir = Path.home()
-         except RuntimeError as e:  # pragma: no cover
-             # if pathlib can't resolve home, it may raise a RuntimeError
-             client_logger.debug(
-                 "Could not resolve home directory when "
-                 "trying to look for .netrc file: %s",
-                 e,
-             )
-             return None
-
-         netrc_path = home_dir / ("_netrc" if IS_WINDOWS else ".netrc")
-
-     try:
-         return netrc.netrc(str(netrc_path))
-     except netrc.NetrcParseError as e:
-         client_logger.warning("Could not parse .netrc file: %s", e)
-     except OSError as e:
-         # we couldn't read the file (doesn't exist, permissions, etc.)
-         if netrc_env or netrc_path.is_file():
-             # only warn if the environment wanted us to load it,
-             # or it appears like the default file does actually exist
-             client_logger.warning("Could not read .netrc file: %s", e)
-
-     return None
-
-
- @attr.s(auto_attribs=True, frozen=True, slots=True)
- class ProxyInfo:
-     proxy: URL
-     proxy_auth: Optional[BasicAuth]
-
-
- def proxies_from_env() -> Dict[str, ProxyInfo]:
-     proxy_urls = {
-         k: URL(v)
-         for k, v in getproxies().items()
-         if k in ("http", "https", "ws", "wss")
-     }
-     netrc_obj = netrc_from_env()
-     stripped = {k: strip_auth_from_url(v) for k, v in proxy_urls.items()}
-     ret = {}
-     for proto, val in stripped.items():
-         proxy, auth = val
-         if proxy.scheme in ("https", "wss"):
-             client_logger.warning(
-                 "%s proxies %s are not supported, ignoring", proxy.scheme.upper(), proxy
-             )
-             continue
-         if netrc_obj and auth is None:
-             auth_from_netrc = None
-             if proxy.host is not None:
-                 auth_from_netrc = netrc_obj.authenticators(proxy.host)
-             if auth_from_netrc is not None:
-                 # auth_from_netrc is a (`user`, `account`, `password`) tuple,
-                 # `user` and `account` both can be username,
-                 # if `user` is None, use `account`
-                 *logins, password = auth_from_netrc
-                 login = logins[0] if logins[0] else logins[-1]
-                 auth = BasicAuth(cast(str, login), cast(str, password))
-         ret[proto] = ProxyInfo(proxy, auth)
-     return ret
-
-
- def current_task(
-     loop: Optional[asyncio.AbstractEventLoop] = None,
- ) -> "Optional[asyncio.Task[Any]]":
-     if sys.version_info >= (3, 7):
-         return asyncio.current_task(loop=loop)
-     else:
-         return asyncio.Task.current_task(loop=loop)
-
-
- def get_running_loop(
-     loop: Optional[asyncio.AbstractEventLoop] = None,
- ) -> asyncio.AbstractEventLoop:
-     if loop is None:
-         loop = asyncio.get_event_loop()
-     if not loop.is_running():
-         warnings.warn(
-             "The object should be created within an async function",
-             DeprecationWarning,
-             stacklevel=3,
-         )
-         if loop.get_debug():
-             internal_logger.warning(
-                 "The object should be created within an async function", stack_info=True
-             )
-     return loop
-
-
- def isasyncgenfunction(obj: Any) -> bool:
-     func = getattr(inspect, "isasyncgenfunction", None)
-     if func is not None:
-         return func(obj)  # type: ignore[no-any-return]
-     else:
-         return False
-
-
- def get_env_proxy_for_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]:
-     """Get a permitted proxy for the given URL from the env."""
-     if url.host is not None and proxy_bypass(url.host):
-         raise LookupError(f"Proxying is disallowed for `{url.host!r}`")
-
-     proxies_in_env = proxies_from_env()
-     try:
-         proxy_info = proxies_in_env[url.scheme]
-     except KeyError:
-         raise LookupError(f"No proxies found for `{url!s}` in the env")
-     else:
-         return proxy_info.proxy, proxy_info.proxy_auth
-
-
- @attr.s(auto_attribs=True, frozen=True, slots=True)
- class MimeType:
-     type: str
-     subtype: str
-     suffix: str
-     parameters: "MultiDictProxy[str]"
-
-
- @functools.lru_cache(maxsize=56)
- def parse_mimetype(mimetype: str) -> MimeType:
-     """Parses a MIME type into its components.
-
-     mimetype is a MIME type string.
-
-     Returns a MimeType object.
-
-     Example:
-
-     >>> parse_mimetype('text/html; charset=utf-8')
-     MimeType(type='text', subtype='html', suffix='',
-              parameters={'charset': 'utf-8'})
-
-     """
-     if not mimetype:
-         return MimeType(
-             type="", subtype="", suffix="", parameters=MultiDictProxy(MultiDict())
-         )
-
-     parts = mimetype.split(";")
-     params: MultiDict[str] = MultiDict()
-     for item in parts[1:]:
-         if not item:
-             continue
-         key, value = cast(
-             Tuple[str, str], item.split("=", 1) if "=" in item else (item, "")
-         )
-         params.add(key.lower().strip(), value.strip(' "'))
-
-     fulltype = parts[0].strip().lower()
-     if fulltype == "*":
-         fulltype = "*/*"
-
-     mtype, stype = (
-         cast(Tuple[str, str], fulltype.split("/", 1))
-         if "/" in fulltype
-         else (fulltype, "")
-     )
-     stype, suffix = (
-         cast(Tuple[str, str], stype.split("+", 1)) if "+" in stype else (stype, "")
-     )
-
-     return MimeType(
-         type=mtype, subtype=stype, suffix=suffix, parameters=MultiDictProxy(params)
-     )
-
-
- def guess_filename(obj: Any, default: Optional[str] = None) -> Optional[str]:
-     name = getattr(obj, "name", None)
-     if name and isinstance(name, str) and name[0] != "<" and name[-1] != ">":
-         return Path(name).name
-     return default
-
-
- not_qtext_re = re.compile(r"[^\041\043-\133\135-\176]")
- QCONTENT = {chr(i) for i in range(0x20, 0x7F)} | {"\t"}
-
-
- def quoted_string(content: str) -> str:
-     """Return 7-bit content as quoted-string.
-
-     Format content into a quoted-string as defined in RFC5322 for
-     Internet Message Format. Notice that this is not the 8-bit HTTP
-     format, but the 7-bit email format. Content must be in usascii or
-     a ValueError is raised.
-     """
-     if not (QCONTENT > set(content)):
-         raise ValueError(f"bad content for quoted-string {content!r}")
-     return not_qtext_re.sub(lambda x: "\\" + x.group(0), content)
-
-
- def content_disposition_header(
-     disptype: str, quote_fields: bool = True, _charset: str = "utf-8", **params: str
- ) -> str:
-     """Sets ``Content-Disposition`` header for MIME.
-
-     This is the MIME payload Content-Disposition header from RFC 2183
-     and RFC 7578 section 4.2, not the HTTP Content-Disposition from
-     RFC 6266.
-
-     disptype is a disposition type: inline, attachment, form-data.
-     Should be valid extension token (see RFC 2183)
-
-     quote_fields performs value quoting to 7-bit MIME headers
-     according to RFC 7578. Set quote_fields to False if the recipient
-     can take 8-bit file names and field values.
-
-     _charset specifies the charset to use when quote_fields is True.
-
-     params is a dict with disposition params.
-     """
-     if not disptype or not (TOKEN > set(disptype)):
-         raise ValueError("bad content disposition type {!r}" "".format(disptype))
-
-     value = disptype
-     if params:
-         lparams = []
-         for key, val in params.items():
-             if not key or not (TOKEN > set(key)):
-                 raise ValueError(
-                     "bad content disposition parameter" " {!r}={!r}".format(key, val)
-                 )
-             if quote_fields:
-                 if key.lower() == "filename":
-                     qval = quote(val, "", encoding=_charset)
-                     lparams.append((key, '"%s"' % qval))
-                 else:
-                     try:
-                         qval = quoted_string(val)
-                     except ValueError:
-                         qval = "".join(
-                             (_charset, "''", quote(val, "", encoding=_charset))
-                         )
-                         lparams.append((key + "*", qval))
-                     else:
-                         lparams.append((key, '"%s"' % qval))
-             else:
-                 qval = val.replace("\\", "\\\\").replace('"', '\\"')
-                 lparams.append((key, '"%s"' % qval))
-         sparams = "; ".join("=".join(pair) for pair in lparams)
-         value = "; ".join((value, sparams))
-     return value
-
-
- class _TSelf(Protocol, Generic[_T]):
-     _cache: Dict[str, _T]
-
-
- class reify(Generic[_T]):
-     """Use as a class method decorator.
-
-     It operates almost exactly like
-     the Python `@property` decorator, but it puts the result of the
-     method it decorates into the instance dict after the first call,
-     effectively replacing the function it decorates with an instance
-     variable. It is, in Python parlance, a data descriptor.
-     """
-
-     def __init__(self, wrapped: Callable[..., _T]) -> None:
-         self.wrapped = wrapped
-         self.__doc__ = wrapped.__doc__
-         self.name = wrapped.__name__
-
-     def __get__(self, inst: _TSelf[_T], owner: Optional[Type[Any]] = None) -> _T:
-         try:
-             try:
-                 return inst._cache[self.name]
-             except KeyError:
-                 val = self.wrapped(inst)
-                 inst._cache[self.name] = val
-                 return val
-         except AttributeError:
-             if inst is None:
-                 return self
-             raise
-
-     def __set__(self, inst: _TSelf[_T], value: _T) -> None:
-         raise AttributeError("reified property is read-only")
-
-
- reify_py = reify
-
- try:
-     from ._helpers import reify as reify_c
-
-     if not NO_EXTENSIONS:
-         reify = reify_c  # type: ignore[misc,assignment]
- except ImportError:
-     pass
-
- _ipv4_pattern = (
-     r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}"
-     r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$"
- )
- _ipv6_pattern = (
-     r"^(?:(?:(?:[A-F0-9]{1,4}:){6}|(?=(?:[A-F0-9]{0,4}:){0,6}"
-     r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}$)(([0-9A-F]{1,4}:){0,5}|:)"
-     r"((:[0-9A-F]{1,4}){1,5}:|:)|::(?:[A-F0-9]{1,4}:){5})"
-     r"(?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}"
-     r"(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])|(?:[A-F0-9]{1,4}:){7}"
-     r"[A-F0-9]{1,4}|(?=(?:[A-F0-9]{0,4}:){0,7}[A-F0-9]{0,4}$)"
-     r"(([0-9A-F]{1,4}:){1,7}|:)((:[0-9A-F]{1,4}){1,7}|:)|(?:[A-F0-9]{1,4}:){7}"
-     r":|:(:[A-F0-9]{1,4}){7})$"
- )
- _ipv4_regex = re.compile(_ipv4_pattern)
- _ipv6_regex = re.compile(_ipv6_pattern, flags=re.IGNORECASE)
- _ipv4_regexb = re.compile(_ipv4_pattern.encode("ascii"))
- _ipv6_regexb = re.compile(_ipv6_pattern.encode("ascii"), flags=re.IGNORECASE)
-
-
- def _is_ip_address(
-     regex: Pattern[str], regexb: Pattern[bytes], host: Optional[Union[str, bytes]]
- ) -> bool:
-     if host is None:
-         return False
-     if isinstance(host, str):
-         return bool(regex.match(host))
-     elif isinstance(host, (bytes, bytearray, memoryview)):
-         return bool(regexb.match(host))
-     else:
-         raise TypeError(f"{host} [{type(host)}] is not a str or bytes")
-
-
- is_ipv4_address = functools.partial(_is_ip_address, _ipv4_regex, _ipv4_regexb)
- is_ipv6_address = functools.partial(_is_ip_address, _ipv6_regex, _ipv6_regexb)
-
-
- def is_ip_address(host: Optional[Union[str, bytes, bytearray, memoryview]]) -> bool:
-     return is_ipv4_address(host) or is_ipv6_address(host)
-
-
- def next_whole_second() -> datetime.datetime:
-     """Return current time rounded up to the next whole second."""
-     return datetime.datetime.now(datetime.timezone.utc).replace(
-         microsecond=0
-     ) + datetime.timedelta(seconds=0)
-
-
- _cached_current_datetime: Optional[int] = None
- _cached_formatted_datetime = ""
-
-
- def rfc822_formatted_time() -> str:
-     global _cached_current_datetime
-     global _cached_formatted_datetime
-
-     now = int(time.time())
-     if now != _cached_current_datetime:
-         # Weekday and month names for HTTP date/time formatting;
-         # always English!
-         # Tuples are constants stored in codeobject!
-         _weekdayname = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")
-         _monthname = (
-             "",  # Dummy so we can use 1-based month numbers
-             "Jan",
-             "Feb",
-             "Mar",
-             "Apr",
-             "May",
-             "Jun",
-             "Jul",
-             "Aug",
-             "Sep",
-             "Oct",
-             "Nov",
-             "Dec",
-         )
-
-         year, month, day, hh, mm, ss, wd, *tail = time.gmtime(now)
-         _cached_formatted_datetime = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (
-             _weekdayname[wd],
-             day,
-             _monthname[month],
-             year,
-             hh,
-             mm,
-             ss,
-         )
-         _cached_current_datetime = now
-     return _cached_formatted_datetime
-
-
- def _weakref_handle(info: "Tuple[weakref.ref[object], str]") -> None:
-     ref, name = info
-     ob = ref()
-     if ob is not None:
-         with suppress(Exception):
-             getattr(ob, name)()
-
-
- def weakref_handle(
-     ob: object, name: str, timeout: float, loop: asyncio.AbstractEventLoop
- ) -> Optional[asyncio.TimerHandle]:
-     if timeout is not None and timeout > 0:
-         when = loop.time() + timeout
-         if timeout >= 5:
-             when = ceil(when)
-
-         return loop.call_at(when, _weakref_handle, (weakref.ref(ob), name))
-     return None
-
-
- def call_later(
-     cb: Callable[[], Any], timeout: float, loop: asyncio.AbstractEventLoop
- ) -> Optional[asyncio.TimerHandle]:
-     if timeout is not None and timeout > 0:
-         when = loop.time() + timeout
-         if timeout > 5:
-             when = ceil(when)
-         return loop.call_at(when, cb)
-     return None
-
-
- class TimeoutHandle:
-     """Timeout handle"""
-
-     def __init__(
-         self, loop: asyncio.AbstractEventLoop, timeout: Optional[float]
-     ) -> None:
-         self._timeout = timeout
-         self._loop = loop
-         self._callbacks: List[
-             Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]]
-         ] = []
-
-     def register(
-         self, callback: Callable[..., None], *args: Any, **kwargs: Any
-     ) -> None:
-         self._callbacks.append((callback, args, kwargs))
-
-     def close(self) -> None:
-         self._callbacks.clear()
-
-     def start(self) -> Optional[asyncio.Handle]:
-         timeout = self._timeout
-         if timeout is not None and timeout > 0:
-             when = self._loop.time() + timeout
-             if timeout >= 5:
-                 when = ceil(when)
-             return self._loop.call_at(when, self.__call__)
-         else:
-             return None
-
-     def timer(self) -> "BaseTimerContext":
-         if self._timeout is not None and self._timeout > 0:
-             timer = TimerContext(self._loop)
-             self.register(timer.timeout)
-             return timer
-         else:
-             return TimerNoop()
-
-     def __call__(self) -> None:
-         for cb, args, kwargs in self._callbacks:
-             with suppress(Exception):
-                 cb(*args, **kwargs)
-
-         self._callbacks.clear()
-
-
- class BaseTimerContext(ContextManager["BaseTimerContext"]):
-     pass
-
-
- class TimerNoop(BaseTimerContext):
-     def __enter__(self) -> BaseTimerContext:
-         return self
-
-     def __exit__(
-         self,
-         exc_type: Optional[Type[BaseException]],
-         exc_val: Optional[BaseException],
-         exc_tb: Optional[TracebackType],
-     ) -> None:
-         return
-
-
- class TimerContext(BaseTimerContext):
-     """Low resolution timeout context manager"""
-
-     def __init__(self, loop: asyncio.AbstractEventLoop) -> None:
-         self._loop = loop
-         self._tasks: List[asyncio.Task[Any]] = []
-         self._cancelled = False
-
-     def __enter__(self) -> BaseTimerContext:
-         task = current_task(loop=self._loop)
-
-         if task is None:
-             raise RuntimeError(
-                 "Timeout context manager should be used " "inside a task"
-             )
-
-         if self._cancelled:
-             raise asyncio.TimeoutError from None
-
-         self._tasks.append(task)
-         return self
-
-     def __exit__(
-         self,
-         exc_type: Optional[Type[BaseException]],
-         exc_val: Optional[BaseException],
-         exc_tb: Optional[TracebackType],
-     ) -> Optional[bool]:
-         if self._tasks:
-             self._tasks.pop()
-
-         if exc_type is asyncio.CancelledError and self._cancelled:
-             raise asyncio.TimeoutError from None
-         return None
-
-     def timeout(self) -> None:
-         if not self._cancelled:
-             for task in set(self._tasks):
-                 task.cancel()
-
-             self._cancelled = True
-
-
- def ceil_timeout(delay: Optional[float]) -> async_timeout.Timeout:
-     if delay is None or delay <= 0:
-         return async_timeout.timeout(None)
-
-     loop = get_running_loop()
-     now = loop.time()
-     when = now + delay
-     if delay > 5:
-         when = ceil(when)
-     return async_timeout.timeout_at(when)
-
-
- class HeadersMixin:
-
-     ATTRS = frozenset(["_content_type", "_content_dict", "_stored_content_type"])
-
-     _content_type: Optional[str] = None
-     _content_dict: Optional[Dict[str, str]] = None
-     _stored_content_type = sentinel
-
-     def _parse_content_type(self, raw: str) -> None:
-         self._stored_content_type = raw
-         if raw is None:
-             # default value according to RFC 2616
-             self._content_type = "application/octet-stream"
-             self._content_dict = {}
-         else:
-             msg = HeaderParser().parsestr("Content-Type: " + raw)
-             self._content_type = msg.get_content_type()
-             params = msg.get_params()
-             self._content_dict = dict(params[1:])  # First element is content type again
-
-     @property
-     def content_type(self) -> str:
-         """The value of content part for Content-Type HTTP header."""
-         raw = self._headers.get(hdrs.CONTENT_TYPE)  # type: ignore[attr-defined]
-         if self._stored_content_type != raw:
-             self._parse_content_type(raw)
-         return self._content_type  # type: ignore[return-value]
-
-     @property
-     def charset(self) -> Optional[str]:
-         """The value of charset part for Content-Type HTTP header."""
-         raw = self._headers.get(hdrs.CONTENT_TYPE)  # type: ignore[attr-defined]
-         if self._stored_content_type != raw:
-             self._parse_content_type(raw)
-         return self._content_dict.get("charset")  # type: ignore[union-attr]
-
-     @property
-     def content_length(self) -> Optional[int]:
-         """The value of Content-Length HTTP header."""
-         content_length = self._headers.get(  # type: ignore[attr-defined]
-             hdrs.CONTENT_LENGTH
-         )
-
-         if content_length is not None:
-             return int(content_length)
-         else:
-             return None
-
-
- def set_result(fut: "asyncio.Future[_T]", result: _T) -> None:
-     if not fut.done():
-         fut.set_result(result)
-
-
- def set_exception(fut: "asyncio.Future[_T]", exc: BaseException) -> None:
-     if not fut.done():
-         fut.set_exception(exc)
-
-
- class ChainMapProxy(Mapping[str, Any]):
-     __slots__ = ("_maps",)
-
-     def __init__(self, maps: Iterable[Mapping[str, Any]]) -> None:
-         self._maps = tuple(maps)
-
-     def __init_subclass__(cls) -> None:
-         raise TypeError(
-             "Inheritance class {} from ChainMapProxy "
-             "is forbidden".format(cls.__name__)
-         )
-
-     def __getitem__(self, key: str) -> Any:
-         for mapping in self._maps:
-             try:
-                 return mapping[key]
-             except KeyError:
-                 pass
-         raise KeyError(key)
-
-     def get(self, key: str, default: Any = None) -> Any:
-         return self[key] if key in self else default
-
-     def __len__(self) -> int:
-         # reuses stored hash values if possible
-         return len(set().union(*self._maps))  # type: ignore[arg-type]
-
-     def __iter__(self) -> Iterator[str]:
-         d: Dict[str, Any] = {}
-         for mapping in reversed(self._maps):
-             # reuses stored hash values if possible
-             d.update(mapping)
-         return iter(d)
-
-     def __contains__(self, key: object) -> bool:
-         return any(key in m for m in self._maps)
-
-     def __bool__(self) -> bool:
-         return any(self._maps)
-
-     def __repr__(self) -> str:
-         content = ", ".join(map(repr, self._maps))
-         return f"ChainMapProxy({content})"
-
-
- # https://tools.ietf.org/html/rfc7232#section-2.3
- _ETAGC = r"[!#-}\x80-\xff]+"
- _ETAGC_RE = re.compile(_ETAGC)
- _QUOTED_ETAG = rf'(W/)?"({_ETAGC})"'
- QUOTED_ETAG_RE = re.compile(_QUOTED_ETAG)
- LIST_QUOTED_ETAG_RE = re.compile(rf"({_QUOTED_ETAG})(?:\s*,\s*|$)|(.)")
-
- ETAG_ANY = "*"
-
-
- @attr.s(auto_attribs=True, frozen=True, slots=True)
- class ETag:
-     value: str
-     is_weak: bool = False
-
-
- def validate_etag_value(value: str) -> None:
-     if value != ETAG_ANY and not _ETAGC_RE.fullmatch(value):
-         raise ValueError(
-             f"Value {value!r} is not a valid etag. Maybe it contains '\"'?"
-         )
-
-
- def parse_http_date(date_str: Optional[str]) -> Optional[datetime.datetime]:
-     """Process a date string, return a datetime object"""
-     if date_str is not None:
-         timetuple = parsedate(date_str)
-         if timetuple is not None:
-             with suppress(ValueError):
-                 return datetime.datetime(*timetuple[:6], tzinfo=datetime.timezone.utc)
-     return None
 
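A short usage sketch for two of the helpers defined above (BasicAuth is also re-exported as aiohttp.BasicAuth):

auth = BasicAuth("user", "secret")
header = auth.encode()                  # 'Basic dXNlcjpzZWNyZXQ='
assert BasicAuth.decode(header) == auth  # round-trips through the header form

mt = parse_mimetype("text/html; charset=utf-8")
assert (mt.type, mt.subtype, mt.parameters["charset"]) == ("text", "html", "utf-8")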
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/annotated_types/__init__.py DELETED
@@ -1,319 +0,0 @@
- import sys
- from dataclasses import dataclass
- from datetime import timezone
- from typing import TYPE_CHECKING, Any, Callable, Iterator, Optional, TypeVar, Union
-
- if sys.version_info < (3, 8):
-     from typing_extensions import Protocol, runtime_checkable
- else:
-     from typing import Protocol, runtime_checkable
-
- if sys.version_info < (3, 9):
-     from typing_extensions import Annotated, Literal
- else:
-     from typing import Annotated, Literal
-
- if sys.version_info < (3, 10):
-     EllipsisType = type(Ellipsis)
-     KW_ONLY = {}
-     SLOTS = {}
- else:
-     from types import EllipsisType
-
-     KW_ONLY = {"kw_only": True}
-     SLOTS = {"slots": True}
-
-
- __all__ = (
-     'BaseMetadata',
-     'GroupedMetadata',
-     'Gt',
-     'Ge',
-     'Lt',
-     'Le',
-     'Interval',
-     'MultipleOf',
-     'MinLen',
-     'MaxLen',
-     'Len',
-     'Timezone',
-     'Predicate',
-     'LowerCase',
-     'UpperCase',
-     'IsDigits',
-     '__version__',
- )
-
- __version__ = '0.5.0'
-
-
- T = TypeVar('T')
-
-
- # arguments that start with __ are considered
- # positional only
- # see https://peps.python.org/pep-0484/#positional-only-arguments
-
-
- class SupportsGt(Protocol):
-     def __gt__(self: T, __other: T) -> bool:
-         ...
-
-
- class SupportsGe(Protocol):
-     def __ge__(self: T, __other: T) -> bool:
-         ...
-
-
- class SupportsLt(Protocol):
-     def __lt__(self: T, __other: T) -> bool:
-         ...
-
-
- class SupportsLe(Protocol):
-     def __le__(self: T, __other: T) -> bool:
-         ...
-
-
- class SupportsMod(Protocol):
-     def __mod__(self: T, __other: T) -> T:
-         ...
-
-
- class SupportsDiv(Protocol):
-     def __div__(self: T, __other: T) -> T:
-         ...
-
-
- class BaseMetadata:
-     """Base class for all metadata.
-
-     This exists mainly so that implementers
-     can do `isinstance(..., BaseMetadata)` while traversing field annotations.
-     """
-
-     __slots__ = ()
-
-
- @dataclass(frozen=True, **SLOTS)
- class Gt(BaseMetadata):
-     """Gt(gt=x) implies that the value must be greater than x.
-
-     It can be used with any type that supports the ``>`` operator,
-     including numbers, dates and times, strings, sets, and so on.
-     """
-
-     gt: SupportsGt
-
-
- @dataclass(frozen=True, **SLOTS)
- class Ge(BaseMetadata):
-     """Ge(ge=x) implies that the value must be greater than or equal to x.
-
-     It can be used with any type that supports the ``>=`` operator,
-     including numbers, dates and times, strings, sets, and so on.
-     """
-
-     ge: SupportsGe
-
-
- @dataclass(frozen=True, **SLOTS)
- class Lt(BaseMetadata):
-     """Lt(lt=x) implies that the value must be less than x.
-
-     It can be used with any type that supports the ``<`` operator,
-     including numbers, dates and times, strings, sets, and so on.
-     """
-
-     lt: SupportsLt
-
-
- @dataclass(frozen=True, **SLOTS)
- class Le(BaseMetadata):
-     """Le(le=x) implies that the value must be less than or equal to x.
-
-     It can be used with any type that supports the ``<=`` operator,
-     including numbers, dates and times, strings, sets, and so on.
-     """
-
-     le: SupportsLe
-
-
- @runtime_checkable
- class GroupedMetadata(Protocol):
-     """A grouping of multiple BaseMetadata objects.
-
-     `GroupedMetadata` on its own is not metadata and has no meaning.
-     All of the constraints and metadata should be fully expressible
-     in terms of the `BaseMetadata`'s returned by `GroupedMetadata.__iter__()`.
-
-     Concrete implementations should override `GroupedMetadata.__iter__()`
-     to add their own metadata.
-     For example:
-
-     >>> @dataclass
-     >>> class Field(GroupedMetadata):
-     >>>     gt: float | None = None
-     >>>     description: str | None = None
-     ...
-     >>>     def __iter__(self) -> Iterable[BaseMetadata]:
-     >>>         if self.gt is not None:
-     >>>             yield Gt(self.gt)
-     >>>         if self.description is not None:
-     >>>             yield Description(self.gt)
-
-     Also see the implementation of `Interval` below for an example.
-
-     Parsers should recognize this and unpack it so that it can be used
-     both with and without unpacking:
-
-     - `Annotated[int, Field(...)]` (parser must unpack Field)
-     - `Annotated[int, *Field(...)]` (PEP-646)
-     """  # noqa: trailing-whitespace
-
-     @property
-     def __is_annotated_types_grouped_metadata__(self) -> Literal[True]:
-         return True
-
-     def __iter__(self) -> Iterator[BaseMetadata]:
-         ...
-
-     if not TYPE_CHECKING:
-         __slots__ = ()  # allow subclasses to use slots
-
-         def __init_subclass__(cls, *args: Any, **kwargs: Any) -> None:
-             # Basic ABC like functionality without the complexity of an ABC
-             super().__init_subclass__(*args, **kwargs)
-             if cls.__iter__ is GroupedMetadata.__iter__:
-                 raise TypeError("Can't subclass GroupedMetadata without implementing __iter__")
-
-         def __iter__(self) -> Iterator[BaseMetadata]:  # noqa: F811
-             raise NotImplementedError  # more helpful than "None has no attribute..." type errors
-
-
- @dataclass(frozen=True, **KW_ONLY, **SLOTS)
- class Interval(GroupedMetadata):
-     """Interval can express inclusive or exclusive bounds with a single object.
-
-     It accepts keyword arguments ``gt``, ``ge``, ``lt``, and/or ``le``, which
-     are interpreted the same way as the single-bound constraints.
-     """
-
-     gt: Union[SupportsGt, None] = None
-     ge: Union[SupportsGe, None] = None
-     lt: Union[SupportsLt, None] = None
-     le: Union[SupportsLe, None] = None
-
-     def __iter__(self) -> Iterator[BaseMetadata]:
-         """Unpack an Interval into zero or more single-bounds."""
-         if self.gt is not None:
-             yield Gt(self.gt)
-         if self.ge is not None:
-             yield Ge(self.ge)
-         if self.lt is not None:
-             yield Lt(self.lt)
-         if self.le is not None:
-             yield Le(self.le)
-
-
- @dataclass(frozen=True, **SLOTS)
- class MultipleOf(BaseMetadata):
-     """MultipleOf(multiple_of=x) might be interpreted in two ways:
-
-     1. Python semantics, implying ``value % multiple_of == 0``, or
-     2. JSONschema semantics, where ``int(value / multiple_of) == value / multiple_of``
-
-     We encourage users to be aware of these two common interpretations,
-     and libraries to carefully document which they implement.
-     """
-
-     multiple_of: Union[SupportsDiv, SupportsMod]
-
-
- @dataclass(frozen=True, **SLOTS)
- class MinLen(BaseMetadata):
-     """
-     MinLen() implies minimum inclusive length,
-     e.g. ``len(value) >= min_length``.
-     """
-
-     min_length: Annotated[int, Ge(0)]
-
-
- @dataclass(frozen=True, **SLOTS)
- class MaxLen(BaseMetadata):
-     """
-     MaxLen() implies maximum inclusive length,
-     e.g. ``len(value) <= max_length``.
-     """
-
-     max_length: Annotated[int, Ge(0)]
-
-
- @dataclass(frozen=True, **SLOTS)
- class Len(GroupedMetadata):
-     """
-     Len() implies that ``min_length <= len(value) <= max_length``.
-
-     Upper bound may be omitted or ``None`` to indicate no upper length bound.
-     """
-
-     min_length: Annotated[int, Ge(0)] = 0
-     max_length: Optional[Annotated[int, Ge(0)]] = None
-
-     def __iter__(self) -> Iterator[BaseMetadata]:
-         """Unpack a Len into zero or more single-bounds."""
-         if self.min_length > 0:
-             yield MinLen(self.min_length)
-         if self.max_length is not None:
-             yield MaxLen(self.max_length)
-
-
- @dataclass(frozen=True, **SLOTS)
- class Timezone(BaseMetadata):
-     """Timezone(tz=...) requires a datetime to be aware (or ``tz=None``, naive).
-
-     ``Annotated[datetime, Timezone(None)]`` must be a naive datetime.
-     ``Timezone[...]`` (the ellipsis literal) expresses that the datetime must be
-     tz-aware but any timezone is allowed.
-
-     You may also pass a specific timezone string or timezone object such as
-     ``Timezone(timezone.utc)`` or ``Timezone("Africa/Abidjan")`` to express that
-     you only allow a specific timezone, though we note that this is often
-     a symptom of poor design.
-     """
-
-     tz: Union[str, timezone, EllipsisType, None]
-
-
- @dataclass(frozen=True, **SLOTS)
- class Predicate(BaseMetadata):
-     """``Predicate(func: Callable)`` implies `func(value)` is truthy for valid values.
-
-     Users should prefer statically inspectable metadata, but if you need the full
-     power and flexibility of arbitrary runtime predicates... here it is.
-
-     We provide a few predefined predicates for common string constraints:
-     ``IsLower = Predicate(str.islower)``, ``IsUpper = Predicate(str.isupper)``, and
-     ``IsDigit = Predicate(str.isdigit)``. Users are encouraged to use methods which
-     can be given special handling, and avoid indirection like ``lambda s: s.lower()``.
-
-     Some libraries might have special logic to handle certain predicates, e.g. by
-     checking for `str.isdigit` and using its presence to both call custom logic to
-     enforce digit-only strings, and customise some generated external schema.
-
-     We do not specify what behaviour should be expected for predicates that raise
-     an exception. For example `Annotated[int, Predicate(str.isdigit)]` might silently
-     skip invalid constraints, or statically raise an error; or it might try calling it
-     and then propagate or discard the resulting exception.
-     """
-
-     func: Callable[[Any], bool]
-
-
- StrType = TypeVar("StrType", bound=str)
-
- LowerCase = Annotated[StrType, Predicate(str.islower)]
- UpperCase = Annotated[StrType, Predicate(str.isupper)]
- IsDigits = Annotated[StrType, Predicate(str.isdigit)]
- IsAscii = Annotated[StrType, Predicate(str.isascii)]
 
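A minimal usage sketch of the metadata types above; note that annotated_types only describes constraints, and enforcement is left to consuming libraries such as pydantic:

from typing import Annotated  # Python 3.9+; typing_extensions on older versions

Age = Annotated[int, Gt(0)]
Score = Annotated[float, Interval(ge=0, le=1)]

# GroupedMetadata such as Interval unpacks into its single-bound parts:
assert list(Interval(ge=0, le=1)) == [Ge(0), Le(1)]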
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/variableScalar.py DELETED
@@ -1,112 +0,0 @@
- from fontTools.varLib.models import VariationModel, normalizeValue, piecewiseLinearMap
-
-
- def Location(loc):
-     return tuple(sorted(loc.items()))
-
-
- class VariableScalar:
-     """A scalar with different values at different points in the designspace."""
-
-     def __init__(self, location_value={}):
-         self.values = {}
-         self.axes = {}
-         for location, value in location_value.items():
-             self.add_value(location, value)
-
-     def __repr__(self):
-         items = []
-         for location, value in self.values.items():
-             loc = ",".join(["%s=%i" % (ax, loc) for ax, loc in location])
-             items.append("%s:%i" % (loc, value))
-         return "(" + (" ".join(items)) + ")"
-
-     @property
-     def does_vary(self):
-         values = list(self.values.values())
-         return any(v != values[0] for v in values[1:])
-
-     @property
-     def axes_dict(self):
-         if not self.axes:
-             raise ValueError(
-                 ".axes must be defined on variable scalar before interpolating"
-             )
-         return {ax.axisTag: ax for ax in self.axes}
-
-     def _normalized_location(self, location):
-         location = self.fix_location(location)
-         normalized_location = {}
-         for axtag in location.keys():
-             if axtag not in self.axes_dict:
-                 raise ValueError("Unknown axis %s in %s" % (axtag, location))
-             axis = self.axes_dict[axtag]
-             normalized_location[axtag] = normalizeValue(
-                 location[axtag], (axis.minValue, axis.defaultValue, axis.maxValue)
-             )
-
-         return Location(normalized_location)
-
-     def fix_location(self, location):
-         location = dict(location)
-         for tag, axis in self.axes_dict.items():
-             if tag not in location:
-                 location[tag] = axis.defaultValue
-         return location
-
-     def add_value(self, location, value):
-         if self.axes:
-             location = self.fix_location(location)
-
-         self.values[Location(location)] = value
-
-     def fix_all_locations(self):
-         self.values = {
-             Location(self.fix_location(l)): v for l, v in self.values.items()
-         }
-
-     @property
-     def default(self):
-         self.fix_all_locations()
-         key = Location({ax.axisTag: ax.defaultValue for ax in self.axes})
-         if key not in self.values:
-             raise ValueError("Default value could not be found")
-         # I *guess* we could interpolate one, but I don't know how.
-         return self.values[key]
-
-     def value_at_location(self, location, model_cache=None, avar=None):
-         loc = location
-         if loc in self.values.keys():
-             return self.values[loc]
-         values = list(self.values.values())
-         return self.model(model_cache, avar).interpolateFromMasters(loc, values)
-
-     def model(self, model_cache=None, avar=None):
-         if model_cache is not None:
-             key = tuple(self.values.keys())
-             if key in model_cache:
-                 return model_cache[key]
-         locations = [dict(self._normalized_location(k)) for k in self.values.keys()]
-         if avar is not None:
-             mapping = avar.segments
-             locations = [
-                 {
-                     k: piecewiseLinearMap(v, mapping[k]) if k in mapping else v
-                     for k, v in location.items()
-                 }
-                 for location in locations
-             ]
-         m = VariationModel(locations)
-         if model_cache is not None:
-             model_cache[key] = m
-         return m
-
-     def get_deltas_and_supports(self, model_cache=None, avar=None):
-         values = list(self.values.values())
-         return self.model(model_cache, avar).getDeltasAndSupports(values)
-
-     def add_to_variation_store(self, store_builder, model_cache=None, avar=None):
-         deltas, supports = self.get_deltas_and_supports(model_cache, avar)
-         store_builder.setSupports(supports)
-         index = store_builder.storeDeltas(deltas)
-         return int(self.default), index
 
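A small sketch of the class above. Interpolation requires axis objects (with axisTag, minValue, defaultValue, maxValue, e.g. from a designspace) assigned to .axes first, but add_value() and does_vary work without them:

vs = VariableScalar()
vs.add_value({"wght": 400}, 10)
vs.add_value({"wght": 700}, 20)
assert vs.does_vary  # the values differ across the designspace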
spaces/DaFujaTyping/hf-Chat-ui/src/lib/stores/errors.ts DELETED
@@ -1,7 +0,0 @@
- import { writable } from "svelte/store";
-
- export const ERROR_MESSAGES = {
- 	default: "Oops, something went wrong.",
- };
-
- export const error = writable<string | null>(null);
spaces/DaleChen/AutoGPT/tests.py DELETED
@@ -1,21 +0,0 @@
- import unittest
-
- import coverage
-
- if __name__ == "__main__":
-     # Start coverage collection
-     cov = coverage.Coverage()
-     cov.start()
-
-     # Load all tests from the 'autogpt/tests' package
-     suite = unittest.defaultTestLoader.discover("./tests")
-
-     # Run the tests
-     unittest.TextTestRunner().run(suite)
-
-     # Stop coverage collection
-     cov.stop()
-     cov.save()
-
-     # Report the coverage
-     cov.report(show_missing=True)
spaces/Dao3/chatwithdocs/__init__.py DELETED
File without changes
spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/utils/height.py DELETED
@@ -1,131 +0,0 @@
- """
- @date: 2021/6/30
- @description:
- """
- import numpy as np
- from typing import List
-
- from utils.boundary import *
- from scipy.optimize import least_squares
- from functools import partial
-
-
- def lsq_fit(ceil_norm, floor_norm):
-     """
-     Least Squares
-     :param ceil_norm:
-     :param floor_norm:
-     :return:
-     """
-
-     def error_fun(ratio, ceil_norm, floor_norm):
-         error = np.abs(ratio * ceil_norm - floor_norm)
-         return error
-
-     init_ratio = np.mean(floor_norm / ceil_norm, axis=-1)
-     error_func = partial(error_fun, ceil_norm=ceil_norm, floor_norm=floor_norm)
-     ret = least_squares(error_func, init_ratio, verbose=0)
-     ratio = ret.x[0]
-     return ratio
-
-
- def mean_percentile_fit(ceil_norm, floor_norm, p1=25, p2=75):
-     """
-     :param ceil_norm:
-     :param floor_norm:
-     :param p1:
-     :param p2:
-     :return:
-     """
-     ratio = floor_norm / ceil_norm
-     r_min = np.percentile(ratio, p1)
-     r_max = np.percentile(ratio, p2)
-     return ratio[(r_min <= ratio) & (ratio <= r_max)].mean()
-
-
- def calc_ceil_ratio(boundaries: List[np.array], mode='lsq'):
-     """
-     :param boundaries: [ [[cu1, cv1], [cu2, cv2], ...], [[fu1, fv1], [fu2, fv2], ...] ]
-     :param mode: 'lsq' or 'mean'
-     :return:
-     """
-     assert len(boundaries[0].shape) < 4 and len(boundaries[1].shape) < 4, 'error shape'
-     if not is_normal_layout(boundaries):
-         return 0
-
-     ceil_boundary = boundaries[0]
-     floor_boundary = boundaries[1]
-     assert ceil_boundary.shape[-2] == floor_boundary.shape[-2], "boundary need same length"
-
-     ceil_xyz = uv2xyz(ceil_boundary, -1)
-     floor_xyz = uv2xyz(floor_boundary, 1)
-
-     ceil_xz = ceil_xyz[..., ::2]
-     floor_xz = floor_xyz[..., ::2]
-
-     ceil_norm = np.linalg.norm(ceil_xz, axis=-1)
-     floor_norm = np.linalg.norm(floor_xz, axis=-1)
-
-     if mode == "lsq":
-         if len(ceil_norm.shape) == 2:
-             ratio = np.array([lsq_fit(ceil_norm[i], floor_norm[i]) for i in range(ceil_norm.shape[0])])
-         else:
-             ratio = lsq_fit(ceil_norm, floor_norm)
-     else:
-         if len(ceil_norm.shape) == 2:
-             ratio = np.array([mean_percentile_fit(ceil_norm[i], floor_norm[i]) for i in range(ceil_norm.shape[0])])
-         else:
-             ratio = mean_percentile_fit(ceil_norm, floor_norm)
-
-     return ratio
-
-
- def calc_ceil_height(boundaries: List[np.array], camera_height=1.6, mode='lsq') -> float:
-     """
-     :param boundaries: [ [[cu1, cv1], [cu2, cv2], ...], [[fu1, fv1], [fu2, fv2], ...] ]
-     :param camera_height:
-     :param mode:
-     :return:
-     """
-     ratio = calc_ceil_ratio(boundaries, mode)
-     ceil_height = camera_height * ratio
-     return ceil_height
-
-
- def calc_room_height(boundaries: List[np.array], camera_height=1.6, mode='lsq') -> float:
-     """
-     :param boundaries: also can corners,format: [ [[cu1, cv1], [cu2, cv2], ...], [[fu1, fv1], [fu2, fv2], ...] ],
-     0 denotes ceil, 1 denotes floor
-     :param camera_height: actual camera height determines the scale
-     :param mode: fitting method lsq or mean
-     :return:
-     """
-     ceil_height = calc_ceil_height(boundaries, camera_height, mode)
-     room_height = camera_height + ceil_height
-     return room_height
-
-
- def height2ratio(height, camera_height=1.6):
-     ceil_height = height - camera_height
-     ratio = ceil_height / camera_height
-     return ratio
-
-
- def ratio2height(ratio, camera_height=1.6):
-     ceil_height = camera_height * ratio
-     room_height = camera_height + ceil_height
-     return room_height
-
-
- if __name__ == '__main__':
-     from dataset.mp3d_dataset import MP3DDataset
-
-     dataset = MP3DDataset(root_dir="../src/dataset/mp3d", mode="train")
-     for data in dataset:
-         ceil_corners = data['corners'][::2]
-         floor_corners = data['corners'][1::2]
-         # ceil_boundary = corners2boundary(ceil_corners, length=1024)
-         # floor_boundary = corners2boundary(floor_corners, length=1024)
-         room_height1 = calc_room_height([ceil_corners, floor_corners], camera_height=1.6, mode='mean')
-         room_height2 = calc_room_height([ceil_corners, floor_corners], camera_height=1.6, mode='lsq')
-         print(room_height1, room_height2, data['cameraCeilingHeight'] + 1.6)
 
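A toy check of the two ratio estimators above: if the floor boundary norms are exactly 1.4 times the ceiling boundary norms, both fits recover the ratio of 1.4 (the input arrays are synthetic, for illustration only):

import numpy as np

ceil_norm = np.linspace(1.0, 2.0, 50)
floor_norm = 1.4 * ceil_norm
assert np.isclose(lsq_fit(ceil_norm, floor_norm), 1.4)
assert np.isclose(mean_percentile_fit(ceil_norm, floor_norm), 1.4)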
spaces/Datasculptor/DescriptionGPT/tools/remove_lvis_rare.py DELETED
@@ -1,20 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- import argparse
- import json
-
- if __name__ == '__main__':
-     parser = argparse.ArgumentParser()
-     parser.add_argument('--ann', default='datasets/lvis/lvis_v1_train.json')
-     args = parser.parse_args()
-
-     print('Loading', args.ann)
-     data = json.load(open(args.ann, 'r'))
-     catid2freq = {x['id']: x['frequency'] for x in data['categories']}
-     print('ori #anns', len(data['annotations']))
-     exclude = ['r']
-     data['annotations'] = [x for x in data['annotations'] \
-         if catid2freq[x['category_id']] not in exclude]
-     print('filtered #anns', len(data['annotations']))
-     out_path = args.ann[:-5] + '_norare.json'
-     print('Saving to', out_path)
-     json.dump(data, open(out_path, 'w'))
spaces/EllaTsoi/text_generator/app.py DELETED
@@ -1,10 +0,0 @@
- import gradio as gr
- from gradio.mix import Parallel
-
- title = "My First Text Generator"
- description = "input text and submit."
-
- model1 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B")
- model3 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
-
- Parallel(model1, model3, title=title, description=description).launch()