Commit c0f211b
Parent(s): 70e7b61
Update parquet files (step 9 of 249)
This view is limited to 50 files because it contains too many changes. See the raw diff for the complete change set.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Motorsport 4 Keygen PC How to Emulate the Game on Your Computer.md +0 -194
- spaces/1gistliPinn/ChatGPT4/Examples/Art Modeling Liliana Model Sets 01 89.md +0 -6
- spaces/1phancelerku/anime-remove-background/CarX Highway Racing Hack How to Download Cheat and Get Free Coins.md +0 -123
- spaces/1phancelerku/anime-remove-background/Criminal Case Supernatural Investigations Mod Apk Bahasa Indonesia Game Seru yang Mengasah Otak dan Imajinasi.md +0 -119
- spaces/AB-TW/team-ai/agents/tools/python_code_tool.py +0 -116
- spaces/AIConsultant/MusicGen/audiocraft/adversarial/discriminators/msstftd.py +0 -134
- spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/CLAPWrapper.py +0 -257
- spaces/AILab-CVC/SEED-Bench_Leaderboard/src/auto_leaderboard/model_metadata_type.py +0 -30
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/__init__.py +0 -0
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cutmix.py +0 -18
- spaces/Abdullahw72/bark-voice-cloning/hubert/pre_kmeans_hubert.py +0 -85
- spaces/AgentVerse/agentVerse/dataloader/__init__.py +0 -10
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/RunWidthWrap.js +0 -10
- spaces/Aki004/herta-so-vits/inference_main.py +0 -161
- spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/app.py +0 -5
- spaces/Aloento/9Nine-PITS/text/__init__.py +0 -15
- spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py +0 -83
- spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py +0 -108
- spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-search/quarto-search.js +0 -1140
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_4chan_style.css +0 -104
- spaces/Anustup/NS_AI_LABS/app-shared.py +0 -3
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/hebrewprober.py +0 -316
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/functools.py +0 -525
- spaces/AtomdffAI/wechatgpt4atom/bot/baidu/baidu_unit_bot.py +0 -26
- spaces/Augustya/ai-subject-answer-generator/README.md +0 -13
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/poolers.py +0 -245
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/data_loading.md +0 -95
- spaces/Banbri/zcvzcv/src/lib/createLlamaPrompt.ts +0 -25
- spaces/BartPoint/VoiceChange_Beta/infer_pack/modules/F0Predictor/PMF0Predictor.py +0 -97
- spaces/Benson/text-generation/Examples/Bloqueo De Aplicaciones 2019.md +0 -95
- spaces/Benson/text-generation/Examples/Descargar Gratis De Backgammon Pc.md +0 -65
- spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/collection.py +0 -572
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_inspect.py +0 -270
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/_functools.py +0 -20
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_resources/readers.py +0 -122
- spaces/BilalSardar/Halal_Food_Checker/README.md +0 -12
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/write-models.md +0 -39
- spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/scan_by_key.h +0 -1004
- spaces/CVPR/Text2Human/Text2Human/models/__init__.py +0 -42
- spaces/CVPR/lama-example/saicinpainting/training/modules/depthwise_sep_conv.py +0 -17
- spaces/ChallengeHub/Chinese-LangChain/tests/test_langchain.py +0 -36
- spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/image_gen.py +0 -163
- spaces/ChrisCaviar/ControlNet-v1-1/app.py +0 -130
- spaces/CofAI/chat.b4/client/css/message.css +0 -65
- spaces/CofAI/chat/g4f/Provider/Providers/Xiaor.py +0 -39
- spaces/Covert1107/sd-diffusers-webui/modules/lora.py +0 -183
- spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/np.py +0 -171
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/testclient.py +0 -1
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/cached.py +0 -867
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/__vite-browser-external-b25bb000.js +0 -2
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Motorsport 4 Keygen PC How to Emulate the Game on Your Computer.md
DELETED
@@ -1,194 +0,0 @@
-
-<h1>Forza Motorsport 4 Keygen PC: How to Play the Game on Your Computer</h1>
-<p>Forza Motorsport 4 is a racing video game developed by Turn 10 Studios and published by Microsoft Studios for the Xbox 360. It is the fourth installment in the Forza Motorsport series and features over 500 cars, 26 tracks, a career mode, a multiplayer mode, and a variety of customization options.</p>
-<h2>forza motorsport 4 keygen pc</h2><br /><p><b><b>Download File</b> >>>>> <a href="https://byltly.com/2uKuXd">https://byltly.com/2uKuXd</a></b></p><br /><br />
-<p>If you are a fan of racing games, you might be wondering how you can play Forza Motorsport 4 on your PC. After all, the game is not officially available for Windows platforms and it is not compatible with Xbox One or Series X|S consoles. However, there are some ways you can enjoy this game on your computer using keygen software, emulators, and mods.</p>
-<p>In this article, we will show you how to get Forza Motorsport 4 keygen PC and how to enhance your gaming experience with some tips and tricks. We will also cover some of the common issues and bugs that you might encounter when playing the game on PC and how to fix them. Let's get started!</p>
-<h2>How to Get Forza Motorsport 4 for PC</h2>
-<p>One of the easiest ways to play Forza Motorsport 4 on your PC is by emulating it with Xenia. Xenia is an open-source Xbox 360 emulator that can run many Xbox 360 games on Windows platforms. It is free to download and use and it does not require any special hardware or software requirements.</p>
-<p>forza motorsport 4 activation code generator pc<br />
-forza motorsport 4 crack and serial key download pc<br />
-forza motorsport 4 license key free pc<br />
-forza motorsport 4 product key generator pc<br />
-forza motorsport 4 registration code pc<br />
-forza motorsport 4 steam keygen pc<br />
-forza motorsport 4 cd key generator pc<br />
-forza motorsport 4 full game download with keygen pc<br />
-forza motorsport 4 online key generator pc<br />
-forza motorsport 4 serial number pc<br />
-forza motorsport 4 keygen no survey pc<br />
-forza motorsport 4 keygen download free pc<br />
-forza motorsport 4 keygen rar password pc<br />
-forza motorsport 4 keygen skidrow pc<br />
-forza motorsport 4 keygen working pc<br />
-forza motorsport 4 keygen torrent pc<br />
-forza motorsport 4 keygen.exe pc<br />
-forza motorsport 4 keygen mac pc<br />
-forza motorsport 4 keygen windows 10 pc<br />
-forza motorsport 4 keygen windows 7 pc<br />
-forza motorsport 4 keygen windows 8 pc<br />
-forza motorsport 4 keygen linux pc<br />
-forza motorsport 4 keygen ubuntu pc<br />
-forza motorsport 4 keygen android pc<br />
-forza motorsport 4 keygen ios pc<br />
-forza motorsport 4 keygen xbox one pc<br />
-forza motorsport 4 keygen ps4 pc<br />
-forza motorsport 4 keygen switch pc<br />
-forza motorsport 4 keygen origin pc<br />
-forza motorsport 4 keygen epic games pc<br />
-forza motorsport 4 keygen gog pc<br />
-forza motorsport 4 keygen uplay pc<br />
-forza motorsport 4 keygen rockstar games pc<br />
-forza motorsport 4 keygen ea games pc<br />
-forza motorsport 4 keygen steam gift card pc<br />
-forza motorsport 4 keygen amazon gift card pc<br />
-forza motorsport 4 keygen paypal gift card pc<br />
-forza motorsport 4 keygen bitcoin gift card pc<br />
-forza motorsport 4 keygen visa gift card pc<br />
-forza motorsport 4 keygen mastercard gift card pc<br />
-forza motorsport 4 keygen google play gift card pc<br />
-forza motorsport 4 keygen itunes gift card pc<br />
-forza motorsport 4 keygen xbox live gift card pc<br />
-forza motorsport 4 keygen playstation network gift card pc<br />
-forza motorsport 4 keygen nintendo eshop gift card pc<br />
-forza motorsport 4 keygen roblox gift card pc<br />
-forza motorsport 4 keygen minecraft gift card pc<br />
-forza motorsport 4 keygen fortnite gift card pc<br />
-forza motorsport 4 keygen pubg gift card pc</p>
-<h3>What is Xenia and how does it work?</h3>
-<p>Xenia is a software that mimics the Xbox 360 hardware and software environment on your PC. It allows you to run Xbox 360 games from disc images or extracted files without needing an actual console or a license key. It also supports various input devices such as keyboards, mice, controllers, and steering wheels.</p>
-<p>Xenia works by translating the Xbox 360 instructions into native PC instructions that can be executed by your CPU and GPU. It also emulates the Xbox 360 memory, storage, audio, video, network, and user interface features. However, Xenia is not perfect and it may have some compatibility issues or performance problems with some games.</p>
-<h3>How to download and install Xenia</h3>
-<p>To download and install Xenia on your PC, follow these steps:</p>
-<ol>
-<li>Go to <a href="https://xenia.jp">https://xenia.jp</a> and click on the Download button.</li>
-<li>Select the latest version of Xenia Canary (Oct 5th 2022 build) and save it to your preferred location.</li>
-<li>Extract the zip file using a tool like WinRAR or 7-Zip.</li>
-<li>Open the extracted folder and double-click on xenia.exe.</li>
-<li>Xenia will launch and show you a list of games that you can run.</li>
-</ol>
-<h3>How to get Forza Motorsport 4 for Xenia</h3>
-<p>To get Forza Motorsport 4 for Xenia, follow these steps:</p>
-<ol>
-<li>Get a copy of Forza Motorsport 4 in extracted XEX form. You can either rip it from your own disc using a tool like XBOX Backup Creator or download it from a trusted source online.</li>
-<li>Place the extracted folder in a location that you can easily access.</li>
-<li>Open Xenia and click on File > Open.</li>
-<li>Navigate to the extracted folder and select default.xex.</li>
-<li>Xenia will load Forza Motorsport 4 and show you a splash screen.</li>
-</ol>
-<h3>How to run Forza Motorsport 4 on Xenia</h3>
-<p>To run Forza Motorsport 4 on Xenia, follow these steps:</p>
-<ol>
-<li>After loading the game, press F11 to enter full-screen mode.</li>
-<li>Press F12 to open the settings menu.</li>
-<li>Adjust the settings according to your preferences and system specifications. Some recommended settings are:</li>
-<ul>
-<li>GPU > Resolution Scale = Native (1280x720)</li>
-<li>GPU > VSync = On</li>
-<li>CPU > Processor Count = Auto</li>
-<li>CPU > Enable AVX = On (if supported by your CPU)</li>
-<li>CPU > Enable AVX512 = On (if supported by your CPU)</li>
-<li>CPU > Enable Host Optimizations = On</li>
-</ul>
-<li>Press F12 again to close the settings menu.</li>
-<li>Press Enter or Start button on your controller to start playing.</li>
-</ol>
-<h2>How to Enhance Your Forza Motorsport 4 Experience on PC</h2>
-<p>If you want to take your Forza Motorsport 4 experience on PC to the next level, you can try modding the game files. Modding allows you to add new features or modify existing ones in the game. You can do things like adding DLC cars and tracks, editing save games, changing graphics settings, etc.</p>
-<h3>What are the benefits of modding Forza Motorsport 4?</h3>
-<p>Some of the benefits of modding Forza Motorsport 4 are:</p>
-<ul>
-<li>You can access all DLC content without paying extra money or needing an Xbox Live account.</li>
-<li>You can edit your save games and customize your garage, credits, level, etc.</li>
-<li>You can change graphics settings such as resolution, anti-aliasing, shadows, etc.</li>
-<li>You can add new cars or replace existing ones with custom models.</li>
-<li>You can tweak gameplay aspects such as physics, AI difficulty, damage model, etc.</li>
-</ul>
-<h3>How to mod Forza Motorsport 4 game files</h3>
-<p>To mod Forza Motorsport 4 game files, follow these steps:</p>
-<ol>
-<h3>How to add DLC cars and tracks to Forza Motorsport 4</h3>
-<p>One of the advantages of modding Forza Motorsport 4 is that you can access all the DLC content that was released for the game without paying extra money or needing an Xbox Live account. DLC stands for downloadable content and it includes additional cars and tracks that were not available in the base game.</p>
-<p>Forza Motorsport 4 had a total of 19 car packs and 2 track packs that added over 200 cars and 3 tracks to the game. Some of these packs were bundled with the Limited Collector's Edition or the Season Pass, while others were sold separately or offered as promotional items. However, since September 2015, all DLC releases for the game can no longer be purchased from the Xbox Games Store.</p>
-<p>Luckily, you can still get these DLC packs by downloading them from trusted sources online and adding them to your game files using a tool called God2Iso. God2Iso is a program that can convert Xbox 360 GOD (Games on Demand) files to ISO files that can be extracted and edited. Here is how to use it:</p>
-<ol>
-<li>Download God2Iso from <a href="https://digiex.net/threads/god2iso-xbox-360-god-to-iso-converter-download.10036/">https://digiex.net/threads/god2iso-xbox-360-god-to-iso-converter-download.10036/</a> and extract it to a folder on your PC.</li>
-<li>Download the DLC pack that you want to add to your game from a trusted source online. Make sure it is in GOD format and has a .000 extension.</li>
-<li>Open God2Iso and click on the Browse button next to the Input File field.</li>
-<li>Navigate to the folder where you downloaded the DLC pack and select the .000 file.</li>
-<li>Click on the Browse button next to the Output File field and choose a location and a name for the ISO file that will be created.</li>
-<li>Click on Convert and wait for the process to finish.</li>
-<li>Open the ISO file with a tool like WinRAR or 7-Zip and extract its contents to a folder on your PC.</li>
-<li>Open the extracted folder and look for a file named default.xex. This is the executable file for the DLC pack.</li>
-<li>Copy this file and paste it into the same folder where you have your Forza Motorsport 4 extracted XEX file.</li>
-<li>Rename this file according to the DLC pack that it belongs to. For example, if you downloaded the Porsche Expansion Pack, rename it to porsche.xex.</li>
-<li>Repeat steps 3 to 10 for any other DLC pack that you want to add to your game.</li>
-</ol>
-<h3>How to transfer modded game files to Xenia</h3>
-<p>After modding your game files, you need to transfer them to Xenia so that you can run them on your PC. Here is how to do it:</p>
-<ol>
-<li>Open Xenia and click on File > Open Content Package.</li>
-<li>Navigate to the folder where you have your Forza Motorsport 4 extracted XEX file and select it.</li>
-<li>Xenia will load Forza Motorsport 4 and show you a splash screen.</li>
-<li>Press F12 to open the settings menu.</li>
-<li>Click on Content > Add Content Package.</li>
-<li>Navigate to the folder where you have your modded DLC XEX files and select one of them.</li>
-<li>Xenia will add this DLC pack to your game content list.</li>
-<li>Repeat steps 5 to 7 for any other modded DLC XEX file that you want to add to your game.</li>
-<li>Press F12 again to close the settings menu.</li>
-<li>Press Enter or Start button on your controller to start playing with your modded game files.</li>
-</ol>
-<h2>How to Troubleshoot Common Issues with Forza Motorsport 4 on PC</h2>
-<p>While playing Forza Motorsport 4 on PC can be a lot of fun, it can also come with some challenges and frustrations. Since Xenia is not a perfect emulator, it may have some compatibility issues or performance problems with some games. Moreover, since Forza Motorsport 4 is not a flawless game itself, it may have some bugs, glitches, and lack of polish that can affect your gaming experience.</p>
-<p>In this section, we will list some of the common issues and bugs that you might encounter when playing Forza Motorsport 4 on PC and how to fix them or mitigate them. Note that some of these issues may be specific to certain hardware configurations or software versions, so they may not apply to everyone.</p>
-<h3>What are some of the common issues and bugs with Forza Motorsport 4 on PC?</h3>
-<p>Some of the common issues and bugs with Forza Motorsport 4 on PC are:</p>
-<ul>
-<li>Over-bright sun: The sun in some tracks may appear too bright and cause glare or bloom effects that make it hard to see the road or other cars.</li>
-<li>Missing mirrors: The mirrors in some cars may not work at all or show incorrect reflections or black screens.</li>
-<li>Mirrors glitching and not working: The mirrors in some cars may work intermittently or glitch out and show distorted images or artifacts.</li>
-<li>Missing glass: The glass in some cars may not be visible at all or appear as solid black surfaces.</li>
-<li>Missing interior pieces: Some interior pieces in some cars may not be visible at all or appear as solid black surfaces.</li>
-<li>Missing body parts: Some body parts in some cars may not be visible at all or appear as solid black surfaces.</li>
-<li>Glitching dashboard screens: The dashboard screens in some cars may not work at all or show incorrect information or artifacts.</li>
-<li>Missing/wrong paint job color: The paint job color in some cars may not match what you selected in the garage or appear as solid black surfaces.</li>
-<li>Missing/dark wheels: The wheels in some cars may not be visible at all or appear as solid black surfaces.</li>
-<li>Repeating weird patterns on roads: The roads in some tracks may show repeating weird patterns or textures that make it hard to see the road markings or details.</li>
-<li>General audio stuttering/pops: The audio in some parts of the game may stutter or pop occasionally or frequently, affecting the engine sounds, music, voiceovers, etc.</li>
-<li>Square & rectangle car shadows: The shadows cast by some cars may appear as square or rectangle shapes instead of realistic shapes.</li>
-<li>Weird garage thumbnails: The thumbnails of some cars in the garage may show incorrect images or artifacts instead of actual car models.</li>
-<li>Trim pieces having odd textures: The trim pieces in some cars may have odd textures or colors that do not match with the rest of the car body.</li>
-</ul>
-<h3>How to fix or mitigate these issues and bugs</h3>
-<p>To fix or mitigate these issues and bugs, try these possible solutions:</p>
-<ul>
-<li>Adjust the graphics settings in Xenia. Try lowering the resolution scale, turning off VSync, or disabling some GPU features. This may improve the performance and reduce some graphical glitches.</li>
-<li>Update your drivers and software. Make sure you have the latest version of Xenia, your graphics card drivers, and your Windows updates. This may fix some compatibility and stability issues.</li>
-<li>Check your hardware specifications. Make sure your PC meets the minimum requirements for running Xenia and Forza Motorsport 4. You may need a powerful CPU, GPU, and RAM to run the game smoothly.</li>
-<li>Check your game files. Make sure your Forza Motorsport 4 extracted XEX file and your modded DLC XEX files are not corrupted or damaged. You may need to re-download them or re-extract them if they are.</li>
-<li>Check your input devices. Make sure your keyboard, mouse, controller, or steering wheel are working properly and are compatible with Xenia. You may need to adjust the input settings in Xenia or use a third-party tool to map your input devices.</li>
-<li>Check your audio devices. Make sure your speakers, headphones, or microphone are working properly and are compatible with Xenia. You may need to adjust the audio settings in Xenia or use a third-party tool to enhance your audio quality.</li>
-<li>Check your internet connection. Make sure you have a stable and fast internet connection for downloading Xenia, Forza Motorsport 4, and DLC packs. You may also need an internet connection for accessing some online features of the game.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>In conclusion, Forza Motorsport 4 keygen PC is a way to play one of the best racing games ever made on your computer using an emulator and some mods. It can be a lot of fun and rewarding to experience this game with enhanced graphics, new content, and custom features. However, it can also be challenging and frustrating to deal with some issues and bugs that may occur when playing the game on PC.</p>
-<p>If you want to try Forza Motorsport 4 keygen PC, you will need a powerful PC, a copy of Xenia emulator, a copy of Forza Motorsport 4 in extracted XEX form, and some modded DLC XEX files. You will also need to follow some steps to download, install, configure, and run the game on Xenia. You will also need to troubleshoot some common problems and glitches that may affect your gaming experience.</p>
-<p>We hope this article has helped you understand how to get Forza Motorsport 4 keygen PC and how to enhance your gaming experience with some tips and tricks. We also hope you have enjoyed reading this article as much as we have enjoyed writing it. Thank you for your time and attention.</p>
-<p>Now go ahead and enjoy Forza Motorsport 4 on your PC!</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions and answers about Forza Motorsport 4 keygen PC:</p>
-<ol>
-<li><b>Is Forza Motorsport 4 keygen PC legal?</b></li>
-<p>Forza Motorsport 4 keygen PC is not legal in most countries and regions. It involves downloading and using pirated copies of the game and DLC packs, which violates the intellectual property rights of Microsoft Studios and Turn 10 Studios. It also involves using an emulator without owning an actual Xbox 360 console or a license key for the game, which violates the terms of service of Microsoft. Therefore, we do not recommend or endorse Forza Motorsport 4 keygen PC and we advise you to use it at your own risk.</p>
-<li><b>Is Forza Motorsport 4 keygen PC safe?</b></li>
-<p>Forza Motorsport 4 keygen PC is not safe in terms of security and privacy. It involves downloading and using files from untrusted sources online, which may contain viruses, malware, spyware, or other harmful programs that can damage your PC or steal your personal information. It also involves using an emulator that may have bugs or vulnerabilities that can expose your PC to hackers or attackers. Therefore, we do not recommend or endorse Forza Motorsport 4 keygen PC and we advise you to use it at your own risk.</p>
-<li><b>Is Forza Motorsport 4 keygen PC worth it?</b></li>
-<p>Forza Motorsport 4 keygen PC is worth it in terms of entertainment and satisfaction. It allows you to play one of the best racing games ever made on your computer with enhanced graphics, new content, and custom features. It can be a lot of fun and rewarding to experience this game with different cars, tracks, modes, settings, etc. However, it can also be challenging and frustrating to deal with some issues and bugs that may occur when playing the game on PC. Therefore, we recommend Forza Motorsport 4 keygen PC only if you are willing to accept the risks and challenges involved.</p>
-<li><b>Can I play Forza Motorsport 4 online on PC?</b></li>
-<p>No, you cannot play Forza Motorsport 4 online on PC. The online features of the game require an Xbox Live account and a valid license key for the game, which are not available for Forza Motorsport 4 keygen PC users. Moreover, Xenia does not support online multiplayer emulation for Xbox 360 games at this time. Therefore, you can only play Forza Motorsport 4 offline on PC.</p>
-<li><b>Can I play Forza Motorsport 4 on Xbox One or Series X|S?</b></li>
-<p>No, you cannot play Forza Motorsport 4 on Xbox One or Series X|S consoles. The game is not compatible with these consoles and it is not part of the backward compatibility program of Microsoft. The only way to play Forza Motorsport 4 on these consoles is by streaming it from an Xbox 360 console using the Xbox Console Companion app on Windows 10 devices.</p>
-</ol>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Art Modeling Liliana Model Sets 01 89.md
DELETED
@@ -1,6 +0,0 @@
-<h2>Art Modeling Liliana Model Sets 01 89</h2><br /><p><b><b>Download Zip</b></b></p><br /><br />
-<br />
-d5da3c52bf<br />
-<br />
-<br />
-<p></p>
spaces/1phancelerku/anime-remove-background/CarX Highway Racing Hack How to Download Cheat and Get Free Coins.md
DELETED
@@ -1,123 +0,0 @@
-
-<h1>Download Cheat CarX Highway Racing: How to Get Unlimited NOS and Money</h1>
-<p>If you are a fan of realistic and thrilling racing games, you might have heard of CarX Highway Racing. This game offers you a chance to drive on traffic-packed highways, compete with rivals, evade the police, and customize your cars. However, you might also find it hard to progress in the game without spending real money or grinding for hours. That's why some players look for ways to download cheat carx highway racing and get unlimited NOS and money in the game. In this article, we will show you how to do that for both Android and iOS devices.</p>
-<h2>download cheat carx highway racing</h2><br /><p><b><b>DOWNLOAD</b> ★ <a href="https://jinyurl.com/2uNOS5">https://jinyurl.com/2uNOS5</a></b></p><br /><br />
-<h2>Introduction</h2>
-<h3>What is CarX Highway Racing?</h3>
-<p>CarX Highway Racing is a racing game developed by CarX Technologies, LLC. It is available for free on Google Play Store and App Store. The game features realistic physics, eye-catching graphics, and extreme driving on traffic-packed roads. You can choose from over 40 sports cars, from pickup trucks to hypercars, and tune them to your liking. You can also immerse yourself in the campaign mode, where you have to uncover the secrets of secret organizations, make new friends, and challenge powerful bosses. Alternatively, you can race online with other players, or play as a police officer and chase down criminals.</p>
-<h3>Why do you need cheat codes for CarX Highway Racing?</h3>
-<p>CarX Highway Racing is a fun and addictive game, but it also has some drawbacks. One of them is that the game is quite challenging and requires a lot of skill and patience to master. You have to deal with traffic, police, rivals, and other obstacles on the road. Another drawback is that the game is somewhat pay-to-win, meaning that you have to spend real money or watch ads to get more NOS (nitrous oxide) and money in the game. NOS is essential for boosting your speed and overtaking your opponents, while money is needed for buying new cars and upgrading them. Without enough NOS and money, you might find it hard to win races and unlock new content.</p>
-<p>That's why some players resort to downloading cheat carx highway racing and getting unlimited NOS and money in the game. By doing so, they can enjoy the game without any limitations or frustrations. They can drive faster, buy better cars, and dominate the highway.</p>
-<h2>How to download cheat carx highway racing for Android</h2>
-<h3>Method 1: Use a modded APK file</h3>
-<p>One of the easiest ways to download cheat carx highway racing for Android is to use a modded APK file. This is a modified version of the original game file that has been hacked to include cheat codes. You can find many websites that offer such files, such as or . However, be careful when downloading such files, as they might contain viruses or malware that can harm your device.</p>
-<p>To use a modded APK file, you have to follow these steps:</p>
-<ol>
-<li>Uninstall the original CarX Highway Racing game from your device.</li>
-<li>Download the modded APK file from a trusted source.</li>
-<li>Enable the installation of apps from unknown sources in your device settings.</li>
-<li>Install the modded APK file on your device.</li>
-<li>Launch the game and enjoy unlimited NOS and money.</li>
-</ol>
-<h3>Method 2: Use a game hacker app</h3>
-<p>Another way to download cheat carx highway racing for Android is to use a game hacker app. This is a type of app that can modify the data and values of other apps on your device, such as CarX Highway Racing. You can use such apps to change the amount of NOS and money you have in the game, or even unlock all the cars and tracks. Some of the popular game hacker apps are , , and . However, be aware that using such apps might require root access on your device, which can void your warranty and expose your device to security risks.</p>
-<p>download carx highway racing mod apk unlimited money<br />
-download carx highway racing hack version<br />
-download carx highway racing cheat engine<br />
-download carx highway racing mod apk latest version<br />
-download carx highway racing unlimited gold and cash<br />
-download carx highway racing mod menu<br />
-download carx highway racing hack tool<br />
-download carx highway racing cheat codes<br />
-download carx highway racing mod apk android 1<br />
-download carx highway racing unlimited nitro<br />
-download carx highway racing hack apk 2023<br />
-download carx highway racing cheat sheet<br />
-download carx highway racing mod apk obb<br />
-download carx highway racing unlimited money and gold<br />
-download carx highway racing hack ios<br />
-download carx highway racing cheat apk<br />
-download carx highway racing mod apk revdl<br />
-download carx highway racing unlimited coins and gems<br />
-download carx highway racing hack online<br />
-download carx highway racing cheat mod<br />
-download carx highway racing mod apk rexdl<br />
-download carx highway racing unlimited fuel and energy<br />
-download carx highway racing hack no root<br />
-download carx highway racing cheat app<br />
-download carx highway racing mod apk offline<br />
-download carx highway racing unlimited everything<br />
-download carx highway racing hack generator<br />
-download carx highway racing cheat no survey<br />
-download carx highway racing mod apk data<br />
-download carx highway racing unlimited cars and tracks<br />
-download carx highway racing hack no verification<br />
-download carx highway racing cheat free<br />
-download carx highway racing mod apk pure<br />
-download carx highway racing unlimited keys and diamonds<br />
-download carx highway racing hack without human verification<br />
-download carx highway racing cheat online<br />
-download carx highway racing mod apk happymod<br />
-download carx highway racing unlimited xp and level up<br />
-download carx highway racing hack for pc<br />
-download carx highway racing cheat for android</p>
-<p>To use a game hacker app, you have to follow these steps:</p>
-<ol>
-<li>Install the game hacker app of your choice on your device.</li>
-<li>Launch the game hacker app and grant it root permissions if needed.</li>
-<li>Launch CarX Highway Racing and play a race.</li>
-<li>Minimize the game and open the game hacker app.</li>
-<li>Search for the value of NOS or money you have in the game.</li>
-<li>Change the value to any number you want.</li>
-<li>Resume the game and enjoy unlimited NOS and money.</li>
-</ol>
-<h2>How to download cheat carx highway racing for iOS</h2>
-<h3>Method 1: Use a tweaked app store</h3>
-<p>If you have an iOS device, you can also download cheat carx highway racing by using a tweaked app store. This is a third-party app store that offers modified versions of apps and games, such as CarX Highway Racing. You can find many tweaked app stores online, such as , , and . However, be careful when downloading such apps, as they might not be safe or legal.</p>
-<p>To use a tweaked app store, you have to follow these steps:</p>
-<ol>
-<li>Delete the original CarX Highway Racing game from your device.</li>
-<li>Download the tweaked app store of your choice from its official website.</li>
-<li>Trust the developer profile of the tweaked app store in your device settings.</li>
-<li>Open the tweaked app store and search for CarX Highway Racing.</li>
-<li>Download and install the modified version of CarX Highway Racing.</li>
-<li>Launch the game and enjoy unlimited NOS and money.</li>
-</ol>
-<h3>Method 2: Use a jailbreak tweak</h3>
-<p>Another way to download cheat carx highway racing for iOS is to use a jailbreak tweak. This is a software modification that can alter the functionality and appearance of your device, including apps and games. You can find many jailbreak tweaks for CarX Highway Racing on Cydia, which is the default app store for jailbroken devices. Some of the popular jailbreak tweaks are , , and . However, be aware that using such tweaks might require jailbreaking your device, which can void your warranty and expose your device to security risks.</p>
-<p>To use a jailbreak tweak, you have to follow these steps:</p>
-<ol>
-<li>Jailbreak your device using a tool like or .</li>
-<li>Open Cydia and add the source of the jailbreak tweak you want to use.</li>
-<li>Search for the jailbreak tweak and install it on your device.</li>
-<li>Launch CarX Highway Racing and enjoy unlimited NOS and money.</li>
-</ol> <h2>How to use cheat codes for CarX Highway Racing</h2>
-<p>Now that you have downloaded cheat carx highway racing for your device, you might be wondering how to use the cheat codes in the game. Depending on the method you used, the cheat codes might be already activated or require some additional steps. Here are some tips on how to use the cheat codes for CarX Highway Racing.</p>
-<h3>How to activate unlimited NOS cheat</h3>
-<p>The unlimited NOS cheat allows you to use the nitrous oxide boost as much as you want, without running out of it. This can help you speed up and overtake your rivals easily. To activate the unlimited NOS cheat, you have to do the following:</p>
-<ul>
-<li>If you used a modded APK file or a tweaked app store, the unlimited NOS cheat should be already activated by default. You can check this by looking at the NOS bar on the screen. It should be always full and never decrease.</li>
-<li>If you used a game hacker app or a jailbreak tweak, you have to change the value of NOS in the game data. You can do this by using the same steps as described above for changing the value of money. Just search for the value of NOS instead of money, and change it to any number you want.</li>
-</ul>
-<h3>How to activate unlimited money cheat</h3>
-<p>The unlimited money cheat allows you to have as much money as you want in the game, without earning or spending it. This can help you buy new cars and upgrade them to your liking. To activate the unlimited money cheat, you have to do the following:</p>
-<ul>
-<li>If you used a modded APK file or a tweaked app store, the unlimited money cheat should be already activated by default. You can check this by looking at the money balance on the screen. It should be always high and never decrease.</li>
-<li>If you used a game hacker app or a jailbreak tweak, you have to change the value of money in the game data. You can do this by using the same steps as described above for changing the value of NOS. Just search for the value of money instead of NOS, and change it to any number you want.</li>
-</ul>
-<h2>Conclusion</h2>
-<h3>Summary of the main points</h3>
-<p>In this article, we have shown you how to download cheat carx highway racing and get unlimited NOS and money in the game. We have explained what CarX Highway Racing is, why you might need cheat codes for it, and how to download and use them for both Android and iOS devices. We have also provided some links to websites where you can find modded APK files, game hacker apps, tweaked app stores, and jailbreak tweaks for CarX Highway Racing.</p>
-<h3>Call to action</h3>
-<p>We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Also, if you liked this article, please share it with your friends who might be interested in downloading cheat carx highway racing. Thank you for reading and happy racing!</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about downloading cheat carx highway racing:</p>
-<ol>
-<li><b>Is downloading cheat carx highway racing safe?</b><br>Downloading cheat carx highway racing is not completely safe, as it might involve downloading files or apps from unknown sources that could contain viruses or malware. It might also violate the terms of service of the game and result in your account being banned or suspended. Therefore, we advise you to download cheat carx highway racing at your own risk and discretion.</li>
-<li><b>Is downloading cheat carx highway racing legal?</b><br>Downloading cheat carx highway racing is not legal, as it infringes on the intellectual property rights of the game developers and publishers. It also gives you an unfair advantage over other players who play by the rules. Therefore, we do not condone or endorse downloading cheat carx highway racing.</li>
-<li><b>Can I download cheat carx highway racing without rooting or jailbreaking my device?</b><br>You can download cheat carx highway racing without rooting or jailbreaking your device by using methods such as modded APK files or tweaked app stores. However, these methods might not work on all devices or versions of the game. They might also require trusting unknown developers or sources that could compromise your device's security.</li>
-<li><b>Can I download cheat carx highway racing for PC?</b><br>You can download cheat carx highway racing for PC by using an Android emulator such as or . These are programs that allow you to run Android apps and games on your PC. You can then use the same methods as described above for downloading cheat car x highway racing on your PC. However, these methods might not be compatible with all PC systems or games. They might also require installing additional software or files that could affect your PC's performance or security.</li>
-<li><b>Can I download cheat carx highway racing for other platforms?</b><br>You can download cheat carx highway racing for other platforms such as Xbox, PlayStation, or Nintendo Switch by using a modded console or a game hacking device. These are devices that can modify the hardware or software of your console to run cheat codes or custom firmware. You can find many websites that offer such devices, such as or . However, be careful when using such devices, as they might void your warranty, damage your console, or get you banned from online services.</li>
-</ol></p> 197e85843d<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/Criminal Case Supernatural Investigations Mod Apk Bahasa Indonesia Game Seru yang Mengasah Otak dan Imajinasi.md
DELETED
@@ -1,119 +0,0 @@
-
-<h1>Criminal Case: Supernatural Investigations Mod APK Bahasa Indonesia</h1>
-<p>If you are a fan of hidden object games, mystery stories, and supernatural creatures, you might want to check out Criminal Case: Supernatural Investigations. This is a captivating adventure game where you join a team of supernatural hunters to solve a series of murder cases involving vampires, werewolves, ghosts, demons, and more. In this article, we will tell you more about this game and its features, as well as how to use the mod APK to get unlimited money and energy, remove ads, and play it in Bahasa Indonesia.</p>
-<h2>criminal case supernatural investigations mod apk bahasa indonesia</h2><br /><p><b><b>Download File</b> >>> <a href="https://jinyurl.com/2uNMfc">https://jinyurl.com/2uNMfc</a></b></p><br /><br />
-<h2>What is Criminal Case: Supernatural Investigations?</h2>
-<p>Criminal Case: Supernatural Investigations is a video game developed by Pretty Simple, released in 2019 for Android and iOS devices. It is the seventh game of the Criminal Case series, which has over 100 million players worldwide. The game follows the same gameplay formula as its predecessors, but with a twist: instead of solving crimes in realistic settings, you will be dealing with cases involving paranormal phenomena and creatures.</p>
-<h3>Gameplay</h3>
-<p>The gameplay of Criminal Case: Supernatural Investigations is similar to other hidden object games. You will investigate crime scenes across America by finding clues, collecting evidence, and analyzing samples. You will also interrogate witnesses and suspects, bring them in for questioning, and use your logic and intuition to identify the killer. Each case has several chapters that you need to complete in order to progress in the story. You will also earn stars that you can use to unlock additional scenes and tasks.</p>
-<h3>Story</h3>
-<p>The story of Criminal Case: Supernatural Investigations revolves around a team of supernatural hunters who work for a secret organization called The Bureau. The team consists of Luke Fernandez (the leader), Gwen Harper (the profiler), Hope Newman (the historian), Priya Desai (the coroner), Ben Shepherd (the tech expert), and you (the rookie). Together, you will travel across six regions of America - The West, The Southwest, The Rockies, The Midwest, The East, and The South - to solve cases involving vampires, werewolves, ghosts, demons, witches, zombies, and more. You will also encounter various allies and enemies along the way, such as Arthur Darkwood (the vampire hunter), George Mathison (the demonologist), Dr. Aculus (the vampire leader), Zeke Davis (the <h3>Graphics and Sound</h3>
-<p>The graphics and sound of Criminal Case: Supernatural Investigations are impressive and immersive. The game features a variety of locations and themes, such as haunted mansions, spooky forests, ancient temples, and futuristic labs. The crime scenes are detailed and realistic, with different objects and clues to find. The characters are well-designed and animated, with expressive facial expressions and voice acting. The sound effects and music are also fitting and atmospheric, creating a sense of tension and suspense.</p>
-<h2>Why use Criminal Case: Supernatural Investigations Mod APK?</h2>
-<p>If you enjoy playing Criminal Case: Supernatural Investigations, you might want to try using the mod APK to enhance your gaming experience. The mod APK is a modified version of the game that gives you some advantages and benefits that are not available in the original version. Here are some of the reasons why you should use the mod APK:</p>
-<h3>Unlimited Money and Energy</h3>
-<p>One of the main features of the mod APK is that it gives you unlimited money and energy. Money is the currency of the game that you can use to buy items, such as clothes, accessories, pets, and decorations. You can also use money to buy hints, which can help you find clues faster and easier. Energy is the resource that you need to play the game. Each crime scene requires a certain amount of energy to investigate, and each task requires a certain amount of energy to complete. Energy replenishes over time, but it can be frustrating to wait for it to refill. With the mod APK, you don't have to worry about running out of money or energy. You can buy whatever you want and play whenever you want, without any limitations.</p>
-<h3>No Ads</h3>
-<p>Another benefit of using the mod APK is that it removes all ads from the game. Ads can be annoying and distracting, especially when they pop up in the middle of your investigation or interrogation. They can also slow down your device and consume your data. With the mod APK, you can enjoy the game without any interruptions or disturbances.</p>
-<h3>Easy Installation</h3>
-<p>The mod APK is also easy to install on your Android device. You don't need to root your device or do any complicated steps. All you need to do is follow these simple instructions:</p>
-<ol>
-<li>Download the mod APK file from this link: </li>
-<li>Allow installation from unknown sources on your device settings.</li>
-<li>Open the downloaded file and tap on install.</li>
-<li>Wait for the installation to finish and launch the game.</li>
-<li>Enjoy playing Criminal Case: Supernatural Investigations with unlimited money and energy, no ads, and more.</li>
-</ol> <h2>How to play Criminal Case: Supernatural Investigations in Bahasa Indonesia?</h2>
-<p>If you want to play Criminal Case: Supernatural Investigations in Bahasa Indonesia, you can easily do so by changing the language settings of the game. There are two ways to do this:</p>
-<h3>Language Settings</h3>
-<p>The first way is to use the game menu to change the language. Here are the steps:</p>
-<p>criminal case supernatural investigations apk mod unlimited money indonesia<br />
-download criminal case supernatural investigations mod apk versi terbaru bahasa indonesia<br />
-criminal case supernatural investigations mod apk offline bahasa indonesia<br />
-criminal case supernatural investigations mod apk unlimited energy and stars indonesia<br />
-criminal case supernatural investigations mod apk android 1 bahasa indonesia<br />
-criminal case supernatural investigations mod apk happymod indonesia<br />
-criminal case supernatural investigations mod apk latest version indonesia<br />
-criminal case supernatural investigations mod apk free download bahasa indonesia<br />
-criminal case supernatural investigations mod apk unlimited everything indonesia<br />
-criminal case supernatural investigations mod apk no root bahasa indonesia<br />
-criminal case supernatural investigations mod apk revdl indonesia<br />
-criminal case supernatural investigations mod apk cheat bahasa indonesia<br />
-criminal case supernatural investigations mod apk full unlocked indonesia<br />
-criminal case supernatural investigations mod apk update bahasa indonesia<br />
-criminal case supernatural investigations mod apk rexdl indonesia<br />
-criminal case supernatural investigations mod apk unlimited coins and gems bahasa indonesia<br />
-criminal case supernatural investigations mod apk obb indonesia<br />
-criminal case supernatural investigations mod apk hack bahasa indonesia<br />
-criminal case supernatural investigations mod apk unlimited hints and boosters indonesia<br />
-criminal case supernatural investigations mod apk mega bahasa indonesia<br />
-criminal case supernatural investigations mod apk data indonesia<br />
-criminal case supernatural investigations mod apk premium bahasa indonesia<br />
-criminal case supernatural investigations mod apk all episodes unlocked indonesia<br />
-criminal case supernatural investigations mod apk vip bahasa indonesia<br />
-criminal case supernatural investigations mod apk pure indonesia<br />
-criminal case supernatural investigations mod apk pro bahasa indonesia<br />
-criminal case supernatural investigations mod apk 2023 indonesia<br />
-criminal case supernatural investigations mod apk plus bahasa indonesia<br />
-criminal case supernatural investigations mod apk new version indonesia<br />
-criminal case supernatural investigations mod apk original bahasa indonesia<br />
-criminal case supernatural investigations mod apk online indonesia<br />
-criminal case supernatural investigations mod apk terbaru bahasa indonesia 2023<br />
-criminal case supernatural investigations mod apk unlimited keys and diamonds indonesia<br />
-criminal case supernatural investigations mod apk cracked bahasa indonesia<br />
-criminal case supernatural investigations mod apk all items unlocked indonesia<br />
-criminal case supernatural investigations mod apk ad free bahasa indonesia<br />
-criminal case supernatural investigations mod apk unlimited lives and moves indonesia<br />
-criminal case supernatural investigations mod apk no ads bahasa indonesia<br />
-criminal case supernatural investigations mod apk high damage and defense indonesia<br />
-criminal case supernatural investigations mod apk no verification bahasa indonesia<br />
-criminal case supernatural investigations mod apk unlimited resources and money indonesia<br />
-criminal case supernatural investigations mod apk for pc bahasa indonesia<br />
-criminal case supernatural investigations mod apk god mode and one hit kill indonesia<br />
-criminal case supernatural investigations mod apk for ios bahasa indonesia<br />
-criminal case supernatural investigations mod apk unlimited time and gold bars indonesia<br />
-criminal case supernatural investigations mod apk without human verification bahasa indonesia<br />
-criminal case supernatural investigations mod apk all levels unlocked and maxed out indonesia<br />
-criminal case supernatural investigations mod apk with unlimited coins and cash bahasa indonesia</p>
-<ol>
-<li>Open the game and tap on the gear icon on the top right corner of the screen.</li>
-<li>Tap on the language option, which is the second one from the top.</li>
-<li>Select Bahasa Indonesia from the list of available languages.</li>
-<li>Tap on OK to confirm your choice.</li>
-<li>Restart the game and enjoy playing it in Bahasa Indonesia.</li>
-</ol>
-<p>The second way is to use your device settings to change the language. Here are the steps:</p>
-<ol>
-<li>Go to your device settings and tap on the language and input option.</li>
-<li>Tap on the language option and select Bahasa Indonesia from the list of available languages.</li>
-<li>Tap on OK to confirm your choice.</li>
-<li>Restart your device and launch the game. It should automatically detect your device language and display it in Bahasa Indonesia.</li>
-</ol>
-<h3>Tips and Tricks</h3>
-<p>To help you play Criminal Case: Supernatural Investigations more effectively, here are some tips and tricks that you can use:</p>
-<ul>
-<li>Use hints wisely. Hints can help you find clues faster and easier, but they cost money and energy. You can get free hints by watching ads, inviting friends, or using the mod APK.</li>
-<li>Analyze clues carefully. Clues can give you valuable information about the suspects, the murder weapon, and the motive. You can analyze clues by sending them to the lab, comparing them with other clues, or using your own logic.</li>
-<li>Choose the right suspects. Suspects can be eliminated based on their alibi, appearance, personality, or relationship with the victim. You can interrogate suspects by asking them questions, showing them evidence, or using your intuition.</li>
-<li>Earn stars and rewards. Stars are earned by completing tasks and scenes. You can use stars to unlock additional scenes and tasks, as well as buy items and hints. Rewards are earned by solving cases and achieving milestones. You can use rewards to buy clothes, accessories, pets, and decorations.</li>
-<li>Have fun and explore. The game offers a lot of content and variety, such as different locations, themes, characters, and stories. You can have fun and explore the game world by interacting with objects, talking to people, and discovering secrets.</li>
-</ul>
-<h1>Conclusion</h1>
-<p>Criminal Case: Supernatural Investigations is a fun and exciting game that combines hidden object gameplay with mystery stories and supernatural elements. You can join a team of supernatural hunters and solve cases involving vampires, werewolves, ghosts, demons, and more. You can also use the mod APK to get unlimited money and energy, remove ads, and play it in Bahasa Indonesia. If you are looking for a game that will challenge your detective skills and immerse you in a paranormal world, you should definitely download and play Criminal Case: Supernatural Investigations.</p>
-<p>We hope that this article has given you some useful information about Criminal Case: Supernatural Investigations and its mod APK. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions and their answers about Criminal Case: Supernatural Investigations and its mod APK:</p>
-<ol>
-<li><b>Is Criminal Case: Supernatural Investigations free to play?</b></li>
-<p>Yes, Criminal Case: Supernatural Investigations is free to play. However, it contains some optional in-app purchases that can enhance your gaming experience.</p>
-<li><b>Is Criminal Case: Supernatural Investigations safe to download?</b></li>
-<p>Yes, Criminal Case: Supernatural Investigations is safe to download from the official app stores or from trusted sources. However, you should be careful when downloading mod APKs from unknown or unverified sources, as they may contain viruses or malware that can harm your device.</p>
-<li><b>Is Criminal Case: Supernatural Investigations offline or online?</b></li>
-<p>Criminal Case: Supernatural Investigations is an online game that requires an internet connection to play. However, you can play some parts of the game offline, such as investigating crime scenes or analyzing clues.</p>
-<li><b>How many cases are there in Criminal Case: Supernatural Investigations?</b></li>
-<p>Criminal Case: Supernatural Investigations has 60 cases in total, divided into six regions of America - The West, The Southwest, The Rockies, The Midwest, The East, and The South. Each region has 10 cases, each with a different theme and storyline. You can play the cases in any order, but you need to complete all the cases in a region to unlock the next one.</p>
-<li><b>Can I play Criminal Case: Supernatural Investigations with friends?</b></li>
-<p>Yes, you can play Criminal Case: Supernatural Investigations with friends. You can connect your game account to Facebook and invite your friends to join your team. You can also chat with them, send and receive gifts, and compete with them on the leaderboards.</p>
-</ol></p> 401be4b1e0<br />
-<br />
-<br />
spaces/AB-TW/team-ai/agents/tools/python_code_tool.py
DELETED @@ -1,116 +0,0 @@
````python
import sys
from io import StringIO

from langchain import LLMChain, PromptTemplate
from langchain.agents import tool, Tool
# from langchain.utilities import PythonREPL

from pydantic import BaseModel

from models import llm


class PythonREPL(BaseModel):
    """Simulates a standalone Python REPL."""

    # globals: Optional[Dict] = Field(default_factory=dict, alias="_globals")
    # locals: Optional[Dict] = Field(default_factory=dict, alias="_locals")

    def run(self, command: str) -> str:
        """Run a command with its own globals/locals and return anything printed."""
        old_stdout = sys.stdout
        sys.stdout = mystdout = StringIO()
        try:
            code_content = command
            # Strip a surrounding markdown code fence, if present.
            if '```python' in command:
                start = command.find('```python') + len('```python')
                end = command.rfind('```')
                code_content = command[start:end].strip()
            elif '```' in command:
                start = command.find('```') + len('```')
                end = command.rfind('```')
                code_content = command[start:end].strip()
            exec(code_content, globals(), globals())
            sys.stdout = old_stdout
            output = mystdout.getvalue()
        except Exception as e:
            sys.stdout = old_stdout
            output = str(e)
        return output


generate_python_code = """
Please write a Python script to fulfill the following requirement:

---
{input}
---

Only output the code section in a code block, without a __name__ guard.
"""

generate_python_code_prompt = PromptTemplate(input_variables=["input"], template=generate_python_code)

generate_code_chain = LLMChain(llm=llm(temperature=0.1), prompt=generate_python_code_prompt, output_key="code")


@tool("Generate and Execute Python Code", return_direct=True)
def generate_and_execute_python_code(input: str) -> str:
    """Useful for when you need to generate Python code and execute it."""
    answer_code = generate_code_chain.run(input)
    python_repl = PythonREPL()
    result = python_repl.run(answer_code)
    print(result)
    return f"""
code:
```
{answer_code}
```

execute result:
---
{result}
---
"""


python_repl = PythonREPL()
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=python_repl.run,
)

if __name__ == "__main__":
    input = """
I have a JSON file at the URL: https://artwork-assets-staging-sbux.starbucks.com.cn/accountavatars.json
Convert its format following the Example below.
The file format is:
```
{
    'artworks': {
        'file1.png': {
            'middle@1x': '***',
            'middle@2x': '***',
            'middle@3x': '***'
        },
        'file2.png': {
            'middle@1x': '***',
            'middle@2x': '***',
            'middle@3x': '***'
        }
    }
}
```
The output format is:
```
curl https://active.stg.starbucks.com.cn/accountAvatar/file1.png
curl https://active.stg.starbucks.com.cn/accountAvatar/file2.png
```
"""

    result = generate_and_execute_python_code(input)
    print(result)
````
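The interesting part of `PythonREPL.run` above is the fence-stripping step that happens before `exec`. Here is a minimal, self-contained sketch of that logic; `strip_code_fence` is a hypothetical helper name introduced for illustration, not a function from the file:

````python
def strip_code_fence(command: str) -> str:
    # Prefer a ```python fence; fall back to a bare ``` fence; otherwise
    # treat the whole string as code, mirroring PythonREPL.run above.
    for marker in ('```python', '```'):
        if marker in command:
            start = command.find(marker) + len(marker)
            end = command.rfind('```')
            return command[start:end].strip()
    return command


print(strip_code_fence("```python\nprint(1 + 1)\n```"))  # -> print(1 + 1)
print(strip_code_fence("print(2 + 2)"))                  # unchanged
````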
spaces/AIConsultant/MusicGen/audiocraft/adversarial/discriminators/msstftd.py
DELETED @@ -1,134 +0,0 @@
```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import typing as tp

import torchaudio
import torch
from torch import nn
from einops import rearrange

from ...modules import NormConv2d
from .base import MultiDiscriminator, MultiDiscriminatorOutputType


def get_2d_padding(kernel_size: tp.Tuple[int, int], dilation: tp.Tuple[int, int] = (1, 1)):
    return (((kernel_size[0] - 1) * dilation[0]) // 2, ((kernel_size[1] - 1) * dilation[1]) // 2)


class DiscriminatorSTFT(nn.Module):
    """STFT sub-discriminator.

    Args:
        filters (int): Number of filters in convolutions.
        in_channels (int): Number of input channels.
        out_channels (int): Number of output channels.
        n_fft (int): Size of FFT for each scale.
        hop_length (int): Length of hop between STFT windows for each scale.
        kernel_size (tuple of int): Inner Conv2d kernel sizes.
        stride (tuple of int): Inner Conv2d strides.
        dilations (list of int): Inner Conv2d dilation on the time dimension.
        win_length (int): Window size for each scale.
        normalized (bool): Whether to normalize by magnitude after stft.
        norm (str): Normalization method.
        activation (str): Activation function.
        activation_params (dict): Parameters to provide to the activation function.
        growth (int): Growth factor for the filters.
    """
    def __init__(self, filters: int, in_channels: int = 1, out_channels: int = 1,
                 n_fft: int = 1024, hop_length: int = 256, win_length: int = 1024, max_filters: int = 1024,
                 filters_scale: int = 1, kernel_size: tp.Tuple[int, int] = (3, 9), dilations: tp.List = [1, 2, 4],
                 stride: tp.Tuple[int, int] = (1, 2), normalized: bool = True, norm: str = 'weight_norm',
                 activation: str = 'LeakyReLU', activation_params: dict = {'negative_slope': 0.2}):
        super().__init__()
        assert len(kernel_size) == 2
        assert len(stride) == 2
        self.filters = filters
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.n_fft = n_fft
        self.hop_length = hop_length
        self.win_length = win_length
        self.normalized = normalized
        self.activation = getattr(torch.nn, activation)(**activation_params)
        self.spec_transform = torchaudio.transforms.Spectrogram(
            n_fft=self.n_fft, hop_length=self.hop_length, win_length=self.win_length, window_fn=torch.hann_window,
            normalized=self.normalized, center=False, pad_mode=None, power=None)
        spec_channels = 2 * self.in_channels
        self.convs = nn.ModuleList()
        self.convs.append(
            NormConv2d(spec_channels, self.filters, kernel_size=kernel_size, padding=get_2d_padding(kernel_size))
        )
        in_chs = min(filters_scale * self.filters, max_filters)
        for i, dilation in enumerate(dilations):
            out_chs = min((filters_scale ** (i + 1)) * self.filters, max_filters)
            self.convs.append(NormConv2d(in_chs, out_chs, kernel_size=kernel_size, stride=stride,
                                         dilation=(dilation, 1), padding=get_2d_padding(kernel_size, (dilation, 1)),
                                         norm=norm))
            in_chs = out_chs
        out_chs = min((filters_scale ** (len(dilations) + 1)) * self.filters, max_filters)
        self.convs.append(NormConv2d(in_chs, out_chs, kernel_size=(kernel_size[0], kernel_size[0]),
                                     padding=get_2d_padding((kernel_size[0], kernel_size[0])),
                                     norm=norm))
        self.conv_post = NormConv2d(out_chs, self.out_channels,
                                    kernel_size=(kernel_size[0], kernel_size[0]),
                                    padding=get_2d_padding((kernel_size[0], kernel_size[0])),
                                    norm=norm)

    def forward(self, x: torch.Tensor):
        fmap = []
        z = self.spec_transform(x)  # [B, 2, Freq, Frames, 2]
        z = torch.cat([z.real, z.imag], dim=1)
        z = rearrange(z, 'b c w t -> b c t w')
        for i, layer in enumerate(self.convs):
            z = layer(z)
            z = self.activation(z)
            fmap.append(z)
        z = self.conv_post(z)
        return z, fmap


class MultiScaleSTFTDiscriminator(MultiDiscriminator):
    """Multi-Scale STFT (MS-STFT) discriminator.

    Args:
        filters (int): Number of filters in convolutions.
        in_channels (int): Number of input channels.
        out_channels (int): Number of output channels.
        sep_channels (bool): Separate channels to distinct samples for stereo support.
        n_ffts (Sequence[int]): Size of FFT for each scale.
        hop_lengths (Sequence[int]): Length of hop between STFT windows for each scale.
        win_lengths (Sequence[int]): Window size for each scale.
        **kwargs: Additional args for STFTDiscriminator.
    """
    def __init__(self, filters: int, in_channels: int = 1, out_channels: int = 1, sep_channels: bool = False,
                 n_ffts: tp.List[int] = [1024, 2048, 512], hop_lengths: tp.List[int] = [256, 512, 128],
                 win_lengths: tp.List[int] = [1024, 2048, 512], **kwargs):
        super().__init__()
        assert len(n_ffts) == len(hop_lengths) == len(win_lengths)
        self.sep_channels = sep_channels
        self.discriminators = nn.ModuleList([
            DiscriminatorSTFT(filters, in_channels=in_channels, out_channels=out_channels,
                              n_fft=n_ffts[i], win_length=win_lengths[i], hop_length=hop_lengths[i], **kwargs)
            for i in range(len(n_ffts))
        ])

    @property
    def num_discriminators(self):
        return len(self.discriminators)

    def _separate_channels(self, x: torch.Tensor) -> torch.Tensor:
        B, C, T = x.shape
        return x.view(-1, 1, T)

    def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType:
        logits = []
        fmaps = []
        for disc in self.discriminators:
            logit, fmap = disc(x)
            logits.append(logit)
            fmaps.append(fmap)
        return logits, fmaps
```
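As a quick sanity check of `get_2d_padding` above: for stride 1, a convolution with effective kernel size (k - 1) * d + 1 keeps the output length unchanged when padded by ((k - 1) * d) // 2 on each side. A small worked example:

```python
def same_pad(kernel: int, dilation: int = 1) -> int:
    # One side of the "same" padding for stride-1 convolutions.
    return ((kernel - 1) * dilation) // 2

assert same_pad(3) == 1              # frequency axis of the (3, 9) kernels
assert same_pad(9) == 4              # time axis: get_2d_padding((3, 9)) == (1, 4)
assert same_pad(3, dilation=4) == 4  # the dilated time convs in DiscriminatorSTFT
```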
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/CLAPWrapper.py
DELETED @@ -1,257 +0,0 @@
```python
import collections
import math
import os
import random
import re

import numpy as np
import torch
import torchaudio
import torchaudio.transforms as T
from torch._six import string_classes  # (str, bytes); available in torch < 2.0
from transformers import AutoTokenizer
from importlib_resources import files

from ldm.modules.encoders.CLAP.utils import read_config_as_args
from ldm.modules.encoders.CLAP.clap import CLAP


class CLAPWrapper():
    """
    A class for interfacing the CLAP model.
    """

    def __init__(self, model_fp, device):
        self.np_str_obj_array_pattern = re.compile(r'[SaUO]')
        self.file_path = os.path.realpath(__file__)
        self.default_collate_err_msg_format = (
            "default_collate: batch must contain tensors, numpy arrays, numbers, "
            "dicts or lists; found {}")
        self.config_as_str = files('ldm').joinpath('modules/encoders/CLAP/config.yml').read_text()
        self.model_fp = model_fp
        self.device = device
        self.clap, self.tokenizer, self.args = self.load_clap()

    def load_clap(self):
        r"""Load the CLAP model with args from the config file."""
        args = read_config_as_args(self.config_as_str, is_config_str=True)

        if 'bert' in args.text_model:
            self.token_keys = ['input_ids', 'token_type_ids', 'attention_mask']
        else:
            self.token_keys = ['input_ids', 'attention_mask']

        clap = CLAP(
            audioenc_name=args.audioenc_name,
            sample_rate=args.sampling_rate,
            window_size=args.window_size,
            hop_size=args.hop_size,
            mel_bins=args.mel_bins,
            fmin=args.fmin,
            fmax=args.fmax,
            classes_num=args.num_classes,
            out_emb=args.out_emb,
            text_model=args.text_model,
            transformer_embed_dim=args.transformer_embed_dim,
            d_proj=args.d_proj
        )

        # Load pretrained weights for the model
        model_state_dict = torch.load(self.model_fp, map_location=torch.device('cpu'))['model']
        clap.load_state_dict(model_state_dict)

        clap.eval()  # set clap in eval mode
        tokenizer = AutoTokenizer.from_pretrained(args.text_model)

        clap = clap.to(self.device)
        # HF tokenizers are device-agnostic; the tokenized tensors are moved
        # to the device in preprocess_text below.

        return clap, tokenizer, args

    def default_collate(self, batch):
        r"""Puts each data field into a tensor with outer dimension batch size."""
        elem = batch[0]
        elem_type = type(elem)
        if isinstance(elem, torch.Tensor):
            out = None
            if torch.utils.data.get_worker_info() is not None:
                # If we're in a background process, concatenate directly into a
                # shared memory tensor to avoid an extra copy
                numel = sum([x.numel() for x in batch])
                storage = elem.storage()._new_shared(numel)
                out = elem.new(storage)
            return torch.stack(batch, 0, out=out)
        elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
                and elem_type.__name__ != 'string_':
            if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap':
                # array of string classes and object
                if self.np_str_obj_array_pattern.search(elem.dtype.str) is not None:
                    raise TypeError(
                        self.default_collate_err_msg_format.format(elem.dtype))

                return self.default_collate([torch.as_tensor(b) for b in batch])
            elif elem.shape == ():  # scalars
                return torch.as_tensor(batch)
        elif isinstance(elem, float):
            return torch.tensor(batch, dtype=torch.float64)
        elif isinstance(elem, int):
            return torch.tensor(batch)
        elif isinstance(elem, string_classes):
            return batch
        elif isinstance(elem, collections.abc.Mapping):
            return {key: self.default_collate([d[key] for d in batch]) for key in elem}
        elif isinstance(elem, tuple) and hasattr(elem, '_fields'):  # namedtuple
            return elem_type(*(self.default_collate(samples) for samples in zip(*batch)))
        elif isinstance(elem, collections.abc.Sequence):
            # check to make sure that the elements in batch have consistent size
            it = iter(batch)
            elem_size = len(next(it))
            if not all(len(elem) == elem_size for elem in it):
                raise RuntimeError(
                    'each element in list of batch should be of equal size')
            transposed = zip(*batch)
            return [self.default_collate(samples) for samples in transposed]

        raise TypeError(self.default_collate_err_msg_format.format(elem_type))

    def load_audio_into_tensor(self, audio_path, audio_duration, resample=False):
        r"""Loads an audio file and returns the raw audio."""
        # Randomly sample a segment of audio_duration from the clip, or pad to match the duration
        audio_time_series, sample_rate = torchaudio.load(audio_path)
        resample_rate = self.args.sampling_rate
        if resample:
            resampler = T.Resample(sample_rate, resample_rate)
            audio_time_series = resampler(audio_time_series)
        audio_time_series = audio_time_series.reshape(-1)

        if audio_duration * sample_rate >= audio_time_series.shape[0]:
            # audio_time_series is shorter than the predefined audio duration,
            # so it is extended by repetition
            repeat_factor = int(np.ceil((audio_duration * sample_rate) /
                                        audio_time_series.shape[0]))
            # Repeat audio_time_series by repeat_factor to match audio_duration
            audio_time_series = audio_time_series.repeat(repeat_factor)
            # remove the excess part of audio_time_series
            audio_time_series = audio_time_series[0:audio_duration * sample_rate]
        else:
            # audio_time_series is longer than the predefined audio duration,
            # so it is trimmed to a random window
            start_index = random.randrange(
                audio_time_series.shape[0] - audio_duration * sample_rate)
            audio_time_series = audio_time_series[start_index:start_index +
                                                  audio_duration * sample_rate]
        return torch.FloatTensor(audio_time_series)

    def preprocess_audio(self, audio_files, resample):
        r"""Load a list of audio files and return the raw audio."""
        audio_tensors = []
        for audio_file in audio_files:
            audio_tensor = self.load_audio_into_tensor(
                audio_file, self.args.duration, resample)
            audio_tensor = audio_tensor.reshape(1, -1).to(self.device)
            audio_tensors.append(audio_tensor)
        return self.default_collate(audio_tensors)

    def preprocess_text(self, text_queries, text_len=100):
        r"""Load a list of class labels and return the tokenized text."""
        device = next(self.clap.parameters()).device
        tokenized_texts = []
        for ttext in text_queries:
            tok = self.tokenizer.encode_plus(
                text=ttext, add_special_tokens=True, max_length=text_len, pad_to_max_length=True, return_tensors="pt")
            for key in self.token_keys:
                tok[key] = tok[key].reshape(-1).to(device)
            tokenized_texts.append(tok)
        return self.default_collate(tokenized_texts)

    def get_text_embeddings(self, class_labels):
        r"""Load a list of class labels and return text embeddings."""
        preprocessed_text = self.preprocess_text(class_labels)
        text_embeddings = self._get_text_embeddings(preprocessed_text)
        text_embeddings = text_embeddings / torch.norm(text_embeddings, dim=-1, keepdim=True)
        return text_embeddings

    def get_audio_embeddings(self, audio_files, resample):
        r"""Load a list of audio files and return audio embeddings."""
        preprocessed_audio = self.preprocess_audio(audio_files, resample)
        audio_embeddings = self._get_audio_embeddings(preprocessed_audio)
        audio_embeddings = audio_embeddings / torch.norm(audio_embeddings, dim=-1, keepdim=True)
        return audio_embeddings

    def _get_text_embeddings(self, preprocessed_text):
        r"""Load preprocessed text and return text embeddings."""
        with torch.no_grad():
            text_embeddings = self.clap.caption_encoder(preprocessed_text)
            text_embeddings = text_embeddings / torch.norm(text_embeddings, dim=-1, keepdim=True)
            return text_embeddings

    def _get_audio_embeddings(self, preprocessed_audio):
        r"""Load preprocessed audio and return audio embeddings."""
        with torch.no_grad():
            preprocessed_audio = preprocessed_audio.reshape(
                preprocessed_audio.shape[0], preprocessed_audio.shape[2])
            # Index [0] is the audio embedding; [1] has the output class probabilities
            audio_embeddings = self.clap.audio_encoder(preprocessed_audio)[0]
            audio_embeddings = audio_embeddings / torch.norm(audio_embeddings, dim=-1, keepdim=True)
            return audio_embeddings

    def compute_similarity(self, audio_embeddings, text_embeddings):
        r"""Compute similarity between text and audio embeddings."""
        logit_scale = self.clap.logit_scale.exp()
        similarity = logit_scale * text_embeddings @ audio_embeddings.T
        return similarity.T

    def _generic_batch_inference(self, func, *args):
        r"""Process audio and/or text per batch."""
        input_tmp = args[0]
        batch_size = args[-1]
        # args[0] has audio_files, args[1] has class_labels
        inputs = [args[0], args[1]] if len(args) == 3 else [args[0]]
        args0_len = len(args[0])
        # compute text_embeddings once for all the audio_files batches
        if len(inputs) == 2:
            text_embeddings = self.get_text_embeddings(args[1])
            inputs = [args[0], args[1], text_embeddings]
        dataset_idx = 0
        for _ in range(math.ceil(args0_len / batch_size)):
            next_batch_idx = dataset_idx + batch_size
            # batch size is bigger than the available audio/text items
            if next_batch_idx >= args0_len:
                inputs[0] = input_tmp[dataset_idx:]
                return func(*tuple(inputs))
            else:
                inputs[0] = input_tmp[dataset_idx:next_batch_idx]
                yield func(*tuple(inputs))
            dataset_idx = next_batch_idx

    def get_audio_embeddings_per_batch(self, audio_files, batch_size):
        r"""Load preprocessed audio and return audio embeddings per batch."""
        return self._generic_batch_inference(self.get_audio_embeddings, audio_files, batch_size)

    def get_text_embeddings_per_batch(self, class_labels, batch_size):
        r"""Load preprocessed text and return text embeddings per batch."""
        return self._generic_batch_inference(self.get_text_embeddings, class_labels, batch_size)

    def classify_audio_files_per_batch(self, audio_files, class_labels, batch_size):
        r"""Compute classification probabilities for each audio recording in a batch and each class label."""
        # NOTE: relies on a `classify_audio_files` method that is not defined in this file.
        return self._generic_batch_inference(self.classify_audio_files, audio_files, class_labels, batch_size)


if __name__ == '__main__':
    # Load and initialize CLAP
    weights_path = "/home1/huangrongjie/Project/Diffusion/LatentDiffusion/CLAP/CLAP_weights_2022.pth"
    clap_model = CLAPWrapper(weights_path, device='cpu')

    y = ["A woman talks nearby as water pours", "Multiple clanging and clanking sounds"]
    x = ['/home2/huangjiawei/data/audiocaps/train/Yr1nicOVtvkQ.wav', '/home2/huangjiawei/data/audiocaps/train/YUDGBjjwyaqE.wav']

    # Computing text embeddings
    text_embeddings = clap_model.get_text_embeddings(y)

    # Computing audio embeddings
    audio_embeddings = clap_model.get_audio_embeddings(x, resample=True)
    similarity = clap_model.compute_similarity(audio_embeddings, text_embeddings)
```
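The pad-or-crop policy in `load_audio_into_tensor` above is worth isolating: short clips are tiled until they cover the target duration, long clips yield a random window. A minimal numpy sketch of the same idea; the function name and shapes are illustrative, not part of the file:

```python
import numpy as np

def fit_to_duration(wav: np.ndarray, sr: int, seconds: int) -> np.ndarray:
    target = seconds * sr
    if target >= wav.shape[0]:
        # Tile the short clip, then trim the excess.
        reps = int(np.ceil(target / wav.shape[0]))
        return np.tile(wav, reps)[:target]
    # Random crop of exactly `target` samples from a longer clip.
    start = np.random.randint(0, wav.shape[0] - target)
    return wav[start:start + target]

clip = np.ones(8000)                                  # half a second at 16 kHz
assert fit_to_duration(clip, 16000, 2).shape == (32000,)
```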
spaces/AILab-CVC/SEED-Bench_Leaderboard/src/auto_leaderboard/model_metadata_type.py
DELETED @@ -1,30 +0,0 @@
```python
from dataclasses import dataclass
from enum import Enum
import glob
import json
import os
from typing import Dict, List

from ..utils_display import AutoEvalColumn

@dataclass
class ModelInfo:
    name: str
    symbol: str  # emoji

model_type_symbols = {
    "LLM": "🟢",
    "ImageLLM": "🔶",
    "VideoLLM": "⭕",
    "Other": "🟦",
}

class ModelType(Enum):
    PT = ModelInfo(name="LLM", symbol="🟢")
    FT = ModelInfo(name="ImageLLM", symbol="🔶")
    IFT = ModelInfo(name="VideoLLM", symbol="⭕")
    RL = ModelInfo(name="Other", symbol="🟦")

    def to_str(self, separator=" "):
        return f"{self.value.symbol}{separator}{self.value.name}"
```
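A quick usage sketch of the enum above, as the leaderboard would render it:

```python
assert ModelType.FT.to_str() == "🔶 ImageLLM"
assert ModelType.PT.to_str(separator="|") == "🟢|LLM"
```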
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/__init__.py
DELETED
File without changes
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cutmix.py
DELETED @@ -1,18 +0,0 @@
```python
# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='MultiLabelLinearClsHead',
        num_classes=1000,
        in_channels=2048,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0, use_soft=True)),
    train_cfg=dict(
        augments=dict(
            type='BatchCutMix', alpha=1.0, num_classes=1000, prob=1.0)))
```
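For context on the `BatchCutMix` augment configured above (alpha=1.0, prob=1.0): it pastes a random box from one image in the batch into another and softens the one-hot labels by the pasted-area ratio, which is why the head uses a soft cross-entropy loss (`use_soft=True`). A minimal numpy sketch of the idea; the helper name and shapes are illustrative, not mmpretrain's implementation:

```python
import numpy as np

def cutmix_pair(img_a, img_b, label_a, label_b, lam):
    h, w = img_a.shape[-2:]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    y = np.random.randint(h - cut_h + 1)
    x = np.random.randint(w - cut_w + 1)
    mixed = img_a.copy()
    mixed[..., y:y + cut_h, x:x + cut_w] = img_b[..., y:y + cut_h, x:x + cut_w]
    lam_adj = 1 - (cut_h * cut_w) / (h * w)  # exact area actually pasted
    return mixed, lam_adj * label_a + (1 - lam_adj) * label_b

img_a, img_b = np.zeros((3, 32, 32)), np.ones((3, 32, 32))
label_a, label_b = np.eye(10)[3], np.eye(10)[7]  # one-hot labels
lam = np.random.beta(1.0, 1.0)                   # alpha=1.0 -> uniform mix ratio
mixed_img, soft_label = cutmix_pair(img_a, img_b, label_a, label_b, lam)
```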
spaces/Abdullahw72/bark-voice-cloning/hubert/pre_kmeans_hubert.py
DELETED @@ -1,85 +0,0 @@
```python
from pathlib import Path

import torch
from torch import nn
from einops import pack, unpack

import fairseq

from torchaudio.functional import resample

import logging
logging.root.setLevel(logging.ERROR)


def exists(val):
    return val is not None


def default(val, d):
    return val if exists(val) else d


class CustomHubert(nn.Module):
    """
    checkpoint and kmeans can be downloaded at https://github.com/facebookresearch/fairseq/tree/main/examples/hubert
    or you can train your own
    """

    def __init__(
        self,
        checkpoint_path,
        target_sample_hz=16000,
        seq_len_multiple_of=None,
        output_layer=9
    ):
        super().__init__()
        self.target_sample_hz = target_sample_hz
        self.seq_len_multiple_of = seq_len_multiple_of
        self.output_layer = output_layer

        model_path = Path(checkpoint_path)

        assert model_path.exists(), f'path {checkpoint_path} does not exist'

        checkpoint = torch.load(checkpoint_path)
        load_model_input = {checkpoint_path: checkpoint}
        model, *_ = fairseq.checkpoint_utils.load_model_ensemble_and_task(load_model_input)

        self.model = model[0]
        self.model.eval()

    @property
    def groups(self):
        return 1

    @torch.no_grad()
    def forward(
        self,
        wav_input,
        flatten=True,
        input_sample_hz=None
    ):
        device = wav_input.device

        if exists(input_sample_hz):
            wav_input = resample(wav_input, input_sample_hz, self.target_sample_hz)

        embed = self.model(
            wav_input,
            features_only=True,
            mask=False,  # thanks to @maitycyrus for noticing that mask is defaulted to True in the fairseq code
            output_layer=self.output_layer
        )

        embed, packed_shape = pack([embed['x']], '* d')

        # codebook_indices = self.kmeans.predict(embed.cpu().detach().numpy())

        codebook_indices = torch.from_numpy(embed.cpu().detach().numpy()).to(device)  # .long()

        if flatten:
            return codebook_indices

        codebook_indices, = unpack(codebook_indices, packed_shape, '*')
        return codebook_indices
```
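The only preprocessing `CustomHubert.forward` above does itself is the optional resampling to `target_sample_hz`; a small standalone check of that step:

```python
import torch
from torchaudio.functional import resample

wav_24k = torch.randn(1, 24000)  # one second of audio at 24 kHz
wav_16k = resample(wav_24k, orig_freq=24000, new_freq=16000)
assert wav_16k.shape == (1, 16000)  # now at the model's 16 kHz target
```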
spaces/AgentVerse/agentVerse/dataloader/__init__.py
DELETED @@ -1,10 +0,0 @@
```python
from agentverse.registry import Registry

dataloader_registry = Registry(name="dataloader")

from .gsm8k import GSM8KLoader
from .responsegen import ResponseGenLoader
from .humaneval import HumanevalLoader
from .commongen import CommongenLoader
from .mgsm import MGSMLoader
from .logic_grid import LogicGridLoader
```
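The `Registry` class imported above is not part of this diff; a minimal sketch of the decorator-based pattern such registries usually implement (all names below are assumptions for illustration, not AgentVerse's actual API):

```python
class Registry:
    def __init__(self, name: str):
        self.name = name
        self._entries = {}

    def register(self, key: str):
        # Used as @registry.register("key") on a class definition.
        def decorator(cls):
            self._entries[key] = cls
            return cls
        return decorator

    def build(self, key: str, **kwargs):
        return self._entries[key](**kwargs)


dataloader_registry = Registry(name="dataloader")

@dataloader_registry.register("gsm8k")
class GSM8KLoader:
    def __init__(self, path: str = "data/gsm8k.jsonl"):  # hypothetical default
        self.path = path

loader = dataloader_registry.build("gsm8k")
assert isinstance(loader, GSM8KLoader)
```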
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/RunWidthWrap.js
DELETED @@ -1,10 +0,0 @@
```javascript
import RunChildrenWrapBase from '../basesizer/RunWidthWrap.js';
import RunChildrenWrap from './RunChildrenWrap.js';

var RunWidthWrap = function (width) {
    var innerWidth = width - this.space.left - this.space.right;
    this.widthWrapResult = RunChildrenWrap.call(this, innerWidth, this.widthWrapResult);
    RunChildrenWrapBase.call(this, width);
}

export default RunWidthWrap;
```
spaces/Aki004/herta-so-vits/inference_main.py
DELETED @@ -1,161 +0,0 @@
```python
import io
import logging
from pathlib import Path

import numpy as np
import soundfile

from inference import infer_tool
from inference import slicer
from inference.infer_tool import Svc

logging.getLogger('numba').setLevel(logging.WARNING)
chunks_dict = infer_tool.read_temp("inference/chunks_temp.json")


def main():
    import argparse

    parser = argparse.ArgumentParser(description='sovits4 inference')

    # Required
    parser.add_argument('-m', '--model_path', type=str, default="logs/44k/G_0.pth",
                        help='Path to the model.')
    parser.add_argument('-c', '--config_path', type=str, default="configs/config.json",
                        help='Path to the configuration file.')
    parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nen'],
                        help='Target speaker name for conversion.')
    parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src.wav"],
                        help='A list of wav file names located in the raw folder.')
    parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0],
                        help='Pitch adjustment, supports positive and negative (semitone) values.')

    # Optional
    parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False,
                        help='Automatic pitch prediction for voice conversion. Do not enable this when converting songs as it can cause serious pitch issues.')
    parser.add_argument('-cl', '--clip', type=float, default=0,
                        help='Voice forced slicing. Set to 0 to turn off (default); duration in seconds.')
    parser.add_argument('-lg', '--linear_gradient', type=float, default=0,
                        help='The cross-fade length of two audio slices in seconds. If the voice sounds discontinuous after forced slicing, adjust this value; otherwise keep the default 0.')
    parser.add_argument('-cm', '--cluster_model_path', type=str, default="logs/44k/kmeans_10000.pt",
                        help='Path to the clustering model. Fill in any value if clustering is not trained.')
    parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=0,
                        help='Proportion of the clustering solution, range 0-1. Fill in 0 if the clustering model is not trained.')
    parser.add_argument('-fmp', '--f0_mean_pooling', action='store_true', default=False,
                        help='Apply mean filter (pooling) to f0, which may improve some hoarse sounds. Enabling this option will reduce inference speed.')
    parser.add_argument('-eh', '--enhance', action='store_true', default=False,
                        help='Whether to use the NSF_HIFIGAN enhancer. It can improve sound quality for models with small training sets, but degrades well-trained models, so it is off by default.')

    # generally keep the defaults
    parser.add_argument('-sd', '--slice_db', type=int, default=-40,
                        help='Loudness threshold for automatic slicing. For noisy audio it can be set to -30.')
    parser.add_argument('-d', '--device', type=str, default=None,
                        help='Device used for inference. None means auto selection.')
    parser.add_argument('-ns', '--noice_scale', type=float, default=0.4,
                        help='Affects pronunciation and sound quality.')
    parser.add_argument('-p', '--pad_seconds', type=float, default=0.5,
                        help='Due to unknown reasons, there may be abnormal noise at the beginning and end. It disappears after padding with a short silent segment.')
    parser.add_argument('-wf', '--wav_format', type=str, default='flac',
                        help='Output format.')
    parser.add_argument('-lgr', '--linear_gradient_retain', type=float, default=0.75,
                        help='Proportion of the cross-fade length to retain, range (0-1]. After forced slicing, the beginning and end of each segment need to be discarded.')
    parser.add_argument('-eak', '--enhancer_adaptive_key', type=int, default=0,
                        help='Adapt the enhancer to a higher vocal range. The unit is semitones; default 0.')
    parser.add_argument('-ft', '--f0_filter_threshold', type=float, default=0.05,
                        help='F0 filtering threshold. Only valid when f0_mean_pooling is enabled. Values range from 0 to 1. Reducing this value reduces the probability of being out of tune, but increases muting.')

    args = parser.parse_args()

    clean_names = args.clean_names
    trans = args.trans
    spk_list = args.spk_list
    slice_db = args.slice_db
    wav_format = args.wav_format
    auto_predict_f0 = args.auto_predict_f0
    cluster_infer_ratio = args.cluster_infer_ratio
    noice_scale = args.noice_scale
    pad_seconds = args.pad_seconds
    clip = args.clip
    lg = args.linear_gradient
    lgr = args.linear_gradient_retain
    F0_mean_pooling = args.f0_mean_pooling
    enhance = args.enhance
    enhancer_adaptive_key = args.enhancer_adaptive_key
    cr_threshold = args.f0_filter_threshold

    svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path, enhance)
    infer_tool.mkdir(["raw", "results"])

    infer_tool.fill_a_to_b(trans, clean_names)
    for clean_name, tran in zip(clean_names, trans):
        raw_audio_path = f"raw/{clean_name}"
        if "." not in raw_audio_path:
            raw_audio_path += ".wav"
        infer_tool.format_wav(raw_audio_path)
        wav_path = Path(raw_audio_path).with_suffix('.wav')
        chunks = slicer.cut(wav_path, db_thresh=slice_db)
        audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
        per_size = int(clip * audio_sr)
        lg_size = int(lg * audio_sr)
        lg_size_r = int(lg_size * lgr)
        lg_size_c_l = (lg_size - lg_size_r) // 2
        lg_size_c_r = lg_size - lg_size_r - lg_size_c_l
        lg_2 = np.linspace(0, 1, lg_size_r) if lg_size != 0 else 0

        for spk in spk_list:
            audio = []
            for (slice_tag, data) in audio_data:
                print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')

                length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample))
                if slice_tag:
                    print('jump empty segment')
                    _audio = np.zeros(length)
                    audio.extend(list(infer_tool.pad_array(_audio, length)))
                    continue
                if per_size != 0:
                    datas = infer_tool.split_list_by_n(data, per_size, lg_size)
                else:
                    datas = [data]
                for k, dat in enumerate(datas):
                    per_length = int(np.ceil(len(dat) / audio_sr * svc_model.target_sample)) if clip != 0 else length
                    if clip != 0:
                        print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
                    # pad
                    pad_len = int(audio_sr * pad_seconds)
                    dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
                    raw_path = io.BytesIO()
                    soundfile.write(raw_path, dat, audio_sr, format="wav")
                    raw_path.seek(0)
                    out_audio, out_sr = svc_model.infer(spk, tran, raw_path,
                                                        cluster_infer_ratio=cluster_infer_ratio,
                                                        auto_predict_f0=auto_predict_f0,
                                                        noice_scale=noice_scale,
                                                        F0_mean_pooling=F0_mean_pooling,
                                                        enhancer_adaptive_key=enhancer_adaptive_key,
                                                        cr_threshold=cr_threshold
                                                        )
                    _audio = out_audio.cpu().numpy()
                    pad_len = int(svc_model.target_sample * pad_seconds)
                    _audio = _audio[pad_len:-pad_len]
                    _audio = infer_tool.pad_array(_audio, per_length)
                    if lg_size != 0 and k != 0:
                        lg1 = audio[-(lg_size_r + lg_size_c_r):-lg_size_c_r] if lgr != 1 else audio[-lg_size:]
                        lg2 = _audio[lg_size_c_l:lg_size_c_l + lg_size_r] if lgr != 1 else _audio[0:lg_size]
                        lg_pre = lg1 * (1 - lg_2) + lg2 * lg_2
                        audio = audio[0:-(lg_size_r + lg_size_c_r)] if lgr != 1 else audio[0:-lg_size]
                        audio.extend(lg_pre)
                        _audio = _audio[lg_size_c_l + lg_size_r:] if lgr != 1 else _audio[lg_size:]
                    audio.extend(list(_audio))
            key = "auto" if auto_predict_f0 else f"{tran}key"
            cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}"
            res_path = f'./results/{clean_name}_{key}_{spk}{cluster_name}.{wav_format}'
            soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format)
            svc_model.clear_empty()


if __name__ == '__main__':
    main()
```
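The least obvious part of the slicing loop above is the `lg1`/`lg2`/`lg_pre` linear cross-fade between consecutive converted segments. Stripped of the retain/discard bookkeeping, the core operation is:

```python
import numpy as np

def crossfade(tail: np.ndarray, head: np.ndarray) -> np.ndarray:
    # Linearly fade out the previous segment's tail while fading in the
    # next segment's head, exactly like lg_pre = lg1*(1-lg_2) + lg2*lg_2.
    t = np.linspace(0, 1, len(tail))
    return tail * (1 - t) + head * t

tail = np.ones(100)    # end of the previous converted segment
head = np.zeros(100)   # start of the next converted segment
blend = crossfade(tail, head)
assert blend[0] == 1.0 and blend[-1] == 0.0  # smooth ramp, no click
```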
spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/app.py
DELETED @@ -1,5 +0,0 @@
```python
import evaluate
from evaluate.utils import launch_gradio_widget

module = [evaluate.load("cer")]
launch_gradio_widget(module[0])
```
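For reference, the metric this widget wraps can also be called directly; a quick example of what it computes (character error rate: here 8 character edits against a 16-character reference):

```python
import evaluate

cer = evaluate.load("cer")
score = cer.compute(predictions=["jó napot"], references=["jó napot kívánok"])
print(score)  # 0.5
```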
spaces/Aloento/9Nine-PITS/text/__init__.py
DELETED @@ -1,15 +0,0 @@
```python
from text.symbols import symbols

_symbol_to_id = {s: i for i, s in enumerate(symbols)}


def cleaned_text_to_sequence(cleaned_text):
    """
    Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
    Args:
        cleaned_text: string to convert to a sequence
    Returns:
        List of integers corresponding to the symbols in the text
    """
    sequence = [_symbol_to_id[symbol] for symbol in cleaned_text]
    return sequence
```
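A toy version of the mapping above with a stand-in symbol set (the real `symbols` list lives in `text.symbols` and is not shown in this diff):

```python
symbols = ['_', 'a', 'b', 'c']  # placeholder alphabet
_symbol_to_id = {s: i for i, s in enumerate(symbols)}

def cleaned_text_to_sequence(cleaned_text):
    return [_symbol_to_id[symbol] for symbol in cleaned_text]

assert cleaned_text_to_sequence("abc_") == [1, 2, 3, 0]
```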
spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py
DELETED @@ -1,83 +0,0 @@
```python
from mmcv.cnn.bricks import build_plugin_layer
from mmcv.runner import force_fp32

from mmdet.models.builder import ROI_EXTRACTORS
from .base_roi_extractor import BaseRoIExtractor


@ROI_EXTRACTORS.register_module()
class GenericRoIExtractor(BaseRoIExtractor):
    """Extract RoI features from all feature map levels.

    This is the implementation of `A novel Region of Interest Extraction Layer
    for Instance Segmentation <https://arxiv.org/abs/2004.13665>`_.

    Args:
        aggregation (str): The method to aggregate multiple feature maps.
            Options are 'sum', 'concat'. Default: 'sum'.
        pre_cfg (dict | None): Specify pre-processing modules. Default: None.
        post_cfg (dict | None): Specify post-processing modules. Default: None.
        kwargs (keyword arguments): Arguments that are the same
            as :class:`BaseRoIExtractor`.
    """

    def __init__(self,
                 aggregation='sum',
                 pre_cfg=None,
                 post_cfg=None,
                 **kwargs):
        super(GenericRoIExtractor, self).__init__(**kwargs)

        assert aggregation in ['sum', 'concat']

        self.aggregation = aggregation
        self.with_post = post_cfg is not None
        self.with_pre = pre_cfg is not None
        # build pre/post processing modules
        if self.with_post:
            self.post_module = build_plugin_layer(post_cfg, '_post_module')[1]
        if self.with_pre:
            self.pre_module = build_plugin_layer(pre_cfg, '_pre_module')[1]

    @force_fp32(apply_to=('feats', ), out_fp16=True)
    def forward(self, feats, rois, roi_scale_factor=None):
        """Forward function."""
        if len(feats) == 1:
            return self.roi_layers[0](feats[0], rois)

        out_size = self.roi_layers[0].output_size
        num_levels = len(feats)
        roi_feats = feats[0].new_zeros(
            rois.size(0), self.out_channels, *out_size)

        # sometimes rois is an empty tensor
        if roi_feats.shape[0] == 0:
            return roi_feats

        if roi_scale_factor is not None:
            rois = self.roi_rescale(rois, roi_scale_factor)

        # mark the starting channels for concat mode
        start_channels = 0
        for i in range(num_levels):
            roi_feats_t = self.roi_layers[i](feats[i], rois)
            end_channels = start_channels + roi_feats_t.size(1)
            if self.with_pre:
                # apply pre-processing to the RoI extracted from each layer
                roi_feats_t = self.pre_module(roi_feats_t)
            if self.aggregation == 'sum':
                # and sum them all
                roi_feats += roi_feats_t
            else:
                # and concat them along the channel dimension
                roi_feats[:, start_channels:end_channels] = roi_feats_t
            # update the channel starting position
            start_channels = end_channels
        # check that the concat channels match at the end
        if self.aggregation == 'concat':
            assert start_channels == self.out_channels

        if self.with_post:
            # apply post-processing before returning the result
            roi_feats = self.post_module(roi_feats)
        return roi_feats
```
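A shape-level sketch of the two aggregation modes above, using dummy RoI features from three pyramid levels (all sizes illustrative):

```python
import torch

level_feats = [torch.randn(8, 256, 7, 7) for _ in range(3)]  # 8 RoIs per level

summed = sum(level_feats)               # 'sum': channel count stays at 256
concat = torch.cat(level_feats, dim=1)  # 'concat': channels stack to 3 * 256
assert summed.shape == (8, 256, 7, 7)
assert concat.shape == (8, 768, 7, 7)
```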
spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py
DELETED @@ -1,108 +0,0 @@
```python
import torch
from mmcv.runner import force_fp32

from mmdet.models.builder import ROI_EXTRACTORS
from .base_roi_extractor import BaseRoIExtractor


@ROI_EXTRACTORS.register_module()
class SingleRoIExtractor(BaseRoIExtractor):
    """Extract RoI features from a single level feature map.

    If there are multiple input feature levels, each RoI is mapped to a level
    according to its scale. The mapping rule is proposed in
    `FPN <https://arxiv.org/abs/1612.03144>`_.

    Args:
        roi_layer (dict): Specify RoI layer type and arguments.
        out_channels (int): Output channels of RoI layers.
        featmap_strides (List[int]): Strides of input feature maps.
        finest_scale (int): Scale threshold of mapping to level 0. Default: 56.
    """

    def __init__(self,
                 roi_layer,
                 out_channels,
                 featmap_strides,
                 finest_scale=56):
        super(SingleRoIExtractor, self).__init__(roi_layer, out_channels,
                                                 featmap_strides)
        self.finest_scale = finest_scale

    def map_roi_levels(self, rois, num_levels):
        """Map rois to corresponding feature levels by scales.

        - scale < finest_scale * 2: level 0
        - finest_scale * 2 <= scale < finest_scale * 4: level 1
        - finest_scale * 4 <= scale < finest_scale * 8: level 2
        - scale >= finest_scale * 8: level 3

        Args:
            rois (Tensor): Input RoIs, shape (k, 5).
            num_levels (int): Total level number.

        Returns:
            Tensor: Level index (0-based) of each RoI, shape (k, )
        """
        scale = torch.sqrt(
            (rois[:, 3] - rois[:, 1]) * (rois[:, 4] - rois[:, 2]))
        target_lvls = torch.floor(torch.log2(scale / self.finest_scale + 1e-6))
        target_lvls = target_lvls.clamp(min=0, max=num_levels - 1).long()
        return target_lvls

    @force_fp32(apply_to=('feats', ), out_fp16=True)
    def forward(self, feats, rois, roi_scale_factor=None):
        """Forward function."""
        out_size = self.roi_layers[0].output_size
        num_levels = len(feats)
        expand_dims = (-1, self.out_channels * out_size[0] * out_size[1])
        if torch.onnx.is_in_onnx_export():
            # Work around to export mask-rcnn to onnx
            roi_feats = rois[:, :1].clone().detach()
            roi_feats = roi_feats.expand(*expand_dims)
            roi_feats = roi_feats.reshape(-1, self.out_channels, *out_size)
            roi_feats = roi_feats * 0
        else:
            roi_feats = feats[0].new_zeros(
                rois.size(0), self.out_channels, *out_size)
        # TODO: remove this when parrots supports
        if torch.__version__ == 'parrots':
            roi_feats.requires_grad = True

        if num_levels == 1:
            if len(rois) == 0:
                return roi_feats
            return self.roi_layers[0](feats[0], rois)

        target_lvls = self.map_roi_levels(rois, num_levels)

        if roi_scale_factor is not None:
            rois = self.roi_rescale(rois, roi_scale_factor)

        for i in range(num_levels):
            mask = target_lvls == i
            if torch.onnx.is_in_onnx_export():
                # To keep all roi_align nodes exported to onnx
                # and skip nonzero op
                mask = mask.float().unsqueeze(-1).expand(*expand_dims).reshape(
                    roi_feats.shape)
                roi_feats_t = self.roi_layers[i](feats[i], rois)
                roi_feats_t *= mask
                roi_feats += roi_feats_t
                continue
            inds = mask.nonzero(as_tuple=False).squeeze(1)
            if inds.numel() > 0:
                rois_ = rois[inds]
                roi_feats_t = self.roi_layers[i](feats[i], rois_)
                roi_feats[inds] = roi_feats_t
            else:
                # Sometimes some pyramid levels will not be used for RoI
                # feature extraction and this will cause an incomplete
                # computation graph in one GPU, which is different from those
                # in other GPUs and will cause a hanging error.
                # Therefore, we add it to ensure each feature pyramid is
                # included in the computation graph to avoid runtime bugs.
                roi_feats += sum(
                    x.view(-1)[0]
                    for x in self.parameters()) * 0. + feats[i].sum() * 0.
        return roi_feats
```
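A worked check of the level-mapping rule in `map_roi_levels` above, with the default `finest_scale=56` and four feature levels:

```python
import torch

scales = torch.tensor([40.0, 112.0, 224.0, 448.0, 1000.0])  # sqrt(w * h) per RoI
levels = torch.floor(torch.log2(scales / 56 + 1e-6)).clamp(min=0, max=3).long()
assert levels.tolist() == [0, 1, 2, 3, 3]  # matches the docstring's thresholds
```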
spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-search/quarto-search.js
DELETED @@ -1,1140 +0,0 @@
```javascript
const kQueryArg = "q";
const kResultsArg = "show-results";

// If items don't provide a URL, then both the navigator and the onSelect
// function aren't called (and therefore, the default implementation is used)
//
// We're using this sentinel URL to signal to those handlers that this
// item is a more item (along with the type) and can be handled appropriately
const kItemTypeMoreHref = "0767FDFD-0422-4E5A-BC8A-3BE11E5BBA05";

window.document.addEventListener("DOMContentLoaded", function (_event) {
  // Ensure that search is available on this page. If it isn't,
  // should return early and not do anything
  var searchEl = window.document.getElementById("quarto-search");
  if (!searchEl) return;

  const { autocomplete } = window["@algolia/autocomplete-js"];

  let quartoSearchOptions = {};
  let language = {};
  const searchOptionEl = window.document.getElementById(
    "quarto-search-options"
  );
  if (searchOptionEl) {
    const jsonStr = searchOptionEl.textContent;
    quartoSearchOptions = JSON.parse(jsonStr);
    language = quartoSearchOptions.language;
  }

  // note the search mode
  if (quartoSearchOptions.type === "overlay") {
    searchEl.classList.add("type-overlay");
  } else {
    searchEl.classList.add("type-textbox");
  }

  // Used to determine highlighting behavior for this page
  // A `q` query param is expected when the user follows a search
  // to this page
  const currentUrl = new URL(window.location);
  const query = currentUrl.searchParams.get(kQueryArg);
  const showSearchResults = currentUrl.searchParams.get(kResultsArg);
  const mainEl = window.document.querySelector("main");

  // highlight matches on the page
  if (query !== null && mainEl) {
    // perform any highlighting
    highlight(escapeRegExp(query), mainEl);

    // fix up the URL to remove the q query param
    const replacementUrl = new URL(window.location);
    replacementUrl.searchParams.delete(kQueryArg);
    window.history.replaceState({}, "", replacementUrl);
  }

  // function to clear highlighting on the page when the search query changes
  // (e.g. if the user edits the query or clears it)
  let highlighting = true;
  const resetHighlighting = (searchTerm) => {
    if (mainEl && highlighting && query !== null && searchTerm !== query) {
      clearHighlight(query, mainEl);
      highlighting = false;
    }
  };

  // Clear search highlighting when the user scrolls sufficiently
  const resetFn = () => {
    resetHighlighting("");
    window.removeEventListener("quarto-hrChanged", resetFn);
    window.removeEventListener("quarto-sectionChanged", resetFn);
  };

  // Register this event after the initial scrolling and settling of events
  // on the page
  window.addEventListener("quarto-hrChanged", resetFn);
  window.addEventListener("quarto-sectionChanged", resetFn);

  // Responsively switch to overlay mode if the search is present on the navbar
  // Note that switching the sidebar to overlay mode requires more coordinate (not just
  // the media query since we generate different HTML for sidebar overlays than we do
  // for sidebar input UI)
  const detachedMediaQuery =
    quartoSearchOptions.type === "overlay" ? "all" : "(max-width: 991px)";

  // If configured, include the analytics client to send insights
  const plugins = configurePlugins(quartoSearchOptions);

  let lastState = null;
  const { setIsOpen, setQuery, setCollections } = autocomplete({
    container: searchEl,
    detachedMediaQuery: detachedMediaQuery,
    defaultActiveItemId: 0,
    panelContainer: "#quarto-search-results",
    panelPlacement: quartoSearchOptions["panel-placement"],
    debug: false,
    openOnFocus: true,
    plugins,
    classNames: {
      form: "d-flex",
    },
    translations: {
      clearButtonTitle: language["search-clear-button-title"],
      detachedCancelButtonText: language["search-detached-cancel-button-title"],
      submitButtonTitle: language["search-submit-button-title"],
    },
    initialState: {
      query,
    },
    getItemUrl({ item }) {
      return item.href;
    },
    onStateChange({ state }) {
      // Perhaps reset highlighting
      resetHighlighting(state.query);

      // If the panel just opened, ensure the panel is positioned properly
      if (state.isOpen) {
        if (lastState && !lastState.isOpen) {
          setTimeout(() => {
            positionPanel(quartoSearchOptions["panel-placement"]);
          }, 150);
        }
      }

      // Perhaps show the copy link
      showCopyLink(state.query, quartoSearchOptions);

      lastState = state;
    },
    reshape({ sources, state }) {
      return sources.map((source) => {
        try {
          const items = source.getItems();

          // Validate the items
          validateItems(items);

          // group the items by document
          const groupedItems = new Map();
          items.forEach((item) => {
            const hrefParts = item.href.split("#");
            const baseHref = hrefParts[0];
            const isDocumentItem = hrefParts.length === 1;

            const items = groupedItems.get(baseHref);
            if (!items) {
              groupedItems.set(baseHref, [item]);
            } else {
              // If the href for this item matches the document
              // exactly, place this item first as it is the item that represents
              // the document itself
              if (isDocumentItem) {
                items.unshift(item);
              } else {
                items.push(item);
              }
              groupedItems.set(baseHref, items);
            }
          });

          const reshapedItems = [];
          let count = 1;
          for (const [_key, value] of groupedItems) {
            const firstItem = value[0];
            reshapedItems.push({
              ...firstItem,
              type: kItemTypeDoc,
            });

            const collapseMatches = quartoSearchOptions["collapse-after"];
            const collapseCount =
              typeof collapseMatches === "number" ? collapseMatches : 1;

            if (value.length > 1) {
              const target = `search-more-${count}`;
              const isExpanded =
                state.context.expanded &&
                state.context.expanded.includes(target);

              const remainingCount = value.length - collapseCount;

              for (let i = 1; i < value.length; i++) {
                if (collapseMatches && i === collapseCount) {
```
if (collapseMatches && i === collapseCount) {
|
184 |
-
reshapedItems.push({
|
185 |
-
target,
|
186 |
-
title: isExpanded
|
187 |
-
? language["search-hide-matches-text"]
|
188 |
-
: remainingCount === 1
|
189 |
-
? `${remainingCount} ${language["search-more-match-text"]}`
|
190 |
-
: `${remainingCount} ${language["search-more-matches-text"]}`,
|
191 |
-
type: kItemTypeMore,
|
192 |
-
href: kItemTypeMoreHref,
|
193 |
-
});
|
194 |
-
}
|
195 |
-
|
196 |
-
if (isExpanded || !collapseMatches || i < collapseCount) {
|
197 |
-
reshapedItems.push({
|
198 |
-
...value[i],
|
199 |
-
type: kItemTypeItem,
|
200 |
-
target,
|
201 |
-
});
|
202 |
-
}
|
203 |
-
}
|
204 |
-
}
|
205 |
-
count += 1;
|
206 |
-
}
|
207 |
-
|
208 |
-
return {
|
209 |
-
...source,
|
210 |
-
getItems() {
|
211 |
-
return reshapedItems;
|
212 |
-
},
|
213 |
-
};
|
214 |
-
} catch (error) {
|
215 |
-
// Some form of error occurred
|
216 |
-
return {
|
217 |
-
...source,
|
218 |
-
getItems() {
|
219 |
-
return [
|
220 |
-
{
|
221 |
-
title: error.name || "An Error Occurred While Searching",
|
222 |
-
text:
|
223 |
-
error.message ||
|
224 |
-
"An unknown error occurred while attempting to perform the requested search.",
|
225 |
-
type: kItemTypeError,
|
226 |
-
},
|
227 |
-
];
|
228 |
-
},
|
229 |
-
};
|
230 |
-
}
|
231 |
-
});
|
232 |
-
},
|
233 |
-
navigator: {
|
234 |
-
navigate({ itemUrl }) {
|
235 |
-
if (itemUrl !== offsetURL(kItemTypeMoreHref)) {
|
236 |
-
window.location.assign(itemUrl);
|
237 |
-
}
|
238 |
-
},
|
239 |
-
navigateNewTab({ itemUrl }) {
|
240 |
-
if (itemUrl !== offsetURL(kItemTypeMoreHref)) {
|
241 |
-
const windowReference = window.open(itemUrl, "_blank", "noopener");
|
242 |
-
if (windowReference) {
|
243 |
-
windowReference.focus();
|
244 |
-
}
|
245 |
-
}
|
246 |
-
},
|
247 |
-
navigateNewWindow({ itemUrl }) {
|
248 |
-
if (itemUrl !== offsetURL(kItemTypeMoreHref)) {
|
249 |
-
window.open(itemUrl, "_blank", "noopener");
|
250 |
-
}
|
251 |
-
},
|
252 |
-
},
|
253 |
-
getSources({ state, setContext, setActiveItemId, refresh }) {
|
254 |
-
return [
|
255 |
-
{
|
256 |
-
sourceId: "documents",
|
257 |
-
getItemUrl({ item }) {
|
258 |
-
if (item.href) {
|
259 |
-
return offsetURL(item.href);
|
260 |
-
} else {
|
261 |
-
return undefined;
|
262 |
-
}
|
263 |
-
},
|
264 |
-
onSelect({
|
265 |
-
item,
|
266 |
-
state,
|
267 |
-
setContext,
|
268 |
-
setIsOpen,
|
269 |
-
setActiveItemId,
|
270 |
-
refresh,
|
271 |
-
}) {
|
272 |
-
if (item.type === kItemTypeMore) {
|
273 |
-
toggleExpanded(item, state, setContext, setActiveItemId, refresh);
|
274 |
-
|
275 |
-
// Toggle more
|
276 |
-
setIsOpen(true);
|
277 |
-
}
|
278 |
-
},
|
279 |
-
getItems({ query }) {
|
280 |
-
if (query === null || query === "") {
|
281 |
-
return [];
|
282 |
-
}
|
283 |
-
|
284 |
-
const limit = quartoSearchOptions.limit;
|
285 |
-
if (quartoSearchOptions.algolia) {
|
286 |
-
return algoliaSearch(query, limit, quartoSearchOptions.algolia);
|
287 |
-
} else {
|
288 |
-
// Fuse search options
|
289 |
-
const fuseSearchOptions = {
|
290 |
-
isCaseSensitive: false,
|
291 |
-
shouldSort: true,
|
292 |
-
minMatchCharLength: 2,
|
293 |
-
limit: limit,
|
294 |
-
};
|
295 |
-
|
296 |
-
return readSearchData().then(function (fuse) {
|
297 |
-
return fuseSearch(query, fuse, fuseSearchOptions);
|
298 |
-
});
|
299 |
-
}
|
300 |
-
},
|
301 |
-
templates: {
|
302 |
-
noResults({ createElement }) {
|
303 |
-
const hasQuery = lastState.query;
|
304 |
-
|
305 |
-
return createElement(
|
306 |
-
"div",
|
307 |
-
{
|
308 |
-
class: `quarto-search-no-results${
|
309 |
-
hasQuery ? "" : " no-query"
|
310 |
-
}`,
|
311 |
-
},
|
312 |
-
language["search-no-results-text"]
|
313 |
-
);
|
314 |
-
},
|
315 |
-
header({ items, createElement }) {
|
316 |
-
// count the documents
|
317 |
-
const count = items.filter((item) => {
|
318 |
-
return item.type === kItemTypeDoc;
|
319 |
-
}).length;
|
320 |
-
|
321 |
-
if (count > 0) {
|
322 |
-
return createElement(
|
323 |
-
"div",
|
324 |
-
{ class: "search-result-header" },
|
325 |
-
`${count} ${language["search-matching-documents-text"]}`
|
326 |
-
);
|
327 |
-
} else {
|
328 |
-
return createElement(
|
329 |
-
"div",
|
330 |
-
{ class: "search-result-header-no-results" },
|
331 |
-
``
|
332 |
-
);
|
333 |
-
}
|
334 |
-
},
|
335 |
-
footer({ _items, createElement }) {
|
336 |
-
if (
|
337 |
-
quartoSearchOptions.algolia &&
|
338 |
-
quartoSearchOptions.algolia["show-logo"]
|
339 |
-
) {
|
340 |
-
const libDir = quartoSearchOptions.algolia["libDir"];
|
341 |
-
const logo = createElement("img", {
|
342 |
-
src: offsetURL(
|
343 |
-
`${libDir}/quarto-search/search-by-algolia.svg`
|
344 |
-
),
|
345 |
-
class: "algolia-search-logo",
|
346 |
-
});
|
347 |
-
return createElement(
|
348 |
-
"a",
|
349 |
-
{ href: "http://www.algolia.com/" },
|
350 |
-
logo
|
351 |
-
);
|
352 |
-
}
|
353 |
-
},
|
354 |
-
|
355 |
-
item({ item, createElement }) {
|
356 |
-
return renderItem(
|
357 |
-
item,
|
358 |
-
createElement,
|
359 |
-
state,
|
360 |
-
setActiveItemId,
|
361 |
-
setContext,
|
362 |
-
refresh
|
363 |
-
);
|
364 |
-
},
|
365 |
-
},
|
366 |
-
},
|
367 |
-
];
|
368 |
-
},
|
369 |
-
});
|
370 |
-
|
371 |
-
window.quartoOpenSearch = () => {
|
372 |
-
setIsOpen(false);
|
373 |
-
setIsOpen(true);
|
374 |
-
focusSearchInput();
|
375 |
-
};
|
376 |
-
|
377 |
-
// Remove the labeleledby attribute since it is pointing
|
378 |
-
// to a non-existent label
|
379 |
-
if (quartoSearchOptions.type === "overlay") {
|
380 |
-
const inputEl = window.document.querySelector(
|
381 |
-
"#quarto-search .aa-Autocomplete"
|
382 |
-
);
|
383 |
-
if (inputEl) {
|
384 |
-
inputEl.removeAttribute("aria-labelledby");
|
385 |
-
}
|
386 |
-
}
|
387 |
-
|
388 |
-
// If the main document scrolls dismiss the search results
|
389 |
-
// (otherwise, since they're floating in the document they can scroll with the document)
|
390 |
-
window.document.body.onscroll = () => {
|
391 |
-
setIsOpen(false);
|
392 |
-
};
|
393 |
-
|
394 |
-
if (showSearchResults) {
|
395 |
-
setIsOpen(true);
|
396 |
-
focusSearchInput();
|
397 |
-
}
|
398 |
-
});
|
399 |
-
|
400 |
-
function configurePlugins(quartoSearchOptions) {
|
401 |
-
const autocompletePlugins = [];
|
402 |
-
const algoliaOptions = quartoSearchOptions.algolia;
|
403 |
-
if (
|
404 |
-
algoliaOptions &&
|
405 |
-
algoliaOptions["analytics-events"] &&
|
406 |
-
algoliaOptions["search-only-api-key"] &&
|
407 |
-
algoliaOptions["application-id"]
|
408 |
-
) {
|
409 |
-
const apiKey = algoliaOptions["search-only-api-key"];
|
410 |
-
const appId = algoliaOptions["application-id"];
|
411 |
-
|
412 |
-
// Aloglia insights may not be loaded because they require cookie consent
|
413 |
-
// Use deferred loading so events will start being recorded when/if consent
|
414 |
-
// is granted.
|
415 |
-
const algoliaInsightsDeferredPlugin = deferredLoadPlugin(() => {
|
416 |
-
if (
|
417 |
-
window.aa &&
|
418 |
-
window["@algolia/autocomplete-plugin-algolia-insights"]
|
419 |
-
) {
|
420 |
-
window.aa("init", {
|
421 |
-
appId,
|
422 |
-
apiKey,
|
423 |
-
useCookie: true,
|
424 |
-
});
|
425 |
-
|
426 |
-
const { createAlgoliaInsightsPlugin } =
|
427 |
-
window["@algolia/autocomplete-plugin-algolia-insights"];
|
428 |
-
// Register the insights client
|
429 |
-
const algoliaInsightsPlugin = createAlgoliaInsightsPlugin({
|
430 |
-
insightsClient: window.aa,
|
431 |
-
onItemsChange({ insights, insightsEvents }) {
|
432 |
-
const events = insightsEvents.map((event) => {
|
433 |
-
const maxEvents = event.objectIDs.slice(0, 20);
|
434 |
-
return {
|
435 |
-
...event,
|
436 |
-
objectIDs: maxEvents,
|
437 |
-
};
|
438 |
-
});
|
439 |
-
|
440 |
-
insights.viewedObjectIDs(...events);
|
441 |
-
},
|
442 |
-
});
|
443 |
-
return algoliaInsightsPlugin;
|
444 |
-
}
|
445 |
-
});
|
446 |
-
|
447 |
-
// Add the plugin
|
448 |
-
autocompletePlugins.push(algoliaInsightsDeferredPlugin);
|
449 |
-
return autocompletePlugins;
|
450 |
-
}
|
451 |
-
}
|
452 |
-
|
453 |
-
// For plugins that may not load immediately, create a wrapper
|
454 |
-
// plugin and forward events and plugin data once the plugin
|
455 |
-
// is initialized. This is useful for cases like cookie consent
|
456 |
-
// which may prevent the analytics insights event plugin from initializing
|
457 |
-
// immediately.
|
458 |
-
function deferredLoadPlugin(createPlugin) {
|
459 |
-
let plugin = undefined;
|
460 |
-
let subscribeObj = undefined;
|
461 |
-
const wrappedPlugin = () => {
|
462 |
-
if (!plugin && subscribeObj) {
|
463 |
-
plugin = createPlugin();
|
464 |
-
if (plugin && plugin.subscribe) {
|
465 |
-
plugin.subscribe(subscribeObj);
|
466 |
-
}
|
467 |
-
}
|
468 |
-
return plugin;
|
469 |
-
};
|
470 |
-
|
471 |
-
return {
|
472 |
-
subscribe: (obj) => {
|
473 |
-
subscribeObj = obj;
|
474 |
-
},
|
475 |
-
onStateChange: (obj) => {
|
476 |
-
const plugin = wrappedPlugin();
|
477 |
-
if (plugin && plugin.onStateChange) {
|
478 |
-
plugin.onStateChange(obj);
|
479 |
-
}
|
480 |
-
},
|
481 |
-
onSubmit: (obj) => {
|
482 |
-
const plugin = wrappedPlugin();
|
483 |
-
if (plugin && plugin.onSubmit) {
|
484 |
-
plugin.onSubmit(obj);
|
485 |
-
}
|
486 |
-
},
|
487 |
-
onReset: (obj) => {
|
488 |
-
const plugin = wrappedPlugin();
|
489 |
-
if (plugin && plugin.onReset) {
|
490 |
-
plugin.onReset(obj);
|
491 |
-
}
|
492 |
-
},
|
493 |
-
getSources: (obj) => {
|
494 |
-
const plugin = wrappedPlugin();
|
495 |
-
if (plugin && plugin.getSources) {
|
496 |
-
return plugin.getSources(obj);
|
497 |
-
} else {
|
498 |
-
return Promise.resolve([]);
|
499 |
-
}
|
500 |
-
},
|
501 |
-
data: (obj) => {
|
502 |
-
const plugin = wrappedPlugin();
|
503 |
-
if (plugin && plugin.data) {
|
504 |
-
plugin.data(obj);
|
505 |
-
}
|
506 |
-
},
|
507 |
-
};
|
508 |
-
}
|
509 |
-
|
510 |
-
function validateItems(items) {
|
511 |
-
// Validate the first item
|
512 |
-
if (items.length > 0) {
|
513 |
-
const item = items[0];
|
514 |
-
const missingFields = [];
|
515 |
-
if (item.href == undefined) {
|
516 |
-
missingFields.push("href");
|
517 |
-
}
|
518 |
-
if (!item.title == undefined) {
|
519 |
-
missingFields.push("title");
|
520 |
-
}
|
521 |
-
if (!item.text == undefined) {
|
522 |
-
missingFields.push("text");
|
523 |
-
}
|
524 |
-
|
525 |
-
if (missingFields.length === 1) {
|
526 |
-
throw {
|
527 |
-
name: `Error: Search index is missing the <code>${missingFields[0]}</code> field.`,
|
528 |
-
message: `The items being returned for this search do not include all the required fields. Please ensure that your index items include the <code>${missingFields[0]}</code> field or use <code>index-fields</code> in your <code>_quarto.yml</code> file to specify the field names.`,
|
529 |
-
};
|
530 |
-
} else if (missingFields.length > 1) {
|
531 |
-
const missingFieldList = missingFields
|
532 |
-
.map((field) => {
|
533 |
-
return `<code>${field}</code>`;
|
534 |
-
})
|
535 |
-
.join(", ");
|
536 |
-
|
537 |
-
throw {
|
538 |
-
name: `Error: Search index is missing the following fields: ${missingFieldList}.`,
|
539 |
-
message: `The items being returned for this search do not include all the required fields. Please ensure that your index items includes the following fields: ${missingFieldList}, or use <code>index-fields</code> in your <code>_quarto.yml</code> file to specify the field names.`,
|
540 |
-
};
|
541 |
-
}
|
542 |
-
}
|
543 |
-
}
|
544 |
-
|
545 |
-
let lastQuery = null;
|
546 |
-
function showCopyLink(query, options) {
|
547 |
-
const language = options.language;
|
548 |
-
lastQuery = query;
|
549 |
-
// Insert share icon
|
550 |
-
const inputSuffixEl = window.document.body.querySelector(
|
551 |
-
".aa-Form .aa-InputWrapperSuffix"
|
552 |
-
);
|
553 |
-
|
554 |
-
if (inputSuffixEl) {
|
555 |
-
let copyButtonEl = window.document.body.querySelector(
|
556 |
-
".aa-Form .aa-InputWrapperSuffix .aa-CopyButton"
|
557 |
-
);
|
558 |
-
|
559 |
-
if (copyButtonEl === null) {
|
560 |
-
copyButtonEl = window.document.createElement("button");
|
561 |
-
copyButtonEl.setAttribute("class", "aa-CopyButton");
|
562 |
-
copyButtonEl.setAttribute("type", "button");
|
563 |
-
copyButtonEl.setAttribute("title", language["search-copy-link-title"]);
|
564 |
-
copyButtonEl.onmousedown = (e) => {
|
565 |
-
e.preventDefault();
|
566 |
-
e.stopPropagation();
|
567 |
-
};
|
568 |
-
|
569 |
-
const linkIcon = "bi-clipboard";
|
570 |
-
const checkIcon = "bi-check2";
|
571 |
-
|
572 |
-
const shareIconEl = window.document.createElement("i");
|
573 |
-
shareIconEl.setAttribute("class", `bi ${linkIcon}`);
|
574 |
-
copyButtonEl.appendChild(shareIconEl);
|
575 |
-
inputSuffixEl.prepend(copyButtonEl);
|
576 |
-
|
577 |
-
const clipboard = new window.ClipboardJS(".aa-CopyButton", {
|
578 |
-
text: function (_trigger) {
|
579 |
-
const copyUrl = new URL(window.location);
|
580 |
-
copyUrl.searchParams.set(kQueryArg, lastQuery);
|
581 |
-
copyUrl.searchParams.set(kResultsArg, "1");
|
582 |
-
return copyUrl.toString();
|
583 |
-
},
|
584 |
-
});
|
585 |
-
clipboard.on("success", function (e) {
|
586 |
-
// Focus the input
|
587 |
-
|
588 |
-
// button target
|
589 |
-
const button = e.trigger;
|
590 |
-
const icon = button.querySelector("i.bi");
|
591 |
-
|
592 |
-
// flash "checked"
|
593 |
-
icon.classList.add(checkIcon);
|
594 |
-
icon.classList.remove(linkIcon);
|
595 |
-
setTimeout(function () {
|
596 |
-
icon.classList.remove(checkIcon);
|
597 |
-
icon.classList.add(linkIcon);
|
598 |
-
}, 1000);
|
599 |
-
});
|
600 |
-
}
|
601 |
-
|
602 |
-
// If there is a query, show the link icon
|
603 |
-
if (copyButtonEl) {
|
604 |
-
if (lastQuery && options["copy-button"]) {
|
605 |
-
copyButtonEl.style.display = "flex";
|
606 |
-
} else {
|
607 |
-
copyButtonEl.style.display = "none";
|
608 |
-
}
|
609 |
-
}
|
610 |
-
}
|
611 |
-
}
|
612 |
-
|
613 |
-
/* Search Index Handling */
|
614 |
-
// create the index
|
615 |
-
var fuseIndex = undefined;
|
616 |
-
async function readSearchData() {
|
617 |
-
// Initialize the search index on demand
|
618 |
-
if (fuseIndex === undefined) {
|
619 |
-
// create fuse index
|
620 |
-
const options = {
|
621 |
-
keys: [
|
622 |
-
{ name: "title", weight: 20 },
|
623 |
-
{ name: "section", weight: 20 },
|
624 |
-
{ name: "text", weight: 10 },
|
625 |
-
],
|
626 |
-
ignoreLocation: true,
|
627 |
-
threshold: 0.1,
|
628 |
-
};
|
629 |
-
const fuse = new window.Fuse([], options);
|
630 |
-
|
631 |
-
// fetch the main search.json
|
632 |
-
const response = await fetch(offsetURL("search.json"));
|
633 |
-
if (response.status == 200) {
|
634 |
-
return response.json().then(function (searchDocs) {
|
635 |
-
searchDocs.forEach(function (searchDoc) {
|
636 |
-
fuse.add(searchDoc);
|
637 |
-
});
|
638 |
-
fuseIndex = fuse;
|
639 |
-
return fuseIndex;
|
640 |
-
});
|
641 |
-
} else {
|
642 |
-
return Promise.reject(
|
643 |
-
new Error(
|
644 |
-
"Unexpected status from search index request: " + response.status
|
645 |
-
)
|
646 |
-
);
|
647 |
-
}
|
648 |
-
}
|
649 |
-
return fuseIndex;
|
650 |
-
}
|
651 |
-
|
652 |
-
function inputElement() {
|
653 |
-
return window.document.body.querySelector(".aa-Form .aa-Input");
|
654 |
-
}
|
655 |
-
|
656 |
-
function focusSearchInput() {
|
657 |
-
setTimeout(() => {
|
658 |
-
const inputEl = inputElement();
|
659 |
-
if (inputEl) {
|
660 |
-
inputEl.focus();
|
661 |
-
}
|
662 |
-
}, 50);
|
663 |
-
}
|
664 |
-
|
665 |
-
/* Panels */
|
666 |
-
const kItemTypeDoc = "document";
|
667 |
-
const kItemTypeMore = "document-more";
|
668 |
-
const kItemTypeItem = "document-item";
|
669 |
-
const kItemTypeError = "error";
|
670 |
-
|
671 |
-
function renderItem(
|
672 |
-
item,
|
673 |
-
createElement,
|
674 |
-
state,
|
675 |
-
setActiveItemId,
|
676 |
-
setContext,
|
677 |
-
refresh
|
678 |
-
) {
|
679 |
-
switch (item.type) {
|
680 |
-
case kItemTypeDoc:
|
681 |
-
return createDocumentCard(
|
682 |
-
createElement,
|
683 |
-
"file-richtext",
|
684 |
-
item.title,
|
685 |
-
item.section,
|
686 |
-
item.text,
|
687 |
-
item.href
|
688 |
-
);
|
689 |
-
case kItemTypeMore:
|
690 |
-
return createMoreCard(
|
691 |
-
createElement,
|
692 |
-
item,
|
693 |
-
state,
|
694 |
-
setActiveItemId,
|
695 |
-
setContext,
|
696 |
-
refresh
|
697 |
-
);
|
698 |
-
case kItemTypeItem:
|
699 |
-
return createSectionCard(
|
700 |
-
createElement,
|
701 |
-
item.section,
|
702 |
-
item.text,
|
703 |
-
item.href
|
704 |
-
);
|
705 |
-
case kItemTypeError:
|
706 |
-
return createErrorCard(createElement, item.title, item.text);
|
707 |
-
default:
|
708 |
-
return undefined;
|
709 |
-
}
|
710 |
-
}
|
711 |
-
|
712 |
-
function createDocumentCard(createElement, icon, title, section, text, href) {
|
713 |
-
const iconEl = createElement("i", {
|
714 |
-
class: `bi bi-${icon} search-result-icon`,
|
715 |
-
});
|
716 |
-
const titleEl = createElement("p", { class: "search-result-title" }, title);
|
717 |
-
const titleContainerEl = createElement(
|
718 |
-
"div",
|
719 |
-
{ class: "search-result-title-container" },
|
720 |
-
[iconEl, titleEl]
|
721 |
-
);
|
722 |
-
|
723 |
-
const textEls = [];
|
724 |
-
if (section) {
|
725 |
-
const sectionEl = createElement(
|
726 |
-
"p",
|
727 |
-
{ class: "search-result-section" },
|
728 |
-
section
|
729 |
-
);
|
730 |
-
textEls.push(sectionEl);
|
731 |
-
}
|
732 |
-
const descEl = createElement("p", {
|
733 |
-
class: "search-result-text",
|
734 |
-
dangerouslySetInnerHTML: {
|
735 |
-
__html: text,
|
736 |
-
},
|
737 |
-
});
|
738 |
-
textEls.push(descEl);
|
739 |
-
|
740 |
-
const textContainerEl = createElement(
|
741 |
-
"div",
|
742 |
-
{ class: "search-result-text-container" },
|
743 |
-
textEls
|
744 |
-
);
|
745 |
-
|
746 |
-
const containerEl = createElement(
|
747 |
-
"div",
|
748 |
-
{
|
749 |
-
class: "search-result-container",
|
750 |
-
},
|
751 |
-
[titleContainerEl, textContainerEl]
|
752 |
-
);
|
753 |
-
|
754 |
-
const linkEl = createElement(
|
755 |
-
"a",
|
756 |
-
{
|
757 |
-
href: offsetURL(href),
|
758 |
-
class: "search-result-link",
|
759 |
-
},
|
760 |
-
containerEl
|
761 |
-
);
|
762 |
-
|
763 |
-
const classes = ["search-result-doc", "search-item"];
|
764 |
-
if (!section) {
|
765 |
-
classes.push("document-selectable");
|
766 |
-
}
|
767 |
-
|
768 |
-
return createElement(
|
769 |
-
"div",
|
770 |
-
{
|
771 |
-
class: classes.join(" "),
|
772 |
-
},
|
773 |
-
linkEl
|
774 |
-
);
|
775 |
-
}
|
776 |
-
|
777 |
-
function createMoreCard(
|
778 |
-
createElement,
|
779 |
-
item,
|
780 |
-
state,
|
781 |
-
setActiveItemId,
|
782 |
-
setContext,
|
783 |
-
refresh
|
784 |
-
) {
|
785 |
-
const moreCardEl = createElement(
|
786 |
-
"div",
|
787 |
-
{
|
788 |
-
class: "search-result-more search-item",
|
789 |
-
onClick: (e) => {
|
790 |
-
// Handle expanding the sections by adding the expanded
|
791 |
-
// section to the list of expanded sections
|
792 |
-
toggleExpanded(item, state, setContext, setActiveItemId, refresh);
|
793 |
-
e.stopPropagation();
|
794 |
-
},
|
795 |
-
},
|
796 |
-
item.title
|
797 |
-
);
|
798 |
-
|
799 |
-
return moreCardEl;
|
800 |
-
}
|
801 |
-
|
802 |
-
function toggleExpanded(item, state, setContext, setActiveItemId, refresh) {
|
803 |
-
const expanded = state.context.expanded || [];
|
804 |
-
if (expanded.includes(item.target)) {
|
805 |
-
setContext({
|
806 |
-
expanded: expanded.filter((target) => target !== item.target),
|
807 |
-
});
|
808 |
-
} else {
|
809 |
-
setContext({ expanded: [...expanded, item.target] });
|
810 |
-
}
|
811 |
-
|
812 |
-
refresh();
|
813 |
-
setActiveItemId(item.__autocomplete_id);
|
814 |
-
}
|
815 |
-
|
816 |
-
function createSectionCard(createElement, section, text, href) {
|
817 |
-
const sectionEl = createSection(createElement, section, text, href);
|
818 |
-
return createElement(
|
819 |
-
"div",
|
820 |
-
{
|
821 |
-
class: "search-result-doc-section search-item",
|
822 |
-
},
|
823 |
-
sectionEl
|
824 |
-
);
|
825 |
-
}
|
826 |
-
|
827 |
-
function createSection(createElement, title, text, href) {
|
828 |
-
const descEl = createElement("p", {
|
829 |
-
class: "search-result-text",
|
830 |
-
dangerouslySetInnerHTML: {
|
831 |
-
__html: text,
|
832 |
-
},
|
833 |
-
});
|
834 |
-
|
835 |
-
const titleEl = createElement("p", { class: "search-result-section" }, title);
|
836 |
-
const linkEl = createElement(
|
837 |
-
"a",
|
838 |
-
{
|
839 |
-
href: offsetURL(href),
|
840 |
-
class: "search-result-link",
|
841 |
-
},
|
842 |
-
[titleEl, descEl]
|
843 |
-
);
|
844 |
-
return linkEl;
|
845 |
-
}
|
846 |
-
|
847 |
-
function createErrorCard(createElement, title, text) {
|
848 |
-
const descEl = createElement("p", {
|
849 |
-
class: "search-error-text",
|
850 |
-
dangerouslySetInnerHTML: {
|
851 |
-
__html: text,
|
852 |
-
},
|
853 |
-
});
|
854 |
-
|
855 |
-
const titleEl = createElement("p", {
|
856 |
-
class: "search-error-title",
|
857 |
-
dangerouslySetInnerHTML: {
|
858 |
-
__html: `<i class="bi bi-exclamation-circle search-error-icon"></i> ${title}`,
|
859 |
-
},
|
860 |
-
});
|
861 |
-
const errorEl = createElement("div", { class: "search-error" }, [
|
862 |
-
titleEl,
|
863 |
-
descEl,
|
864 |
-
]);
|
865 |
-
return errorEl;
|
866 |
-
}
|
867 |
-
|
868 |
-
function positionPanel(pos) {
|
869 |
-
const panelEl = window.document.querySelector(
|
870 |
-
"#quarto-search-results .aa-Panel"
|
871 |
-
);
|
872 |
-
const inputEl = window.document.querySelector(
|
873 |
-
"#quarto-search .aa-Autocomplete"
|
874 |
-
);
|
875 |
-
|
876 |
-
if (panelEl && inputEl) {
|
877 |
-
panelEl.style.top = `${Math.round(panelEl.offsetTop)}px`;
|
878 |
-
if (pos === "start") {
|
879 |
-
panelEl.style.left = `${Math.round(inputEl.left)}px`;
|
880 |
-
} else {
|
881 |
-
panelEl.style.right = `${Math.round(inputEl.offsetRight)}px`;
|
882 |
-
}
|
883 |
-
}
|
884 |
-
}
|
885 |
-
|
886 |
-
/* Highlighting */
|
887 |
-
// highlighting functions
|
888 |
-
function highlightMatch(query, text) {
|
889 |
-
if (text) {
|
890 |
-
const start = text.toLowerCase().indexOf(query.toLowerCase());
|
891 |
-
if (start !== -1) {
|
892 |
-
const startMark = "<mark class='search-match'>";
|
893 |
-
const endMark = "</mark>";
|
894 |
-
|
895 |
-
const end = start + query.length;
|
896 |
-
text =
|
897 |
-
text.slice(0, start) +
|
898 |
-
startMark +
|
899 |
-
text.slice(start, end) +
|
900 |
-
endMark +
|
901 |
-
text.slice(end);
|
902 |
-
const startInfo = clipStart(text, start);
|
903 |
-
const endInfo = clipEnd(
|
904 |
-
text,
|
905 |
-
startInfo.position + startMark.length + endMark.length
|
906 |
-
);
|
907 |
-
text =
|
908 |
-
startInfo.prefix +
|
909 |
-
text.slice(startInfo.position, endInfo.position) +
|
910 |
-
endInfo.suffix;
|
911 |
-
|
912 |
-
return text;
|
913 |
-
} else {
|
914 |
-
return text;
|
915 |
-
}
|
916 |
-
} else {
|
917 |
-
return text;
|
918 |
-
}
|
919 |
-
}
|
920 |
-
|
921 |
-
function clipStart(text, pos) {
|
922 |
-
const clipStart = pos - 50;
|
923 |
-
if (clipStart < 0) {
|
924 |
-
// This will just return the start of the string
|
925 |
-
return {
|
926 |
-
position: 0,
|
927 |
-
prefix: "",
|
928 |
-
};
|
929 |
-
} else {
|
930 |
-
// We're clipping before the start of the string, walk backwards to the first space.
|
931 |
-
const spacePos = findSpace(text, pos, -1);
|
932 |
-
return {
|
933 |
-
position: spacePos.position,
|
934 |
-
prefix: "",
|
935 |
-
};
|
936 |
-
}
|
937 |
-
}
|
938 |
-
|
939 |
-
function clipEnd(text, pos) {
|
940 |
-
const clipEnd = pos + 200;
|
941 |
-
if (clipEnd > text.length) {
|
942 |
-
return {
|
943 |
-
position: text.length,
|
944 |
-
suffix: "",
|
945 |
-
};
|
946 |
-
} else {
|
947 |
-
const spacePos = findSpace(text, clipEnd, 1);
|
948 |
-
return {
|
949 |
-
position: spacePos.position,
|
950 |
-
suffix: spacePos.clipped ? "…" : "",
|
951 |
-
};
|
952 |
-
}
|
953 |
-
}
|
954 |
-
|
955 |
-
function findSpace(text, start, step) {
|
956 |
-
let stepPos = start;
|
957 |
-
while (stepPos > -1 && stepPos < text.length) {
|
958 |
-
const char = text[stepPos];
|
959 |
-
if (char === " " || char === "," || char === ":") {
|
960 |
-
return {
|
961 |
-
position: step === 1 ? stepPos : stepPos - step,
|
962 |
-
clipped: stepPos > 1 && stepPos < text.length,
|
963 |
-
};
|
964 |
-
}
|
965 |
-
stepPos = stepPos + step;
|
966 |
-
}
|
967 |
-
|
968 |
-
return {
|
969 |
-
position: stepPos - step,
|
970 |
-
clipped: false,
|
971 |
-
};
|
972 |
-
}
|
973 |
-
|
974 |
-
// removes highlighting as implemented by the mark tag
|
975 |
-
function clearHighlight(searchterm, el) {
|
976 |
-
const childNodes = el.childNodes;
|
977 |
-
for (let i = childNodes.length - 1; i >= 0; i--) {
|
978 |
-
const node = childNodes[i];
|
979 |
-
if (node.nodeType === Node.ELEMENT_NODE) {
|
980 |
-
if (
|
981 |
-
node.tagName === "MARK" &&
|
982 |
-
node.innerText.toLowerCase() === searchterm.toLowerCase()
|
983 |
-
) {
|
984 |
-
el.replaceChild(document.createTextNode(node.innerText), node);
|
985 |
-
} else {
|
986 |
-
clearHighlight(searchterm, node);
|
987 |
-
}
|
988 |
-
}
|
989 |
-
}
|
990 |
-
}
|
991 |
-
|
992 |
-
function escapeRegExp(string) {
|
993 |
-
return string.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string
|
994 |
-
}
|
995 |
-
|
996 |
-
// highlight matches
|
997 |
-
function highlight(term, el) {
|
998 |
-
const termRegex = new RegExp(term, "ig");
|
999 |
-
const childNodes = el.childNodes;
|
1000 |
-
|
1001 |
-
// walk back to front avoid mutating elements in front of us
|
1002 |
-
for (let i = childNodes.length - 1; i >= 0; i--) {
|
1003 |
-
const node = childNodes[i];
|
1004 |
-
|
1005 |
-
if (node.nodeType === Node.TEXT_NODE) {
|
1006 |
-
// Search text nodes for text to highlight
|
1007 |
-
const text = node.nodeValue;
|
1008 |
-
|
1009 |
-
let startIndex = 0;
|
1010 |
-
let matchIndex = text.search(termRegex);
|
1011 |
-
if (matchIndex > -1) {
|
1012 |
-
const markFragment = document.createDocumentFragment();
|
1013 |
-
while (matchIndex > -1) {
|
1014 |
-
const prefix = text.slice(startIndex, matchIndex);
|
1015 |
-
markFragment.appendChild(document.createTextNode(prefix));
|
1016 |
-
|
1017 |
-
const mark = document.createElement("mark");
|
1018 |
-
mark.appendChild(
|
1019 |
-
document.createTextNode(
|
1020 |
-
text.slice(matchIndex, matchIndex + term.length)
|
1021 |
-
)
|
1022 |
-
);
|
1023 |
-
markFragment.appendChild(mark);
|
1024 |
-
|
1025 |
-
startIndex = matchIndex + term.length;
|
1026 |
-
matchIndex = text.slice(startIndex).search(new RegExp(term, "ig"));
|
1027 |
-
if (matchIndex > -1) {
|
1028 |
-
matchIndex = startIndex + matchIndex;
|
1029 |
-
}
|
1030 |
-
}
|
1031 |
-
if (startIndex < text.length) {
|
1032 |
-
markFragment.appendChild(
|
1033 |
-
document.createTextNode(text.slice(startIndex, text.length))
|
1034 |
-
);
|
1035 |
-
}
|
1036 |
-
|
1037 |
-
el.replaceChild(markFragment, node);
|
1038 |
-
}
|
1039 |
-
} else if (node.nodeType === Node.ELEMENT_NODE) {
|
1040 |
-
// recurse through elements
|
1041 |
-
highlight(term, node);
|
1042 |
-
}
|
1043 |
-
}
|
1044 |
-
}
|
1045 |
-
|
1046 |
-
/* Link Handling */
|
1047 |
-
// get the offset from this page for a given site root relative url
|
1048 |
-
function offsetURL(url) {
|
1049 |
-
var offset = getMeta("quarto:offset");
|
1050 |
-
return offset ? offset + url : url;
|
1051 |
-
}
|
1052 |
-
|
1053 |
-
// read a meta tag value
|
1054 |
-
function getMeta(metaName) {
|
1055 |
-
var metas = window.document.getElementsByTagName("meta");
|
1056 |
-
for (let i = 0; i < metas.length; i++) {
|
1057 |
-
if (metas[i].getAttribute("name") === metaName) {
|
1058 |
-
return metas[i].getAttribute("content");
|
1059 |
-
}
|
1060 |
-
}
|
1061 |
-
return "";
|
1062 |
-
}
|
1063 |
-
|
1064 |
-
function algoliaSearch(query, limit, algoliaOptions) {
|
1065 |
-
const { getAlgoliaResults } = window["@algolia/autocomplete-preset-algolia"];
|
1066 |
-
|
1067 |
-
const applicationId = algoliaOptions["application-id"];
|
1068 |
-
const searchOnlyApiKey = algoliaOptions["search-only-api-key"];
|
1069 |
-
const indexName = algoliaOptions["index-name"];
|
1070 |
-
const indexFields = algoliaOptions["index-fields"];
|
1071 |
-
const searchClient = window.algoliasearch(applicationId, searchOnlyApiKey);
|
1072 |
-
const searchParams = algoliaOptions["params"];
|
1073 |
-
const searchAnalytics = !!algoliaOptions["analytics-events"];
|
1074 |
-
|
1075 |
-
return getAlgoliaResults({
|
1076 |
-
searchClient,
|
1077 |
-
queries: [
|
1078 |
-
{
|
1079 |
-
indexName: indexName,
|
1080 |
-
query,
|
1081 |
-
params: {
|
1082 |
-
hitsPerPage: limit,
|
1083 |
-
clickAnalytics: searchAnalytics,
|
1084 |
-
...searchParams,
|
1085 |
-
},
|
1086 |
-
},
|
1087 |
-
],
|
1088 |
-
transformResponse: (response) => {
|
1089 |
-
if (!indexFields) {
|
1090 |
-
return response.hits.map((hit) => {
|
1091 |
-
return hit.map((item) => {
|
1092 |
-
return {
|
1093 |
-
...item,
|
1094 |
-
text: highlightMatch(query, item.text),
|
1095 |
-
};
|
1096 |
-
});
|
1097 |
-
});
|
1098 |
-
} else {
|
1099 |
-
const remappedHits = response.hits.map((hit) => {
|
1100 |
-
return hit.map((item) => {
|
1101 |
-
const newItem = { ...item };
|
1102 |
-
["href", "section", "title", "text"].forEach((keyName) => {
|
1103 |
-
const mappedName = indexFields[keyName];
|
1104 |
-
if (
|
1105 |
-
mappedName &&
|
1106 |
-
item[mappedName] !== undefined &&
|
1107 |
-
mappedName !== keyName
|
1108 |
-
) {
|
1109 |
-
newItem[keyName] = item[mappedName];
|
1110 |
-
delete newItem[mappedName];
|
1111 |
-
}
|
1112 |
-
});
|
1113 |
-
newItem.text = highlightMatch(query, newItem.text);
|
1114 |
-
return newItem;
|
1115 |
-
});
|
1116 |
-
});
|
1117 |
-
return remappedHits;
|
1118 |
-
}
|
1119 |
-
},
|
1120 |
-
});
|
1121 |
-
}
|
1122 |
-
|
1123 |
-
function fuseSearch(query, fuse, fuseOptions) {
|
1124 |
-
return fuse.search(query, fuseOptions).map((result) => {
|
1125 |
-
const addParam = (url, name, value) => {
|
1126 |
-
const anchorParts = url.split("#");
|
1127 |
-
const baseUrl = anchorParts[0];
|
1128 |
-
const sep = baseUrl.search("\\?") > 0 ? "&" : "?";
|
1129 |
-
anchorParts[0] = baseUrl + sep + name + "=" + value;
|
1130 |
-
return anchorParts.join("#");
|
1131 |
-
};
|
1132 |
-
|
1133 |
-
return {
|
1134 |
-
title: result.item.title,
|
1135 |
-
section: result.item.section,
|
1136 |
-
href: addParam(result.item.href, kQueryArg, query),
|
1137 |
-
text: highlightMatch(query, result.item.text),
|
1138 |
-
};
|
1139 |
-
});
|
1140 |
-
}
|
|
|
|
|
|
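
For reference, the Fuse-backed fallback deleted above amounts to a weighted match over `search.json` records with `href`/`title`/`section`/`text` fields. The sketch below is not Quarto's code: only the 20/20/10 field weights come from the `readSearchData` options above, while the scoring is a deliberately simplified Python stand-in for Fuse.js's fuzzy matcher.

```python
import json

# Field weights mirroring the Fuse options in the deleted readSearchData().
WEIGHTS = {"title": 20, "section": 20, "text": 10}

def search(docs, query, limit=20):
    """Rank docs by the summed weight of indexed fields containing the query."""
    q = query.lower()
    scored = []
    for doc in docs:
        score = sum(
            weight
            for field, weight in WEIGHTS.items()
            if q in str(doc.get(field, "")).lower()
        )
        if score > 0:
            scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:limit]]

# Toy search.json payload (the real file is generated by Quarto at render time).
docs = json.loads(
    '[{"href": "intro.html", "title": "Intro", "section": "", "text": "hello world"}]'
)
print(search(docs, "hello"))  # -> the single matching document
```
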
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_4chan_style.css
DELETED
@@ -1,104 +0,0 @@
-#parent #container {
-    background-color: #eef2ff;
-    padding: 17px;
-}
-
-#parent #container .reply {
-    background-color: rgb(214, 218, 240);
-    border-bottom-color: rgb(183, 197, 217);
-    border-bottom-style: solid;
-    border-bottom-width: 1px;
-    border-image-outset: 0;
-    border-image-repeat: stretch;
-    border-image-slice: 100%;
-    border-image-source: none;
-    border-image-width: 1;
-    border-left-color: rgb(0, 0, 0);
-    border-left-style: none;
-    border-left-width: 0px;
-    border-right-color: rgb(183, 197, 217);
-    border-right-style: solid;
-    border-right-width: 1px;
-    border-top-color: rgb(0, 0, 0);
-    border-top-style: none;
-    border-top-width: 0px;
-    color: rgb(0, 0, 0);
-    display: table;
-    font-family: arial, helvetica, sans-serif;
-    font-size: 13.3333px;
-    margin-bottom: 4px;
-    margin-left: 0px;
-    margin-right: 0px;
-    margin-top: 4px;
-    overflow-x: hidden;
-    overflow-y: hidden;
-    padding-bottom: 4px;
-    padding-left: 2px;
-    padding-right: 2px;
-    padding-top: 4px;
-}
-
-#parent #container .number {
-    color: rgb(0, 0, 0);
-    font-family: arial, helvetica, sans-serif;
-    font-size: 13.3333px;
-    width: 342.65px;
-    margin-right: 7px;
-}
-
-#parent #container .op {
-    color: rgb(0, 0, 0);
-    font-family: arial, helvetica, sans-serif;
-    font-size: 13.3333px;
-    margin-bottom: 8px;
-    margin-left: 0px;
-    margin-right: 0px;
-    margin-top: 4px;
-    overflow-x: hidden;
-    overflow-y: hidden;
-}
-
-#parent #container .op blockquote {
-    margin-left: 0px !important;
-}
-
-#parent #container .name {
-    color: rgb(17, 119, 67);
-    font-family: arial, helvetica, sans-serif;
-    font-size: 13.3333px;
-    font-weight: 700;
-    margin-left: 7px;
-}
-
-#parent #container .quote {
-    color: rgb(221, 0, 0);
-    font-family: arial, helvetica, sans-serif;
-    font-size: 13.3333px;
-    text-decoration-color: rgb(221, 0, 0);
-    text-decoration-line: underline;
-    text-decoration-style: solid;
-    text-decoration-thickness: auto;
-}
-
-#parent #container .greentext {
-    color: rgb(120, 153, 34);
-    font-family: arial, helvetica, sans-serif;
-    font-size: 13.3333px;
-}
-
-#parent #container blockquote {
-    margin: 0px !important;
-    margin-block-start: 1em;
-    margin-block-end: 1em;
-    margin-inline-start: 40px;
-    margin-inline-end: 40px;
-    margin-top: 13.33px !important;
-    margin-bottom: 13.33px !important;
-    margin-left: 40px !important;
-    margin-right: 40px !important;
-}
-
-#parent #container .message_4chan {
-    color: black;
-    border: none;
-}
spaces/Anustup/NS_AI_LABS/app-shared.py
DELETED
@@ -1,3 +0,0 @@
-# Run the app with no audio file restrictions
-from app import create_ui
-create_ui(-1, share=True)
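
Per the comment, `-1` is the sentinel for "no audio file restrictions" and `share=True` requests a public Gradio link. A hypothetical local variant might look like the following; note that interpreting the first argument as a length cap in seconds is an assumption inferred from the comment, not something the deleted file confirms.

```python
# Hypothetical variant: assumes create_ui's first argument is an audio-length
# cap in seconds, with -1 meaning "unlimited" as the comment above suggests.
from app import create_ui

create_ui(600, share=False)  # cap uploads at 10 minutes, no public link
```
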
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/hebrewprober.py
DELETED
@@ -1,316 +0,0 @@
|
|
1 |
-
######################## BEGIN LICENSE BLOCK ########################
|
2 |
-
# The Original Code is Mozilla Universal charset detector code.
|
3 |
-
#
|
4 |
-
# The Initial Developer of the Original Code is
|
5 |
-
# Shy Shalom
|
6 |
-
# Portions created by the Initial Developer are Copyright (C) 2005
|
7 |
-
# the Initial Developer. All Rights Reserved.
|
8 |
-
#
|
9 |
-
# Contributor(s):
|
10 |
-
# Mark Pilgrim - port to Python
|
11 |
-
#
|
12 |
-
# This library is free software; you can redistribute it and/or
|
13 |
-
# modify it under the terms of the GNU Lesser General Public
|
14 |
-
# License as published by the Free Software Foundation; either
|
15 |
-
# version 2.1 of the License, or (at your option) any later version.
|
16 |
-
#
|
17 |
-
# This library is distributed in the hope that it will be useful,
|
18 |
-
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
19 |
-
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
20 |
-
# Lesser General Public License for more details.
|
21 |
-
#
|
22 |
-
# You should have received a copy of the GNU Lesser General Public
|
23 |
-
# License along with this library; if not, write to the Free Software
|
24 |
-
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
|
25 |
-
# 02110-1301 USA
|
26 |
-
######################### END LICENSE BLOCK #########################
|
27 |
-
|
28 |
-
from typing import Optional, Union
|
29 |
-
|
30 |
-
from .charsetprober import CharSetProber
|
31 |
-
from .enums import ProbingState
|
32 |
-
from .sbcharsetprober import SingleByteCharSetProber
|
33 |
-
|
34 |
-
# This prober doesn't actually recognize a language or a charset.
|
35 |
-
# It is a helper prober for the use of the Hebrew model probers
|
36 |
-
|
37 |
-
### General ideas of the Hebrew charset recognition ###
|
38 |
-
#
|
39 |
-
# Four main charsets exist in Hebrew:
|
40 |
-
# "ISO-8859-8" - Visual Hebrew
|
41 |
-
# "windows-1255" - Logical Hebrew
|
42 |
-
# "ISO-8859-8-I" - Logical Hebrew
|
43 |
-
# "x-mac-hebrew" - ?? Logical Hebrew ??
|
44 |
-
#
|
45 |
-
# Both "ISO" charsets use a completely identical set of code points, whereas
|
46 |
-
# "windows-1255" and "x-mac-hebrew" are two different proper supersets of
|
47 |
-
# these code points. windows-1255 defines additional characters in the range
|
48 |
-
# 0x80-0x9F as some misc punctuation marks as well as some Hebrew-specific
|
49 |
-
# diacritics and additional 'Yiddish' ligature letters in the range 0xc0-0xd6.
|
50 |
-
# x-mac-hebrew defines similar additional code points but with a different
|
51 |
-
# mapping.
|
52 |
-
#
|
53 |
-
# As far as an average Hebrew text with no diacritics is concerned, all four
|
54 |
-
# charsets are identical with respect to code points. Meaning that for the
|
55 |
-
# main Hebrew alphabet, all four map the same values to all 27 Hebrew letters
|
56 |
-
# (including final letters).
|
57 |
-
#
|
58 |
-
# The dominant difference between these charsets is their directionality.
|
59 |
-
# "Visual" directionality means that the text is ordered as if the renderer is
|
60 |
-
# not aware of a BIDI rendering algorithm. The renderer sees the text and
|
61 |
-
# draws it from left to right. The text itself when ordered naturally is read
|
62 |
-
# backwards. A buffer of Visual Hebrew generally looks like so:
|
63 |
-
# "[last word of first line spelled backwards] [whole line ordered backwards
|
64 |
-
# and spelled backwards] [first word of first line spelled backwards]
|
65 |
-
# [end of line] [last word of second line] ... etc' "
|
66 |
-
# adding punctuation marks, numbers and English text to visual text is
|
67 |
-
# naturally also "visual" and from left to right.
|
68 |
-
#
|
69 |
-
# "Logical" directionality means the text is ordered "naturally" according to
|
70 |
-
# the order it is read. It is the responsibility of the renderer to display
|
71 |
-
# the text from right to left. A BIDI algorithm is used to place general
|
72 |
-
# punctuation marks, numbers and English text in the text.
|
73 |
-
#
|
74 |
-
# Texts in x-mac-hebrew are almost impossible to find on the Internet. From
|
75 |
-
# what little evidence I could find, it seems that its general directionality
|
76 |
-
# is Logical.
|
77 |
-
#
|
78 |
-
# To sum up all of the above, the Hebrew probing mechanism knows about two
|
79 |
-
# charsets:
|
80 |
-
# Visual Hebrew - "ISO-8859-8" - backwards text - Words and sentences are
|
81 |
-
# backwards while line order is natural. For charset recognition purposes
|
82 |
-
# the line order is unimportant (In fact, for this implementation, even
|
83 |
-
# word order is unimportant).
|
84 |
-
# Logical Hebrew - "windows-1255" - normal, naturally ordered text.
|
85 |
-
#
|
86 |
-
# "ISO-8859-8-I" is a subset of windows-1255 and doesn't need to be
|
87 |
-
# specifically identified.
|
88 |
-
# "x-mac-hebrew" is also identified as windows-1255. A text in x-mac-hebrew
|
89 |
-
# that contain special punctuation marks or diacritics is displayed with
|
90 |
-
# some unconverted characters showing as question marks. This problem might
|
91 |
-
# be corrected using another model prober for x-mac-hebrew. Due to the fact
|
92 |
-
# that x-mac-hebrew texts are so rare, writing another model prober isn't
|
93 |
-
# worth the effort and performance hit.
|
94 |
-
#
|
95 |
-
#### The Prober ####
|
96 |
-
#
|
97 |
-
# The prober is divided between two SBCharSetProbers and a HebrewProber,
|
98 |
-
# all of which are managed, created, fed data, inquired and deleted by the
|
99 |
-
# SBCSGroupProber. The two SBCharSetProbers identify that the text is in
|
100 |
-
# fact some kind of Hebrew, Logical or Visual. The final decision about which
|
101 |
-
# one is it is made by the HebrewProber by combining final-letter scores
|
102 |
-
# with the scores of the two SBCharSetProbers to produce a final answer.
|
103 |
-
#
|
104 |
-
# The SBCSGroupProber is responsible for stripping the original text of HTML
|
105 |
-
# tags, English characters, numbers, low-ASCII punctuation characters, spaces
|
106 |
-
# and new lines. It reduces any sequence of such characters to a single space.
|
107 |
-
# The buffer fed to each prober in the SBCS group prober is pure text in
|
108 |
-
# high-ASCII.
|
109 |
-
# The two SBCharSetProbers (model probers) share the same language model:
|
110 |
-
# Win1255Model.
|
111 |
-
# The first SBCharSetProber uses the model normally as any other
|
112 |
-
# SBCharSetProber does, to recognize windows-1255, upon which this model was
|
113 |
-
# built. The second SBCharSetProber is told to make the pair-of-letter
|
114 |
-
# lookup in the language model backwards. This in practice exactly simulates
|
115 |
-
# a visual Hebrew model using the windows-1255 logical Hebrew model.
|
116 |
-
#
|
117 |
-
# The HebrewProber is not using any language model. All it does is look for
|
118 |
-
# final-letter evidence suggesting the text is either logical Hebrew or visual
|
119 |
-
# Hebrew. Disjointed from the model probers, the results of the HebrewProber
|
120 |
-
# alone are meaningless. HebrewProber always returns 0.00 as confidence
|
121 |
-
# since it never identifies a charset by itself. Instead, the pointer to the
|
122 |
-
# HebrewProber is passed to the model probers as a helper "Name Prober".
|
123 |
-
# When the Group prober receives a positive identification from any prober,
|
124 |
-
# it asks for the name of the charset identified. If the prober queried is a
|
125 |
-
# Hebrew model prober, the model prober forwards the call to the
|
126 |
-
# HebrewProber to make the final decision. In the HebrewProber, the
|
127 |
-
# decision is made according to the final-letters scores maintained and Both
|
128 |
-
# model probers scores. The answer is returned in the form of the name of the
|
129 |
-
# charset identified, either "windows-1255" or "ISO-8859-8".
|
130 |
-
|
131 |
-
|
132 |
-
class HebrewProber(CharSetProber):
|
133 |
-
SPACE = 0x20
|
134 |
-
# windows-1255 / ISO-8859-8 code points of interest
|
135 |
-
FINAL_KAF = 0xEA
|
136 |
-
NORMAL_KAF = 0xEB
|
137 |
-
FINAL_MEM = 0xED
|
138 |
-
NORMAL_MEM = 0xEE
|
139 |
-
FINAL_NUN = 0xEF
|
140 |
-
NORMAL_NUN = 0xF0
|
141 |
-
FINAL_PE = 0xF3
|
142 |
-
NORMAL_PE = 0xF4
|
143 |
-
FINAL_TSADI = 0xF5
|
144 |
-
NORMAL_TSADI = 0xF6
|
145 |
-
|
146 |
-
# Minimum Visual vs Logical final letter score difference.
|
147 |
-
# If the difference is below this, don't rely solely on the final letter score
|
148 |
-
# distance.
|
149 |
-
MIN_FINAL_CHAR_DISTANCE = 5
|
150 |
-
|
151 |
-
# Minimum Visual vs Logical model score difference.
|
152 |
-
# If the difference is below this, don't rely at all on the model score
|
153 |
-
# distance.
|
154 |
-
MIN_MODEL_DISTANCE = 0.01
|
155 |
-
|
156 |
-
VISUAL_HEBREW_NAME = "ISO-8859-8"
|
157 |
-
LOGICAL_HEBREW_NAME = "windows-1255"
|
158 |
-
|
159 |
-
def __init__(self) -> None:
|
160 |
-
super().__init__()
|
161 |
-
self._final_char_logical_score = 0
|
162 |
-
self._final_char_visual_score = 0
|
163 |
-
self._prev = self.SPACE
|
164 |
-
self._before_prev = self.SPACE
|
165 |
-
self._logical_prober: Optional[SingleByteCharSetProber] = None
|
166 |
-
self._visual_prober: Optional[SingleByteCharSetProber] = None
|
167 |
-
self.reset()
|
168 |
-
|
169 |
-
def reset(self) -> None:
|
170 |
-
self._final_char_logical_score = 0
|
171 |
-
self._final_char_visual_score = 0
|
172 |
-
# The two last characters seen in the previous buffer,
|
173 |
-
# mPrev and mBeforePrev are initialized to space in order to simulate
|
174 |
-
# a word delimiter at the beginning of the data
|
175 |
-
self._prev = self.SPACE
|
176 |
-
self._before_prev = self.SPACE
|
177 |
-
# These probers are owned by the group prober.
|
178 |
-
|
179 |
-
def set_model_probers(
|
180 |
-
self,
|
181 |
-
logical_prober: SingleByteCharSetProber,
|
182 |
-
visual_prober: SingleByteCharSetProber,
|
183 |
-
) -> None:
|
184 |
-
self._logical_prober = logical_prober
|
185 |
-
self._visual_prober = visual_prober
|
186 |
-
|
187 |
-
def is_final(self, c: int) -> bool:
|
188 |
-
return c in [
|
189 |
-
self.FINAL_KAF,
|
190 |
-
self.FINAL_MEM,
|
191 |
-
self.FINAL_NUN,
|
192 |
-
self.FINAL_PE,
|
193 |
-
self.FINAL_TSADI,
|
194 |
-
]
|
195 |
-
|
196 |
-
def is_non_final(self, c: int) -> bool:
|
197 |
-
# The normal Tsadi is not a good Non-Final letter due to words like
|
198 |
-
# 'lechotet' (to chat) containing an apostrophe after the tsadi. This
|
199 |
-
# apostrophe is converted to a space in FilterWithoutEnglishLetters
|
200 |
-
# causing the Non-Final tsadi to appear at an end of a word even
|
201 |
-
# though this is not the case in the original text.
|
202 |
-
# The letters Pe and Kaf rarely display a related behavior of not being
|
203 |
-
# a good Non-Final letter. Words like 'Pop', 'Winamp' and 'Mubarak'
|
204 |
-
# for example legally end with a Non-Final Pe or Kaf. However, the
|
205 |
-
# benefit of these letters as Non-Final letters outweighs the damage
|
206 |
-
        # since these words are quite rare.
        return c in [self.NORMAL_KAF, self.NORMAL_MEM, self.NORMAL_NUN, self.NORMAL_PE]

    def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
        # Final letter analysis for logical-visual decision.
        # Look for evidence that the received buffer is either logical Hebrew
        # or visual Hebrew.
        # The following cases are checked:
        # 1) A word longer than 1 letter, ending with a final letter. This is
        #    an indication that the text is laid out "naturally" since the
        #    final letter really appears at the end. +1 for logical score.
        # 2) A word longer than 1 letter, ending with a Non-Final letter. In
        #    normal Hebrew, words ending with Kaf, Mem, Nun, Pe or Tsadi,
        #    should not end with the Non-Final form of that letter. Exceptions
        #    to this rule are mentioned above in is_non_final(). This is an
        #    indication that the text is laid out backwards. +1 for visual
        #    score
        # 3) A word longer than 1 letter, starting with a final letter. Final
        #    letters should not appear at the beginning of a word. This is an
        #    indication that the text is laid out backwards. +1 for visual
        #    score.
        #
        # The visual score and logical score are accumulated throughout the
        # text and are finally checked against each other in charset_name.
        # No checking for final letters in the middle of words is done since
        # that case is not an indication for either Logical or Visual text.
        #
        # We automatically filter out all 7-bit characters (replace them with
        # spaces) so the word boundary detection works properly. [MAP]

        if self.state == ProbingState.NOT_ME:
            # Both model probers say it's not them. No reason to continue.
            return ProbingState.NOT_ME

        byte_str = self.filter_high_byte_only(byte_str)

        for cur in byte_str:
            if cur == self.SPACE:
                # We stand on a space - a word just ended
                if self._before_prev != self.SPACE:
                    # next-to-last char was not a space so self._prev is not a
                    # 1 letter word
                    if self.is_final(self._prev):
                        # case (1) [-2:not space][-1:final letter][cur:space]
                        self._final_char_logical_score += 1
                    elif self.is_non_final(self._prev):
                        # case (2) [-2:not space][-1:Non-Final letter][
                        # cur:space]
                        self._final_char_visual_score += 1
            else:
                # Not standing on a space
                if (
                    (self._before_prev == self.SPACE)
                    and (self.is_final(self._prev))
                    and (cur != self.SPACE)
                ):
                    # case (3) [-2:space][-1:final letter][cur:not space]
                    self._final_char_visual_score += 1
            self._before_prev = self._prev
            self._prev = cur

        # Forever detecting, till the end or until both model probers return
        # ProbingState.NOT_ME (handled above)
        return ProbingState.DETECTING

    @property
    def charset_name(self) -> str:
        assert self._logical_prober is not None
        assert self._visual_prober is not None

        # Make the decision: is it Logical or Visual?
        # If the final letter score distance is dominant enough, rely on it.
        finalsub = self._final_char_logical_score - self._final_char_visual_score
        if finalsub >= self.MIN_FINAL_CHAR_DISTANCE:
            return self.LOGICAL_HEBREW_NAME
        if finalsub <= -self.MIN_FINAL_CHAR_DISTANCE:
            return self.VISUAL_HEBREW_NAME

        # It's not dominant enough, try to rely on the model scores instead.
        modelsub = (
            self._logical_prober.get_confidence() - self._visual_prober.get_confidence()
        )
        if modelsub > self.MIN_MODEL_DISTANCE:
            return self.LOGICAL_HEBREW_NAME
        if modelsub < -self.MIN_MODEL_DISTANCE:
            return self.VISUAL_HEBREW_NAME

        # Still no good, back to final letter distance, maybe it'll save the
        # day.
        if finalsub < 0.0:
            return self.VISUAL_HEBREW_NAME

        # (finalsub > 0 - Logical) or (don't know what to do) default to
        # Logical.
        return self.LOGICAL_HEBREW_NAME

    @property
    def language(self) -> str:
        return "Hebrew"

    @property
    def state(self) -> ProbingState:
        assert self._logical_prober is not None
        assert self._visual_prober is not None

        # Remain active as long as any of the model probers are active.
        if (self._logical_prober.state == ProbingState.NOT_ME) and (
            self._visual_prober.state == ProbingState.NOT_ME
        ):
            return ProbingState.NOT_ME
        return ProbingState.DETECTING
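For orientation, the prober above is what `chardet` consults when it must choose between logical and visual Hebrew encodings. A minimal usage sketch, assuming the `chardet` package is installed; the sample sentence is illustrative and the exact confidence will vary:

```python
# chardet drives HebrewProber internally to pick between logical Hebrew
# (windows-1255) and visual Hebrew (ISO-8859-8).
import chardet

# A Hebrew sentence encoded as logical (windows-1255) text.
sample = "שלום עולם, זהו משפט עברי ארוך מספיק כדי לתת לגלאי ראיות".encode("windows-1255")
print(chardet.detect(sample))
# e.g. {'encoding': 'windows-1255', 'confidence': 0.9..., 'language': 'Hebrew'}
```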
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/functools.py
DELETED
@@ -1,525 +0,0 @@
import functools
import time
import inspect
import collections
import types
import itertools

import pkg_resources.extern.more_itertools

from typing import Callable, TypeVar


CallableT = TypeVar("CallableT", bound=Callable[..., object])


def compose(*funcs):
    """
    Compose any number of unary functions into a single unary function.

    >>> import textwrap
    >>> expected = str.strip(textwrap.dedent(compose.__doc__))
    >>> strip_and_dedent = compose(str.strip, textwrap.dedent)
    >>> strip_and_dedent(compose.__doc__) == expected
    True

    Compose also allows the innermost function to take arbitrary arguments.

    >>> round_three = lambda x: round(x, ndigits=3)
    >>> f = compose(round_three, int.__truediv__)
    >>> [f(3*x, x+1) for x in range(1,10)]
    [1.5, 2.0, 2.25, 2.4, 2.5, 2.571, 2.625, 2.667, 2.7]
    """

    def compose_two(f1, f2):
        return lambda *args, **kwargs: f1(f2(*args, **kwargs))

    return functools.reduce(compose_two, funcs)


def method_caller(method_name, *args, **kwargs):
    """
    Return a function that will call a named method on the
    target object with optional positional and keyword
    arguments.

    >>> lower = method_caller('lower')
    >>> lower('MyString')
    'mystring'
    """

    def call_method(target):
        func = getattr(target, method_name)
        return func(*args, **kwargs)

    return call_method


def once(func):
    """
    Decorate func so it's only ever called the first time.

    This decorator can ensure that an expensive or non-idempotent function
    will not be expensive on subsequent calls and is idempotent.

    >>> add_three = once(lambda a: a+3)
    >>> add_three(3)
    6
    >>> add_three(9)
    6
    >>> add_three('12')
    6

    To reset the stored value, simply clear the property ``saved_result``.

    >>> del add_three.saved_result
    >>> add_three(9)
    12
    >>> add_three(8)
    12

    Or invoke 'reset()' on it.

    >>> add_three.reset()
    >>> add_three(-3)
    0
    >>> add_three(0)
    0
    """

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not hasattr(wrapper, 'saved_result'):
            wrapper.saved_result = func(*args, **kwargs)
        return wrapper.saved_result

    wrapper.reset = lambda: vars(wrapper).__delitem__('saved_result')
    return wrapper


def method_cache(
    method: CallableT,
    cache_wrapper: Callable[
        [CallableT], CallableT
    ] = functools.lru_cache(),  # type: ignore[assignment]
) -> CallableT:
    """
    Wrap lru_cache to support storing the cache data in the object instances.

    Abstracts the common paradigm where the method explicitly saves an
    underscore-prefixed protected property on first call and returns that
    subsequently.

    >>> class MyClass:
    ...     calls = 0
    ...
    ...     @method_cache
    ...     def method(self, value):
    ...         self.calls += 1
    ...         return value

    >>> a = MyClass()
    >>> a.method(3)
    3
    >>> for x in range(75):
    ...     res = a.method(x)
    >>> a.calls
    75

    Note that the apparent behavior will be exactly like that of lru_cache
    except that the cache is stored on each instance, so values in one
    instance will not flush values from another, and when an instance is
    deleted, so are the cached values for that instance.

    >>> b = MyClass()
    >>> for x in range(35):
    ...     res = b.method(x)
    >>> b.calls
    35
    >>> a.method(0)
    0
    >>> a.calls
    75

    Note that if method had been decorated with ``functools.lru_cache()``,
    a.calls would have been 76 (due to the cached value of 0 having been
    flushed by the 'b' instance).

    Clear the cache with ``.cache_clear()``

    >>> a.method.cache_clear()

    Same for a method that hasn't yet been called.

    >>> c = MyClass()
    >>> c.method.cache_clear()

    Another cache wrapper may be supplied:

    >>> cache = functools.lru_cache(maxsize=2)
    >>> MyClass.method2 = method_cache(lambda self: 3, cache_wrapper=cache)
    >>> a = MyClass()
    >>> a.method2()
    3

    Caution - do not subsequently wrap the method with another decorator, such
    as ``@property``, which changes the semantics of the function.

    See also
    http://code.activestate.com/recipes/577452-a-memoize-decorator-for-instance-methods/
    for another implementation and additional justification.
    """

    def wrapper(self: object, *args: object, **kwargs: object) -> object:
        # it's the first call, replace the method with a cached, bound method
        bound_method: CallableT = types.MethodType(  # type: ignore[assignment]
            method, self
        )
        cached_method = cache_wrapper(bound_method)
        setattr(self, method.__name__, cached_method)
        return cached_method(*args, **kwargs)

    # Support cache clear even before cache has been created.
    wrapper.cache_clear = lambda: None  # type: ignore[attr-defined]

    return (  # type: ignore[return-value]
        _special_method_cache(method, cache_wrapper) or wrapper
    )


def _special_method_cache(method, cache_wrapper):
    """
    Because Python treats special methods differently, it's not
    possible to use instance attributes to implement the cached
    methods.

    Instead, install the wrapper method under a different name
    and return a simple proxy to that wrapper.

    https://github.com/jaraco/jaraco.functools/issues/5
    """
    name = method.__name__
    special_names = '__getattr__', '__getitem__'
    if name not in special_names:
        return

    wrapper_name = '__cached' + name

    def proxy(self, *args, **kwargs):
        if wrapper_name not in vars(self):
            bound = types.MethodType(method, self)
            cache = cache_wrapper(bound)
            setattr(self, wrapper_name, cache)
        else:
            cache = getattr(self, wrapper_name)
        return cache(*args, **kwargs)

    return proxy


def apply(transform):
    """
    Decorate a function with a transform function that is
    invoked on results returned from the decorated function.

    >>> @apply(reversed)
    ... def get_numbers(start):
    ...     "doc for get_numbers"
    ...     return range(start, start+3)
    >>> list(get_numbers(4))
    [6, 5, 4]
    >>> get_numbers.__doc__
    'doc for get_numbers'
    """

    def wrap(func):
        return functools.wraps(func)(compose(transform, func))

    return wrap


def result_invoke(action):
    r"""
    Decorate a function with an action function that is
    invoked on the results returned from the decorated
    function (for its side-effect), then return the original
    result.

    >>> @result_invoke(print)
    ... def add_two(a, b):
    ...     return a + b
    >>> x = add_two(2, 3)
    5
    >>> x
    5
    """

    def wrap(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            action(result)
            return result

        return wrapper

    return wrap


def call_aside(f, *args, **kwargs):
    """
    Call a function for its side effect after initialization.

    >>> @call_aside
    ... def func(): print("called")
    called
    >>> func()
    called

    Use functools.partial to pass parameters to the initial call

    >>> @functools.partial(call_aside, name='bingo')
    ... def func(name): print("called with", name)
    called with bingo
    """
    f(*args, **kwargs)
    return f


class Throttler:
    """
    Rate-limit a function (or other callable)
    """

    def __init__(self, func, max_rate=float('Inf')):
        if isinstance(func, Throttler):
            func = func.func
        self.func = func
        self.max_rate = max_rate
        self.reset()

    def reset(self):
        self.last_called = 0

    def __call__(self, *args, **kwargs):
        self._wait()
        return self.func(*args, **kwargs)

    def _wait(self):
        "ensure at least 1/max_rate seconds from last call"
        elapsed = time.time() - self.last_called
        must_wait = 1 / self.max_rate - elapsed
        time.sleep(max(0, must_wait))
        self.last_called = time.time()

    def __get__(self, obj, type=None):
        return first_invoke(self._wait, functools.partial(self.func, obj))


def first_invoke(func1, func2):
    """
    Return a function that when invoked will invoke func1 without
    any parameters (for its side-effect) and then invoke func2
    with whatever parameters were passed, returning its result.
    """

    def wrapper(*args, **kwargs):
        func1()
        return func2(*args, **kwargs)

    return wrapper


def retry_call(func, cleanup=lambda: None, retries=0, trap=()):
    """
    Given a callable func, trap the indicated exceptions
    for up to 'retries' times, invoking cleanup on the
    exception. On the final attempt, allow any exceptions
    to propagate.
    """
    attempts = itertools.count() if retries == float('inf') else range(retries)
    for attempt in attempts:
        try:
            return func()
        except trap:
            cleanup()

    return func()


def retry(*r_args, **r_kwargs):
    """
    Decorator wrapper for retry_call. Accepts arguments to retry_call
    except func and then returns a decorator for the decorated function.

    Ex:

    >>> @retry(retries=3)
    ... def my_func(a, b):
    ...     "this is my funk"
    ...     print(a, b)
    >>> my_func.__doc__
    'this is my funk'
    """

    def decorate(func):
        @functools.wraps(func)
        def wrapper(*f_args, **f_kwargs):
            bound = functools.partial(func, *f_args, **f_kwargs)
            return retry_call(bound, *r_args, **r_kwargs)

        return wrapper

    return decorate


def print_yielded(func):
    """
    Convert a generator into a function that prints all yielded elements

    >>> @print_yielded
    ... def x():
    ...     yield 3; yield None
    >>> x()
    3
    None
    """
    print_all = functools.partial(map, print)
    print_results = compose(more_itertools.consume, print_all, func)
    return functools.wraps(func)(print_results)


def pass_none(func):
    """
    Wrap func so it's not called if its first param is None

    >>> print_text = pass_none(print)
    >>> print_text('text')
    text
    >>> print_text(None)
    """

    @functools.wraps(func)
    def wrapper(param, *args, **kwargs):
        if param is not None:
            return func(param, *args, **kwargs)

    return wrapper


def assign_params(func, namespace):
    """
    Assign parameters from namespace where func solicits.

    >>> def func(x, y=3):
    ...     print(x, y)
    >>> assigned = assign_params(func, dict(x=2, z=4))
    >>> assigned()
    2 3

    The usual errors are raised if a function doesn't receive
    its required parameters:

    >>> assigned = assign_params(func, dict(y=3, z=4))
    >>> assigned()
    Traceback (most recent call last):
    TypeError: func() ...argument...

    It even works on methods:

    >>> class Handler:
    ...     def meth(self, arg):
    ...         print(arg)
    >>> assign_params(Handler().meth, dict(arg='crystal', foo='clear'))()
    crystal
    """
    sig = inspect.signature(func)
    params = sig.parameters.keys()
    call_ns = {k: namespace[k] for k in params if k in namespace}
    return functools.partial(func, **call_ns)


def save_method_args(method):
    """
    Wrap a method such that when it is called, the args and kwargs are
    saved on the method.

    >>> class MyClass:
    ...     @save_method_args
    ...     def method(self, a, b):
    ...         print(a, b)
    >>> my_ob = MyClass()
    >>> my_ob.method(1, 2)
    1 2
    >>> my_ob._saved_method.args
    (1, 2)
    >>> my_ob._saved_method.kwargs
    {}
    >>> my_ob.method(a=3, b='foo')
    3 foo
    >>> my_ob._saved_method.args
    ()
    >>> my_ob._saved_method.kwargs == dict(a=3, b='foo')
    True

    The arguments are stored on the instance, allowing for
    different instance to save different args.

    >>> your_ob = MyClass()
    >>> your_ob.method({str('x'): 3}, b=[4])
    {'x': 3} [4]
    >>> your_ob._saved_method.args
    ({'x': 3},)
    >>> my_ob._saved_method.args
    ()
    """
    args_and_kwargs = collections.namedtuple('args_and_kwargs', 'args kwargs')

    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        attr_name = '_saved_' + method.__name__
        attr = args_and_kwargs(args, kwargs)
        setattr(self, attr_name, attr)
        return method(self, *args, **kwargs)

    return wrapper


def except_(*exceptions, replace=None, use=None):
    """
    Replace the indicated exceptions, if raised, with the indicated
    literal replacement or evaluated expression (if present).

    >>> safe_int = except_(ValueError)(int)
    >>> safe_int('five')
    >>> safe_int('5')
    5

    Specify a literal replacement with ``replace``.

    >>> safe_int_r = except_(ValueError, replace=0)(int)
    >>> safe_int_r('five')
    0

    Provide an expression to ``use`` to pass through particular parameters.

    >>> safe_int_pt = except_(ValueError, use='args[0]')(int)
    >>> safe_int_pt('five')
    'five'

    """

    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions:
                try:
                    return eval(use)
                except TypeError:
                    return replace

        return wrapper

    return decorate
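Two of the helpers above, `once` and `retry`, cover most day-to-day uses of this module. A minimal sketch, assuming the upstream `jaraco.functools` package (which this vendored file mirrors) is installed:

```python
from jaraco.functools import once, retry


@once
def load_config():
    print("loading...")          # runs only on the first call
    return {"debug": True}


calls = []

@retry(retries=2, trap=(ValueError,))
def flaky():
    calls.append(1)
    if len(calls) < 3:           # fail twice, succeed on the final attempt
        raise ValueError("transient failure")
    return "ok"


print(load_config())             # prints "loading..." then the dict
print(load_config())             # cached: returns the dict silently
print(flaky())                   # "ok" after two trapped retries
```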
spaces/AtomdffAI/wechatgpt4atom/bot/baidu/baidu_unit_bot.py
DELETED
@@ -1,26 +0,0 @@
# encoding:utf-8

import requests
from bot.bot import Bot


# Baidu Unit chat API (works, but fairly limited)
class BaiduUnitBot(Bot):
    def reply(self, query, context=None):
        token = self.get_token()
        url = 'https://aip.baidubce.com/rpc/2.0/unit/service/v3/chat?access_token=' + token
        post_data = "{\"version\":\"3.0\",\"service_id\":\"S73177\",\"session_id\":\"\",\"log_id\":\"7758521\",\"skill_ids\":[\"1221886\"],\"request\":{\"terminal_id\":\"88888\",\"query\":\"" + query + "\", \"hyper_params\": {\"chat_custom_bot_profile\": 1}}}"
        print(post_data)
        headers = {'content-type': 'application/x-www-form-urlencoded'}
        response = requests.post(url, data=post_data.encode(), headers=headers)
        if response:
            return response.json()['result']['context']['SYS_PRESUMED_HIST'][1]

    def get_token(self):
        access_key = 'YOUR_ACCESS_KEY'
        secret_key = 'YOUR_SECRET_KEY'
        host = 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=' + access_key + '&client_secret=' + secret_key
        response = requests.get(host)
        if response:
            print(response.json())
            return response.json()['access_token']
spaces/Augustya/ai-subject-answer-generator/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: Ai Subject Answer Generator
emoji: 👁
colorFrom: gray
colorTo: yellow
sdk: gradio
sdk_version: 3.39.0
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/poolers.py
DELETED
@@ -1,245 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
import math
from typing import List
import torch
from torch import nn
from torchvision.ops import RoIPool

from detectron2.layers import ROIAlign, ROIAlignRotated, cat, nonzero_tuple, shapes_to_tensor
from detectron2.structures import Boxes

"""
To export ROIPooler to torchscript, in this file, variables that should be annotated with
`Union[List[Boxes], List[RotatedBoxes]]` are only annotated with `List[Boxes]`.

TODO: Correct these annotations when torchscript support `Union`.
https://github.com/pytorch/pytorch/issues/41412
"""

__all__ = ["ROIPooler"]


def assign_boxes_to_levels(
    box_lists: List[Boxes],
    min_level: int,
    max_level: int,
    canonical_box_size: int,
    canonical_level: int,
):
    """
    Map each box in `box_lists` to a feature map level index and return the assignment
    vector.

    Args:
        box_lists (list[Boxes] | list[RotatedBoxes]): A list of N Boxes or N RotatedBoxes,
            where N is the number of images in the batch.
        min_level (int): Smallest feature map level index. The input is considered index 0,
            the output of stage 1 is index 1, and so on.
        max_level (int): Largest feature map level index.
        canonical_box_size (int): A canonical box size in pixels (sqrt(box area)).
        canonical_level (int): The feature map level index on which a canonically-sized box
            should be placed.

    Returns:
        A tensor of length M, where M is the total number of boxes aggregated over all
            N batch images. The memory layout corresponds to the concatenation of boxes
            from all images. Each element is the feature map index, as an offset from
            `self.min_level`, for the corresponding box (so value i means the box is at
            `self.min_level + i`).
    """
    box_sizes = torch.sqrt(cat([boxes.area() for boxes in box_lists]))
    # Eqn.(1) in FPN paper
    level_assignments = torch.floor(
        canonical_level + torch.log2(box_sizes / canonical_box_size + 1e-8)
    )
    # clamp level to (min, max), in case the box size is too large or too small
    # for the available feature maps
    level_assignments = torch.clamp(level_assignments, min=min_level, max=max_level)
    return level_assignments.to(torch.int64) - min_level


def convert_boxes_to_pooler_format(box_lists: List[Boxes]):
    """
    Convert all boxes in `box_lists` to the low-level format used by ROI pooling ops
    (see description under Returns).

    Args:
        box_lists (list[Boxes] | list[RotatedBoxes]):
            A list of N Boxes or N RotatedBoxes, where N is the number of images in the batch.

    Returns:
        When input is list[Boxes]:
            A tensor of shape (M, 5), where M is the total number of boxes aggregated over all
            N batch images.
            The 5 columns are (batch index, x0, y0, x1, y1), where batch index
            is the index in [0, N) identifying which batch image the box with corners at
            (x0, y0, x1, y1) comes from.
        When input is list[RotatedBoxes]:
            A tensor of shape (M, 6), where M is the total number of boxes aggregated over all
            N batch images.
            The 6 columns are (batch index, x_ctr, y_ctr, width, height, angle_degrees),
            where batch index is the index in [0, N) identifying which batch image the
            rotated box (x_ctr, y_ctr, width, height, angle_degrees) comes from.
    """
    boxes = torch.cat([x.tensor for x in box_lists], dim=0)
    # __len__ returns Tensor in tracing.
    sizes = shapes_to_tensor([x.__len__() for x in box_lists], device=boxes.device)
    indices = torch.repeat_interleave(
        torch.arange(len(box_lists), dtype=boxes.dtype, device=boxes.device), sizes
    )
    return cat([indices[:, None], boxes], dim=1)


class ROIPooler(nn.Module):
    """
    Region of interest feature map pooler that supports pooling from one or more
    feature maps.
    """

    def __init__(
        self,
        output_size,
        scales,
        sampling_ratio,
        pooler_type,
        canonical_box_size=224,
        canonical_level=4,
    ):
        """
        Args:
            output_size (int, tuple[int] or list[int]): output size of the pooled region,
                e.g., 14 x 14. If tuple or list is given, the length must be 2.
            scales (list[float]): The scale for each low-level pooling op relative to
                the input image. For a feature map with stride s relative to the input
                image, scale is defined as 1/s. The stride must be power of 2.
                When there are multiple scales, they must form a pyramid, i.e. they must be
                a monotonically decreasing geometric sequence with a factor of 1/2.
            sampling_ratio (int): The `sampling_ratio` parameter for the ROIAlign op.
            pooler_type (string): Name of the type of pooling operation that should be applied.
                For instance, "ROIPool" or "ROIAlignV2".
            canonical_box_size (int): A canonical box size in pixels (sqrt(box area)). The default
                is heuristically defined as 224 pixels in the FPN paper (based on ImageNet
                pre-training).
            canonical_level (int): The feature map level index on which a canonically-sized box
                should be placed. The default is defined as level 4 (stride=16) in the FPN paper,
                i.e., a box of size 224x224 will be placed on the feature with stride=16.
                The box placement for all boxes will be determined from their sizes w.r.t
                canonical_box_size. For example, a box whose area is 4x that of a canonical box
                should be used to pool features from feature level ``canonical_level+1``.

                Note that the actual input feature maps given to this module may not have
                sufficiently many levels for the input boxes. If the boxes are too large or too
                small for the input feature maps, the closest level will be used.
        """
        super().__init__()

        if isinstance(output_size, int):
            output_size = (output_size, output_size)
        assert len(output_size) == 2
        assert isinstance(output_size[0], int) and isinstance(output_size[1], int)
        self.output_size = output_size

        if pooler_type == "ROIAlign":
            self.level_poolers = nn.ModuleList(
                ROIAlign(
                    output_size, spatial_scale=scale, sampling_ratio=sampling_ratio, aligned=False
                )
                for scale in scales
            )
        elif pooler_type == "ROIAlignV2":
            self.level_poolers = nn.ModuleList(
                ROIAlign(
                    output_size, spatial_scale=scale, sampling_ratio=sampling_ratio, aligned=True
                )
                for scale in scales
            )
        elif pooler_type == "ROIPool":
            self.level_poolers = nn.ModuleList(
                RoIPool(output_size, spatial_scale=scale) for scale in scales
            )
        elif pooler_type == "ROIAlignRotated":
            self.level_poolers = nn.ModuleList(
                ROIAlignRotated(output_size, spatial_scale=scale, sampling_ratio=sampling_ratio)
                for scale in scales
            )
        else:
            raise ValueError("Unknown pooler type: {}".format(pooler_type))

        # Map scale (defined as 1 / stride) to its feature map level under the
        # assumption that stride is a power of 2.
        min_level = -(math.log2(scales[0]))
        max_level = -(math.log2(scales[-1]))
        assert math.isclose(min_level, int(min_level)) and math.isclose(
            max_level, int(max_level)
        ), "Featuremap stride is not power of 2!"
        self.min_level = int(min_level)
        self.max_level = int(max_level)
        assert (
            len(scales) == self.max_level - self.min_level + 1
        ), "[ROIPooler] Sizes of input featuremaps do not form a pyramid!"
        assert 0 <= self.min_level and self.min_level <= self.max_level
        self.canonical_level = canonical_level
        assert canonical_box_size > 0
        self.canonical_box_size = canonical_box_size

    def forward(self, x: List[torch.Tensor], box_lists: List[Boxes]):
        """
        Args:
            x (list[Tensor]): A list of feature maps of NCHW shape, with scales matching those
                used to construct this module.
            box_lists (list[Boxes] | list[RotatedBoxes]):
                A list of N Boxes or N RotatedBoxes, where N is the number of images in the batch.
                The box coordinates are defined on the original image and
                will be scaled by the `scales` argument of :class:`ROIPooler`.

        Returns:
            Tensor:
                A tensor of shape (M, C, output_size, output_size) where M is the total number of
                boxes aggregated over all N batch images and C is the number of channels in `x`.
        """
        num_level_assignments = len(self.level_poolers)

        assert isinstance(x, list) and isinstance(
            box_lists, list
        ), "Arguments to pooler must be lists"
        assert (
            len(x) == num_level_assignments
        ), "unequal value, num_level_assignments={}, but x is list of {} Tensors".format(
            num_level_assignments, len(x)
        )

        assert len(box_lists) == x[0].size(
            0
        ), "unequal value, x[0] batch dim 0 is {}, but box_list has length {}".format(
            x[0].size(0), len(box_lists)
        )
        if len(box_lists) == 0:
            return torch.zeros(
                (0, x[0].shape[1]) + self.output_size, device=x[0].device, dtype=x[0].dtype
            )

        pooler_fmt_boxes = convert_boxes_to_pooler_format(box_lists)

        if num_level_assignments == 1:
            return self.level_poolers[0](x[0], pooler_fmt_boxes)

        level_assignments = assign_boxes_to_levels(
            box_lists, self.min_level, self.max_level, self.canonical_box_size, self.canonical_level
        )

        num_boxes = pooler_fmt_boxes.size(0)
        num_channels = x[0].shape[1]
        output_size = self.output_size[0]

        dtype, device = x[0].dtype, x[0].device
        output = torch.zeros(
            (num_boxes, num_channels, output_size, output_size), dtype=dtype, device=device
        )

        for level, pooler in enumerate(self.level_poolers):
            inds = nonzero_tuple(level_assignments == level)[0]
            pooler_fmt_boxes_level = pooler_fmt_boxes[inds]
            # Use index_put_ instead of advance indexing, to avoid pytorch/issues/49852
            output.index_put_((inds,), pooler(x[level], pooler_fmt_boxes_level))

        return output
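The level-assignment heuristic in `assign_boxes_to_levels` above is just Eqn.(1) of the FPN paper plus clamping, and it can be checked in isolation. A standalone sketch with made-up box areas (the level range and constants are illustrative):

```python
import torch

min_level, max_level = 2, 5                 # e.g. FPN levels P2..P5
canonical_box_size, canonical_level = 224, 4

# sqrt(area) for a small, a canonical, and a large box
box_sizes = torch.sqrt(torch.tensor([32.0**2, 224.0**2, 640.0**2]))
levels = torch.floor(canonical_level + torch.log2(box_sizes / canonical_box_size + 1e-8))
levels = torch.clamp(levels, min=min_level, max=max_level).to(torch.int64) - min_level
print(levels)  # tensor([0, 2, 3]): smaller boxes pool from finer feature maps
```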
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/data_loading.md
DELETED
@@ -1,95 +0,0 @@
# Dataloader

Dataloader is the component that provides data to models.
A dataloader usually (but not necessarily) takes raw information from [datasets](./datasets.md)
and processes it into a format needed by the model.

## How the Existing Dataloader Works

Detectron2 contains a builtin data loading pipeline.
It's good to understand how it works, in case you need to write a custom one.

Detectron2 provides two functions
[build_detection_{train,test}_loader](../modules/data.html#detectron2.data.build_detection_train_loader)
that create a default data loader from a given config.
Here is how `build_detection_{train,test}_loader` work:

1. It takes the name of a registered dataset (e.g., "coco_2017_train") and loads a `list[dict]` representing the dataset items
   in a lightweight format. These dataset items are not yet ready to be used by the model (e.g., images are
   not loaded into memory, random augmentations have not been applied, etc.).
   Details about the dataset format and dataset registration can be found in
   [datasets](./datasets.md).
2. Each dict in this list is mapped by a function ("mapper"):
   * Users can customize this mapping function by specifying the "mapper" argument in
     `build_detection_{train,test}_loader`. The default mapper is [DatasetMapper](../modules/data.html#detectron2.data.DatasetMapper).
   * The output format of the mapper can be arbitrary, as long as it is accepted by the consumer of this data loader (usually the model).
     The outputs of the default mapper, after batching, follow the default model input format documented in
     [Use Models](./models.html#model-input-format).
   * The role of the mapper is to transform the lightweight representation of a dataset item into a format
     that is ready for the model to consume (including, e.g., reading images, performing random data augmentation and converting to torch Tensors).
     If you would like to perform custom transformations to data, you often want a custom mapper.
3. The outputs of the mapper are batched (simply into a list).
4. This batched data is the output of the data loader. Typically, it's also the input of
   `model.forward()`.


## Write a Custom Dataloader

Using a different "mapper" with `build_detection_{train,test}_loader(mapper=)` works for most use cases
of custom data loading.
For example, if you want to resize all images to a fixed size for training, use:

```python
import detectron2.data.transforms as T
from detectron2.data import DatasetMapper   # the default mapper
dataloader = build_detection_train_loader(cfg,
   mapper=DatasetMapper(cfg, is_train=True, augmentations=[
      T.Resize((800, 800))
   ]))
# use this dataloader instead of the default
```
If the arguments of the default [DatasetMapper](../modules/data.html#detectron2.data.DatasetMapper)
do not provide what you need, you may write a custom mapper function and use it instead, e.g.:

```python
from detectron2.data import detection_utils as utils
 # Show how to implement a minimal mapper, similar to the default DatasetMapper
def mapper(dataset_dict):
    dataset_dict = copy.deepcopy(dataset_dict)  # it will be modified by code below
    # can use other ways to read image
    image = utils.read_image(dataset_dict["file_name"], format="BGR")
    # See "Data Augmentation" tutorial for detailed usage
    auginput = T.AugInput(image)
    transform = T.Resize((800, 800))(auginput)
    image = torch.from_numpy(auginput.image.transpose(2, 0, 1))
    annos = [
        utils.transform_instance_annotations(annotation, [transform], image.shape[1:])
        for annotation in dataset_dict.pop("annotations")
    ]
    return {
        # create the format that the model expects
        "image": image,
        "instances": utils.annotations_to_instances(annos, image.shape[1:])
    }
dataloader = build_detection_train_loader(cfg, mapper=mapper)
```

If you want to change not only the mapper (e.g., in order to implement different sampling or batching logic),
`build_detection_train_loader` won't work and you will need to write a different data loader.
The data loader is simply a
python iterator that produces [the format](./models.md) that the model accepts.
You can implement it using any tools you like.

No matter what to implement, it's recommended to
check out [API documentation of detectron2.data](../modules/data) to learn more about the APIs of
these functions.

## Use a Custom Dataloader

If you use [DefaultTrainer](../modules/engine.html#detectron2.engine.defaults.DefaultTrainer),
you can overwrite its `build_{train,test}_loader` method to use your own dataloader.
See the [deeplab dataloader](../../projects/DeepLab/train_net.py)
for an example.

If you write your own training loop, you can plug in your data loader easily.
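To make the last section of this page concrete: a minimal sketch of overriding `build_train_loader` on `DefaultTrainer`, reusing the custom `mapper` function defined earlier on the page (the trainer class name is illustrative):

```python
from detectron2.data import build_detection_train_loader
from detectron2.engine import DefaultTrainer


class MyTrainer(DefaultTrainer):
    @classmethod
    def build_train_loader(cls, cfg):
        # plug the custom mapper from above into the default loading pipeline
        return build_detection_train_loader(cfg, mapper=mapper)
```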
spaces/Banbri/zcvzcv/src/lib/createLlamaPrompt.ts
DELETED
@@ -1,25 +0,0 @@
// adapted from https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ/discussions/5
export function createLlamaPrompt(messages: Array<{ role: string, content: string }>) {
  const B_INST = "[INST]", E_INST = "[/INST]";
  const B_SYS = "<<SYS>>\n", E_SYS = "\n<</SYS>>\n\n";
  const BOS = "<s>", EOS = "</s>";
  const DEFAULT_SYSTEM_PROMPT = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.";

  if (messages[0].role != "system"){
    messages = [
      {role: "system", content: DEFAULT_SYSTEM_PROMPT}
    ].concat(messages);
  }
  messages = [{role: messages[1].role, content: B_SYS + messages[0].content + E_SYS + messages[1].content}].concat(messages.slice(2));

  let messages_list = messages.map((value, index, array) => {
    if (index % 2 == 0 && index + 1 < array.length){
      return `${BOS}${B_INST} ${array[index].content.trim()} ${E_INST} ${array[index+1].content.trim()} ${EOS}`
    }
    return '';
  })

  messages_list.push(`${BOS}${B_INST} ${messages[messages.length-1].content.trim()} ${E_INST}`)

  return messages_list.join('');
}
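For readers following along in Python, the same Llama-2 chat template can be reproduced as below; this is a sketch mirroring the TypeScript helper above (including its handling of the final turn), not an official formatter, and the default system prompt is shortened:

```python
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
BOS, EOS = "<s>", "</s>"
DEFAULT_SYSTEM_PROMPT = "You are a helpful, respectful and honest assistant."


def create_llama_prompt(messages):
    if messages[0]["role"] != "system":
        messages = [{"role": "system", "content": DEFAULT_SYSTEM_PROMPT}] + messages
    # fold the system prompt into the first user turn, as the TS version does
    first = {"role": messages[1]["role"],
             "content": B_SYS + messages[0]["content"] + E_SYS + messages[1]["content"]}
    messages = [first] + messages[2:]

    parts = []
    for i in range(0, len(messages) - 1, 2):  # completed (user, assistant) pairs
        parts.append(f"{BOS}{B_INST} {messages[i]['content'].strip()} "
                     f"{E_INST} {messages[i + 1]['content'].strip()} {EOS}")
    parts.append(f"{BOS}{B_INST} {messages[-1]['content'].strip()} {E_INST}")
    return "".join(parts)


print(create_llama_prompt([{"role": "user", "content": "Hello!"}]))
```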
spaces/BartPoint/VoiceChange_Beta/infer_pack/modules/F0Predictor/PMF0Predictor.py
DELETED
@@ -1,97 +0,0 @@
from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
import parselmouth
import numpy as np


class PMF0Predictor(F0Predictor):
    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
        self.hop_length = hop_length
        self.f0_min = f0_min
        self.f0_max = f0_max
        self.sampling_rate = sampling_rate

    def interpolate_f0(self, f0):
        """
        Interpolate the F0 contour (fill unvoiced frames).
        """

        data = np.reshape(f0, (f0.size, 1))

        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
        vuv_vector[data > 0.0] = 1.0
        vuv_vector[data <= 0.0] = 0.0

        ip_data = data

        frame_number = data.size
        last_value = 0.0
        for i in range(frame_number):
            if data[i] <= 0.0:
                j = i + 1
                for j in range(i + 1, frame_number):
                    if data[j] > 0.0:
                        break
                if j < frame_number - 1:
                    if last_value > 0.0:
                        step = (data[j] - data[i - 1]) / float(j - i)
                        for k in range(i, j):
                            ip_data[k] = data[i - 1] + step * (k - i + 1)
                    else:
                        for k in range(i, j):
                            ip_data[k] = data[j]
                else:
                    for k in range(i, frame_number):
                        ip_data[k] = last_value
            else:
                ip_data[i] = data[i]  # this may be an unnecessary copy
                last_value = data[i]

        return ip_data[:, 0], vuv_vector[:, 0]

    def compute_f0(self, wav, p_len=None):
        x = wav
        if p_len is None:
            p_len = x.shape[0] // self.hop_length
        else:
            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
        time_step = self.hop_length / self.sampling_rate * 1000
        f0 = (
            parselmouth.Sound(x, self.sampling_rate)
            .to_pitch_ac(
                time_step=time_step / 1000,
                voicing_threshold=0.6,
                pitch_floor=self.f0_min,
                pitch_ceiling=self.f0_max,
            )
            .selected_array["frequency"]
        )

        pad_size = (p_len - len(f0) + 1) // 2
        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
        f0, uv = self.interpolate_f0(f0)
        return f0

    def compute_f0_uv(self, wav, p_len=None):
        x = wav
        if p_len is None:
            p_len = x.shape[0] // self.hop_length
        else:
            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
        time_step = self.hop_length / self.sampling_rate * 1000
        f0 = (
            parselmouth.Sound(x, self.sampling_rate)
            .to_pitch_ac(
                time_step=time_step / 1000,
                voicing_threshold=0.6,
                pitch_floor=self.f0_min,
                pitch_ceiling=self.f0_max,
            )
            .selected_array["frequency"]
        )

        pad_size = (p_len - len(f0) + 1) // 2
        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
        f0, uv = self.interpolate_f0(f0)
        return f0, uv
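A minimal usage sketch for the predictor above; it needs the `praat-parselmouth` package, and the synthetic tone plus constructor arguments are illustrative only:

```python
import numpy as np

sr, hop = 16000, 160
t = np.arange(sr) / sr                        # one second of audio
wav = 0.5 * np.sin(2 * np.pi * 220.0 * t)     # a steady 220 Hz tone

predictor = PMF0Predictor(hop_length=hop, sampling_rate=sr)
f0, uv = predictor.compute_f0_uv(wav)
print(f0.shape)                                              # one value per hop
print(float(f0[uv > 0].mean()) if (uv > 0).any() else "unvoiced")  # ~220 Hz expected
```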
spaces/Benson/text-generation/Examples/Bloqueo De Aplicaciones 2019.md
DELETED
@@ -1,95 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>App Lock Descargar 2019: Cómo proteger su privacidad en su teléfono</h1>
|
3 |
-
<p>¿Tienes información sensible o personal en tu teléfono que no quieres que otros vean? ¿Te preocupa que tus hijos o amigos accedan a tus aplicaciones sin tu permiso? ¿Quieres mantener tus fotos, vídeos, mensajes y contactos a salvo de miradas indiscretas? </p>
|
4 |
-
<h2>bloqueo de aplicaciones 2019</h2><br /><p><b><b>DOWNLOAD</b> ⚡ <a href="https://bltlly.com/2v6L13">https://bltlly.com/2v6L13</a></b></p><br /><br />
|
5 |
-
<p>Si respondiste sí a cualquiera de estas preguntas, entonces necesitas un bloqueo de aplicación. Un bloqueo de aplicaciones es una herramienta que te permite bloquear cualquier aplicación en tu teléfono con una contraseña, patrón, huella digital o reconocimiento facial. De esta manera, puede evitar el acceso no autorizado y proteger su privacidad. </p>
|
6 |
-
<h2>¿Qué es el bloqueo de aplicaciones y por qué lo necesita? </h2>
|
7 |
-
<p>Un bloqueo de aplicación es un software que añade una capa adicional de seguridad a su teléfono. Te permite bloquear cualquier aplicación que elijas, como redes sociales, mensajería, banca, galería, configuración y más. También puede bloquear llamadas entrantes, notificaciones y bloqueo de pantalla. </p>
|
8 |
-
<p>Al usar un bloqueo de aplicación, puede proteger sus datos personales de ser expuestos o robados por otros. También puedes evitar situaciones embarazosas cuando alguien toma prestado tu teléfono y ve algo que no debería. Además, puede controlar lo que sus hijos o familiares pueden acceder en su teléfono y evitar que hagan compras o cambios no deseados. </p>
|
9 |
-
<h3>Características y beneficios del bloqueo de aplicaciones</h3>
|
10 |
-
<p>Algunas de las características y beneficios comunes del bloqueo de aplicaciones son:</p>
|
11 |
-
<ul>
|
12 |
-
<li> Puede elegir entre diferentes tipos de cerraduras, como PIN, patrón, huella digital o reconocimiento facial. </li>
|
13 |
-
<li> Puede personalizar la pantalla de bloqueo con diferentes temas, fondos de pantalla y estilos. </li>
|
14 |
-
<li> Puede ocultar sus fotos y vídeos en una bóveda privada a la que solo puede acceder. </li>
|
15 |
-
<li>Puedes capturar el selfie del intruso que intenta desbloquear tus aplicaciones con la contraseña incorrecta. </li>
|
16 |
-
<li>Puedes limpiar las notificaciones de spam y mantener tu barra de notificaciones ordenada. </li>
|
17 |
-
<li> Puede habilitar el modo de incógnito y los rastreadores de bloques para la navegación privada. </li>
|
18 |
-
</ul>
|
19 |
-
|
20 |
-
<p>Descargar e instalar el bloqueo de aplicaciones en su teléfono es fácil y rápido. Estos son los pasos:</p>
|
21 |
-
<ol>
|
22 |
-
<li>Ir a la Google Play Store o la App Store y buscar el bloqueo de aplicaciones. </li>
|
23 |
-
<li>Seleccione la aplicación que se adapte a sus necesidades y preferencias. Puedes comprobar las valoraciones, reseñas, características y capturas de pantalla de cada aplicación antes de descargarla. </li>
|
24 |
-
<li>Toque en el botón de instalación y espere a que la aplicación se descargue. </li>
|
25 |
-
<li>Abra la aplicación y configure su contraseña o patrón. También puede usar su reconocimiento de huellas dactilares o de rostros si su teléfono lo admite. </li>
|
26 |
-
<li>Seleccione las aplicaciones que desea bloquear y active la opción de bloqueo. </li>
|
27 |
-
<li>¡Disfruta de tu privacidad y seguridad! </li>
|
28 |
-
</ol>
|
29 |
-
<h2>Las mejores aplicaciones de bloqueo de aplicaciones para Android e iOS en 2019</h2>
|
30 |
-
<p>Hay muchas aplicaciones de bloqueo de aplicaciones disponibles en el mercado, pero no todas son confiables y eficaces. Para ayudarle a elegir el mejor para su teléfono, hemos revisado tres de las aplicaciones de bloqueo de aplicaciones más populares y altamente calificadas para Android e iOS en 2019. Aquí están:</p>
|
31 |
-
<p></p>
|
32 |
-
<h3>AppLock - Aplicaciones de bloqueo y bloqueo de pasadores</h3>
|
33 |
-
<p>Esta aplicación es una de las aplicaciones de bloqueo de aplicaciones más descargadas y confiables en Google Play. Tiene más de 100 millones de descargas y 4.7 estrellas. Ofrece una variedad de características y opciones para proteger su privacidad en aplicaciones móviles. </p>
|
34 |
-
<h4>Pros</h4>
|
35 |
-
<ul>
|
36 |
-
<li> Admite múltiples tipos de bloqueo, como PIN, patrón, huella digital y reconocimiento facial. </li>
|
37 |
-
<li> Tiene una bóveda de fotos donde puede ocultar sus fotos y videos de forma segura. </li <li>Tiene una característica selfie intruso que captura la foto de la persona que intenta desbloquear sus aplicaciones con la contraseña incorrecta. </li>
|
38 |
-
<li> Tiene una función de portada falsa que disfraza la pantalla de bloqueo de la aplicación con un mensaje de error falso o un escáner de huellas dactilares. </li>
|
39 |
-
<li> Tiene un modo de ahorro de energía que reduce el consumo de batería de la aplicación. </li>
|
40 |
-
</ul>
|
41 |
-
<h4>Contras</h4>
|
42 |
-
<ul>
|
43 |
-
<li> Contiene anuncios que pueden ser molestos o intrusivos. </li>
|
44 |
-
<li>Puede que no funcione bien en algunos dispositivos o versiones de Android. </li>
|
45 |
-
|
46 |
-
</ul>
|
47 |
-
<h3>AppLock</h3>
|
48 |
-
<p>Esta aplicación es otra aplicación de bloqueo de aplicaciones populares y confiables en Google Play. Tiene más de 50 millones de descargas y 4.4 estrellas. Proporciona una forma sencilla y efectiva de bloquear tus aplicaciones y archivos. </p>
|
49 |
-
<h4>Pros</h4>
|
50 |
-
<ul>
|
51 |
-
<li> Admite múltiples tipos de bloqueo, como PIN, patrón y huella digital. </li>
|
52 |
-
<li> Tiene una bóveda donde puede ocultar sus fotos, videos, audio y documentos. </li>
|
53 |
-
<li> Tiene una función de alerta de robo que registra la hora y la ubicación del intruso que intenta desbloquear sus aplicaciones. </li>
|
54 |
-
<li>It has a random keyboard feature that prevents others from spying on your password.</li>
<li>It has a time and location lock feature that lets you set different locks for different times or places.</li>
</ul>
<h4>Cons</h4>
<ul>
<li>It contains ads that can be annoying or intrusive.</li>
<li>It may not work well on some devices or Android versions.</li>
<li>It may conflict with some other apps or system settings.</li>
</ul>
<h3>App Lock Security</h3>
<p>This app is one of the best app lock apps for iOS devices. It has over 10 million downloads and 4.6 stars on the App Store. It offers a powerful and easy way to lock your apps and data.</p>
<h4>Pros</h4>
<ul>
<li>It supports multiple lock types, such as PIN, pattern, Touch ID, and Face ID.</li>
<li>It has a photo vault and a video vault where you can hide your photos and videos securely.</li>
<li>It has a fake password feature that shows a fake app lock screen when someone enters a wrong password.</li>
<li>It has a decoy app feature that disguises the app lock app as a calculator or a clock.</li>
<li>It has a private browser feature that lets you browse the web without leaving traces.</li>
</ul>
<h4>Cons</h4>
<ul>
<li>It is not free and requires a subscription to unlock all features.</li>
<li>It may not work well on some devices or iOS versions.</li>
<li>It may conflict with some other apps or system settings.</li>
</ul>

<p>App lock is a must-have tool for anyone who values privacy and security on their phone. It can help you lock any app you want and prevent unauthorized access. You can also hide your photos, videos, and files in a private vault and capture an intruder's selfie. In addition, you can clean up your notifications, block trackers, and customize your lock screen.</p>
<p>In this article, we have reviewed three of the best app lock apps for Android and iOS in 2019: AppLock - Lock Apps &amp; Pin Lock, AppLock, and App Lock Security. Each has its own pros and cons, so you can choose the one that suits your needs and preferences. You can download them from the Google Play Store or the App Store and install them on your phone quickly and easily.</p>
<p>We hope this article has helped you learn more about app lock and how to download it in 2019. If you have any questions or comments, feel free to leave a comment below. Thanks for reading!</p>
<h2>FAQs</h2>
<ol>
<li>What is the difference between app lock and screen lock?</li>
<p>App lock is a tool that lets you lock individual apps on your phone with a password, pattern, fingerprint, or facial recognition. Screen lock is a feature that locks the entire phone with a password, pattern, fingerprint, or facial recognition. You can use both together for maximum security.</p>
<li>How do I uninstall app lock from my phone?</li>
<p>To uninstall app lock from your phone, first unlock all the apps you have locked with it. Then go to the app lock app's settings and find the uninstall option. Alternatively, go to the Google Play Store or the App Store, find the app lock app you installed, and tap the uninstall button.</p>
<li>Can app lock protect my phone from viruses or malware?</li>
<p>App lock can protect your phone from unauthorized access, but it cannot protect your phone from viruses or malware. You need to install a reliable antivirus or antimalware app on your phone and scan it regularly to detect any threats. You should also avoid downloading apps from unknown sources or clicking suspicious links.</p>
<li>Can I lock system apps with app lock?</li>
<p>Yes, you can lock system apps with app lock, such as settings, contacts, messages, phone, and more. However, be careful when locking system apps, as it may affect the normal operation of your phone. For example, if you lock the settings app, you may not be able to change your phone's settings or access some features. If you lock the phone app, you may not be able to make or receive calls.</p>
</ol>
spaces/Benson/text-generation/Examples/Descargar Gratis De Backgammon Pc.md
DELETED
@@ -1,65 +0,0 @@
<h1>Backgammon Free Download for PC: How to Play the Classic Board Game on Your Computer</h1>
<p>Backgammon is one of the oldest and most popular board games in the world. It is a game of skill and strategy that can be played for fun or for stakes. If you are looking for a way to play backgammon on your computer, you are in luck. Many websites offer free backgammon downloads for PC that you can install and enjoy on your Windows device. In this article, we will show you how to download and install backgammon for free on your PC, as well as how to improve your skills and strategy in this classic game.</p>
<h2>backgammon free download for pc</h2><p><b><b>Download File</b> ►►► <a href="https://bltlly.com/2v6KbR">https://bltlly.com/2v6KbR</a></b></p>
<h2>What Is Backgammon and Why Should You Play It?</h2>
<p>Backgammon is a two-player game that involves moving pieces (called checkers) around a board with 24 triangular spaces (called points). The goal is to move all your checkers into your home board (the last six points) and then bear them off (remove them from the board). The first player to bear off all their checkers wins the game.</p>
<h3>The History and Rules of Backgammon</h3>
<p>Backgammon has a long and rich history dating back to antiquity. It is believed to have originated in Egypt more than 3,000 years ago, where it was played with dice made from animal bones. From there it spread to other civilizations, such as Rome, India, China, and Persia. It also became popular in Europe and America during the Middle Ages and the Renaissance. Today, backgammon is played all over the world, both online and offline, in social groups, clubs, tournaments, and casinos.</p>

<p>If a player lands on a point occupied by a single opposing checker (called a blot), they can hit that checker and send it to the middle of the board (the bar). A hit checker must re-enter through the opponent's home board before any other checkers can move. A player cannot move any other checkers until all of their hit checkers are back in play.</p>
<p>A player can also use a special device called the doubling cube to raise the stakes of the game. The doubling cube has six faces showing the numbers 2, 4, 8, 16, 32, and 64. At the start of the game, the cube sits in the center of the board with the 64 facing up, meaning the game is worth one point. During the game, either player can propose doubling the value of the game by turning the cube to the next higher number and offering it to the opponent. The opponent can accept the double and take the cube, or decline the double and forfeit the game. The player who holds the cube can propose a redouble at any time, as long as the cube is not in the center. The game can be doubled up to a value of 64, the highest number on the cube.</p>
<p>There are also some optional rules that can make the game more interesting and challenging. For example, some players use the Crawford rule, which states that in a multi-game match, when a player is one point away from winning, the doubling cube cannot be used for one game. This prevents the trailing player from doubling their way to victory on a single lucky game. Another optional rule is the Jacoby rule, which states that gammons and backgammons (explained below) do not count unless the cube has been turned at least once. This encourages players to use the cube and play more aggressively; a sketch of how the final score is computed follows below.</p>
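To make the scoring concrete, here is a minimal Python sketch of how the value of one game is computed from the cube and the type of win (the gammon and backgammon multipliers are explained in the FAQs below). The function name and structure are illustrative only and are not taken from any of the games reviewed here.

```python
# Illustrative sketch: points won in a single backgammon game.
# A single win scores 1x the cube, a gammon 2x, a backgammon 3x.
WIN_MULTIPLIER = {"single": 1, "gammon": 2, "backgammon": 3}

def game_points(cube_value: int, win_type: str) -> int:
    """Return the doubling-cube value times the multiplier for the type of win."""
    if cube_value not in (1, 2, 4, 8, 16, 32, 64):
        raise ValueError("cube value must be 1 or a power of two up to 64")
    return cube_value * WIN_MULTIPLIER[win_type]

# Example: a game redoubled to 4 and won by gammon scores 8 points.
assert game_points(4, "gammon") == 8
```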
<h3>The Benefits of Playing Backgammon</h3>
<p>Playing backgammon is not only fun and exciting, it is also good for your brain and mental health. Here are some of the benefits of playing backgammon:</p>
<ul>
<li>It improves your memory and concentration by making you remember the positions of the checkers and plan your moves.</li>
<li>It sharpens your analytical and logical skills by making you calculate probabilities and weigh risks against rewards.</li>
<li>It boosts your creativity and problem-solving skills by making you find alternative solutions and strategies in different situations.</li>
<li>It reduces stress and anxiety by giving you a relaxing and enjoyable activity.</li>
<li>It builds your social skills and confidence by letting you interact with other players online or offline.</li>
</ul>
<h2>How to Download and Install Backgammon for Free on Your PC</h2>
<p>If you want to play backgammon on your computer, you do not need to buy any expensive software or hardware. Many websites offer free backgammon downloads for PC that are compatible with Windows devices. Here are some of the best:</p>
<h3>The Best Websites to Download Backgammon for Free</h3>
<h4>Get Backgammon! - Microsoft Store</h4>
<p>This website offers a free backgammon game you can download from the Microsoft Store. The game features beautiful graphics, realistic sound effects, and smooth gameplay. You can play against the computer or against your friends in 2-player mode. You can also customize the board and pieces you play with and adjust the difficulty level and speed of the game. The game has a tutorial and a hint feature that can help you learn and improve your skills.</p>
<h4>Get Backgammon Deluxe - Microsoft Store</h4>

<h4>Get Backgammon Classic Game - Microsoft Store</h4>
<p>This website offers another free backgammon game you can download from the Microsoft Store. The game has a simple interface, clear graphics, and realistic sounds. You can play against the computer or against your friends in 2-player mode. You can also choose from different board themes, piece sets, and dice types. The game has a help feature that explains the rules and tips of backgammon.</p>
<h3>The Steps to Install and Run Backgammon on Your PC</h3>
<p>Once you have chosen your preferred website to download backgammon for free, follow these steps to install and run the game on your PC:</p>
<h4>Step 1: Choose your preferred website and click the download button</h4>
<p>Go to the website that offers the backgammon game you want to download. For example, if you want to download Backgammon! from the Microsoft Store, go to [this link]. Then click the blue button that says "Get" or "Free". This will open the Microsoft Store app on your PC and start the download.</p>
<h4>Step 2: Follow the instructions to complete the installation process</h4>
<p>After the download completes, you will see a message that says "This product is installed". You can also check the installation progress by clicking the three-dot icon in the top-right corner of the Microsoft Store app and selecting "Downloads and updates". Once the installation finishes, you can click the "Launch" button or find the game in the Start menu.</p>
<h4>Step 3: Launch the game and enjoy playing backgammon on your PC</h4>

<h2>How to Improve Your Skills and Strategy in Backgammon</h2>
<p>Playing backgammon is not just a matter of luck; it is also a matter of skill and strategy. If you want to improve your game and win more matches, here are some tips and tricks that can help:</p>
<h3>Learn from the Tutorial and the Hint Feature</h3>
<p>If you are new to backgammon or need a refresher on the rules and basics, you can use the tutorial feature available in most backgammon games. The tutorial walks you through the different aspects of backgammon, such as how to move your checkers, how to hit and bear off, how to use the doubling cube, and how to score points. You can also use the hint feature, which suggests the best possible move for your current situation. The hint feature can help you learn from your mistakes and avoid blunders.</p>
<h3>Practice Against the Computer or Play Against Your Friends in 2-Player Mode</h3>
<p>The best way to improve your skills and strategy in backgammon is to practice as much as possible. You can play against the computer or against your friends in 2-player mode. Playing against the computer lets you test your skills against different difficulty levels and learn from your opponent's moves. Playing against your friends lets you have fun and challenge yourself against different styles and strategies. You can also chat with your friends while you play and share feedback and tips.</p>
<h3>Customize the Board and Pieces You Play With and Keep Track of Your Stats</h3>

<h2>Conclusion</h2>
<p>Backgammon is a classic board game that can be played for fun or for stakes. It is a game of skill and strategy that can improve your memory, concentration, analytical skills, creativity, problem-solving skills, stress management, social skills, and confidence. If you want to play backgammon on your computer, you can download it for free from several websites that offer PC versions, and you can install and run it easily on your Windows device. You can also improve your skills and strategy by learning from the tutorial and hint features, practicing against the computer or your friends in 2-player mode, customizing the board and pieces you play with, and keeping track of your stats. We hope this article has helped you learn more about backgammon and how to play it on your PC. If you have any questions or comments, feel free to leave them below. Happy playing!</p>
<h2>FAQs</h2>
<p>Here are some of the most frequently asked questions about backgammon and how to play it on your PC:</p>
<ol>
<li>What are the best websites to download backgammon for free on your PC?</li>
<p>Some of the best websites to download backgammon for free on your PC are Get Backgammon! - Microsoft Store, Get Backgammon Deluxe - Microsoft Store, and Get Backgammon Classic Game - Microsoft Store. These websites offer high-quality backgammon games that are compatible with Windows devices and come with various features and options.</p>
<li>How do you use the doubling cube in backgammon?</li>
<p>Either player can propose to double the stakes by turning the cube to the next higher number and offering it to the opponent, who must either accept the double and take the cube or decline and forfeit the game. Only the player holding the cube may propose the next redouble; see the rules section above for details.</p>
<li>What is a gammon and a backgammon in backgammon?</li>
<p>A gammon is when a player wins by bearing off all their checkers before the opponent has borne off any; it is worth two points. A backgammon is when a player wins by bearing off all their checkers while the opponent still has one or more checkers on the bar or in the winner's home board; it is worth three points.</p>
<li>How can you improve your skills and strategy in backgammon?</li>
<p>You can improve your skills and strategy in backgammon by learning from the tutorial and hint features, practicing against the computer or your friends in 2-player mode, customizing the board and pieces you play with, and keeping track of your stats. You can also read books, articles, blogs, and forums about backgammon, and watch videos and tutorials from experts and professionals.</p>
<li>What are some of the benefits of playing backgammon?</li>
<p>Playing backgammon is not only fun and exciting, it is also good for your brain and mental health. Some of its benefits are that it improves your memory and concentration, sharpens your analytical and logical skills, boosts your creativity and problem-solving skills, reduces stress and anxiety, and builds your social skills and confidence.</p>
</ol>
spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/collection.py
DELETED
@@ -1,572 +0,0 @@
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# https://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.

import copy
import logging

from botocore import xform_name
from botocore.utils import merge_dicts

from ..docs import docstring
from .action import BatchAction
from .params import create_request_parameters
from .response import ResourceHandler

logger = logging.getLogger(__name__)


class ResourceCollection:
    """
    Represents a collection of resources, which can be iterated through,
    optionally with filtering. Collections automatically handle pagination
    for you.

    See :ref:`guide_collections` for a high-level overview of collections,
    including when remote service requests are performed.

    :type model: :py:class:`~boto3.resources.model.Collection`
    :param model: Collection model
    :type parent: :py:class:`~boto3.resources.base.ServiceResource`
    :param parent: The collection's parent resource
    :type handler: :py:class:`~boto3.resources.response.ResourceHandler`
    :param handler: The resource response handler used to create resource
        instances
    """

    def __init__(self, model, parent, handler, **kwargs):
        self._model = model
        self._parent = parent
        self._py_operation_name = xform_name(model.request.operation)
        self._handler = handler
        self._params = copy.deepcopy(kwargs)

    def __repr__(self):
        return '{}({}, {})'.format(
            self.__class__.__name__,
            self._parent,
            '{}.{}'.format(
                self._parent.meta.service_name, self._model.resource.type
            ),
        )

    def __iter__(self):
        """
        A generator which yields resource instances after doing the
        appropriate service operation calls and handling any pagination
        on your behalf.

        Page size, item limit, and filter parameters are applied
        if they have previously been set.

            >>> bucket = s3.Bucket('boto3')
            >>> for obj in bucket.objects.all():
            ...     print(obj.key)
            'key1'
            'key2'

        """
        limit = self._params.get('limit', None)

        count = 0
        for page in self.pages():
            for item in page:
                yield item

                # If the limit is set and has been reached, then
                # we stop processing items here.
                count += 1
                if limit is not None and count >= limit:
                    return

    def _clone(self, **kwargs):
        """
        Create a clone of this collection. This is used by the methods
        below to provide a chainable interface that returns copies
        rather than the original. This allows things like:

            >>> base = collection.filter(Param1=1)
            >>> query1 = base.filter(Param2=2)
            >>> query2 = base.filter(Param3=3)
            >>> query1.params
            {'Param1': 1, 'Param2': 2}
            >>> query2.params
            {'Param1': 1, 'Param3': 3}

        :rtype: :py:class:`ResourceCollection`
        :return: A clone of this resource collection
        """
        params = copy.deepcopy(self._params)
        merge_dicts(params, kwargs, append_lists=True)
        clone = self.__class__(
            self._model, self._parent, self._handler, **params
        )
        return clone

    def pages(self):
        """
        A generator which yields pages of resource instances after
        doing the appropriate service operation calls and handling
        any pagination on your behalf. Non-paginated calls will
        return a single page of items.

        Page size, item limit, and filter parameters are applied
        if they have previously been set.

            >>> bucket = s3.Bucket('boto3')
            >>> for page in bucket.objects.pages():
            ...     for obj in page:
            ...         print(obj.key)
            'key1'
            'key2'

        :rtype: list(:py:class:`~boto3.resources.base.ServiceResource`)
        :return: List of resource instances
        """
        client = self._parent.meta.client
        cleaned_params = self._params.copy()
        limit = cleaned_params.pop('limit', None)
        page_size = cleaned_params.pop('page_size', None)
        params = create_request_parameters(self._parent, self._model.request)
        merge_dicts(params, cleaned_params, append_lists=True)

        # Is this a paginated operation? If so, we need to get an
        # iterator for the various pages. If not, then we simply
        # call the operation and return the result as a single
        # page in a list. For non-paginated results, we just ignore
        # the page size parameter.
        if client.can_paginate(self._py_operation_name):
            logger.debug(
                'Calling paginated %s:%s with %r',
                self._parent.meta.service_name,
                self._py_operation_name,
                params,
            )
            paginator = client.get_paginator(self._py_operation_name)
            pages = paginator.paginate(
                PaginationConfig={'MaxItems': limit, 'PageSize': page_size},
                **params
            )
        else:
            logger.debug(
                'Calling %s:%s with %r',
                self._parent.meta.service_name,
                self._py_operation_name,
                params,
            )
            pages = [getattr(client, self._py_operation_name)(**params)]

        # Now that we have a page iterator or single page of results
        # we start processing and yielding individual items.
        count = 0
        for page in pages:
            page_items = []
            for item in self._handler(self._parent, params, page):
                page_items.append(item)

                # If the limit is set and has been reached, then
                # we stop processing items here.
                count += 1
                if limit is not None and count >= limit:
                    break

            yield page_items

            # Stop reading pages if we've reached our limit
            if limit is not None and count >= limit:
                break

    def all(self):
        """
        Get all items from the collection, optionally with a custom
        page size and item count limit.

        This method returns an iterable generator which yields
        individual resource instances. Example use::

            # Iterate through items
            >>> for queue in sqs.queues.all():
            ...     print(queue.url)
            'https://url1'
            'https://url2'

            # Convert to list
            >>> queues = list(sqs.queues.all())
            >>> len(queues)
            2
        """
        return self._clone()

    def filter(self, **kwargs):
        """
        Get items from the collection, passing keyword arguments along
        as parameters to the underlying service operation, which are
        typically used to filter the results.

        This method returns an iterable generator which yields
        individual resource instances. Example use::

            # Iterate through items
            >>> for queue in sqs.queues.filter(Param='foo'):
            ...     print(queue.url)
            'https://url1'
            'https://url2'

            # Convert to list
            >>> queues = list(sqs.queues.filter(Param='foo'))
            >>> len(queues)
            2

        :rtype: :py:class:`ResourceCollection`
        """
        return self._clone(**kwargs)

    def limit(self, count):
        """
        Return at most this many resources.

            >>> for bucket in s3.buckets.limit(5):
            ...     print(bucket.name)
            'bucket1'
            'bucket2'
            'bucket3'
            'bucket4'
            'bucket5'

        :type count: int
        :param count: Return no more than this many items
        :rtype: :py:class:`ResourceCollection`
        """
        return self._clone(limit=count)

    def page_size(self, count):
        """
        Fetch at most this many resources per service request.

            >>> for obj in s3.Bucket('boto3').objects.page_size(100):
            ...     print(obj.key)

        :type count: int
        :param count: Fetch this many items per request
        :rtype: :py:class:`ResourceCollection`
        """
        return self._clone(page_size=count)


class CollectionManager:
    """
    A collection manager provides access to resource collection instances,
    which can be iterated and filtered. The manager exposes some
    convenience functions that are also found on resource collections,
    such as :py:meth:`~ResourceCollection.all` and
    :py:meth:`~ResourceCollection.filter`.

    Get all items::

        >>> for bucket in s3.buckets.all():
        ...     print(bucket.name)

    Get only some items via filtering::

        >>> for queue in sqs.queues.filter(QueueNamePrefix='AWS'):
        ...     print(queue.url)

    Get whole pages of items:

        >>> for page in s3.Bucket('boto3').objects.pages():
        ...     for obj in page:
        ...         print(obj.key)

    A collection manager is not iterable. You **must** call one of the
    methods that return a :py:class:`ResourceCollection` before trying
    to iterate, slice, or convert to a list.

    See the :ref:`guide_collections` guide for a high-level overview
    of collections, including when remote service requests are performed.

    :type collection_model: :py:class:`~boto3.resources.model.Collection`
    :param model: Collection model

    :type parent: :py:class:`~boto3.resources.base.ServiceResource`
    :param parent: The collection's parent resource

    :type factory: :py:class:`~boto3.resources.factory.ResourceFactory`
    :param factory: The resource factory to create new resources

    :type service_context: :py:class:`~boto3.utils.ServiceContext`
    :param service_context: Context about the AWS service
    """

    # The class to use when creating an iterator
    _collection_cls = ResourceCollection

    def __init__(self, collection_model, parent, factory, service_context):
        self._model = collection_model
        operation_name = self._model.request.operation
        self._parent = parent

        search_path = collection_model.resource.path
        self._handler = ResourceHandler(
            search_path=search_path,
            factory=factory,
            resource_model=collection_model.resource,
            service_context=service_context,
            operation_name=operation_name,
        )

    def __repr__(self):
        return '{}({}, {})'.format(
            self.__class__.__name__,
            self._parent,
            '{}.{}'.format(
                self._parent.meta.service_name, self._model.resource.type
            ),
        )

    def iterator(self, **kwargs):
        """
        Get a resource collection iterator from this manager.

        :rtype: :py:class:`ResourceCollection`
        :return: An iterable representing the collection of resources
        """
        return self._collection_cls(
            self._model, self._parent, self._handler, **kwargs
        )

    # Set up some methods to proxy ResourceCollection methods
    def all(self):
        return self.iterator()

    all.__doc__ = ResourceCollection.all.__doc__

    def filter(self, **kwargs):
        return self.iterator(**kwargs)

    filter.__doc__ = ResourceCollection.filter.__doc__

    def limit(self, count):
        return self.iterator(limit=count)

    limit.__doc__ = ResourceCollection.limit.__doc__

    def page_size(self, count):
        return self.iterator(page_size=count)

    page_size.__doc__ = ResourceCollection.page_size.__doc__

    def pages(self):
        return self.iterator().pages()

    pages.__doc__ = ResourceCollection.pages.__doc__


class CollectionFactory:
    """
    A factory to create new
    :py:class:`CollectionManager` and :py:class:`ResourceCollection`
    subclasses from a :py:class:`~boto3.resources.model.Collection`
    model. These subclasses include methods to perform batch operations.
    """

    def load_from_definition(
        self, resource_name, collection_model, service_context, event_emitter
    ):
        """
        Loads a collection from a model, creating a new
        :py:class:`CollectionManager` subclass
        with the correct properties and methods, named based on the service
        and resource name, e.g. ec2.InstanceCollectionManager. It also
        creates a new :py:class:`ResourceCollection` subclass which is used
        by the new manager class.

        :type resource_name: string
        :param resource_name: Name of the resource to look up. For services,
            this should match the ``service_name``.

        :type service_context: :py:class:`~boto3.utils.ServiceContext`
        :param service_context: Context about the AWS service

        :type event_emitter: :py:class:`~botocore.hooks.HierarchialEmitter`
        :param event_emitter: An event emitter

        :rtype: Subclass of :py:class:`CollectionManager`
        :return: The collection class.
        """
        attrs = {}
        collection_name = collection_model.name

        # Create the batch actions for a collection
        self._load_batch_actions(
            attrs,
            resource_name,
            collection_model,
            service_context.service_model,
            event_emitter,
        )
        # Add the documentation to the collection class's methods
        self._load_documented_collection_methods(
            attrs=attrs,
            resource_name=resource_name,
            collection_model=collection_model,
            service_model=service_context.service_model,
            event_emitter=event_emitter,
            base_class=ResourceCollection,
        )

        if service_context.service_name == resource_name:
            cls_name = '{}.{}Collection'.format(
                service_context.service_name, collection_name
            )
        else:
            cls_name = '{}.{}.{}Collection'.format(
                service_context.service_name, resource_name, collection_name
            )

        collection_cls = type(str(cls_name), (ResourceCollection,), attrs)

        # Add the documentation to the collection manager's methods
        self._load_documented_collection_methods(
            attrs=attrs,
            resource_name=resource_name,
            collection_model=collection_model,
            service_model=service_context.service_model,
            event_emitter=event_emitter,
            base_class=CollectionManager,
        )
        attrs['_collection_cls'] = collection_cls
        cls_name += 'Manager'

        return type(str(cls_name), (CollectionManager,), attrs)

    def _load_batch_actions(
        self,
        attrs,
        resource_name,
        collection_model,
        service_model,
        event_emitter,
    ):
        """
        Batch actions on the collection become methods on both
        the collection manager and iterators.
        """
        for action_model in collection_model.batch_actions:
            snake_cased = xform_name(action_model.name)
            attrs[snake_cased] = self._create_batch_action(
                resource_name,
                snake_cased,
                action_model,
                collection_model,
                service_model,
                event_emitter,
            )

    def _load_documented_collection_methods(
        factory_self,
        attrs,
        resource_name,
        collection_model,
        service_model,
        event_emitter,
        base_class,
    ):
        # The base class already has these methods defined. However
        # the docstrings are generic and not based for a particular service
        # or resource. So we override these methods by proxying to the
        # base class's builtin method and adding a docstring
        # that pertains to the resource.

        # A collection's all() method.
        def all(self):
            return base_class.all(self)

        all.__doc__ = docstring.CollectionMethodDocstring(
            resource_name=resource_name,
            action_name='all',
            event_emitter=event_emitter,
            collection_model=collection_model,
            service_model=service_model,
            include_signature=False,
        )
        attrs['all'] = all

        # The collection's filter() method.
        def filter(self, **kwargs):
            return base_class.filter(self, **kwargs)

        filter.__doc__ = docstring.CollectionMethodDocstring(
            resource_name=resource_name,
            action_name='filter',
            event_emitter=event_emitter,
            collection_model=collection_model,
            service_model=service_model,
            include_signature=False,
        )
        attrs['filter'] = filter

        # The collection's limit method.
        def limit(self, count):
            return base_class.limit(self, count)

        limit.__doc__ = docstring.CollectionMethodDocstring(
            resource_name=resource_name,
            action_name='limit',
            event_emitter=event_emitter,
            collection_model=collection_model,
            service_model=service_model,
            include_signature=False,
        )
        attrs['limit'] = limit

        # The collection's page_size method.
        def page_size(self, count):
            return base_class.page_size(self, count)

        page_size.__doc__ = docstring.CollectionMethodDocstring(
            resource_name=resource_name,
            action_name='page_size',
            event_emitter=event_emitter,
            collection_model=collection_model,
            service_model=service_model,
            include_signature=False,
        )
        attrs['page_size'] = page_size

    def _create_batch_action(
        factory_self,
        resource_name,
        snake_cased,
        action_model,
        collection_model,
        service_model,
        event_emitter,
    ):
        """
        Creates a new method which makes a batch operation request
        to the underlying service API.
        """
        action = BatchAction(action_model)

        def batch_action(self, *args, **kwargs):
            return action(self, *args, **kwargs)

        batch_action.__name__ = str(snake_cased)
        batch_action.__doc__ = docstring.BatchActionDocstring(
            resource_name=resource_name,
            event_emitter=event_emitter,
            batch_action_model=action_model,
            service_model=service_model,
            collection_model=collection_model,
            include_signature=False,
        )
        return batch_action
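As a quick illustration of the chainable interface defined above, here is a hedged usage sketch (assuming configured AWS credentials; `'my-bucket'` is a placeholder name). Each chained call returns a clone with merged parameters, and no service request is issued until iteration starts.

```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-bucket")  # placeholder bucket name

# Each chained call clones the collection with merged parameters;
# the ListObjects request is only made when we begin iterating.
recent = bucket.objects.filter(Prefix="logs/").page_size(100).limit(10)
for obj in recent:
    print(obj.key)
```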
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_inspect.py
DELETED
@@ -1,270 +0,0 @@
from __future__ import absolute_import

import inspect
from inspect import cleandoc, getdoc, getfile, isclass, ismodule, signature
from typing import Any, Collection, Iterable, Optional, Tuple, Type, Union

from .console import Group, RenderableType
from .control import escape_control_codes
from .highlighter import ReprHighlighter
from .jupyter import JupyterMixin
from .panel import Panel
from .pretty import Pretty
from .table import Table
from .text import Text, TextType


def _first_paragraph(doc: str) -> str:
    """Get the first paragraph from a docstring."""
    paragraph, _, _ = doc.partition("\n\n")
    return paragraph


class Inspect(JupyterMixin):
    """A renderable to inspect any Python Object.

    Args:
        obj (Any): An object to inspect.
        title (str, optional): Title to display over inspect result, or None use type. Defaults to None.
        help (bool, optional): Show full help text rather than just first paragraph. Defaults to False.
        methods (bool, optional): Enable inspection of callables. Defaults to False.
        docs (bool, optional): Also render doc strings. Defaults to True.
        private (bool, optional): Show private attributes (beginning with underscore). Defaults to False.
        dunder (bool, optional): Show attributes starting with double underscore. Defaults to False.
        sort (bool, optional): Sort attributes alphabetically. Defaults to True.
        all (bool, optional): Show all attributes. Defaults to False.
        value (bool, optional): Pretty print value of object. Defaults to True.
    """

    def __init__(
        self,
        obj: Any,
        *,
        title: Optional[TextType] = None,
        help: bool = False,
        methods: bool = False,
        docs: bool = True,
        private: bool = False,
        dunder: bool = False,
        sort: bool = True,
        all: bool = True,
        value: bool = True,
    ) -> None:
        self.highlighter = ReprHighlighter()
        self.obj = obj
        self.title = title or self._make_title(obj)
        if all:
            methods = private = dunder = True
        self.help = help
        self.methods = methods
        self.docs = docs or help
        self.private = private or dunder
        self.dunder = dunder
        self.sort = sort
        self.value = value

    def _make_title(self, obj: Any) -> Text:
        """Make a default title."""
        title_str = (
            str(obj)
            if (isclass(obj) or callable(obj) or ismodule(obj))
            else str(type(obj))
        )
        title_text = self.highlighter(title_str)
        return title_text

    def __rich__(self) -> Panel:
        return Panel.fit(
            Group(*self._render()),
            title=self.title,
            border_style="scope.border",
            padding=(0, 1),
        )

    def _get_signature(self, name: str, obj: Any) -> Optional[Text]:
        """Get a signature for a callable."""
        try:
            _signature = str(signature(obj)) + ":"
        except ValueError:
            _signature = "(...)"
        except TypeError:
            return None

        source_filename: Optional[str] = None
        try:
            source_filename = getfile(obj)
        except (OSError, TypeError):
            # OSError is raised if obj has no source file, e.g. when defined in REPL.
            pass

        callable_name = Text(name, style="inspect.callable")
        if source_filename:
            callable_name.stylize(f"link file://{source_filename}")
        signature_text = self.highlighter(_signature)

        qualname = name or getattr(obj, "__qualname__", name)

        # If obj is a module, there may be classes (which are callable) to display
        if inspect.isclass(obj):
            prefix = "class"
        elif inspect.iscoroutinefunction(obj):
            prefix = "async def"
        else:
            prefix = "def"

        qual_signature = Text.assemble(
            (f"{prefix} ", f"inspect.{prefix.replace(' ', '_')}"),
            (qualname, "inspect.callable"),
            signature_text,
        )

        return qual_signature

    def _render(self) -> Iterable[RenderableType]:
        """Render object."""

        def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]:
            key, (_error, value) = item
            return (callable(value), key.strip("_").lower())

        def safe_getattr(attr_name: str) -> Tuple[Any, Any]:
            """Get attribute or any exception."""
            try:
                return (None, getattr(obj, attr_name))
            except Exception as error:
                return (error, None)

        obj = self.obj
        keys = dir(obj)
        total_items = len(keys)
        if not self.dunder:
            keys = [key for key in keys if not key.startswith("__")]
        if not self.private:
            keys = [key for key in keys if not key.startswith("_")]
        not_shown_count = total_items - len(keys)
        items = [(key, safe_getattr(key)) for key in keys]
        if self.sort:
            items.sort(key=sort_items)

        items_table = Table.grid(padding=(0, 1), expand=False)
        items_table.add_column(justify="right")
        add_row = items_table.add_row
        highlighter = self.highlighter

        if callable(obj):
            signature = self._get_signature("", obj)
            if signature is not None:
                yield signature
                yield ""

        if self.docs:
            _doc = self._get_formatted_doc(obj)
            if _doc is not None:
                doc_text = Text(_doc, style="inspect.help")
                doc_text = highlighter(doc_text)
                yield doc_text
                yield ""

        if self.value and not (isclass(obj) or callable(obj) or ismodule(obj)):
            yield Panel(
                Pretty(obj, indent_guides=True, max_length=10, max_string=60),
                border_style="inspect.value.border",
            )
            yield ""

        for key, (error, value) in items:
            key_text = Text.assemble(
                (
                    key,
                    "inspect.attr.dunder" if key.startswith("__") else "inspect.attr",
                ),
                (" =", "inspect.equals"),
            )
            if error is not None:
                warning = key_text.copy()
                warning.stylize("inspect.error")
                add_row(warning, highlighter(repr(error)))
                continue

            if callable(value):
                if not self.methods:
                    continue

                _signature_text = self._get_signature(key, value)
                if _signature_text is None:
                    add_row(key_text, Pretty(value, highlighter=highlighter))
                else:
                    if self.docs:
                        docs = self._get_formatted_doc(value)
                        if docs is not None:
                            _signature_text.append("\n" if "\n" in docs else " ")
                            doc = highlighter(docs)
                            doc.stylize("inspect.doc")
                            _signature_text.append(doc)

                    add_row(key_text, _signature_text)
            else:
                add_row(key_text, Pretty(value, highlighter=highlighter))
        if items_table.row_count:
            yield items_table
        elif not_shown_count:
            yield Text.from_markup(
                f"[b cyan]{not_shown_count}[/][i] attribute(s) not shown.[/i] "
                f"Run [b][magenta]inspect[/]([not b]inspect[/])[/b] for options."
            )

    def _get_formatted_doc(self, object_: Any) -> Optional[str]:
        """
        Extract the docstring of an object, process it, and return it.
        The processing consists of cleaning up the docstring's indentation,
        taking only its first paragraph if `self.help` is not True,
        and escaping its control codes.

        Args:
            object_ (Any): the object to get the docstring from.

        Returns:
            Optional[str]: the processed docstring, or None if no docstring was found.
        """
        docs = getdoc(object_)
        if docs is None:
            return None
        docs = cleandoc(docs).strip()
        if not self.help:
            docs = _first_paragraph(docs)
        return escape_control_codes(docs)


def get_object_types_mro(obj: Union[object, Type[Any]]) -> Tuple[type, ...]:
    """Returns the MRO of an object's class, or of the object itself if it's a class."""
    if not hasattr(obj, "__mro__"):
        # N.B. we cannot use `if type(obj) is type` here because it doesn't work with
        # some types of classes, such as the ones that use abc.ABCMeta.
        obj = type(obj)
    return getattr(obj, "__mro__", ())


def get_object_types_mro_as_strings(obj: object) -> Collection[str]:
    """
    Returns the MRO of an object's class as fully qualified names, or of the object itself if it's a class.

    Examples:
        `object_types_mro_as_strings(JSONDecoder)` will return `['json.decoder.JSONDecoder', 'builtins.object']`
    """
    return [
        f'{getattr(type_, "__module__", "")}.{getattr(type_, "__qualname__", "")}'
        for type_ in get_object_types_mro(obj)
    ]


def is_object_one_of_types(
    obj: object, fully_qualified_types_names: Collection[str]
) -> bool:
    """
    Returns `True` if the given object's class (or the object itself, if it's a class) has one of the
    fully qualified names in its MRO.
    """
    for type_name in get_object_types_mro_as_strings(obj):
        if type_name in fully_qualified_types_names:
            return True
    return False
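For context, a minimal sketch of how this renderable is usually reached — through the top-level `inspect` helper of the standalone `rich` package rather than this vendored pip copy:

```python
from rich import inspect

# Renders a panel with the object's value, docstring, and (here) its methods.
inspect([1, 2, 3], methods=True)
```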
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/_functools.py
DELETED
@@ -1,20 +0,0 @@
import functools


# from jaraco.functools 3.5
def pass_none(func):
    """
    Wrap func so it's not called if its first param is None

    >>> print_text = pass_none(print)
    >>> print_text('text')
    text
    >>> print_text(None)
    """

    @functools.wraps(func)
    def wrapper(param, *args, **kwargs):
        if param is not None:
            return func(param, *args, **kwargs)

    return wrapper
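A short usage sketch beyond the doctest above; the `normalize` function is hypothetical and only illustrates the None guard:

```python
@pass_none
def normalize(name):
    # Lowercase a name; with pass_none, a None input short-circuits to None.
    return name.lower()

assert normalize("SetupTools") == "setuptools"
assert normalize(None) is None
```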
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_resources/readers.py
DELETED
@@ -1,122 +0,0 @@
import collections
import pathlib
import operator

from . import abc

from ._itertools import unique_everseen
from ._compat import ZipPath


def remove_duplicates(items):
    return iter(collections.OrderedDict.fromkeys(items))


class FileReader(abc.TraversableResources):
    def __init__(self, loader):
        self.path = pathlib.Path(loader.path).parent

    def resource_path(self, resource):
        """
        Return the file system path to prevent
        `resources.path()` from creating a temporary
        copy.
        """
        return str(self.path.joinpath(resource))

    def files(self):
        return self.path


class ZipReader(abc.TraversableResources):
    def __init__(self, loader, module):
        _, _, name = module.rpartition('.')
        self.prefix = loader.prefix.replace('\\', '/') + name + '/'
        self.archive = loader.archive

    def open_resource(self, resource):
        try:
            return super().open_resource(resource)
        except KeyError as exc:
            raise FileNotFoundError(exc.args[0])

    def is_resource(self, path):
        # workaround for `zipfile.Path.is_file` returning true
        # for non-existent paths.
        target = self.files().joinpath(path)
        return target.is_file() and target.exists()

    def files(self):
        return ZipPath(self.archive, self.prefix)


class MultiplexedPath(abc.Traversable):
    """
    Given a series of Traversable objects, implement a merged
    version of the interface across all objects. Useful for
    namespace packages which may be multihomed at a single
    name.
    """

    def __init__(self, *paths):
        self._paths = list(map(pathlib.Path, remove_duplicates(paths)))
        if not self._paths:
            message = 'MultiplexedPath must contain at least one path'
            raise FileNotFoundError(message)
        if not all(path.is_dir() for path in self._paths):
            raise NotADirectoryError('MultiplexedPath only supports directories')

    def iterdir(self):
        files = (file for path in self._paths for file in path.iterdir())
        return unique_everseen(files, key=operator.attrgetter('name'))

    def read_bytes(self):
        raise FileNotFoundError(f'{self} is not a file')

    def read_text(self, *args, **kwargs):
        raise FileNotFoundError(f'{self} is not a file')

    def is_dir(self):
        return True

    def is_file(self):
        return False

    def joinpath(self, child):
        # first try to find child in current paths
        for file in self.iterdir():
            if file.name == child:
                return file
        # if it does not exist, construct it with the first path
        return self._paths[0] / child

    __truediv__ = joinpath

    def open(self, *args, **kwargs):
        raise FileNotFoundError(f'{self} is not a file')

    @property
    def name(self):
        return self._paths[0].name

    def __repr__(self):
        paths = ', '.join(f"'{path}'" for path in self._paths)
        return f'MultiplexedPath({paths})'


class NamespaceReader(abc.TraversableResources):
    def __init__(self, namespace_path):
        if 'NamespacePath' not in str(namespace_path):
            raise ValueError('Invalid path')
        self.path = MultiplexedPath(*list(namespace_path))

    def resource_path(self, resource):
        """
        Return the file system path to prevent
        `resources.path()` from creating a temporary
        copy.
        """
        return str(self.path.joinpath(resource))

    def files(self):
        return self.path
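A brief sketch of `MultiplexedPath` in use, with two hypothetical directories standing in for the portions of a namespace package (both must already exist as directories, per `__init__`):

```python
import pathlib

part_a = pathlib.Path("/tmp/pkg_a")  # hypothetical; must be an existing directory
part_b = pathlib.Path("/tmp/pkg_b")  # hypothetical; must be an existing directory

merged = MultiplexedPath(part_a, part_b)
for child in merged.iterdir():       # each name is yielded once across both dirs
    print(child.name)
print(merged / "data.txt")           # returns an existing child, or joins to the first path
```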
spaces/BilalSardar/Halal_Food_Checker/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Halal Food Checker
emoji: 🐠
colorFrom: pink
colorTo: blue
sdk: gradio
sdk_version: 3.46.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/write-models.md
DELETED
@@ -1,39 +0,0 @@
# Write Models

If you are trying to do something completely new, you may wish to implement
a model entirely from scratch within detectron2. However, in many situations you may
be interested in modifying or extending some components of an existing model.
Therefore, we also provide a registration mechanism that lets you override the
behavior of certain internal components of standard models.

For example, to add a new backbone, import this code in your code:
```python
from torch import nn

from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec

@BACKBONE_REGISTRY.register()
class ToyBackBone(Backbone):
  def __init__(self, cfg, input_shape):
    super().__init__()  # initialize nn.Module state before adding layers
    # create your own backbone
    self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=16, padding=3)

  def forward(self, image):
    return {"conv1": self.conv1(image)}

  def output_shape(self):
    return {"conv1": ShapeSpec(channels=64, stride=16)}
```
Then, you can use `cfg.MODEL.BACKBONE.NAME = 'ToyBackBone'` in your config object.
`build_model(cfg)` will then call your `ToyBackBone` instead.
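For instance, a minimal sketch of selecting the registered backbone by name (assuming the rest of the config is valid for your model):

```python
from detectron2.config import get_cfg
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.MODEL.BACKBONE.NAME = "ToyBackBone"  # look up the registered class by name
model = build_model(cfg)                 # constructs ToyBackBone internally
```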

As another example, to add new abilities to the ROI heads in the Generalized R-CNN meta-architecture,
you can implement a new
[ROIHeads](../modules/modeling.html#detectron2.modeling.ROIHeads) subclass and put it in the `ROI_HEADS_REGISTRY`.
See [densepose in detectron2](../../projects/DensePose)
and [meshrcnn](https://github.com/facebookresearch/meshrcnn)
for examples that implement new ROIHeads to perform new tasks.
And [projects/](../../projects/)
contains more examples that implement different architectures.
|
36 |
-
|
37 |
-
A complete list of registries can be found in [API documentation](../modules/modeling.html#model-registries).
|
38 |
-
You can register components in these registries to customize different parts of a model, or the
|
39 |
-
entire model.
|
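For reference, the registration shown in the deleted tutorial above is consumed at build time roughly like this (a minimal sketch: it assumes detectron2 is installed and that the module defining `ToyBackBone` has been imported so the decorator has run; `my_backbones` is a hypothetical module name):

```python
from detectron2.config import get_cfg
from detectron2.modeling import build_model

import my_backbones  # hypothetical module whose import registers ToyBackBone

cfg = get_cfg()
cfg.MODEL.BACKBONE.NAME = 'ToyBackBone'  # select the backbone by registered name
model = build_model(cfg)                 # resolved through BACKBONE_REGISTRY
```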
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/scan_by_key.h
DELETED
@@ -1,1004 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of the NVIDIA CORPORATION nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include <thrust/detail/cstdint.h>
-#include <thrust/detail/temporary_array.h>
-#include <thrust/system/cuda/detail/util.h>
-
-#include <thrust/system/cuda/execution_policy.h>
-#include <thrust/system/cuda/detail/par_to_seq.h>
-#include <thrust/system/cuda/detail/core/agent_launcher.h>
-#include <thrust/detail/mpl/math.h>
-#include <thrust/detail/minmax.h>
-#include <thrust/distance.h>
-
-namespace thrust
-{
-namespace cuda_cub {
-
-namespace __scan_by_key {
-  namespace mpl = thrust::detail::mpl::math;
-
-  template <int                      _BLOCK_THREADS,
-            int                      _ITEMS_PER_THREAD = 1,
-            cub::BlockLoadAlgorithm  _LOAD_ALGORITHM   = cub::BLOCK_LOAD_DIRECT,
-            cub::CacheLoadModifier   _LOAD_MODIFIER    = cub::LOAD_DEFAULT,
-            cub::BlockScanAlgorithm  _SCAN_ALGORITHM   = cub::BLOCK_SCAN_WARP_SCANS,
-            cub::BlockStoreAlgorithm _STORE_ALGORITHM  = cub::BLOCK_STORE_DIRECT>
-  struct PtxPolicy
-  {
-    enum
-    {
-      BLOCK_THREADS    = _BLOCK_THREADS,
-      ITEMS_PER_THREAD = _ITEMS_PER_THREAD,
-      ITEMS_PER_TILE   = BLOCK_THREADS * ITEMS_PER_THREAD,
-    };
-
-    static const cub::BlockLoadAlgorithm  LOAD_ALGORITHM  = _LOAD_ALGORITHM;
-    static const cub::CacheLoadModifier   LOAD_MODIFIER   = _LOAD_MODIFIER;
-    static const cub::BlockScanAlgorithm  SCAN_ALGORITHM  = _SCAN_ALGORITHM;
-    static const cub::BlockStoreAlgorithm STORE_ALGORITHM = _STORE_ALGORITHM;
-  };    // struct PtxPolicy
-
-  template <class Arch, class Key, class Value>
-  struct Tuning;
-
-  template <class Key, class Value>
-  struct Tuning<sm30, Key, Value>
-  {
-    enum
-    {
-      MAX_INPUT_BYTES      = mpl::max<size_t, sizeof(Key), sizeof(Value)>::value,
-      COMBINED_INPUT_BYTES = sizeof(Key) + sizeof(Value),
-
-      NOMINAL_4B_ITEMS_PER_THREAD = 6,
-
-      ITEMS_PER_THREAD = mpl::min<
-          int,
-          NOMINAL_4B_ITEMS_PER_THREAD,
-          mpl::max<
-              int,
-              1,
-              ((NOMINAL_4B_ITEMS_PER_THREAD * 8) +
-               COMBINED_INPUT_BYTES - 1) /
-                  COMBINED_INPUT_BYTES>::value>::value,
-    };
-
-    typedef PtxPolicy<128,
-                      ITEMS_PER_THREAD,
-                      cub::BLOCK_LOAD_WARP_TRANSPOSE,
-                      cub::LOAD_DEFAULT,
-                      cub::BLOCK_SCAN_WARP_SCANS,
-                      cub::BLOCK_STORE_WARP_TRANSPOSE>
-        type;
-  };    // Tuning sm30
-
-  template <class Key, class Value>
-  struct Tuning<sm35, Key, Value> : Tuning<sm30, Key, Value>
-  {
-    enum
-    {
-      NOMINAL_4B_ITEMS_PER_THREAD = 6,
-
-      ITEMS_PER_THREAD =
-          (Tuning::MAX_INPUT_BYTES <= 8)
-              ? 6
-              : mpl::min<
-                    int,
-                    NOMINAL_4B_ITEMS_PER_THREAD,
-                    mpl::max<
-                        int,
-                        1,
-                        ((NOMINAL_4B_ITEMS_PER_THREAD * 8) +
-                         Tuning::COMBINED_INPUT_BYTES - 1) /
-                            Tuning::COMBINED_INPUT_BYTES>::value>::value,
-    };
-
-    typedef PtxPolicy<128,
-                      ITEMS_PER_THREAD,
-                      cub::BLOCK_LOAD_WARP_TRANSPOSE,
-                      cub::LOAD_LDG,
-                      cub::BLOCK_SCAN_WARP_SCANS,
-                      cub::BLOCK_STORE_WARP_TRANSPOSE>
-        type;
-  };    // Tuning sm35
-
-  template <class Key, class Value>
-  struct Tuning<sm52, Key, Value> : Tuning<sm30, Key, Value>
-  {
-    enum
-    {
-      NOMINAL_4B_ITEMS_PER_THREAD = 9,
-
-      ITEMS_PER_THREAD =
-          (Tuning::MAX_INPUT_BYTES <= 8)
-              ? 9
-              : mpl::min<
-                    int,
-                    NOMINAL_4B_ITEMS_PER_THREAD,
-                    mpl::max<
-                        int,
-                        1,
-                        ((NOMINAL_4B_ITEMS_PER_THREAD * 8) +
-                         Tuning::COMBINED_INPUT_BYTES - 1) /
-                            Tuning::COMBINED_INPUT_BYTES>::value>::value,
-    };
-
-    typedef PtxPolicy<256,
-                      ITEMS_PER_THREAD,
-                      cub::BLOCK_LOAD_WARP_TRANSPOSE,
-                      cub::LOAD_LDG,
-                      cub::BLOCK_SCAN_WARP_SCANS,
-                      cub::BLOCK_STORE_WARP_TRANSPOSE>
-        type;
-  };    // Tuning sm52
-
-  template <class KeysInputIt,
-            class ValuesInputIt,
-            class ValuesOutputIt,
-            class EqualityOp,
-            class ScanOp,
-            class Size,
-            class T,
-            class Inclusive>
-  struct ScanByKeyAgent
-  {
-    typedef typename iterator_traits<KeysInputIt>::value_type key_type;
-
-    typedef T    value_type;
-    typedef Size size_type;
-
-    typedef cub::KeyValuePair<size_type, value_type> size_value_pair_t;
-    typedef cub::KeyValuePair<key_type, value_type>  key_value_pair_t;
-
-    typedef cub::ReduceByKeyScanTileState<value_type, size_type> ScanTileState;
-    typedef cub::ReduceBySegmentOp<ScanOp>                       ReduceBySegmentOp;
-
-    template <class Arch>
-    struct PtxPlan : Tuning<Arch, key_type, value_type>::type
-    {
-      typedef Tuning<Arch, key_type, value_type> tuning;
-
-      typedef typename core::LoadIterator<PtxPlan, KeysInputIt>::type   KeysLoadIt;
-      typedef typename core::LoadIterator<PtxPlan, ValuesInputIt>::type ValuesLoadIt;
-
-      typedef typename core::BlockLoad<PtxPlan, KeysLoadIt, key_type>::type     BlockLoadKeys;
-      typedef typename core::BlockLoad<PtxPlan, ValuesLoadIt, value_type>::type BlockLoadValues;
-
-      typedef typename core::BlockStore<PtxPlan,
-                                        ValuesOutputIt,
-                                        value_type>::type BlockStoreValues;
-
-      typedef cub::BlockDiscontinuity<key_type,
-                                      PtxPlan::BLOCK_THREADS,
-                                      1,
-                                      1,
-                                      Arch::ver>
-          BlockDiscontinuityKeys;
-
-      typedef cub::TilePrefixCallbackOp<size_value_pair_t,
-                                        ReduceBySegmentOp,
-                                        ScanTileState,
-                                        Arch::ver>
-          TilePrefixCallback;
-      typedef cub::BlockScan<size_value_pair_t,
-                             PtxPlan::BLOCK_THREADS,
-                             PtxPlan::SCAN_ALGORITHM,
-                             1,
-                             1,
-                             Arch::ver>
-          BlockScan;
-
-      union TempStorage
-      {
-        struct
-        {
-          typename BlockScan::TempStorage              scan;
-          typename TilePrefixCallback::TempStorage     prefix;
-          typename BlockDiscontinuityKeys::TempStorage discontinuity;
-        };
-
-        typename BlockLoadKeys::TempStorage   load_keys;
-        typename BlockLoadValues::TempStorage load_values;
-
-        typename BlockStoreValues::TempStorage store_values;
-      };    // union TempStorage
-    };    // struct PtxPlan
-
-    typedef typename core::specialize_plan_msvc10_war<PtxPlan>::type::type ptx_plan;
-
-    typedef typename ptx_plan::KeysLoadIt   KeysLoadIt;
-    typedef typename ptx_plan::ValuesLoadIt ValuesLoadIt;
-
-    typedef typename ptx_plan::BlockLoadKeys    BlockLoadKeys;
-    typedef typename ptx_plan::BlockLoadValues  BlockLoadValues;
-    typedef typename ptx_plan::BlockStoreValues BlockStoreValues;
-
-    typedef typename ptx_plan::BlockDiscontinuityKeys BlockDiscontinuityKeys;
-    typedef typename ptx_plan::TilePrefixCallback     TilePrefixCallback;
-    typedef typename ptx_plan::BlockScan              BlockScan;
-    typedef typename ptx_plan::TempStorage            TempStorage;
-
-    enum
-    {
-      BLOCK_THREADS    = ptx_plan::BLOCK_THREADS,
-      ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD,
-      ITEMS_PER_TILE   = ptx_plan::ITEMS_PER_TILE,
-    };
-
-    struct impl
-    {
-      //---------------------------------------------------------------------
-      // Per thread data
-      //---------------------------------------------------------------------
-
-      TempStorage &  storage;
-      ScanTileState &tile_state;
-
-      KeysLoadIt     keys_load_it;
-      ValuesLoadIt   values_load_it;
-      ValuesOutputIt values_output_it;
-
-      cub::InequalityWrapper<EqualityOp> inequality_op;
-      ReduceBySegmentOp                  scan_op;
-
-
-      //---------------------------------------------------------------------
-      // Block scan utility methods (first tile)
-      //---------------------------------------------------------------------
-
-      // Exclusive scan specialization
-      //
-      THRUST_DEVICE_FUNCTION void
-      scan_tile(size_value_pair_t (&scan_items)[ITEMS_PER_THREAD],
-                size_value_pair_t &tile_aggregate,
-                thrust::detail::false_type /* is_inclusive */)
-      {
-        BlockScan(storage.scan)
-            .ExclusiveScan(scan_items, scan_items, scan_op, tile_aggregate);
-      }
-
-      // Inclusive scan specialization
-      //
-      THRUST_DEVICE_FUNCTION void
-      scan_tile(size_value_pair_t (&scan_items)[ITEMS_PER_THREAD],
-                size_value_pair_t &tile_aggregate,
-                thrust::detail::true_type /* is_inclusive */)
-      {
-        BlockScan(storage.scan)
-            .InclusiveScan(scan_items, scan_items, scan_op, tile_aggregate);
-      }
-
-      //---------------------------------------------------------------------
-      // Block scan utility methods (subsequent tiles)
-      //---------------------------------------------------------------------
-
-      // Exclusive scan specialization (with prefix from predecessors)
-      //
-      THRUST_DEVICE_FUNCTION void
-      scan_tile(size_value_pair_t (&scan_items)[ITEMS_PER_THREAD],
-                size_value_pair_t & tile_aggregate,
-                TilePrefixCallback &prefix_op,
-                thrust::detail::false_type /* is_inclusive */)
-      {
-        BlockScan(storage.scan)
-            .ExclusiveScan(scan_items, scan_items, scan_op, prefix_op);
-        tile_aggregate = prefix_op.GetBlockAggregate();
-      }
-
-      // Inclusive scan specialization (with prefix from predecessors)
-      //
-      THRUST_DEVICE_FUNCTION void
-      scan_tile(size_value_pair_t (&scan_items)[ITEMS_PER_THREAD],
-                size_value_pair_t & tile_aggregate,
-                TilePrefixCallback &prefix_op,
-                thrust::detail::true_type /* is_inclusive */)
-      {
-        BlockScan(storage.scan)
-            .InclusiveScan(scan_items, scan_items, scan_op, prefix_op);
-        tile_aggregate = prefix_op.GetBlockAggregate();
-      }
-
-      //---------------------------------------------------------------------
-      // Zip utility methods
-      //---------------------------------------------------------------------
-
-      template <bool IS_LAST_TILE>
-      THRUST_DEVICE_FUNCTION void
-      zip_values_and_flags(size_type num_remaining,
-                           value_type (&values)[ITEMS_PER_THREAD],
-                           size_type (&segment_flags)[ITEMS_PER_THREAD],
-                           size_value_pair_t (&scan_items)[ITEMS_PER_THREAD])
-      {
-        // Zip values and segment_flags
-#pragma unroll
-        for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
-        {
-          // Set segment_flags for first out-of-bounds item, zero for others
-          if (IS_LAST_TILE &&
-              Size(threadIdx.x * ITEMS_PER_THREAD) + ITEM == num_remaining)
-            segment_flags[ITEM] = 1;
-
-          scan_items[ITEM].value = values[ITEM];
-          scan_items[ITEM].key   = segment_flags[ITEM];
-        }
-      }
-
-      THRUST_DEVICE_FUNCTION void unzip_values(
-          value_type (&values)[ITEMS_PER_THREAD],
-          size_value_pair_t (&scan_items)[ITEMS_PER_THREAD])
-      {
-        // Unzip values from the scanned key/value pairs
-#pragma unroll
-        for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
-        {
-          values[ITEM] = scan_items[ITEM].value;
-        }
-      }
-
-      //---------------------------------------------------------------------
-      // Cooperatively scan a device-wide sequence of tiles with other CTAs
-      //---------------------------------------------------------------------
-
-      // Process a tile of input (dynamic chained scan)
-      //
-      template <bool IS_LAST_TILE, class AddInitToScan>
-      THRUST_DEVICE_FUNCTION void
-      consume_tile(Size /*num_items*/,
-                   Size num_remaining,
-                   int  tile_idx,
-                   Size tile_base,
-                   AddInitToScan add_init_to_scan)
-      {
-        using core::sync_threadblock;
-
-        // Load items
-        key_type          keys[ITEMS_PER_THREAD];
-        value_type        values[ITEMS_PER_THREAD];
-        size_type         segment_flags[ITEMS_PER_THREAD];
-        size_value_pair_t scan_items[ITEMS_PER_THREAD];
-
-        if (IS_LAST_TILE)
-        {
-          // Fill last element with the first element
-          // because collectives are not suffix guarded
-          BlockLoadKeys(storage.load_keys)
-              .Load(keys_load_it + tile_base,
-                    keys,
-                    num_remaining,
-                    *(keys_load_it + tile_base));
-        }
-        else
-        {
-          BlockLoadKeys(storage.load_keys)
-              .Load(keys_load_it + tile_base, keys);
-        }
-
-        sync_threadblock();
-
-        if (IS_LAST_TILE)
-        {
-          // Fill last element with the first element
-          // because collectives are not suffix guarded
-          BlockLoadValues(storage.load_values)
-              .Load(values_load_it + tile_base,
-                    values,
-                    num_remaining,
-                    *(values_load_it + tile_base));
-        }
-        else
-        {
-          BlockLoadValues(storage.load_values)
-              .Load(values_load_it + tile_base, values);
-        }
-
-        sync_threadblock();
-
-        // first tile
-        if (tile_idx == 0)
-        {
-          BlockDiscontinuityKeys(storage.discontinuity)
-              .FlagHeads(segment_flags, keys, inequality_op);
-
-          // Zip values and segment_flags
-          zip_values_and_flags<IS_LAST_TILE>(num_remaining,
-                                             values,
-                                             segment_flags,
-                                             scan_items);
-
-          // Exclusive scan of values and segment_flags
-          size_value_pair_t tile_aggregate;
-          scan_tile(scan_items, tile_aggregate, Inclusive());
-
-          if (threadIdx.x == 0)
-          {
-            if (!IS_LAST_TILE)
-              tile_state.SetInclusive(0, tile_aggregate);
-
-            scan_items[0].key = 0;
-          }
-        }
-        else
-        {
-          key_type tile_pred_key = (threadIdx.x == 0)
-                                       ? keys_load_it[tile_base - 1]
-                                       : key_type();
-          BlockDiscontinuityKeys(storage.discontinuity)
-              .FlagHeads(segment_flags,
-                         keys,
-                         inequality_op,
-                         tile_pred_key);
-
-          // Zip values and segment_flags
-          zip_values_and_flags<IS_LAST_TILE>(num_remaining,
-                                             values,
-                                             segment_flags,
-                                             scan_items);
-
-          size_value_pair_t  tile_aggregate;
-          TilePrefixCallback prefix_op(tile_state, storage.prefix, scan_op, tile_idx);
-          scan_tile(scan_items, tile_aggregate, prefix_op, Inclusive());
-        }
-
-        sync_threadblock();
-
-        unzip_values(values, scan_items);
-
-        add_init_to_scan(values, segment_flags);
-
-        // Store items
-        if (IS_LAST_TILE)
-        {
-          BlockStoreValues(storage.store_values)
-              .Store(values_output_it + tile_base, values, num_remaining);
-        }
-        else
-        {
-          BlockStoreValues(storage.store_values)
-              .Store(values_output_it + tile_base, values);
-        }
-      }
-
-      //---------------------------------------------------------------------
-      // Constructor
-      //---------------------------------------------------------------------
-
-      // Dequeue and scan tiles of items as part of a dynamic chained scan
-      // with Init functor
-      template <class AddInitToScan>
-      THRUST_DEVICE_FUNCTION
-      impl(TempStorage &  storage_,
-           ScanTileState &tile_state_,
-           KeysInputIt    keys_input_it,
-           ValuesInputIt  values_input_it,
-           ValuesOutputIt values_output_it_,
-           EqualityOp     equality_op_,
-           ScanOp         scan_op_,
-           Size           num_items,
-           AddInitToScan  add_init_to_scan)
-          : storage(storage_),
-            tile_state(tile_state_),
-            keys_load_it(core::make_load_iterator(ptx_plan(), keys_input_it)),
-            values_load_it(core::make_load_iterator(ptx_plan(), values_input_it)),
-            values_output_it(values_output_it_),
-            inequality_op(equality_op_),
-            scan_op(scan_op_)
-      {
-        int  tile_idx      = blockIdx.x;
-        Size tile_base     = ITEMS_PER_TILE * tile_idx;
-        Size num_remaining = num_items - tile_base;
-
-        if (num_remaining > ITEMS_PER_TILE)
-        {
-          // Not the last tile (full)
-          consume_tile<false>(num_items,
-                              num_remaining,
-                              tile_idx,
-                              tile_base,
-                              add_init_to_scan);
-        }
-        else if (num_remaining > 0)
-        {
-          // The last tile (possibly partially-full)
-          consume_tile<true>(num_items,
-                             num_remaining,
-                             tile_idx,
-                             tile_base,
-                             add_init_to_scan);
-        }
-      }
-    };    // struct impl
-
-    //---------------------------------------------------------------------
-    // Agent entry point
-    //---------------------------------------------------------------------
-
-    template <class AddInitToScan>
-    THRUST_AGENT_ENTRY(KeysInputIt    keys_input_it,
-                       ValuesInputIt  values_input_it,
-                       ValuesOutputIt values_output_it,
-                       EqualityOp     equality_op,
-                       ScanOp         scan_op,
-                       ScanTileState  tile_state,
-                       Size           num_items,
-                       AddInitToScan  add_init_to_scan,
-                       char *         shmem)
-    {
-      TempStorage &storage = *reinterpret_cast<TempStorage *>(shmem);
-      impl(storage,
-           tile_state,
-           keys_input_it,
-           values_input_it,
-           values_output_it,
-           equality_op,
-           scan_op,
-           num_items,
-           add_init_to_scan);
-    }
-
-  };    // struct ScanByKeyAgent
-
-  template <class ScanTileState,
-            class Size>
-  struct InitAgent
-  {
-    template <class Arch>
-    struct PtxPlan : PtxPolicy<128> {};
-
-    typedef core::specialize_plan<PtxPlan> ptx_plan;
-
-    //---------------------------------------------------------------------
-    // Agent entry point
-    //---------------------------------------------------------------------
-
-    THRUST_AGENT_ENTRY(ScanTileState tile_state,
-                       Size          num_tiles,
-                       char * /*shmem*/)
-    {
-      tile_state.InitializeStatus(num_tiles);
-    }
-  };    // struct InitAgent
-
-  template<class T>
-  struct DoNothing
-  {
-    typedef T type;
-    template <int ITEMS_PER_THREAD, class Size>
-    THRUST_DEVICE_FUNCTION void
-    operator()(T (&/*items*/)[ITEMS_PER_THREAD],
-               Size (&/*flags*/)[ITEMS_PER_THREAD])
-    {
-    }
-  };    // struct DoNothing
-
-  template<class T, class ScanOp>
-  struct AddInitToScan
-  {
-    typedef T type;
-    T      init;
-    ScanOp scan_op;
-
-    THRUST_RUNTIME_FUNCTION
-    AddInitToScan(T init_, ScanOp scan_op_)
-        : init(init_), scan_op(scan_op_) {}
-
-    template <int ITEMS_PER_THREAD, class Size>
-    THRUST_DEVICE_FUNCTION void
-    operator()(T (&items)[ITEMS_PER_THREAD],
-               Size (&flags)[ITEMS_PER_THREAD])
-    {
-#pragma unroll
-      for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
-      {
-        items[ITEM] = flags[ITEM] ? init : scan_op(init, items[ITEM]);
-      }
-    }
-  };    // struct AddInitToScan
-
-  template <class Inclusive,
-            class KeysInputIt,
-            class ValuesInputIt,
-            class ValuesOutputIt,
-            class EqualityOp,
-            class ScanOp,
-            class Size,
-            class AddInitToScan>
-  THRUST_RUNTIME_FUNCTION cudaError_t
-  doit_step(void *         d_temp_storage,
-            size_t &       temp_storage_bytes,
-            KeysInputIt    keys_input_it,
-            ValuesInputIt  values_input_it,
-            Size           num_items,
-            ValuesOutputIt values_output_it,
-            EqualityOp     equality_op,
-            ScanOp         scan_op,
-            AddInitToScan  add_init_to_scan,
-            cudaStream_t   stream,
-            bool           debug_sync)
-  {
-    using core::AgentPlan;
-    using core::AgentLauncher;
-
-    cudaError_t status = cudaSuccess;
-    if (num_items == 0)
-      return cudaErrorNotSupported;
-
-    typedef typename AddInitToScan::type T;
-
-    typedef AgentLauncher<
-        ScanByKeyAgent<KeysInputIt,
-                       ValuesInputIt,
-                       ValuesOutputIt,
-                       EqualityOp,
-                       ScanOp,
-                       Size,
-                       T,
-                       Inclusive> >
-        scan_by_key_agent;
-
-    typedef typename scan_by_key_agent::ScanTileState ScanTileState;
-
-    typedef AgentLauncher<InitAgent<ScanTileState, Size> > init_agent;
-
-    AgentPlan scan_by_key_plan = scan_by_key_agent::get_plan(stream);
-    AgentPlan init_plan        = init_agent::get_plan();
-
-    int    tile_size = scan_by_key_plan.items_per_tile;
-    size_t num_tiles = (num_items + tile_size - 1) / tile_size;
-
-    size_t vshmem_size = core::vshmem_size(scan_by_key_plan.shared_memory_size,
-                                           num_tiles);
-
-    size_t allocation_sizes[2] = {0, vshmem_size};
-    status = ScanTileState::AllocationSize(static_cast<int>(num_tiles), allocation_sizes[0]);
-    CUDA_CUB_RET_IF_FAIL(status);
-
-    void *allocations[2] = {NULL, NULL};
-    status = cub::AliasTemporaries(d_temp_storage,
-                                   temp_storage_bytes,
-                                   allocations,
-                                   allocation_sizes);
-    CUDA_CUB_RET_IF_FAIL(status);
-
-    if (d_temp_storage == NULL)
-    {
-      return status;
-    }
-
-    ScanTileState tile_state;
-    status = tile_state.Init(static_cast<int>(num_tiles), allocations[0], allocation_sizes[0]);
-    CUDA_CUB_RET_IF_FAIL(status);
-
-    char *vshmem_ptr = vshmem_size > 0 ? (char*)allocations[1] : NULL;
-
-    init_agent ia(init_plan, num_tiles, stream, "scan_by_key::init_agent", debug_sync);
-    ia.launch(tile_state, num_tiles);
-    CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
-
-    scan_by_key_agent sbka(scan_by_key_plan, num_items, stream, vshmem_ptr, "scan_by_key::scan_agent", debug_sync);
-    sbka.launch(keys_input_it,
-                values_input_it,
-                values_output_it,
-                equality_op,
-                scan_op,
-                tile_state,
-                num_items,
-                add_init_to_scan);
-    CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
-    return status;
-  }    // func doit_step
-
-  template <typename Inclusive,
-            typename Derived,
-            typename KeysInputIt,
-            typename ValuesInputIt,
-            typename ValuesOutputIt,
-            typename EqualityOp,
-            typename ScanOp,
-            typename AddInitToScan>
-  THRUST_RUNTIME_FUNCTION
-  ValuesOutputIt scan_by_key(execution_policy<Derived>& policy,
-                             KeysInputIt    keys_first,
-                             KeysInputIt    keys_last,
-                             ValuesInputIt  values_first,
-                             ValuesOutputIt values_result,
-                             EqualityOp     equality_op,
-                             ScanOp         scan_op,
-                             AddInitToScan  add_init_to_scan)
-  {
-    int          num_items    = static_cast<int>(thrust::distance(keys_first, keys_last));
-    size_t       storage_size = 0;
-    cudaStream_t stream       = cuda_cub::stream(policy);
-    bool         debug_sync   = THRUST_DEBUG_SYNC_FLAG;
-
-    if (num_items == 0)
-      return values_result;
-
-    cudaError_t status;
-    status = doit_step<Inclusive>(NULL,
-                                  storage_size,
-                                  keys_first,
-                                  values_first,
-                                  num_items,
-                                  values_result,
-                                  equality_op,
-                                  scan_op,
-                                  add_init_to_scan,
-                                  stream,
-                                  debug_sync);
-    cuda_cub::throw_on_error(status, "scan_by_key: failed on 1st step");
-
-    // Allocate temporary storage.
-    thrust::detail::temporary_array<thrust::detail::uint8_t, Derived>
-        tmp(policy, storage_size);
-    void *ptr = static_cast<void*>(tmp.data().get());
-
-    status = doit_step<Inclusive>(ptr,
-                                  storage_size,
-                                  keys_first,
-                                  values_first,
-                                  num_items,
-                                  values_result,
-                                  equality_op,
-                                  scan_op,
-                                  add_init_to_scan,
-                                  stream,
-                                  debug_sync);
-    cuda_cub::throw_on_error(status, "scan_by_key: failed on 2nd step");
-
-    status = cuda_cub::synchronize(policy);
-    cuda_cub::throw_on_error(status, "scan_by_key: failed to synchronize");
-
-    return values_result + num_items;
-  }    // func scan_by_key
-}    // namespace __scan_by_key
-
-//-------------------------
-// Thrust API entry points
-//-------------------------
-
-//---------------------------
-// Inclusive scan
-//---------------------------
-
-__thrust_exec_check_disable__
-template <class Derived,
-          class KeyInputIt,
-          class ValInputIt,
-          class ValOutputIt,
-          class BinaryPred,
-          class ScanOp>
-ValOutputIt __host__ __device__
-inclusive_scan_by_key(execution_policy<Derived> &policy,
-                      KeyInputIt  key_first,
-                      KeyInputIt  key_last,
-                      ValInputIt  value_first,
-                      ValOutputIt value_result,
-                      BinaryPred  binary_pred,
-                      ScanOp      scan_op)
-{
-  ValOutputIt ret = value_result;
-  if (__THRUST_HAS_CUDART__)
-  {
-    typedef typename iterator_traits<ValInputIt>::value_type T;
-    ret = __scan_by_key::scan_by_key<thrust::detail::true_type>(policy,
-                                                                key_first,
-                                                                key_last,
-                                                                value_first,
-                                                                value_result,
-                                                                binary_pred,
-                                                                scan_op,
-                                                                __scan_by_key::DoNothing<T>());
-  }
-  else
-  {
-#if !__THRUST_HAS_CUDART__
-    ret = thrust::inclusive_scan_by_key(cvt_to_seq(derived_cast(policy)),
-                                        key_first,
-                                        key_last,
-                                        value_first,
-                                        value_result,
-                                        binary_pred,
-                                        scan_op);
-#endif
-  }
-  return ret;
-}
-
-template <class Derived,
-          class KeyInputIt,
-          class ValInputIt,
-          class ValOutputIt,
-          class BinaryPred>
-ValOutputIt __host__ __device__
-inclusive_scan_by_key(execution_policy<Derived> &policy,
-                      KeyInputIt  key_first,
-                      KeyInputIt  key_last,
-                      ValInputIt  value_first,
-                      ValOutputIt value_result,
-                      BinaryPred  binary_pred)
-{
-  typedef typename thrust::iterator_traits<ValOutputIt>::value_type value_type;
-  return cuda_cub::inclusive_scan_by_key(policy,
-                                         key_first,
-                                         key_last,
-                                         value_first,
-                                         value_result,
-                                         binary_pred,
-                                         plus<value_type>());
-}
-
-template <class Derived,
-          class KeyInputIt,
-          class ValInputIt,
-          class ValOutputIt>
-ValOutputIt __host__ __device__
-inclusive_scan_by_key(execution_policy<Derived> &policy,
-                      KeyInputIt  key_first,
-                      KeyInputIt  key_last,
-                      ValInputIt  value_first,
-                      ValOutputIt value_result)
-{
-  typedef typename thrust::iterator_traits<KeyInputIt>::value_type key_type;
-  return cuda_cub::inclusive_scan_by_key(policy,
-                                         key_first,
-                                         key_last,
-                                         value_first,
-                                         value_result,
-                                         equal_to<key_type>());
-}
-
-
-//---------------------------
-// Exclusive scan
-//---------------------------
-
-__thrust_exec_check_disable__
-template <class Derived,
-          class KeyInputIt,
-          class ValInputIt,
-          class ValOutputIt,
-          class Init,
-          class BinaryPred,
-          class ScanOp>
-ValOutputIt __host__ __device__
-exclusive_scan_by_key(execution_policy<Derived> &policy,
-                      KeyInputIt  key_first,
-                      KeyInputIt  key_last,
-                      ValInputIt  value_first,
-                      ValOutputIt value_result,
-                      Init        init,
-                      BinaryPred  binary_pred,
-                      ScanOp      scan_op)
-{
-  ValOutputIt ret = value_result;
-  if (__THRUST_HAS_CUDART__)
-  {
-    ret = __scan_by_key::scan_by_key<thrust::detail::false_type>(
-        policy,
-        key_first,
-        key_last,
-        value_first,
-        value_result,
-        binary_pred,
-        scan_op,
-        __scan_by_key::AddInitToScan<Init, ScanOp>(init, scan_op));
-  }
-  else
-  {
-#if !__THRUST_HAS_CUDART__
-    ret = thrust::exclusive_scan_by_key(cvt_to_seq(derived_cast(policy)),
-                                        key_first,
-                                        key_last,
-                                        value_first,
-                                        value_result,
-                                        init,
-                                        binary_pred,
-                                        scan_op);
-#endif
-  }
-  return ret;
-}
-
-template <class Derived,
-          class KeyInputIt,
-          class ValInputIt,
-          class ValOutputIt,
-          class Init,
-          class BinaryPred>
-ValOutputIt __host__ __device__
-exclusive_scan_by_key(execution_policy<Derived> &policy,
-                      KeyInputIt  key_first,
-                      KeyInputIt  key_last,
-                      ValInputIt  value_first,
-                      ValOutputIt value_result,
-                      Init        init,
-                      BinaryPred  binary_pred)
-{
-  return cuda_cub::exclusive_scan_by_key(policy,
-                                         key_first,
-                                         key_last,
-                                         value_first,
-                                         value_result,
-                                         init,
-                                         binary_pred,
-                                         plus<Init>());
-}
-
-template <class Derived,
-          class KeyInputIt,
-          class ValInputIt,
-          class ValOutputIt,
-          class Init>
-ValOutputIt __host__ __device__
-exclusive_scan_by_key(execution_policy<Derived> &policy,
-                      KeyInputIt  key_first,
-                      KeyInputIt  key_last,
-                      ValInputIt  value_first,
-                      ValOutputIt value_result,
-                      Init        init)
-{
-  typedef typename iterator_traits<KeyInputIt>::value_type key_type;
-  return cuda_cub::exclusive_scan_by_key(policy,
-                                         key_first,
-                                         key_last,
-                                         value_first,
-                                         value_result,
-                                         init,
-                                         equal_to<key_type>());
-}
-
-
-template <class Derived,
-          class KeyInputIt,
-          class ValInputIt,
-          class ValOutputIt>
-ValOutputIt __host__ __device__
-exclusive_scan_by_key(execution_policy<Derived> &policy,
-                      KeyInputIt  key_first,
-                      KeyInputIt  key_last,
-                      ValInputIt  value_first,
-                      ValOutputIt value_result)
-{
-  typedef typename iterator_traits<ValOutputIt>::value_type value_type;
-  return cuda_cub::exclusive_scan_by_key(policy,
-                                         key_first,
-                                         key_last,
-                                         value_first,
-                                         value_result,
-                                         value_type(0));
-}
-
-
-} // namespace cuda_cub
-} // end namespace thrust
-
-#include <thrust/scan.h>
-
-#endif
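The semantics the deleted header implements can be stated compactly: a scan by key is a running scan that restarts whenever the key changes, which is what the segment flags from `BlockDiscontinuity` encode on the GPU. An illustrative Python model of the inclusive variant (semantics only, not the Thrust API; the real version is tiled and uses `TilePrefixCallbackOp` for decoupled look-back between tiles):

```python
# Sequential model of inclusive_scan_by_key: the running scan restarts
# at every key boundary ("segment head").
def inclusive_scan_by_key(keys, values, scan_op=lambda a, b: a + b):
    out = []
    for i, v in enumerate(values):
        if i > 0 and keys[i] == keys[i - 1]:
            out.append(scan_op(out[-1], v))  # continue the current segment
        else:
            out.append(v)                    # key changed: start a new segment
    return out

assert inclusive_scan_by_key([1, 1, 2, 2, 2], [1, 2, 3, 4, 5]) == [1, 3, 3, 7, 12]
```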
spaces/CVPR/Text2Human/Text2Human/models/__init__.py
DELETED
@@ -1,42 +0,0 @@
-import glob
-import importlib
-import logging
-import os.path as osp
-
-# automatically scan and import model modules
-# scan all the files under the 'models' folder and collect files ending with
-# '_model.py'
-model_folder = osp.dirname(osp.abspath(__file__))
-model_filenames = [
-    osp.splitext(osp.basename(v))[0]
-    for v in glob.glob(f'{model_folder}/*_model.py')
-]
-# import all the model modules
-_model_modules = [
-    importlib.import_module(f'models.{file_name}')
-    for file_name in model_filenames
-]
-
-
-def create_model(opt):
-    """Create model.
-
-    Args:
-        opt (dict): Configuration. It contains:
-            model_type (str): Model type.
-    """
-    model_type = opt['model_type']
-
-    # dynamic instantiation
-    for module in _model_modules:
-        model_cls = getattr(module, model_type, None)
-        if model_cls is not None:
-            break
-    if model_cls is None:
-        raise ValueError(f'Model {model_type} is not found.')
-
-    model = model_cls(opt)
-
-    logger = logging.getLogger('base')
-    logger.info(f'Model [{model.__class__.__name__}] is created.')
-    return model
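From the caller's side, the factory above means a plain config dict is enough to pick a model; a hypothetical sketch (`ToyModel` is illustrative, not a class from the original repo; any class defined in a `models/*_model.py` file is discoverable):

```python
from models import create_model

# 'ToyModel' is assumed to be defined in a hypothetical models/toy_model.py;
# create_model locates it by name among the auto-imported *_model.py modules.
opt = {'model_type': 'ToyModel'}
model = create_model(opt)  # logs "Model [ToyModel] is created." via the 'base' logger
```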
spaces/CVPR/lama-example/saicinpainting/training/modules/depthwise_sep_conv.py
DELETED
@@ -1,17 +0,0 @@
-import torch
-import torch.nn as nn
-
-class DepthWiseSeperableConv(nn.Module):
-    def __init__(self, in_dim, out_dim, *args, **kwargs):
-        super().__init__()
-        if 'groups' in kwargs:
-            # ignoring groups for Depthwise Sep Conv
-            del kwargs['groups']
-
-        self.depthwise = nn.Conv2d(in_dim, in_dim, *args, groups=in_dim, **kwargs)
-        self.pointwise = nn.Conv2d(in_dim, out_dim, kernel_size=1)
-
-    def forward(self, x):
-        out = self.depthwise(x)
-        out = self.pointwise(out)
-        return out
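The point of the depthwise/pointwise factorization above is the parameter saving: a dense 3x3 convolution from 64 to 128 channels holds 64*128*9 = 73,728 weights, while the separable pair holds 64*9 + 64*128 = 8,768, roughly an 8x reduction (bias terms ignored). A quick shape and parameter check, assuming the class above is in scope:

```python
import torch

# Assumes DepthWiseSeperableConv (defined above) has been imported.
sep = DepthWiseSeperableConv(64, 128, kernel_size=3, padding=1)
x = torch.randn(1, 64, 32, 32)
assert sep(x).shape == (1, 128, 32, 32)  # spatial size preserved, channels mapped

n_weights = sum(p.numel() for p in sep.parameters() if p.dim() > 1)  # weights only
assert n_weights == 64 * 9 + 64 * 128  # 8,768 vs. 73,728 for a dense 3x3 conv
```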
spaces/ChallengeHub/Chinese-LangChain/tests/test_langchain.py
DELETED
@@ -1,36 +0,0 @@
-import os
-
-from langchain.document_loaders import UnstructuredFileLoader
-from langchain.embeddings.huggingface import HuggingFaceEmbeddings
-from langchain.vectorstores import FAISS
-
-embedding_model_name = '/home/searchgpt/pretrained_models/ernie-gram-zh'
-docs_path = '/home/searchgpt/yq/Knowledge-ChatGLM/docs'
-embeddings = HuggingFaceEmbeddings(model_name=embedding_model_name)
-
-docs = []
-
-for doc in os.listdir(docs_path):
-    if doc.endswith('.txt'):
-        print(doc)
-        loader = UnstructuredFileLoader(f'{docs_path}/{doc}', mode="elements")
-        doc = loader.load()
-        docs.extend(doc)
-
-vector_store = FAISS.from_documents(docs, embeddings)
-vector_store.save_local('vector_store_local')
-search_result = vector_store.similarity_search_with_score(query='科比', k=2)
-print(search_result)
-
-loader = UnstructuredFileLoader(f'{docs_path}/added/马保国.txt', mode="elements")
-doc = loader.load()
-vector_store.add_documents(doc)
-print(doc)
-search_result = vector_store.similarity_search_with_score(query='科比·布莱恩特', k=2)
-print(search_result)
-
-"""
-[(Document(page_content='王治郅,1977年7月8日出生于北京,前中国篮球运动员,司职大前锋/中锋,现已退役。 [1]', metadata={'source': 'docs/王治郅.txt', 'filename': 'docs/王治郅.txt', 'category': 'Title'}), 285.40765), (Document(page_content='王治郅是中国篮球界进入NBA的第一人,被评选为中国篮坛50大杰出人物和中国申办奥运特使。他和姚明、蒙克·巴特尔一起,被称为篮球场上的“移动长城”。 [5]', metadata={'source': 'docs/王治郅.txt', 'filename': 'docs/王治郅.txt', 'category': 'NarrativeText'}), 290.19086)]
-[Document(page_content='科比·布莱恩特(Kobe Bryant,1978年8月23日—2020年1月26日),全名科比·比恩·布莱恩特·考克斯(Kobe Bean Bryant Cox),出生于美国宾夕法尼亚州费城,美国已故篮球运动员,司职得分后卫/小前锋。 [5] [24] [84]', metadata={'source': 'docs/added/科比.txt', 'filename': 'docs/added/科比.txt', 'category': 'NarrativeText'}), Document(page_content='1996年NBA选秀,科比于第1轮第13顺位被夏洛特黄蜂队选中并被交易至洛杉矶湖人队,整个NBA生涯都效力于洛杉矶湖人队;共获得5次NBA总冠军、1次NBA常规赛MVP、2次NBA总决赛MVP、4次NBA全明星赛MVP、2次NBA赛季得分王;共入选NBA全明星首发阵容18次、NBA最佳阵容15次(其中一阵11次、二阵2次、三阵2次)、NBA最佳防守阵容12次(其中一阵9次、二阵3次)。 [9] [24]', metadata={'source': 'docs/added/科比.txt', 'filename': 'docs/added/科比.txt', 'category': 'Title'}), Document(page_content='2007年,科比首次入选美国国家男子篮球队,后帮助美国队夺得2007年美洲男篮锦标赛金牌、2008年北京奥运会男子篮球金牌以及2012年伦敦奥运会男子篮球金牌。 [91]', metadata={'source': 'docs/added/科比.txt', 'filename': 'docs/added/科比.txt', 'category': 'Title'}), Document(page_content='2015年11月30日,科比发文宣布将在赛季结束后退役。 [100] 2017年12月19日,湖人队为科比举行球衣退役仪式。 [22] 2020年4月5日,科比入选奈·史密斯篮球名人纪念堂。 [7]', metadata={'source': 'docs/added/科比.txt', 'filename': 'docs/added/科比.txt', 'category': 'Title'}), Document(page_content='美国时间2020年1月26日(北京时间2020年1月27日),科比因直升机事故遇难,享年41岁。 [23]', metadata={'source': 'docs/added/科比.txt', 'filename': 'docs/added/科比.txt', 'category': 'Title'})]
-[(Document(page_content='科比·布莱恩特(Kobe Bryant,1978年8月23日—2020年1月26日),全名科比·比恩·布莱恩特·考克斯(Kobe Bean Bryant Cox),出生于美国宾夕法尼亚州费城,美国已故篮球运动员,司职得分后卫/小前锋。 [5] [24] [84]', metadata={'source': 'docs/added/科比.txt', 'filename': 'docs/added/科比.txt', 'category': 'NarrativeText'}), 179.68744), (Document(page_content='2015年11月30日,科比发文宣布将在赛季结束后退役。 [100] 2017年12月19日,湖人队为科比举行球衣退役仪式。 [22] 2020年4月5日,科比入选奈·史密斯篮球名人纪念堂。 [7]', metadata={'source': 'docs/added/科比.txt', 'filename': 'docs/added/科比.txt', 'category': 'Title'}), 200.57565)]
-"""
spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/image_gen.py
DELETED
@@ -1,163 +0,0 @@
-""" Image Generation Module for AutoGPT."""
-import io
-import os.path
-import uuid
-from base64 import b64decode
-
-import openai
-import requests
-from PIL import Image
-
-from autogpt.config import Config
-from autogpt.workspace import path_in_workspace
-
-CFG = Config()
-
-
-def generate_image(prompt: str, size: int = 256) -> str:
-    """Generate an image from a prompt.
-
-    Args:
-        prompt (str): The prompt to use
-        size (int, optional): The size of the image. Defaults to 256. (Not supported by HuggingFace)
-
-    Returns:
-        str: The filename of the image
-    """
-    filename = f"{str(uuid.uuid4())}.jpg"
-
-    # DALL-E
-    if CFG.image_provider == "dalle":
-        return generate_image_with_dalle(prompt, filename, size)
-    # HuggingFace
-    elif CFG.image_provider == "huggingface":
-        return generate_image_with_hf(prompt, filename)
-    # SD WebUI
-    elif CFG.image_provider == "sdwebui":
-        return generate_image_with_sd_webui(prompt, filename, size)
-    return "No Image Provider Set"
-
-
-def generate_image_with_hf(prompt: str, filename: str) -> str:
-    """Generate an image with HuggingFace's API.
-
-    Args:
-        prompt (str): The prompt to use
-        filename (str): The filename to save the image to
-
-    Returns:
-        str: The filename of the image
-    """
-    API_URL = (
-        f"https://api-inference.huggingface.co/models/{CFG.huggingface_image_model}"
-    )
-    if CFG.huggingface_api_token is None:
-        raise ValueError(
-            "You need to set your Hugging Face API token in the config file."
-        )
-    headers = {
-        "Authorization": f"Bearer {CFG.huggingface_api_token}",
-        "X-Use-Cache": "false",
-    }
-
-    response = requests.post(
-        API_URL,
-        headers=headers,
-        json={
-            "inputs": prompt,
-        },
-    )
-
-    image = Image.open(io.BytesIO(response.content))
-    print(f"Image Generated for prompt:{prompt}")
-
-    image.save(path_in_workspace(filename))
-
-    return f"Saved to disk:{filename}"
-
-
-def generate_image_with_dalle(prompt: str, filename: str, size: int = 256) -> str:
-    """Generate an image with DALL-E.
-
-    Args:
-        prompt (str): The prompt to use
-        filename (str): The filename to save the image to
-        size (int, optional): The size of the image. Defaults to 256.
-
-    Returns:
-        str: The filename of the image
-    """
-    openai.api_key = CFG.openai_api_key
-
-    # Check for supported image sizes
-    if size not in [256, 512, 1024]:
-        closest = min([256, 512, 1024], key=lambda x: abs(x - size))
-        print(
-            f"DALL-E only supports image sizes of 256x256, 512x512, or 1024x1024. Setting to {closest}, was {size}."
-        )
-        size = closest
-
-    response = openai.Image.create(
-        prompt=prompt,
-        n=1,
-        size=f"{size}x{size}",
-        response_format="b64_json",
-    )
-
-    print(f"Image Generated for prompt:{prompt}")
-
-    image_data = b64decode(response["data"][0]["b64_json"])
-
-    with open(path_in_workspace(filename), mode="wb") as png:
-        png.write(image_data)
-
-    return f"Saved to disk:{filename}"
-
-
-def generate_image_with_sd_webui(
-    prompt: str,
-    filename: str,
-    size: int = 512,
-    negative_prompt: str = "",
-    extra: dict = {},
-) -> str:
-    """Generate an image with Stable Diffusion webui.
-    Args:
-        prompt (str): The prompt to use
-        filename (str): The filename to save the image to
-        size (int, optional): The size of the image. Defaults to 512.
-        negative_prompt (str, optional): The negative prompt to use. Defaults to "".
-        extra (dict, optional): Extra parameters to pass to the API. Defaults to {}.
-    Returns:
-        str: The filename of the image
-    """
-    # Create a session and set the basic auth if needed
-    s = requests.Session()
-    if CFG.sd_webui_auth:
-        username, password = CFG.sd_webui_auth.split(":")
-        s.auth = (username, password or "")
-
-    # Generate the images
-    response = requests.post(
-        f"{CFG.sd_webui_url}/sdapi/v1/txt2img",
-        json={
-            "prompt": prompt,
-            "negative_prompt": negative_prompt,
-            "sampler_index": "DDIM",
-            "steps": 20,
-            "cfg_scale": 7.0,
-            "width": size,
-            "height": size,
-            "n_iter": 1,
-            **extra,
-        },
-    )
-
-    print(f"Image Generated for prompt:{prompt}")
-
-    # Save the image to disk
-    response = response.json()
-    b64 = b64decode(response["images"][0].split(",", 1)[0])
-    image = Image.open(io.BytesIO(b64))
-    image.save(path_in_workspace(filename))
-
-    return f"Saved to disk:{filename}"
spaces/ChrisCaviar/ControlNet-v1-1/app.py
DELETED
@@ -1,130 +0,0 @@
|
|
1 |
-
#!/usr/bin/env python
|
2 |
-
|
3 |
-
from __future__ import annotations
|
4 |
-
|
5 |
-
import os
|
6 |
-
|
7 |
-
import gradio as gr
|
8 |
-
import torch
|
9 |
-
|
10 |
-
from app_canny import create_demo as create_demo_canny
|
11 |
-
from app_depth import create_demo as create_demo_depth
|
12 |
-
from app_ip2p import create_demo as create_demo_ip2p
|
13 |
-
from app_lineart import create_demo as create_demo_lineart
|
14 |
-
from app_mlsd import create_demo as create_demo_mlsd
|
15 |
-
from app_normal import create_demo as create_demo_normal
|
16 |
-
from app_openpose import create_demo as create_demo_openpose
|
17 |
-
from app_scribble import create_demo as create_demo_scribble
|
18 |
-
from app_scribble_interactive import \
|
19 |
-
create_demo as create_demo_scribble_interactive
|
20 |
-
from app_segmentation import create_demo as create_demo_segmentation
|
21 |
-
from app_shuffle import create_demo as create_demo_shuffle
|
22 |
-
from app_softedge import create_demo as create_demo_softedge
|
23 |
-
from model import Model
|
24 |
-
|
25 |
-
DESCRIPTION = '# ControlNet v1.1'
|
26 |
-
|
27 |
-
SPACE_ID = os.getenv('SPACE_ID')
|
28 |
-
ALLOW_CHANGING_BASE_MODEL = SPACE_ID != 'hysts/ControlNet-v1-1'
|
29 |
-
|
30 |
-
if SPACE_ID is not None:
|
31 |
-
DESCRIPTION += f'\n<p>For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. <a href="https://huggingface.co/spaces/{SPACE_ID}?duplicate=true"><img style="display: inline; margin-top: 0em; margin-bottom: 0em" src="https://bit.ly/3gLdBN6" alt="Duplicate Space" /></a></p>'
|
32 |
-
|
33 |
-
if not torch.cuda.is_available():
|
34 |
-
DESCRIPTION += '\n<p>Running on CPU 🥶 This demo does not work on CPU.</p>'
|
35 |
-
|
36 |
-
MAX_NUM_IMAGES = int(os.getenv('MAX_NUM_IMAGES', '3'))
|
37 |
-
DEFAULT_NUM_IMAGES = min(MAX_NUM_IMAGES,
|
38 |
-
int(os.getenv('DEFAULT_NUM_IMAGES', '1')))
|
39 |
-
|
40 |
-
DEFAULT_MODEL_ID = os.getenv('DEFAULT_MODEL_ID',
|
41 |
-
                                      'runwayml/stable-diffusion-v1-5')
model = Model(base_model_id=DEFAULT_MODEL_ID, task_name='Canny')

with gr.Blocks(css='style.css') as demo:
    gr.Markdown(DESCRIPTION)
    with gr.Tabs():
        with gr.TabItem('Canny'):
            create_demo_canny(model.process_canny,
                              max_images=MAX_NUM_IMAGES,
                              default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('MLSD'):
            create_demo_mlsd(model.process_mlsd,
                             max_images=MAX_NUM_IMAGES,
                             default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('Scribble'):
            create_demo_scribble(model.process_scribble,
                                 max_images=MAX_NUM_IMAGES,
                                 default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('Scribble Interactive'):
            create_demo_scribble_interactive(
                model.process_scribble_interactive,
                max_images=MAX_NUM_IMAGES,
                default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('SoftEdge'):
            create_demo_softedge(model.process_softedge,
                                 max_images=MAX_NUM_IMAGES,
                                 default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('OpenPose'):
            create_demo_openpose(model.process_openpose,
                                 max_images=MAX_NUM_IMAGES,
                                 default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('Segmentation'):
            create_demo_segmentation(model.process_segmentation,
                                     max_images=MAX_NUM_IMAGES,
                                     default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('Depth'):
            create_demo_depth(model.process_depth,
                              max_images=MAX_NUM_IMAGES,
                              default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('Normal map'):
            create_demo_normal(model.process_normal,
                               max_images=MAX_NUM_IMAGES,
                               default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('Lineart'):
            create_demo_lineart(model.process_lineart,
                                max_images=MAX_NUM_IMAGES,
                                default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('Content Shuffle'):
            create_demo_shuffle(model.process_shuffle,
                                max_images=MAX_NUM_IMAGES,
                                default_num_images=DEFAULT_NUM_IMAGES)
        with gr.TabItem('Instruct Pix2Pix'):
            create_demo_ip2p(model.process_ip2p,
                             max_images=MAX_NUM_IMAGES,
                             default_num_images=DEFAULT_NUM_IMAGES)

    with gr.Accordion(label='Base model', open=False):
        with gr.Row():
            with gr.Column():
                current_base_model = gr.Text(label='Current base model')
            with gr.Column(scale=0.3):
                check_base_model_button = gr.Button('Check current base model')
        with gr.Row():
            with gr.Column():
                new_base_model_id = gr.Text(
                    label='New base model',
                    max_lines=1,
                    placeholder='runwayml/stable-diffusion-v1-5',
                    info='The base model must be compatible with Stable Diffusion v1.5.',
                    interactive=ALLOW_CHANGING_BASE_MODEL)
            with gr.Column(scale=0.3):
                change_base_model_button = gr.Button(
                    'Change base model', interactive=ALLOW_CHANGING_BASE_MODEL)
        if not ALLOW_CHANGING_BASE_MODEL:
            gr.Markdown(
                f'''The base model is not allowed to be changed in this Space so as not to slow down the demo, but it can be changed if you duplicate the Space. <a href="https://huggingface.co/spaces/{SPACE_ID}?duplicate=true"><img style="display: inline; margin-top: 0em; margin-bottom: 0em" src="https://bit.ly/3gLdBN6" alt="Duplicate Space" /></a>'''
            )

    check_base_model_button.click(fn=lambda: model.base_model_id,
                                  outputs=current_base_model,
                                  queue=False)
    new_base_model_id.submit(fn=model.set_base_model,
                             inputs=new_base_model_id,
                             outputs=current_base_model)
    change_base_model_button.click(fn=model.set_base_model,
                                   inputs=new_base_model_id,
                                   outputs=current_base_model)

demo.queue(api_open=False, max_size=10).launch()
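The block above wires a single `Model` instance into every tab and binds the same `set_base_model` callback to both a text-submit and a button-click event. A minimal sketch of that wiring pattern, with a hypothetical `FakeModel` standing in for the real ControlNet `Model` class so it runs without any pipeline weights:

import gradio as gr

class FakeModel:
    def __init__(self, base_model_id: str):
        self.base_model_id = base_model_id

    def set_base_model(self, model_id: str) -> str:
        # The real class would reload pipelines here; we only store the id.
        self.base_model_id = model_id
        return self.base_model_id

model = FakeModel('runwayml/stable-diffusion-v1-5')

with gr.Blocks() as demo:
    current = gr.Text(label='Current base model')
    new_id = gr.Text(label='New base model', max_lines=1)
    check = gr.Button('Check current base model')

    # Same pattern as above: one setter bound to submit, a reader on click.
    check.click(fn=lambda: model.base_model_id, outputs=current, queue=False)
    new_id.submit(fn=model.set_base_model, inputs=new_id, outputs=current)

demo.queue(api_open=False, max_size=10).launch()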
spaces/CofAI/chat.b4/client/css/message.css
DELETED
@@ -1,65 +0,0 @@
.message {
    width: 100%;
    overflow-wrap: break-word;
    display: flex;
    gap: var(--section-gap);
    padding: var(--section-gap);
    padding-bottom: 0;
}

.message:last-child {
    animation: 0.6s show_message;
}

@keyframes show_message {
    from {
        transform: translateY(10px);
        opacity: 0;
    }
}

.message .avatar-container img {
    max-width: 48px;
    max-height: 48px;
    box-shadow: 0.4px 0.5px 0.7px -2px rgba(0, 0, 0, 0.08), 1.1px 1.3px 2px -2px rgba(0, 0, 0, 0.041),
        2.7px 3px 4.8px -2px rgba(0, 0, 0, 0.029), 9px 10px 16px -2px rgba(0, 0, 0, 0.022);
}

.message .content {
    display: flex;
    flex-direction: column;
    width: 90%;
    gap: 18px;
}

.message .content p,
.message .content li,
.message .content code {
    font-size: 1rem;
    line-height: 1.3;
}

@media screen and (max-height: 720px) {
    .message {
        padding: 12px;
        gap: 0;
    }

    .message .content {
        margin-left: 8px;
        width: 80%;
    }

    .message .avatar-container img {
        max-width: 32px;
        max-height: 32px;
    }

    .message .content,
    .message .content p,
    .message .content li,
    .message .content code {
        font-size: 0.875rem;
        line-height: 1.3;
    }
}
spaces/CofAI/chat/g4f/Provider/Providers/Xiaor.py
DELETED
@@ -1,39 +0,0 @@
import requests
import os
import json
from ...typing import sha256, Dict, get_type_hints

url = 'https://xiaor.eu.org'
model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k',
         'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
supports_stream = True
needs_auth = False


def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
    headers = {
        'Content-Type': 'application/json',
    }
    data = {
        'model': model,
        'temperature': temperature,
        'presence_penalty': 0,
        'messages': messages,
    }
    response = requests.post(url + '/p1/v1/chat/completions',
                             headers=headers, json=data, stream=True)

    if stream:
        for chunk in response.iter_content(chunk_size=None):
            chunk = chunk.decode('utf-8')
            if chunk.strip():
                message = json.loads(chunk)['choices'][0]['message']['content']
                yield message
    else:
        message = response.json()['choices'][0]['message']['content']
        yield message


params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
    '(%s)' % ', '.join(
        [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
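`_create_completion` posts an OpenAI-style chat payload and yields content either chunk-by-chunk or all at once. A sketch of driving it directly; the upstream endpoint is third-party and may no longer respond, so treat this purely as an interface illustration:

messages = [{'role': 'user', 'content': 'Say hello in one word.'}]

# stream=True yields decoded chunks as they arrive; stream=False yields once.
for part in _create_completion(model='gpt-3.5-turbo',
                               messages=messages, stream=True):
    print(part, end='', flush=True)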
spaces/Covert1107/sd-diffusers-webui/modules/lora.py
DELETED
@@ -1,183 +0,0 @@
# LoRA network module
# reference:
# https://github.com/microsoft/LoRA/blob/main/loralib/layers.py
# https://github.com/cloneofsimo/lora/blob/master/lora_diffusion/lora.py
# https://github.com/bmaltais/kohya_ss/blob/master/networks/lora.py#L48

import math
import os
import torch
import modules.safe as _
from safetensors.torch import load_file


class LoRAModule(torch.nn.Module):
    """
    replaces forward method of the original Linear, instead of replacing the original Linear module.
    """

    def __init__(
        self,
        lora_name,
        org_module: torch.nn.Module,
        multiplier=1.0,
        lora_dim=4,
        alpha=1,
    ):
        """if alpha == 0 or None, alpha is rank (no scaling)."""
        super().__init__()
        self.lora_name = lora_name
        self.lora_dim = lora_dim

        if org_module.__class__.__name__ == "Conv2d":
            in_dim = org_module.in_channels
            out_dim = org_module.out_channels
            self.lora_down = torch.nn.Conv2d(in_dim, lora_dim, (1, 1), bias=False)
            self.lora_up = torch.nn.Conv2d(lora_dim, out_dim, (1, 1), bias=False)
        else:
            in_dim = org_module.in_features
            out_dim = org_module.out_features
            self.lora_down = torch.nn.Linear(in_dim, lora_dim, bias=False)
            self.lora_up = torch.nn.Linear(lora_dim, out_dim, bias=False)

        if type(alpha) == torch.Tensor:
            alpha = alpha.detach().float().numpy()  # without casting, bf16 causes error

        alpha = lora_dim if alpha is None or alpha == 0 else alpha
        self.scale = alpha / self.lora_dim
        self.register_buffer("alpha", torch.tensor(alpha))  # can be treated as a constant

        # same as microsoft's
        torch.nn.init.kaiming_uniform_(self.lora_down.weight, a=math.sqrt(5))
        torch.nn.init.zeros_(self.lora_up.weight)

        self.multiplier = multiplier
        self.org_module = org_module  # remove in applying
        self.enable = False

    def resize(self, rank, alpha, multiplier):
        self.alpha = torch.tensor(alpha)
        self.multiplier = multiplier
        self.scale = alpha / rank
        if self.lora_down.__class__.__name__ == "Conv2d":
            in_dim = self.lora_down.in_channels
            out_dim = self.lora_up.out_channels
            self.lora_down = torch.nn.Conv2d(in_dim, rank, (1, 1), bias=False)
            self.lora_up = torch.nn.Conv2d(rank, out_dim, (1, 1), bias=False)
        else:
            in_dim = self.lora_down.in_features
            out_dim = self.lora_up.out_features
            self.lora_down = torch.nn.Linear(in_dim, rank, bias=False)
            self.lora_up = torch.nn.Linear(rank, out_dim, bias=False)

    def apply(self):
        if hasattr(self, "org_module"):
            self.org_forward = self.org_module.forward
            self.org_module.forward = self.forward
            del self.org_module

    def forward(self, x):
        if self.enable:
            return (
                self.org_forward(x)
                + self.lora_up(self.lora_down(x)) * self.multiplier * self.scale
            )
        return self.org_forward(x)


class LoRANetwork(torch.nn.Module):
    UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel", "Attention"]
    TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPMLP"]
    LORA_PREFIX_UNET = "lora_unet"
    LORA_PREFIX_TEXT_ENCODER = "lora_te"

    def __init__(self, text_encoder, unet, multiplier=1.0, lora_dim=4, alpha=1) -> None:
        super().__init__()
        self.multiplier = multiplier
        self.lora_dim = lora_dim
        self.alpha = alpha

        # create module instances
        def create_modules(prefix, root_module: torch.nn.Module, target_replace_modules):
            loras = []
            for name, module in root_module.named_modules():
                if module.__class__.__name__ in target_replace_modules:
                    for child_name, child_module in module.named_modules():
                        if child_module.__class__.__name__ == "Linear" or (child_module.__class__.__name__ == "Conv2d" and child_module.kernel_size == (1, 1)):
                            lora_name = prefix + "." + name + "." + child_name
                            lora_name = lora_name.replace(".", "_")
                            lora = LoRAModule(lora_name, child_module, self.multiplier, self.lora_dim, self.alpha)
                            loras.append(lora)
            return loras

        if isinstance(text_encoder, list):
            self.text_encoder_loras = text_encoder
        else:
            self.text_encoder_loras = create_modules(LoRANetwork.LORA_PREFIX_TEXT_ENCODER, text_encoder, LoRANetwork.TEXT_ENCODER_TARGET_REPLACE_MODULE)
        print(f"Create LoRA for Text Encoder: {len(self.text_encoder_loras)} modules.")

        self.unet_loras = create_modules(LoRANetwork.LORA_PREFIX_UNET, unet, LoRANetwork.UNET_TARGET_REPLACE_MODULE)
        print(f"Create LoRA for U-Net: {len(self.unet_loras)} modules.")

        self.weights_sd = None

        # assertion
        names = set()
        for lora in self.text_encoder_loras + self.unet_loras:
            assert (lora.lora_name not in names), f"duplicated lora name: {lora.lora_name}"
            names.add(lora.lora_name)

            lora.apply()
            self.add_module(lora.lora_name, lora)

    def reset(self):
        for lora in self.text_encoder_loras + self.unet_loras:
            lora.enable = False

    def load(self, file, scale):

        weights = None
        if os.path.splitext(file)[1] == ".safetensors":
            weights = load_file(file)
        else:
            weights = torch.load(file, map_location="cpu")

        if not weights:
            return

        network_alpha = None
        network_dim = None
        for key, value in weights.items():
            if network_alpha is None and "alpha" in key:
                network_alpha = value
            if network_dim is None and "lora_down" in key and len(value.size()) == 2:
                network_dim = value.size()[0]

        if network_alpha is None:
            network_alpha = network_dim

        weights_has_text_encoder = weights_has_unet = False
        weights_to_modify = []

        for key in weights.keys():
            if key.startswith(LoRANetwork.LORA_PREFIX_TEXT_ENCODER):
                weights_has_text_encoder = True

            if key.startswith(LoRANetwork.LORA_PREFIX_UNET):
                weights_has_unet = True

        if weights_has_text_encoder:
            weights_to_modify += self.text_encoder_loras

        if weights_has_unet:
            weights_to_modify += self.unet_loras

        for lora in self.text_encoder_loras + self.unet_loras:
            lora.resize(network_dim, network_alpha, scale)
            if lora in weights_to_modify:
                lora.enable = True

        info = self.load_state_dict(weights, False)
        if len(info.unexpected_keys) > 0:
            print(f"Weights are loaded. Unexpected keys={info.unexpected_keys}")
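The core of `LoRAModule` is the patched forward pass, y = W x (+ b) + up(down(x)) * multiplier * (alpha / rank), with `lora_up` zero-initialized so the patch is a no-op until trained weights are loaded. A small shape check, assuming the classes above are in scope:

import torch

base = torch.nn.Linear(16, 8)
lora = LoRAModule('demo', base, multiplier=1.0, lora_dim=4, alpha=4)
lora.apply()        # base.forward now routes through lora.forward
lora.enable = True

x = torch.randn(2, 16)
y = base(x)         # W x + b + up(down(x)) * multiplier * (alpha / rank)
print(y.shape)      # torch.Size([2, 8])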
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/np.py
DELETED
@@ -1,171 +0,0 @@
import numpy as np
import copy

TINY = np.exp(-100)
concat = np.concatenate

def is_2D(m):
    '''
    judge if a matrix is 2-D or not
    '''
    return len(np.shape(m)) == 2

def norm1(v):
    return np.sum(np.abs(v))

def norm2(v):
    return np.sqrt(np.sum(v ** 2))

def norm2_squared(v):
    return np.sum(v ** 2)


def cos_dist(v1, v2):
    length1 = norm2(v1)
    length2 = norm2(v2)
    return np.dot(v1, v2) / (length1 * length2)

def eu_dist(v1, v2):
    v = v1 - v2
    return norm2(v)

def chi_squared_dist(f1, f2):
    dist = 0
    for ff1, ff2 in zip(f1, f2):
        if ff1 + ff2 == 0:  # color feature values are supposed to be non-negative; if this happens, both numerator and denominator are 0
            continue
        dist += (ff1 - ff2) ** 2 * 1.0 / (ff1 + ff2)
    return np.sqrt(dist)

def flatten(arr, ndim=1):
    """
    flatten a multi-dimensional array to a certain degree.
    ndim: the number of dimensions after flatten
    """
    arr = np.asarray(arr)
    dims = len(arr.shape)
    shape = [np.prod(arr.shape[0: dims + 1 - ndim])]
    shape.extend(arr.shape[dims + 1 - ndim: dims])
    return np.reshape(arr, shape)

def arcsin(sins, xs=None):
    """
    cal arcsin.
    xs: if this parameter is provided, the returned arcsins will be within [0, 2*pi);
    otherwise the default [-pi/2, pi/2]
    """
    arcs = np.arcsin(sins)
    if xs is not None:
        xs = np.asarray(xs)
        sins = np.asarray(sins)
        # if x > 0, then the corresponding mask value is -1. The resulting angle is unchanged: v = 0 - (-v) = v. Else, v = pi - v
        add_pi = xs < 0
        pi_mask = add_pi * np.pi
        # 0 --> 1, 1 --> -1
        arc_mask = 2 * add_pi - 1
        arcs = pi_mask - arcs * arc_mask

        # if x >= 0 and sin < 0, v = 2*pi + v
        add_2_pi = (xs >= 0) * (sins < 0)
        pi_mask = add_2_pi * 2 * np.pi
        arcs = pi_mask + arcs
    return arcs

def sin(ys=None, lengths=None, xs=None, angles=None):
    """
    calculate sin with multiple kinds of parameters
    """
    if not angles is None:
        return np.sin(angles)

    if ys is None:
        raise ValueError('ys must be provided when "angles" is None ')

    if lengths is None:
        if xs is None:
            raise ValueError('xs must be provided when "lengths" is None ')
        lengths = np.sqrt(xs ** 2 + ys ** 2)

    if not np.iterable(lengths):
        sins = ys / lengths if lengths > 0 else 0
    else:
        lengths = np.asarray(lengths)
        shape = lengths.shape
        ys = flatten(ys)
        lengths = flatten(lengths)
        sins = [y / length if length > 0 else 0 for (y, length) in zip(ys, lengths)]
        sins = np.reshape(sins, shape)
    return sins

def sum_all(m):
    """
    sum up all the elements in a multi-dimension array
    """
    return np.sum(m)


def clone(obj, deep=False):
    if not deep:
        return copy.copy(obj)
    return copy.deepcopy(obj)

def empty_list(length, etype):
    empty_list = [None] * length
    for i in range(length):
        if etype == list:
            empty_list[i] = []
        else:
            raise NotImplementedError

    return empty_list

def shuffle(arr):
    import random
    random.shuffle(arr)

def is_empty(a):
    '''
    tell whether an array is empty.
    If a is multidimensional, it is empty when it contains no entry in the last dimension.
    '''
    if a is None:
        return True

    shape = np.shape(a)
    if np.prod(shape) == 0:
        return True

    return False

def angle_with_x(x, y):
    """
    return the arctan y/x, in range [-pi, pi]
    """
    return np.arctan2(y, x)

def has_infty(x):
    test = x == np.infty
    return np.sum(test) > 0

def has_nan(x):
    x = np.asarray(x)
    test = x != x
    return np.sum(test) > 0

def has_nan_or_infty(x):
    if has_nan(x):
        return True

    if has_infty(x):
        return True

    return False


def iterable(x):
    return np.iterable(x)

def smooth(arr):
    result = [0] * len(arr)
    s = 0
    for idx, n in enumerate(arr):
        s += n
        result[idx] = s * 1.0 / (idx + 1)
    return result
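The quadrant bookkeeping in `arcsin` maps a sine value plus the sign of x into [0, 2*pi). A quick self-check, assuming the functions above are importable: for a unit-circle point at angle t, `arcsin(sin(t), xs=cos(t))` should recover t:

import numpy as np

for t in [0.5, 2.0, 3.5, 5.5]:   # one angle per quadrant
    recovered = float(arcsin(np.sin(t), xs=np.cos(t)))
    assert abs(recovered % (2 * np.pi) - t) < 1e-9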
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/testclient.py
DELETED
@@ -1 +0,0 @@
from starlette.testclient import TestClient as TestClient  # noqa
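The module is a one-line re-export, so FastAPI users get Starlette's test client under `fastapi.testclient`. Standard usage:

from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get('/ping')
def ping():
    return {'pong': True}

client = TestClient(app)           # drives the app in-process, no server
response = client.get('/ping')
assert response.status_code == 200
assert response.json() == {'pong': True}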
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/cached.py
DELETED
@@ -1,867 +0,0 @@
from __future__ import annotations

import contextlib
import hashlib
import inspect
import logging
import os
import pickle
import tempfile
import time
from shutil import rmtree
from typing import ClassVar

from fsspec import AbstractFileSystem, filesystem
from fsspec.callbacks import _DEFAULT_CALLBACK
from fsspec.compression import compr
from fsspec.core import BaseCache, MMapCache
from fsspec.exceptions import BlocksizeMismatchError
from fsspec.spec import AbstractBufferedFile
from fsspec.utils import infer_compression

logger = logging.getLogger("fsspec.cached")


class CachingFileSystem(AbstractFileSystem):
    """Locally caching filesystem, layer over any other FS

    This class implements chunk-wise local storage of remote files, for quick
    access after the initial download. The files are stored in a given
    directory with hashes of URLs for the filenames. If no directory is given,
    a temporary one is used, which should be cleaned up by the OS after the
    process ends. The files themselves are sparse (as implemented in
    :class:`~fsspec.caching.MMapCache`), so only the data which is accessed
    takes up space.

    Restrictions:

    - the block-size must be the same for each access of a given file, unless
      all blocks of the file have already been read
    - caching can only be applied to file-systems which produce files
      derived from fsspec.spec.AbstractBufferedFile ; LocalFileSystem is also
      allowed, for testing
    """

    protocol: ClassVar[str | tuple[str, ...]] = ("blockcache", "cached")

    def __init__(
        self,
        target_protocol=None,
        cache_storage="TMP",
        cache_check=10,
        check_files=False,
        expiry_time=604800,
        target_options=None,
        fs=None,
        same_names=False,
        compression=None,
        **kwargs,
    ):
        """

        Parameters
        ----------
        target_protocol: str (optional)
            Target filesystem protocol. Provide either this or ``fs``.
        cache_storage: str or list(str)
            Location to store files. If "TMP", this is a temporary directory,
            and will be cleaned up by the OS when this process ends (or later).
            If a list, each location will be tried in the order given, but
            only the last will be considered writable.
        cache_check: int
            Number of seconds between reload of cache metadata
        check_files: bool
            Whether to explicitly see if the UID of the remote file matches
            the stored one before using. Warning: some file systems such as
            HTTP cannot reliably give a unique hash of the contents of some
            path, so be sure to set this option to False.
        expiry_time: int
            The time in seconds after which a local copy is considered useless.
            Set to falsy to prevent expiry. The default is equivalent to one
            week.
        target_options: dict or None
            Passed to the instantiation of the FS, if fs is None.
        fs: filesystem instance
            The target filesystem to run against. Provide this or ``protocol``.
        same_names: bool (optional)
            By default, target URLs are hashed, so that files from different backends
            with the same basename do not conflict. If this is true, the original
            basename is used.
        compression: str (optional)
            To decompress on download. Can be 'infer' (guess from the URL name),
            one of the entries in ``fsspec.compression.compr``, or None for no
            decompression.
        """
        super().__init__(**kwargs)
        if fs is None and target_protocol is None:
            raise ValueError(
                "Please provide filesystem instance(fs) or target_protocol"
            )
        if not (fs is None) ^ (target_protocol is None):
            raise ValueError(
                "Both filesystems (fs) and target_protocol may not be both given."
            )
        if cache_storage == "TMP":
            storage = [tempfile.mkdtemp()]
        else:
            if isinstance(cache_storage, str):
                storage = [cache_storage]
            else:
                storage = cache_storage
        os.makedirs(storage[-1], exist_ok=True)
        self.storage = storage
        self.kwargs = target_options or {}
        self.cache_check = cache_check
        self.check_files = check_files
        self.expiry = expiry_time
        self.compression = compression
        # TODO: same_names should allow for variable prefix, not only
        # to keep the basename
        self.same_names = same_names
        self.target_protocol = (
            target_protocol
            if isinstance(target_protocol, str)
            else (fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0])
        )
        self.load_cache()
        self.fs = fs if fs is not None else filesystem(target_protocol, **self.kwargs)

        def _strip_protocol(path):
            # acts as a method, since each instance has a different target
            return self.fs._strip_protocol(type(self)._strip_protocol(path))

        self._strip_protocol = _strip_protocol

    def _mkcache(self):
        os.makedirs(self.storage[-1], exist_ok=True)

    def load_cache(self):
        """Read set of stored blocks from file"""
        cached_files = []
        for storage in self.storage:
            fn = os.path.join(storage, "cache")
            if os.path.exists(fn):
                with open(fn, "rb") as f:
                    # TODO: consolidate blocks here
                    loaded_cached_files = pickle.load(f)
                    for c in loaded_cached_files.values():
                        if isinstance(c["blocks"], list):
                            c["blocks"] = set(c["blocks"])
                    cached_files.append(loaded_cached_files)
            else:
                cached_files.append({})
        self._mkcache()
        self.cached_files = cached_files or [{}]
        self.last_cache = time.time()

    def save_cache(self):
        """Save set of stored blocks from file"""
        fn = os.path.join(self.storage[-1], "cache")
        # TODO: a file lock could be used to ensure file does not change
        # between re-read and write; but occasional duplicated reads ok.
        cache = self.cached_files[-1]
        if os.path.exists(fn):
            with open(fn, "rb") as f:
                cached_files = pickle.load(f)
            for k, c in cached_files.items():
                if k in cache:
                    if c["blocks"] is True or cache[k]["blocks"] is True:
                        c["blocks"] = True
                    else:
                        # self.cached_files[*][*]["blocks"] must continue to
                        # point to the same set object so that updates
                        # performed by MMapCache are propagated back to
                        # self.cached_files.
                        blocks = cache[k]["blocks"]
                        blocks.update(c["blocks"])
                        c["blocks"] = blocks
                    c["time"] = max(c["time"], cache[k]["time"])
                    c["uid"] = cache[k]["uid"]

            # Files can be added to cache after it was written once
            for k, c in cache.items():
                if k not in cached_files:
                    cached_files[k] = c
        else:
            cached_files = cache
        cache = {k: v.copy() for k, v in cached_files.items()}
        for c in cache.values():
            if isinstance(c["blocks"], set):
                c["blocks"] = list(c["blocks"])
        self._mkcache()
        with atomic_write(fn) as f:
            pickle.dump(cache, f)
        self.cached_files[-1] = cached_files
        self.last_cache = time.time()

    def _check_cache(self):
        """Reload caches if time elapsed or any disappeared"""
        self._mkcache()
        if not self.cache_check:
            # explicitly told not to bother checking
            return
        timecond = time.time() - self.last_cache > self.cache_check
        existcond = all(os.path.exists(storage) for storage in self.storage)
        if timecond or not existcond:
            self.load_cache()

    def _check_file(self, path):
        """Is path in cache and still valid"""
        path = self._strip_protocol(path)

        self._check_cache()
        for storage, cache in zip(self.storage, self.cached_files):
            if path not in cache:
                continue
            detail = cache[path].copy()
            if self.check_files:
                if detail["uid"] != self.fs.ukey(path):
                    continue
            if self.expiry:
                if time.time() - detail["time"] > self.expiry:
                    continue
            fn = os.path.join(storage, detail["fn"])
            if os.path.exists(fn):
                return detail, fn
        return False

    def clear_cache(self):
        """Remove all files and metadata from the cache

        In the case of multiple cache locations, this clears only the last one,
        which is assumed to be the read/write one.
        """
        rmtree(self.storage[-1])
        self.load_cache()

    def clear_expired_cache(self, expiry_time=None):
        """Remove all expired files and metadata from the cache

        In the case of multiple cache locations, this clears only the last one,
        which is assumed to be the read/write one.

        Parameters
        ----------
        expiry_time: int
            The time in seconds after which a local copy is considered useless.
            If not defined the default is equivalent to the attribute from the
            file caching instantiation.
        """

        if not expiry_time:
            expiry_time = self.expiry

        self._check_cache()

        for path, detail in self.cached_files[-1].copy().items():
            if time.time() - detail["time"] > expiry_time:
                if self.same_names:
                    basename = os.path.basename(detail["original"])
                    fn = os.path.join(self.storage[-1], basename)
                else:
                    fn = os.path.join(self.storage[-1], detail["fn"])
                if os.path.exists(fn):
                    os.remove(fn)
                self.cached_files[-1].pop(path)

        if self.cached_files[-1]:
            cache_path = os.path.join(self.storage[-1], "cache")
            with atomic_write(cache_path) as fc:
                pickle.dump(self.cached_files[-1], fc)
        else:
            rmtree(self.storage[-1])
            self.load_cache()

    def pop_from_cache(self, path):
        """Remove cached version of given file

        Deletes local copy of the given (remote) path. If it is found in a cache
        location which is not the last, it is assumed to be read-only, and
        raises PermissionError
        """
        path = self._strip_protocol(path)
        details = self._check_file(path)
        if not details:
            return
        _, fn = details
        if fn.startswith(self.storage[-1]):
            # is in the writable cache
            os.remove(fn)
            self.cached_files[-1].pop(path)
            self.save_cache()
        else:
            raise PermissionError(
                "Can only delete cached file in last, writable cache location"
            )

    def _open(
        self,
        path,
        mode="rb",
        block_size=None,
        autocommit=True,
        cache_options=None,
        **kwargs,
    ):
        """Wrap the target _open

        If the whole file exists in the cache, just open it locally and
        return that.

        Otherwise, open the file on the target FS, and make it have a mmap
        cache pointing to the location which we determine, in our cache.
        The ``blocks`` instance is shared, so as the mmap cache instance
        updates, so does the entry in our ``cached_files`` attribute.
        We monkey-patch this file, so that when it closes, we call
        ``close_and_update`` to save the state of the blocks.
        """
        path = self._strip_protocol(path)

        path = self.fs._strip_protocol(path)
        if "r" not in mode:
            return self.fs._open(
                path,
                mode=mode,
                block_size=block_size,
                autocommit=autocommit,
                cache_options=cache_options,
                **kwargs,
            )
        detail = self._check_file(path)
        if detail:
            # file is in cache
            detail, fn = detail
            hash, blocks = detail["fn"], detail["blocks"]
            if blocks is True:
                # stored file is complete
                logger.debug("Opening local copy of %s" % path)
                return open(fn, mode)
            # TODO: action where partial file exists in read-only cache
            logger.debug("Opening partially cached copy of %s" % path)
        else:
            hash = self.hash_name(path, self.same_names)
            fn = os.path.join(self.storage[-1], hash)
            blocks = set()
            detail = {
                "original": path,
                "fn": hash,
                "blocks": blocks,
                "time": time.time(),
                "uid": self.fs.ukey(path),
            }
            self.cached_files[-1][path] = detail
            logger.debug("Creating local sparse file for %s" % path)

        # call target filesystems open
        self._mkcache()
        f = self.fs._open(
            path,
            mode=mode,
            block_size=block_size,
            autocommit=autocommit,
            cache_options=cache_options,
            cache_type="none",
            **kwargs,
        )
        if self.compression:
            comp = (
                infer_compression(path)
                if self.compression == "infer"
                else self.compression
            )
            f = compr[comp](f, mode="rb")
        if "blocksize" in detail:
            if detail["blocksize"] != f.blocksize:
                raise BlocksizeMismatchError(
                    "Cached file must be reopened with same block"
                    "size as original (old: %i, new %i)"
                    "" % (detail["blocksize"], f.blocksize)
                )
        else:
            detail["blocksize"] = f.blocksize
        f.cache = MMapCache(f.blocksize, f._fetch_range, f.size, fn, blocks)
        close = f.close
        f.close = lambda: self.close_and_update(f, close)
        self.save_cache()
        return f

    def hash_name(self, path, same_name):
        return hash_name(path, same_name=same_name)

    def close_and_update(self, f, close):
        """Called when a file is closing, so store the set of blocks"""
        if f.closed:
            return
        path = self._strip_protocol(f.path)

        c = self.cached_files[-1][path]
        if c["blocks"] is not True and len(c["blocks"]) * f.blocksize >= f.size:
            c["blocks"] = True
        try:
            logger.debug("going to save")
            self.save_cache()
            logger.debug("saved")
        except OSError:
            logger.debug("Cache saving failed while closing file")
        except NameError:
            logger.debug("Cache save failed due to interpreter shutdown")
        close()
        f.closed = True

    def __getattribute__(self, item):
        if item in [
            "load_cache",
            "_open",
            "save_cache",
            "close_and_update",
            "__init__",
            "__getattribute__",
            "__reduce__",
            "_make_local_details",
            "open",
            "cat",
            "cat_file",
            "get",
            "read_block",
            "tail",
            "head",
            "_check_file",
            "_check_cache",
            "_mkcache",
            "clear_cache",
            "clear_expired_cache",
            "pop_from_cache",
            "_mkcache",
            "local_file",
            "_paths_from_path",
            "get_mapper",
            "open_many",
            "commit_many",
            "hash_name",
            "__hash__",
            "__eq__",
            "to_json",
        ]:
            # all the methods defined in this class. Note `open` here, since
            # it calls `_open`, but is actually in superclass
            return lambda *args, **kw: getattr(type(self), item).__get__(self)(
                *args, **kw
            )
        if item in ["__reduce_ex__"]:
            raise AttributeError
        if item in ["_cache"]:
            # class attributes
            return getattr(type(self), item)
        if item == "__class__":
            return type(self)
        d = object.__getattribute__(self, "__dict__")
        fs = d.get("fs", None)  # fs is not immediately defined
        if item in d:
            return d[item]
        elif fs is not None:
            if item in fs.__dict__:
                # attribute of instance
                return fs.__dict__[item]
            # attribute belonging to the target filesystem
            cls = type(fs)
            m = getattr(cls, item)
            if (inspect.isfunction(m) or inspect.isdatadescriptor(m)) and (
                not hasattr(m, "__self__") or m.__self__ is None
            ):
                # instance method
                return m.__get__(fs, cls)
            return m  # class method or attribute
        else:
            # attributes of the superclass, while target is being set up
            return super().__getattribute__(item)

    def __eq__(self, other):
        """Test for equality."""
        if self is other:
            return True
        if not isinstance(other, type(self)):
            return False
        return (
            self.storage == other.storage
            and self.kwargs == other.kwargs
            and self.cache_check == other.cache_check
            and self.check_files == other.check_files
            and self.expiry == other.expiry
            and self.compression == other.compression
            and self.same_names == other.same_names
            and self.target_protocol == other.target_protocol
        )

    def __hash__(self):
        """Calculate hash."""
        return (
            hash(tuple(self.storage))
            ^ hash(str(self.kwargs))
            ^ hash(self.cache_check)
            ^ hash(self.check_files)
            ^ hash(self.expiry)
            ^ hash(self.compression)
            ^ hash(self.same_names)
            ^ hash(self.target_protocol)
        )

    def to_json(self):
        """Calculate JSON representation.

        Not implemented yet for CachingFileSystem.
        """
        raise NotImplementedError(
            "CachingFileSystem JSON representation not implemented"
        )

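`CachingFileSystem` is normally reached through its registered protocol names rather than instantiated directly. A sketch under assumed values (the URL and cache directory below are placeholders):

import fsspec

fs = fsspec.filesystem(
    'blockcache',                        # or 'cached'; see `protocol` above
    target_protocol='https',
    cache_storage='/tmp/fsspec-blocks',  # hypothetical local directory
)
with fs.open('https://example.com/large-file.bin') as f:
    header = f.read(1024)  # only the touched blocks are fetched and stored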
class WholeFileCacheFileSystem(CachingFileSystem):
    """Caches whole remote files on first access

    This class is intended as a layer over any other file system, and
    will make a local copy of each file accessed, so that all subsequent
    reads are local. This is similar to ``CachingFileSystem``, but without
    the block-wise functionality and so can work even when sparse files
    are not allowed. See its docstring for definition of the init
    arguments.

    The class still needs access to the remote store for listing files,
    and may refresh cached files.
    """

    protocol = "filecache"
    local_file = True

    def open_many(self, open_files):
        paths = [of.path for of in open_files]
        if "r" in open_files.mode:
            self._mkcache()
        else:
            return [
                LocalTempFile(self.fs, path, mode=open_files.mode) for path in paths
            ]

        if self.compression:
            raise NotImplementedError
        details = [self._check_file(sp) for sp in paths]
        downpath = [p for p, d in zip(paths, details) if not d]
        downfn0 = [
            os.path.join(self.storage[-1], self.hash_name(p, self.same_names))
            for p, d in zip(paths, details)
        ]  # keep these path names for opening later
        downfn = [fn for fn, d in zip(downfn0, details) if not d]
        if downpath:
            # skip if all files are already cached and up to date
            self.fs.get(downpath, downfn)

            # update metadata - only happens when downloads are successful
            newdetail = [
                {
                    "original": path,
                    "fn": self.hash_name(path, self.same_names),
                    "blocks": True,
                    "time": time.time(),
                    "uid": self.fs.ukey(path),
                }
                for path in downpath
            ]
            self.cached_files[-1].update(
                {path: detail for path, detail in zip(downpath, newdetail)}
            )
            self.save_cache()

        def firstpart(fn):
            # helper to adapt both whole-file and simple-cache
            return fn[1] if isinstance(fn, tuple) else fn

        return [
            open(firstpart(fn0) if fn0 else fn1, mode=open_files.mode)
            for fn0, fn1 in zip(details, downfn0)
        ]

    def commit_many(self, open_files):
        self.fs.put([f.fn for f in open_files], [f.path for f in open_files])
        [f.close() for f in open_files]
        for f in open_files:
            # in case autocommit is off, and so close did not already delete
            try:
                os.remove(f.name)
            except FileNotFoundError:
                pass

    def _make_local_details(self, path):
        hash = self.hash_name(path, self.same_names)
        fn = os.path.join(self.storage[-1], hash)
        detail = {
            "original": path,
            "fn": hash,
            "blocks": True,
            "time": time.time(),
            "uid": self.fs.ukey(path),
        }
        self.cached_files[-1][path] = detail
        logger.debug("Copying %s to local cache" % path)
        return fn

    def cat(
        self,
        path,
        recursive=False,
        on_error="raise",
        callback=_DEFAULT_CALLBACK,
        **kwargs,
    ):
        paths = self.expand_path(
            path, recursive=recursive, maxdepth=kwargs.get("maxdepth", None)
        )
        getpaths = []
        storepaths = []
        fns = []
        out = {}
        for p in paths.copy():
            try:
                detail = self._check_file(p)
                if not detail:
                    fn = self._make_local_details(p)
                    getpaths.append(p)
                    storepaths.append(fn)
                else:
                    detail, fn = detail if isinstance(detail, tuple) else (None, detail)
                fns.append(fn)
            except Exception as e:
                if on_error == "raise":
                    raise
                if on_error == "return":
                    out[p] = e
                paths.remove(p)

        if getpaths:
            self.fs.get(getpaths, storepaths)
            self.save_cache()

        callback.set_size(len(paths))
        for p, fn in zip(paths, fns):
            with open(fn, "rb") as f:
                out[p] = f.read()
            callback.relative_update(1)
        if isinstance(path, str) and len(paths) == 1 and recursive is False:
            out = out[paths[0]]
        return out

    def _open(self, path, mode="rb", **kwargs):
        path = self._strip_protocol(path)
        if "r" not in mode:
            return LocalTempFile(self, path, mode=mode)
        detail = self._check_file(path)
        if detail:
            detail, fn = detail
            _, blocks = detail["fn"], detail["blocks"]
            if blocks is True:
                logger.debug("Opening local copy of %s" % path)

                # In order to support downstream filesystems to be able to
                # infer the compression from the original filename, like
                # the `TarFileSystem`, let's extend the `io.BufferedReader`
                # fileobject protocol by adding a dedicated attribute
                # `original`.
                f = open(fn, mode)
                f.original = detail.get("original")
                return f
            else:
                raise ValueError(
                    "Attempt to open partially cached file %s"
                    "as a wholly cached file" % path
                )
        else:
            fn = self._make_local_details(path)
        kwargs["mode"] = mode

        # call target filesystems open
        self._mkcache()
        if self.compression:
            with self.fs._open(path, **kwargs) as f, open(fn, "wb") as f2:
                if isinstance(f, AbstractBufferedFile):
                    # want no type of caching if just downloading whole thing
                    f.cache = BaseCache(0, f.cache.fetcher, f.size)
                comp = (
                    infer_compression(path)
                    if self.compression == "infer"
                    else self.compression
                )
                f = compr[comp](f, mode="rb")
                data = True
                while data:
                    block = getattr(f, "blocksize", 5 * 2**20)
                    data = f.read(block)
                    f2.write(data)
        else:
            self.fs.get(path, fn)
        self.save_cache()
        return self._open(path, mode)


class SimpleCacheFileSystem(WholeFileCacheFileSystem):
    """Caches whole remote files on first access

    This class is intended as a layer over any other file system, and
    will make a local copy of each file accessed, so that all subsequent
    reads are local. This implementation only copies whole files, and
    does not keep any metadata about the download time or file details.
    It is therefore safer to use in multi-threaded/concurrent situations.

    This is the only one of the caching filesystems that supports write: you will
    be given a real local open file, and upon close and commit, it will be
    uploaded to the target filesystem; the writability of the target URL is
    not checked until that time.

    """

    protocol = "simplecache"
    local_file = True

    def __init__(self, **kwargs):
        kw = kwargs.copy()
        for key in ["cache_check", "expiry_time", "check_files"]:
            kw[key] = False
        super().__init__(**kw)
        for storage in self.storage:
            if not os.path.exists(storage):
                os.makedirs(storage, exist_ok=True)
        self.cached_files = [{}]

    def _check_file(self, path):
        self._check_cache()
        sha = self.hash_name(path, self.same_names)
        for storage in self.storage:
            fn = os.path.join(storage, sha)
            if os.path.exists(fn):
                return fn

    def save_cache(self):
        pass

    def load_cache(self):
        pass

    def _open(self, path, mode="rb", **kwargs):
        path = self._strip_protocol(path)

        if "r" not in mode:
            return LocalTempFile(self, path, mode=mode)
        fn = self._check_file(path)
        if fn:
            return open(fn, mode)

        sha = self.hash_name(path, self.same_names)
        fn = os.path.join(self.storage[-1], sha)
        logger.debug("Copying %s to local cache" % path)
        kwargs["mode"] = mode

        self._mkcache()
        if self.compression:
            with self.fs._open(path, **kwargs) as f, open(fn, "wb") as f2:
                if isinstance(f, AbstractBufferedFile):
                    # want no type of caching if just downloading whole thing
                    f.cache = BaseCache(0, f.cache.fetcher, f.size)
                comp = (
                    infer_compression(path)
                    if self.compression == "infer"
                    else self.compression
                )
                f = compr[comp](f, mode="rb")
                data = True
                while data:
                    block = getattr(f, "blocksize", 5 * 2**20)
                    data = f.read(block)
                    f2.write(data)
        else:
            self.fs.get(path, fn)
        return self._open(path, mode)


class LocalTempFile:
    """A temporary local file, which will be uploaded on commit"""

    def __init__(self, fs, path, fn=None, mode="wb", autocommit=True, seek=0):
        if fn:
            self.fn = fn
            self.fh = open(fn, mode)
        else:
            fd, self.fn = tempfile.mkstemp()
            self.fh = open(fd, mode)
        self.mode = mode
        if seek:
            self.fh.seek(seek)
        self.path = path
        self.fs = fs
        self.closed = False
        self.autocommit = autocommit

    def __reduce__(self):
        # always open in rb+ to allow continuing writing at a location
        return (
            LocalTempFile,
            (self.fs, self.path, self.fn, "rb+", self.autocommit, self.tell()),
        )

    def __enter__(self):
        return self.fh

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def close(self):
        if self.closed:
            return
        self.fh.close()
        self.closed = True
        if self.autocommit:
            self.commit()

    def discard(self):
        self.fh.close()
        os.remove(self.fn)

    def commit(self):
        self.fs.put(self.fn, self.path)
        try:
            os.remove(self.fn)
        except (PermissionError, FileNotFoundError):
            # file path may be held by new version of the file on windows
            pass

    @property
    def name(self):
        return self.fn

    def __getattr__(self, item):
        return getattr(self.fh, item)


def hash_name(path, same_name):
    if same_name:
        hash = os.path.basename(path)
    else:
        hash = hashlib.sha256(path.encode()).hexdigest()
    return hash


@contextlib.contextmanager
def atomic_write(path, mode="wb"):
    """
    A context manager that opens a temporary file next to `path` and, on exit,
    replaces `path` with the temporary file, thereby updating `path`
    atomically.
    """
    fd, fn = tempfile.mkstemp(
        dir=os.path.dirname(path), prefix=os.path.basename(path) + "-"
    )
    try:
        with open(fd, mode) as fp:
            yield fp
    except BaseException:
        with contextlib.suppress(FileNotFoundError):
            os.unlink(fn)
        raise
    else:
        os.replace(fn, path)
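Of the three layers, `simplecache` keeps no metadata and stages writes through `LocalTempFile`, while `atomic_write` guarantees the pickled cache index is replaced all-or-nothing. A sketch using the chained-URL form, with a placeholder URL:

import fsspec

# Reads route through SimpleCacheFileSystem; the whole file is copied
# locally on first access and served from disk afterwards.
with fsspec.open('simplecache::https://example.com/data.csv', 'rb') as f:
    data = f.read()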
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/__vite-browser-external-b25bb000.js
DELETED
@@ -1,2 +0,0 @@
const e={};export{e as default};
//# sourceMappingURL=__vite-browser-external-b25bb000.js.map