Commit c477f72 · Parent: 5a41252
Update parquet files (step 40 of 397)
This view is limited to 50 files because the commit contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Videostudio 12 Activation Code Keygen How to Get the Best Video Editing Experience with Corel Videostudio 12.md +0 -86
- spaces/1gistliPinn/ChatGPT4/Examples/ACDSee Pro 2.5.332 Serial 64 Bit.md +0 -52
- spaces/1gistliPinn/ChatGPT4/Examples/Basic Statistics And Probability By Shahid Jamal Pdf Downloadl REPACK.md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/Bioexcess Plus Crack The Best Way to Enhance Your Windows Experience.md +0 -22
- spaces/1gistliPinn/ChatGPT4/Examples/Bu Ali Sina Books In Urdu Free Download TOP.md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/Discografia Pholhas Torrent 108 !FULL!.md +0 -6
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chapters Interactive Stories Mod Apk 6.3.4 The Ultimate Guide to Unlocking All Features.md +0 -123
- spaces/1phancelerku/anime-remove-background/Angry Birds 2 Mod Apk O jogo de estilingue mais viciante com dinheiro infinito e desafios ilimitados.md +0 -127
- spaces/1phancelerku/anime-remove-background/Beach Buggy Racing 2 Mod APK How to Download Aplikasi and Unlock All Features.md +0 -164
- spaces/232labs/VToonify/vtoonify/model/stylegan/__init__.py +0 -0
- spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_clapemb_32khz.py +0 -32
- spaces/ARTeLab/ARTeLab-SummIT/app.py +0 -157
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1d50.py +0 -17
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32_in1k.py +0 -4
- spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/data/audio_dataset.py +0 -525
- spaces/AchyuthGamer/OpenGPT-Chat-UI/README.md +0 -12
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/methods/SVPaletteCanvas.js +0 -98
- spaces/AlStable/Duke/app.py +0 -137
- spaces/Amrrs/portfolio-github/style.css +0 -190
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/write_own_pipeline.md +0 -290
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vq_model.py +0 -167
- spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py +0 -2
- spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_40k_cityscapes.py +0 -2
- spaces/Anonymous-123/ImageNet-Editing/README.md +0 -13
- spaces/Anthony7906/MengHuiMXD_GPT/run_Windows.bat +0 -5
- spaces/AnthonyTruchetPoC/persistent-docker/tests/athai/test_hello.py +0 -14
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/launch.py +0 -36
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/compatibility.md +0 -83
- spaces/CVPR/LIVE/pybind11/docs/benchmark.py +0 -89
- spaces/CVPR/LIVE/pybind11/tests/test_factory_constructors.cpp +0 -342
- spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/reverse.h +0 -22
- spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/transform_scan.h +0 -111
- spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/logical.h +0 -63
- spaces/CVPR/transfiner/app.py +0 -84
- spaces/ChenyangSi/FreeU/style.css +0 -3
- spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/help.js +0 -112
- spaces/CikeyQI/meme-api/meme_generator/version.py +0 -1
- spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/feature.py +0 -14
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/__init__.py +0 -253
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7f39cecc.js +0 -2
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/builder_app.py +0 -1003
- spaces/Dagfinn1962/stablediffusion-members/app.py +0 -87
- spaces/Docfile/open_llm_leaderboard/src/load_from_hub.py +0 -151
- spaces/DragGan/DragGan-Inversion/torch_utils/ops/conv2d_resample.py +0 -158
- spaces/Duskfallcrew/textual-inversion-training/textual_inversion.py +0 -612
- spaces/EMS-TU-Ilmenau/deepest-demo/helper.py +0 -59
- spaces/EnigmaOfTheWorld/TechnoForge_Automotive/README.md +0 -12
- spaces/EronSamez/RVC_HFmeu/demucs/separate.py +0 -185
- spaces/EvgenyK/Text-To-Image/README.md +0 -13
- spaces/GMFTBY/PandaGPT/config/__init__.py +0 -37
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Videostudio 12 Activation Code Keygen How to Get the Best Video Editing Experience with Corel Videostudio 12.md
DELETED
@@ -1,86 +0,0 @@
-
-<h1>Corel VideoStudio 12 Activation Code Keygen: How to Get It for Free</h1>
-<p>If you are looking for a powerful and easy-to-use video editing software, you may have heard of Corel VideoStudio 12. This software allows you to create stunning videos with professional-quality effects, transitions, titles, music, and more. However, to enjoy all the features and benefits of Corel VideoStudio 12, you need to activate it with a valid serial number and an activation code.</p>
-<h2>corel videostudio 12 activation code keygen</h2><br /><p><b><b>Download</b> ✑ <a href="https://byltly.com/2uKzNA">https://byltly.com/2uKzNA</a></b></p><br /><br />
-<p>Unfortunately, buying a license for Corel VideoStudio 12 can be quite expensive, especially if you are on a tight budget or you only need it for a short-term project. That's why some people may want to get an activation code keygen for Corel VideoStudio 12 for free.</p>
-<p>But what is an activation code keygen and how can you get one for Corel VideoStudio 12? In this article, we will explain everything you need to know about Corel VideoStudio 12 activation code keygen and provide you with three different methods on how to get it for free.</p>
-<h2>What is Corel VideoStudio 12?</h2>
-<p>Corel VideoStudio 12 is a video editing software that was released in 2008 by Corel Corporation. It is the successor of Ulead VideoStudio 11 and the predecessor of Corel VideoStudio Pro X2.</p>
-<p>Corel VideoStudio 12 offers many features and tools that can help you create amazing videos with ease. Some of the features include:</p>
-<ul>
-<li>A user-friendly interface that lets you drag-and-drop clips, effects, transitions, titles, music, and more.</li>
-<li>A timeline that lets you edit your videos with precision and flexibility.</li>
-<li>A library that lets you organize your media files and access them quickly.</li>
-<li>A capture mode that lets you record videos from your webcam, screen, DV camcorder, or analog device.</li>
-<li>A movie wizard that lets you create videos automatically with predefined templates and themes.</li>
-<li>A DVD authoring mode that lets you burn your videos to DVD discs or ISO files with menus and chapters.</li>
-<li>A share mode that lets you export your videos to various formats and devices or upload them online.</li>
-<li>A wide range of effects, transitions, filters, overlays, titles, animations, music tracks, sound effects, and more that can enhance your videos.</li>
-<li>A chroma key feature that lets you replace the background of your videos with any image or video.</li>
-<li>A picture-in-picture feature that lets you overlay multiple videos on one screen.</li>
-<li>A slow motion feature that lets you change the speed of your videos.</li>
-<li>A stabilization feature that lets you reduce camera shake in your videos.</li>
-<li>A split screen feature that lets you show two or more videos side by side.</li>
-<li>A paint feature that lets you draw or write on your videos.</li>
-<li>A subtitle feature that lets you add text or captions to your videos.</li>
-</ul>
-<h2>What is an activation code keygen?</h2>
-<p>An activation code keygen is a software program that can generate serial numbers and activation codes for other software programs. A serial number is a unique identifier that is required to install a software program on your computer. An activation code is a verification code that is required to activate a software program after installation.</p>
-<p>corel videostudio 12 pro serial number generator<br />
-corel videostudio 12 ultimate crack download<br />
-corel videostudio 12 license key free<br />
-corel videostudio 12 product activation code<br />
-corel videostudio 12 keygen only<br />
-corel videostudio 12 full version with crack<br />
-corel videostudio 12 registration code online<br />
-corel videostudio 12 activation patch<br />
-corel videostudio 12 serial key and email<br />
-corel videostudio 12 crack file<br />
-corel videostudio 12 activation code generator online<br />
-corel videostudio 12 offline activation code<br />
-corel videostudio 12 keygen.exe download<br />
-corel videostudio 12 activation code free download<br />
-corel videostudio 12 crack keygen rar<br />
-corel videostudio 12 serial number and activation code<br />
-corel videostudio 12 license key generator online<br />
-corel videostudio 12 crack download for windows 10<br />
-corel videostudio 12 activation code crack<br />
-corel videostudio 12 keygen by kaizer soze<br />
-corel videostudio 12 serial number free download<br />
-corel videostudio 12 activation code online free<br />
-corel videostudio 12 license key crack<br />
-corel videostudio 12 keygen download free<br />
-corel videostudio 12 activation code torrent<br />
-corel videostudio 12 serial number and email address<br />
-corel videostudio 12 license key free download<br />
-corel videostudio 12 crack download full version<br />
-corel videostudio 12 activation code online generator<br />
-corel videostudio 12 keygen by x-force<br />
-corel videostudio 12 serial number generator online<br />
-corel videostudio 12 activation code free online<br />
-corel videostudio 12 license key online free<br />
-corel videostudio 12 crack download for windows 7<br />
-corel videostudio 12 activation code no survey<br />
-corel videostudio 12 serial number and email generator<br />
-corel videostudio 12 license key generator download<br />
-corel videostudio 12 crack download for mac<br />
-corel videostudio 12 activation code reddit<br />
-corel videostudio 12 keygen by zwt<br />
-corel videostudio 12 serial number online free<br />
-corel videostudio 12 activation code youtube<br />
-corel videostudio 12 license key online generator<br />
-corel videostudio 12 crack download for pc<br />
-corel videostudio 12 activation code txt file download</p>
-<p>An activation code keygen works by using algorithms or formulas that can produce valid serial numbers and activation codes based on the name or version of the software program. For example, if you want to activate Corel VideoStudio 12 with an activation code keygen, you need to select Corel VideoStudio 12 as the target software program in the keygen interface. Then, the keygen will generate a serial number and an activation code for Corel VideoStudio 12 that you can use to install and activate it on your computer.</p>
-<h2>Why do you need an activation code keygen for Corel VideoStudio 12?</h2>
-<p>As mentioned earlier, activating Corel VideoStudio 12 with a valid serial number and an activation code is necessary to enjoy all its features and benefits. However, buying a license for Corel VideoStudio 12 can be quite costly. According to some online sources, the original price of Corel VideoStudio 12 was around $100 when it was released in 2008.</p>
-<p>Therefore, some people may want to get an activation code keygen for Corel VideoStudio 12 for free instead of paying for a license. Some of the reasons why they may want to do so are:</p>
-<ul>
-<li>They want to save money by not buying a license.</li>
-<li>They want to avoid malware or viruses that may come with cracked versions or patches of Corel VideoStudio 12.</li>
-<li>They want to access premium features or updates that may not be available in trial versions or unactivated versions of Corel VideoStudio 12.</li>
-</ul>
-<h1>How to Get an Activation Code Keygen for Corel VideoStudio 12</h1>
-<p>If you are one of those people who want to get an activation code keygen for Corel VideoStudio 12 for free</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/ACDSee Pro 2.5.332 Serial 64 Bit.md
DELETED
@@ -1,52 +0,0 @@
-<h2>ACDSee Pro 2.5.332 serial 64 bit</h2><br /><p><b><b>Download File</b> >>> <a href="https://imgfil.com/2uxZjY">https://imgfil.com/2uxZjY</a></b></p><br /><br />
-<br />
-Nov 8, 2021 - ... de luis miguel. manager in spanish. manager kim 97eae9a76d goofballgoalssoccersimulatorlicensekeyACDSee Pro 2.5.332 serial 64 bitswift ... Dec 22, 2019 - ... in Spanish, manager kim 97eae9a76d goofballgoalssoccersimulatorlicensekeyACDSee Pro 2.5.332 serial 64 bitswift...
-Read more: Nov 8, 2021 - ... de luis miguel. manager in spanish. manager kim 97eae9a76d goofballgoalssoccersimulatorlicensekeyACDSee Pro 2.5.332 serial 64 bitswift...
-Dec 22, 2019 - ... in Spanish, manager kim 97eae9a76d goofballgoalssoccersimulatorlicensekeyACDSee Pro 2.5.332 serial 64 bitswift...
-Hide
-Translation from Spanish.
-From Spanish.
-She is an author of articles for the "Alpina Publisher" publishing house, the author and compiler of the books "Spanish from scratch in 16 hours", "Spanish for one month", "500 Spanish phrases" and others.
-Spanish.
-Basic Course.
-2 ed.
-Textbook for Academic Baccalaureate.
-M.: Yurait, 2016.
-- 415 pp. - (Bachelor.
-Academic course).
-ISBN 978-5-534-00485-4.
-The textbook was created in accordance with the Federal state educational standard.
-Buy the book, read reviews ISBN 978-5-9925-1190-4 Labyrinth.
-The textbook presents basic information about the mechanisms of the emergence and development of mental .
-Download: Attachment.
-Size.
-Textbook.
-Author: Vygotsky L.S. Size: 16 mb.
-Format: PDF.
-Download, read.
-Read the book online Fundamentals of General Psychology - Podzolkov Vladimir Grigorievich - free, without .
-Read for free the text of the book Fundamentals of General Psychology by Vladimir Podzolkov (1st page of the book) :: free books in electronic .
-Read online "Fundamentals of General Psychology" - Podzolkov Vladimir Grigorievich - free, .
-Read online "Fundamentals of General Psychology" - Podzolkov Vladimir Grigorievich - free, .
-Read online "Fundamentals of General Psychology" - Podzolkov Vladimir Grigorievich - free, without .
-At the eLibrary LitRes you can read online books of Vladimir Podzolkov for free or .
-Read the book Fundamentals of General Psychology by Vladimir Podzolkov - page 1 of the text of the book .
-From the book: Vladimir Grigorievich Podzolkov, Doctor of Psychology, Professor, .
-Vladimir Podzolkov .
-Vladimir Grigorievich Podzolkov, Doctor of Psychology, Professor, .
-Vladimir Podzolkov.
-Fundamentals of General Psychology.
-M.: Cogito-Center, 2006.
-Lectures.
-Podzolkov, "Fundamentals of General Psychology.
-From the book: Vladimir G. Podzolkov, Doctor of Psychology, .
-E-Library LitRes offers you to download all the books of the series "Fundamentals of General.
-Podzolkov Vladimir Grigorievich.
-Podzolkov Vladimir Grigorievich (born. Currently works at the Institute of Psychology of the Russian Academy of Sciences, is the head of the laboratory of psychology of professional health, the author of more than 400 scientific papers.
-The main topics of research: professional health, psychology of work, psychophysiology of work, diagnosis of the state and personality properties of specialists.
-Download: Introduction .
-Podzolkov V.G. Fundamentals of General Psychology.
-Textbook for students of higher education institutions. 8a78ff9644<br />
-<br />
-<br />
-<p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Basic Statistics And Probability By Shahid Jamal Pdf Downloadl REPACK.md
DELETED
@@ -1,6 +0,0 @@
-<h2>Basic Statistics And Probability By Shahid Jamal Pdf Downloadl</h2><br /><p><b><b>Download</b> ››››› <a href="https://imgfil.com/2uxYaM">https://imgfil.com/2uxYaM</a></b></p><br /><br />
-<br />
-d5da3c52bf<br />
-<br />
-<br />
-<p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Bioexcess Plus Crack The Best Way to Enhance Your Windows Experience.md
DELETED
@@ -1,22 +0,0 @@
-<br />
-<p>austhal 19191a764c<br /> -embedded-workbench-for-arm-710-crack<br />[ -embedded-workbench-for-arm-710-crack ]<br />[ -embedded-workbench-for-arm-710-crack ]<br />[ -embedded-workbench-for-arm-710-crack ]<br />link= -embedded-workbench-for-arm-710-crack<br />link= -embedded-workbench-for-arm-710-crack<br />link= -embedded-workbench-for-arm-710-crack</p>
-<p>yasann 19191a764c<br /> -pino/bioexcess-plus-crack<br />[ -pino/bioexcess-plus-crack ]<br />[ -pino/bioexcess-plus-crack ]<br />[ -pino/bioexcess-plus-crack ]<br />link= -pino/bioexcess-plus-crack<br />link= -pino/bioexcess-plus-crack<br />link= -pino/bioexcess-plus-crack</p>
-<h2>Bioexcess Plus Crack</h2><br /><p><b><b>Download</b> ✯✯✯ <a href="https://imgfil.com/2uxZsp">https://imgfil.com/2uxZsp</a></b></p><br /><br />
-<p>charquin 19191a764c<br /> -vision-lifesign-with-remedies-english-cracked-free-download-torrent-file-1<br />[ -vision-lifesign-with-remedies-english-cracked-free-download-torrent-file-1 ]<br />[ -vision-lifesign-with-remedies-english-cracked-free-download-torrent-file-1 ]<br />[ -vision-lifesign-with-remedies-english-cracked-free-download-torrent-file-1 ]<br />link= -vision-lifesign-with-remedies-english-cracked-free-download-torrent-file-1<br />link= -vision-lifesign-with-remedies-english-cracked-free-download-torrent-file-1<br />link= -vision-lifesign-with-remedies-english-cracked-free-download-torrent-file-1</p>
-<p>vinfide 19191a764c<br /> -league-live-4-crack-serial-key<br />[ -league-live-4-crack-serial-key ]<br />[ -league-live-4-crack-serial-key ]<br />[ -league-live-4-crack-serial-key ]<br />link= -league-live-4-crack-serial-key<br />link= -league-live-4-crack-serial-key<br />link= -league-live-4-crack-serial-key</p>
-<p>olyzav 19191a764c<br /> -digital-gem-pro-pro-crack-serial-keygenzip<br />[ -digital-gem-pro-pro-crack-serial-keygenzip ]<br />[ -digital-gem-pro-pro-crack-serial-keygenzip ]<br />[ -digital-gem-pro-pro-crack-serial-keygenzip ]<br />link= -digital-gem-pro-pro-crack-serial-keygenzip<br />link= -digital-gem-pro-pro-crack-serial-keygenzip<br />link= -digital-gem-pro-pro-crack-serial-keygenzip</p>
-<p>oxfpai 19191a764c<br /> -spinner-free-download-crack-idm<br />[ -spinner-free-download-crack-idm ]<br />[ -spinner-free-download-crack-idm ]<br />[ -spinner-free-download-crack-idm ]<br />link= -spinner-free-download-crack-idm<br />link= -spinner-free-download-crack-idm<br />link= -spinner-free-download-crack-idm</p>
-<p>impgin 19191a764c<br /> -pro-4-vst-crack<br />[ -pro-4-vst-crack ]<br />[ -pro-4-vst-crack ]<br />[ -pro-4-vst-crack ]<br />link= -pro-4-vst-crack<br />link= -pro-4-vst-crack<br />link= -pro-4-vst-crack</p>
-<p>lorequa 19191a764c<br /> -pdf-professional-24931-with-crack-latest<br />[ -pdf-professional-24931-with-crack-latest ]<br />[ -pdf-professional-24931-with-crack-latest ]<br />[ -pdf-professional-24931-with-crack-latest ]<br />link= -pdf-professional-24931-with-crack-latest<br />link= -pdf-professional-24931-with-crack-latest<br />link= -pdf-professional-24931-with-crack-latest</p>
-<p>encdahy 19191a764c<br /> -video-editor-6-crack-serial-key-2020-free-download<br />[ -video-editor-6-crack-serial-key-2020-free-download ]<br />[ -video-editor-6-crack-serial-key-2020-free-download ]<br />[ -video-editor-6-crack-serial-key-2020-free-download ]<br />link= -video-editor-6-crack-serial-key-2020-free-download<br />link= -video-editor-6-crack-serial-key-2020-free-download<br />link= -video-editor-6-crack-serial-key-2020-free-download</p>
-<p>iolagodo 19191a764c<br /> -partition-master-138-crack-serial-key-2020-free<br />[ -partition-master-138-crack-serial-key-2020-free ]<br />[ -partition-master-138-crack-serial-key-2020-free ]<br />[ -partition-master-138-crack-serial-key-2020-free ]<br />link= -partition-master-138-crack-serial-key-2020-free<br />link= -partition-master-138-crack-serial-key-2020-free<br />link= -partition-master-138-crack-serial-key-2020-free</p>
-<p></p>
-<p>walbvyr 19191a764c<br /> -gaillard/solidworks-2008-software-free-download-with-crack<br />[ -gaillard/solidworks-2008-software-free-download-with-crack ]<br />[ -gaillard/solidworks-2008-software-free-download-with-crack ]<br />[ -gaillard/solidworks-2008-software-free-download-with-crack ]<br />link= -gaillard/solidworks-2008-software-free-download-with-crack<br />link= -gaillard/solidworks-2008-software-free-download-with-crack<br />link= -gaillard/solidworks-2008-software-free-download-with-crack</p>
-<p>hencath 19191a764c<br /> -accounting-software-cracked-download<br />[ -accounting-software-cracked-download ]<br />[ -accounting-software-cracked-download ]<br />[ -accounting-software-cracked-download ]<br />link= -accounting-software-cracked-download<br />link= -accounting-software-cracked-download<br />link= -accounting-software-cracked-download</p>
-<p>janfest 19191a764c<br /> -zoo-crack-serial-key<br />[ -zoo-crack-serial-key ]<br />[ -zoo-crack-serial-key ]<br />[ -zoo-crack-serial-key ]<br />link= -zoo-crack-serial-key<br />link= -zoo-crack-serial-key<br />link= -zoo-crack-serial-key</p>
-<p>flavdary 19191a764c<br /> -60-crackupdated<br />[ -60-crackupdated ]<br />[ -60-crackupdated ]<br />[ -60-crackupdated ]<br />link= -60-crackupdated<br />link= -60-crackupdated<br />link= -60-crackupdated</p>
-<p>valyel 19191a764c<br /> -10-download-crack-45<br />[ -10-download-crack-45 ]<br />[ -10-download-crack-45 ]<br />[ -10-download-crack-45 ]<br />link= -10-download-crack-45<br />link= -10-download-crack-45<br />link= -10-download-crack-45</p>
-<p>Thanks for your article. It is extremely unfortunate that over the last decade, the travel industry has had to tackle terrorism, SARS, tsunamis, bird flu, swine flu, along with the first ever true global tough economy. Through all of it the industry has really proven to be sturdy, resilient plus dynamic, getting new approaches to deal with trouble. There are constantly fresh problems and opportunity to which the marketplace must again adapt and behave.</p>
-<p>Command And Conquer Generals Zero Hour Reborn V7 Download ???? ???? Download - and Conquer Generals Zero Hour Reborn V7 mod Free Download & InstallNew updates and news. CNCGZR V7 Alpha Mod release. PC game free download setup.exe with crack and direct download link[Updated] & [Updated].Command and Conquer Generals Zero Hour Reborn V7 Download & InstallFree download generals zero hour reborn v7 patch v7 update game that can be free and single-player online battle game. Download this command and conquer.Phosheroes.com command and conquer generals zero hour reborn v7 download phosheroes.com phosheroes is a free.Command and Conquer Generals Zero Hour Reborn V7 Download & Install free. Command and Conquer Generals Zero Hour Reborn V7 Download & Install free. Free.2020.10.22 11:11. Command And Conquer Generals Zero Hour Reborn V7!!EXCLUSIVE!! Download.Command and Conquer Generals Zero Hour Reborn V7 mod Free Download & InstallCustom skins for Generals Zero Hour Reborn.0. Download CNCGZR V7 Alpha Mod Free &..Play Cncgzr 0[h]r Reborn Mod download PC Game In Single Direct Link Here.. command and conquer generals zero hour reborn v7 downloadCommand and Conquer Generals Zero Hour Reborn V7 Download & Install. Command and Conquer Generals Zero Hour Reborn v7.Generals Zero Hour Rebirth v7.12 is the ultimate reborn version of the. Download: Command and Conquer Generals Zero Hour Reborn V7 (. ZIP.Command and Conquer Generals Zero Hour Reborn. Command and Conquer Generals Zero Hour Reborn v7. MOD Download: Command and Conquer.Command and Conquer Generals Zero Hour Reborn is a. command and conquer generals zero hour reborn v7 downloadCommand and Conquer Generals Zero Hour Reborn.Command & Conquer Generals Zero Hour Reborn V7 Free Download. Download Command and Conquer Generals Zero Hour Reborn V7 Free Download.Command and Conquer Generals Zero Hour Reborn. command and conquer generals zero hour reborn v7 download.exe.Command and Conquer Generals Zero Hour Reborn Full Version. Command and Conquer Generals Zero Hour Reborn Free. Mod Download: Command and.command and conquer generals zero hour reborn v7 downloadThese are more C&C Generals Zero Hour Mod downloads for you:. C&C Generals Zero Hour Reborn 0.1.1 MOD Download: Command and Conquer. ee730c9e81 -13-advanced-edition-key -win-basketball-offline-mod-apk-download -fairytale-slow-version-mp3-download -jewellery-box-tamil-pdf-download -carrozzeria-aviczh9md-english-manual</p> aaccfb2cb3<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Bu Ali Sina Books In Urdu Free Download TOP.md
DELETED
@@ -1,6 +0,0 @@
-<h2>bu ali sina books in urdu free download</h2><br /><p><b><b>Download File</b> ✔ <a href="https://imgfil.com/2uy0qU">https://imgfil.com/2uy0qU</a></b></p><br /><br />
-<br />
-. D-581. 1974nTarjuma Qanoon Shaikh Bu Ali Seena. Volume-005. 1780nMeyaar-ul-Uqool. 1960nQanoon Shaikh Bu Ali Seena. Volume-006. 1530nMeyaar-ul-Uqool. 1935nQanoon Shaikh Bu Ali Seena. Volume-007. 1883nTarjuma Qanoon Shaikh Bu Ali . D-581. 1975nQanoon Shaikh Bu Ali Seena. Volume-008. 1760nMeyaar-ul-Uqool. 1970nQanoon Shaikh Bu Ali Seena. Volume-009. 1520nMeyaar-ul-Uqool. 1939nQanoon Shaikh Bu Ali Seena. Volume-010. 1880nTarjuma Qanoon Shaikh Bu Ali Seena. D-581. 1976nQanoon Shaikh Bu Ali Seena. Volume-011. 1690nMeyaar-ul-Uqool. 1961nMeyaar-ul-Uqool. 1930nMeyaar-ul-Uqool. 1946nMeyaar-ul-Uqool. 1932nMeyaar-ul-Uqool. 1926nMeyaar-ul-Uqool. 1925nMeyaar-ul-Uqool. 1934nMeyaar-ul-Uqool. 1933nMeyaar-ul-Uqool. 1934nMeyaar-ul-Uqool. 1935nMeyaar-ul-Uqool. 1936nMeyaar-ul-Uqool. 1938nMeyaar-ul-Uqool. 1939nMeyaar-ul-Uqool. 1940nMeyaar-ul-Uqool. 1941nMeyaar-ul-Uqool. 1942nMeyaar-ul-Uqool. 1947nMeyaar-ul-Uqool. 1949nMeyaar-ul-Uqool. 1952nMeyaar-ul-Uqool. 1953nMeyaar-ul-Uqool. 1954nMeyaar-ul-Uqool. 1957nMeya 4fefd39f24<br />
-<br />
-<br />
-<p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Discografia Pholhas Torrent 108 !FULL!.md
DELETED
@@ -1,6 +0,0 @@
-<h2>discografia pholhas torrent 108</h2><br /><p><b><b>Download</b> ✏ ✏ ✏ <a href="https://imgfil.com/2uy10z">https://imgfil.com/2uy10z</a></b></p><br /><br />
-
-MORE INFORMATION Contact the BasketAu Dorset. "The Basket" is situated on St. I download indiewire 113 - iPSC Association Meetings free the Daily Download Index. The Hot 100 Songs chart is the single most important music chart in the United Kingdom and Ireland, compiled download index 113 - iPSC Association Meetings by Nielsen SoundScan and published by the Official Charts Company (OCC). Son of God, Gods Son, What Is God Like. Good God Gaze is a 97 track collection of Hymns, gospel classics and religious music. See more about the Family 4-pack, Supper Skins. View artist reviews, photos, and track listings. The most powerful spiritual message of all time: Download the mp3 online! The Bibles greatest revolution has begun! Join over two million Christian readers each month and get the Bible message today. The official site for the MLB is the best place to find out the latest news and events in baseball. Browse Albums · Tracks · Photos · Videos · Episodes · More. Help! can't read download index 113 - iPSC Association Meetings. Hanukkah - The Hanukkah Hymn Project - Volume 2, download index 113 - iPSC Association Meetings, Volume 3, and the Hanukkah Hymn Project Online in iTunes. The site is not directly affiliated with the YouTube channel, simply playing our videos on it. Journal of Biblical Research Vol. Synchronised Studies, Vol. Teardown is a geeky and comedic podcast that dives deep into things that interest us like technology and other forms of geekdom, like this web site. I don't know what I don't know, but we will teach you what we don't know. This curriculum is designed to be used with the popular movie, High School Musical. But the question of truth, the question of an objective, transcendent order of reality, is not the same thing as the question of God. I downloaded into the New Experience, it says it is 1,710MB, but there is no option to extract files. But the question of truth, the question of an objective, transcendent order of reality, is not the same thing as the question of God. Because it can't. The Hot 100 Singles is an official chart ranking the singles that have sold the most units in the United States. The Christian Music Channel provides ministry minded Christian musicians with online tools and resources to connect with those that are searching for the 4fefd39f24<br />
-<br />
-<br />
-<p></p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chapters Interactive Stories Mod Apk 6.3.4 The Ultimate Guide to Unlocking All Features.md
DELETED
@@ -1,123 +0,0 @@
-<br />
-<h1>Chapters Interactive Stories Mod Apk 6.3.4: A Guide for Gamers</h1>
-<p>If you are a fan of interactive and immersive storytelling, you might have heard of Chapters Interactive Stories, a popular mobile game that lets you choose your own adventure in various genres and scenarios. But did you know that there is a modded version of this game that gives you unlimited access to all the chapters, cards, and features? In this article, we will tell you everything you need to know about Chapters Interactive Stories Mod Apk 6.3.4, including what it is, how to download and install it, and some tips and tricks to make the most out of it.</p>
-<h2>What is Chapters Interactive Stories?</h2>
-<p>Chapters Interactive Stories is a mobile game developed by Crazy Maple Studio that offers users an immersive experience through interactive and engaging storylines. In this game, players get to choose their own path in various story scenarios, making decisions that affect the outcome of the story. By doing so, players can shape their own characters, relationships, and endings.</p>
-<h2>chapters interactive stories mod apk 6.3.4</h2><br /><p><b><b>DOWNLOAD</b> ☆☆☆ <a href="https://urlin.us/2uSTXI">https://urlin.us/2uSTXI</a></b></p><br /><br />
-<p>The game features a wide range of genres, such as romance, fantasy, drama, horror, comedy, and more. Each genre has multiple stories to choose from, each with different characters, settings, and plots. Some of the stories are original creations by the developers, while others are based on popular books, movies, or TV shows.</p>
-<p>Some of the features of Chapters Interactive Stories are:</p>
-<h3>Features of Chapters Interactive Stories</h3>
-<ul>
-<li><b>Interactive choices:</b> Players can make choices that affect the direction and outcome of the story. Some choices are free, while others require diamonds or tickets to unlock.</li>
-<li><b>Diverse genres:</b> Players can choose from a variety of genres, such as romance, fantasy, drama, horror, comedy, and more.</li>
-<li><b>Multiple stories:</b> Players can explore different stories within each genre, each with different characters, settings, and plots.</li>
-<li><b>Original and adapted stories:</b> Players can enjoy original stories created by the developers or adapted stories based on popular books, movies, or TV shows.</li>
-<li><b>Customizable characters:</b> Players can customize their own characters' appearance, name, gender, and personality.</li>
-<li><b>Fashionable outfits:</b> Players can dress up their characters in various outfits that suit their style and mood.</li>
-<li><b>Social features:</b> Players can interact with other players and authors through chat rooms, comments, reviews, ratings, and more.</li>
-</ul>
-<h3>How to play Chapters Interactive Stories</h3>
-<p>To play Chapters Interactive Stories, players need to download the game from the Google Play Store or the App Store for free. After installing the game, players need to create an account or log in with their Facebook account. Then, players can choose a genre and a story to start playing.</p>
|
21 |
-
<p>The game interface consists of three main elements: the story text, the choices menu, and the navigation bar. The story text displays the dialogue and narration of the story. The choices menu shows the options that players can choose from at certain points in the story. The navigation bar contains buttons that allow players to access other features of the game, such as the home screen, the store, the settings, and more.</p>
|
22 |
-
<p>To progress through the story, players need to tap on the screen to read the story text and make choices when prompted. Some choices are free, while others require diamonds or tickets to unlock. Diamonds are the premium currency of the game that can be used to unlock premium choices, outfits, and cards. Tickets are the energy of the game that can be used to start a new chapter. Players can earn diamonds and tickets by completing chapters, watching ads, or purchasing them with real money.</p>
|
23 |
-
<h2>What is Chapters Interactive Stories Mod Apk 6.3.4?</h2>
|
24 |
-
<p>Chapters Interactive Stories Mod Apk 6.3.4 is a modified version of the original game that gives players unlimited access to all the features of the game without spending any money. This means that players can enjoy all the chapters, cards, and outfits without worrying about diamonds or tickets. Moreover, players can also get rid of annoying ads and enjoy a smoother gaming experience.</p>
|
25 |
-
<p>The modded version of the game is not available on the official app stores, but it can be downloaded from third-party websites that provide apk files. However, players should be careful when downloading and installing modded apk files, as they may contain viruses or malware that can harm their devices or compromise their personal information.</p>
|
66 |
-
<h3>Benefits of Chapters Interactive Stories Mod Apk 6.3.4</h3>
|
67 |
-
<ul>
|
68 |
-
<li><b>Unlimited diamonds and tickets:</b> Players can unlock all the premium choices, outfits, and cards without spending any money.</li>
|
69 |
-
<li><b>No ads:</b> Players can get rid of annoying ads that interrupt their gameplay and waste their time.</li>
|
70 |
-
<li><b>Better performance:</b> Players can enjoy a smoother and faster gaming experience with less lag and glitches.</li>
|
71 |
-
<li><b>More fun:</b> Players can explore different stories and genres without any limitations or restrictions.</li>
|
72 |
-
</ul>
|
73 |
-
<h3>How to download and install Chapters Interactive Stories Mod Apk 6.3.4</h3>
|
74 |
-
<p>To download and install Chapters Interactive Stories Mod Apk 6.3.4, players need to follow these steps:</p>
|
75 |
-
<ol>
|
76 |
-
<li><b>Find a reliable website:</b> Players need to find a trustworthy website that provides the modded apk file of the game. They can search for "Chapters Interactive Stories Mod Apk 6.3.4" on Google or other search engines and check the reviews and ratings of the websites before downloading.</li>
|
77 |
-
<li><b>Download the apk file:</b> Players need to click on the download button on the website and wait for the apk file to be downloaded on their device.</li>
|
78 |
-
<li><b>Enable unknown sources:</b> Players need to go to their device settings and enable the option of installing apps from unknown sources. This will allow them to install the modded apk file without any issues.</li>
|
79 |
-
<li><b>Install the apk file:</b> Players need to locate the apk file on their device and tap on it to start the installation process. They need to follow the instructions on the screen and wait for the installation to be completed.</li>
|
80 |
-
<li><b>Launch the game:</b> Players need to open the game icon on their device and enjoy playing Chapters Interactive Stories Mod Apk 6.3.4 with unlimited features.</li>
|
81 |
-
</ol>
|
82 |
-
<h2>Tips and tricks for Chapters Interactive Stories Mod Apk 6.3.4</h2>
|
83 |
-
<p>To make the most out of Chapters Interactive Stories Mod Apk 6.3.4, players can use these tips and tricks:</p>
|
84 |
-
<h3>Choose your genre wisely</h3>
|
85 |
-
<p>The game offers a variety of genres to choose from, such as romance, fantasy, drama, horror, comedy, and more. Each genre has its own style, tone, and mood, so players should choose a genre that suits their preferences and interests. For example, if they want a light-hearted and humorous story, they can choose comedy; if they want a thrilling and suspenseful story, they can choose horror; if they want a romantic and emotional story, they can choose romance; and so on.</p>
|
86 |
-
<h3>Spend your diamonds and tickets carefully</h3>
|
87 |
-
<p>Even though players have unlimited diamonds and tickets in Chapters Interactive Stories Mod Apk 6.3.4, they should still spend them wisely and strategically. For example, they should not waste their diamonds on choices or outfits that do not affect the story or their character development; they should save their tickets for stories that they are interested in or excited about; they should use their cards to boost their stats or unlock bonus scenes; and so on.</p>
|
88 |
-
<h3>Customize your character and outfits</h3>
|
89 |
-
<p>The game allows players to customize their own characters' appearance, name, gender, and personality. This gives players more control over their story and makes them feel more connected to their characters. Moreover, players can also dress up their characters in various outfits that suit their style and mood. This adds more fun and flair to their gameplay and makes their characters stand out from others.</p>
|
90 |
-
<h3>Interact with other players and authors</h3>
|
91 |
-
<p>The game also has social features that allow players to interact with other players and authors through chat rooms, comments, reviews, ratings, and more. This can help players to share their opinions, feedback, suggestions, and tips with others; to discover new stories and genres; to make friends and join communities; and to support their favorite authors and stories.</p>
|
92 |
-
<h2>Conclusion</h2>
|
93 |
-
<p>Chapters Interactive Stories Mod Apk 6.3.4 is a great way to enjoy interactive and immersive storytelling on your mobile device. With this modded version of the game, you can access all the features of the game without spending any money. You can choose your own adventure in various genres and scenarios, customize your character and outfits, and interact with other players and authors. If you are looking for a fun and engaging game that lets you create your own story, you should definitely try Chapters Interactive Stories Mod Apk 6.3.4.</p>
|
94 |
-
<h2>FAQs</h2>
|
95 |
-
<p>Here are some frequently asked questions about Chapters Interactive Stories Mod Apk 6.3.4:</p>
|
96 |
-
<table>
|
97 |
-
<tr>
|
98 |
-
<th>Question</th>
|
99 |
-
<th>Answer</th>
|
100 |
-
</tr>
|
101 |
-
<tr>
|
102 |
-
<td>Is Chapters Interactive Stories Mod Apk 6.3.4 safe to use?</td>
|
103 |
-
<td>Chapters Interactive Stories Mod Apk 6.3.4 is generally safe to use, as long as you download it from a reliable website that provides virus-free apk files. However, you should always be careful when installing modded apk files, as they may contain malware or spyware that can harm your device or steal your personal information.</td>
|
104 |
-
</tr>
|
105 |
-
<tr>
|
106 |
-
<td>Will I get banned for using Chapters Interactive Stories Mod Apk 6.3.4?</td>
|
107 |
-
<td>There is a low chance of getting banned for using Chapters Interactive Stories Mod Apk 6.3.4, as the game does not have a strict anti-cheat system or detection mechanism. However, you should still be cautious when using the modded version of the game, as you may get reported by other players or authors if they notice your unlimited diamonds or tickets.</td>
|
108 |
-
</tr>
|
109 |
-
<tr>
|
110 |
-
<td>Can I update Chapters Interactive Stories Mod Apk 6.3.4?</td>
|
111 |
-
<td>You can update Chapters Interactive Stories Mod Apk 6.3.4 by downloading the latest version of the modded apk file from the same website that you downloaded it from before. However, you should always backup your game data before updating, as you may lose your progress or settings if something goes wrong during the update process.</td>
|
112 |
-
</tr>
|
113 |
-
<tr>
|
114 |
-
<td>Can I play Chapters Interactive Stories Mod Apk 6.3.4 offline?</td>
|
115 |
-
<td>No, you cannot play Chapters Interactive Stories Mod Apk 6.3.4 offline, as the game requires an internet connection to load the stories and sync your data with the server. If you try to play the game offline, you may encounter errors or glitches that prevent you from playing properly.</td>
|
116 |
-
</tr>
|
117 |
-
<tr>
|
118 |
-
<td>Can I play Chapters Interactive Stories Mod Apk 6.3.4 on PC?</td>
|
119 |
-
<td>Yes, you can play Chapters Interactive Stories Mod Apk 6.3.4 on PC by using an Android emulator software that allows you to run Android apps on your computer. Some of the popular Android emulators are BlueStacks, NoxPlayer, LDPlayer, and MEmu.</td>
|
120 |
-
</tr>
|
121 |
-
</table><br />
|
122 |
-
<br />
|
123 |
-
<br />
|
spaces/1phancelerku/anime-remove-background/Angry Birds 2 Mod Apk O jogo de estilingue mais viciante com dinheiro infinito e desafios ilimitados.md
DELETED
@@ -1,127 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>Angry Birds 2 Mod Apk Dinheiro Infinito: How to Download and Play the Ultimate Bird Flinging Game</h1>
|
3 |
-
<p>Are you a fan of Angry Birds, the most popular physics-based game series in the world? Do you want to experience a new level of fun and challenge with Angry Birds 2, the official sequel to the original game? Do you want to enjoy unlimited money, gems, lives, cards, and spells in Angry Birds 2? If you answered yes to any of these questions, then this article is for you!</p>
|
4 |
-
<p>In this article, we will tell you everything you need to know about Angry Birds 2 mod apk dinheiro infinito, a modified version of the game that gives you access to all the premium features for free. We will explain what is Angry Birds 2, what are its main features, what are the advantages of using Angry Birds 2 mod apk dinheiro infinito, how to download and install it on your Android device, and how to play it like a pro. By the end of this article, you will be ready to download and play Angry Birds 2 mod apk dinheiro infinito and have a blast with your feathered friends!</p>
|
5 |
-
<h2>angry birds 2 mod apk dinheiro infinito</h2><br /><p><b><b>Download Zip</b> ››››› <a href="https://jinyurl.com/2uNUJM">https://jinyurl.com/2uNUJM</a></b></p><br /><br />
|
6 |
-
<h2>What is Angry Birds 2?</h2>
|
7 |
-
<p>Angry Birds 2 is a puzzle video game developed by Rovio Entertainment and released in 2015. It is the direct sequel to the original Angry Birds game, which was launched in 2009 and became a global phenomenon. Angry Birds 2 follows the same premise as the previous games: you have to use a slingshot to launch a variety of birds at the structures and pigs that are trying to steal their eggs. However, Angry Birds 2 also introduces new features and improvements that make the game more exciting and challenging.</p>
|
8 |
-
<h2>What are the main features of Angry Birds 2?</h2>
|
9 |
-
<p>Angry Birds 2 has many features that make it stand out from other puzzle games. Here are some of them:</p>
|
10 |
-
<ul>
|
11 |
-
<li><b>Multiphase levels:</b> Unlike the previous games, where each level had only one stage, Angry Birds 2 has levels that consist of multiple stages. This means that you have to face different challenges and obstacles in each stage, and you have to use different strategies and birds to complete them. The levels also have random layouts, so you never know what to expect.</li>
|
12 |
-
<li><b>Spells:</b> Spells are special powers that you can use to boost your birds or sabotage the pigs. There are six types of spells in Angry Birds 2: Hot Chili, Mighty Eagle, Golden Duck, Pig Inflater, Blizzard, and Piggyquake. Each spell has a different effect and can be used once per level. You can earn spells by playing the game or by purchasing them with gems.</li>
|
13 |
-
<li><b>Clans:</b> Clans are groups of players who can chat, share tips, and compete with each other. You can join an existing clan or create your own clan with your friends. By joining a clan, you can also participate in clan events and earn rewards.</li>
|
14 |
-
<li><b>Tournaments:</b> Tournaments are daily or weekly competitions where you can compete with other players from around the world. You can enter a tournament by paying a fee with gems or tickets, and you can play as many levels as you want within a time limit. The more levels you complete and the higher your score, the higher your rank on the leaderboard. You can win prizes such as gems, feathers, hats, and chests based on your rank.</li>
|
15 |
-
<li><b>Hats:</b> Hats are accessories that you can equip your birds with to give them extra abilities or bonuses. There are four types of hats in Angry Birds 2: common, rare, legendary, and epic. Each hat has a different design and effect, such as increasing your score, damage, or speed. You can collect hats by opening chests or by purchasing them with gems.</li>
|
16 |
-
<li><b>And more:</b> Angry Birds 2 also has other features such as daily quests, achievements, leaderboards, star rewards, tower of fortune, etc., that add more fun and variety to the game.</li>
|
17 |
-
</ul>
|
18 |
-
<h2>What are the advantages of using Angry Birds 2 mod apk dinheiro infinito?</h2>
|
19 |
-
<p>Angry Birds 2 mod apk dinheiro infinito is a modified version of the game that gives you unlimited access to all the premium features for free. By using this mod apk, you can enjoy the following advantages:</p>
|
20 |
-
<ul>
|
21 |
-
<li><b>Unlimited money:</b> You can get unlimited money in Angry Birds 2 mod apk dinheiro infinito, which you can use to buy anything you want in the game. You can buy more spells, hats, chests, tickets, etc., without worrying about running out of money.</li>
|
22 |
-
<li><b>Unlimited gems:</b> You can also get unlimited gems in Angry Birds 2 mod apk dinheiro infinito, which are the premium currency of the game. You can use gems to enter tournaments, open chests, buy spells, etc., without spending real money.</li>
|
23 |
-
<li><b>Unlimited lives:</b> You can get unlimited lives in Angry Birds 2 mod apk dinheiro infinito, which means that you can play as many levels as you want without waiting for your lives to refill. You can also retry any level as many times as you want without losing lives.</li>
|
24 |
-
<li><b>Unlimited cards:</b> You can get unlimited cards in Angry Birds 2 mod apk dinheiro infinito, which are the items that allow you to choose which bird to use in each level. You can have as many cards as you want and use any bird you want without running out of cards.</li>
|
25 |
-
<li><b>Unlimited spells:</b> You can get unlimited spells in Angry Birds 2 mod apk dinheiro infinito, which means that you can use any spell you want in any level without spending gems or coins. You can also use multiple spells in one level without any limit.</li>
|
26 |
-
</ul>
|
27 |
-
<h2>How to download and install Angry Birds 2 mod apk dinheiro infinito?</h2>
|
28 |
-
<p>If you want to download and install Angry Birds 2 mod apk dinheiro infinito on your Android device, you need to follow these simple steps:</p>
|
29 |
-
<h3>Step 1: Enable unknown sources on your device</h3>
|
30 |
-
<p>Before you can install any mod apk file on your device, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, follow these steps:</p>
|
31 |
-
<ol>
|
32 |
-
<li>Go to your device settings and tap on Security or Privacy.</li>
|
33 |
-
<li>Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.</li>
|
34 |
-
<li>A warning message will pop up. Tap on OK or Allow to confirm.</li>
|
35 |
-
</ol>
|
36 |
-
<p><img src="https://i.imgur.com/6yZ8q0L.png" alt="Enable unknown sources" width="300" height="500"></p>
|
37 |
-
<h3>Step 2: Download the mod apk file from a trusted source</h3>
|
38 |
-
<p>Next, you need to download the mod apk file from a trusted source. There are many websites that offer mod apk files, but not all of them are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and choose a reputable source. One of the best sources that we recommend is [Angry Birds 2 Mod Apk Dinheiro Infinito], which is a website that provides high-quality and updated mod apk files for various games and apps. To download the mod apk file from this source, follow these steps:</p>
|
39 |
-
<ol>
|
40 |
-
<li>Go to [Angry Birds 2 Mod Apk Dinheiro Infinito] using your browser.</li>
|
41 |
-
<li>Scroll down and find the download button that says Download Angry Birds 2 Mod Apk Dinheiro Infinito.</li>
|
42 |
-
<li>Tap on the download button and wait for the download to start.</li>
|
43 |
-
<li>The mod apk file will be downloaded to your device in a few minutes, depending on your internet speed.</li>
|
44 |
-
</ol>
|
45 |
-
<p><img src="https://i.imgur.com/7Q9s4LH.png" alt="Download mod apk file" width="300" height="500"></p>
|
76 |
-
<h3>Step 3: Locate and install the mod apk file on your device</h3>
|
77 |
-
<p>After you have downloaded the mod apk file, you need to locate and install it on your device. To do this, follow these steps:</p>
|
78 |
-
<ol>
|
79 |
-
<li>Go to your device file manager and find the folder where you downloaded the mod apk file. It is usually in the Downloads folder.</li>
|
80 |
-
<li>Tap on the mod apk file and a pop-up window will appear.</li>
|
81 |
-
<li>Tap on Install and wait for the installation to finish.</li>
|
82 |
-
<li>If another pop-up window appears asking for permissions, tap on Allow or Accept to grant them.</li>
|
83 |
-
</ol>
|
84 |
-
<p><img src="https://i.imgur.com/9cYgJZw.png" alt="Install mod apk file" width="300" height="500"></p>
|
85 |
-
<h3>Step 4: Launch the game and enjoy!</h3>
|
86 |
-
<p>Congratulations! You have successfully installed Angry Birds 2 mod apk dinheiro infinito on your device. Now you can launch the game and enjoy all the modded features. To do this, follow these steps:</p>
|
87 |
-
<ol>
|
88 |
-
<li>Go to your device app drawer and find the Angry Birds 2 icon.</li>
|
89 |
-
<li>Tap on the icon and wait for the game to load.</li>
|
90 |
-
<li>You will see a message that says "Angry Birds 2 Mod Apk Dinheiro Infinito by AngryBirds2ModApkDinheiroInfinito.com". Tap on OK or Continue to proceed.</li>
|
91 |
-
<li>You will be taken to the main menu of the game. You can choose to play offline or online, depending on your preference.</li>
|
92 |
-
<li>You will notice that you have unlimited money, gems, lives, cards, and spells in the game. You can use them as you wish and have fun!</li>
|
93 |
-
</ol>
|
94 |
-
<p><img src="https://i.imgur.com/0yqUzQa.png" alt="Launch game and enjoy" width="300" height="500"></p>
|
95 |
-
<h2>How to play Angry Birds 2 like a pro?</h2>
|
96 |
-
<p>Now that you have downloaded and installed Angry Birds 2 mod apk dinheiro infinito, you might be wondering how to play it like a pro. Well, don't worry, we have some tips and tricks for you that will help you master the game and beat any level with ease. Here are some of them:</p>
|
97 |
-
<h3>Tip 1: Choose the right bird for the right situation</h3>
|
98 |
-
<p>One of the most important aspects of playing Angry Birds 2 is choosing the right bird for the right situation. Each bird has a different ability and score multiplier that can affect the outcome of the level. For example, Red can knock down structures with his strong impact, Chuck can speed up and cut through wood and glass, Bomb can explode and cause massive damage, Matilda can drop an egg bomb and fly upwards, The Blues can split into three and hit multiple targets, Silver can loop and smash downwards, and Terence can crush anything with his huge size. You can also use the Mighty Eagle to call for a powerful airstrike that can wipe out the entire level.</p>
|
99 |
-
<p>Therefore, you need to choose the best bird for each level based on the layout, the materials, the pigs, and the spells. You can also switch the order of the birds by tapping on their cards at the bottom of the screen. You should always try to use the bird that can cause the most damage and destruction with the least number of shots. This will help you earn more points and stars, as well as fill up the Destructometer faster.</p>
|
100 |
-
<h3>Tip 2: Use the environment to your advantage</h3>
|
101 |
-
<p>Another important aspect of playing Angry Birds 2 is using the environment to your advantage. The game has many environmental elements that can help you or hinder you in your quest to defeat the pigs. For example, there are flowers that can bounce your birds back, portals that can teleport your birds to different locations, TNT crates that can explode and cause chain reactions, fans that can blow your birds or objects away, rocks that can fall and crush the pigs, etc.</p>
|
102 |
-
<p>Therefore, you need to pay attention to the environment and use it wisely. You can use the flowers to redirect your birds or hit hard-to-reach pigs, you can use the portals to surprise the pigs or avoid obstacles, you can use the TNT crates to create massive explosions and clear large areas, you can use the fans to push your birds or objects towards the pigs, you can use the rocks to drop them on the pigs or structures, etc. You should also be careful of the environmental hazards that can harm your birds or prevent them from reaching their targets.</p>
|
103 |
-
<h3>Tip 3: Fill up the Destructometer quickly</h3>
|
104 |
-
<p>The Destructometer is a meter that fills up as you destroy objects and pigs in each level. When you fill up the Destructometer completely, you will earn an extra card or spell that you can use in the same level or save for later. The Destructometer also resets after each stage, so you have multiple chances to fill it up in each level.</p>
|
105 |
-
<p>Therefore, you should try to fill up the Destructometer as quickly as possible by destroying as much as possible with each bird. You should aim for weak spots, explosive objects, large structures, multiple pigs, etc., to cause more damage and destruction. You should also use spells wisely to boost your destruction and fill up the Destructometer faster. The more cards or spells you have, the more options and flexibility you have in completing the level.</p>
|
106 |
-
<h3>Tip 4: Compete with other players in multiplayer mode</h3>
|
107 |
-
<p>If you want to test your skills and challenge yourself further, you can compete with other players in multiplayer mode. In multiplayer mode, you can join a clan or create your own clan with your friends. By joining a clan, you can chat with other clan members, share tips and strategies, and participate in clan events. Clan events are special competitions where you have to work together with your clan members to complete a common goal and earn rewards.</p>
|
108 |
-
<p>You can also enter tournaments in multiplayer mode. Tournaments are daily or weekly competitions where you have to compete with other players from around the world in various levels. You have to pay a fee with gems or tickets to enter a tournament, and you have a time limit to play as many levels as you want. The more levels you complete and the higher your score, the higher your rank on the leaderboard. You can win prizes such as gems, feathers, hats, and chests based on your rank.</p>
<p>Competing with other players in multiplayer mode is a great way to improve your skills, learn new tricks, have fun, and earn rewards. You can also make new friends and join a community of Angry Birds fans.</p>
<h2>Conclusion</h2>
<p>Angry Birds 2 is a fantastic game that offers hours of fun and entertainment. It has amazing graphics, sound effects, and animations that bring the game to life. It has a variety of levels, modes, features, and characters that keep the game fresh and exciting. It has simple and intuitive gameplay that anyone can enjoy and master.</p>
<p>However, if you want to take your gaming experience to the next level, you should try Angry Birds 2 mod apk dinheiro infinito. This mod apk gives you unlimited access to all the premium features of the game for free. You can have unlimited money, gems, lives, cards, and spells that you can use to beat any level with ease. You can also customize your birds with different hats and accessories that give them extra abilities and bonuses. You can also compete with other players in multiplayer mode and win amazing prizes.</p>
<p>So what are you waiting for? Download Angry Birds 2 mod apk dinheiro infinito today and join the ultimate bird flinging adventure!</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions and answers about Angry Birds 2 mod apk dinheiro infinito:</p>
<h3>Q: Is Angry Birds 2 mod apk dinheiro infinito safe to use?</h3>
<p>A: Yes, Angry Birds 2 mod apk dinheiro infinito is safe to use as long as you download it from a trusted source like [Angry Birds 2 Mod Apk Dinheiro Infinito]. This website provides high-quality and updated mod apk files that are free from viruses, malware, or spyware. However, you should always be careful when downloading and installing any mod apk file on your device and enable unknown sources in your device settings.</p>
<h3>Q: Do I need to root my device to use Angry Birds 2 mod apk dinheiro infinito?</h3>
<p>A: No, you do not need to root your device to use Angry Birds 2 mod apk dinheiro infinito. This mod apk works on any Android device without requiring root access. However, you should always back up your data before installing any mod apk file on your device in case anything goes wrong.</p>
<h3>Q: Will I get banned from the game if I use Angry Birds 2 mod apk dinheiro infinito?</h3>
<p>A: No, you will not get banned from the game if you use Angry Birds 2 mod apk dinheiro infinito. This mod apk is undetectable by the game servers and does not interfere with the game's functionality or performance. However, you should always use this mod apk responsibly and not abuse it or cheat in multiplayer mode.</p>
<h3>Q: Can I update Angry Birds 2 mod apk dinheiro infinito?</h3>
<p>A: Yes, you can update Angry Birds 2 mod apk dinheiro infinito whenever there is a new version available. However, you should always download the latest version of the mod apk from [Angry Birds 2 Mod Apk Dinheiro Infinito] and not from the Google Play Store or other sources. This will ensure that you get the most updated and compatible version of the mod apk.</p>
<h3>Q: Can I play Angry Birds 2 mod apk dinheiro infinito offline?</h3>
<p>A: Yes, you can play Angry Birds 2 mod apk dinheiro infinito offline without requiring an internet connection. However, some features of the game such as multiplayer mode, tournaments, clan events, etc., require an internet connection to work properly. Therefore, you should always connect to a stable and secure internet connection when playing these features.</p>
spaces/1phancelerku/anime-remove-background/Beach Buggy Racing 2 Mod APK How to Download Aplikasi and Unlock All Features.md
DELETED
@@ -1,164 +0,0 @@
<br />
<h1>Download Aplikasi Beach Buggy Racing 2 Mod Apk</h1>
<p>If you are looking for a fun and exciting kart racing game on your Android device, you should definitely check out Beach Buggy Racing 2. This is a sequel to the popular Beach Buggy Racing, which has over 100 million downloads on Google Play. In this game, you can race against drivers and cars from around the world, explore different tracks and environments, collect and upgrade power-ups, and customize your own car and driver.</p>
<h2>download aplikasi beach buggy racing 2 mod apk</h2><br /><p><b><b>Download</b> ✯✯✯ <a href="https://jinyurl.com/2uNSTW">https://jinyurl.com/2uNSTW</a></b></p><br /><br />
<p>But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money, gems, tickets, and power-ups? What if you want to unlock all the cars, drivers, and tracks in the game? Well, there is a way to do that. You can download aplikasi Beach Buggy Racing 2 mod apk, which is a modified version of the original game that gives you access to all the features and content that you want.</p>
<p>In this article, we will tell you what Beach Buggy Racing 2 is, why you should download its mod apk, how to download and install it on your device, some tips and tricks to play it better, and a review of the game. So, let's get started!</p>
<h2>What is Beach Buggy Racing 2?</h2>
<p>Beach Buggy Racing 2 is a kart racing game developed by Vector Unit, the same studio behind other racing games like Riptide GP and Hydro Thunder Hurricane. It was released in December 2018 for Android and iOS devices, and later for PC and consoles. It is a free-to-play game with in-app purchases.</p>
<p>Beach Buggy Racing 2 is a fully 3D off-road kart racing game with amazing physics, detailed cars and characters, and spectacular weapons. It has a variety of game modes, such as Adventure mode, Championships, Races, Drift Attacks, Firework Fury, and more. You can also create your own custom game modes with different power-ups, race rules, lap counts, and more.</p>
<p>You can choose from over 55 cars in the game, ranging from beach buggies to monster trucks to formula supercars. You can also collect over 45 power-ups in the game, such as Chain Lightning, Donut Tires, Boost Juice, Killer Bees, and more. Each power-up has its own unique effect and can be upgraded to make it more powerful.</p>
<p>How to download beach buggy racing 2 mod apk for free<br />
Beach buggy racing 2 mod apk unlimited money and gems<br />
Best tips and tricks for beach buggy racing 2 mod apk<br />
Beach buggy racing 2 mod apk latest version 2023<br />
Download beach buggy racing 2 hack mod apk<br />
Beach buggy racing 2 mod apk offline mode<br />
Beach buggy racing 2 mod apk all cars unlocked<br />
Beach buggy racing 2 mod apk gameplay and review<br />
Beach buggy racing 2 mod apk android and ios<br />
Download beach buggy racing 2 mod apk from APKdone[^1^]<br />
Beach buggy racing 2 mod apk features and benefits<br />
Beach buggy racing 2 mod apk vs original version<br />
Beach buggy racing 2 mod apk no root required<br />
Beach buggy racing 2 mod apk online multiplayer<br />
Download beach buggy racing 2 mod apk with obb data<br />
Beach buggy racing 2 mod apk new update and patch notes<br />
Beach buggy racing 2 mod apk cheats and codes<br />
Beach buggy racing 2 mod apk fun and addictive game<br />
Beach buggy racing 2 mod apk graphics and sound quality<br />
Download beach buggy racing 2 mod apk safely and securely<br />
Beach buggy racing 2 mod apk pros and cons<br />
Beach buggy racing 2 mod apk system requirements and compatibility<br />
Beach buggy racing 2 mod apk download link and instructions<br />
Beach buggy racing 2 mod apk ratings and feedbacks<br />
Download beach buggy racing 2 mod apk for PC and Mac<br />
Beach buggy racing 2 mod apk challenges and missions<br />
Beach buggy racing 2 mod apk customization and upgrades<br />
Beach buggy racing 2 mod apk characters and power-ups<br />
Download beach buggy racing 2 mod apk from Google Play Store or App Store<br />
Beach buggy racing 2 mod apk alternatives and similar games</p>
<p>You can also build your own team of racers in the game. You can recruit new drivers from the Beach Buggy Racing League, each with their own special ability. For example, Rez can launch beach balls that spin out other racers, Disco Jimmy can make other racers dance and stop racing, Mikka can create holograms of herself to confuse other racers, and so on.</p>
<p>You can also test your skills against other players from around the world in online competitions and tournaments. You can race against player avatars in daily races or compete in live tournaments and special events to win exclusive in-game prizes.</p>
<h3>Features of Beach Buggy Racing 2</h3>
<p>Here are some of the main features of Beach Buggy Racing 2 that make it a great kart racing game:</p>
<ul>
<li>Exciting and realistic kart racing physics</li>
<li>Stunning 3D graphics and animations</li>
<li>Over 55 cars to collect and customize</li>
<li>Over 45 power-ups to use and upgrade</li>
<li>Over 15 drivers to recruit and team up with</li>
<li>Over 40 tracks to explore and race on</li>
<li>Various game modes and challenges to enjoy</li>
<li>Online multiplayer and leaderboards to compete with others</li>
<li>Daily rewards and achievements to earn</li>
<li>Cross-platform compatibility and cloud saving</li>
</ul>
<h3>Why download Beach Buggy Racing 2 mod apk?</h3>
<p>Beach Buggy Racing 2 is a free-to-play game, but it also has some limitations and restrictions that can affect your gaming experience. For example, you need to spend real money to buy gems, which are the premium currency in the game. Gems are used to unlock new cars, drivers, power-ups, and tracks. You also need to spend tickets, which are the energy system in the game, to play certain game modes. Tickets are replenished over time or by watching ads.</p>
<p>If you don't want to spend money or wait for tickets, you can download aplikasi Beach Buggy Racing 2 mod apk, which is a modified version of the original game that gives you unlimited resources and features. With Beach Buggy Racing 2 mod apk, you can enjoy the following benefits:</p>
<ul>
<li>Unlimited money: You can have as much money as you want in the game, which you can use to buy and upgrade cars, power-ups, and other items.</li>
<li>Unlimited gems: You can have as many gems as you want in the game, which you can use to unlock new cars, drivers, power-ups, and tracks.</li>
<li>Unlimited tickets: You can have as many tickets as you want in the game, which you can use to play any game mode without waiting or watching ads.</li>
<li>All cars unlocked: You can access all the cars in the game, from beach buggies to monster trucks to formula supercars.</li>
<li>All drivers unlocked: You can access all the drivers in the game, each with their own special ability.</li>
<li>All tracks unlocked: You can access all the tracks in the game, from tropical beaches to ancient ruins to lunar landscapes.</li>
<li>All power-ups unlocked: You can access all the power-ups in the game, each with their own unique effect.</li>
<li>No ads: You can play the game without any annoying ads or pop-ups.</li>
</ul>
<h2>How to download and install Beach Buggy Racing 2 mod apk?</h2>
<p>If you are interested in downloading aplikasi Beach Buggy Racing 2 mod apk, you need to follow these simple steps:</p>
<h3>Step 1: Enable unknown sources</h3>
<p>Before you can install any mod apk file on your Android device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than Google Play. To do this, go to Settings > Security > Unknown Sources and toggle it on.</p>
<h3>Step 2: Download the mod apk file</h3>
<p>Next, you need to download the mod apk file of Beach Buggy Racing 2 from a reliable source. You can search for it online or use this link to download it directly. The file size is about 150 MB, so make sure you have enough space on your device.</p>
<h3>Step 3: Install the mod apk file</h3>
<p>Once you have downloaded the mod apk file, locate it in your file manager and tap on it. You will see a prompt asking you to install the app. Tap on Install and wait for the installation process to finish.</p>
<h3>Step 4: Launch the game and enjoy</h3>
<p>After the installation is done, you can launch the game from your app drawer or home screen. You will see that you have unlimited resources and features in the game. You can now enjoy Beach Buggy Racing 2 without any limitations or restrictions.</p>
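<p>Before installing a file obtained outside Google Play, it is worth verifying that the download was not corrupted or tampered with by comparing its checksum against one published by the download page. A minimal sketch in Python; the filename and the expected digest below are placeholders, not values from this article:</p>

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder usage: compare against the checksum the download page publishes.
# expected = "<digest from the download page>"
# if sha256_of("beach-buggy-racing-2-mod.apk") != expected:
#     raise SystemExit("checksum mismatch: do not install this file")
```

<p>If the digests differ, the file changed in transit and should not be sideloaded.</p>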
<h2>Tips and tricks for Beach Buggy Racing 2</h2>
<p>To help you play Beach Buggy Racing 2 better, here are some tips and tricks that you should know:</p>
<h3>Master the drift and powerslide</h3>
<p>One of the most important skills in Beach Buggy Racing 2 is drifting and powersliding. Drifting is when you turn your car sharply while accelerating, causing your tires to lose traction and slide sideways. Powersliding is when you tap on the brake button while drifting, causing your car to slide even more and gain more speed. Drifting and powersliding are useful for taking sharp turns, avoiding obstacles, and gaining boost. Boost is a meter that fills up as you drift and powerslide, and when it is full, you can tap on the boost button to get a burst of speed. You can also use boost to ram into other racers and knock them out of the way.</p>
<h3>Use the driver's ability at the right time</h3>
<p>As mentioned earlier, each driver in Beach Buggy Racing 2 has their own special ability that can give them an edge in the race. However, these abilities have a cooldown time, so you need to use them wisely and at the right time. For example, Rez's beach ball ability can be used to block other racers behind you, Disco Jimmy's dance ability can be used to distract other racers in front of you, Mikka's hologram ability can be used to confuse other racers around you, and so on. You can also combine your driver's ability with your power-ups to create more chaos and fun.</p>
<h3>Don't fall into the trap of other racers</h3>
<p>Beach Buggy Racing 2 is not just about racing, it is also about sabotaging and surviving. Other racers will try to use their power-ups and abilities to slow you down, damage your car, or make you lose control. You need to be alert and avoid falling into their traps. For example, watch out for oil slicks, banana peels, fireballs, rockets, mines, and other hazards on the track. You can also use your power-ups and abilities to counter their attacks or dodge them. For example, you can use the shield power-up to protect yourself from incoming projectiles, the jump power-up to leap over obstacles, the magnet power-up to attract coins and gems, and so on.</p>
<h3>Build the best deck of power-ups</h3>
<p>Before each race, you can choose up to three power-ups to bring with you. These power-ups are randomly assigned to you during the race, so you need to choose wisely and build the best deck of power-ups that suits your playstyle and strategy. For example, if you want to be more aggressive and offensive, you can choose power-ups like rockets, fireballs, chain lightning, killer bees, etc. If you want to be more defensive and supportive, you can choose power-ups like shields, repair kits, boost juice, donut tires, etc. If you want to be more balanced and versatile, you can choose power-ups like magnets, jumps, bubbles, etc.</p>
<h3>Grab those fast bubbles and shortcuts</h3>
<p>Another way to gain an advantage in Beach Buggy Racing 2 is to grab those fast bubbles and shortcuts on the track. Fast bubbles are blue spheres that give you a temporary speed boost when you drive through them. They are usually located on straight paths or ramps that can help you gain some distance or catch up with other racers. Shortcuts are hidden or alternative paths that can help you avoid obstacles or take a faster route. They are usually marked by arrows or signs that indicate where they lead. However, some shortcuts may also have risks or traps that can backfire on you if you are not careful.</p>
<h2>Review of Beach Buggy Racing 2</h2>
<p>Now that we have covered what Beach Buggy Racing 2 is, why you should download its mod apk, how to download and install it on your device, and some tips and tricks to play it better, let's take a look at the review of the game. We will discuss the pros and cons of Beach Buggy Racing 2, the user ratings and feedback of the game, and the comparison with other kart racing games.</p>
<h3>Pros and cons of Beach Buggy Racing 2</h3>
<p>Beach Buggy Racing 2 is a fun and addictive kart racing game that offers a lot of content and features for players to enjoy. However, it also has some drawbacks that may affect your gaming experience. Here are the pros and cons of Beach Buggy Racing 2:</p>
<table>
<tr>
<th>Pros</th>
<th>Cons</th>
</tr>
<tr>
<td>- Exciting and realistic kart racing physics</td>
<td>- Some tracks and power-ups can be repetitive or unfair</td>
</tr>
<tr>
<td>- Stunning 3D graphics and animations</td>
<td>- Some cars and drivers can be overpowered or underpowered</td>
</tr>
<tr>
<td>- Over 55 cars to collect and customize</td>
<td>- Some game modes and challenges can be too easy or too hard</td>
</tr>
<tr>
<td>- Over 45 power-ups to use and upgrade</td>
<td>- Some in-app purchases can be expensive or unnecessary</td>
</tr>
<tr>
<td>- Over 15 drivers to recruit and team up with</td>
<td>- Some ads can be annoying or intrusive</td>
</tr>
<tr>
<td>- Over 40 tracks to explore and race on</td>
<td>- Some bugs or glitches can occur occasionally</td>
</tr>
<tr>
<td>- Various game modes and challenges to enjoy</td>
<td></td>
</tr>
<tr>
<td>- Online multiplayer and leaderboards to compete with others</td>
<td></td>
</tr>
<tr>
<td>- Daily rewards and achievements to earn</td>
<td></td>
</tr>
<tr>
<td>- Cross-platform compatibility and cloud saving</td>
<td></td>
</tr>
</table>
<h3>User ratings and feedback of Beach Buggy Racing 2</h3>
<p>Beach Buggy Racing 2 has received mostly positive ratings and feedback from users who have played the game. On Google Play, it has a rating of 4.4 out of 5 stars, based on over 1.1 million reviews. On App Store, it has a rating of 4.7 out of 5 stars, based on over 30 thousand reviews. On Steam, it has a rating of 9 out of 10, based on over 300 reviews.</p>
<p>Most users praise the game for its fun and addictive gameplay, its variety and quality of content and features, its smooth and responsive controls, its beautiful and colorful graphics, and its online multiplayer and cross-platform support. Some users also appreciate the game for its regular updates, its fair and balanced monetization system, and its friendly and helpful customer service.</p>
<p>However, some users also criticize the game for some issues and problems that they encounter while playing the game. Some of these issues include the game being too easy or too hard, the game being too repetitive or unfair, the game having some bugs or glitches, the game having some ads or in-app purchases, and the game having some compatibility or performance issues.</p>
<h3>Comparison with other kart racing games</h3>
<p>Beach Buggy Racing 2 is not the only kart racing game available on the market. There are other similar games that you can try if you are looking for more options or alternatives. Some of these games include:</p>
<ul>
<li>Mario Kart Tour: This is a mobile version of the famous Mario Kart series by Nintendo. It features characters and tracks from the Mario universe, as well as new ones inspired by real-world locations. It has a variety of game modes, such as Grand Prix, Time Trials, Ranked Cups, and more. It also has online multiplayer and leaderboards to compete with others.</li>
<li>Crash Team Racing Nitro-Fueled: This is a remake of the classic Crash Team Racing by Activision. It features characters and tracks from the Crash Bandicoot universe, as well as new ones inspired by other games in the series. It has a variety of game modes, such as Adventure, Arcade, Time Trial, Battle, and more. It also has online multiplayer and leaderboards to compete with others.</li>
<li>Sonic & All-Stars Racing Transformed: This is a kart racing game by Sega that features characters and tracks from the Sonic the Hedgehog universe, as well as other Sega franchises. It has a unique feature that allows the karts to transform into boats or planes depending on the terrain. It has a variety of game modes, such as Career, Grand Prix, World Tour, Battle Arena, and more. It also has online multiplayer and leaderboards to compete with others.</li>
</ul>
<p>These are some of the most popular and well-known kart racing games that you can compare with Beach Buggy Racing 2. Each game has its own strengths and weaknesses, so you can choose the one that suits your preferences and expectations.</p>
<h2>Conclusion and FAQs</h2>
<p>In conclusion, Beach Buggy Racing 2 is a fun and exciting kart racing game that offers a lot of content and features for players to enjoy. It has realistic and thrilling kart racing physics, stunning and colorful graphics and animations, over 55 cars to collect and customize, over 45 power-ups to use and upgrade, over 15 drivers to recruit and team up with, over 40 tracks to explore and race on, various game modes and challenges to enjoy, online multiplayer and leaderboards to compete with others, daily rewards and achievements to earn, cross-platform compatibility and cloud saving, and more.</p>
<p>However, Beach Buggy Racing 2 also has some limitations and restrictions that can affect your gaming experience. You need to spend real money to buy gems, which are the premium currency in the game. You also need to spend tickets, which are the energy system in the game, to play certain game modes. You also need to watch ads or wait for tickets to replenish. If you don't want to deal with these issues, you can download aplikasi Beach Buggy Racing 2 mod apk, which is a modified version of the original game that gives you unlimited resources and features. With Beach Buggy Racing 2 mod apk, you can have unlimited money, gems, tickets, and power-ups. You can also unlock all the cars, drivers, tracks, and power-ups in the game. You can also play the game without any ads or interruptions.</p>
<p>To download aplikasi Beach Buggy Racing 2 mod apk, you need to enable unknown sources in your settings, download the mod apk file from a reliable source, install the mod apk file on your device, and launch the game and enjoy. We also gave you some tips and tricks to play Beach Buggy Racing 2 better, such as mastering the drift and powerslide, using the driver's ability at the right time, avoiding the trap of other racers, building the best deck of power-ups, and grabbing those fast bubbles and shortcuts. We also gave you a review of Beach Buggy Racing 2, where we discussed the pros and cons of the game, the user ratings and feedback of the game, and the comparison with other kart racing games.</p>
<p>We hope that this article has helped you learn more about Beach Buggy Racing 2 and how to download its mod apk. If you have any questions or comments about the game or the mod apk, feel free to leave them below. We will try to answer them as soon as possible. Here are some FAQs that you may find useful:</p>
<h4>Q: Is Beach Buggy Racing 2 mod apk safe to use?</h4>
<p>A: Yes, Beach Buggy Racing 2 mod apk is safe to use as long as you download it from a trusted source. However, you should always be careful when downloading any mod apk file from unknown sources. You should also scan the file with an antivirus or malware detector before installing it on your device.</p>
<h4>Q: Is Beach Buggy Racing 2 mod apk compatible with my device?</h4>
<p>A: Beach Buggy Racing 2 mod apk is compatible with most Android devices that have Android 4.4 or higher. However, some devices may have compatibility or performance issues depending on their specifications or settings. You should always check the requirements and compatibility of the mod apk file before downloading it.</p>
<h4>Q: Can I play Beach Buggy Racing 2 mod apk online with other players?</h4>
<p>A: Yes, you can play Beach Buggy Racing 2 mod apk online with other players who have the same version of the mod apk file. However, you may not be able to play online with players who have the original version of the game or a different version of the mod apk file. You should always check the version and compatibility of the mod apk file before playing online.</p>
<h4>Q: Can I update Beach Buggy Racing 2 mod apk when a new version is released?</h4>
<p>A: Yes, you can update Beach Buggy Racing 2 mod apk when a new version is released by downloading and installing the new version of the mod apk file from a reliable source. However, you may lose your progress or data if you update without backing up your files. You should always back up your files before updating any mod apk file.</p>
<h4>Q: Can I restore my progress or data if I uninstall Beach Buggy Racing 2 mod apk?</h4>
<p>A: Yes, you can restore your progress or data if you uninstall Beach Buggy Racing 2 mod apk by logging in with your Google Play account or Facebook account. However, you may lose your progress or data if you uninstall without logging in or backing up your files. You should always log in or back up your files before uninstalling any mod apk file.</p>
spaces/232labs/VToonify/vtoonify/model/stylegan/__init__.py
DELETED
File without changes
spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_clapemb_32khz.py
DELETED
@@ -1,32 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

from ._explorers import LMExplorer
from ...environment import AudioCraftEnvironment


@LMExplorer
def explorer(launcher):
    partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global'])
    launcher.slurm_(gpus=32, partition=partitions)
    launcher.bind_(solver='musicgen/musicgen_base_32khz')
    # replace this by the desired music dataset
    launcher.bind_(dset='internal/music_400k_32khz')
    launcher.bind_(conditioner='clapemb2music')

    fsdp = {'autocast': False, 'fsdp.use': True}
    cache_path = {'conditioners.description.clap.cache_path':
                  '/fsx-audio-craft-llm/jadecopet/experiments/audiocraft/caches/clap_embed_music'}
    text_wav_training_opt = {'conditioners.description.clap.text_p': 0.5}

    launcher.bind_(fsdp)

    launcher.slurm_(gpus=32).bind_(label='32gpus')
    with launcher.job_array():
        launcher()
        launcher(text_wav_training_opt)
        launcher(cache_path)
        launcher(cache_path, text_wav_training_opt)
spaces/ARTeLab/ARTeLab-SummIT/app.py
DELETED
@@ -1,157 +0,0 @@
|
|
1 |
-
import streamlit as st
|
2 |
-
import os
|
3 |
-
|
4 |
-
from transformers import AutoTokenizer
|
5 |
-
from transformers import AutoModelForSeq2SeqLM
|
6 |
-
from transformers import pipeline
|
7 |
-
from transformers import set_seed
|
8 |
-
|
9 |
-
debug = False
|
10 |
-
|
11 |
-
MODELS = [
|
12 |
-
"ARTeLab/mbart-summarization-fanpage",
|
13 |
-
"ARTeLab/mbart-summarization-ilpost",
|
14 |
-
"ARTeLab/mbart-summarization-mlsum",
|
15 |
-
"ARTeLab/it5-summarization-mlsum",
|
16 |
-
"ARTeLab/it5-summarization-ilpost",
|
17 |
-
"ARTeLab/it5-summarization-fanpage"
|
18 |
-
]
|
19 |
-
|
20 |
-
DEFAULT_TEXT: str = """(Fanpage) Dopo oltre mezzo secolo, il mistero della Natività di Caravaggio resta intatto. L'opera, intitolata la "Natività con i Santi Lorenzo e Francesco d'Assisi", fu trafugata la notte tra il 17 e il 18 ottobre 1969 dall'Oratorio di San Lorenzo a Palermo e tuttora non è stata ancora recuperata. L'olio su tela realizzato da Caravaggio, inserito dagli investigatori nella top ten mondiale delle opere d'arte trafugate e mai più ritrovate, ha un valore di mercato che si aggirerebbe oggi intorno ai 20 milioni di dollari secondo l'FBI. La sua storia è avvolta nel mistero e dopo cinquantuno anni ancora non è stata risolta, dopo il furto della mafia nel 1969 e forse ormai distrutta. L'unica certezza è che nemmeno questo Natale potremo ammirare l'opera raffigurante la nascita di Cristo del grande genio pittorico italiano. E forse, secondo i più pessimisti, non ci riusciremo mai più. Nella notte tra il 17 e il 18 ottobre, nel cuore di Palermo, i boss di Cosa Nostra si intrufolarono nell'Oratorio di San Lorenzo e arrotolarono la "Natività con i Santi Lorenzo e Francesco d'Assisi" di Caravaggio in malo modo, facendo sgranare la tela. Una delle più alte testimonianza dell'arte di ogni tempo fu distrutta così. Ma come facciamo a sapere oggi che la tela è andata distrutta? Fu il pentito Francesco Marino Mannoia, durante il processo Andreotti nel 1996 a raccontare del presunto disastro di un gioiello arrotolato in fretta e portato via in segno di sfregio. Ma questa versione stride con quella di un altro pentito che ricorda il quadro affisso ai summit di mafia, come un trofeo, mentre sui giornali si sussurrava di losche ma non provate trattative da 60 miliardi di vecchie lire fra mediatori e trafficanti. 
Nel 2017, il mafioso Gaetano Grado asserisce che la tela sarebbe stata nascosta, ma all'estero: nel 1970 il boss Badalamenti l'avrebbe trasferita in Svizzera in cambio di una notevole somma di franchi ad un antiquario svizzero, giunto a Palermo per definire l'affare. Grado riferisce anche che Badalamenti gli avrebbe detto che il quadro era stato scomposto per essere venduto sul mercato clandestino."""


class TextSummarizer:
    def __init__(self):
        self.tokenizer = None
        self.model = None
        self.generator = None
        self.model_loaded = None
        set_seed(42)

    def load(self, model_name):
        os.environ["TOKENIZERS_PARALLELISM"] = "false"
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
        self.generator = pipeline(
            "text2text-generation", model=self.model, tokenizer=self.tokenizer
        )
        self.model_loaded = model_name

    def summarize(self, model_name, input_text, generate_kwargs) -> str:
        if not self.generator or self.model_loaded != model_name:
            with st.spinner("Downloading/loading the selected model, please wait..."):
                self.load(model_name)
        return self.generator(
            input_text, return_tensors=False, return_text=True, **generate_kwargs
        )[0].get("generated_text")


@st.cache(allow_output_mutation=True)
def instantiate_generator():
    summarizer = TextSummarizer()
    return summarizer


def main():
    st.set_page_config(  # Alternate names: setup_page, page, layout
        page_title="ARTeLab SummIT",
        layout="wide",  # Can be "centered" or "wide". In the future also "dashboard", etc.
        initial_sidebar_state="expanded",  # Can be "auto", "expanded", "collapsed"
        page_icon="📰",  # String, anything supported by st.image, or None.
    )

    with open("style.css") as f:
        st.markdown(f"<style>{f.read()}</style>", unsafe_allow_html=True)

    generator = instantiate_generator()

    st.markdown(
        """
        <style>
        [data-testid="stSidebar"][aria-expanded="true"] > div:first-child {
            width: 500px;
        }
        [data-testid="stSidebar"][aria-expanded="false"] > div:first-child {
            width: 500px;
            margin-left: -500px;
        }
        </style>
        """,
        unsafe_allow_html=True,
    )
    st.sidebar.markdown("""# ARTeLab SummIT""")
    st.sidebar.image("fl.png", width=220)
    st.sidebar.markdown(
        """
        * Create summaries of Italian news articles.
        * Copy-paste any Italian news text and press the "Generate summary" button.
        """
    )
    st.sidebar.title("Parameters:")

    MODEL = st.sidebar.selectbox("Choose model", index=1, options=MODELS)

    min_length = st.sidebar.number_input(
        "Min length", min_value=10, max_value=150, value=40
    )
    max_length = st.sidebar.number_input(
        "Max length", min_value=20, max_value=250, value=142
    )
    no_repeat_ngram_size = st.sidebar.number_input(
        "No repeat NGram size", min_value=1, max_value=5, value=3
    )

    if sampling_mode := st.sidebar.selectbox(
        "Select a mode", index=0, options=["Beam Search", "Top-k Sampling"]
    ):
        if sampling_mode == "Beam Search":
            num_beams = st.sidebar.number_input(
                "Num beams", min_value=1, max_value=10, value=4
            )
            length_penalty = st.sidebar.number_input(
                "Length penalty", min_value=0.0, max_value=5.0, value=1.5, step=0.1
            )
            params = {
                "min_length": min_length,
                "max_length": max_length,
                "no_repeat_ngram_size": no_repeat_ngram_size,
                "num_beams": num_beams,
                "early_stopping": True,
                "length_penalty": length_penalty,
                "num_return_sequences": 1,
            }
        else:
            top_k = st.sidebar.number_input(
                "Top K", min_value=0, max_value=100, value=50
            )
            top_p = st.sidebar.number_input(
                "Top P", min_value=0.0, max_value=1.0, value=0.9, step=0.05
            )
            temperature = st.sidebar.number_input(
                "Temperature", min_value=0.0, max_value=1.0, value=0.3, step=0.05
            )
            params = {
                "min_length": min_length,
                "max_length": max_length,
                "no_repeat_ngram_size": no_repeat_ngram_size,
                "do_sample": True,
                "top_k": top_k,
                "top_p": top_p,
                "temperature": temperature,
                "num_return_sequences": 1,
            }

    input_text = st.text_area("Enter an Italian news text", DEFAULT_TEXT, height=450)

    if st.button("Generate summary"):
        with st.spinner("Generating summary ..."):
            response = generator.summarize(MODEL, input_text, params)

        st.header("Summary:")
        st.markdown(response)


if __name__ == "__main__":
    main()
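The `summarize` method above reloads the Hugging Face pipeline only when the selected model name changes, so repeated requests against the same model skip the expensive download/load step. A minimal, self-contained sketch of that caching pattern (the `CachedGenerator` class and its fake loading logic are hypothetical stand-ins, not part of the app):

```python
class CachedGenerator:
    """Reload an expensive resource only when its name changes."""

    def __init__(self):
        self.model_loaded = None
        self.load_count = 0  # track how often we pay the loading cost

    def load(self, model_name):
        # Stand-in for tokenizer/model loading in TextSummarizer.load.
        self.load_count += 1
        self.model_loaded = model_name

    def generate(self, model_name, text):
        # Same guard as TextSummarizer.summarize: only load on a cache miss.
        if self.model_loaded != model_name:
            self.load(model_name)
        return f"[{model_name}] summary of {len(text)} chars"


gen = CachedGenerator()
gen.generate("it5-small", "some italian text")
gen.generate("it5-small", "more text")    # same model: cached, no reload
gen.generate("mbart-summ", "other text")  # model changed: reloads
print(gen.load_count)  # → 2
```

Combined with `@st.cache(allow_output_mutation=True)` on `instantiate_generator`, this keeps a single warm model instance across Streamlit reruns.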
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1d50.py
DELETED
@@ -1,17 +0,0 @@
# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='ResNetV1d',
        depth=50,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=2048,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        topk=(1, 5),
    ))
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32_in1k.py
DELETED
@@ -1,4 +0,0 @@
_base_ = [
    '../_base_/models/resnet50.py', '../_base_/datasets/imagenet_bs32.py',
    '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
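Configs like the one above are assembled by recursively merging the `_base_` files, with keys defined in the child config overriding the inherited values. A rough sketch of that merge rule (an assumption about the general behavior, using plain nested dicts and ignoring special keys such as `_delete_`):

```python
def merge_config(base, override):
    """Recursively merge `override` into `base`; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged


# Hypothetical base model config and a child that only changes the depth.
base = {"model": {"backbone": {"type": "ResNet", "depth": 50},
                  "head": {"num_classes": 1000}}}
child = {"model": {"backbone": {"depth": 101}}}
cfg = merge_config(base, child)
print(cfg["model"]["backbone"])  # → {'type': 'ResNet', 'depth': 101}
print(cfg["model"]["head"])      # → {'num_classes': 1000}
```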
spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/data/audio_dataset.py
DELETED
@@ -1,525 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import argparse
import copy
from concurrent.futures import ThreadPoolExecutor, Future
from dataclasses import dataclass, fields
from contextlib import ExitStack
import gzip
import json
import logging
import os
from pathlib import Path
import random
import sys
import typing as tp

import torch
import torch.nn.functional as F

from .audio import audio_read, audio_info
from .audio_utils import convert_audio
from .zip import PathInZip

try:
    import dora
except ImportError:
    dora = None  # type: ignore


@dataclass(order=True)
class BaseInfo:

    @classmethod
    def _dict2fields(cls, dictionary: dict):
        return {
            field.name: dictionary[field.name]
            for field in fields(cls) if field.name in dictionary
        }

    @classmethod
    def from_dict(cls, dictionary: dict):
        _dictionary = cls._dict2fields(dictionary)
        return cls(**_dictionary)

    def to_dict(self):
        return {
            field.name: self.__getattribute__(field.name)
            for field in fields(self)
        }


@dataclass(order=True)
class AudioMeta(BaseInfo):
    path: str
    duration: float
    sample_rate: int
    amplitude: tp.Optional[float] = None
    weight: tp.Optional[float] = None
    # info_path is used to load additional information about the audio file that is stored in zip files.
    info_path: tp.Optional[PathInZip] = None

    @classmethod
    def from_dict(cls, dictionary: dict):
        base = cls._dict2fields(dictionary)
        if 'info_path' in base and base['info_path'] is not None:
            base['info_path'] = PathInZip(base['info_path'])
        return cls(**base)

    def to_dict(self):
        d = super().to_dict()
        if d['info_path'] is not None:
            d['info_path'] = str(d['info_path'])
        return d


@dataclass(order=True)
class SegmentInfo(BaseInfo):
    meta: AudioMeta
    seek_time: float
    n_frames: int      # actual number of frames without padding
    total_frames: int  # total number of frames, padding included
    sample_rate: int   # actual sample rate


DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a']

logger = logging.getLogger(__name__)


def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta:
    """AudioMeta from a path to an audio file.

    Args:
        file_path (str): Resolved path of valid audio file.
        minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
    Returns:
        AudioMeta: Audio file path and its metadata.
    """
    info = audio_info(file_path)
    amplitude: tp.Optional[float] = None
    if not minimal:
        wav, sr = audio_read(file_path)
        amplitude = wav.abs().max().item()
    return AudioMeta(file_path, info.duration, info.sample_rate, amplitude)


def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta:
    """If Dora is available as a dependency, try to resolve potential relative paths
    in a list of AudioMeta. This method is expected to be used when loading meta from file.

    Args:
        m (AudioMeta): Audio meta to resolve.
        fast (bool): If True, uses a really fast check for determining if a file is already absolute or not.
            Only valid on Linux/Mac.
    Returns:
        AudioMeta: Audio meta with resolved path.
    """
    def is_abs(m):
        if fast:
            return str(m)[0] == '/'
        else:
            return os.path.isabs(str(m))

    if not dora:
        return m

    if not is_abs(m.path):
        m.path = dora.git_save.to_absolute_path(m.path)
    if m.info_path is not None and not is_abs(m.info_path.zip_path):
        m.info_path.zip_path = dora.git_save.to_absolute_path(m.path)
    return m


def find_audio_files(path: tp.Union[Path, str],
                     exts: tp.List[str] = DEFAULT_EXTS,
                     resolve: bool = True,
                     minimal: bool = True,
                     progress: bool = False,
                     workers: int = 0) -> tp.List[AudioMeta]:
    """Build a list of AudioMeta from a given path,
    collecting relevant audio files and fetching meta info.

    Args:
        path (str or Path): Path to folder containing audio files.
        exts (list of str): List of file extensions to consider for audio files.
        resolve (bool): Whether to resolve the paths to be absolute.
        minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
        progress (bool): Whether to log progress on audio files collection.
        workers (int): Number of parallel workers; if 0, use only the current thread.
    Returns:
        List[AudioMeta]: List of audio file path and its metadata.
    """
    audio_files = []
    futures: tp.List[Future] = []
    pool: tp.Optional[ThreadPoolExecutor] = None
    with ExitStack() as stack:
        if workers > 0:
            pool = ThreadPoolExecutor(workers)
            stack.enter_context(pool)

        if progress:
            print("Finding audio files...")
        for root, folders, files in os.walk(path, followlinks=True):
            for file in files:
                full_path = Path(root) / file
                if full_path.suffix.lower() in exts:
                    audio_files.append(full_path)
                    if pool is not None:
                        futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal))
                    if progress:
                        print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr)

        if progress:
            print("Getting audio metadata...")
        meta: tp.List[AudioMeta] = []
        for idx, file_path in enumerate(audio_files):
            try:
                if pool is None:
                    m = _get_audio_meta(str(file_path), minimal)
                else:
                    m = futures[idx].result()
                if resolve:
                    m = _resolve_audio_meta(m)
            except Exception as err:
                print("Error with", str(file_path), err, file=sys.stderr)
                continue
            meta.append(m)
            if progress:
                print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr)
    meta.sort()
    return meta


def load_audio_meta(path: tp.Union[str, Path],
                    resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]:
    """Load list of AudioMeta from an optionally compressed json file.

    Args:
        path (str or Path): Path to JSON file.
        resolve (bool): Whether to resolve the path from AudioMeta (default=True).
        fast (bool): Activates some tricks to make things faster.
    Returns:
        List[AudioMeta]: List of audio file paths and their metadata.
    """
    open_fn = gzip.open if str(path).lower().endswith('.gz') else open
    with open_fn(path, 'rb') as fp:  # type: ignore
        lines = fp.readlines()
    meta = []
    for line in lines:
        d = json.loads(line)
        m = AudioMeta.from_dict(d)
        if resolve:
            m = _resolve_audio_meta(m, fast=fast)
        meta.append(m)
    return meta


def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]):
    """Save the audio metadata to the file pointer as json.

    Args:
        path (str or Path): Path to JSON file.
        meta (list of AudioMeta): List of audio meta to save.
    """
    Path(path).parent.mkdir(exist_ok=True, parents=True)
    open_fn = gzip.open if str(path).lower().endswith('.gz') else open
    with open_fn(path, 'wb') as fp:  # type: ignore
        for m in meta:
            json_str = json.dumps(m.to_dict()) + '\n'
            json_bytes = json_str.encode('utf-8')
            fp.write(json_bytes)


class AudioDataset:
    """Base audio dataset.

    The dataset takes a list of AudioMeta and creates a dataset composed of segments of audio
    and potentially additional information, by creating random segments from the list of audio
    files referenced in the metadata and applying minimal data pre-processing such as resampling,
    mixing of channels, padding, etc.

    If no segment_duration value is provided, the AudioDataset will return the full wav for each
    audio file. Otherwise, it will randomly sample audio files and create a segment of the specified
    duration, applying padding if required.

    By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True
    allows to return a tuple containing the torch Tensor and additional metadata on the segment and the
    original audio meta.

    Args:
        meta (tp.List[AudioMeta]): List of audio files metadata.
        segment_duration (float): Optional segment duration of audio to load.
            If not specified, the dataset will load the full audio segment from the file.
        shuffle (bool): Set to `True` to have the data reshuffled at every epoch.
        sample_rate (int): Target sample rate of the loaded audio samples.
        channels (int): Target number of channels of the loaded audio samples.
        sample_on_duration (bool): Set to `True` to sample segments with probability
            dependent on audio file duration. This is only used if `segment_duration` is provided.
        sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of
            `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product
            of the file duration and file weight. This is only used if `segment_duration` is provided.
        min_segment_ratio (float): Minimum segment ratio to use when the audio file
            is shorter than the desired segment.
        max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset.
        return_info (bool): Whether to return the wav only or return wav along with segment info and metadata.
        min_audio_duration (tp.Optional[float], optional): Minimum audio file duration, in seconds; if provided,
            audio shorter than this will be filtered out.
        max_audio_duration (tp.Optional[float], optional): Maximum audio file duration, in seconds; if provided,
            audio longer than this will be filtered out.
    """
    def __init__(self,
                 meta: tp.List[AudioMeta],
                 segment_duration: tp.Optional[float] = None,
                 shuffle: bool = True,
                 num_samples: int = 10_000,
                 sample_rate: int = 48_000,
                 channels: int = 2,
                 pad: bool = True,
                 sample_on_duration: bool = True,
                 sample_on_weight: bool = True,
                 min_segment_ratio: float = 0.5,
                 max_read_retry: int = 10,
                 return_info: bool = False,
                 min_audio_duration: tp.Optional[float] = None,
                 max_audio_duration: tp.Optional[float] = None
                 ):
        assert len(meta) > 0, 'No audio meta provided to AudioDataset. Please check loading of audio meta.'
        assert segment_duration is None or segment_duration > 0
        assert segment_duration is None or min_segment_ratio >= 0
        logging.debug(f'sample_on_duration: {sample_on_duration}')
        logging.debug(f'sample_on_weight: {sample_on_weight}')
        logging.debug(f'pad: {pad}')
        logging.debug(f'min_segment_ratio: {min_segment_ratio}')

        self.segment_duration = segment_duration
        self.min_segment_ratio = min_segment_ratio
        self.max_audio_duration = max_audio_duration
        self.min_audio_duration = min_audio_duration
        if self.min_audio_duration is not None and self.max_audio_duration is not None:
            assert self.min_audio_duration <= self.max_audio_duration
        self.meta: tp.List[AudioMeta] = self._filter_duration(meta)
        assert len(self.meta)  # Fail fast if all data has been filtered.
        self.total_duration = sum(d.duration for d in self.meta)

        if segment_duration is None:
            num_samples = len(self.meta)
        self.num_samples = num_samples
        self.shuffle = shuffle
        self.sample_rate = sample_rate
        self.channels = channels
        self.pad = pad
        self.sample_on_weight = sample_on_weight
        self.sample_on_duration = sample_on_duration
        self.sampling_probabilities = self._get_sampling_probabilities()
        self.max_read_retry = max_read_retry
        self.return_info = return_info

    def __len__(self):
        return self.num_samples

    def _get_sampling_probabilities(self, normalized: bool = True):
        """Return the sampling probabilities for each file inside `self.meta`."""
        scores: tp.List[float] = []
        for file_meta in self.meta:
            score = 1.
            if self.sample_on_weight and file_meta.weight is not None:
                score *= file_meta.weight
            if self.sample_on_duration:
                score *= file_meta.duration
            scores.append(score)
        probabilities = torch.tensor(scores)
        if normalized:
            probabilities /= probabilities.sum()
        return probabilities

    def sample_file(self, rng: torch.Generator) -> AudioMeta:
        """Sample a given file from `self.meta`. Can be overridden in subclasses.
        This is only called if `segment_duration` is not None.

        You must use the provided random number generator `rng` for reproducibility.
        """
        if not self.sample_on_weight and not self.sample_on_duration:
            file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item())
        else:
            file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item())

        return self.meta[file_index]

    def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]:
        if self.segment_duration is None:
            file_meta = self.meta[index]
            out, sr = audio_read(file_meta.path)
            out = convert_audio(out, sr, self.sample_rate, self.channels)
            n_frames = out.shape[-1]
            segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames,
                                       sample_rate=self.sample_rate)
        else:
            rng = torch.Generator()
            if self.shuffle:
                # We use index, plus extra randomness
                rng.manual_seed(index + self.num_samples * random.randint(0, 2**24))
            else:
                # We only use index
                rng.manual_seed(index)

            for retry in range(self.max_read_retry):
                file_meta = self.sample_file(rng)
                # We add some variance in the file position even if audio file is smaller than segment
                # without ending up with empty segments
                max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio)
                seek_time = torch.rand(1, generator=rng).item() * max_seek
                try:
                    out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False)
                    out = convert_audio(out, sr, self.sample_rate, self.channels)
                    n_frames = out.shape[-1]
                    target_frames = int(self.segment_duration * self.sample_rate)
                    if self.pad:
                        out = F.pad(out, (0, target_frames - n_frames))
                    segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames,
                                               sample_rate=self.sample_rate)
                except Exception as exc:
                    logger.warning("Error opening file %s: %r", file_meta.path, exc)
                    if retry == self.max_read_retry - 1:
                        raise
                else:
                    break

        if self.return_info:
            # Returns the wav and additional information on the wave segment
            return out, segment_info
        else:
            return out

    def collater(self, samples):
        """The collater function has to be provided to the dataloader
        if AudioDataset has return_info=True in order to properly collate
        the samples of a batch.
        """
        if self.segment_duration is None and len(samples) > 1:
            assert self.pad, "Must allow padding when batching examples of different durations."

        # In this case the audio reaching the collater is of variable length as segment_duration=None.
        to_pad = self.segment_duration is None and self.pad
        if to_pad:
            max_len = max([wav.shape[-1] for wav, _ in samples])

            def _pad_wav(wav):
                return F.pad(wav, (0, max_len - wav.shape[-1]))

        if self.return_info:
            if len(samples) > 0:
                assert len(samples[0]) == 2
                assert isinstance(samples[0][0], torch.Tensor)
                assert isinstance(samples[0][1], SegmentInfo)

            wavs = [wav for wav, _ in samples]
            segment_infos = [copy.deepcopy(info) for _, info in samples]

            if to_pad:
                # Each wav could be of a different duration as they are not segmented.
                for i in range(len(samples)):
                    # Determines the total length of the signal with padding, so we update here as we pad.
                    segment_infos[i].total_frames = max_len
                    wavs[i] = _pad_wav(wavs[i])

            wav = torch.stack(wavs)
            return wav, segment_infos
        else:
            assert isinstance(samples[0], torch.Tensor)
            if to_pad:
                samples = [_pad_wav(s) for s in samples]
            return torch.stack(samples)

    def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]:
        """Filters out audio files with durations outside the allowed range.
        Removes from meta the files whose durations would not allow sampling examples from them.
        """
        orig_len = len(meta)

        # Filter data that is too short.
        if self.min_audio_duration is not None:
            meta = [m for m in meta if m.duration >= self.min_audio_duration]

        # Filter data that is too long.
        if self.max_audio_duration is not None:
            meta = [m for m in meta if m.duration <= self.max_audio_duration]

        filtered_len = len(meta)
        removed_percentage = 100 * (1 - float(filtered_len) / orig_len)
        msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage
        if removed_percentage < 10:
            logging.debug(msg)
        else:
            logging.warning(msg)
        return meta

    @classmethod
    def from_meta(cls, root: tp.Union[str, Path], **kwargs):
        """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file.

        Args:
            root (str or Path): Path to root folder containing audio files.
            kwargs: Additional keyword arguments for the AudioDataset.
        """
        root = Path(root)
        if root.is_dir():
            if (root / 'data.jsonl').exists():
                root = root / 'data.jsonl'
            elif (root / 'data.jsonl.gz').exists():
                root = root / 'data.jsonl.gz'
            else:
                raise ValueError("Don't know where to read metadata from in the dir. "
                                 "Expecting either a data.jsonl or data.jsonl.gz file but none found.")
        meta = load_audio_meta(root)
        return cls(meta, **kwargs)

    @classmethod
    def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True,
                  exts: tp.List[str] = DEFAULT_EXTS, **kwargs):
        """Instantiate AudioDataset from a path containing (possibly nested) audio files.

        Args:
            root (str or Path): Path to root folder containing audio files.
            minimal_meta (bool): Whether to only load minimal metadata or not.
            exts (list of str): Extensions for audio files.
            kwargs: Additional keyword arguments for the AudioDataset.
        """
        root = Path(root)
        if root.is_file():
            meta = load_audio_meta(root, resolve=True)
        else:
            meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True)
        return cls(meta, **kwargs)


def main():
    logging.basicConfig(stream=sys.stderr, level=logging.INFO)
    parser = argparse.ArgumentParser(
        prog='audio_dataset',
        description='Generate .jsonl files by scanning a folder.')
    parser.add_argument('root', help='Root folder with all the audio files')
    parser.add_argument('output_meta_file',
                        help='Output file to store the metadata.')
    parser.add_argument('--complete',
                        action='store_false', dest='minimal', default=True,
                        help='Retrieve all metadata, even the ones that are expensive '
                             'to compute (e.g. normalization).')
    parser.add_argument('--resolve',
                        action='store_true', default=False,
                        help='Resolve the paths to be absolute and with no symlinks.')
    parser.add_argument('--workers',
                        default=10, type=int,
                        help='Number of workers.')
    args = parser.parse_args()
    meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True,
                            resolve=args.resolve, minimal=args.minimal, workers=args.workers)
    save_audio_meta(args.output_meta_file, meta)


if __name__ == '__main__':
    main()
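The `_get_sampling_probabilities` method weights each file by the product of its `weight` and `duration` (when the respective flags are set) and then normalizes. A torch-free sketch of the same computation, using plain tuples in place of `AudioMeta`:

```python
def sampling_probabilities(files, sample_on_weight=True, sample_on_duration=True):
    """files: list of (duration_seconds, weight_or_None) tuples.
    Mirrors AudioDataset._get_sampling_probabilities without torch."""
    scores = []
    for duration, weight in files:
        score = 1.0
        if sample_on_weight and weight is not None:
            score *= weight
        if sample_on_duration:
            score *= duration
        scores.append(score)
    total = sum(scores)
    return [s / total for s in scores]


# Three files: 10s and 30s with no weight, 60s with weight 0.5.
probs = sampling_probabilities([(10.0, None), (30.0, None), (60.0, 0.5)])
# scores are 10, 30, 30 → normalized to 1/7, 3/7, 3/7
print([round(p, 3) for p in probs])  # → [0.143, 0.429, 0.429]
```

Longer files are therefore sampled proportionally more often, which keeps each second of audio roughly equally likely to appear in a segment.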
spaces/AchyuthGamer/OpenGPT-Chat-UI/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: OpenGPT Chat
emoji: 🚀
colorFrom: gray
colorTo: pink
sdk: docker
pinned: true
app_port: 3000
license: creativeml-openrail-m
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/methods/SVPaletteCanvas.js
DELETED
@@ -1,98 +0,0 @@
-import Canvas from '../../../canvas/Canvas.js';
-import { DrawSVPalette } from '../../../../../plugins/utils/canvas/DrawHSVPalette.js';
-
-const Color = Phaser.Display.Color;
-const Percent = Phaser.Math.Percent;
-const ColorToRGBA = Phaser.Display.Color.ColorToRGBA;
-const HSVToRGB = Phaser.Display.Color.HSVToRGB;
-
-class SVPaletteCanvas extends Canvas {
-    constructor(scene, x, y, width, height, hue) {
-        if (x === undefined) { x = 0; }
-        if (y === undefined) { y = 0; }
-        if (width === undefined) { width = 2; }
-        if (height === undefined) { height = 2; }
-
-        super(scene, x, y, width, height);
-        this.type = 'rexColorPicker.SVPaletteCanvas';
-
-        if (hue === undefined) {
-            hue = 1;
-        }
-
-        this.colorObject = new Color();
-
-        this.setHue(hue);
-        this.setSize(width, height);
-    }
-
-    get color() {
-        return this.colorObject.color;
-    }
-
-    get hue() {
-        return this._hue;
-    }
-
-    set hue(hue) {
-        if (this._hue === hue) {
-            return;
-        }
-        this._hue = hue;
-        this.colorObject.h = hue;
-        this.dirty = true;
-    }
-
-    setHue(hue) {
-        this.hue = hue;
-        return this;
-    }
-
-    updateTexture() {
-        DrawSVPalette(this.canvas, this.context, this.hue);
-        super.updateTexture();
-        return this;
-    }
-
-    getColor(localX, localY) {
-        if (localX === undefined) {
-            return this.colorObject.color;
-        }
-
-        var s = Percent(localX, 0, this.width);
-        var v = 1 - Percent(localY, 0, this.height);
-        this.colorObject.setFromRGB(HSVToRGB(this.hue, s, v));
-        return this.colorObject.color;
-    }
-
-    setColor(color) {
-        if (this.color === color) {
-            return this;
-        }
-
-        this.colorObject.setFromRGB(ColorToRGBA(color));
-        this.setHue(this.colorObject.h);
-        return this;
-    }
-
-    colorToLocalPosition(color, out) {
-        if (out === undefined) {
-            out = {};
-        } else if (out === true) {
-            if (LocalXY === undefined) {
-                LocalXY = {};
-            }
-            out = LocalXY;
-        }
-
-        this.colorObject.setFromRGB(ColorToRGBA(color));
-        out.x = this.width * this.colorObject.s;
-        out.y = this.height * (1 - this.colorObject.v);
-
-        return out;
-    }
-}
-
-var LocalXY = undefined;
-
-export default SVPaletteCanvas;
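The coordinate mapping in `getColor` and `colorToLocalPosition` above is plain linear math: saturation grows left to right, value grows bottom to top. A minimal Python sketch of the same round trip (function names are hypothetical, not part of the plugin):

```python
def color_to_local(s, v, width, height):
    # saturation (0..1) maps to x; value (0..1) maps to y, inverted
    # so that v = 1 sits at the top of the palette
    return width * s, height * (1 - v)

def local_to_color(x, y, width, height):
    # inverse mapping: fraction along each axis, y flipped back
    return x / width, 1 - y / height

x, y = color_to_local(0.25, 0.75, 200, 100)   # -> (50.0, 25.0)
s, v = local_to_color(x, y, 200, 100)         # -> (0.25, 0.75)
```

The round trip is exact because both directions are linear, which is why the plugin can reuse one cached `LocalXY` object without drift.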
spaces/AlStable/Duke/app.py
DELETED
@@ -1,137 +0,0 @@
-from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-
-model_id = 'DucHaiten/DucHaitenDreamWorld'
-prefix = ''
-
-scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
-    model_id,
-    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
-    scheduler=scheduler)
-
-pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(
-    model_id,
-    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
-    scheduler=scheduler)
-
-if torch.cuda.is_available():
-    pipe = pipe.to("cuda")
-    pipe_i2i = pipe_i2i.to("cuda")
-
-def error_str(error, title="Error"):
-    return f"""#### {title}
-    {error}""" if error else ""
-
-def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False):
-    generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None
-    prompt = f"{prefix} {prompt}" if auto_prefix else prompt
-
-    try:
-        if img is not None:
-            return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
-        else:
-            return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None
-    except Exception as e:
-        return None, error_str(e)
-
-def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator):
-    result = pipe(
-        prompt,
-        negative_prompt=neg_prompt,
-        num_inference_steps=int(steps),
-        guidance_scale=guidance,
-        width=width,
-        height=height,
-        generator=generator)
-
-    return result.images[0]
-
-def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
-    ratio = min(height / img.height, width / img.width)
-    img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
-    result = pipe_i2i(
-        prompt,
-        negative_prompt=neg_prompt,
-        init_image=img,
-        num_inference_steps=int(steps),
-        strength=strength,
-        guidance_scale=guidance,
-        width=width,
-        height=height,
-        generator=generator)
-
-    return result.images[0]
-
-css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
-"""
-with gr.Blocks(css=css) as demo:
-    gr.HTML(
-        f"""
-        <div class="main-div">
-            <div>
-                <h1>Duchaiten dreamworld</h1>
-            </div>
-            <p>
-                Demo for <a href="https://huggingface.co/DucHaiten/DucHaitenDreamWorld">Duchaitendreamworld</a> Stable Diffusion model.<br>
-                {"Add the following tokens to your prompts for the model to work properly: <b>prefix</b>" if prefix else ""}
-            </p>
-            Running on {"<b>GPU 🔥</b>" if torch.cuda.is_available() else f"<b>CPU 🥶</b>. For faster inference it is recommended to <b>upgrade to GPU in <a href='https://huggingface.co/spaces/AlStable/Duke/settings'>Settings</a></b>"} after duplicating the space<br><br>
-            <a style="display:inline-block" href="https://huggingface.co/spaces/AlStable/Duke?duplicate=true"><img src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>
-        </div>
-        """
-    )
-    with gr.Row():
-        with gr.Column(scale=55):
-            with gr.Group():
-                with gr.Row():
-                    prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2, placeholder=f"{prefix} [your prompt]").style(container=False)
-                    generate = gr.Button(value="Generate").style(rounded=(False, True, True, False))
-
-                image_out = gr.Image(height=512)
-            error_output = gr.Markdown()
-
-        with gr.Column(scale=45):
-            with gr.Tab("Options"):
-                with gr.Group():
-                    neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image")
-                    auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix)
-
-                    with gr.Row():
-                        guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15)
-                        steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1)
-
-                    with gr.Row():
-                        width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8)
-                        height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8)
-
-                    seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)
-
-            with gr.Tab("Image to image"):
-                with gr.Group():
-                    image = gr.Image(label="Image", height=256, tool="editor", type="pil")
-                    strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5)
-
-    auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False)
-
-    inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix]
-    outputs = [image_out, error_output]
-    prompt.submit(inference, inputs=inputs, outputs=outputs)
-    generate.click(inference, inputs=inputs, outputs=outputs)
-
-    gr.HTML("""
-    <div style="border-top: 1px solid #303030;">
-        <br>
-        <p>This space was created using <a href="https://huggingface.co/spaces/anzorq/sd-space-creator">SD Space Creator</a>.</p>
-    </div>
-    """)
-
-demo.queue(concurrency_count=1)
-demo.launch()
spaces/Amrrs/portfolio-github/style.css
DELETED
@@ -1,190 +0,0 @@
-html {
-    margin: 0;
-    padding: 0;
-}
-
-body {
-    font-family: 'Bellota', cursive;
-    font-size: 26pt;
-    background-color: #f2f2f2;
-    padding: 20px;
-    margin: 0;
-}
-
-h1 {
-    font-size: 15pt;
-    color: #ffffff;
-    text-align: center;
-    padding: 18px 0 18px 0;
-    margin: 0 0 10px 0;
-}
-
-h1 span {
-    border: 8px solid #666666;
-    border-radius: 8px;
-    background-image: url("https://media.giphy.com/media/KVZWZQoS0yqfIiTAKq/giphy.gif");
-    padding: 12px;
-}
-
-p {
-    padding: 0;
-    margin: 0;
-    color: #000000;
-}
-
-.img-circle {
-    border: 8px solid white;
-    border-radius: 50%;
-}
-
-.section {
-    background-color: #fff;
-    padding: 20px;
-    margin-bottom: 10px;
-    border-radius: 30px;
-}
-
-#header {
-    background-image: url("https://media.giphy.com/media/KVZWZQoS0yqfIiTAKq/giphy.gif");
-    background-size: cover;
-}
-
-#header img {
-    display: block;
-    width: 500px;
-    height: 500px;
-    margin: auto;
-}
-
-#header p {
-    font-size: 60pt;
-    color: #ffffff;
-    padding-top: 8px;
-    margin: 0;
-    font-weight: bold;
-    text-align: center;
-}
-
-.quote {
-    font-size: 12pt;
-    text-align: right;
-    margin-top: 10px;
-    color: grey;
-}
-
-#res {
-    text-align: center;
-    margin: 50px auto;
-}
-
-#res a {
-    margin: 20px 20px;
-    display: inline-block;
-    text-decoration: none;
-    color: black;
-}
-
-.selected {
-    background-color: #f36f48;
-    font-weight: bold;
-    color: white;
-}
-
-li {
-    margin-bottom: 15px;
-    font-weight: bold;
-}
-
-progress {
-    width: 70%;
-    height: 20px;
-    color: #3fb6b2;
-    background: #efefef;
-}
-
-progress::-webkit-progress-bar {
-    background: #efefef;
-}
-
-progress::-webkit-progress-value {
-    background: #3fb6b2;
-}
-
-progress::-moz-progress-bar {
-    color: #3fb6b2;
-    background: #efefef;
-}
-
-iframe,
-audio {
-    display: block;
-    margin: 0 auto;
-    border: 3px solid #3fb6b2;
-    border-radius: 10px;
-}
-
-hr {
-    border: 0;
-    height: 1px;
-    background: #f36f48;
-}
-
-input {
-    text-align: center;
-    font-size: 25pt;
-    border: none;
-    border-radius: 12px;
-    padding: 30px 8%;
-    margin: 20px 5px 10px 5px;
-    background-color: #d7d7d7;
-}
-
-input:focus {
-    background-color: #2f2f2f;
-    color: white;
-}
-
-form {
-    text-align: center;
-    font-size: 30pt;
-    font-family: Helvetica;
-    font-weight: 500;
-    margin: 10% 15% 8% 15%;
-    border-radius: 12px;
-}
-
-#insta-image {
-    display: block;
-    width: 100px;
-    height: 100px;
-    border: 5px solid #d7d7d7;
-    border-radius: 50%;
-    margin: auto;
-    margin-top: -75px;
-}
-
-#contacts img {
-    height: 150px;
-    width: 150px;
-    margin-left: 7px;
-    margin-right: 7px;
-}
-
-#contacts a {
-    text-decoration: none;
-}
-
-#contacts img:hover {
-    opacity: 0.8;
-}
-
-#contacts {
-    text-align: center;
-}
-
-.copyright {
-    font-size: 8pt;
-    text-align: right;
-    padding-bottom: 10px;
-    color: grey;
-}
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/write_own_pipeline.md
DELETED
@@ -1,290 +0,0 @@
-<!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
-the License. You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
-an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
--->
-
-# Understanding pipelines, models and schedulers
-
-[[open-in-colab]]
-
-🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the [`DiffusionPipeline`] bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems.
-
-In this tutorial, you'll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline.
-
-## Deconstruct a basic pipeline
-
-A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image:
-
-```py
->>> from diffusers import DDPMPipeline
-
->>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")
->>> image = ddpm(num_inference_steps=25).images[0]
->>> image
-```
-
-<div class="flex justify-center">
-    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ddpm-cat.png" alt="Image of cat created from DDPMPipeline"/>
-</div>
-
-That was super easy, but how did the pipeline do that? Let's break down the pipeline and take a look at what's happening under the hood.
-
-In the example above, the pipeline contains a [`UNet2DModel`] model and a [`DDPMScheduler`]. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the *noise residual* and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps.
-
-To recreate the pipeline with the model and scheduler separately, let's write our own denoising process.
-
-1. Load the model and scheduler:
-
-```py
->>> from diffusers import DDPMScheduler, UNet2DModel
-
->>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
->>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
-```
-
-2. Set the number of timesteps to run the denoising process for:
-
-```py
->>> scheduler.set_timesteps(50)
-```
-
-3. Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you'll iterate over this tensor to denoise an image:
-
-```py
->>> scheduler.timesteps
-tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720,
-        700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440,
-        420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160,
-        140, 120, 100, 80, 60, 40, 20, 0])
-```
-
-4. Create some random noise with the same shape as the desired output:
-
-```py
->>> import torch
-
->>> sample_size = model.config.sample_size
->>> noise = torch.randn((1, 3, sample_size, sample_size)).to("cuda")
-```
-
-5. Now write a loop to iterate over the timesteps. At each timestep, the model does a [`UNet2DModel.forward`] pass and returns the noisy residual. The scheduler's [`~DDPMScheduler.step`] method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it'll repeat until it reaches the end of the `timesteps` array.
-
-```py
->>> input = noise
-
->>> for t in scheduler.timesteps:
-...     with torch.no_grad():
-...         noisy_residual = model(input, t).sample
-...     previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample
-...     input = previous_noisy_sample
-```
-
-This is the entire denoising process, and you can use this same pattern to write any diffusion system.
-
-6. The last step is to convert the denoised output into an image:
-
-```py
->>> from PIL import Image
->>> import numpy as np
-
->>> image = (input / 2 + 0.5).clamp(0, 1)
->>> image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
->>> image = Image.fromarray((image * 255).round().astype("uint8"))
->>> image
-```
-
-In the next section, you'll put your skills to the test and break down the more complex Stable Diffusion pipeline. The steps are more or less the same. You'll initialize the necessary components, and set the number of timesteps to create a `timestep` array. The `timestep` array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over these timesteps, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the `timestep` array.
-
-Let's try it out!
-
-## Deconstruct the Stable Diffusion pipeline
-
-Stable Diffusion is a text-to-image *latent diffusion* model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder converts the compressed representation back into an image. For text-to-image models, you'll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler.
-
-As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models.
-
-<Tip>
-
-💡 Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog for more details about how the VAE, UNet, and text encoder models work.
-
-</Tip>
-
-Now that you know what you need for the Stable Diffusion pipeline, load all these components with the [`~ModelMixin.from_pretrained`] method. You can find them in the pretrained [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint, and each component is stored in a separate subfolder:
-
-```py
->>> from PIL import Image
->>> import torch
->>> from transformers import CLIPTextModel, CLIPTokenizer
->>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler
-
->>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
->>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer")
->>> text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder")
->>> unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet")
-```
-
-Instead of the default [`PNDMScheduler`], exchange it for the [`UniPCMultistepScheduler`] to see how easy it is to plug a different scheduler in:
-
-```py
->>> from diffusers import UniPCMultistepScheduler
-
->>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
-```
-
-To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights:
-
-```py
->>> torch_device = "cuda"
->>> vae.to(torch_device)
->>> text_encoder.to(torch_device)
->>> unet.to(torch_device)
-```
-
-### Create text embeddings
-
-The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt.
-
-<Tip>
-
-💡 The `guidance_scale` parameter determines how much weight should be given to the prompt when generating an image.
-
-</Tip>
-
-Feel free to choose any prompt you like if you want to generate something else!
-
-```py
->>> prompt = ["a photograph of an astronaut riding a horse"]
->>> height = 512  # default height of Stable Diffusion
->>> width = 512  # default width of Stable Diffusion
->>> num_inference_steps = 25  # Number of denoising steps
->>> guidance_scale = 7.5  # Scale for classifier-free guidance
->>> generator = torch.manual_seed(0)  # Seed generator to create the initial latent noise
->>> batch_size = len(prompt)
-```
-
-Tokenize the text and generate the embeddings from the prompt:
-
-```py
->>> text_input = tokenizer(
-...     prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt"
-... )
-
->>> with torch.no_grad():
-...     text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0]
-```
-
-You'll also need to generate the *unconditional text embeddings* which are the embeddings for the padding token. These need to have the same shape (`batch_size` and `seq_length`) as the conditional `text_embeddings`:
-
-```py
->>> max_length = text_input.input_ids.shape[-1]
->>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt")
->>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0]
-```
-
-Let's concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes:
-
-```py
->>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-```
-
-### Create random noise
-
-Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it'll be gradually denoised. At this point, the `latent` image is smaller than the final image size, but that's okay because the model will transform it into the final 512x512 image dimensions later.
-
-<Tip>
-
-💡 The height and width are divided by 8 because the `vae` model has 3 down-sampling layers. You can check by running the following:
-
-```py
-2 ** (len(vae.config.block_out_channels) - 1) == 8
-```
-
-</Tip>
-
-```py
->>> latents = torch.randn(
-...     (batch_size, unet.in_channels, height // 8, width // 8),
-...     generator=generator,
-... )
->>> latents = latents.to(torch_device)
-```
-
-### Denoise the image
-
-Start by scaling the input with the initial noise distribution, *sigma*, the noise scale value, which is required for improved schedulers like [`UniPCMultistepScheduler`]:
-
-```py
->>> latents = latents * scheduler.init_noise_sigma
-```
-
-The last step is to create the denoising loop that'll progressively transform the pure noise in `latents` to an image described by your prompt. Remember, the denoising loop needs to do three things:
-
-1. Set the scheduler's timesteps to use during denoising.
-2. Iterate over the timesteps.
-3. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample.
-
-```py
->>> from tqdm.auto import tqdm
-
->>> scheduler.set_timesteps(num_inference_steps)
-
->>> for t in tqdm(scheduler.timesteps):
-...     # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes.
-...     latent_model_input = torch.cat([latents] * 2)
-
-...     latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t)
-
-...     # predict the noise residual
-...     with torch.no_grad():
-...         noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
-...     # perform guidance
-...     noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
-...     noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
-...     # compute the previous noisy sample x_t -> x_t-1
-...     latents = scheduler.step(noise_pred, t, latents).prev_sample
-```
-
-### Decode the image
-
-The final step is to use the `vae` to decode the latent representation into an image and get the decoded output with `sample`:
-
-```py
-# scale and decode the image latents with vae
-latents = 1 / 0.18215 * latents
-with torch.no_grad():
-    image = vae.decode(latents).sample
-```
-
-Lastly, convert the image to a `PIL.Image` to see your generated image!
-
-```py
->>> image = (image / 2 + 0.5).clamp(0, 1)
->>> image = image.detach().cpu().permute(0, 2, 3, 1).numpy()
->>> images = (image * 255).round().astype("uint8")
->>> pil_images = [Image.fromarray(image) for image in images]
->>> pil_images[0]
-```
-
-<div class="flex justify-center">
-    <img src="https://huggingface.co/blog/assets/98_stable_diffusion/stable_diffusion_k_lms.png"/>
-</div>
-
-## Next steps
-
-From basic to complex pipelines, you've seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler's timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample.
-
-This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers.
-
-For your next steps, feel free to:
-
-* Learn how to [build and contribute a pipeline](contribute_pipeline) to 🧨 Diffusers. We can't wait to see what you'll come up with!
-* Explore [existing pipelines](../api/pipelines/overview) in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately.
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vq_model.py
DELETED
@@ -1,167 +0,0 @@
```python
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Optional, Tuple, Union

import torch
import torch.nn as nn

from ..configuration_utils import ConfigMixin, register_to_config
from ..utils import BaseOutput, apply_forward_hook
from .modeling_utils import ModelMixin
from .vae import Decoder, DecoderOutput, Encoder, VectorQuantizer


@dataclass
class VQEncoderOutput(BaseOutput):
    """
    Output of VQModel encoding method.

    Args:
        latents (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
            The encoded output sample from the last layer of the model.
    """

    latents: torch.FloatTensor


class VQModel(ModelMixin, ConfigMixin):
    r"""
    A VQ-VAE model for decoding latent representations.

    This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented
    for all models (such as downloading or saving).

    Parameters:
        in_channels (int, *optional*, defaults to 3): Number of channels in the input image.
        out_channels (int, *optional*, defaults to 3): Number of channels in the output.
        down_block_types (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`):
            Tuple of downsample block types.
        up_block_types (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`):
            Tuple of upsample block types.
        block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`):
            Tuple of block output channels.
        act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
        latent_channels (`int`, *optional*, defaults to `3`): Number of channels in the latent space.
        sample_size (`int`, *optional*, defaults to `32`): Sample input size.
        num_vq_embeddings (`int`, *optional*, defaults to `256`): Number of codebook vectors in the VQ-VAE.
        vq_embed_dim (`int`, *optional*): Hidden dim of codebook vectors in the VQ-VAE.
        scaling_factor (`float`, *optional*, defaults to `0.18215`):
            The component-wise standard deviation of the trained latent space computed using the first batch of the
            training set. This is used to scale the latent space to have unit variance when training the diffusion
            model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
            diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
            / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
            Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper.
    """

    @register_to_config
    def __init__(
        self,
        in_channels: int = 3,
        out_channels: int = 3,
        down_block_types: Tuple[str] = ("DownEncoderBlock2D",),
        up_block_types: Tuple[str] = ("UpDecoderBlock2D",),
        block_out_channels: Tuple[int] = (64,),
        layers_per_block: int = 1,
        act_fn: str = "silu",
        latent_channels: int = 3,
        sample_size: int = 32,
        num_vq_embeddings: int = 256,
        norm_num_groups: int = 32,
        vq_embed_dim: Optional[int] = None,
        scaling_factor: float = 0.18215,
        norm_type: str = "group",  # group, spatial
    ):
        super().__init__()

        # pass init params to Encoder
        self.encoder = Encoder(
            in_channels=in_channels,
            out_channels=latent_channels,
            down_block_types=down_block_types,
            block_out_channels=block_out_channels,
            layers_per_block=layers_per_block,
            act_fn=act_fn,
            norm_num_groups=norm_num_groups,
            double_z=False,
        )

        vq_embed_dim = vq_embed_dim if vq_embed_dim is not None else latent_channels

        self.quant_conv = nn.Conv2d(latent_channels, vq_embed_dim, 1)
        self.quantize = VectorQuantizer(num_vq_embeddings, vq_embed_dim, beta=0.25, remap=None, sane_index_shape=False)
        self.post_quant_conv = nn.Conv2d(vq_embed_dim, latent_channels, 1)

        # pass init params to Decoder
        self.decoder = Decoder(
            in_channels=latent_channels,
            out_channels=out_channels,
            up_block_types=up_block_types,
            block_out_channels=block_out_channels,
            layers_per_block=layers_per_block,
            act_fn=act_fn,
            norm_num_groups=norm_num_groups,
            norm_type=norm_type,
        )

    @apply_forward_hook
    def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> VQEncoderOutput:
        h = self.encoder(x)
        h = self.quant_conv(h)

        if not return_dict:
            return (h,)

        return VQEncoderOutput(latents=h)

    @apply_forward_hook
    def decode(
        self, h: torch.FloatTensor, force_not_quantize: bool = False, return_dict: bool = True
    ) -> Union[DecoderOutput, torch.FloatTensor]:
        # also go through quantization layer
        if not force_not_quantize:
            quant, emb_loss, info = self.quantize(h)
        else:
            quant = h
        quant2 = self.post_quant_conv(quant)
        dec = self.decoder(quant2, quant if self.config.norm_type == "spatial" else None)

        if not return_dict:
            return (dec,)

        return DecoderOutput(sample=dec)

    def forward(self, sample: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]:
        r"""
        The [`VQModel`] forward method.

        Args:
            sample (`torch.FloatTensor`): Input sample.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`models.vq_model.VQEncoderOutput`] instead of a plain tuple.

        Returns:
            [`~models.vq_model.VQEncoderOutput`] or `tuple`:
                If return_dict is True, a [`~models.vq_model.VQEncoderOutput`] is returned, otherwise a plain `tuple`
                is returned.
        """
        x = sample
        h = self.encode(x).latents
        dec = self.decode(h).sample

        if not return_dict:
            return (dec,)

        return DecoderOutput(sample=dec)
```
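The `encode`/`decode` methods of `VQModel` follow diffusers' `return_dict` convention: each returns either a typed output object or a plain tuple. The snippet below is a torch-free sketch of that pattern; `VQEncoderOutputSketch` and the doubling "encoder" are illustrative stand-ins, not the real classes:

```python
# Sketch of the return_dict convention used by VQModel.encode/decode:
# return a typed output object by default, or a plain tuple on request.
from dataclasses import dataclass


@dataclass
class VQEncoderOutputSketch:
    latents: list


def encode(x, return_dict=True):
    h = [v * 2 for v in x]  # stand-in for encoder + quant_conv
    if not return_dict:
        return (h,)         # plain tuple, for callers that opt out
    return VQEncoderOutputSketch(latents=h)


out = encode([1, 2, 3])                    # typed output: out.latents
tup = encode([1, 2, 3], return_dict=False)  # plain tuple: tup[0]
```

The typed output keeps call sites readable (`out.latents` instead of `out[0]`) while the tuple path preserves backward compatibility.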
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py
DELETED
@@ -1,2 +0,0 @@
```python
_base_ = './deeplabv3_r50-d8_512x1024_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
```
spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_40k_cityscapes.py
DELETED
@@ -1,2 +0,0 @@
```python
_base_ = './dnl_r50-d8_769x769_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
```
spaces/Anonymous-123/ImageNet-Editing/README.md
DELETED
@@ -1,13 +0,0 @@
```markdown
---
title: ImageNet Editing
emoji: 📊
colorFrom: gray
colorTo: pink
sdk: gradio
sdk_version: 3.3.1
app_file: app.py
pinned: false
license: creativeml-openrail-m
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
```
spaces/Anthony7906/MengHuiMXD_GPT/run_Windows.bat
DELETED
@@ -1,5 +0,0 @@
```bat
@echo off
echo Opening ChuanhuChatGPT...

REM Open powershell via bat
start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py"
```
spaces/AnthonyTruchetPoC/persistent-docker/tests/athai/test_hello.py
DELETED
@@ -1,14 +0,0 @@
```python
from athai import hello


def test_hello_default():
    assert hello.build_greetings() == "Hello, World!"


def test_hello_name():
    assert hello.build_greetings("Toto") == "Nice to meet you, Toto!"


# Given / Arrange
# When / Act
# Then / Assert
```
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/launch.py
DELETED
@@ -1,36 +0,0 @@
```python
"""
Launch the Python script on the command line after
setuptools is bootstrapped via import.
"""

# Note that setuptools gets imported implicitly by the
# invocation of this script using python -m setuptools.launch

import tokenize
import sys


def run():
    """
    Run the script in sys.argv[1] as if it had
    been invoked naturally.
    """
    __builtins__
    script_name = sys.argv[1]
    namespace = dict(
        __file__=script_name,
        __name__='__main__',
        __doc__=None,
    )
    sys.argv[:] = sys.argv[1:]

    open_ = getattr(tokenize, 'open', open)
    with open_(script_name) as fid:
        script = fid.read()
    norm_script = script.replace('\\r\\n', '\\n')
    code = compile(norm_script, script_name, 'exec')
    exec(code, namespace)


if __name__ == '__main__':
    run()
```
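The core trick in `setuptools.launch` is running a script's source in a fresh namespace whose `__name__` is `"__main__"`, so `if __name__ == '__main__':` guards fire as if the script had been invoked directly. A minimal, self-contained demonstration of that compile-and-exec pattern (the temp-file script here is purely illustrative):

```python
# Demonstrate the run-a-script-as-__main__ pattern: compile the source and
# exec it in a namespace that claims to be "__main__".
import os
import tempfile

script = "if __name__ == '__main__':\n    result = 40 + 2\n"

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

namespace = {"__file__": path, "__name__": "__main__", "__doc__": None}
with open(path) as fid:
    code = compile(fid.read().replace("\r\n", "\n"), path, "exec")
exec(code, namespace)  # the __main__ guard runs, defining `result`
os.unlink(path)
```

Because the guard evaluates to true inside this namespace, `namespace["result"]` holds the value the script computed.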
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/compatibility.md
DELETED
@@ -1,83 +0,0 @@
```markdown
# Compatibility with Other Libraries

## Compatibility with Detectron (and maskrcnn-benchmark)

Detectron2 addresses some legacy issues left in Detectron. As a result, their models
are not compatible:
running inference with the same model weights will produce different results in the two code bases.

The major differences regarding inference are:

- The height and width of a box with corners (x1, y1) and (x2, y2) is now computed more naturally as
  width = x2 - x1 and height = y2 - y1;
  in Detectron, a "+ 1" was added to both height and width.

  Note that the relevant ops in Caffe2 have [adopted this change of convention](https://github.com/pytorch/pytorch/pull/20550)
  with an extra option.
  So it is still possible to run inference with a Detectron2-trained model in Caffe2.

  The change in height/width calculations most notably changes:
  - encoding/decoding in bounding box regression.
  - non-maximum suppression. The effect here is very negligible, though.

- RPN now uses simpler anchors with fewer quantization artifacts.

  In Detectron, the anchors were quantized and
  [do not have accurate areas](https://github.com/facebookresearch/Detectron/issues/227).
  In Detectron2, the anchors are center-aligned to feature grid points and not quantized.

- Classification layers have a different ordering of class labels.

  This involves any trainable parameter with shape (..., num_categories + 1, ...).
  In Detectron2, integer labels [0, K-1] correspond to the K = num_categories object categories
  and the label "K" corresponds to the special "background" category.
  In Detectron, label "0" means background, and labels [1, K] correspond to the K categories.

- ROIAlign is implemented differently. The new implementation is [available in Caffe2](https://github.com/pytorch/pytorch/pull/23706).

  1. All the ROIs are shifted by half a pixel compared to Detectron in order to create better image-feature-map alignment.
     See `layers/roi_align.py` for details.
     To enable the old behavior, use `ROIAlign(aligned=False)`, or `POOLER_TYPE=ROIAlign` instead of
     `ROIAlignV2` (the default).

  1. The ROIs are not required to have a minimum size of 1.
     This will lead to tiny differences in the output, but should be negligible.

- Mask inference function is different.

  In Detectron2, the "paste_mask" function is different and should be more accurate than in Detectron. This change
  can improve mask AP on COCO by ~0.5% absolute.

There are some other differences in training as well, but they won't affect
model-level compatibility. The major ones are:

- We fixed a [bug](https://github.com/facebookresearch/Detectron/issues/459) in
  Detectron, by making `RPN.POST_NMS_TOPK_TRAIN` per-image, rather than per-batch.
  The fix may lead to a small accuracy drop for a few models (e.g. keypoint
  detection) and will require some parameter tuning to match the Detectron results.
- For simplicity, we change the default loss in bounding box regression to L1 loss, instead of smooth L1 loss.
  We have observed that this tends to slightly decrease box AP50 while improving box AP for higher
  overlap thresholds (and leading to a slight overall improvement in box AP).
- We interpret the coordinates in COCO bounding box and segmentation annotations
  as coordinates in range `[0, width]` or `[0, height]`. The coordinates in
  COCO keypoint annotations are interpreted as pixel indices in range `[0, width - 1]` or `[0, height - 1]`.
  Note that this affects how flip augmentation is implemented.

We will later share more details and rationale behind the above-mentioned issues
about pixels, coordinates, and "+1"s.

## Compatibility with Caffe2

As mentioned above, despite the incompatibilities with Detectron, the relevant
ops have been implemented in Caffe2.
Therefore, models trained with detectron2 can be converted to Caffe2.
See [Deployment](../tutorials/deployment.html) for the tutorial.

## Compatibility with TensorFlow

Most ops are available in TensorFlow, although some tiny differences in
the implementation of resize / ROIAlign / padding need to be addressed.
A working conversion script is provided by [tensorpack FasterRCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN/convert_d2)
to run a standard detectron2 model in TensorFlow.
```
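The box height/width convention difference described in the compatibility notes is easiest to see side by side; a small sketch:

```python
# The "+ 1" box-size convention difference: Detectron2 computes box size
# naturally, while legacy Detectron added 1 to both dimensions.

def box_size_detectron2(x1, y1, x2, y2):
    # natural convention: width = x2 - x1, height = y2 - y1
    return x2 - x1, y2 - y1


def box_size_detectron(x1, y1, x2, y2):
    # legacy Detectron convention: "+ 1" on both height and width
    return x2 - x1 + 1, y2 - y1 + 1
```

For the same corners (0, 0) and (10, 20), the two conventions report different sizes, which is why weights trained under one convention don't transfer exactly to the other.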
spaces/CVPR/LIVE/pybind11/docs/benchmark.py
DELETED
@@ -1,89 +0,0 @@
```python
# -*- coding: utf-8 -*-
import random
import os
import time
import datetime as dt

nfns = 4   # Functions per class
nargs = 4  # Arguments per function


def generate_dummy_code_pybind11(nclasses=10):
    decl = ""
    bindings = ""

    for cl in range(nclasses):
        decl += "class cl%03i;\n" % cl
    decl += '\n'

    for cl in range(nclasses):
        decl += "class cl%03i {\n" % cl
        decl += "public:\n"
        bindings += '    py::class_<cl%03i>(m, "cl%03i")\n' % (cl, cl)
        for fn in range(nfns):
            ret = random.randint(0, nclasses - 1)
            params = [random.randint(0, nclasses - 1) for i in range(nargs)]
            decl += "    cl%03i *fn_%03i(" % (ret, fn)
            decl += ", ".join("cl%03i *" % p for p in params)
            decl += ");\n"
            bindings += '        .def("fn_%03i", &cl%03i::fn_%03i)\n' % \
                (fn, cl, fn)
        decl += "};\n\n"
        bindings += '        ;\n'

    result = "#include <pybind11/pybind11.h>\n\n"
    result += "namespace py = pybind11;\n\n"
    result += decl + '\n'
    result += "PYBIND11_MODULE(example, m) {\n"
    result += bindings
    result += "}"
    return result


def generate_dummy_code_boost(nclasses=10):
    decl = ""
    bindings = ""

    for cl in range(nclasses):
        decl += "class cl%03i;\n" % cl
    decl += '\n'

    for cl in range(nclasses):
        decl += "class cl%03i {\n" % cl
        decl += "public:\n"
        bindings += '    py::class_<cl%03i>("cl%03i")\n' % (cl, cl)
        for fn in range(nfns):
            ret = random.randint(0, nclasses - 1)
            params = [random.randint(0, nclasses - 1) for i in range(nargs)]
            decl += "    cl%03i *fn_%03i(" % (ret, fn)
            decl += ", ".join("cl%03i *" % p for p in params)
            decl += ");\n"
            bindings += '        .def("fn_%03i", &cl%03i::fn_%03i, py::return_value_policy<py::manage_new_object>())\n' % \
                (fn, cl, fn)
        decl += "};\n\n"
        bindings += '        ;\n'

    result = "#include <boost/python.hpp>\n\n"
    result += "namespace py = boost::python;\n\n"
    result += decl + '\n'
    result += "BOOST_PYTHON_MODULE(example) {\n"
    result += bindings
    result += "}"
    return result


for codegen in [generate_dummy_code_pybind11, generate_dummy_code_boost]:
    print("{")
    for i in range(0, 10):
        nclasses = 2 ** i
        with open("test.cpp", "w") as f:
            f.write(codegen(nclasses))
        n1 = dt.datetime.now()
        os.system("g++ -Os -shared -rdynamic -undefined dynamic_lookup "
                  "-fvisibility=hidden -std=c++14 test.cpp -I include "
                  "-I /System/Library/Frameworks/Python.framework/Headers -o test.so")
        n2 = dt.datetime.now()
        elapsed = (n2 - n1).total_seconds()
        size = os.stat('test.so').st_size
        print("    {%i, %f, %i}," % (nclasses * nfns, elapsed, size))
    print("}")
```
spaces/CVPR/LIVE/pybind11/tests/test_factory_constructors.cpp
DELETED
@@ -1,342 +0,0 @@
```cpp
/*
    tests/test_factory_constructors.cpp -- tests construction from a factory function
    via py::init_factory()

    Copyright (c) 2017 Jason Rhinelander <[email protected]>

    All rights reserved. Use of this source code is governed by a
    BSD-style license that can be found in the LICENSE file.
*/

#include "pybind11_tests.h"
#include "constructor_stats.h"
#include <cmath>

// Classes for testing python construction via C++ factory function:
// Not publicly constructible, copyable, or movable:
class TestFactory1 {
    friend class TestFactoryHelper;
    TestFactory1() : value("(empty)") { print_default_created(this); }
    TestFactory1(int v) : value(std::to_string(v)) { print_created(this, value); }
    TestFactory1(std::string v) : value(std::move(v)) { print_created(this, value); }
    TestFactory1(TestFactory1 &&) = delete;
    TestFactory1(const TestFactory1 &) = delete;
    TestFactory1 &operator=(TestFactory1 &&) = delete;
    TestFactory1 &operator=(const TestFactory1 &) = delete;
public:
    std::string value;
    ~TestFactory1() { print_destroyed(this); }
};
// Non-public construction, but moveable:
class TestFactory2 {
    friend class TestFactoryHelper;
    TestFactory2() : value("(empty2)") { print_default_created(this); }
    TestFactory2(int v) : value(std::to_string(v)) { print_created(this, value); }
    TestFactory2(std::string v) : value(std::move(v)) { print_created(this, value); }
public:
    TestFactory2(TestFactory2 &&m) { value = std::move(m.value); print_move_created(this); }
    TestFactory2 &operator=(TestFactory2 &&m) { value = std::move(m.value); print_move_assigned(this); return *this; }
    std::string value;
    ~TestFactory2() { print_destroyed(this); }
};
// Mixed direct/factory construction:
class TestFactory3 {
protected:
    friend class TestFactoryHelper;
    TestFactory3() : value("(empty3)") { print_default_created(this); }
    TestFactory3(int v) : value(std::to_string(v)) { print_created(this, value); }
public:
    TestFactory3(std::string v) : value(std::move(v)) { print_created(this, value); }
    TestFactory3(TestFactory3 &&m) { value = std::move(m.value); print_move_created(this); }
    TestFactory3 &operator=(TestFactory3 &&m) { value = std::move(m.value); print_move_assigned(this); return *this; }
    std::string value;
    virtual ~TestFactory3() { print_destroyed(this); }
};
// Inheritance test
class TestFactory4 : public TestFactory3 {
public:
    TestFactory4() : TestFactory3() { print_default_created(this); }
    TestFactory4(int v) : TestFactory3(v) { print_created(this, v); }
    virtual ~TestFactory4() { print_destroyed(this); }
};
// Another class for an invalid downcast test
class TestFactory5 : public TestFactory3 {
public:
    TestFactory5(int i) : TestFactory3(i) { print_created(this, i); }
    virtual ~TestFactory5() { print_destroyed(this); }
};

class TestFactory6 {
protected:
    int value;
    bool alias = false;
public:
    TestFactory6(int i) : value{i} { print_created(this, i); }
    TestFactory6(TestFactory6 &&f) { print_move_created(this); value = f.value; alias = f.alias; }
    TestFactory6(const TestFactory6 &f) { print_copy_created(this); value = f.value; alias = f.alias; }
    virtual ~TestFactory6() { print_destroyed(this); }
    virtual int get() { return value; }
    bool has_alias() { return alias; }
};
class PyTF6 : public TestFactory6 {
public:
    // Special constructor that allows the factory to construct a PyTF6 from a TestFactory6 only
    // when an alias is needed:
    PyTF6(TestFactory6 &&base) : TestFactory6(std::move(base)) { alias = true; print_created(this, "move", value); }
    PyTF6(int i) : TestFactory6(i) { alias = true; print_created(this, i); }
    PyTF6(PyTF6 &&f) : TestFactory6(std::move(f)) { print_move_created(this); }
    PyTF6(const PyTF6 &f) : TestFactory6(f) { print_copy_created(this); }
    PyTF6(std::string s) : TestFactory6((int) s.size()) { alias = true; print_created(this, s); }
    virtual ~PyTF6() { print_destroyed(this); }
    int get() override { PYBIND11_OVERLOAD(int, TestFactory6, get, /*no args*/); }
};

class TestFactory7 {
protected:
    int value;
    bool alias = false;
public:
    TestFactory7(int i) : value{i} { print_created(this, i); }
    TestFactory7(TestFactory7 &&f) { print_move_created(this); value = f.value; alias = f.alias; }
    TestFactory7(const TestFactory7 &f) { print_copy_created(this); value = f.value; alias = f.alias; }
    virtual ~TestFactory7() { print_destroyed(this); }
    virtual int get() { return value; }
    bool has_alias() { return alias; }
};
class PyTF7 : public TestFactory7 {
public:
    PyTF7(int i) : TestFactory7(i) { alias = true; print_created(this, i); }
    PyTF7(PyTF7 &&f) : TestFactory7(std::move(f)) { print_move_created(this); }
    PyTF7(const PyTF7 &f) : TestFactory7(f) { print_copy_created(this); }
    virtual ~PyTF7() { print_destroyed(this); }
    int get() override { PYBIND11_OVERLOAD(int, TestFactory7, get, /*no args*/); }
};


class TestFactoryHelper {
public:
    // Non-movable, non-copyable type:
    // Return via pointer:
    static TestFactory1 *construct1() { return new TestFactory1(); }
    // Holder:
    static std::unique_ptr<TestFactory1> construct1(int a) { return std::unique_ptr<TestFactory1>(new TestFactory1(a)); }
    // pointer again
    static TestFactory1 *construct1_string(std::string a) { return new TestFactory1(a); }

    // Moveable type:
    // pointer:
    static TestFactory2 *construct2() { return new TestFactory2(); }
    // holder:
    static std::unique_ptr<TestFactory2> construct2(int a) { return std::unique_ptr<TestFactory2>(new TestFactory2(a)); }
    // by value moving:
    static TestFactory2 construct2(std::string a) { return TestFactory2(a); }

    // shared_ptr holder type:
    // pointer:
    static TestFactory3 *construct3() { return new TestFactory3(); }
    // holder:
    static std::shared_ptr<TestFactory3> construct3(int a) { return std::shared_ptr<TestFactory3>(new TestFactory3(a)); }
};

TEST_SUBMODULE(factory_constructors, m) {

    // Define various trivial types to allow simpler overload resolution:
    py::module m_tag = m.def_submodule("tag");
#define MAKE_TAG_TYPE(Name) \
    struct Name##_tag {}; \
    py::class_<Name##_tag>(m_tag, #Name "_tag").def(py::init<>()); \
    m_tag.attr(#Name) = py::cast(Name##_tag{})
    MAKE_TAG_TYPE(pointer);
    MAKE_TAG_TYPE(unique_ptr);
    MAKE_TAG_TYPE(move);
    MAKE_TAG_TYPE(shared_ptr);
    MAKE_TAG_TYPE(derived);
    MAKE_TAG_TYPE(TF4);
    MAKE_TAG_TYPE(TF5);
    MAKE_TAG_TYPE(null_ptr);
    MAKE_TAG_TYPE(null_unique_ptr);
    MAKE_TAG_TYPE(null_shared_ptr);
    MAKE_TAG_TYPE(base);
    MAKE_TAG_TYPE(invalid_base);
    MAKE_TAG_TYPE(alias);
    MAKE_TAG_TYPE(unaliasable);
    MAKE_TAG_TYPE(mixed);

    // test_init_factory_basic, test_bad_type
    py::class_<TestFactory1>(m, "TestFactory1")
        .def(py::init([](unique_ptr_tag, int v) { return TestFactoryHelper::construct1(v); }))
        .def(py::init(&TestFactoryHelper::construct1_string)) // raw function pointer
        .def(py::init([](pointer_tag) { return TestFactoryHelper::construct1(); }))
        .def(py::init([](py::handle, int v, py::handle) { return TestFactoryHelper::construct1(v); }))
        .def_readwrite("value", &TestFactory1::value)
        ;
    py::class_<TestFactory2>(m, "TestFactory2")
        .def(py::init([](pointer_tag, int v) { return TestFactoryHelper::construct2(v); }))
        .def(py::init([](unique_ptr_tag, std::string v) { return TestFactoryHelper::construct2(v); }))
        .def(py::init([](move_tag) { return TestFactoryHelper::construct2(); }))
        .def_readwrite("value", &TestFactory2::value)
        ;

    // Stateful & reused:
    int c = 1;
    auto c4a = [c](pointer_tag, TF4_tag, int a) { (void) c; return new TestFactory4(a); };

    // test_init_factory_basic, test_init_factory_casting
    py::class_<TestFactory3, std::shared_ptr<TestFactory3>>(m, "TestFactory3")
        .def(py::init([](pointer_tag, int v) { return TestFactoryHelper::construct3(v); }))
        .def(py::init([](shared_ptr_tag) { return TestFactoryHelper::construct3(); }))
        .def("__init__", [](TestFactory3 &self, std::string v) { new (&self) TestFactory3(v); }) // placement-new ctor

        // factories returning a derived type:
        .def(py::init(c4a)) // derived ptr
        .def(py::init([](pointer_tag, TF5_tag, int a) { return new TestFactory5(a); }))
        // derived shared ptr:
        .def(py::init([](shared_ptr_tag, TF4_tag, int a) { return std::make_shared<TestFactory4>(a); }))
        .def(py::init([](shared_ptr_tag, TF5_tag, int a) { return std::make_shared<TestFactory5>(a); }))

        // Returns nullptr:
        .def(py::init([](null_ptr_tag) { return (TestFactory3 *) nullptr; }))
        .def(py::init([](null_unique_ptr_tag) { return std::unique_ptr<TestFactory3>(); }))
        .def(py::init([](null_shared_ptr_tag) { return std::shared_ptr<TestFactory3>(); }))

        .def_readwrite("value", &TestFactory3::value)
        ;

    // test_init_factory_casting
    py::class_<TestFactory4, TestFactory3, std::shared_ptr<TestFactory4>>(m, "TestFactory4")
        .def(py::init(c4a)) // pointer
        ;

    // Doesn't need to be registered, but registering makes getting ConstructorStats easier:
    py::class_<TestFactory5, TestFactory3, std::shared_ptr<TestFactory5>>(m, "TestFactory5");

    // test_init_factory_alias
    // Alias testing
    py::class_<TestFactory6, PyTF6>(m, "TestFactory6")
        .def(py::init([](base_tag, int i) { return TestFactory6(i); }))
        .def(py::init([](alias_tag, int i) { return PyTF6(i); }))
        .def(py::init([](alias_tag, std::string s) { return PyTF6(s); }))
        .def(py::init([](alias_tag, pointer_tag, int i) { return new PyTF6(i); }))
        .def(py::init([](base_tag, pointer_tag, int i) { return new TestFactory6(i); }))
        .def(py::init([](base_tag, alias_tag, pointer_tag, int i) { return (TestFactory6 *) new PyTF6(i); }))

        .def("get", &TestFactory6::get)
        .def("has_alias", &TestFactory6::has_alias)

        .def_static("get_cstats", &ConstructorStats::get<TestFactory6>, py::return_value_policy::reference)
        .def_static("get_alias_cstats", &ConstructorStats::get<PyTF6>, py::return_value_policy::reference)
        ;

    // test_init_factory_dual
    // Separate alias constructor testing
    py::class_<TestFactory7, PyTF7, std::shared_ptr<TestFactory7>>(m, "TestFactory7")
        .def(py::init(
            [](int i) { return TestFactory7(i); },
            [](int i) { return PyTF7(i); }))
        .def(py::init(
            [](pointer_tag, int i) { return new TestFactory7(i); },
            [](pointer_tag, int i) { return new PyTF7(i); }))
        .def(py::init(
            [](mixed_tag, int i) { return new TestFactory7(i); },
            [](mixed_tag, int i) { return PyTF7(i); }))
        .def(py::init(
            [](mixed_tag, std::string s) { return TestFactory7((int) s.size()); },
            [](mixed_tag, std::string s) { return new PyTF7((int) s.size()); }))
        .def(py::init(
            [](base_tag, pointer_tag, int i) { return new TestFactory7(i); },
            [](base_tag, pointer_tag, int i) { return (TestFactory7 *) new PyTF7(i); }))
        .def(py::init(
            [](alias_tag, pointer_tag, int i) { return new PyTF7(i); },
            [](alias_tag, pointer_tag, int i) { return new PyTF7(10*i); }))
        .def(py::init(
            [](shared_ptr_tag, base_tag, int i) { return std::make_shared<TestFactory7>(i); },
            [](shared_ptr_tag, base_tag, int i) { auto *p = new PyTF7(i); return std::shared_ptr<TestFactory7>(p); }))
        .def(py::init(
            [](shared_ptr_tag, invalid_base_tag, int i) { return std::make_shared<TestFactory7>(i); },
            [](shared_ptr_tag, invalid_base_tag, int i) { return std::make_shared<TestFactory7>(i); })) // <-- invalid alias factory

        .def("get", &TestFactory7::get)
        .def("has_alias", &TestFactory7::has_alias)

        .def_static("get_cstats", &ConstructorStats::get<TestFactory7>, py::return_value_policy::reference)
        .def_static("get_alias_cstats", &ConstructorStats::get<PyTF7>, py::return_value_policy::reference)
        ;

    // test_placement_new_alternative
    // Class with a custom new operator but *without* a placement new operator (issue #948)
```
|
267 |
-
class NoPlacementNew {
|
268 |
-
public:
|
269 |
-
NoPlacementNew(int i) : i(i) { }
|
270 |
-
static void *operator new(std::size_t s) {
|
271 |
-
auto *p = ::operator new(s);
|
272 |
-
py::print("operator new called, returning", reinterpret_cast<uintptr_t>(p));
|
273 |
-
return p;
|
274 |
-
}
|
275 |
-
static void operator delete(void *p) {
|
276 |
-
py::print("operator delete called on", reinterpret_cast<uintptr_t>(p));
|
277 |
-
::operator delete(p);
|
278 |
-
}
|
279 |
-
int i;
|
280 |
-
};
|
281 |
-
// As of 2.2, `py::init<args>` no longer requires placement new
|
282 |
-
py::class_<NoPlacementNew>(m, "NoPlacementNew")
|
283 |
-
.def(py::init<int>())
|
284 |
-
.def(py::init([]() { return new NoPlacementNew(100); }))
|
285 |
-
.def_readwrite("i", &NoPlacementNew::i)
|
286 |
-
;
|
287 |
-
|
288 |
-
|
289 |
-
// test_reallocations
|
290 |
-
// Class that has verbose operator_new/operator_delete calls
|
291 |
-
struct NoisyAlloc {
|
292 |
-
NoisyAlloc(const NoisyAlloc &) = default;
|
293 |
-
NoisyAlloc(int i) { py::print(py::str("NoisyAlloc(int {})").format(i)); }
|
294 |
-
NoisyAlloc(double d) { py::print(py::str("NoisyAlloc(double {})").format(d)); }
|
295 |
-
~NoisyAlloc() { py::print("~NoisyAlloc()"); }
|
296 |
-
|
297 |
-
static void *operator new(size_t s) { py::print("noisy new"); return ::operator new(s); }
|
298 |
-
static void *operator new(size_t, void *p) { py::print("noisy placement new"); return p; }
|
299 |
-
static void operator delete(void *p, size_t) { py::print("noisy delete"); ::operator delete(p); }
|
300 |
-
static void operator delete(void *, void *) { py::print("noisy placement delete"); }
|
301 |
-
#if defined(_MSC_VER) && _MSC_VER < 1910
|
302 |
-
// MSVC 2015 bug: the above "noisy delete" isn't invoked (fixed in MSVC 2017)
|
303 |
-
static void operator delete(void *p) { py::print("noisy delete"); ::operator delete(p); }
|
304 |
-
#endif
|
305 |
-
};
|
306 |
-
py::class_<NoisyAlloc>(m, "NoisyAlloc")
|
307 |
-
// Since these overloads have the same number of arguments, the dispatcher will try each of
|
308 |
-
// them until the arguments convert. Thus we can get a pre-allocation here when passing a
|
309 |
-
// single non-integer:
|
310 |
-
.def("__init__", [](NoisyAlloc *a, int i) { new (a) NoisyAlloc(i); }) // Regular constructor, runs first, requires preallocation
|
311 |
-
.def(py::init([](double d) { return new NoisyAlloc(d); }))
|
312 |
-
|
313 |
-
// The two-argument version: first the factory pointer overload.
|
314 |
-
.def(py::init([](int i, int) { return new NoisyAlloc(i); }))
|
315 |
-
// Return-by-value:
|
316 |
-
.def(py::init([](double d, int) { return NoisyAlloc(d); }))
|
317 |
-
// Old-style placement new init; requires preallocation
|
318 |
-
.def("__init__", [](NoisyAlloc &a, double d, double) { new (&a) NoisyAlloc(d); })
|
319 |
-
// Requires deallocation of previous overload preallocated value:
|
320 |
-
.def(py::init([](int i, double) { return new NoisyAlloc(i); }))
|
321 |
-
// Regular again: requires yet another preallocation
|
322 |
-
.def("__init__", [](NoisyAlloc &a, int i, std::string) { new (&a) NoisyAlloc(i); })
|
323 |
-
;
|
324 |
-
|
325 |
-
|
326 |
-
|
327 |
-
|
328 |
-
// static_assert testing (the following def's should all fail with appropriate compilation errors):
|
329 |
-
#if 0
|
330 |
-
struct BadF1Base {};
|
331 |
-
struct BadF1 : BadF1Base {};
|
332 |
-
struct PyBadF1 : BadF1 {};
|
333 |
-
py::class_<BadF1, PyBadF1, std::shared_ptr<BadF1>> bf1(m, "BadF1");
|
334 |
-
// wrapped factory function must return a compatible pointer, holder, or value
|
335 |
-
bf1.def(py::init([]() { return 3; }));
|
336 |
-
// incompatible factory function pointer return type
|
337 |
-
bf1.def(py::init([]() { static int three = 3; return &three; }));
|
338 |
-
// incompatible factory function std::shared_ptr<T> return type: cannot convert shared_ptr<T> to holder
|
339 |
-
// (non-polymorphic base)
|
340 |
-
bf1.def(py::init([]() { return std::shared_ptr<BadF1Base>(new BadF1()); }));
|
341 |
-
#endif
|
342 |
-
}
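The tag-dispatch idiom used by these bindings, overloaded factory initializers selected by empty sentinel "tag" arguments, can be sketched in pure Python. All names below (`Widget`, the tag classes, the factory arithmetic) are hypothetical stand-ins for illustration, not the pybind11 API:

```python
class PointerTag:
    """Sentinel standing in for an empty C++ tag struct like pointer_tag."""

class SharedTag:
    """Sentinel standing in for shared_ptr_tag."""

class Widget:
    """Hypothetical analogue of TestFactory3: construction routed through factories."""
    def __init__(self, *args):
        # Dispatch on the leading tag argument, like the overloaded py::init factories.
        if args and isinstance(args[0], PointerTag):
            self.value = args[1] * 2      # "pointer" factory path
        elif args and isinstance(args[0], SharedTag):
            self.value = args[1] + 100    # "shared_ptr" factory path
        else:
            self.value = args[0]          # plain value factory

assert Widget(7).value == 7
assert Widget(PointerTag(), 7).value == 14
assert Widget(SharedTag(), 7).value == 107
```

In the C++ tests the tags serve the same purpose: they give the overload dispatcher a cheap, unambiguous way to pick one factory among several with otherwise identical argument counts.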
spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/reverse.h
DELETED
@@ -1,22 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special version of this algorithm
-
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/transform_scan.h
DELETED
@@ -1,111 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION.  All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of the NVIDIA CORPORATION nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include <iterator>
-#include <thrust/system/cuda/detail/scan.h>
-#include <thrust/distance.h>
-
-namespace thrust
-{
-
-namespace cuda_cub {
-
-template <class Derived,
-          class InputIt,
-          class OutputIt,
-          class TransformOp,
-          class ScanOp>
-OutputIt __host__ __device__
-transform_inclusive_scan(execution_policy<Derived> &policy,
-                         InputIt                    first,
-                         InputIt                    last,
-                         OutputIt                   result,
-                         TransformOp                transform_op,
-                         ScanOp                     scan_op)
-{
-  // Use the input iterator's value type per https://wg21.link/P0571
-  using input_type = typename thrust::iterator_value<InputIt>::type;
-#if THRUST_CPP_DIALECT < 2017
-  using result_type = typename std::result_of<TransformOp(input_type)>::type;
-#else
-  using result_type = std::invoke_result_t<TransformOp, input_type>;
-#endif
-
-  typedef typename iterator_traits<InputIt>::difference_type size_type;
-  size_type num_items = static_cast<size_type>(thrust::distance(first, last));
-  typedef transform_input_iterator_t<result_type,
-                                     InputIt,
-                                     TransformOp>
-      transformed_iterator_t;
-
-  return cuda_cub::inclusive_scan_n(policy,
-                                    transformed_iterator_t(first, transform_op),
-                                    num_items,
-                                    result,
-                                    scan_op);
-}
-
-template <class Derived,
-          class InputIt,
-          class OutputIt,
-          class TransformOp,
-          class InitialValueType,
-          class ScanOp>
-OutputIt __host__ __device__
-transform_exclusive_scan(execution_policy<Derived> &policy,
-                         InputIt                    first,
-                         InputIt                    last,
-                         OutputIt                   result,
-                         TransformOp                transform_op,
-                         InitialValueType           init,
-                         ScanOp                     scan_op)
-{
-  // Use the initial value type per https://wg21.link/P0571
-  using result_type = InitialValueType;
-
-  typedef typename iterator_traits<InputIt>::difference_type size_type;
-  size_type num_items = static_cast<size_type>(thrust::distance(first, last));
-  typedef transform_input_iterator_t<result_type,
-                                     InputIt,
-                                     TransformOp>
-      transformed_iterator_t;
-
-  return cuda_cub::exclusive_scan_n(policy,
-                                    transformed_iterator_t(first, transform_op),
-                                    num_items,
-                                    result,
-                                    init,
-                                    scan_op);
-}
-
-} // namespace cuda_cub
-
-} // end namespace thrust
-#endif
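The fusion these templates implement, applying `transform_op` lazily through `transform_input_iterator_t` while the scan consumes the sequence, can be sketched in Python with `itertools.accumulate` over a mapped iterator. This is an illustrative analogue only, not the Thrust API:

```python
from itertools import accumulate

def transform_inclusive_scan(seq, transform_op, scan_op):
    # map() is lazy, so transform_op is applied element-by-element as the
    # scan consumes the input, mirroring the fused transform iterator.
    return list(accumulate(map(transform_op, seq), scan_op))

def transform_exclusive_scan(seq, transform_op, init, scan_op):
    # Exclusive scan: seed with init and drop the final inclusive value.
    return list(accumulate(map(transform_op, seq), scan_op, initial=init))[:-1]

square = lambda x: x * x
add = lambda a, b: a + b
assert transform_inclusive_scan([1, 2, 3], square, add) == [1, 5, 14]
assert transform_exclusive_scan([1, 2, 3], square, 0, add) == [0, 1, 5]
```

The key property carried over from the C++ version is that no intermediate transformed array is materialized; the transform is evaluated on demand inside the scan.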
spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/logical.h
DELETED
@@ -1,63 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-#include <thrust/detail/internal_functional.h>
-#include <thrust/find.h>
-#include <thrust/logical.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename ExecutionPolicy, typename InputIterator, typename Predicate>
-__host__ __device__
-bool all_of(thrust::execution_policy<ExecutionPolicy> &exec, InputIterator first, InputIterator last, Predicate pred)
-{
-  return thrust::find_if(exec, first, last, thrust::detail::not1(pred)) == last;
-}
-
-
-template<typename ExecutionPolicy, typename InputIterator, typename Predicate>
-__host__ __device__
-bool any_of(thrust::execution_policy<ExecutionPolicy> &exec, InputIterator first, InputIterator last, Predicate pred)
-{
-  return thrust::find_if(exec, first, last, pred) != last;
-}
-
-
-template<typename ExecutionPolicy, typename InputIterator, typename Predicate>
-__host__ __device__
-bool none_of(thrust::execution_policy<ExecutionPolicy> &exec, InputIterator first, InputIterator last, Predicate pred)
-{
-  return !thrust::any_of(exec, first, last, pred);
-}
-
-
-} // end generic
-} // end detail
-} // end system
-} // end thrust
-
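These generic definitions reduce `all_of`, `any_of`, and `none_of` to a single find operation: `all_of` succeeds when no element fails the predicate, and `none_of` is the negation of `any_of`. The same reduction is easy to mirror in Python (a stdlib-only sketch, not the Thrust API):

```python
def find_if(seq, pred):
    """Return the index of the first element satisfying pred, else len(seq)."""
    for i, x in enumerate(seq):
        if pred(x):
            return i
    return len(seq)

def all_of(seq, pred):
    # No element fails pred <=> searching with the negated pred reaches the end.
    return find_if(seq, lambda x: not pred(x)) == len(seq)

def any_of(seq, pred):
    return find_if(seq, pred) != len(seq)

def none_of(seq, pred):
    return not any_of(seq, pred)

even = lambda x: x % 2 == 0
assert all_of([2, 4, 6], even) and not all_of([2, 3], even)
assert any_of([1, 2, 3], even) and not any_of([1, 3], even)
assert none_of([1, 3], even) and not none_of([1, 2], even)
```

Expressing all three in terms of `find_if` means only the search primitive needs a parallel implementation per backend.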
spaces/CVPR/transfiner/app.py
DELETED
@@ -1,84 +0,0 @@
-#try:
-#  import detectron2
-#except:
-import os
-os.system('pip install git+https://github.com/SysCV/transfiner.git')
-
-from matplotlib.pyplot import axis
-import gradio as gr
-import numpy as np
-from torch import nn
-import requests
-
-import torch
-
-from detectron2 import model_zoo
-from detectron2.engine import DefaultPredictor
-from detectron2.config import get_cfg
-from detectron2.utils.visualizer import Visualizer
-from detectron2.data import MetadataCatalog
-
-
-model_name = './configs/transfiner/mask_rcnn_R_101_FPN_3x_deform.yaml'
-
-
-cfg = get_cfg()
-# add project-specific config (e.g., TensorMask) here if you're not running a model in detectron2's core library
-cfg.merge_from_file(model_name)
-cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # set threshold for this model
-cfg.VIS_PERIOD = 100
-# Find a model from detectron2's model zoo. You can use the https://dl.fbaipublicfiles... url as well
-#cfg.MODEL.WEIGHTS = './output_3x_transfiner_r50.pth'
-cfg.MODEL.WEIGHTS = './output_3x_transfiner_r101_deform.pth'
-
-if not torch.cuda.is_available():
-    cfg.MODEL.DEVICE = 'cpu'
-
-predictor = DefaultPredictor(cfg)
-
-
-def inference(image):
-    width, height = image.size
-    if width > 1300:
-        ratio = float(height) / float(width)
-        width = 1300
-        height = int(ratio * width)
-        image = image.resize((width, height))
-
-    img = np.asarray(image)
-
-    #img = np.array(image)
-    outputs = predictor(img)
-
-    v = Visualizer(img, MetadataCatalog.get(cfg.DATASETS.TRAIN[0]))
-    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
-
-    return out.get_image()
-
-
-title = "Mask Transfiner [CVPR, 2022]"
-description = "Demo for <a target='_blank' href='https://arxiv.org/abs/2111.13673'>Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022</a> based on R101-FPN. To use it, simply upload your image, or click one of the examples to load them. Note that it runs in <b>CPU environment</b> provided by Hugging Face so the processing speed may be slow."
-article = "<p style='text-align: center'><a target='_blank' href='https://arxiv.org/abs/2111.13673'>Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022</a> | <a target='_blank' href='https://github.com/SysCV/transfiner'>Mask Transfiner Github Code</a></p>"
-
-gr.Interface(
-    inference,
-    [gr.inputs.Image(type="pil", label="Input")],
-    gr.outputs.Image(type="numpy", label="Output"),
-    title=title,
-    description=description,
-    article=article,
-    examples=[
-        ["demo/sample_imgs/000000131444.jpg"],
-        ["demo/sample_imgs/000000157365.jpg"],
-        ["demo/sample_imgs/000000176037.jpg"],
-        ["demo/sample_imgs/000000018737.jpg"],
-        ["demo/sample_imgs/000000224200.jpg"],
-        ["demo/sample_imgs/000000558073.jpg"],
-        ["demo/sample_imgs/000000404922.jpg"],
-        ["demo/sample_imgs/000000252776.jpg"],
-        ["demo/sample_imgs/000000482477.jpg"],
-        ["demo/sample_imgs/000000344909.jpg"]
-    ]).launch()
-
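The width cap in `inference` shrinks over-wide images to 1300 px while preserving the aspect ratio by recomputing the height from the original width/height ratio. The arithmetic can be isolated as a small sketch (function name and default are illustrative, not part of the app):

```python
def cap_width(width, height, max_width=1300):
    # Same logic as the demo's inference(): shrink to max_width, keep aspect ratio.
    if width > max_width:
        ratio = float(height) / float(width)
        width = max_width
        height = int(ratio * width)
    return width, height

assert cap_width(2600, 1000) == (1300, 500)
assert cap_width(800, 600) == (800, 600)  # already narrow enough: unchanged
```

Note that `int()` truncates, so the recomputed height can be off by up to one pixel from the exact ratio, which is harmless for a visualization demo.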
spaces/ChenyangSi/FreeU/style.css
DELETED
@@ -1,3 +0,0 @@
-h1 {
-  text-align: center;
-}
spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/help.js
DELETED
@@ -1,112 +0,0 @@
-import plugin from '../../../lib/plugins/plugin.js'
-import { Render, Version, Config } from '../components/index.js'
-import { helpCfg, helpList } from '../config/system/help_system.js'
-import { style } from '../resources/help/imgs/config.js'
-import _ from 'lodash'
-
-export class setting extends plugin {
-    constructor() {
-        super({
-            name: '[ws-plugin] 帮助',
-            dsc: '[ws-plugin] 帮助',
-            event: 'message',
-            priority: 1,
-            rule: [
-                {
-                    reg: '^#ws版本$',
-                    fnc: 'version'
-                },
-                {
-                    reg: '^#ws帮助$',
-                    fnc: 'help',
-                    permission: 'master'
-                }
-            ]
-        })
-
-    }
-    async version(e) {
-        return await Render.render('help/version-info', {
-            currentVersion: Version.version,
-            changelogs: Version.changelogs,
-            elem: 'cryo'
-        }, { e, scale: 1.2 })
-    }
-
-    async help(e) {
-        let helpGroup = []
-        _.forEach(helpList, (group) => {
-            _.forEach(group.list, (help) => {
-                let icon = help.icon * 1
-                if (!icon) {
-                    help.css = 'display:none'
-                } else {
-                    let x = (icon - 1) % 10
-                    let y = (icon - x - 1) / 10
-                    help.css = `background-position:-${x * 50}px -${y * 50}px`
-                }
-            })
-
-            helpGroup.push(group)
-        })
-
-        let themeData = await getThemeData(helpCfg, helpCfg)
-        return await Render.render('help/index', {
-            helpCfg,
-            helpGroup,
-            ...themeData,
-            element: 'default'
-        }, { e, scale: 1.6 })
-    }
-
-}
-
-async function getThemeCfg() {
-    let resPath = '{{_res_path}}/help/imgs/'
-    return {
-        main: `${resPath}/main.png`,
-        bg: `${resPath}/bg.jpg`,
-        style: style
-    }
-}
-
-async function getThemeData(diyStyle, sysStyle) {
-    let helpConfig = _.extend({}, sysStyle, diyStyle)
-    let colCount = Math.min(5, Math.max(parseInt(helpConfig?.colCount) || 3, 2))
-    let colWidth = Math.min(500, Math.max(100, parseInt(helpConfig?.colWidth) || 265))
-    let width = Math.min(2500, Math.max(800, colCount * colWidth + 30))
-    let theme = await getThemeCfg()
-    let themeStyle = theme.style || {}
-    let ret = [`
-        body{background-image:url(${theme.bg});width:${width}px;}
-        .container{background-image:url(${theme.main});width:${width}px;}
-        .help-table .td,.help-table .th{width:${100 / colCount}%}
-    `]
-    let css = function (sel, css, key, def, fn) {
-        let val = getDef(themeStyle[key], diyStyle[key], sysStyle[key], def)
-        if (fn) {
-            val = fn(val)
-        }
-        ret.push(`${sel}{${css}:${val}}`)
-    }
-    css('.help-title,.help-group', 'color', 'fontColor', '#ceb78b')
-    css('.help-title,.help-group', 'text-shadow', 'fontShadow', 'none')
-    css('.help-desc', 'color', 'descColor', '#eee')
-    css('.cont-box', 'background', 'contBgColor', 'rgba(43, 52, 61, 0.8)')
-    css('.cont-box', 'backdrop-filter', 'contBgBlur', 3, (n) => diyStyle.bgBlur === false ? 'none' : `blur(${n}px)`)
-    css('.help-group', 'background', 'headerBgColor', 'rgba(34, 41, 51, .4)')
-    css('.help-table .tr:nth-child(odd)', 'background', 'rowBgColor1', 'rgba(34, 41, 51, .2)')
-    css('.help-table .tr:nth-child(even)', 'background', 'rowBgColor2', 'rgba(34, 41, 51, .4)')
-    return {
-        style: `<style>${ret.join('\n')}</style>`,
-        colCount
-    }
-}
-
-function getDef() {
-    for (let idx in arguments) {
-        if (!_.isUndefined(arguments[idx])) {
-            return arguments[idx]
-        }
-    }
-}
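The icon math in `help` maps a 1-based icon index onto a sprite sheet laid out 10 icons per row in 50 px cells, producing a CSS `background-position` offset. The same computation in Python (a sketch; the function name and parameters are illustrative):

```python
def sprite_offset(icon, per_row=10, cell=50):
    # 1-based icon index -> (column, row) in the sprite sheet,
    # mirroring the help renderer's x/y math.
    x = (icon - 1) % per_row
    y = (icon - x - 1) // per_row
    return f"background-position:-{x * cell}px -{y * cell}px"

assert sprite_offset(1) == "background-position:-0px -0px"    # first cell, top-left
assert sprite_offset(13) == "background-position:-100px -50px"  # column 2, row 1
```

Negative offsets shift the sprite sheet left and up so that only the selected 50x50 cell shows through the element's viewport.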
spaces/CikeyQI/meme-api/meme_generator/version.py
DELETED
@@ -1 +0,0 @@
-__version__ = "0.0.15"
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/feature.py
DELETED
@@ -1,14 +0,0 @@
-# encoding: utf-8
-def hog(img, bins=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2), transform_sqrt=False, feature_vector=True):
-    """
-    Extract HOG feature from image.
-    See detail at https://github.com/scikit-image/scikit-image/blob/master/skimage/feature/_hog.py
-    """
-    from skimage.feature import hog as sk_hog  # alias avoids shadowing this wrapper
-    # Forward the caller's transform_sqrt/feature_vector instead of hardcoding them.
-    return sk_hog(img,
-                  orientations=bins,
-                  pixels_per_cell=pixels_per_cell,
-                  cells_per_block=cells_per_block,
-                  visualise=False,
-                  transform_sqrt=transform_sqrt,
-                  feature_vector=feature_vector)
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/__init__.py
DELETED
@@ -1,253 +0,0 @@
-from __future__ import annotations
-
-from typing import TYPE_CHECKING
-
-import numpy as np
-
-from contourpy._contourpy import (
-    ContourGenerator, FillType, LineType, Mpl2005ContourGenerator, Mpl2014ContourGenerator,
-    SerialContourGenerator, ThreadedContourGenerator, ZInterp, max_threads,
-)
-from contourpy._version import __version__
-from contourpy.chunk import calc_chunk_sizes
-from contourpy.enum_util import as_fill_type, as_line_type, as_z_interp
-
-if TYPE_CHECKING:
-    from typing import Any
-
-    from numpy.typing import ArrayLike
-
-    from ._contourpy import CoordinateArray, MaskArray
-
-__all__ = [
-    "__version__",
-    "contour_generator",
-    "max_threads",
-    "FillType",
-    "LineType",
-    "ContourGenerator",
-    "Mpl2005ContourGenerator",
-    "Mpl2014ContourGenerator",
-    "SerialContourGenerator",
-    "ThreadedContourGenerator",
-    "ZInterp",
-]
-
-
-# Simple mapping of algorithm name to class name.
-_class_lookup: dict[str, type[ContourGenerator]] = dict(
-    mpl2005=Mpl2005ContourGenerator,
-    mpl2014=Mpl2014ContourGenerator,
-    serial=SerialContourGenerator,
-    threaded=ThreadedContourGenerator,
-)
-
-
-def _remove_z_mask(
-    z: ArrayLike | np.ma.MaskedArray[Any, Any] | None,
-) -> tuple[CoordinateArray, MaskArray | None]:
-    # Preserve mask if present.
-    z_array = np.ma.asarray(z, dtype=np.float64)  # type: ignore[no-untyped-call]
-    z_masked = np.ma.masked_invalid(z_array, copy=False)  # type: ignore[no-untyped-call]
-
-    if np.ma.is_masked(z_masked):  # type: ignore[no-untyped-call]
-        mask = np.ma.getmask(z_masked)  # type: ignore[no-untyped-call]
-    else:
-        mask = None
-
-    return np.ma.getdata(z_masked), mask  # type: ignore[no-untyped-call]
-
-
-def contour_generator(
-    x: ArrayLike | None = None,
-    y: ArrayLike | None = None,
-    z: ArrayLike | np.ma.MaskedArray[Any, Any] | None = None,
-    *,
-    name: str = "serial",
-    corner_mask: bool | None = None,
-    line_type: LineType | str | None = None,
-    fill_type: FillType | str | None = None,
-    chunk_size: int | tuple[int, int] | None = None,
-    chunk_count: int | tuple[int, int] | None = None,
-    total_chunk_count: int | None = None,
-    quad_as_tri: bool = False,
-    z_interp: ZInterp | str | None = ZInterp.Linear,
-    thread_count: int = 0,
-) -> ContourGenerator:
-    """Create and return a contour generator object.
-
-    The class and properties of the contour generator are determined by the function arguments,
-    with sensible defaults.
-
-    Args:
-        x (array-like of shape (ny, nx) or (nx,), optional): The x-coordinates of the ``z`` values.
-            May be 2D with the same shape as ``z.shape``, or 1D with length ``nx = z.shape[1]``.
-            If not specified are assumed to be ``np.arange(nx)``. Must be ordered monotonically.
-        y (array-like of shape (ny, nx) or (ny,), optional): The y-coordinates of the ``z`` values.
-            May be 2D with the same shape as ``z.shape``, or 1D with length ``ny = z.shape[0]``.
-            If not specified are assumed to be ``np.arange(ny)``. Must be ordered monotonically.
-        z (array-like of shape (ny, nx), may be a masked array): The 2D gridded values to calculate
-            the contours of.  May be a masked array, and any invalid values (``np.inf`` or
-            ``np.nan``) will also be masked out.
-        name (str): Algorithm name, one of ``"serial"``, ``"threaded"``, ``"mpl2005"`` or
-            ``"mpl2014"``, default ``"serial"``.
-        corner_mask (bool, optional): Enable/disable corner masking, which only has an effect if
-            ``z`` is a masked array.  If ``False``, any quad touching a masked point is masked out.
-            If ``True``, only the triangular corners of quads nearest these points are always masked
-            out, other triangular corners comprising three unmasked points are contoured as usual.
-            If not specified, uses the default provided by the algorithm ``name``.
-        line_type (LineType, optional): The format of contour line data returned from calls to
-            :meth:`~contourpy.ContourGenerator.lines`.  If not specified, uses the default provided
-            by the algorithm ``name``.
-        fill_type (FillType, optional): The format of filled contour data returned from calls to
-            :meth:`~contourpy.ContourGenerator.filled`.  If not specified, uses the default provided
-            by the algorithm ``name``.
-        chunk_size (int or tuple(int, int), optional): Chunk size in (y, x) directions, or the same
-            size in both directions if only one value is specified.
-        chunk_count (int or tuple(int, int), optional): Chunk count in (y, x) directions, or the
-            same count in both directions if only one value is specified.
-        total_chunk_count (int, optional): Total number of chunks.
-        quad_as_tri (bool): Enable/disable treating quads as 4 triangles, default ``False``.
-            If ``False``, a contour line within a quad is a straight line between points on two of
-            its edges.  If ``True``, each full quad is divided into 4 triangles using a virtual point
-            at the centre (mean x, y of the corner points) and a contour line is piecewise linear
-            within those triangles.  Corner-masked triangles are not affected by this setting, only
-            full unmasked quads.
-        z_interp (ZInterp): How to interpolate ``z`` values when determining where contour lines
-            intersect the edges of quads and the ``z`` values of the central points of quads,
-            default ``ZInterp.Linear``.
-        thread_count (int): Number of threads to use for contour calculation, default 0.  Threads
-            can only be used with an algorithm ``name`` that supports threads (currently only
-            ``name="threaded"``) and there must be at least the same number of chunks as threads.
-            If ``thread_count=0`` and ``name="threaded"`` then it uses the maximum number of threads
-            as determined by the C++11 call ``std::thread::hardware_concurrency()``.  If ``name`` is
-            something other than ``"threaded"`` then the ``thread_count`` will be set to ``1``.
-
-    Return:
-        :class:`~contourpy._contourpy.ContourGenerator`.
-
-    Note:
-        A maximum of one of ``chunk_size``, ``chunk_count`` and ``total_chunk_count`` may be
-        specified.
-
-    Warning:
-        The ``name="mpl2005"`` algorithm does not implement chunking for contour lines.
-    """
-    x = np.asarray(x, dtype=np.float64)
-    y = np.asarray(y, dtype=np.float64)
-    z, mask = _remove_z_mask(z)
-
-    # Check arguments: z.
-    if z.ndim != 2:
-        raise TypeError(f"Input z must be 2D, not {z.ndim}D")
-
-    if z.shape[0] < 2 or z.shape[1] < 2:
-        raise TypeError(f"Input z must be at least a (2, 2) shaped array, but has shape {z.shape}")
-
-    ny, nx = z.shape
-
-    # Check arguments: x and y.
-    if x.ndim != y.ndim:
-        raise TypeError(f"Number of dimensions of x ({x.ndim}) and y ({y.ndim}) do not match")
-
-    if x.ndim == 0:
-        x = np.arange(nx, dtype=np.float64)
-        y = np.arange(ny, dtype=np.float64)
-        x, y = np.meshgrid(x, y)
-    elif x.ndim == 1:
-        if len(x) != nx:
-            raise TypeError(f"Length of x ({len(x)}) must match number of columns in z ({nx})")
-        if len(y) != ny:
-            raise TypeError(f"Length of y ({len(y)}) must match number of rows in z ({ny})")
-        x, y = np.meshgrid(x, y)
-    elif x.ndim == 2:
-        if x.shape != z.shape:
-            raise TypeError(f"Shapes of x {x.shape} and z {z.shape} do not match")
if y.shape != z.shape:
|
167 |
-
raise TypeError(f"Shapes of y {y.shape} and z {z.shape} do not match")
|
168 |
-
else:
|
169 |
-
raise TypeError(f"Inputs x and y must be None, 1D or 2D, not {x.ndim}D")
|
170 |
-
|
171 |
-
# Check mask shape just in case.
|
172 |
-
if mask is not None and mask.shape != z.shape:
|
173 |
-
raise ValueError("If mask is set it must be a 2D array with the same shape as z")
|
174 |
-
|
175 |
-
# Check arguments: name.
|
176 |
-
if name not in _class_lookup:
|
177 |
-
raise ValueError(f"Unrecognised contour generator name: {name}")
|
178 |
-
|
179 |
-
# Check arguments: chunk_size, chunk_count and total_chunk_count.
|
180 |
-
y_chunk_size, x_chunk_size = calc_chunk_sizes(
|
181 |
-
chunk_size, chunk_count, total_chunk_count, ny, nx)
|
182 |
-
|
183 |
-
cls = _class_lookup[name]
|
184 |
-
|
185 |
-
# Check arguments: corner_mask.
|
186 |
-
if corner_mask is None:
|
187 |
-
# Set it to default, which is True if the algorithm supports it.
|
188 |
-
corner_mask = cls.supports_corner_mask()
|
189 |
-
elif corner_mask and not cls.supports_corner_mask():
|
190 |
-
raise ValueError(f"{name} contour generator does not support corner_mask=True")
|
191 |
-
|
192 |
-
# Check arguments: line_type.
|
193 |
-
if line_type is None:
|
194 |
-
line_type = cls.default_line_type
|
195 |
-
else:
|
196 |
-
line_type = as_line_type(line_type)
|
197 |
-
|
198 |
-
if not cls.supports_line_type(line_type):
|
199 |
-
raise ValueError(f"{name} contour generator does not support line_type {line_type}")
|
200 |
-
|
201 |
-
# Check arguments: fill_type.
|
202 |
-
if fill_type is None:
|
203 |
-
fill_type = cls.default_fill_type
|
204 |
-
else:
|
205 |
-
fill_type = as_fill_type(fill_type)
|
206 |
-
|
207 |
-
if not cls.supports_fill_type(fill_type):
|
208 |
-
raise ValueError(f"{name} contour generator does not support fill_type {fill_type}")
|
209 |
-
|
210 |
-
# Check arguments: quad_as_tri.
|
211 |
-
if quad_as_tri and not cls.supports_quad_as_tri():
|
212 |
-
raise ValueError(f"{name} contour generator does not support quad_as_tri=True")
|
213 |
-
|
214 |
-
# Check arguments: z_interp.
|
215 |
-
if z_interp is None:
|
216 |
-
z_interp = ZInterp.Linear
|
217 |
-
else:
|
218 |
-
z_interp = as_z_interp(z_interp)
|
219 |
-
|
220 |
-
if z_interp != ZInterp.Linear and not cls.supports_z_interp():
|
221 |
-
raise ValueError(f"{name} contour generator does not support z_interp {z_interp}")
|
222 |
-
|
223 |
-
# Check arguments: thread_count.
|
224 |
-
if thread_count not in (0, 1) and not cls.supports_threads():
|
225 |
-
raise ValueError(f"{name} contour generator does not support thread_count {thread_count}")
|
226 |
-
|
227 |
-
# Prepare args and kwargs for contour generator constructor.
|
228 |
-
args = [x, y, z, mask]
|
229 |
-
kwargs: dict[str, int | bool | LineType | FillType | ZInterp] = {
|
230 |
-
"x_chunk_size": x_chunk_size,
|
231 |
-
"y_chunk_size": y_chunk_size,
|
232 |
-
}
|
233 |
-
|
234 |
-
if name not in ("mpl2005", "mpl2014"):
|
235 |
-
kwargs["line_type"] = line_type
|
236 |
-
kwargs["fill_type"] = fill_type
|
237 |
-
|
238 |
-
if cls.supports_corner_mask():
|
239 |
-
kwargs["corner_mask"] = corner_mask
|
240 |
-
|
241 |
-
if cls.supports_quad_as_tri():
|
242 |
-
kwargs["quad_as_tri"] = quad_as_tri
|
243 |
-
|
244 |
-
if cls.supports_z_interp():
|
245 |
-
kwargs["z_interp"] = z_interp
|
246 |
-
|
247 |
-
if cls.supports_threads():
|
248 |
-
kwargs["thread_count"] = thread_count
|
249 |
-
|
250 |
-
# Create contour generator.
|
251 |
-
cont_gen = cls(*args, **kwargs)
|
252 |
-
|
253 |
-
return cont_gen
|
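The deleted factory above delegates its chunking arithmetic to a `calc_chunk_sizes` helper whose body is not shown here. As a rough, hypothetical sketch of that kind of arithmetic (the function name, signature, and exact rounding below are assumptions for illustration, not contourpy's actual implementation): a grid of (ny, nx) points contains (ny-1, nx-1) quads, and each chunk covers at most `chunk_size` quads per direction.

```python
import math

def chunk_counts_from_size(ny, nx, chunk_size):
    # Hypothetical sketch, not contourpy's real calc_chunk_sizes:
    # accept either a single int or a (y, x) pair, mirroring the
    # "same size in both directions if only one value" behaviour
    # described in the docstring above.
    if isinstance(chunk_size, int):
        y_size, x_size = chunk_size, chunk_size
    else:
        y_size, x_size = chunk_size
    # (ny, nx) points form (ny-1, nx-1) quads; round up so a partial
    # trailing chunk still counts.
    return math.ceil((ny - 1) / y_size), math.ceil((nx - 1) / x_size)

print(chunk_counts_from_size(101, 201, 50))      # -> (2, 4)
print(chunk_counts_from_size(5, 5, (2, 4)))      # -> (2, 1)
```

This illustrates why the docstring requires at least as many chunks as threads for the threaded algorithm: the chunk grid is the unit of parallel work.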
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7f39cecc.js
DELETED
@@ -1,2 +0,0 @@
import{E as u,L as v}from"./index-f8ff95a1.js";import{s as k,t,h as S,L as w,i as z,w as x,f as R,a as U,b as _,I as T,x as V}from"./index-3ba00a4a.js";import"./index-1d65707a.js";import"./Blocks-c9e1499d.js";import"./Button-f155035a.js";import"./BlockLabel-66866176.js";import"./Empty-eec13822.js";import"./Copy-9f1657c4.js";import"./Download-daff1959.js";const Y=94,g=1,C=95,Z=96,f=2,$=[9,10,11,12,13,32,133,160,5760,8192,8193,8194,8195,8196,8197,8198,8199,8200,8201,8202,8232,8233,8239,8287,12288],G=58,N=40,X=95,q=91,c=45,E=46,j=35,D=37;function p(e){return e>=65&&e<=90||e>=97&&e<=122||e>=161}function I(e){return e>=48&&e<=57}const B=new u((e,o)=>{for(let r=!1,a=0,O=0;;O++){let{next:l}=e;if(p(l)||l==c||l==X||r&&I(l))!r&&(l!=c||O>0)&&(r=!0),a===O&&l==c&&a++,e.advance();else{r&&e.acceptToken(l==N?C:a==2&&o.canShift(f)?f:Z);break}}}),A=new u(e=>{if($.includes(e.peek(-1))){let{next:o}=e;(p(o)||o==X||o==j||o==E||o==q||o==G||o==c)&&e.acceptToken(Y)}}),F=new u(e=>{if(!$.includes(e.peek(-1))){let{next:o}=e;if(o==D&&(e.advance(),e.acceptToken(g)),p(o)){do e.advance();while(p(e.next));e.acceptToken(g)}}}),L=k({"AtKeyword import charset namespace keyframes media supports":t.definitionKeyword,"from to selector":t.keyword,NamespaceName:t.namespace,KeyframeName:t.labelName,TagName:t.tagName,ClassName:t.className,PseudoClassName:t.constant(t.className),IdName:t.labelName,"FeatureName PropertyName":t.propertyName,AttributeName:t.attributeName,NumberLiteral:t.number,KeywordQuery:t.keyword,UnaryQueryOp:t.operatorKeyword,"CallTag ValueName":t.atom,VariableName:t.variableName,Callee:t.operatorKeyword,Unit:t.unit,"UniversalSelector NestingSelector":t.definitionOperator,MatchOp:t.compareOperator,"ChildOp SiblingOp, LogicOp":t.logicOperator,BinOp:t.arithmeticOperator,Important:t.modifier,Comment:t.blockComment,ParenthesizedContent:t.special(t.name),ColorLiteral:t.color,StringLiteral:t.string,":":t.punctuation,"PseudoOp #":t.derefOperator,"; ,":t.separator,"( )":t.paren,"[ 
]":t.squareBracket,"{ }":t.brace}),K={__proto__:null,lang:32,"nth-child":32,"nth-last-child":32,"nth-of-type":32,"nth-last-of-type":32,dir:32,"host-context":32,url:60,"url-prefix":60,domain:60,regexp:60,selector:134},J={__proto__:null,"@import":114,"@media":138,"@charset":142,"@namespace":146,"@keyframes":152,"@supports":164},H={__proto__:null,not:128,only:128,from:158,to:160},M=v.deserialize({version:14,states:"7WQYQ[OOO#_Q[OOOOQP'#Cd'#CdOOQP'#Cc'#CcO#fQ[O'#CfO$YQXO'#CaO$aQ[O'#ChO$lQ[O'#DPO$qQ[O'#DTOOQP'#Ed'#EdO$vQdO'#DeO%bQ[O'#DrO$vQdO'#DtO%sQ[O'#DvO&OQ[O'#DyO&TQ[O'#EPO&cQ[O'#EROOQS'#Ec'#EcOOQS'#ET'#ETQYQ[OOO&jQXO'#CdO'_QWO'#DaO'dQWO'#EjO'oQ[O'#EjQOQWOOOOQP'#Cg'#CgOOQP,59Q,59QO#fQ[O,59QO'yQ[O'#EWO(eQWO,58{O(mQ[O,59SO$lQ[O,59kO$qQ[O,59oO'yQ[O,59sO'yQ[O,59uO'yQ[O,59vO(xQ[O'#D`OOQS,58{,58{OOQP'#Ck'#CkOOQO'#C}'#C}OOQP,59S,59SO)PQWO,59SO)UQWO,59SOOQP'#DR'#DROOQP,59k,59kOOQO'#DV'#DVO)ZQ`O,59oOOQS'#Cp'#CpO$vQdO'#CqO)cQvO'#CsO*pQtO,5:POOQO'#Cx'#CxO)UQWO'#CwO+UQWO'#CyOOQS'#Eg'#EgOOQO'#Dh'#DhO+ZQ[O'#DoO+iQWO'#EkO&TQ[O'#DmO+wQWO'#DpOOQO'#El'#ElO(hQWO,5:^O+|QpO,5:`OOQS'#Dx'#DxO,UQWO,5:bO,ZQ[O,5:bOOQO'#D{'#D{O,cQWO,5:eO,hQWO,5:kO,pQWO,5:mOOQS-E8R-E8RO$vQdO,59{O,xQ[O'#EYO-VQWO,5;UO-VQWO,5;UOOQP1G.l1G.lO-|QXO,5:rOOQO-E8U-E8UOOQS1G.g1G.gOOQP1G.n1G.nO)PQWO1G.nO)UQWO1G.nOOQP1G/V1G/VO.ZQ`O1G/ZO.tQXO1G/_O/[QXO1G/aO/rQXO1G/bO0YQWO,59zO0_Q[O'#DOO0fQdO'#CoOOQP1G/Z1G/ZO$vQdO1G/ZO0mQpO,59]OOQS,59_,59_O$vQdO,59aO0uQWO1G/kOOQS,59c,59cO0zQ!bO,59eO1SQWO'#DhO1_QWO,5:TO1dQWO,5:ZO&TQ[O,5:VO&TQ[O'#EZO1lQWO,5;VO1wQWO,5:XO'yQ[O,5:[OOQS1G/x1G/xOOQS1G/z1G/zOOQS1G/|1G/|O2YQWO1G/|O2_QdO'#D|OOQS1G0P1G0POOQS1G0V1G0VOOQS1G0X1G0XO2mQtO1G/gOOQO,5:t,5:tO3TQ[O,5:tOOQO-E8W-E8WO3bQWO1G0pOOQP7+$Y7+$YOOQP7+$u7+$uO$vQdO7+$uOOQS1G/f1G/fO3mQXO'#EiO3tQWO,59jO3yQtO'#EUO4nQdO'#EfO4xQWO,59ZO4}QpO7+$uOOQS1G.w1G.wOOQS1G.{1G.{OOQS7+%V7+%VO5VQWO1G/PO$vQdO1G/oOOQO1G/u1G/uOOQO1G/q1G/qO5[QWO,5:uOOQO-E8X-E8XO5jQXO1G/vOOQS7+%h7+%hO5qQYO'#CsO(hQWO'#E[O5yQdO,5:hOOQS,5:h,5:hO6XQtO'#EXO$vQdO'#EXO7VQdO7+%ROOQO7+%R7+%ROOQO1G0`1G0`O7jQ
pO<<HaO7rQWO,5;TOOQP1G/U1G/UOOQS-E8S-E8SO$vQdO'#EVO7zQWO,5;QOOQT1G.u1G.uOOQP<<Ha<<HaOOQS7+$k7+$kO8SQdO7+%ZOOQO7+%b7+%bOOQS,5:v,5:vOOQS-E8Y-E8YOOQS1G0S1G0SO8ZQtO,5:sOOQS-E8V-E8VOOQO<<Hm<<HmOOQPAN={AN={O9XQdO,5:qOOQO-E8T-E8TOOQO<<Hu<<Hu",stateData:"9i~O#UOSROS~OUXOXXO]UO^UOtVOxWO!Y`O!ZYO!gZO!i[O!k]O!n^O!t_O#SQO#XSO~OQeOUXOXXO]UO^UOtVOxWO!Y`O!ZYO!gZO!i[O!k]O!n^O!t_O#SdO#XSO~O#P#^P~P!ZO#SiO~O]nO^nOplOtoOxpO|qO!PsO#QrO#XkO~O!RtO~P#kO`zO#RwO#SvO~O#S{O~O#S}O~OQ!WOb!QOf!WOh!WOn!VO#R!TO#S!PO#[!RO~Ob!YO!b![O!e!]O#S!XO!R#_P~Oh!bOn!VO#S!aO~O#S!dO~Ob!YO!b![O!e!]O#S!XO~O!W#_P~P%bO]WX]!UX^WXpWXtWXxWX|WX!PWX!RWX#QWX#XWX~O]!iO~O!W!jO#P#^X!Q#^X~O#P#^X!Q#^X~P!ZOUXOXXO]UO^UOtVOxWO#SQO#XSO~OplO!RtO~O`!sO#RwO#SvO~O!Q#^P~P!ZOb!zO~Ob!{O~Ov!|Oz!}O~OP#PObgXjgX!WgX!bgX!egX#SgXagXQgXfgXhgXngXpgX!VgX#PgX#RgX#[gXvgX!QgX~Ob!YOj#QO!b![O!e!]O#S!XO!W#_P~Ob#TO~Ob!YO!b![O!e!]O#S#UO~Op#YO!`#XO!R#_X!W#_X~Ob#]O~Oj#QO!W#_O~O!W#`O~Oh#aOn!VO~O!R#bO~O!RtO!`#XO~O!RtO!W#eO~O!W!|X#P!|X!Q!|X~P!ZO!W!jO#P#^a!Q#^a~O]nO^nOtoOxpO|qO!PsO#QrO#XkO~Op!za!R!zaa!za~P-bOv#lOz#mO~O]nO^nOtoOxpO#XkO~Op{i|{i!P{i!R{i#Q{ia{i~P.cOp}i|}i!P}i!R}i#Q}ia}i~P.cOp!Oi|!Oi!P!Oi!R!Oi#Q!Oia!Oi~P.cO!Q#nO~Oa#]P~P'yOa#YP~P$vOa#uOj#QO~O!W#wO~Oh#xOo#xO~O]!^Xa![X!`![X~O]#yO~Oa#zO!`#XO~Op#YO!R#_a!W#_a~O!`#XOp!aa!R!aa!W!aaa!aa~O!W$PO~O!Q$TO!q$RO!r$RO#[$QO~Oj#QOp$VO!V$XO!W!Ti#P!Ti!Q!Ti~P$vO!W!|a#P!|a!Q!|a~P!ZO!W!jO#P#^i!Q#^i~Oa#]X~P#kOa$]O~Oj#QOQ!xXa!xXb!xXf!xXh!xXn!xXp!xX#R!xX#S!xX#[!xX~Op$_Oa#YX~P$vOa$aO~Oj#QOv$bO~Oa$cO~O!`#XOp!}a!R!}a!W!}a~Oa$eO~P-bOP#PO!RgX~O!Q$hO!q$RO!r$RO#[$QO~Oj#QOQ!{Xb!{Xf!{Xh!{Xn!{Xp!{X!V!{X!W!{X#P!{X#R!{X#S!{X#[!{X!Q!{X~Op$VO!V$kO!W!Tq#P!Tq!Q!Tq~P$vOj#QOv$lO~OplOa#]a~Op$_Oa#Ya~Oa$oO~P$vOj#QOQ!{ab!{af!{ah!{an!{ap!{a!V!{a!W!{a#P!{a#R!{a#S!{a#[!{a!Q!{a~Oa!yap!ya~P$vOo#[j!Pj~",goto:",`#aPPPPP#bP#k#zP#k$Z#kPP$aPPP$g$p$pP%SP$pP$p%j%|PPP&f&l#kP&rP#kP&xP#kP#k#kPPP'O'b'oPP#bPP'v'v(Q'vP'vP'v'vP#bP#bP#bP(T#bP(W(ZPP#bP#bP(^(m({)R)])c)m)sPPPPPP)y*SP*o*rP+h+k+q+z_aOPcgt!j#hkXOPcglqrst!j!z#]#hkROPcglqrst!j!z#]#hQjSR!mkQxUR!qnQ!qzQ#S!UR#k!
sq!WY[!Q!i!{!}#Q#f#m#r#y$V$W$_$d$mp!WY[!Q!i!{!}#Q#f#m#r#y$V$W$_$d$mT$R#b$Sq!UY[!Q!i!{!}#Q#f#m#r#y$V$W$_$d$mp!WY[!Q!i!{!}#Q#f#m#r#y$V$W$_$d$mQ!b]R#a!cQyUR!rnQ!qyR#k!rQ|VR!toQ!OWR!upQuTQ!pmQ#^!_Q#d!fQ#e!gR$f$RSfPtQ!lgQ#g!jR$Y#hZePgt!j#ha!^Z_`!S!Y![#X#YR#V!YR!c]R!e^R#c!eQcOSgPtU!hcg#hR#h!jQ#r!{U$^#r$d$mQ$d#yR$m$_Q$`#rR$n$`QmTS!om$[R$[#oQ$W#fR$j$WQ!kfS#i!k#jR#j!lQ#Z!ZR#}#ZQ$S#bR$g$S_bOPcgt!j#h^TOPcgt!j#hQ!nlQ!vqQ!wrQ!xsQ#o!zR$O#]R#s!{Q!SYQ!`[Q#O!QQ#f!i[#q!{#r#y$_$d$mQ#t!}Q#v#QS$U#f$WQ$Z#mR$i$VR#p!zQhPR!ytQ!_ZQ!g`R#R!SU!ZZ`!SQ!f_Q#W!YQ#[![Q#{#XR#|#Y",nodeNames:"⚠ Unit VariableName Comment StyleSheet RuleSet UniversalSelector TagSelector TagName NestingSelector ClassSelector ClassName PseudoClassSelector : :: PseudoClassName PseudoClassName ) ( ArgList ValueName ParenthesizedValue ColorLiteral NumberLiteral StringLiteral BinaryExpression BinOp CallExpression Callee CallLiteral CallTag ParenthesizedContent , PseudoClassName ArgList IdSelector # IdName ] AttributeSelector [ AttributeName MatchOp ChildSelector ChildOp DescendantSelector SiblingSelector SiblingOp } { Block Declaration PropertyName Important ; ImportStatement AtKeyword import KeywordQuery FeatureQuery FeatureName BinaryQuery LogicOp UnaryQuery UnaryQueryOp ParenthesizedQuery SelectorQuery selector MediaStatement media CharsetStatement charset NamespaceStatement namespace NamespaceName KeyframesStatement keyframes KeyframeName KeyframeList from to SupportsStatement supports AtRule 
Styles",maxTerm:108,nodeProps:[["openedBy",17,"(",48,"{"],["closedBy",18,")",49,"}"]],propSources:[L],skippedNodes:[0,3],repeatNodeCount:8,tokenData:"Lq~R!^OX$}X^%u^p$}pq%uqr)Xrs.Rst/utu6duv$}vw7^wx7oxy9^yz9oz{9t{|:_|}?Q}!O?c!O!P@Q!P!Q@i!Q![Cu![!]Dp!]!^El!^!_$}!_!`E}!`!aF`!a!b$}!b!cG[!c!}$}!}#OHt#O#P$}#P#QIV#Q#R6d#R#T$}#T#UIh#U#c$}#c#dJy#d#o$}#o#pK`#p#q6d#q#rKq#r#sLS#s#y$}#y#z%u#z$f$}$f$g%u$g#BY$}#BY#BZ%u#BZ$IS$}$IS$I_%u$I_$I|$}$I|$JO%u$JO$JT$}$JT$JU%u$JU$KV$}$KV$KW%u$KW&FU$}&FU&FV%u&FV;'S$};'S;=`Lk<%lO$}W%QSOy%^z;'S%^;'S;=`%o<%lO%^W%cSoWOy%^z;'S%^;'S;=`%o<%lO%^W%rP;=`<%l%^~%zh#U~OX%^X^'f^p%^pq'fqy%^z#y%^#y#z'f#z$f%^$f$g'f$g#BY%^#BY#BZ'f#BZ$IS%^$IS$I_'f$I_$I|%^$I|$JO'f$JO$JT%^$JT$JU'f$JU$KV%^$KV$KW'f$KW&FU%^&FU&FV'f&FV;'S%^;'S;=`%o<%lO%^~'mh#U~oWOX%^X^'f^p%^pq'fqy%^z#y%^#y#z'f#z$f%^$f$g'f$g#BY%^#BY#BZ'f#BZ$IS%^$IS$I_'f$I_$I|%^$I|$JO'f$JO$JT%^$JT$JU'f$JU$KV%^$KV$KW'f$KW&FU%^&FU&FV'f&FV;'S%^;'S;=`%o<%lO%^^)[UOy%^z#]%^#]#^)n#^;'S%^;'S;=`%o<%lO%^^)sUoWOy%^z#a%^#a#b*V#b;'S%^;'S;=`%o<%lO%^^*[UoWOy%^z#d%^#d#e*n#e;'S%^;'S;=`%o<%lO%^^*sUoWOy%^z#c%^#c#d+V#d;'S%^;'S;=`%o<%lO%^^+[UoWOy%^z#f%^#f#g+n#g;'S%^;'S;=`%o<%lO%^^+sUoWOy%^z#h%^#h#i,V#i;'S%^;'S;=`%o<%lO%^^,[UoWOy%^z#T%^#T#U,n#U;'S%^;'S;=`%o<%lO%^^,sUoWOy%^z#b%^#b#c-V#c;'S%^;'S;=`%o<%lO%^^-[UoWOy%^z#h%^#h#i-n#i;'S%^;'S;=`%o<%lO%^^-uS!VUoWOy%^z;'S%^;'S;=`%o<%lO%^~.UWOY.RZr.Rrs.ns#O.R#O#P.s#P;'S.R;'S;=`/o<%lO.R~.sOh~~.vRO;'S.R;'S;=`/P;=`O.R~/SXOY.RZr.Rrs.ns#O.R#O#P.s#P;'S.R;'S;=`/o;=`<%l.R<%lO.R~/rP;=`<%l.R_/zYtPOy%^z!Q%^!Q![0j![!c%^!c!i0j!i#T%^#T#Z0j#Z;'S%^;'S;=`%o<%lO%^^0oYoWOy%^z!Q%^!Q![1_![!c%^!c!i1_!i#T%^#T#Z1_#Z;'S%^;'S;=`%o<%lO%^^1dYoWOy%^z!Q%^!Q![2S![!c%^!c!i2S!i#T%^#T#Z2S#Z;'S%^;'S;=`%o<%lO%^^2ZYfUoWOy%^z!Q%^!Q![2y![!c%^!c!i2y!i#T%^#T#Z2y#Z;'S%^;'S;=`%o<%lO%^^3QYfUoWOy%^z!Q%^!Q![3p![!c%^!c!i3p!i#T%^#T#Z3p#Z;'S%^;'S;=`%o<%lO%^^3uYoWOy%^z!Q%^!Q![4e![!c%^!c!i4e!i#T%^#T#Z4e#Z;'S%^;'S;=`%o<%lO%^^4lYfUoWOy%^z!Q%^!Q![5[![!c%^!c!i5[!i#T%^#T#Z5[#Z;'S%^;'S;=`%o<%lO%^^5aYoWOy%^z!Q%^!Q![6P![!c%^!c!i6P!i#T%^#T#Z6P#Z;'S%^;'S;=`%o<
%lO%^^6WSfUoWOy%^z;'S%^;'S;=`%o<%lO%^Y6gUOy%^z!_%^!_!`6y!`;'S%^;'S;=`%o<%lO%^Y7QSzQoWOy%^z;'S%^;'S;=`%o<%lO%^X7cSXPOy%^z;'S%^;'S;=`%o<%lO%^~7rWOY7oZw7owx.nx#O7o#O#P8[#P;'S7o;'S;=`9W<%lO7o~8_RO;'S7o;'S;=`8h;=`O7o~8kXOY7oZw7owx.nx#O7o#O#P8[#P;'S7o;'S;=`9W;=`<%l7o<%lO7o~9ZP;=`<%l7o_9cSbVOy%^z;'S%^;'S;=`%o<%lO%^~9tOa~_9{UUPjSOy%^z!_%^!_!`6y!`;'S%^;'S;=`%o<%lO%^_:fWjS!PPOy%^z!O%^!O!P;O!P!Q%^!Q![>T![;'S%^;'S;=`%o<%lO%^^;TUoWOy%^z!Q%^!Q![;g![;'S%^;'S;=`%o<%lO%^^;nYoW#[UOy%^z!Q%^!Q![;g![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^^<cYoWOy%^z{%^{|=R|}%^}!O=R!O!Q%^!Q![=j![;'S%^;'S;=`%o<%lO%^^=WUoWOy%^z!Q%^!Q![=j![;'S%^;'S;=`%o<%lO%^^=qUoW#[UOy%^z!Q%^!Q![=j![;'S%^;'S;=`%o<%lO%^^>[[oW#[UOy%^z!O%^!O!P;g!P!Q%^!Q![>T![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^_?VSpVOy%^z;'S%^;'S;=`%o<%lO%^^?hWjSOy%^z!O%^!O!P;O!P!Q%^!Q![>T![;'S%^;'S;=`%o<%lO%^_@VU#XPOy%^z!Q%^!Q![;g![;'S%^;'S;=`%o<%lO%^~@nTjSOy%^z{@}{;'S%^;'S;=`%o<%lO%^~ASUoWOy@}yzAfz{Bm{;'S@};'S;=`Co<%lO@}~AiTOzAfz{Ax{;'SAf;'S;=`Bg<%lOAf~A{VOzAfz{Ax{!PAf!P!QBb!Q;'SAf;'S;=`Bg<%lOAf~BgOR~~BjP;=`<%lAf~BrWoWOy@}yzAfz{Bm{!P@}!P!QC[!Q;'S@};'S;=`Co<%lO@}~CcSoWR~Oy%^z;'S%^;'S;=`%o<%lO%^~CrP;=`<%l@}^Cz[#[UOy%^z!O%^!O!P;g!P!Q%^!Q![>T![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^XDuU]POy%^z![%^![!]EX!];'S%^;'S;=`%o<%lO%^XE`S^PoWOy%^z;'S%^;'S;=`%o<%lO%^_EqS!WVOy%^z;'S%^;'S;=`%o<%lO%^YFSSzQOy%^z;'S%^;'S;=`%o<%lO%^XFeU|POy%^z!`%^!`!aFw!a;'S%^;'S;=`%o<%lO%^XGOS|PoWOy%^z;'S%^;'S;=`%o<%lO%^XG_WOy%^z!c%^!c!}Gw!}#T%^#T#oGw#o;'S%^;'S;=`%o<%lO%^XHO[!YPoWOy%^z}%^}!OGw!O!Q%^!Q![Gw![!c%^!c!}Gw!}#T%^#T#oGw#o;'S%^;'S;=`%o<%lO%^XHySxPOy%^z;'S%^;'S;=`%o<%lO%^^I[SvUOy%^z;'S%^;'S;=`%o<%lO%^XIkUOy%^z#b%^#b#cI}#c;'S%^;'S;=`%o<%lO%^XJSUoWOy%^z#W%^#W#XJf#X;'S%^;'S;=`%o<%lO%^XJmS!`PoWOy%^z;'S%^;'S;=`%o<%lO%^XJ|UOy%^z#f%^#f#gJf#g;'S%^;'S;=`%o<%lO%^XKeS!RPOy%^z;'S%^;'S;=`%o<%lO%^_KvS!QVOy%^z;'S%^;'S;=`%o<%lO%^ZLXU!PPOy%^z!_%^!_!`6y!`;'S%^;'S;=`%o<%lO%^WLnP;=`<%l$}",tokenizers:[A,F,B,0,1,2,3],topRules:{StyleSheet:[0,4],Styles:[1,84]},specialized:[{term:95,get:e=>K[e]||-
1},{term:56,get:e=>J[e]||-1},{term:96,get:e=>H[e]||-1}],tokenPrec:1123});let Q=null;function m(){if(!Q&&typeof document=="object"&&document.body){let{style:e}=document.body,o=[],r=new Set;for(let a in e)a!="cssText"&&a!="cssFloat"&&typeof e[a]=="string"&&(/[A-Z]/.test(a)&&(a=a.replace(/[A-Z]/g,O=>"-"+O.toLowerCase())),r.has(a)||(o.push(a),r.add(a)));Q=o.sort().map(a=>({type:"property",label:a}))}return Q||[]}const h=["active","after","any-link","autofill","backdrop","before","checked","cue","default","defined","disabled","empty","enabled","file-selector-button","first","first-child","first-letter","first-line","first-of-type","focus","focus-visible","focus-within","fullscreen","has","host","host-context","hover","in-range","indeterminate","invalid","is","lang","last-child","last-of-type","left","link","marker","modal","not","nth-child","nth-last-child","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","part","placeholder","placeholder-shown","read-only","read-write","required","right","root","scope","selection","slotted","target","target-text","valid","visited","where"].map(e=>({type:"class",label:e})),b=["above","absolute","activeborder","additive","activecaption","after-white-space","ahead","alias","all","all-scroll","alphabetic","alternate","always","antialiased","appworkspace","asterisks","attr","auto","auto-flow","avoid","avoid-column","avoid-page","avoid-region","axis-pan","background","backwards","baseline","below","bidi-override","blink","block","block-axis","bold","bolder","border","border-box","both","bottom","break","break-all","break-word","bullets","button","button-bevel","buttonface","buttonhighlight","buttonshadow","buttontext","calc","capitalize","caps-lock-indicator","caption","captiontext","caret","cell","center","checkbox","circle","cjk-decimal","clear","clip","close-quote","col-resize","collapse","color","color-burn","color-dodge","column","column-reverse","compact","condensed","contain","content","contents","
content-box","context-menu","continuous","copy","counter","counters","cover","crop","cross","crosshair","currentcolor","cursive","cyclic","darken","dashed","decimal","decimal-leading-zero","default","default-button","dense","destination-atop","destination-in","destination-out","destination-over","difference","disc","discard","disclosure-closed","disclosure-open","document","dot-dash","dot-dot-dash","dotted","double","down","e-resize","ease","ease-in","ease-in-out","ease-out","element","ellipse","ellipsis","embed","end","ethiopic-abegede-gez","ethiopic-halehame-aa-er","ethiopic-halehame-gez","ew-resize","exclusion","expanded","extends","extra-condensed","extra-expanded","fantasy","fast","fill","fill-box","fixed","flat","flex","flex-end","flex-start","footnotes","forwards","from","geometricPrecision","graytext","grid","groove","hand","hard-light","help","hidden","hide","higher","highlight","highlighttext","horizontal","hsl","hsla","hue","icon","ignore","inactiveborder","inactivecaption","inactivecaptiontext","infinite","infobackground","infotext","inherit","initial","inline","inline-axis","inline-block","inline-flex","inline-grid","inline-table","inset","inside","intrinsic","invert","italic","justify","keep-all","landscape","large","larger","left","level","lighter","lighten","line-through","linear","linear-gradient","lines","list-item","listbox","listitem","local","logical","loud","lower","lower-hexadecimal","lower-latin","lower-norwegian","lowercase","ltr","luminosity","manipulation","match","matrix","matrix3d","medium","menu","menutext","message-box","middle","min-intrinsic","mix","monospace","move","multiple","multiple_mask_images","multiply","n-resize","narrower","ne-resize","nesw-resize","no-close-quote","no-drop","no-open-quote","no-repeat","none","normal","not-allowed","nowrap","ns-resize","numbers","numeric","nw-resize","nwse-resize","oblique","opacity","open-quote","optimizeLegibility","optimizeSpeed","outset","outside","outside-shape","overlay","overline","p
adding","padding-box","painted","page","paused","perspective","pinch-zoom","plus-darker","plus-lighter","pointer","polygon","portrait","pre","pre-line","pre-wrap","preserve-3d","progress","push-button","radial-gradient","radio","read-only","read-write","read-write-plaintext-only","rectangle","region","relative","repeat","repeating-linear-gradient","repeating-radial-gradient","repeat-x","repeat-y","reset","reverse","rgb","rgba","ridge","right","rotate","rotate3d","rotateX","rotateY","rotateZ","round","row","row-resize","row-reverse","rtl","run-in","running","s-resize","sans-serif","saturation","scale","scale3d","scaleX","scaleY","scaleZ","screen","scroll","scrollbar","scroll-position","se-resize","self-start","self-end","semi-condensed","semi-expanded","separate","serif","show","single","skew","skewX","skewY","skip-white-space","slide","slider-horizontal","slider-vertical","sliderthumb-horizontal","sliderthumb-vertical","slow","small","small-caps","small-caption","smaller","soft-light","solid","source-atop","source-in","source-out","source-over","space","space-around","space-between","space-evenly","spell-out","square","start","static","status-bar","stretch","stroke","stroke-box","sub","subpixel-antialiased","svg_masks","super","sw-resize","symbolic","symbols","system-ui","table","table-caption","table-cell","table-column","table-column-group","table-footer-group","table-header-group","table-row","table-row-group","text","text-bottom","text-top","textarea","textfield","thick","thin","threeddarkshadow","threedface","threedhighlight","threedlightshadow","threedshadow","to","top","transform","translate","translate3d","translateX","translateY","translateZ","transparent","ultra-condensed","ultra-expanded","underline","unidirectional-pan","unset","up","upper-latin","uppercase","url","var","vertical","vertical-text","view-box","visible","visibleFill","visiblePainted","visibleStroke","visual","w-resize","wait","wave","wider","window","windowframe","windowtext","words","wrap"
,"wrap-reverse","x-large","x-small","xor","xx-large","xx-small"].map(e=>({type:"keyword",label:e})).concat(["aliceblue","antiquewhite","aqua","aquamarine","azure","beige","bisque","black","blanchedalmond","blue","blueviolet","brown","burlywood","cadetblue","chartreuse","chocolate","coral","cornflowerblue","cornsilk","crimson","cyan","darkblue","darkcyan","darkgoldenrod","darkgray","darkgreen","darkkhaki","darkmagenta","darkolivegreen","darkorange","darkorchid","darkred","darksalmon","darkseagreen","darkslateblue","darkslategray","darkturquoise","darkviolet","deeppink","deepskyblue","dimgray","dodgerblue","firebrick","floralwhite","forestgreen","fuchsia","gainsboro","ghostwhite","gold","goldenrod","gray","grey","green","greenyellow","honeydew","hotpink","indianred","indigo","ivory","khaki","lavender","lavenderblush","lawngreen","lemonchiffon","lightblue","lightcoral","lightcyan","lightgoldenrodyellow","lightgray","lightgreen","lightpink","lightsalmon","lightseagreen","lightskyblue","lightslategray","lightsteelblue","lightyellow","lime","limegreen","linen","magenta","maroon","mediumaquamarine","mediumblue","mediumorchid","mediumpurple","mediumseagreen","mediumslateblue","mediumspringgreen","mediumturquoise","mediumvioletred","midnightblue","mintcream","mistyrose","moccasin","navajowhite","navy","oldlace","olive","olivedrab","orange","orangered","orchid","palegoldenrod","palegreen","paleturquoise","palevioletred","papayawhip","peachpuff","peru","pink","plum","powderblue","purple","rebeccapurple","red","rosybrown","royalblue","saddlebrown","salmon","sandybrown","seagreen","seashell","sienna","silver","skyblue","slateblue","slategray","snow","springgreen","steelblue","tan","teal","thistle","tomato","turquoise","violet","wheat","white","whitesmoke","yellow","yellowgreen"].map(e=>({type:"constant",label:e}))),ee=["a","abbr","address","article","aside","b","bdi","bdo","blockquote","body","br","button","canvas","caption","cite","code","col","colgroup","dd","del","details","d
fn","dialog","div","dl","dt","em","figcaption","figure","footer","form","header","hgroup","h1","h2","h3","h4","h5","h6","hr","html","i","iframe","img","input","ins","kbd","label","legend","li","main","meter","nav","ol","output","p","pre","ruby","section","select","small","source","span","strong","sub","summary","sup","table","tbody","td","template","textarea","tfoot","th","thead","tr","u","ul"].map(e=>({type:"type",label:e})),n=/^(\w[\w-]*|-\w[\w-]*|)$/,ae=/^-(-[\w-]*)?$/;function Oe(e,o){var r;if((e.name=="("||e.type.isError)&&(e=e.parent||e),e.name!="ArgList")return!1;let a=(r=e.parent)===null||r===void 0?void 0:r.firstChild;return a?.name!="Callee"?!1:o.sliceString(a.from,a.to)=="var"}const y=new V,te=["Declaration"];function W(e,o){if(o.to-o.from>4096){let r=y.get(o);if(r)return r;let a=[],O=new Set,l=o.cursor(T.IncludeAnonymous);if(l.firstChild())do for(let i of W(e,l.node))O.has(i.label)||(O.add(i.label),a.push(i));while(l.nextSibling());return y.set(o,a),a}else{let r=[],a=new Set;return o.cursor().iterate(O=>{var l;if(O.name=="VariableName"&&O.matchContext(te)&&((l=O.node.nextSibling)===null||l===void 0?void 0:l.name)==":"){let i=e.sliceString(O.from,O.to);a.has(i)||(a.add(i),r.push({label:i,type:"variable"}))}}),r}}const oe=e=>{var o;let{state:r,pos:a}=e,O=S(r).resolveInner(a,-1),l=O.type.isError&&O.from==O.to-1&&r.doc.sliceString(O.from,O.to)=="-";if(O.name=="PropertyName"||l&&((o=O.parent)===null||o===void 0?void 0:o.name)=="Block")return{from:O.from,options:m(),validFor:n};if(O.name=="ValueName")return{from:O.from,options:b,validFor:n};if(O.name=="PseudoClassName")return{from:O.from,options:h,validFor:n};if(O.name=="VariableName"||(e.explicit||l)&&Oe(O,r.doc))return{from:O.name=="VariableName"?O.from:a,options:W(r.doc,S(r).topNode),validFor:ae};if(O.name=="TagName"){for(let{parent:d}=O;d;d=d.parent)if(d.name=="Block")return{from:O.from,options:m(),validFor:n};return{from:O.from,options:ee,validFor:n}}if(!e.explicit)return null;let 
i=O.resolve(a),s=i.childBefore(a);return s&&s.name==":"&&i.name=="PseudoClassSelector"?{from:a,options:h,validFor:n}:s&&s.name==":"&&i.name=="Declaration"||i.name=="ArgList"?{from:a,options:b,validFor:n}:i.name=="Block"?{from:a,options:m(),validFor:n}:null},P=w.define({name:"css",parser:M.configure({props:[z.add({Declaration:x()}),R.add({Block:U})]}),languageData:{commentTokens:{block:{open:"/*",close:"*/"}},indentOnInput:/^\s*\}$/,wordChars:"-"}});function me(){return new _(P,P.data.of({autocomplete:oe}))}export{me as css,oe as cssCompletionSource,P as cssLanguage};
//# sourceMappingURL=index-7f39cecc.js.map
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/builder_app.py
DELETED
@@ -1,1003 +0,0 @@
|
|
1 |
-
import inspect
|
2 |
-
import time
|
3 |
-
from typing import Iterable
|
4 |
-
|
5 |
-
from gradio_client.documentation import document_fn
|
6 |
-
|
7 |
-
import gradio as gr
|
8 |
-
|
9 |
-
themes = [
|
10 |
-
gr.themes.Base,
|
11 |
-
gr.themes.Default,
|
12 |
-
gr.themes.Soft,
|
13 |
-
gr.themes.Monochrome,
|
14 |
-
gr.themes.Glass,
|
15 |
-
]
|
16 |
-
colors = gr.themes.Color.all
|
17 |
-
sizes = gr.themes.Size.all
|
18 |
-
|
19 |
-
palette_range = [50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 950]
|
20 |
-
size_range = ["xxs", "xs", "sm", "md", "lg", "xl", "xxl"]
|
21 |
-
docs_theme_core = document_fn(gr.themes.Base.__init__, gr.themes.Base)[1]
|
22 |
-
docs_theme_vars = document_fn(gr.themes.Base.set, gr.themes.Base)[1]
|
23 |
-
|
24 |
-
|
25 |
-
def get_docstr(var):
|
26 |
-
for parameters in docs_theme_core + docs_theme_vars:
|
27 |
-
if parameters["name"] == var:
|
28 |
-
return parameters["doc"]
|
29 |
-
raise ValueError(f"Variable {var} not found in theme documentation.")
|
30 |
-
|
31 |
-
|
32 |
-
def get_doc_theme_var_groups():
|
33 |
-
source = inspect.getsource(gr.themes.Base.set)
|
34 |
-
groups = []
|
35 |
-
group, desc, variables, flat_variables = None, None, [], []
|
36 |
-
for line in source.splitlines():
|
37 |
-
line = line.strip()
|
38 |
-
if line.startswith(")"):
|
39 |
-
break
|
40 |
-
elif line.startswith("# "):
|
41 |
-
if group is not None:
|
42 |
-
groups.append((group, desc, variables))
|
43 |
-
group, desc = line[2:].split(": ")
|
44 |
-
variables = []
|
45 |
-
elif "=" in line:
|
46 |
-
var = line.split("=")[0]
|
47 |
-
variables.append(var)
|
48 |
-
flat_variables.append(var)
|
49 |
-
groups.append((group, desc, variables))
|
50 |
-
return groups, flat_variables
|
51 |
-
|
52 |
-
|
53 |
-
variable_groups, flat_variables = get_doc_theme_var_groups()
|
54 |
-
|
55 |
-
css = """
|
56 |
-
.gradio-container {
|
57 |
-
overflow: visible !important;
|
58 |
-
max-width: none !important;
|
59 |
-
}
|
60 |
-
#controls {
|
61 |
-
max-height: 100vh;
|
62 |
-
flex-wrap: unset;
|
63 |
-
overflow-y: scroll;
|
64 |
-
position: sticky;
|
65 |
-
top: 0;
|
66 |
-
}
|
67 |
-
#controls::-webkit-scrollbar {
|
68 |
-
-webkit-appearance: none;
|
69 |
-
width: 7px;
|
70 |
-
}
|
71 |
-
|
72 |
-
#controls::-webkit-scrollbar-thumb {
|
73 |
-
border-radius: 4px;
|
74 |
-
background-color: rgba(0, 0, 0, .5);
|
75 |
-
box-shadow: 0 0 1px rgba(255, 255, 255, .5);
|
76 |
-
}
|
77 |
-
"""
|
78 |
-
|
with gr.Blocks(  # noqa: SIM117
    theme=gr.themes.Base(),
    css=css,
    title="Gradio Theme Builder",
) as demo:
    with gr.Row():
        with gr.Column(scale=1, elem_id="controls", min_width=400):
            with gr.Row():
                undo_btn = gr.Button("Undo", size="sm")
                dark_mode_btn = gr.Button("Dark Mode", variant="primary", size="sm")
            with gr.Tabs():
                with gr.TabItem("Source Theme"):
                    gr.Markdown(
                        """
                        Select a base theme below you would like to build off of. Note: when you click 'Load Theme', all variable values in other tabs will be overwritten!
                        """
                    )
                    base_theme_dropdown = gr.Dropdown(
                        [theme.__name__ for theme in themes],
                        value="Base",
                        show_label=False,
                    )
                    load_theme_btn = gr.Button("Load Theme", elem_id="load_theme")
                with gr.TabItem("Core Colors"):
                    gr.Markdown(
                        """Set the three hues of the theme: `primary_hue`, `secondary_hue`, and `neutral_hue`.
                        Each of these is a palette ranging from 50 to 950 in brightness. Pick a preset palette - optionally, open the accordion to overwrite specific values.
                        Note that these variables do not affect elements directly, but are referenced by other variables with asterisks, such as `*primary_200` or `*neutral_950`."""
                    )
                    primary_hue = gr.Dropdown(
                        [color.name for color in colors], label="Primary Hue"
                    )
                    with gr.Accordion(label="Primary Hue Palette", open=False):
                        primary_hues = []
                        for i in palette_range:
                            primary_hues.append(
                                gr.ColorPicker(
                                    label=f"primary_{i}",
                                )
                            )

                    secondary_hue = gr.Dropdown(
                        [color.name for color in colors], label="Secondary Hue"
                    )
                    with gr.Accordion(label="Secondary Hue Palette", open=False):
                        secondary_hues = []
                        for i in palette_range:
                            secondary_hues.append(
                                gr.ColorPicker(
                                    label=f"secondary_{i}",
                                )
                            )

                    neutral_hue = gr.Dropdown(
                        [color.name for color in colors], label="Neutral hue"
                    )
                    with gr.Accordion(label="Neutral Hue Palette", open=False):
                        neutral_hues = []
                        for i in palette_range:
                            neutral_hues.append(
                                gr.ColorPicker(
                                    label=f"neutral_{i}",
                                )
                            )

                with gr.TabItem("Core Sizing"):
                    gr.Markdown(
                        """Set the sizing of the theme via: `text_size`, `spacing_size`, and `radius_size`.
                        Each of these is set to a collection of sizes ranging from `xxs` to `xxl`. Pick a preset size collection - optionally, open the accordion to overwrite specific values.
                        Note that these variables do not affect elements directly, but are referenced by other variables with asterisks, such as `*spacing_xl` or `*text_sm`.
                        """
                    )
                    text_size = gr.Dropdown(
                        [size.name for size in sizes if size.name.startswith("text_")],
                        label="Text Size",
                    )
                    with gr.Accordion(label="Text Size Range", open=False):
                        text_sizes = []
                        for i in size_range:
                            text_sizes.append(
                                gr.Textbox(
                                    label=f"text_{i}",
                                )
                            )

                    spacing_size = gr.Dropdown(
                        [
                            size.name
                            for size in sizes
                            if size.name.startswith("spacing_")
                        ],
                        label="Spacing Size",
                    )
                    with gr.Accordion(label="Spacing Size Range", open=False):
                        spacing_sizes = []
                        for i in size_range:
                            spacing_sizes.append(
                                gr.Textbox(
                                    label=f"spacing_{i}",
                                )
                            )

                    radius_size = gr.Dropdown(
                        [
                            size.name
                            for size in sizes
                            if size.name.startswith("radius_")
                        ],
                        label="Radius Size",
                    )
                    with gr.Accordion(label="Radius Size Range", open=False):
                        radius_sizes = []
                        for i in size_range:
                            radius_sizes.append(
                                gr.Textbox(
                                    label=f"radius_{i}",
                                )
                            )

                with gr.TabItem("Core Fonts"):
                    gr.Markdown(
                        """Set the main `font` and the monospace `font_mono` here.
                        Set up to 4 values for each (fallbacks in case a font is not available).
                        Check "Google Font" if font should be loaded from Google Fonts.
                        """
                    )
                    gr.Markdown("### Main Font")
                    main_fonts, main_is_google = [], []
                    for i in range(4):
                        with gr.Row():
                            font = gr.Textbox(label=f"Font {i + 1}")
                            font_is_google = gr.Checkbox(label="Google Font")
                        main_fonts.append(font)
                        main_is_google.append(font_is_google)

                    mono_fonts, mono_is_google = [], []
                    gr.Markdown("### Monospace Font")
                    for i in range(4):
                        with gr.Row():
                            font = gr.Textbox(label=f"Font {i + 1}")
                            font_is_google = gr.Checkbox(label="Google Font")
                        mono_fonts.append(font)
                        mono_is_google.append(font_is_google)

                theme_var_input = []

                core_color_suggestions = (
                    [f"*primary_{i}" for i in palette_range]
                    + [f"*secondary_{i}" for i in palette_range]
                    + [f"*neutral_{i}" for i in palette_range]
                )

                variable_suggestions = {
                    "fill": core_color_suggestions[:],
                    "color": core_color_suggestions[:],
                    "text_size": [f"*text_{i}" for i in size_range],
                    "radius": [f"*radius_{i}" for i in size_range],
                    "padding": [f"*spacing_{i}" for i in size_range],
                    "gap": [f"*spacing_{i}" for i in size_range],
                    "weight": [
                        "100",
                        "200",
                        "300",
                        "400",
                        "500",
                        "600",
                        "700",
                        "800",
                    ],
                    "shadow": ["none"],
                    "border_width": [],
                }
                for variable in flat_variables:
                    if variable.endswith("_dark"):
                        continue
                    for style_type in variable_suggestions:
                        if style_type in variable:
                            variable_suggestions[style_type].append("*" + variable)
                            break

                variable_suggestions["fill"], variable_suggestions["color"] = (
                    variable_suggestions["fill"]
                    + variable_suggestions["color"][len(core_color_suggestions) :],
                    variable_suggestions["color"]
                    + variable_suggestions["fill"][len(core_color_suggestions) :],
                )

                for group, desc, variables in variable_groups:
                    with gr.TabItem(group):
                        gr.Markdown(
                            desc
                            + "\nYou can set these to one of the dropdown values, or clear the dropdown to set a custom value."
                        )
                        for variable in variables:
                            suggestions = []
                            for style_type in variable_suggestions:
                                if style_type in variable:
                                    suggestions = variable_suggestions[style_type][:]
                                    if "*" + variable in suggestions:
                                        suggestions.remove("*" + variable)
                                    break
                            dropdown = gr.Dropdown(
                                label=variable,
                                info=get_docstr(variable),
                                choices=suggestions,
                                allow_custom_value=True,
                            )
                            theme_var_input.append(dropdown)

        # App

        with gr.Column(scale=6, elem_id="app"):
            with gr.Column(variant="panel"):
                gr.Markdown(
                    """
                    # Theme Builder
                    Welcome to the theme builder. The left panel is where you create the theme. The different aspects of the theme are broken down into different tabs. Here's how to navigate them:
                    1. First, set the "Source Theme". This will set the default values that you can override.
                    2. Set the "Core Colors", "Core Sizing" and "Core Fonts". These are the core variables that are used to build the rest of the theme.
                    3. The rest of the tabs set specific CSS theme variables. These control finer aspects of the UI. Within these theme variables, you can reference the core variables and other theme variables using the variable name preceded by an asterisk, e.g. `*primary_50` or `*body_text_color`. Clear the dropdown to set a custom value.
                    4. Once you have finished your theme, click on "View Code" below to see how you can integrate the theme into your app. You can also click on "Upload to Hub" to upload your theme to the Hugging Face Hub, where others can download and use your theme.
                    """
                )
                with gr.Accordion("View Code", open=False):
                    output_code = gr.Code(language="python")
                with gr.Accordion("Upload to Hub", open=False):
                    gr.Markdown(
                        "You can save your theme on the Hugging Face Hub. HF API write token can be found [here](https://huggingface.co/settings/tokens)."
                    )
                    with gr.Row():
                        theme_name = gr.Textbox(label="Theme Name")
                        theme_hf_token = gr.Textbox(label="Hugging Face Write Token")
                        theme_version = gr.Textbox(
                            label="Version",
                            placeholder="Leave blank to automatically update version.",
                        )
                    upload_to_hub_btn = gr.Button("Upload to Hub")
                    theme_upload_status = gr.Markdown(visible=False)

                gr.Markdown("Below this panel is a dummy app to demo your theme.")

            name = gr.Textbox(
                label="Name",
                info="Full name, including middle name. No special characters.",
                placeholder="John Doe",
                value="John Doe",
                interactive=True,
            )

            with gr.Row():
                slider1 = gr.Slider(label="Slider 1")
                slider2 = gr.Slider(label="Slider 2")
            gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group")

            with gr.Row():
                with gr.Column(variant="panel", scale=1):
                    gr.Markdown("## Panel 1")
                    radio = gr.Radio(
                        ["A", "B", "C"],
                        label="Radio",
                        info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
                    )
                    drop = gr.Dropdown(
                        ["Option 1", "Option 2", "Option 3"], show_label=False
                    )
                    drop_2 = gr.Dropdown(
                        ["Option A", "Option B", "Option C"],
                        multiselect=True,
                        value=["Option A"],
                        label="Dropdown",
                        interactive=True,
                    )
                    check = gr.Checkbox(label="Go")
                with gr.Column(variant="panel", scale=2):
                    img = gr.Image(
                        "https://raw.githubusercontent.com/gradio-app/gradio/main/js/_website/src/assets/img/header-image.jpg",
                        label="Image",
                        height=320,
                    )
                    with gr.Row():
                        go_btn = gr.Button(
                            "Go", label="Primary Button", variant="primary"
                        )
                        clear_btn = gr.Button(
                            "Clear", label="Secondary Button", variant="secondary"
                        )

                    def go(*args):
                        time.sleep(3)
                        return "https://raw.githubusercontent.com/gradio-app/gradio/main/js/_website/src/assets/img/header-image.jpg"

                    go_btn.click(
                        go,
                        [radio, drop, drop_2, check, name],
                        img,
                        api_name=False,
                    )

                    def clear():
                        time.sleep(0.2)
                        return None

                    clear_btn.click(clear, None, img)

            with gr.Row():
                btn1 = gr.Button("Button 1", size="sm")
                btn2 = gr.UploadButton(size="sm")
                stop_btn = gr.Button(
                    "Stop", label="Stop Button", variant="stop", size="sm"
                )

            gr.Examples(
                examples=[
                    [
                        "A",
                        "Option 1",
                        ["Option B"],
                        True,
                    ],
                    [
                        "B",
                        "Option 2",
                        ["Option B", "Option C"],
                        False,
                    ],
                ],
                inputs=[radio, drop, drop_2, check],
                label="Examples",
            )

            with gr.Row():
                gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe")
                gr.JSON(
                    value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}},
                    label="JSON",
                )
                gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1})
                gr.File()
            with gr.Row():
                gr.ColorPicker()
                gr.Video(
                    "https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4"
                )
                gr.Gallery(
                    [
                        (
                            "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg",
                            "lion",
                        ),
                        (
                            "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png",
                            "logo",
                        ),
                        (
                            "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg",
                            "tower",
                        ),
                    ],
                    height="200px",
                    columns=2,
                )

            with gr.Row():
                with gr.Column(scale=2):
                    chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot")
                    chat_btn = gr.Button("Add messages")

                    def chat(history):
                        time.sleep(2)
                        yield [["How are you?", "I am good."]]

                    chat_btn.click(
                        lambda history: history
                        + [["How are you?", "I am good."]]
                        + (time.sleep(2) or []),
                        chatbot,
                        chatbot,
                        api_name=False,
                    )
                with gr.Column(scale=1):
                    with gr.Accordion("Advanced Settings"):
                        gr.Markdown("Hello")
                        gr.Number(label="Chatbot control 1")
                        gr.Number(label="Chatbot control 2")
                        gr.Number(label="Chatbot control 3")

    # Event Listeners

    secret_css = gr.Textbox(visible=False)
    secret_font = gr.JSON(visible=False)

    demo.load(  # doing this via python was not working for some reason, so using this hacky method for now
        None,
        None,
        None,
        _js="""() => {
            document.head.innerHTML += "<style id='theme_css'></style>";
            let evt_listener = window.setTimeout(
                () => {
                    load_theme_btn = document.querySelector('#load_theme');
                    if (load_theme_btn) {
                        load_theme_btn.click();
                        window.clearTimeout(evt_listener);
                    }
                },
                100
            );
        }""",
        api_name=False,
    )

    theme_inputs = (
        [primary_hue, secondary_hue, neutral_hue]
        + primary_hues
        + secondary_hues
        + neutral_hues
        + [text_size, spacing_size, radius_size]
        + text_sizes
        + spacing_sizes
        + radius_sizes
        + main_fonts
        + main_is_google
        + mono_fonts
        + mono_is_google
        + theme_var_input
    )

    def load_theme(theme_name):
        theme = [theme for theme in themes if theme.__name__ == theme_name][0]

        parameters = inspect.signature(theme.__init__).parameters
        primary_hue = parameters["primary_hue"].default
        secondary_hue = parameters["secondary_hue"].default
        neutral_hue = parameters["neutral_hue"].default
        text_size = parameters["text_size"].default
        spacing_size = parameters["spacing_size"].default
        radius_size = parameters["radius_size"].default

        theme = theme()

        font = theme._font[:4]
        font_mono = theme._font_mono[:4]
        font_is_google = [isinstance(f, gr.themes.GoogleFont) for f in font]
        font_mono_is_google = [
            isinstance(f, gr.themes.GoogleFont) for f in font_mono
        ]

        def pad_to_4(x):
            return x + [None] * (4 - len(x))

        var_output = []
        for variable in flat_variables:
            theme_val = getattr(theme, variable)
            if theme_val is None and variable.endswith("_dark"):
                theme_val = getattr(theme, variable[:-5])
            var_output.append(theme_val)

        return (
            [primary_hue.name, secondary_hue.name, neutral_hue.name]
            + primary_hue.expand()
            + secondary_hue.expand()
            + neutral_hue.expand()
            + [text_size.name, spacing_size.name, radius_size.name]
            + text_size.expand()
            + spacing_size.expand()
            + radius_size.expand()
            + pad_to_4([f.name for f in font])
            + pad_to_4(font_is_google)
            + pad_to_4([f.name for f in font_mono])
            + pad_to_4(font_mono_is_google)
            + var_output
        )

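The pattern `load_theme` relies on above — reading a constructor's default arguments via `inspect.signature` without instantiating the class — can be sketched in isolation. The `Theme` class and its parameter names below are stand-ins, not the real `gr.themes` classes:

```python
import inspect

class Theme:
    # Stand-in for a gr.themes class; only the signature matters here.
    def __init__(self, primary_hue="blue", radius_size="md"):
        self.primary_hue = primary_hue
        self.radius_size = radius_size

# Collect every parameter default, skipping `self` (which has no default).
params = inspect.signature(Theme.__init__).parameters
defaults = {
    name: p.default
    for name, p in params.items()
    if p.default is not inspect.Parameter.empty
}
print(defaults)  # {'primary_hue': 'blue', 'radius_size': 'md'}
```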
    def generate_theme_code(
        base_theme, final_theme, core_variables, final_main_fonts, final_mono_fonts
    ):
        base_theme_name = base_theme
        base_theme = [theme for theme in themes if theme.__name__ == base_theme][
            0
        ]()

        parameters = inspect.signature(base_theme.__init__).parameters
        primary_hue = parameters["primary_hue"].default
        secondary_hue = parameters["secondary_hue"].default
        neutral_hue = parameters["neutral_hue"].default
        text_size = parameters["text_size"].default
        spacing_size = parameters["spacing_size"].default
        radius_size = parameters["radius_size"].default
        font = parameters["font"].default
        font = [font] if not isinstance(font, Iterable) else font
        font = [
            gr.themes.Font(f) if not isinstance(f, gr.themes.Font) else f
            for f in font
        ]
        font_mono = parameters["font_mono"].default
        font_mono = (
            [font_mono] if not isinstance(font_mono, Iterable) else font_mono
        )
        font_mono = [
            gr.themes.Font(f) if not isinstance(f, gr.themes.Font) else f
            for f in font_mono
        ]

        core_diffs = {}
        specific_core_diffs = {}
        core_var_names = [
            "primary_hue",
            "secondary_hue",
            "neutral_hue",
            "text_size",
            "spacing_size",
            "radius_size",
        ]
        for value_name, base_value, source_class, final_value in zip(
            core_var_names,
            [
                primary_hue,
                secondary_hue,
                neutral_hue,
                text_size,
                spacing_size,
                radius_size,
            ],
            [
                gr.themes.Color,
                gr.themes.Color,
                gr.themes.Color,
                gr.themes.Size,
                gr.themes.Size,
                gr.themes.Size,
            ],
            core_variables,
        ):
            if base_value.name != final_value:
                core_diffs[value_name] = final_value
            source_obj = [
                obj for obj in source_class.all if obj.name == final_value
            ][0]
            final_attr_values = {}
            diff = False
            for attr in dir(source_obj):
                if attr in ["all", "name", "expand"] or attr.startswith("_"):
                    continue
                final_theme_attr = (
                    value_name.split("_")[0]
                    + "_"
                    + (attr[1:] if source_class == gr.themes.Color else attr)
                )
                final_attr_values[final_theme_attr] = getattr(
                    final_theme, final_theme_attr
                )
                if getattr(source_obj, attr) != final_attr_values[final_theme_attr]:
                    diff = True
            if diff:
                specific_core_diffs[value_name] = (source_class, final_attr_values)

        font_diffs = {}

        final_main_fonts = [font for font in final_main_fonts if font[0]]
        final_mono_fonts = [font for font in final_mono_fonts if font[0]]
        font = font[:4]
        font_mono = font_mono[:4]
        for base_font_set, theme_font_set, font_set_name in [
            (font, final_main_fonts, "font"),
            (font_mono, final_mono_fonts, "font_mono"),
        ]:
            if len(base_font_set) != len(theme_font_set) or any(
                base_font.name != theme_font[0]
                or isinstance(base_font, gr.themes.GoogleFont) != theme_font[1]
                for base_font, theme_font in zip(base_font_set, theme_font_set)
            ):
                font_diffs[font_set_name] = [
                    f"gr.themes.GoogleFont('{font_name}')"
                    if is_google_font
                    else f"'{font_name}'"
                    for font_name, is_google_font in theme_font_set
                ]

        newline = "\n"

        core_diffs_code = ""
        if len(core_diffs) + len(specific_core_diffs) > 0:
            for var_name in core_var_names:
                if var_name in specific_core_diffs:
                    cls, vals = specific_core_diffs[var_name]
                    core_diffs_code += f"""    {var_name}=gr.themes.{cls.__name__}({', '.join(f'''{k}="{v}"''' for k, v in vals.items())}),\n"""
                elif var_name in core_diffs:
                    core_diffs_code += (
                        f"""    {var_name}="{core_diffs[var_name]}",\n"""
                    )

        font_diffs_code = ""

        if len(font_diffs) > 0:
            font_diffs_code = "".join(
                [
                    f"""    {font_set_name}=[{", ".join(fonts)}],\n"""
                    for font_set_name, fonts in font_diffs.items()
                ]
            )
        var_diffs = {}
        for variable in flat_variables:
            base_theme_val = getattr(base_theme, variable)
            final_theme_val = getattr(final_theme, variable)
            if base_theme_val is None and variable.endswith("_dark"):
                base_theme_val = getattr(base_theme, variable[:-5])
            if base_theme_val != final_theme_val:
                var_diffs[variable] = getattr(final_theme, variable)

        vars_diff_code = ""
        if len(var_diffs) > 0:
            vars_diff_code = f""".set(
    {(',' + newline + "    ").join([f"{k}='{v}'" for k, v in var_diffs.items()])}
)"""

        output = f"""
import gradio as gr

theme = gr.themes.{base_theme_name}({newline if core_diffs_code or font_diffs_code else ""}{core_diffs_code}{font_diffs_code}){vars_diff_code}

with gr.Blocks(theme=theme) as demo:
    ..."""
        return output

    history = gr.State([])
    current_theme = gr.State(None)

    def render_variables(history, base_theme, *args):
        primary_hue, secondary_hue, neutral_hue = args[0:3]
        primary_hues = args[3 : 3 + len(palette_range)]
        secondary_hues = args[3 + len(palette_range) : 3 + 2 * len(palette_range)]
        neutral_hues = args[3 + 2 * len(palette_range) : 3 + 3 * len(palette_range)]
        text_size, spacing_size, radius_size = args[
            3 + 3 * len(palette_range) : 6 + 3 * len(palette_range)
        ]
        text_sizes = args[
            6
            + 3 * len(palette_range) : 6
            + 3 * len(palette_range)
            + len(size_range)
        ]
        spacing_sizes = args[
            6
            + 3 * len(palette_range)
            + len(size_range) : 6
            + 3 * len(palette_range)
            + 2 * len(size_range)
        ]
        radius_sizes = args[
            6
            + 3 * len(palette_range)
            + 2 * len(size_range) : 6
            + 3 * len(palette_range)
            + 3 * len(size_range)
        ]
        main_fonts = args[
            6
            + 3 * len(palette_range)
            + 3 * len(size_range) : 6
            + 3 * len(palette_range)
            + 3 * len(size_range)
            + 4
        ]
        main_is_google = args[
            6
            + 3 * len(palette_range)
            + 3 * len(size_range)
            + 4 : 6
            + 3 * len(palette_range)
            + 3 * len(size_range)
            + 8
        ]
        mono_fonts = args[
            6
            + 3 * len(palette_range)
            + 3 * len(size_range)
            + 8 : 6
            + 3 * len(palette_range)
            + 3 * len(size_range)
            + 12
        ]
        mono_is_google = args[
            6
            + 3 * len(palette_range)
            + 3 * len(size_range)
            + 12 : 6
            + 3 * len(palette_range)
            + 3 * len(size_range)
            + 16
        ]
        remaining_args = args[
            6 + 3 * len(palette_range) + 3 * len(size_range) + 16 :
        ]

        final_primary_color = gr.themes.Color(*primary_hues)
        final_secondary_color = gr.themes.Color(*secondary_hues)
        final_neutral_color = gr.themes.Color(*neutral_hues)
        final_text_size = gr.themes.Size(*text_sizes)
        final_spacing_size = gr.themes.Size(*spacing_sizes)
        final_radius_size = gr.themes.Size(*radius_sizes)

        final_main_fonts = []
        font_weights = set()
        for attr, val in zip(flat_variables, remaining_args):
            if "weight" in attr:
                font_weights.add(val)
        font_weights = sorted(font_weights)

        for main_font, is_google in zip(main_fonts, main_is_google):
            if not main_font:
                continue
            if is_google:
                main_font = gr.themes.GoogleFont(main_font, weights=font_weights)
            final_main_fonts.append(main_font)
        final_mono_fonts = []
        for mono_font, is_google in zip(mono_fonts, mono_is_google):
            if not mono_font:
                continue
            if is_google:
                mono_font = gr.themes.GoogleFont(mono_font, weights=font_weights)
            final_mono_fonts.append(mono_font)

        theme = gr.themes.Base(
            primary_hue=final_primary_color,
            secondary_hue=final_secondary_color,
            neutral_hue=final_neutral_color,
            text_size=final_text_size,
            spacing_size=final_spacing_size,
            radius_size=final_radius_size,
            font=final_main_fonts,
            font_mono=final_mono_fonts,
        )

        theme.set(**dict(zip(flat_variables, remaining_args)))
        new_step = (base_theme, args)
        if len(history) == 0 or str(history[-1]) != str(new_step):
            history.append(new_step)

        return (
            history,
            theme._get_theme_css(),
            theme._stylesheets,
            generate_theme_code(
                base_theme,
                theme,
                (
                    primary_hue,
                    secondary_hue,
                    neutral_hue,
                    text_size,
                    spacing_size,
                    radius_size,
                ),
                list(zip(main_fonts, main_is_google)),
                list(zip(mono_fonts, mono_is_google)),
            ),
            theme,
        )

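The offset arithmetic in `render_variables` above can be sketched in isolation: all component values arrive as one flat `*args` tuple, and slices are computed from the palette and size lengths. The counts below are illustrative stand-ins, not necessarily the real `palette_range`/`size_range` lengths:

```python
PALETTE = 11  # illustrative number of palette stops
SIZES = 8     # illustrative number of size stops

def unpack(args):
    """Split a flat component-value tuple into named groups by offset."""
    hues = args[:3]                                   # three hue names
    primary = args[3 : 3 + PALETTE]                   # primary palette
    secondary = args[3 + PALETTE : 3 + 2 * PALETTE]   # secondary palette
    neutral = args[3 + 2 * PALETTE : 3 + 3 * PALETTE]
    base = 3 + 3 * PALETTE
    core_sizes = args[base : base + 3]                # three size names
    text = args[base + 3 : base + 3 + SIZES]          # text sizes
    # ...spacing, radius, fonts, and theme variables follow the same scheme.
    return hues, primary, secondary, neutral, core_sizes, text

packed = tuple(range(3 + 3 * PALETTE + 3 + SIZES))
hues, primary, secondary, neutral, core_sizes, text = unpack(packed)
print(len(primary), len(text))  # 11 8
```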
    def attach_rerender(evt_listener):
        return evt_listener(
            render_variables,
            [history, base_theme_dropdown] + theme_inputs,
            [history, secret_css, secret_font, output_code, current_theme],
            api_name=False,
        ).then(
            None,
            [secret_css, secret_font],
            None,
            _js="""(css, fonts) => {
                document.getElementById('theme_css').innerHTML = css;
                let existing_font_links = document.querySelectorAll('link[rel="stylesheet"][href^="https://fonts.googleapis.com/css"]');
                existing_font_links.forEach(link => {
                    if (fonts.includes(link.href)) {
                        fonts = fonts.filter(font => font != link.href);
                    } else {
                        link.remove();
                    }
                });
                fonts.forEach(font => {
                    let link = document.createElement('link');
                    link.rel = 'stylesheet';
                    link.href = font;
                    document.head.appendChild(link);
                });
            }""",
            api_name=False,
        )

    def load_color(color_name):
        color = [color for color in colors if color.name == color_name][0]
        return [getattr(color, f"c{i}") for i in palette_range]

    attach_rerender(
        primary_hue.select(
            load_color, primary_hue, primary_hues, api_name=False
        ).then
    )
    attach_rerender(
        secondary_hue.select(
            load_color, secondary_hue, secondary_hues, api_name=False
        ).then
    )
    attach_rerender(
        neutral_hue.select(
            load_color, neutral_hue, neutral_hues, api_name=False
        ).then
    )
    for hue_set in (primary_hues, secondary_hues, neutral_hues):
        for hue in hue_set:
            attach_rerender(hue.blur)

    def load_size(size_name):
        size = [size for size in sizes if size.name == size_name][0]
        return [getattr(size, i) for i in size_range]

    attach_rerender(
        text_size.change(load_size, text_size, text_sizes, api_name=False).then
    )
    attach_rerender(
        spacing_size.change(
            load_size, spacing_size, spacing_sizes, api_name=False
        ).then
    )
    attach_rerender(
        radius_size.change(
            load_size, radius_size, radius_sizes, api_name=False
        ).then
    )

    attach_rerender(
        load_theme_btn.click(
            load_theme, base_theme_dropdown, theme_inputs, api_name=False
        ).then
    )

    for theme_box in (
        text_sizes + spacing_sizes + radius_sizes + main_fonts + mono_fonts
    ):
        attach_rerender(theme_box.blur)
        attach_rerender(theme_box.submit)
    for theme_box in theme_var_input:
        attach_rerender(theme_box.blur)
        attach_rerender(theme_box.select)
    for checkbox in main_is_google + mono_is_google:
        attach_rerender(checkbox.select)

    dark_mode_btn.click(
        None,
        None,
        None,
        _js="""() => {
            if (document.querySelectorAll('.dark').length) {
                document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
            } else {
                document.querySelector('body').classList.add('dark');
            }
        }""",
        api_name=False,
    )

    def undo(history_var):
        if len(history_var) <= 1:
            return {history: gr.skip()}
        else:
            history_var.pop()
            old = history_var.pop()
            return [history_var, old[0]] + list(old[1])

    attach_rerender(
        undo_btn.click(
            undo,
            [history],
            [history, base_theme_dropdown] + theme_inputs,
            api_name=False,
        ).then
    )

    def upload_to_hub(data):
        try:
            theme_url = data[current_theme].push_to_hub(
                repo_name=data[theme_name],
                version=data[theme_version] or None,
                hf_token=data[theme_hf_token],
                theme_name=data[theme_name],
            )
            space_name = "/".join(theme_url.split("/")[-2:])
            return (
                gr.Markdown.update(
                    value=f"Theme uploaded [here!]({theme_url})! Load it as `gr.Blocks(theme='{space_name}')`",
                    visible=True,
                ),
                "Upload to Hub",
            )
        except Exception as e:
            return (
                gr.Markdown.update(
                    value=f"Error: {e}",
                    visible=True,
                ),
                "Upload to Hub",
            )

    upload_to_hub_btn.click(
        lambda: "Uploading...",
        None,
        upload_to_hub_btn,
        api_name=False,
    ).then(
        upload_to_hub,
        {
            current_theme,
            theme_name,
            theme_hf_token,
            theme_version,
        },
        [theme_upload_status, upload_to_hub_btn],
        api_name=False,
    )


if __name__ == "__main__":
    demo.launch()
spaces/Dagfinn1962/stablediffusion-members/app.py
DELETED
@@ -1,87 +0,0 @@
import gradio as gr
import os
import sys
from pathlib import Path
import numpy as np
from PIL import Image


models = [
    {"name": "SD ComVis 1.2", "url": "CompVis/stable-diffusion-v1-2"},
    {"name": "SD ComVis 1.4", "url": "CompVis/stable-diffusion-v1-4"},
    {"name": "SD RunwayML 1.5", "url": "runwayml/stable-diffusion-v1-5"},
    {"name": "SD Stability 2.1 Unclip", "url": "stabilityai/stable-diffusion-2-1-unclip"},
    {"name": "SD Dreamshaper-Anime", "url": "Lykon/DreamShaper"},
]

current_model = models[0]

text_gen = gr.Interface.load("spaces/Avenuenw/prompt-extend")

models2 = []
for model in models:
    model_url = f"models/{model['url']}"
    loaded_model = gr.Interface.load(model_url, live=True, preprocess=True)
    models2.append(loaded_model)


def text_it(inputs, text_gen=text_gen):
    return text_gen(inputs)


def set_model(current_model_index):
    global current_model
    current_model = models[current_model_index]
    return gr.update(value=f"{current_model['name']}")


def send_it(inputs, model_choice):
    proc = models2[model_choice]
    return proc(inputs)


with gr.Blocks(css="main.css") as myface:
    gr.HTML(
        "<div style='font-size: 20px; font-family:verdana; background-color:#23298f; color:#FFFFFF; border:1px solid #FFFFFF; border-radius: 10px; width:50%; height: 30px; float: left; text-align:center;'>Your Prompt Here</div>"
        "<div style='font-size: 20px; font-family:verdana; background-color:#23298f; color:#FFFFFF; border:1px solid #FFFFFF; border-radius: 10px; width:50%; height: 30px; float:right; text-align: center;'>Choose model here</div>"
    )
    with gr.Row():
        input_text = gr.Textbox(label=" ", placeholder="1. PROMPT IDEA HERE!", lines=4)
        # Model selection dropdown
        model_name1 = gr.Dropdown(
            label="2. Choose model here",
            choices=[m["name"] for m in models],
            type="index",
            value=current_model["name"],
            interactive=True,
        )
    with gr.Row():
        see_prompts = gr.Button("3. GENERATE YOUR PROMPT IDEA HERE!")
        run = gr.Button("4. GENERATE THE IMAGE HERE!", variant="primary")

    with gr.Row():
        output1 = gr.Image(label="")
        output2 = gr.Image(label="")
        output3 = gr.Image(label="")
    with gr.Row():
        magic1 = gr.Textbox(label="Generated Prompt", lines=2)
        magic2 = gr.Textbox(label="Generated Prompt", lines=2)
        magic3 = gr.Textbox(label="Generated Prompt", lines=2)

    model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3])

    run.click(send_it, inputs=[magic1, model_name1], outputs=[output1])
    run.click(send_it, inputs=[magic2, model_name1], outputs=[output2])
    run.click(send_it, inputs=[magic3, model_name1], outputs=[output3])

    see_prompts.click(text_it, inputs=[input_text], outputs=[magic1])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic2])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic3])

myface.queue(concurrency_count=200)
myface.launch(inline=True, max_threads=400)
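The deleted app above wires its dropdown with `type="index"`, so event handlers receive the selected option's position rather than its label. A minimal sketch of that dispatch pattern, using plain Python stand-ins for the loaded model interfaces (all names here are illustrative, not part of the app):

```python
# Dropdown options and one callable "model" per option, in matching order.
models = ["sd-1.2", "sd-1.4", "sd-1.5"]
handlers = [lambda prompt, m=m: f"{m}:{prompt}" for m in models]


def send_it(prompt, model_choice):
    # model_choice is the integer index delivered by a type="index" dropdown.
    return handlers[model_choice](prompt)


print(send_it("a cat", 1))  # sd-1.4:a cat
```

Keeping `models` and the handler list index-aligned is what makes the integer handoff safe; reordering one without the other silently dispatches to the wrong model.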
spaces/Docfile/open_llm_leaderboard/src/load_from_hub.py
DELETED
@@ -1,151 +0,0 @@
import json
import os

import pandas as pd
from huggingface_hub import Repository
from transformers import AutoConfig
from collections import defaultdict

from src.assets.hardcoded_evals import baseline, gpt4_values, gpt35_values
from src.display_models.get_model_metadata import apply_metadata
from src.display_models.read_results import get_eval_results_dicts, make_clickable_model
from src.display_models.utils import AutoEvalColumn, EvalQueueColumn, has_no_nan_values

IS_PUBLIC = bool(os.environ.get("IS_PUBLIC", True))


def get_all_requested_models(requested_models_dir: str) -> tuple[set[str], dict]:
    depth = 1
    file_names = []
    users_to_submission_dates = defaultdict(list)

    for root, _, files in os.walk(requested_models_dir):
        current_depth = root.count(os.sep) - requested_models_dir.count(os.sep)
        if current_depth == depth:
            for file in files:
                if not file.endswith(".json"):
                    continue
                with open(os.path.join(root, file), "r") as f:
                    info = json.load(f)
                    file_names.append(f"{info['model']}_{info['revision']}_{info['precision']}")

                    # Select organisation
                    if info["model"].count("/") == 0 or "submitted_time" not in info:
                        continue
                    organisation, _ = info["model"].split("/")
                    users_to_submission_dates[organisation].append(info["submitted_time"])

    return set(file_names), users_to_submission_dates


def load_all_info_from_hub(QUEUE_REPO: str, RESULTS_REPO: str, QUEUE_PATH: str, RESULTS_PATH: str) -> tuple:
    eval_queue_repo = None
    eval_results_repo = None
    requested_models = None

    print("Pulling evaluation requests and results.")

    eval_queue_repo = Repository(
        local_dir=QUEUE_PATH,
        clone_from=QUEUE_REPO,
        repo_type="dataset",
    )
    eval_queue_repo.git_pull()

    eval_results_repo = Repository(
        local_dir=RESULTS_PATH,
        clone_from=RESULTS_REPO,
        repo_type="dataset",
    )
    eval_results_repo.git_pull()

    requested_models, users_to_submission_dates = get_all_requested_models("eval-queue")

    return eval_queue_repo, requested_models, eval_results_repo, users_to_submission_dates


def get_leaderboard_df(
    eval_results: Repository, eval_results_private: Repository, cols: list, benchmark_cols: list
) -> pd.DataFrame:
    if eval_results:
        print("Pulling evaluation results for the leaderboard.")
        eval_results.git_pull()
    if eval_results_private:
        print("Pulling evaluation results for the leaderboard.")
        eval_results_private.git_pull()

    all_data = get_eval_results_dicts()

    if not IS_PUBLIC:
        all_data.append(gpt4_values)
        all_data.append(gpt35_values)

    all_data.append(baseline)
    apply_metadata(all_data)  # Populate model type based on known hardcoded values in `metadata.py`

    df = pd.DataFrame.from_records(all_data)
    df = df.sort_values(by=[AutoEvalColumn.average.name], ascending=False)
    df = df[cols].round(decimals=2)

    # Filter out rows for which any of the benchmarks has not been produced.
    df = df[has_no_nan_values(df, benchmark_cols)]
    return df


def get_evaluation_queue_df(
    eval_queue: Repository, eval_queue_private: Repository, save_path: str, cols: list
) -> list[pd.DataFrame]:
    if eval_queue:
        print("Pulling changes for the evaluation queue.")
        eval_queue.git_pull()
    if eval_queue_private:
        print("Pulling changes for the evaluation queue.")
        eval_queue_private.git_pull()

    entries = [entry for entry in os.listdir(save_path) if not entry.startswith(".")]
    all_evals = []

    for entry in entries:
        if ".json" in entry:
            file_path = os.path.join(save_path, entry)
            with open(file_path) as fp:
                data = json.load(fp)

            data[EvalQueueColumn.model.name] = make_clickable_model(data["model"])
            data[EvalQueueColumn.revision.name] = data.get("revision", "main")

            all_evals.append(data)
        elif ".md" not in entry:
            # This is a folder.
            sub_entries = [e for e in os.listdir(f"{save_path}/{entry}") if not e.startswith(".")]
            for sub_entry in sub_entries:
                file_path = os.path.join(save_path, entry, sub_entry)
                with open(file_path) as fp:
                    data = json.load(fp)

                data[EvalQueueColumn.model.name] = make_clickable_model(data["model"])
                data[EvalQueueColumn.revision.name] = data.get("revision", "main")
                all_evals.append(data)

    pending_list = [e for e in all_evals if e["status"] in ["PENDING", "RERUN"]]
    running_list = [e for e in all_evals if e["status"] == "RUNNING"]
    finished_list = [e for e in all_evals if e["status"].startswith("FINISHED") or e["status"] == "PENDING_NEW_EVAL"]
    df_pending = pd.DataFrame.from_records(pending_list, columns=cols)
    df_running = pd.DataFrame.from_records(running_list, columns=cols)
    df_finished = pd.DataFrame.from_records(finished_list, columns=cols)
    return df_finished[cols], df_running[cols], df_pending[cols]


def is_model_on_hub(model_name: str, revision: str) -> tuple[bool, str | None]:
    try:
        AutoConfig.from_pretrained(model_name, revision=revision, trust_remote_code=False)
        return True, None

    except ValueError:
        return (
            False,
            "needs to be launched with `trust_remote_code=True`. For safety reasons, we do not allow these models to be automatically submitted to the leaderboard.",
        )

    except Exception as e:
        print(f"Could not get the model config from the hub: {e}")
        return False, "was not found on hub!"
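`get_all_requested_models` above walks the request directory but only reads JSON files exactly one directory level below the root, using the `os.sep`-count trick. A self-contained sketch of that depth filter, stdlib only (`list_request_files` is a hypothetical helper name, not part of the deleted module):

```python
import json
import os
import tempfile


def list_request_files(requested_models_dir: str, depth: int = 1) -> set[str]:
    # Collect "<model>_<revision>_<precision>" keys from JSON files sitting
    # exactly `depth` levels below the root, mirroring the walk above.
    names = set()
    for root, _, files in os.walk(requested_models_dir):
        current_depth = root.count(os.sep) - requested_models_dir.count(os.sep)
        if current_depth != depth:
            continue  # skip the root itself and anything deeper
        for file in files:
            if not file.endswith(".json"):
                continue
            with open(os.path.join(root, file)) as f:
                info = json.load(f)
            names.add(f"{info['model']}_{info['revision']}_{info['precision']}")
    return names


# Build a tiny queue layout: <root>/<org>/<file>.json sits at depth 1.
with tempfile.TemporaryDirectory() as tmp:
    org_dir = os.path.join(tmp, "some-org")
    os.makedirs(org_dir)
    with open(os.path.join(org_dir, "model.json"), "w") as f:
        json.dump({"model": "some-org/model", "revision": "main", "precision": "float16"}, f)
    print(list_request_files(tmp))  # {'some-org/model_main_float16'}
```

The subtraction of `os.sep` counts works because `os.walk` yields `root` paths that extend the starting directory, so each extra separator corresponds to one extra level of nesting.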
spaces/DragGan/DragGan-Inversion/torch_utils/ops/conv2d_resample.py
DELETED
@@ -1,158 +0,0 @@
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

"""2D convolution with optional up/downsampling."""

import torch

from .. import misc
from . import conv2d_gradfix
from . import upfirdn2d
from .upfirdn2d import _parse_padding
from .upfirdn2d import _get_filter_size

# ----------------------------------------------------------------------------


def _get_weight_shape(w):
    with misc.suppress_tracer_warnings():  # this value will be treated as a constant
        shape = [int(sz) for sz in w.shape]
    misc.assert_shape(w, shape)
    return shape

# ----------------------------------------------------------------------------


def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True):
    """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations."""
    _out_channels, _in_channels_per_group, kh, kw = _get_weight_shape(w)

    # Flip weight if requested.
    # Note: conv2d() actually performs correlation (flip_weight=True), not convolution (flip_weight=False).
    if not flip_weight and (kw > 1 or kh > 1):
        w = w.flip([2, 3])

    # Execute using conv2d_gradfix.
    op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d
    return op(x, w, stride=stride, padding=padding, groups=groups)

# ----------------------------------------------------------------------------


@misc.profiled_function
def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False):
    r"""2D convolution with optional up/downsampling.

    Padding is performed only once at the beginning, not between the operations.

    Args:
        x:           Input tensor of shape
                     `[batch_size, in_channels, in_height, in_width]`.
        w:           Weight tensor of shape
                     `[out_channels, in_channels//groups, kernel_height, kernel_width]`.
        f:           Low-pass filter for up/downsampling. Must be prepared beforehand by
                     calling upfirdn2d.setup_filter(). None = identity (default).
        up:          Integer upsampling factor (default: 1).
        down:        Integer downsampling factor (default: 1).
        padding:     Padding with respect to the upsampled image. Can be a single number
                     or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
                     (default: 0).
        groups:      Split input channels into N groups (default: 1).
        flip_weight: False = convolution, True = correlation (default: True).
        flip_filter: False = convolution, True = correlation (default: False).

    Returns:
        Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
    """
    # Validate arguments.
    assert isinstance(x, torch.Tensor) and (x.ndim == 4)
    assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype)
    assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32)
    assert isinstance(up, int) and (up >= 1)
    assert isinstance(down, int) and (down >= 1)
    assert isinstance(groups, int) and (groups >= 1)
    out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
    fw, fh = _get_filter_size(f)
    px0, px1, py0, py1 = _parse_padding(padding)

    # Adjust padding to account for up/downsampling.
    if up > 1:
        px0 += (fw + up - 1) // 2
        px1 += (fw - up) // 2
        py0 += (fh + up - 1) // 2
        py1 += (fh - up) // 2
    if down > 1:
        px0 += (fw - down + 1) // 2
        px1 += (fw - down) // 2
        py0 += (fh - down + 1) // 2
        py1 += (fh - down) // 2

    # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve.
    if kw == 1 and kh == 1 and (down > 1 and up == 1):
        x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[px0, px1, py0, py1], flip_filter=flip_filter)
        x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
        return x

    # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample.
    if kw == 1 and kh == 1 and (up > 1 and down == 1):
        x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
        x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter)
        return x

    # Fast path: downsampling only => use strided convolution.
    if down > 1 and up == 1:
        x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0, px1, py0, py1], flip_filter=flip_filter)
        x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight)
        return x

    # Fast path: upsampling with optional downsampling => use transpose strided convolution.
    if up > 1:
        if groups == 1:
            w = w.transpose(0, 1)
        else:
            w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw)
            w = w.transpose(1, 2)
            w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw)
        px0 -= kw - 1
        px1 -= kw - up
        py0 -= kh - 1
        py1 -= kh - up
        pxt = max(min(-px0, -px1), 0)
        pyt = max(min(-py0, -py1), 0)
        x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[pyt, pxt], groups=groups, transpose=True, flip_weight=(not flip_weight))
        x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0 + pxt, px1 + pxt, py0 + pyt, py1 + pyt], gain=up**2, flip_filter=flip_filter)
        if down > 1:
            x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter)
        return x

    # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d.
    if up == 1 and down == 1:
        if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0:
            return _conv2d_wrapper(x=x, w=w, padding=[py0, px0], groups=groups, flip_weight=flip_weight)

    # Fallback: Generic reference implementation.
    x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter)
    x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
    if down > 1:
        x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter)
    return x

# ----------------------------------------------------------------------------
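The padding adjustment at the top of `conv2d_resample` is plain integer arithmetic and can be checked without torch. A minimal sketch, assuming a symmetric initial padding and the same `fw`/`fh` filter-size convention (`adjust_padding` is an illustrative name, not part of the NVIDIA API):

```python
def adjust_padding(padding, fw, fh, up=1, down=1):
    """Replicate the up/down padding arithmetic from conv2d_resample
    for a single symmetric input padding value (sketch only)."""
    px0 = px1 = py0 = py1 = padding
    if up > 1:
        # Extra padding so the upsampling filter's support is covered.
        px0 += (fw + up - 1) // 2
        px1 += (fw - up) // 2
        py0 += (fh + up - 1) // 2
        py1 += (fh - up) // 2
    if down > 1:
        # Extra padding so the anti-aliasing filter is applied before striding.
        px0 += (fw - down + 1) // 2
        px1 += (fw - down) // 2
        py0 += (fh - down + 1) // 2
        py1 += (fh - down) // 2
    return px0, px1, py0, py1


# 2x upsampling with a 4-tap filter: pad (2, 1) before/after on each axis.
print(adjust_padding(0, fw=4, fh=4, up=2))  # (2, 1, 2, 1)
```

The before/after asymmetry (`px0` vs `px1`) comes from the floor division: when `fw + up - 1` and `fw - up` straddle an odd total, the extra pixel of padding is placed before the image.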
spaces/Duskfallcrew/textual-inversion-training/textual_inversion.py
DELETED
@@ -1,612 +0,0 @@
import argparse
import itertools
import math
import os
import random
from pathlib import Path
from typing import Optional

import numpy as np
import torch
import torch.nn.functional as F
import torch.utils.checkpoint
from torch.utils.data import Dataset

import PIL
from accelerate import Accelerator
from accelerate.logging import get_logger
from accelerate.utils import set_seed
from diffusers import AutoencoderKL, DDPMScheduler, PNDMScheduler, StableDiffusionPipeline, UNet2DConditionModel
from diffusers.optimization import get_scheduler
from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
from huggingface_hub import HfFolder, Repository, whoami
from PIL import Image
from torchvision import transforms
from tqdm.auto import tqdm
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
import gc

logger = get_logger(__name__)


def save_progress(text_encoder, placeholder_token_id, accelerator, args):
    logger.info("Saving embeddings")
    learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_id]
    learned_embeds_dict = {args.placeholder_token: learned_embeds.detach().cpu()}
    torch.save(learned_embeds_dict, os.path.join(args.output_dir, "learned_embeds.bin"))


def parse_args():
    parser = argparse.ArgumentParser(description="Simple example of a training script.")
    parser.add_argument(
        "--save_steps",
        type=int,
        default=500,
        help="Save learned_embeds.bin every X update steps.",
    )
    parser.add_argument(
        "--pretrained_model_name_or_path",
        type=str,
        default=None,
        help="Path to pretrained model or model identifier from huggingface.co/models.",
    )
    parser.add_argument(
        "--tokenizer_name",
        type=str,
        default=None,
        help="Pretrained tokenizer name or path if not the same as model_name",
    )
    parser.add_argument(
        "--train_data_dir", type=str, default=None, help="A folder containing the training data."
    )
    parser.add_argument(
        "--placeholder_token",
        type=str,
        default=None,
        help="A token to use as a placeholder for the concept.",
    )
    parser.add_argument(
        "--initializer_token", type=str, default=None, help="A token to use as initializer word."
    )
    parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'")
    parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.")
    parser.add_argument(
        "--output_dir",
        type=str,
        default="text-inversion-model",
        help="The output directory where the model predictions and checkpoints will be written.",
    )
    parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
    parser.add_argument(
        "--resolution",
        type=int,
        default=512,
        help=(
            "The resolution for input images; all the images in the train/validation dataset will be resized to this"
            " resolution"
        ),
    )
    parser.add_argument(
        "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution"
    )
    parser.add_argument(
        "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
    )
    parser.add_argument("--num_train_epochs", type=int, default=100)
    parser.add_argument(
        "--max_train_steps",
        type=int,
        default=5000,
        help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
    )
    parser.add_argument(
        "--gradient_accumulation_steps",
        type=int,
        default=1,
        help="Number of update steps to accumulate before performing a backward/update pass.",
    )
    parser.add_argument(
        "--learning_rate",
        type=float,
        default=1e-4,
        help="Initial learning rate (after the potential warmup period) to use.",
    )
    parser.add_argument(
        "--scale_lr",
        action="store_true",
        default=True,
        help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
    )
    parser.add_argument(
        "--lr_scheduler",
        type=str,
        default="constant",
        help=(
            'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
            ' "constant", "constant_with_warmup"]'
        ),
    )
    parser.add_argument(
        "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
    )
    parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
    parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
    parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
    parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
    parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
    parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
    parser.add_argument(
        "--hub_model_id",
        type=str,
        default=None,
        help="The name of the repository to keep in sync with the local `output_dir`.",
    )
    parser.add_argument(
        "--logging_dir",
        type=str,
        default="logs",
        help=(
            "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
            " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
        ),
    )
    parser.add_argument(
        "--mixed_precision",
        type=str,
        default="no",
        choices=["no", "fp16", "bf16"],
        help=(
            "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16)."
            " Bf16 requires PyTorch >= 1.10 and an Nvidia Ampere GPU."
        ),
    )
    parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")

    args = parser.parse_args()
    env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
    if env_local_rank != -1 and env_local_rank != args.local_rank:
        args.local_rank = env_local_rank

    '''
    if args.train_data_dir is None:
        raise ValueError("You must specify a train data directory.")
    '''

    return args


imagenet_templates_small = [
    "a photo of a {}",
    "a rendering of a {}",
    "a cropped photo of the {}",
    "the photo of a {}",
    "a photo of a clean {}",
    "a photo of a dirty {}",
    "a dark photo of the {}",
    "a photo of my {}",
    "a photo of the cool {}",
    "a close-up photo of a {}",
    "a bright photo of the {}",
    "a cropped photo of a {}",
    "a photo of the {}",
    "a good photo of the {}",
    "a photo of one {}",
    "a close-up photo of the {}",
    "a rendition of the {}",
    "a photo of the clean {}",
    "a rendition of a {}",
    "a photo of a nice {}",
    "a good photo of a {}",
    "a photo of the nice {}",
    "a photo of the small {}",
    "a photo of the weird {}",
    "a photo of the large {}",
    "a photo of a cool {}",
    "a photo of a small {}",
]

imagenet_style_templates_small = [
    "a painting in the style of {}",
    "a rendering in the style of {}",
    "a cropped painting in the style of {}",
    "the painting in the style of {}",
    "a clean painting in the style of {}",
    "a dirty painting in the style of {}",
    "a dark painting in the style of {}",
    "a picture in the style of {}",
    "a cool painting in the style of {}",
    "a close-up painting in the style of {}",
    "a bright painting in the style of {}",
    "a cropped painting in the style of {}",
    "a good painting in the style of {}",
    "a close-up painting in the style of {}",
    "a rendition in the style of {}",
    "a nice painting in the style of {}",
    "a small painting in the style of {}",
    "a weird painting in the style of {}",
    "a large painting in the style of {}",
]


class TextualInversionDataset(Dataset):
    def __init__(
        self,
        data_root,
        tokenizer,
        learnable_property="object",  # [object, style]
        size=512,
        repeats=100,
        interpolation="bicubic",
        flip_p=0.5,
        set="train",
        placeholder_token="*",
        center_crop=False,
    ):
        self.data_root = data_root
        self.tokenizer = tokenizer
        self.learnable_property = learnable_property
        self.size = size
        self.placeholder_token = placeholder_token
        self.center_crop = center_crop
        self.flip_p = flip_p

        self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)]

        self.num_images = len(self.image_paths)
        self._length = self.num_images

        if set == "train":
            self._length = self.num_images * repeats

        self.interpolation = {
            "linear": PIL.Image.LINEAR,
            "bilinear": PIL.Image.BILINEAR,
            "bicubic": PIL.Image.BICUBIC,
            "lanczos": PIL.Image.LANCZOS,
        }[interpolation]

        self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small
        self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p)

    def __len__(self):
        return self._length
|
274 |
-
|
275 |
-
def __getitem__(self, i):
|
276 |
-
example = {}
|
277 |
-
image = Image.open(self.image_paths[i % self.num_images])
|
278 |
-
|
279 |
-
if not image.mode == "RGB":
|
280 |
-
image = image.convert("RGB")
|
281 |
-
|
282 |
-
placeholder_string = self.placeholder_token
|
283 |
-
text = random.choice(self.templates).format(placeholder_string)
|
284 |
-
|
285 |
-
example["input_ids"] = self.tokenizer(
|
286 |
-
text,
|
287 |
-
padding="max_length",
|
288 |
-
truncation=True,
|
289 |
-
max_length=self.tokenizer.model_max_length,
|
290 |
-
return_tensors="pt",
|
291 |
-
).input_ids[0]
|
292 |
-
|
293 |
-
# default to score-sde preprocessing
|
294 |
-
img = np.array(image).astype(np.uint8)
|
295 |
-
|
296 |
-
if self.center_crop:
|
297 |
-
crop = min(img.shape[0], img.shape[1])
|
298 |
-
h, w, = (
|
299 |
-
img.shape[0],
|
300 |
-
img.shape[1],
|
301 |
-
)
|
302 |
-
img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2]
|
303 |
-
|
304 |
-
image = Image.fromarray(img)
|
305 |
-
image = image.resize((self.size, self.size), resample=self.interpolation)
|
306 |
-
|
307 |
-
image = self.flip_transform(image)
|
308 |
-
image = np.array(image).astype(np.uint8)
|
309 |
-
image = (image / 127.5 - 1.0).astype(np.float32)
|
310 |
-
|
311 |
-
example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1)
|
312 |
-
return example
|
313 |
-
|
314 |
-
|
315 |
-
def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
|
316 |
-
if token is None:
|
317 |
-
token = HfFolder.get_token()
|
318 |
-
if organization is None:
|
319 |
-
username = whoami(token)["name"]
|
320 |
-
return f"{username}/{model_id}"
|
321 |
-
else:
|
322 |
-
return f"{organization}/{model_id}"
|
323 |
-
|
324 |
-
|
325 |
-
def freeze_params(params):
|
326 |
-
for param in params:
|
327 |
-
param.requires_grad = False
|
328 |
-
|
329 |
-
|
330 |
-
def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
|
331 |
-
"""
|
332 |
-
Starts from base starting dict and then adds the remaining key values from updater replacing the values from
|
333 |
-
the first starting/base dict with the second updater dict.
|
334 |
-
|
335 |
-
For later: how does d = {**d1, **d2} replace collision?
|
336 |
-
|
337 |
-
:param starting_dict:
|
338 |
-
:param updater_dict:
|
339 |
-
:return:
|
340 |
-
"""
|
341 |
-
new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict
|
342 |
-
new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict
|
343 |
-
return new_dict
|
344 |
-
|
345 |
-
def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
|
346 |
-
"""
|
347 |
-
|
348 |
-
ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
|
349 |
-
:param args1:
|
350 |
-
:param args2:
|
351 |
-
:return:
|
352 |
-
"""
|
353 |
-
# - the merged args
|
354 |
-
# The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}.
|
355 |
-
merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2))
|
356 |
-
args = argparse.Namespace(**merged_key_values_for_namespace)
|
357 |
-
return args
|
358 |
-
|
359 |
-
def run_training(args_imported):
|
360 |
-
args_default = parse_args()
|
361 |
-
args = merge_args(args_default, args_imported)
|
362 |
-
|
363 |
-
print(args)
|
364 |
-
logging_dir = os.path.join(args.output_dir, args.logging_dir)
|
365 |
-
|
366 |
-
accelerator = Accelerator(
|
367 |
-
gradient_accumulation_steps=args.gradient_accumulation_steps,
|
368 |
-
mixed_precision=args.mixed_precision,
|
369 |
-
log_with="tensorboard",
|
370 |
-
logging_dir=logging_dir,
|
371 |
-
)
|
372 |
-
|
373 |
-
# If passed along, set the training seed now.
|
374 |
-
if args.seed is not None:
|
375 |
-
set_seed(args.seed)
|
376 |
-
|
377 |
-
# Handle the repository creation
|
378 |
-
if accelerator.is_main_process:
|
379 |
-
if args.push_to_hub:
|
380 |
-
if args.hub_model_id is None:
|
381 |
-
repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
|
382 |
-
else:
|
383 |
-
repo_name = args.hub_model_id
|
384 |
-
repo = Repository(args.output_dir, clone_from=repo_name)
|
385 |
-
|
386 |
-
with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
|
387 |
-
if "step_*" not in gitignore:
|
388 |
-
gitignore.write("step_*\n")
|
389 |
-
if "epoch_*" not in gitignore:
|
390 |
-
gitignore.write("epoch_*\n")
|
391 |
-
elif args.output_dir is not None:
|
392 |
-
os.makedirs(args.output_dir, exist_ok=True)
|
393 |
-
|
394 |
-
# Load the tokenizer and add the placeholder token as a additional special token
|
395 |
-
if args.tokenizer_name:
|
396 |
-
tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
|
397 |
-
elif args.pretrained_model_name_or_path:
|
398 |
-
tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
|
399 |
-
|
400 |
-
# Add the placeholder token in tokenizer
|
401 |
-
num_added_tokens = tokenizer.add_tokens(args.placeholder_token)
|
402 |
-
if num_added_tokens == 0:
|
403 |
-
raise ValueError(
|
404 |
-
f"The tokenizer already contains the token {args.placeholder_token}. Please pass a different"
|
405 |
-
" `placeholder_token` that is not already in the tokenizer."
|
406 |
-
)
|
407 |
-
|
408 |
-
# Convert the initializer_token, placeholder_token to ids
|
409 |
-
token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False)
|
410 |
-
# Check if initializer_token is a single token or a sequence of tokens
|
411 |
-
if len(token_ids) > 1:
|
412 |
-
raise ValueError("The initializer token must be a single token.")
|
413 |
-
|
414 |
-
initializer_token_id = token_ids[0]
|
415 |
-
placeholder_token_id = tokenizer.convert_tokens_to_ids(args.placeholder_token)
|
416 |
-
|
417 |
-
# Load models and create wrapper for stable diffusion
|
418 |
-
text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
|
419 |
-
vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
|
420 |
-
unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
|
421 |
-
|
422 |
-
# Resize the token embeddings as we are adding new special tokens to the tokenizer
|
423 |
-
text_encoder.resize_token_embeddings(len(tokenizer))
|
424 |
-
|
425 |
-
# Initialise the newly added placeholder token with the embeddings of the initializer token
|
426 |
-
token_embeds = text_encoder.get_input_embeddings().weight.data
|
427 |
-
token_embeds[placeholder_token_id] = token_embeds[initializer_token_id]
|
428 |
-
|
429 |
-
# Freeze vae and unet
|
430 |
-
freeze_params(vae.parameters())
|
431 |
-
freeze_params(unet.parameters())
|
432 |
-
# Freeze all parameters except for the token embeddings in text encoder
|
433 |
-
params_to_freeze = itertools.chain(
|
434 |
-
text_encoder.text_model.encoder.parameters(),
|
435 |
-
text_encoder.text_model.final_layer_norm.parameters(),
|
436 |
-
text_encoder.text_model.embeddings.position_embedding.parameters(),
|
437 |
-
)
|
438 |
-
freeze_params(params_to_freeze)
|
439 |
-
|
440 |
-
if args.scale_lr:
|
441 |
-
args.learning_rate = (
|
442 |
-
args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
|
443 |
-
)
|
444 |
-
|
445 |
-
# Initialize the optimizer
|
446 |
-
optimizer = torch.optim.AdamW(
|
447 |
-
text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings
|
448 |
-
lr=args.learning_rate,
|
449 |
-
betas=(args.adam_beta1, args.adam_beta2),
|
450 |
-
weight_decay=args.adam_weight_decay,
|
451 |
-
eps=args.adam_epsilon,
|
452 |
-
)
|
453 |
-
|
454 |
-
# TODO (patil-suraj): load scheduler using args
|
455 |
-
noise_scheduler = DDPMScheduler(
|
456 |
-
beta_start=0.00085,
|
457 |
-
beta_end=0.012,
|
458 |
-
beta_schedule="scaled_linear",
|
459 |
-
num_train_timesteps=1000,
|
460 |
-
)
|
461 |
-
|
462 |
-
train_dataset = TextualInversionDataset(
|
463 |
-
data_root=args.train_data_dir,
|
464 |
-
tokenizer=tokenizer,
|
465 |
-
size=args.resolution,
|
466 |
-
placeholder_token=args.placeholder_token,
|
467 |
-
repeats=args.repeats,
|
468 |
-
learnable_property=args.learnable_property,
|
469 |
-
center_crop=args.center_crop,
|
470 |
-
set="train",
|
471 |
-
)
|
472 |
-
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=args.train_batch_size, shuffle=True)
|
473 |
-
|
474 |
-
# Scheduler and math around the number of training steps.
|
475 |
-
overrode_max_train_steps = False
|
476 |
-
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
|
477 |
-
if args.max_train_steps is None:
|
478 |
-
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
|
479 |
-
overrode_max_train_steps = True
|
480 |
-
|
481 |
-
lr_scheduler = get_scheduler(
|
482 |
-
args.lr_scheduler,
|
483 |
-
optimizer=optimizer,
|
484 |
-
num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
|
485 |
-
num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
|
486 |
-
)
|
487 |
-
|
488 |
-
text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
|
489 |
-
text_encoder, optimizer, train_dataloader, lr_scheduler
|
490 |
-
)
|
491 |
-
|
492 |
-
# Move vae and unet to device
|
493 |
-
vae.to(accelerator.device)
|
494 |
-
unet.to(accelerator.device)
|
495 |
-
|
496 |
-
# Keep vae and unet in eval model as we don't train these
|
497 |
-
vae.eval()
|
498 |
-
unet.eval()
|
499 |
-
|
500 |
-
# We need to recalculate our total training steps as the size of the training dataloader may have changed.
|
501 |
-
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
|
502 |
-
if overrode_max_train_steps:
|
503 |
-
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
|
504 |
-
# Afterwards we recalculate our number of training epochs
|
505 |
-
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
|
506 |
-
|
507 |
-
# We need to initialize the trackers we use, and also store our configuration.
|
508 |
-
# The trackers initializes automatically on the main process.
|
509 |
-
if accelerator.is_main_process:
|
510 |
-
accelerator.init_trackers("textual_inversion", config=vars(args))
|
511 |
-
|
512 |
-
# Train!
|
513 |
-
total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
|
514 |
-
|
515 |
-
logger.info("***** Running training *****")
|
516 |
-
logger.info(f" Num examples = {len(train_dataset)}")
|
517 |
-
logger.info(f" Num Epochs = {args.num_train_epochs}")
|
518 |
-
logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
|
519 |
-
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
|
520 |
-
logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
|
521 |
-
logger.info(f" Total optimization steps = {args.max_train_steps}")
|
522 |
-
# Only show the progress bar once on each machine.
|
523 |
-
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
|
524 |
-
progress_bar.set_description("Steps")
|
525 |
-
global_step = 0
|
526 |
-
|
527 |
-
for epoch in range(args.num_train_epochs):
|
528 |
-
text_encoder.train()
|
529 |
-
for step, batch in enumerate(train_dataloader):
|
530 |
-
with accelerator.accumulate(text_encoder):
|
531 |
-
# Convert images to latent space
|
532 |
-
latents = vae.encode(batch["pixel_values"]).latent_dist.sample().detach()
|
533 |
-
latents = latents * 0.18215
|
534 |
-
|
535 |
-
# Sample noise that we'll add to the latents
|
536 |
-
noise = torch.randn(latents.shape).to(latents.device)
|
537 |
-
bsz = latents.shape[0]
|
538 |
-
# Sample a random timestep for each image
|
539 |
-
timesteps = torch.randint(
|
540 |
-
0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device
|
541 |
-
).long()
|
542 |
-
|
543 |
-
# Add noise to the latents according to the noise magnitude at each timestep
|
544 |
-
# (this is the forward diffusion process)
|
545 |
-
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
|
546 |
-
|
547 |
-
# Get the text embedding for conditioning
|
548 |
-
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
|
549 |
-
|
550 |
-
# Predict the noise residual
|
551 |
-
noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
|
552 |
-
|
553 |
-
loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
|
554 |
-
accelerator.backward(loss)
|
555 |
-
|
556 |
-
# Zero out the gradients for all token embeddings except the newly added
|
557 |
-
# embeddings for the concept, as we only want to optimize the concept embeddings
|
558 |
-
if accelerator.num_processes > 1:
|
559 |
-
grads = text_encoder.module.get_input_embeddings().weight.grad
|
560 |
-
else:
|
561 |
-
grads = text_encoder.get_input_embeddings().weight.grad
|
562 |
-
# Get the index for tokens that we want to zero the grads for
|
563 |
-
index_grads_to_zero = torch.arange(len(tokenizer)) != placeholder_token_id
|
564 |
-
grads.data[index_grads_to_zero, :] = grads.data[index_grads_to_zero, :].fill_(0)
|
565 |
-
|
566 |
-
optimizer.step()
|
567 |
-
lr_scheduler.step()
|
568 |
-
optimizer.zero_grad()
|
569 |
-
|
570 |
-
# Checks if the accelerator has performed an optimization step behind the scenes
|
571 |
-
if accelerator.sync_gradients:
|
572 |
-
progress_bar.update(1)
|
573 |
-
global_step += 1
|
574 |
-
if global_step % args.save_steps == 0:
|
575 |
-
save_progress(text_encoder, placeholder_token_id, accelerator, args)
|
576 |
-
|
577 |
-
logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
|
578 |
-
progress_bar.set_postfix(**logs)
|
579 |
-
accelerator.log(logs, step=global_step)
|
580 |
-
|
581 |
-
if global_step >= args.max_train_steps:
|
582 |
-
break
|
583 |
-
|
584 |
-
accelerator.wait_for_everyone()
|
585 |
-
|
586 |
-
# Create the pipeline using using the trained modules and save it.
|
587 |
-
if accelerator.is_main_process:
|
588 |
-
pipeline = StableDiffusionPipeline(
|
589 |
-
text_encoder=accelerator.unwrap_model(text_encoder),
|
590 |
-
vae=vae,
|
591 |
-
unet=unet,
|
592 |
-
tokenizer=tokenizer,
|
593 |
-
scheduler=PNDMScheduler(
|
594 |
-
beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True
|
595 |
-
),
|
596 |
-
safety_checker=StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker"),
|
597 |
-
feature_extractor=CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32"),
|
598 |
-
)
|
599 |
-
pipeline.save_pretrained(args.output_dir)
|
600 |
-
# Also save the newly trained embeddings
|
601 |
-
save_progress(text_encoder, placeholder_token_id, accelerator, args)
|
602 |
-
|
603 |
-
if args.push_to_hub:
|
604 |
-
repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
|
605 |
-
|
606 |
-
accelerator.end_training()
|
607 |
-
torch.cuda.empty_cache()
|
608 |
-
gc.collect()
|
609 |
-
|
610 |
-
|
611 |
-
if __name__ == "__main__":
|
612 |
-
main()
|
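The `merge_two_dicts`/`merge_args` helpers used by `run_training` also answer the question left in the docstring: `{**d1, **d2}` resolves collisions the same way `dict.update` does, with the later dict winning. A minimal standalone sketch of that merge behavior (the example field names and values here are hypothetical, not taken from the script's argument list):

```python
import argparse


def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
    """Copy the base dict, then let the updater's keys win on collision."""
    new_dict = starting_dict.copy()
    new_dict.update(updater_dict)
    return new_dict


def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
    """Merge two namespaces; fields from args2 override fields from args1."""
    return argparse.Namespace(**merge_two_dicts(vars(args1), vars(args2)))


# hypothetical defaults and overrides, just to show collision handling
defaults = argparse.Namespace(learning_rate=1e-4, seed=None, output_dir="out")
overrides = argparse.Namespace(learning_rate=5e-4, placeholder_token="<cat-toy>")
merged = merge_args(defaults, overrides)
print(merged.learning_rate)  # the overrides value wins on collision
print(merged.output_dir)     # a field only present in defaults is kept
```

Note that the merge is shallow: nested dicts stored inside a namespace field are replaced wholesale, not merged key by key.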
spaces/EMS-TU-Ilmenau/deepest-demo/helper.py
DELETED
@@ -1,59 +0,0 @@
from torch.utils.data import DataLoader
from deepest.modules import Parameter2dNet
from deepest.datasets import InferenceDelayDataset
from deepest.metrics import match_components
import numpy as np


class Runner:
    def __init__(self, model: str, dataset: str, bs: int, num_worker: int):
        self.module = Parameter2dNet.from_file(f"{model}")
        self.dataset_config = self.module.get_datasetconfig()
        self.dataset = InferenceDelayDataset(path=dataset, **self.dataset_config)
        self.bs = bs
        self.num_worker = num_worker

    def _preallocate(self, data_shape: tuple[int, ...], eta_shape: tuple[int, ...]):
        data = np.empty((len(self), *data_shape), dtype=np.complex128)
        truth = np.empty((len(self), *eta_shape))
        estim = np.empty((len(self), *eta_shape))
        return data, truth, estim

    def _get_batchrange_for_index(self, ii: int):
        start_idx = ii * self.bs
        stop_idx = (ii + 1) * self.bs
        if stop_idx > len(self.dataset):
            stop_idx = len(self.dataset)

        return range(start_idx, stop_idx)

    def run(self, snr: int) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
        self.dataset.noise_var = (snr, snr)
        dataloader = DataLoader(
            self.dataset,
            batch_size=self.bs,
            num_workers=self.num_worker,
            worker_init_fn=lambda worker_id: np.random.seed(worker_id),
            shuffle=False,
        )

        for ii, (x, _, z) in enumerate(dataloader):
            z = z[0][:, :2, :]
            if ii == 0:
                data, truth, estim = self._preallocate(x.shape[1:], z.shape[1:])

            idx_range = self._get_batchrange_for_index(ii)

            data[idx_range] = x.cpu().numpy()
            truth[idx_range] = z.cpu().numpy()
            estim[idx_range] = self.module.fit(x)[:, :2, :]

        estim, truth = match_components(estim, truth)

        return data, truth, estim

    def fit(self, data: np.ndarray) -> np.ndarray:
        x = self.module.fit(data)
        return x[:, :2, :]

    def __len__(self):
        return len(self.dataset)
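The `_get_batchrange_for_index` helper in this Runner maps a batch index to the slice of the preallocated arrays that batch fills; since the last batch of a `DataLoader` with `shuffle=False` can be shorter than `bs`, the stop index is clamped to the dataset length. A standalone sketch of the same logic (function and variable names are mine, not from the file):

```python
def get_batchrange_for_index(ii: int, bs: int, n_total: int) -> range:
    """Indices covered by batch ii, clamping the final partial batch."""
    start_idx = ii * bs
    stop_idx = min((ii + 1) * bs, n_total)
    return range(start_idx, stop_idx)


# 10 samples with batch size 4 -> batches covering 4, 4, and finally 2 indices
print(list(get_batchrange_for_index(0, 4, 10)))  # [0, 1, 2, 3]
print(list(get_batchrange_for_index(2, 4, 10)))  # [8, 9]
```

A `range` works directly as a NumPy fancy index, which is why the Runner can write `data[idx_range] = ...` without converting it to a slice.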
spaces/EnigmaOfTheWorld/TechnoForge_Automotive/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: TechnoForge Automotive
emoji: 🐠
colorFrom: blue
colorTo: yellow
sdk: gradio
sdk_version: 3.27.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/EronSamez/RVC_HFmeu/demucs/separate.py
DELETED
@@ -1,185 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import argparse
import sys
from pathlib import Path
import subprocess

import julius
import torch as th
import torchaudio as ta

from .audio import AudioFile, convert_audio_channels
from .pretrained import is_pretrained, load_pretrained
from .utils import apply_model, load_model


def load_track(track, device, audio_channels, samplerate):
    errors = {}
    wav = None

    try:
        wav = AudioFile(track).read(
            streams=0,
            samplerate=samplerate,
            channels=audio_channels).to(device)
    except FileNotFoundError:
        errors['ffmpeg'] = 'FFmpeg is not installed.'
    except subprocess.CalledProcessError:
        errors['ffmpeg'] = 'FFmpeg could not read the file.'

    if wav is None:
        try:
            wav, sr = ta.load(str(track))
        except RuntimeError as err:
            errors['torchaudio'] = err.args[0]
        else:
            wav = convert_audio_channels(wav, audio_channels)
            wav = wav.to(device)
            wav = julius.resample_frac(wav, sr, samplerate)

    if wav is None:
        print(f"Could not load file {track}. "
              "Maybe it is not a supported file format? ")
        for backend, error in errors.items():
            print(f"When trying to load using {backend}, got the following error: {error}")
        sys.exit(1)
    return wav


def encode_mp3(wav, path, bitrate=320, samplerate=44100, channels=2, verbose=False):
    try:
        import lameenc
    except ImportError:
        print("Failed to call lame encoder. Maybe it is not installed? "
              "On Windows, run `python.exe -m pip install -U lameenc`, "
              "on OSX/Linux, run `python3 -m pip install -U lameenc`, "
              "then try again.", file=sys.stderr)
        sys.exit(1)
    encoder = lameenc.Encoder()
    encoder.set_bit_rate(bitrate)
    encoder.set_in_sample_rate(samplerate)
    encoder.set_channels(channels)
    encoder.set_quality(2)  # 2-highest, 7-fastest
    if not verbose:
        encoder.silence()
    wav = wav.transpose(0, 1).numpy()
    mp3_data = encoder.encode(wav.tobytes())
    mp3_data += encoder.flush()
    with open(path, "wb") as f:
        f.write(mp3_data)


def main():
    parser = argparse.ArgumentParser("demucs.separate",
                                     description="Separate the sources for the given tracks")
    parser.add_argument("tracks", nargs='+', type=Path, default=[], help='Path to tracks')
    parser.add_argument("-n",
                        "--name",
                        default="demucs_quantized",
                        help="Model name. See README.md for the list of pretrained models. "
                             "Default is demucs_quantized.")
    parser.add_argument("-v", "--verbose", action="store_true")
    parser.add_argument("-o",
                        "--out",
                        type=Path,
                        default=Path("separated"),
                        help="Folder where to put extracted tracks. A subfolder "
                             "with the model name will be created.")
    parser.add_argument("--models",
                        type=Path,
                        default=Path("models"),
                        help="Path to trained models. "
                             "Also used to store downloaded pretrained models")
    parser.add_argument("-d",
                        "--device",
                        default="cuda" if th.cuda.is_available() else "cpu",
                        help="Device to use, default is cuda if available else cpu")
    parser.add_argument("--shifts",
                        default=0,
                        type=int,
                        help="Number of random shifts for equivariant stabilization. "
                             "Increases separation time but improves quality for Demucs. 10 was used "
                             "in the original paper.")
    parser.add_argument("--overlap",
                        default=0.25,
                        type=float,
                        help="Overlap between the splits.")
    parser.add_argument("--no-split",
                        action="store_false",
                        dest="split",
                        default=True,
                        help="Doesn't split audio in chunks. This can use large amounts of memory.")
    parser.add_argument("--float32",
                        action="store_true",
                        help="Convert the output wavefile to use pcm f32 format instead of s16. "
                             "This should not make a difference if you just plan on listening to the "
                             "audio but might be needed to compute exactly metrics like SDR etc.")
    parser.add_argument("--int16",
                        action="store_false",
                        dest="float32",
                        help="Opposite of --float32, here for compatibility.")
    parser.add_argument("--mp3", action="store_true",
                        help="Convert the output wavs to mp3.")
    parser.add_argument("--mp3-bitrate",
                        default=320,
                        type=int,
                        help="Bitrate of converted mp3.")

    args = parser.parse_args()
    name = args.name + ".th"
    model_path = args.models / name
    if model_path.is_file():
        model = load_model(model_path)
    else:
        if is_pretrained(args.name):
            model = load_pretrained(args.name)
        else:
            print(f"No pre-trained model {args.name}", file=sys.stderr)
            sys.exit(1)
    model.to(args.device)

    out = args.out / args.name
    out.mkdir(parents=True, exist_ok=True)
    print(f"Separated tracks will be stored in {out.resolve()}")
    for track in args.tracks:
        if not track.exists():
            print(
                f"File {track} does not exist. If the path contains spaces, "
                "please try again after surrounding the entire path with quotes \"\".",
                file=sys.stderr)
            continue
        print(f"Separating track {track}")
        wav = load_track(track, args.device, model.audio_channels, model.samplerate)

        ref = wav.mean(0)
        wav = (wav - ref.mean()) / ref.std()
        sources = apply_model(model, wav, shifts=args.shifts, split=args.split,
                              overlap=args.overlap, progress=True)
        sources = sources * ref.std() + ref.mean()

        track_folder = out / track.name.rsplit(".", 1)[0]
        track_folder.mkdir(exist_ok=True)
        for source, name in zip(sources, model.sources):
            source = source / max(1.01 * source.abs().max(), 1)
            if args.mp3 or not args.float32:
                source = (source * 2**15).clamp_(-2**15, 2**15 - 1).short()
            source = source.cpu()
            stem = str(track_folder / name)
            if args.mp3:
                encode_mp3(source, stem + ".mp3",
                           bitrate=args.mp3_bitrate,
                           samplerate=model.samplerate,
                           channels=model.audio_channels,
                           verbose=args.verbose)
            else:
                wavname = str(track_folder / f"{name}.wav")
                ta.save(wavname, source, sample_rate=model.samplerate)


if __name__ == "__main__":
    main()
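Before encoding, `main` rescales each separated source with `source / max(1.01 * source.abs().max(), 1)`: a source whose peak would clip at |1.0| is scaled down (with 1% headroom), while a quieter source is divided by 1 and left untouched. A plain-Python sketch of that peak-limiting rule on lists instead of tensors (the function name and sample values are mine, for illustration only):

```python
def limit_peak(source, headroom=1.01):
    """Scale down only if the peak would clip; quiet sources pass through."""
    peak = max(abs(v) for v in source)
    scale = max(headroom * peak, 1.0)
    return [v / scale for v in source]


loud = [0.5, -2.0, 1.5]    # peak of 2.0 would clip -> whole source scaled down
quiet = [0.1, -0.2, 0.05]  # already within [-1, 1] -> divided by 1, unchanged
print(limit_peak(loud))
print(limit_peak(quiet))
```

Scaling the whole source by one factor preserves the relative waveform shape, unlike clamping, which would distort only the loud samples.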
spaces/EvgenyK/Text-To-Image/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: Text To Image
emoji: 🌍
colorFrom: indigo
colorTo: pink
sdk: gradio
sdk_version: 3.12.0
app_file: app.py
pinned: false
license: openrail
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/GMFTBY/PandaGPT/config/__init__.py
DELETED
@@ -1,37 +0,0 @@
import yaml


def load_model_config(model, mode):
    # load the special config for each model
    config_path = f'config/{model}.yaml'
    print(f'[!] load configuration from {config_path}')
    with open(config_path) as f:
        configuration = yaml.load(f, Loader=yaml.FullLoader)
    new_config = {}
    for key, value in configuration.items():
        if key in ['train', 'test', 'validation']:
            if mode == key:
                new_config.update(value)
        else:
            new_config[key] = value
    configuration = new_config
    return configuration


def load_config(args):
    '''the configuration of each model can override the base configuration'''
    # base config
    base_configuration = load_base_config()

    # load one model config
    configuration = load_model_config(args['model'], args['mode'])

    # update and append the special config to the base config
    base_configuration.update(configuration)
    configuration = base_configuration
    return configuration


def load_base_config():
    config_path = 'config/base.yaml'
    with open(config_path) as f:
        configuration = yaml.load(f, Loader=yaml.FullLoader)
    print(f'[!] load base configuration: {config_path}')
    return configuration
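The interesting part of `load_model_config` is the flattening loop: mode sections (`train`/`test`/`validation`) are hoisted to the top level when they match the requested mode and dropped otherwise, while all other keys pass through unchanged. The same logic, isolated from the YAML file I/O (the sample config dict below is hypothetical, not from the repo):

```python
def flatten_model_config(configuration: dict, mode: str) -> dict:
    """Hoist the section matching `mode` to the top level; drop the other modes."""
    new_config = {}
    for key, value in configuration.items():
        if key in ['train', 'test', 'validation']:
            if mode == key:
                new_config.update(value)
        else:
            new_config[key] = value
    return new_config


raw = {
    'model_name': 'pandagpt',           # shared key, kept as-is
    'train': {'lr': 1e-4, 'epochs': 3}, # hoisted only when mode == 'train'
    'test': {'beam_size': 5},           # dropped when mode == 'train'
}
print(flatten_model_config(raw, 'train'))
# {'model_name': 'pandagpt', 'lr': 0.0001, 'epochs': 3}
```

Because `load_config` then applies the result via `base_configuration.update(...)`, any key a model config shares with `config/base.yaml` wins over the base value.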